Column schema and per-column statistics for this dump (one row per submission; the Review column holds the list of review records):

| column | dtype | statistics |
|---|---|---|
| id | string | lengths 10 to 10 |
| title | string | lengths 3 to 179 |
| track | string | 1 distinct value |
| status | string | 3 distinct values |
| keywords | string | lengths 2 to 2.39k |
| primary_area | string | 21 distinct values |
| author | string | 501 distinct values |
| authorids | string | 501 distinct values |
| aff | string | 1 distinct value |
| aff_domain | string | 1 distinct value |
| position | string | 1 distinct value |
| rating | string | 355 distinct values |
| confidence | string | lengths 0 to 19 |
| soundness | string | 642 distinct values |
| contribution | string | 596 distinct values |
| presentation | string | 782 distinct values |
| rating_avg | float64 | 0 to 9 |
| confidence_avg | float64 | 0 to 5 |
| soundness_avg | float64 | 0 to 4 |
| contribution_avg | float64 | 0 to 4 |
| presentation_avg | float64 | 0 to 4 |
| corr_rating_confidence | float64 | -1 to 1 |
| project | string | 1 distinct value |
| github | string | 1 distinct value |
| Review | list | lengths 2 to 10 |
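The aggregate columns (rating_avg, confidence_avg, and corr_rating_confidence) appear to be derived from the per-review score lists in each record: the mean of the scores and the Pearson correlation between rating and confidence, which matches the values in the rows below. The sketch that follows is an illustrative example only, not part of the dataset; it assumes the per-review scores are the ";"-separated strings shown in the records.

```python
# Illustrative sketch: how the aggregate columns relate to the per-review fields.
# Assumes `rating` and `confidence` are ";"-separated score strings as in the rows below.
import numpy as np

def aggregate(rating: str, confidence: str) -> dict:
    r = np.array([float(x) for x in rating.split(";")])
    c = np.array([float(x) for x in confidence.split(";")])
    return {
        "rating_avg": float(r.mean()),
        "confidence_avg": float(c.mean()),
        # Pearson correlation between per-review rating and confidence
        "corr_rating_confidence": float(np.corrcoef(r, c)[0, 1]),
    }

# First record below: rating "3;5;5;6", confidence "4;4;3;3"
print(aggregate("3;5;5;6", "4;4;3;3"))
# -> rating_avg 4.75, confidence_avg 3.5, corr_rating_confidence ~ -0.688
```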
id: 0yXqV8VJKi | title: Understanding Complexity in VideoQA via Visual Program Generation | track: main | status: Active | keywords: video understanding;codegen | primary_area: applications to computer vision, audio, language, and other modalities | rating: 3;5;5;6 | confidence: 4;4;3;3 | soundness: 3;3;3;3 | contribution: 2;2;2;3 | presentation: 3;3;3;3 | rating_avg: 4.75 | confidence_avg: 3.5 | soundness_avg: 3 | contribution_avg: 2.25 | presentation_avg: 3 | corr_rating_confidence: -0.688247 | Review: [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "-\tThe paper represents subtrees in question code using simple one-hot encoding without additional processing, which might ignore the frequency of subtrees and their structural relationships. Additionally, this representation can be very sparse. How does this affect CodePlexity's performance?\n-\tTo what extent does the choice of code generation model affect the method's results? Would using different code generation models lead to identifying different patterns of complexity, and how might this impact the method's consistency?\n-\tSince the method evaluates difficulty without considering visual information, there might be cases where code generation is challenging due to question ambiguity, but the task would be straightforward with visual context. Is CodePlexity truly measuring question difficulty, or is it primarily measuring code generation complexity?\n-\tMost models evaluated in the paper are large video-language models built upon language model architectures. Since these models share similar language model foundations with the code generation approach, they might inherently struggle with the same types of questions that are difficult for code generation. Does this architectural similarity explain their correlated performance? How well would the method generalize to fundamentally different architectures that might employ different reasoning mechanisms?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "-\tThe paper introduces a novel and interesting approach using code complexity to evaluate question complexity.\n-\tThe paper offers interesting insights into the differences in human evaluation, text-based and code-based evaluation.\n-\tThe experiment results show clear empirical evidence to support the claims."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an approach to analyzing and generating complex questions for Video QA. The authors propose the use of generated code complexity as a metric to evaluate question difficulty. They introduce \"CodePlexity,\" a novel algorithm that analyzes generated code's structure and content to identify patterns that make questions challenging for ML models. This allows the creation of a new VideoQA dataset named \"CodePlex-QA\" which features more complex questions than existing datasets without relying on human expertise."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please see the question section below"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Why the authors do not validate the proposed approach on other VideoQA datasets besides NExT-QA?\n- How do errors in code generation impact the complexity metrics?\n- Are there any important patterns that might be missed by your current subtree merging approach?\n- What specific criteria were used to manually filter out the 12% of questions?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The idea to measuring question complexity using code generation is well-motivated\n- The ablation studies and analysis are well-designed"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel approach to analyzing and measuring question complexity in VideoQA by leveraging generated code complexity. The authors demonstrate that code-based metrics correlate better with model performance compared to human judgment and introduce CodePlexity, an algorithm for estimating question complexity. They use this to identify challenging patterns in VideoQA and automatically generate a new benchmark dataset, CodePlex-QA, which proves to be significantly more challenging than existing datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- All experiments are conducted on a single dataset (NExT-QA) for analysis. Therefore, it leads to the limited evaluation across different types of VideoQA tasks.\n- The authors should discuss more how errors of code generation might affect the complexity metrics\n- The human evaluation study uses only 150 questions, which is a relatively small sample\n- We concern that the merging of subtrees that \"always co-occur\" could potentially miss important patterns in edge cases\n- The manual filtering process for the generated dataset (removing 12% of questions) could introduce selection bias"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What are the results of the recent visual programming methods on this task?\n2. How complex is CodePlex-QA compared with Next-QA ATP-T/ATP-C split?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper proposes a novel metric to evaluate the complexity of the VideoQA problem and also proposes a CodePlex-QA that is more difficult for current VideoQA models. The idea of leveraging code to develop a new metric for question difficulty analysis is interesting.\n2. The paper conducts thorough experiments on testing the correlation of the metric."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the author claims that the questions considered difficult are different from the actual difficult questions to current VideoQA models. Therefore, the authors propose a new metric: CodePlexity, to evaluate the difficulty of a VideoQA question. This metric is based on the recent visual programming methods, where a visual program of a question is generated and which complexity is evaluated. Based on this metric, the authors found that most models struggle with questions where multiple frames are included and frame order need to be take into consideration. Then based on the metric, the author proposes CodePlex-QA and claims that it is 1.9 times harder than existing benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The criteria for what makes a question complex or challenging for models—especially compared to human intuition—seem speculative. Without more rigorous validation, it’s unclear whether the proposed complexity measures truly capture inherent question difficulty or just reflect the limitations of current VideoQA models. Also, the idea of identifying questions that are “easy for humans but hard for machines” is ambiguous. It seems plausible that any difference in difficulty may be more a result of model architecture and training rather than the intrinsic complexity of the question itself.\n2. Visual programming is a natural way (and probably the best way) to address the CodePlex-QA task. The author didn't report how well the recent visual programming methods (Visprog, ViperGPT, CodeVQA), especially those on addressing complex logical questions (eg. RVP[1]) addresses the task.\n3. The comparison between Next-QA and CodePlex-QA (Table2) is not convincing enough as previous works have shown that Next-QA have easy questions[2]. How is CodePlex-QA compared with Next-QA ATP-Hard split?\n\n[1] Recursive Visual Programming\n[2]Revisiting the \"Video\" in Video-Language Understanding"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- SeViLA \"leverages a single image-language model (BLIP2) to tackle both temporal keyframe localization and question answering on videos\" How the findings in this paper generalize to models that take multiple video frames for VQA? (e.g., VideoChat2 [5], Video-LLaVA [6])?\n\n- NExT-QA uses videos from VidOR [7] -- CodePlex-QA use videos from MOMA, ActivityNet, ActivityNet-Entities, ActivityNet-Captions), Action Genome, and Charades. Why not using the same videos, or at least the same source video dataset (VidOR)?\n\n[5] Li, Kunchang, et al. \"Mvbench: A comprehensive multi-modal video understanding benchmark.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[6] Lin, Bin, et al. \"Video-llava: Learning united visual representation by alignment before projection.\" arXiv preprint arXiv:2311.10122 (2023).\n\n[7] Xindi Shang, Donglin Di, Junbin Xiao, Yu Cao, Xun Yang, and Tat-Seng Chua. Annotating objects and relations in user generated videos. In ICMR, pages 279–287, 2019"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ This paper is easy to read, the method is easy to follow and the qualitative examples help to illustrate the motivation of their work.\n\n+ Leveraging Visual Programming outputs as a proxy for assessing the difficulty of a given task is an underexplored domain -- in software engineering, an active area of study is the correlation between the code complexity and difficulty of the task. This is an open problem, and this paper proposes an interesting generalization of this task.\n\n+ Decomposing the output code from the VP module via ASTs, and identifying subtrees that decrease performance via scoring, may give an interesting angle for generating an interpretable pipeline to analyze subprocesses that might be hurting the models' performance. In principle, this looks like an interesting approach and a viable metric for VP-based methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores a data-driven method for evaluating question complexity for Video Question Answering (VQA) by collecting the visual programs generated by ViperGPT. Then, they analyze the code outputs and leverage them to assess the difficulty of the questions in a VQA triplet.\nGiven the output code from the Visual Programming module (ViperGPT), they propose CodePlexity, an algorithm that parses the generated code via Abstract Syntax Trees (AST), and later identifies valid subtrees. These subtrees are then scored by correlating subtrees with difficult questions -- subroutines that hurt the VQA model performance. The authors use NExT-QA as the benchmark to analyze their proposed metric. \nGiven this metric, they also propose a pipeline to generate difficult question-answer pairs for a set of videos from MOMA, ActivityNet (+Entities/Captions), Action Genome, and Charades. \nFinally, they compare the results of models SeViLA-ZS, InternVideo and VIOLET on their proposed new dataset (CodePlex-QA) against NExT-QA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Visual Programming (VP) is an interpretable framework for several visio-linguistic tasks, however, VP may output very simple code for a quite complex task, yielding false positive or false negative hard cases, and VP might not be reliable -- for a very complex task, the output code could be simple; the LLM might regard it as a simple task, but it's actually the opposite\nIn addition, VP falls short against end-to-end models (e.g., ViperGPT acc. is 60% vs SeViLa (finetuned) acc. is 73.8%) -- given its underperformance, it is hard to justify the use of its outputs as a proxy for evaluation of complexity. It is also hard to justify using VP for measuring task complexity and then evaluating on end-to-end models.\n\n- Disregarding visual inputs: \"Text-based metrics (above) perform worse than the code-based ones (below), and our approach demonstrates the highest correlation with the models’ performance.\" -- this completely ignores the visual modality, which seems problematic.\n\n- The authors claim: \"In summary, we discovered that VideoQA methods struggle with fine-grained temporal reasoning and lack spatio-temporal, object-centric representations. This is in accord with prior studies\" [3] -- however, those studies regard both visual and language modalities for assessing temporal reasoning in video QA, giving high importance to the video part.\n\n- Human evaluations: Figure 6: We ask human annotators to provide the relative ordering of three provided questions\naccording to the estimated complexity of answering the question about an unseen video -- how to correctly assess the complexity of the task without looking at the video?\n\n- Experimental setup:\nBaselines [1] focus on grammar and text only. BERT and GPT-4 also focus on text only.\n\n- Generalizability: ViperGPT is the VP module used in this paper, however, VP has significantly progressed. Different methods leverage multiple LLMs, there has been extensive problem decomposition and iteration that impact the code outputs. Further experiments with other VP approaches might be required to ensure generalization. Similarly, the proposed metric and dataset is only compared against NExT-QA. Other benchmarks might be necessary to validate the proposed metric and dataset (e.g., MVBench [2] compiles a collection of datasets (with subsets of samples from a diverse range of sources), which includes spatio-temporal analysis, spatial action, object, position, scene, count, attribute and cognition).\n\n- For dataset generation, the authors compare their pipeline with [4] -- however, an important step for EgoSchema is the manual curation of the whole dataset in the final step -- details for this step is not further detailed in this paper.\nFurthermore, as a comparison, EgoSchema consists of over 5000 human curated multiple choice question answer pairs, spanning over 250 hours of real video data -- significantly larger than the 1981 samples -- what is the length of each video sample?\n\n\n[1] Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. Learning the curriculum with bayesian optimization for task-specific word representation learning. In ACL, 2016. \n\n[2] Li, Kunchang, et al. \"Mvbench: A comprehensive multi-modal video understanding benchmark.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[3] Shyamal Buch, Crist´obal Eyzaguirre, Adrien Gaidon, Jiajun Wu, Li Fei-Fei, and Juan Carlos Niebles. Revisiting the” video” in video-language understanding. 
In CVPR, 2022.\n\n[4] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. EgoSchema: A diagnostic benchmark for very long-form video language understanding. In NeurIPS, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a data-driven method to assess question complexity in Video Question Answering (VideoQA) based on code primitives, and use it to create a new benchmark, which is nearly twice as difficult for models compared to existing datasets."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding Complexity in Video{QA} via Visual Program Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0yXqV8VJKi},\nnote={under review}\n}"
},
"abstract": {
"value": "We propose a data-driven approach to analyzing query complexity in Video Question Answering (VideoQA). Previous efforts in benchmark design have largely relied on human expertise to construct challenging samples. In this work, we experimentally demonstrate that humans struggle to accurately estimate which questions are hard to answer for machine learning models. \n Our alternative, automated approach takes advantage of recent advances in code generation for visual question answering. In particular, we use generated code complexity as a proxy for the question complexity and demonstrate that it indeed shows a much stronger correlation with the models' performance, compared to human estimates. We then present a novel algorithm for estimating question complexity from code. It identifies fine-grained primitives which correlate with the hardest questions. These human-interpretable results lead to a number of discoveries about the key sources of complexity for VideoQA models. Finally, we extend our approach to generate complex questions for a given set of videos. This allows us to automatically construct a new benchmark, which is 1.9 times harder for VideoQA methods than existing manually designed datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"video understanding",
"codegen"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5dca235cf7060cf72798cff5a5c86564629f69c8.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/0b2646320259785d00546826ecb9443f8ba683c1.zip"
},
"title": {
"value": "Understanding Complexity in VideoQA via Visual Program Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
]
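The reviews of this submission repeatedly discuss CodePlexity's AST-based featurization of generated visual programs: parsing code into an abstract syntax tree, extracting subtrees, and one-hot encoding their presence, as referenced in the questions and weaknesses above. The sketch below is a hypothetical illustration of that general idea for readers unfamiliar with it; it is not the paper's actual implementation, and the helper names and the toy program are assumptions made purely for illustration.

```python
# Hypothetical sketch of AST-subtree featurization (not the paper's CodePlexity code).
import ast

def subtree_signatures(code: str, max_depth: int = 2) -> set:
    """Collect string signatures of shallow subtrees (node type plus child types)."""
    def signature(node: ast.AST, depth: int) -> str:
        children = list(ast.iter_child_nodes(node))
        if depth == 0 or not children:
            return type(node).__name__
        return f"{type(node).__name__}({','.join(signature(c, depth - 1) for c in children)})"
    return {signature(n, max_depth) for n in ast.walk(ast.parse(code))}

def one_hot(code: str, vocabulary: list) -> list:
    """1 if a subtree signature occurs in the code, else 0 (frequency is ignored)."""
    present = subtree_signatures(code)
    return [1 if sig in present else 0 for sig in vocabulary]

# Toy "visual program" standing in for model-generated code.
program = "frames = video.frames()\nanswer = max(frames, key=score)"
vocab = sorted(subtree_signatures(program))
print(one_hot(program, vocab))  # all ones, since the vocabulary came from this program
```

A scoring step (correlating subtree presence with model failures) would then operate on such vectors; as one reviewer notes above, a pure presence/absence encoding discards subtree frequency and structural relationships.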
id: 0ydseYDKRi | title: Beyond The Rainbow: High Performance Deep Reinforcement Learning On A Desktop PC | track: main | status: Active | keywords: Reinforcement Learning;Computational Efficiency;High Performance;Atari;Value-Based;DQN;Rainbow DQN;BeyondTheRainbow | primary_area: reinforcement learning | rating: 3;5;6;8 | confidence: 4;4;4;3 | soundness: 1;2;3;3 | contribution: 3;3;2;3 | presentation: 2;2;3;3 | rating_avg: 5.5 | confidence_avg: 3.75 | soundness_avg: 2.25 | contribution_avg: 2.75 | presentation_avg: 2.5 | corr_rating_confidence: -0.800641 | Review: [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper is well-written and well-organized. The ideas are clear and could be easily understood\n2. The experiments are comprehensive and the results are strong. As shown in Section 4, the proposed BTR algorithm could greatly outperform state-of-the-art baselines in two classic benchmarks and handle three hard and complex modern games with a desktop PC.\n3. The paper includes extensive ablation studies and experimental data. Section 5 presents a detailed analysis of the performance and impact of each component of the BTR algorithm, providing readers with insights into the sources of the algorithm's performance gains. Additionally, the authors include complete experimental results and settings in the appendix, helping to clarify any potential confusion or misunderstanding for readers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Beyond The Rainbow (BTR), a novel reinforcement learning (RL) algorithm that enhances Rainbow DQN by integrating six key improvements. The BTR algorithm is computationally efficient, capable of training powerful agents on a standard desktop computer within a short time. Experimental results show that BTR outperforms state-of-the-art RL algorithms on both the Atari-60 and Procgen benchmarks. Additionally, BTR can handle training agents for challenging levels in complex, modern games. Finally, this paper includes a comprehensive ablation study to analyze the performance and impact of each component within the BTR algorithm."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The BTR integrates six improvements from existing RL literature to Rainbow DQN. While the algorithm demonstrates strong performance, its novelty might appear limited. Could you further clarify the novelty of this work? Or specifically, could you briefly discuss if there is any challenges in integrating these existing improvements into the BTR algorithm?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I am just slightly worried about the use of somewhat modern Nintendo games as RL environments through the use of emulators, is the use of emulators for research legal?"
},
"flag_for_ethics_review": {
"value": [
"Yes, Legal compliance (e.g., GDPR, copyright, terms of use)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What exactly is adaptive maxpooling? Would it be possible to add a description of it with either an equation, pseudo-code, or diagram?\n2. Where did the formula 0.05/batch_size for Adam's epsilon come from?\n3. The final algorithm has a considerable number of hyperparameters, would it be possible to discuss a bit which ones are the most important to tune should someone try to apply this algorithm to a new domain?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The presentation is overall clear, the methodology is sound, and the results are compelling. Both extensive use of ablations, and the connection to other important metrics related to pathologies in Deep RL algorithms are an example that more papers should follow.\nThe appendices are also data rich, showing ablations' performances on each of ALE's 60 games, and even having one appendix about things that were tried but did not lead to improvements in performance, which may help others not repeat the experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a variant of Rainbow that adds further architectural and algorithmic improvements to improve not only the agent's score but also to increase its training speed to around 3x what has been previously reported, while running it on top-notch consumer hardware. Finally the authors also show that their improved version of rainbow can deal with modern games with complex graphics and physics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Adaptive Maxpooling is never defined. It's not a common layer in reinforcement learning and it's never defined in the paper, in fact skimming (Schmidt and Schmied, 2021) that layer is also not defined, I believe this is the only seious weakness in the paper's presentation, but still I believe it is a serious weakness (though hopefully the authors can fix it and so I can increase their grade).\n2. There are at least 2 relevant citations missing, \"Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective\" when talking about Spectral Normalisation, and \"On the consistency of hyper-parameter selection in value-based deep reinforcement learning\" when talking about the need for tuning Deep RL hyperparameters and the benefits of using layer norm between dense layers.\n3. I believe it's slightly misleading to not specify \"a high-end PC\" when talking about the kind of machine that can run the algorithm in 12 hours (4090 RTXs are quite expensive, and i9s are Intel's high-end consumer line)\n4. I believe a more direct comparison with Schmidt and Schmied, 2021 is warranted, given its foundational importance to the paper.\n5. Using only 3 seeds while having a large increase in the number of tuned hyperparameters weakens the validity of the results as explained in \"Empirical Design in Reinforcement Learning\", though at the same time the analysis of metrics beyond simply the score and the extensive use of ablations help."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The 12 hour number/other timings, is that the total time it takes to train BTR on a single game or on all 57 games?\n- It seems like you made quite a few hyperparameter choices (e.g. how often to update, etc.) Do you use the same values for each domain?\n- What is the shaded area in the plots? If it is standard deviation it seems like the proposed BTR algorithm is very inconsistent across seeds. Could you elaborate please/maybe provide results for individual seeds?\n- Figure 3 does show that you can apply your approach to other games, which is great. I would really like to see some point of comparison, however, to act as a reference point. For instance, run vanilla PPO or DQN or Rainbow as a baseline.\n- Why is fig 4 using raw score as the y-axis, as opposed to e.g. normalised?\n- Figure 4 is somewhat hard to follow as there are so many lines and it seems like most of them overlap quite a lot.\n- Is it feasible to run rainbow with vectorisation? This is not that crucial, it just seems like something obvious to run given figure 5, where vectorisation is the main speedup factor.\n- Table 2: Would be nice to have another method, e.g. rainbow or DQN to act as a reference point.\n- One recent work that seems to have a similar purpose is \"Simplifying Deep Temporal Difference Learning\" (https://arxiv.org/pdf/2407.04811). It seems like they use vectorisation as well to achieve large speedups. More importantly, however, is that they primarily use JAX---which is becoming increasingly common in RL, and is reducing computational load significantly/making RL more accessible to compute-poor labs/institutions. Could you please comment on a few things\n\t- How does this paper's score compare to yours?\n\t- How does the walltime compare to yours?\n\t- What do you see as the benefits/disadvantages of this hardware accelerated paradigm compared to the more classic approach you are taking?\n- I know it is not usual in these types of papers but I would really appreciate a PPO comparison, both in terms of walltime and performance."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This paper gives RL researchers a way to do pretty well in atari without expending too significant computational resources.\n- They perform ablations on their individual changes to identify what helps and what has the most effect on performance/walltime. This is quite useful.\n\nI am not giving this a lower score because I think making RL more accessible is worthwhile, and this paper takes a step towards this, and further analyses many of these independent components to see what their effect is. I am not giving a higher score because I think the paper's significance does not warrant it."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper combines several different RL improvements to a single algorithm, with a focus on high performance with reasonable computational requirements. In doing so, they find that their approach achieves a new SoTA (not including recurrent methods), while being able to be run on a desktop machine in under a day.\n\nThey analyse the factors that led to this performance in detail through several ablations.\n\nOverall, this paper makes rainbow/dqn-type methods more accessible to non-industry labs"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- To me it is unclear if 12 hours is for all games or just 1.\n- I wonder how this fits in with the recent trend of hardware-accelerated RL (see e.g., Stoix/PureJaxRL/Gymnax and all of the hardware-accelerated environments). Does that line of work better achieve the goal of making RL more accessible? In that setting, the environment is often run entirely on the GPU, leading to hundreds or thousands of times speedups."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses.\n\n\nOverall, this work is just not good enough in its current format. I recommend that the authors fix the presentation of the results, especially adding error bars and effective aggregation using a tool like rliable. Given the significant problems with every figure and table in the main body of the paper, this work is not good enough for this venue in its current form and would require wholesale changes to fix that.\n\n[1] Deep Reinforcement Learning at the Edge of the Statistical Precipice. Agarwal et al. Neurips 2021.\n\n[2] Aitchison, Matthew, Penny Sweetser, and Marcus Hutter. \"Atari-5: Distilling the arcade learning environment down to five games.\" International Conference on Machine Learning. PMLR, 2023.\n\n[3] Sokar, Ghada, et al. \"The dormant neuron phenomenon in deep reinforcement learning.\" International Conference on Machine Learning. PMLR, 2023."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper has a number of positive points:\n- The core idea of trying to achieve strong performance using Q-learning on a desktop PC has significant merit and would constitute an interesting contribution\n- The introduction of new games to evaluate on is interesting and the games chosen would make good potential benchmarks.\n- The paper is easy to follow and clearly written"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present Beyond the Rainbow (BTR), an algorithm combining advances in Q-learning based approaches to Atari developed since Rainbow. \n\nThe authors train their agent on Atari, Procgen and 3 games which aren't well-established benchmarks in RL (Super Mario Galaxy, Mario Kart and Mortal Kombat). They run ablations on their method and demonstrate that the Impala architecture contributes the most to their method's performance. They also demonstrate that vectorization of the environment is key to the faster runtime of their algorithm."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I slightly feel for the authors of this paper. I would like to be able to commend the paper on its empirical results, or the performance on the new environments, but the results are not presented scientifically enough for me to do that, and so I can't recommend acceptance at this venue.\n\nTo make my objections more concrete:\n- In Figure 2, the authors claim that their method outperforms the red baseline, but this is plainly not the case from the plot. The error bars so significantly overlap the red line there is no way this result is significant. \n- The authors do not aggregate their results in accordance with recommended practice [1]. Although they use the inter-quartile mean, they do not provide bootstrapped confidence intervals to estimate errors and do not seem to provide errors in their baseline results. This issue appears in Figures 1 and 2. As far as I know, the authors do not state what the error bars in Figures 1 and 2. If the plotted error bars are standard \n- While the evaluation of their method on new games is nice, I can't take any information away from this without even a semblance of a baseline. Training an RL policy on Wii games has no intrinsic scientific value -- it is only by contextualisation of a baseline that this would be a compelling result. Similarly, the authors provide no error bars in this domain.\n- Figure 4 again because of the way the results were processed provides almost no information. Atari-5 [2] provides a way to estimate the median given performance on those 5 games. But this is only after the application of a regression procedure. Without the application of this summary metric, it is just not clear what to take away from these results. This figure does not even present human normalised scores, as is standard. This Figure should therefore be replaced by a plot of the regressed median for Atari-5 with bootstrapped confidence intervals. The authors can use rliable [1] for this.\n- Again, the analysis in Section 5.2 *should* be compelling and interesting reading, but it's just not done thoroughly enough. Figure 6 is presented without error bars and so are the results in Table 2 and the IQM in Table 3. It's just not possible to believe the authors' conclusions on their work without any estimates of error. \n- Additionally, the authors use dormancy [3], but set a threshold of 0.1. Although resetting dormant neurons was shown to improve performance, neurons with a small activation are not in themselves a problem! A neuron followed by a ReLU activation that always outputs 0 is not learning, which clearly constitutes a problem, but a neuron that outputs a small value is still perfectly plastic. The dormancy results therefore also aren't a proxy for any form of plasticity. \n- The authors make multiple claims about their method being \"state-of-the-art for a desktop PC\" (or similar). These should be removed from the paper as they are just impossible to verify. Even as an expert, I do not know the hardware that every paper ran experiments on and whether it would be possible to run it on a desktop PC, and it is not a claim that can be clearly backed-up. I note that the authors did not do all their experimentation on a desktop PC, but only claim that their method can run on one effectively."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We create a high-performance Deep Reinforcement Learning algorithm capable of solving even modern games on a desktop PC."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024beyond,\ntitle={Beyond The Rainbow: High Performance Deep Reinforcement Learning On A Desktop {PC}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0ydseYDKRi},\nnote={under review}\n}"
},
"abstract": {
"value": "Rainbow Deep Q-Network (DQN) demonstrated combining multiple independent enhancements could significantly boost a reinforcement learning (RL) agent’s performance. In this paper, we present \"Beyond The Rainbow'\" (BTR), a novel algorithm that integrates six improvements from across the RL literature to Rainbow DQN, establishing a new state-of-the-art for RL using a desktop PC, with a human-normalized interquartile mean (IQM) of 7.4 on Atari-60. Beyond Atari, we demonstrate BTR's capability to handle complex 3D games, successfully training agents to play Super Mario Galaxy, Mario Kart, and Mortal Kombat with minimal algorithmic changes. Designing BTR with computational efficiency in mind, agents can be trained using a desktop PC on 200 million Atari frames within 12 hours. Additionally, we conduct detailed ablation studies of each component, analyzing the performance and impact using numerous measures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Reinforcement Learning",
"Computational Efficiency",
"High Performance",
"Atari",
"Value-Based",
"DQN",
"Rainbow DQN",
"BeyondTheRainbow"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c56f3eb04d73f7143c9e5a5a825865d20fac2be9.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/78ee19bcfb191885cfd57f2583128722dd9bae59.zip"
},
"title": {
"value": "Beyond The Rainbow: High Performance Deep Reinforcement Learning On A Desktop PC"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
]
id: 0yvZm2AjUr | title: Monitoring Latent World States in Language Models with Propositional Probes | track: main | status: Active | keywords: Interpretability;Language models;AI Safety | primary_area: interpretability and explainable AI | rating: 6;6;6;8 | confidence: 3;4;3;4 | soundness: 3;2;3;4 | contribution: 3;3;3;4 | presentation: 3;3;3;3 | rating_avg: 6.5 | confidence_avg: 3.5 | soundness_avg: 3 | contribution_avg: 3.25 | presentation_avg: 3 | corr_rating_confidence: 0.57735 | Review: [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Suggestion for Table 1: Make it clearer that the (P) and (FT) columns correspond to specific adversarial settings\n\nIt would be interesting with some more error analysis for what breaks down when the subspace hypothesis fails, to get insights into the potential for these methods to scale to more complex settings."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Although I am not very familiar with this field, the method for identifying the binding subspaces seems quite novel, and potentially will provide useful insights into model behavior.\n\nThe task setup, although very synthetic in nature, has the PARA and TRANS variants which make it a potentially fruitful testing ground for these kinds of questions.\n\nThe general topic of understanding mechanisms and world states inside of LLMs is both interesting and important."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies how LLMs might encode propositions stated in the context, like \"Greg is a nurse. Laura is a physicist.\", by looking at the activations associated with the Greg/nurse tokens, and trying to identify \"propositional probes\" through a \"binding subspace\" of these vectors which are aligned when the proposition holds.\n\nThey use a synthetic set with 4 domains (people, countries, foods, occupations), each with a set of non-overlapping entities (from 14 to 60 per domain). They define a somewhat heuristic \"domain identifier\" probe to pick up tokens associated with each entity, and then (main novelty) use a low-rank Hessian approach to identify these binding subspaces.\n\nThere is analysis for how effective these subspaces are in changing the binding between entities (e.g., to make Greg the physicist after the context above, when answering a question like \"Therefore, Greg's occupation is\"). The conclusion is that it \"works\" to some extent, but with caveats, especially when the context gets more complicated (going from 2 to 3 entities). In addition to testing on the synthetic contexts, there is an LLM generated variant (PARA) that turns the sequence of propositions into an actual story format, and one that translates this story into Spanish (TRANS). There is non-trivial carry over of the effect to these cases. There are also comparisons to other probing baselines.\n\nFinally, they also test on some \"adversarial\" cases: 1) Prompt injection (encourage the model to answer wrongly), 2) Backdoor attacks (model fine-tuned to do badly in Spanish), 3) Gender bias (measure amount of gender bias in output probabilities for stereotypical vs anti-stereotypical occupation assignments). In all three cases they find the propositional probes are more faithful to the underlying \"true\" propositions vs the actual model output."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The \"domain probes\" to classify tokens into domain seem quite heuristic (using the mean vector of all the entities in the domain), and it seems like there could be some evaluation to see how it works (e.g., is it always the \"obvious\" tokens, like the \"Alice\" token is the name token?).\n\nSome of the decisions in the binding space design seem quite arbitrary, like \"For tractability, we parameterize x and y so that they are shared across layers\". Maybe it would then be better to just focus on a few layers? But it's perhaps fine to leave that for future investigation.\n\nFor the Prompt Injection setting (instructing the model to \"Always answer the opposite.\"), it's hard to say what a \"correct\" output should be, in fact the prompting method should probably \"ideally\" always be \"wrong\". So saying \"prompting performs worse\" is a bit confusing, although it's still an interesting result that the probing outputs are virtually unchanged."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My main concern and suggestions on how to alleviate it is given in the weaknesses. If the authors can present evidence that helps rule out the positional/order explanation I'm more than happy to raise my score."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "**Originality:** The paper proposes a novel method for identifying a low-dimensional subspace which appears to causally mediate binding behavior in language models. Compared to the rather indirect evidence seen in prior work (Feng & Steinhardt, 2023), the present submission directly identifies this subspace, which results in a much greater degree of manipulability and interpretability.\n\n**Quality:** The method is well-motivated (§5.2) and -- at least on the data used -- works well empirically (§6.1). The qualitative analysis (Figures 5, 7, 8 and related discussion) nicely illustrates similarity in the low-dimensional dimensional subspace, as well as the limitations of the method.\n\n**Clarity:** The paper is structured well, is clearly written and flows naturally.\n\n**Significance:** Binding is generally believed to be an essential component in the formation of internal/situation/world models. As such, any progress towards understanding if/how language models perform binding on a representational level is an important contribution."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method for finding a low-dimensional linear subspace of an LM's activation space which, so the main claim of the paper, encodes binding information.\n\nWhile *binding* is a very broad and complex concept (see Greff et al., 2020, https://arxiv.org/abs/2012.05208 for a recent overview), in this paper binding refers to the process by which textual mentions of two or more entities and attributes that participate in a certain relation are bound together into a more abstract representation of that relation. For example, understanding the text \"Alice lives in Laos\" entails recognizing \"Alice\" and \"Laos\" as mentions of entities and then forming an abstraction of their relation that can be expressed as a proposition like LivesIn(Alice, Laos). On the representational level in a neural network this requires creating internal representations of the entities in question, and then performing some transformation on those representations that signals to subsequent layers that these two representations are \"bound together\".\n\nThe main hypothesis of the paper is that this transformation can be seen as a function that takes two entity representations x and y as input and outputs their \"binding strength\" F(x, y), i.e., a score that quantifies whether the two entities are bound or not. Assuming that F is bilinear in the entity representations x and y, the authors propose a method to estimate F via its Hessian matrix H. If the binding subspace is low-dimensional, then F and the Hessian H should be low rank, which motivates the authors to analyze the rank-k truncated SVD of H. By measuring the efficacy of interchange interventions as a function of k, the authors find that a k=50 dimensional subspace mediates binding, i.e., when manipulating activations in this subspace model output changes accordingly. For example, given the input \"Alices lives in Laos. Bob lives in Peru.\" one can make the LM say \"Bob lives in Laos\" by intervening on activations in this low-dimensional subspace.\n\nHaving developed the machinery to probe an LM for internal representations of propositions, the paper demonstrates several use cases for analyzing discrepancies between the model's internal representations and its output, finding cases in which the model appears to internally represent a proposition but generates output that is inconsistent with it."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper does not do enough to rule out an alternative, simpler hypothesis that could potentially explain the results. Concretely, it appears possible that, due to the highly regular nature of the data, the low-dimensional subspace claimed to encode binding information primarily encodes positional information or order information. The running example \"Alices lives in Laos. Bob lives in Peru.\" has highly regular structure with fixed positional offsets between person and country names, so it is conceivable that the proposed method actually identifies a \"position subspace\" or \"entity/attribute order subspace\" and that interchange interventions claimed to modify binding information in fact modify positional or order information. The paper takes two steps into the direction of ruling out this alternative explanation, but I do not believe that they are sufficient:\n1. Using a LM to rewrite the template-generated texts into short story-like paraphrases. My concern here is that it is unclear how much of the original regularity remains in the paraphrases and how variations in the paraphrases relate to probe performance in Table 2. Since the probe performance exact match metric on the paraphrase is much lower than on the template-based data, it is possible that the probe works best on the paraphrases that are structurally closer to the templates and drops as the paraphrases become more varied and creative. An additional analysis looking at, say, probe performance as a function of token distance between entities and attributes in the paraphrases could provide evidence for or against position being encoded in the identified low-dimensional subspace.\n2. A qualitative comparison in which position and order are varied (\"parallel\" setting in Figure 5, coreference examples in Figure 7). While encouraging, these are only a few qualitative examples of representational similarities. Here, systematic, quantitative experiments would go a long way towards ruling out alternative explanations. Data for such experiments could be relatively easily generated by varying position and order in the templates, e.g., \"Alice lives currently and has always lived Laos. Bob lives in Peru\", which varies the token distance between the bound arguments or \"Alices lives in Laos. Peru is where Bob lives.\", which swaps the order of arguments. If the authors can show that the subspace mediates binding in a similar manner, this would make a much stronger justification for calling it a \"binding subspace\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- I noticed that the number of values of k that were evaluated is not super high, why is that? Is Hessian-based algorithm computationally intensive? \n- Separate probes are learned for each domain, but every domain contains only one predicate, do you have any sense of how well propositional probes might generalize across predicates? Is there a good way to quantify how different the probes for each domain are?\n- Do you think anything be gleaned from the singular values of H? Do they correlate with the accuracies in Figure 4 at all?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- A new type of probe and corresponding algorithm to construct them are presented, this method is likely to be very useful to the interpretability community. Propositional probes will allow probing LLMs for the ability to represent entities as standing in certain relations to one another. \n- The subspace identified is clearly shown to be causally implicated.\n- Probes are shown to outperform prompting in adversarial setups."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new probing technique, called propositional probes. Such probes are functions with two arguments both of which are language model representations. When applied to entities in the probe's domain, the output of the function is a symmetric metric which is expected to be high if the corresponding tokens are bound. \nA Hessian-based algorithm is proposed to find a low-rank bilinear propositional probe. The algorithm starts with a way to query the language such that giving the correct answer depends on the ability to identify if entities are bound. In the paper's experiments, the language model is asked to repeat some relational information provided in-context (e.g. which country does entity0/entity1 live in). However, the representations of the two entities are set to their midpoint, such that the Hessian reveals how the representations would have to change in order to accurately represent their binding. After the Hessian is calculated, SVD is applied and only the top k-dimensional subspace is kept. \nTo evaluate this algorithm, 'interchange interventions' are performed where the positions in the identified subspace of two (out of three) entity representations are swapped. When the model is queried, it reports the 'wrong' entity with close to perfect accuracy for some values k. The binding strength is also visualized for some example inputs.\nFurther evaluations demonstrate that the probe match prompting performance in ordinary setting, and outperform it in adversarial settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Side effects of the interventions are not investigated, it would be great to evaluate how much performance is/isn't lost, as an indication of how precise the interventions are.\n- Results are limited to one model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Why did you go from proposition to text and not the other way around: use existing text (from the wild) and generate propositions from it?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and comes with multiple contributions. The contributions include the use of propositional probes, the definition of the hessian-based algorithm and the confirmation of two hypotheses - that propositions can be decided from internal activations and that these propositions are faithful to the input context."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method to extract latent world states in the form of propositional probes. They form predicate-argument-argument triples for multiple domains. They propose a method based on a Hessian-based algorithm in order to identify the binding subspace. They evaluate the propositional probes in both standard and adversarial settings. For the adversarial setting they find that the propositional probes stay more faithful to the input."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I would have liked to see a stronger focus on the adversarial experiments, in the paper. In particular, a deeper analysis on why probes remain faithful and how backdoor attacks and prompt injection could be prevented using your method.\n- The synthetic dataset setup seems very simplistic and could have been made more true to real life use, such as by using paragraphs of existing texts and extracting propositions from them."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We develop propositional probes, which extract logical propositions describing a language model's internal world state"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024monitoring,\ntitle={Monitoring Latent World States in Language Models with Propositional Probes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0yvZm2AjUr},\nnote={under review}\n}"
},
"abstract": {
"value": "Language models (LMs) are susceptible to bias, sycophancy, backdoors, and other tendencies that lead to unfaithful responses to the input context. Interpreting internal states of LMs could help monitor and correct unfaithful behavior. We hypothesize that LMs faithfully represent their input contexts in a latent world model, and we seek to extract these latent world states as logical propositions. For example, given the input context ``Greg is a nurse. Laura is a physicist.'', we aim to decode the propositions WorksAs(Greg, nurse) and WorksAs(Laura, physicist) from the model's internal activations. To do so we introduce _propositional probes_, which compositionally extract lexical concepts from token activations and bind them into propositions. Key to this is identifying a _binding subspace_ in which bound tokens have high similarity (Greg $\\leftrightarrow$ nurse) but unbound ones do not (Greg $\\not\\leftrightarrow$ physicist). Despite only being trained on linguistically simple English templates, we find that propositional probes generalize to inputs written as short stories and translated to Spanish. Moreover, in three settings where LMs respond unfaithfully to the input context---prompt injections, backdoor attacks, and gender bias--- the decoded propositions remain faithful. This suggests that LMs often encode a faithful world model but decode it unfaithfully, which motivates the search for better interpretability tools for monitoring LMs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Interpretability",
"Language models",
"AI Safety"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8fd3da46525a3a2649703d2c6d894c39c3b688d4.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Monitoring Latent World States in Language Models with Propositional Probes"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
0zGvf2yRMQ | MeshGen: Generating PBR Textured Mesh with Render-Enhanced Auto-Encoder and Generative Data Augmentation | main | Active | 3D Generation;Texture Generation | generative models | 3;5;5;6;6 | 3;4;4;4;5 | 1;3;2;3;2 | 2;2;1;3;3 | 2;2;2;3;3 | 5 | 4 | 2.2 | 2.2 | 2.4 | 0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "please tell me the line number where you state: what the input is to the system, ie, how many views are assumed to be input? Are surface normals also assumed to be input?\nIf you both train and test on the Objaverse dataset, then why can't you report more quantitative measures of performance? Presenting small thumbnails as the research output leaves the reader wondering if the results we're viewing are just the examples that happened to work well."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The final images do indeed look better than the rendered comparison images, for the images shown."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the problem of recovering 3d and some material properties of the objects studied from an image (or from multiple images, or from multiple images and their normal maps. Which of these is correct was not clear from the paper).\nExtensive experimentation was performed to optimize the terms of various loss functions in order to give high-quality visual results of the re-rendered captured shapes. Extra care was taken to ensure that physically-based rendering of the captured surfaces allowed for the captured objects to be rendered under different lighting conditions.\n\nThe paper operated within a subset of the Objaverse dataset, consisting of 35k multi-view images."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There aren't numerical results given for the system performance. Thus, the reader is constantly questioning, \"are these results cherry picked\"?\nThe paper is written for an audience of researchers operating in the same sub-field: people reconstructing 3d, training and testing of Objaverse dataset images. I'm not in that set of researchers (and ICLR readers generally won't be) and many aspects of the paper were unclear to me, see the questions below.\nThis is an engineering paper, a paper showing how to tweak parameters\nto achieve slightly better results in a very crowded field. As I read\nthe paper, I kept asking myself, \"what do I learn from this?\" and I\nrarely came up with an answer to that question. The message is,\nextensive parameter tweaking results in slightly better performance.\nI don't feel that's a message that we need to convey to the ICLR\naudience.\n\nMy concerns with the paper: \n(1) There's no high-level story presented, no obvious set of take-aways that the reader learns.\n(2) The paper doesn't present the work in a way that's accessible to readers outside of this particular subfield.\n(3) Quantitative performance evaluations are not given, just lots of thumbnail images. This is unsatisfying, and not persuasive, since the reader wonders about bad results not being shown. If the results are indeed a random selection of the system outputs, please say so.\n(4) Generalization beyond the one dataset trained on was relegated to one figure in the appendix. Same with failure cases."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "No"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is easy to read. \n- The results in all figs look good, when comparing with existing methods, especially for the geometric details."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Building large models for translating images to 3D models is now a very hot topic. This work is a follow-up in this direction. As mentioned, three things are new: 1) incorparating render-based perceptual loss into the auto-encoder training; 2) two augmentation stratigies are proposed; 3) a texturing pipeline with reference-based attention mechanism is presented. The experiments validate the effectiveness of the proposed designs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My concerns include:\n- It lacks training details, especially the data. What is the scale of the training 3D models? Is it same with meshLRM, meshFormer and Craftsman? If no, the comparison with them may be not fair. \n\n- It lacks quantitative anlaysis of the image-to-shape models. Currently, there are only some selected examples are shown which is not enough to support the claim of SOTA accuracy. \n\n- The paper is not well-motivated. What kind of issues does this paper aim to address? This is not clear to me. \n\n- Lack technical insights. Involving of render-based perceptual loss, the propose new augmentation strategies, attention-based texturing pipeline etc. All the claimed new things are some engineering methods. I believe these can improve the performance, however it cannot brings the community new insights."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Please include quantitative comparisons with the latest LRM papers (e.g., TripoSR, Stable Fast3D) on datasets like GSO or OmbiObject3D.\n\nI understand that you're training a native 3D diffusion model, which is inherently more complex and resource-intensive compared to LRMs. It would be helpful to discuss whether your results are constrained by resource limitations, and to what extent techniques like rendering augmentation were necessary to achieve your results."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Authors provide a comprehensive image-to-3D pipeline (notably, based on 3D diffusion), which performs on par or better than LRMs, and far better than other native 3D methods\n* The paper is well-structured, clear, and easy to follow\n* A key strength is that the authors correctly point out a common issue with 3D latent diffusion models, where the outputs often look symmetric. They visually prove that geometric alignment augmentation is a well-suited solution for this problem. This is original and significant contribution of the paper.\n* Generative rendering augmentation is shown to be an effective augmentation pipeline in practice. This idea, to my knowledge, is original."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce solution to image-to-3D generation problem. Instead of predicting textured mesh, they design native 3D generation approach, with a separate PBR texturing stage.\n\n* 3D generation stage consists of training a render-enhanced point-to-shape auto-encoder; they chose triplane representation instead of previously used vector set for rendering efficiency reasons.\n* Following the autoencoder training stage, they employ a diffusion UNet on top of h-stacked triplane features, with cross-attention on input image DINOv2 features\n* For texture prediction, authors train a multiview ControlNet, applied on top of Zero123++ to predict the multiview shaded renders, with another Instruct-pix2pix decomposer to separate it into PBR materials."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The paper lacks any quantitative evaluation, particularly in two key areas:\n * autoencoder quality: This could be benchmarked against models like 3DShape2VecSet.\n * (biggest weakness) 3D reconstruction quality (geometry-only): Without quantifiable metrics, it's difficult to assess the quality claims. Comparisons with recent LRM papers such as TripoSR and Stable Fast3D on datasets like GSO or OmbiObject3D would be beneficial.\n\n* The proposed render-enhanced autoencoder feels more like a combination of existing methods (e.g., 3DShape2VecSet + render-based loss, a technique used in prior works like DMV3D) rather than a novel contribution.\n\n* I find the justification for ray-based regularization questionable. The paper mentions that\n> render loss alone leads to severe floaters\n\n Isn't that what the BCE loss on occupancy is meant to address? If used correctly, BCE should perform as well as, if not better than, ray-based regularization.\n\n* The generative rendering augmentation seems like a training trick to artificially boost dataset diversity. While this may improve performance, it could complicate future comparisons. I'd recommend reporting metrics without this augmentation for a clearer evaluation.\n\n* Finally, the texturing pipeline appears to be a technical application of existing ideas and seems more complementary to the paper’s core contribution.\n\nIn summary, while the paper lacks core significant novelty, it presents a well-executed combination of techniques for image-to-3D problem with native 3D diffusion models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "What is the detailed training recipes about training dataset, GPU specs, and training time. \n\nThe paper needs to show qualitative results with unseen evaluation datasets, such as Google Scanned Objects (GSO) datasets to a stronger rationale for the improved results. Only the qualitative result of the ablation study is given.\n\nFor the geometric alignment augmentation, the reviewer wonders why it is the augmentation. In L. 261-263, the authors say that “we select one view from multi-view images as the condition and rotate the point cloud’s azimuth to align the object’s orientation with the selected image as the target”. This seems to be just training with an image corresponding to each multi-view angle for a given point cloud, but it's hard to understand why this is an augmentation. The reviewer wonders if training to correspond to multi-view images for a given 3D point cloud is not an existing method for multi-view learning, and what is added differently by augmentation.\n\nFor the multi-view PBR decomposition, are the generated images between different PBR components of the same view image, and multi-view images consistent?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper provides a thorough literature survey of previous studies, analyzes the weaknesses of these methods, and presents a concrete training model and pipeline.\n\nThe qualitative results are improved than to the previous methods, especially for the fine-grained details and well-presenting to the given input view images. The human head results in Fig. 5 is the promising result, because it is out domain of the object dataset (especially for the Objaverse). It will be better to add more human head (out-domain) results.\n\nThe paper can deal with PBR textures which is essential to practical applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose PBR textured mesh reconstruction method from single view image.\n\nThe model has several components, which are render-enhanced auto-encoder, image-to-shape diffusion model with augmentations, and image-conditioned PBR texture generation.\n\nCompared to the previous methods, the paper shows enhanced and fine-grained textured mesh generation results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The figures of the pipeline and modules are hard to understand and make less intuitive because of heavy abbreviation, especially for Fig. 1, Fig. 2, and Fig. 4. \n\nIt is hard to understand the whole pipeline process of the paper’s modules. Especially for the Fig. 1, the reviewer wonders what is the connection between (a) render-enhanced auto-encoder and (b) image-to-shape diffusion model with tailored augmentation. It is nice to have a well thought out and fleshed out design, but is is hard to understand the connectivity of the entire module.\n\nFor the Fig. 4 and Fig.7, it is better to denote which is the paper’s method (ours).\n\nThe paper lacks the discussion of limitations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Geometry comparison with commercial products? I acknowledge that lower mesh quality, attributable to the constraints of high-quality data and computational resources, is a reasonable compromise. However, I am intrigued by your rationale for deeming the alignment termed \"symmetry\" as unnecessary.\n\n2. PBR material generation from scratch V.S. PBR material decomposition from shaded images? The paper presents a promising approach involving multi-view RGB image generation and subsequent multi-view RGB-to-PBR decomposition. Concurrently, an alternative methodology exists for generating albedo, metallic, and roughness attributes from scratch, as exemplified by the HyperHuman Rodin (CLAY). I am curious about your decision to opt for PBR material decomposition over this alternative technique. Additionally, I am interested in your assessment of the strengths and weaknesses of these two techniques (discussion without qualitative results is acceptable since HyperHuman Rodin is not open-source).\n\n3. More details about reference attention? Although I am not familiar with the reference attention, I am quite positively impressed by it. Could you please provide more detail?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Generative rendering augmentation appears to hold significant promise.\n\n2. The results of geometry generation demonstrate satisfactory performance relative to available open-source non-commercial methods.\n\n3. The outcomes of PBR material generation are both compelling and credible, particularly the examples of metal objects presented in the appendix."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel pipeline, termed MeshGen, designed for the generation of high-quality 3D meshes with physically based rendering (PBR) textures from a single image.\nDuring the geometry generation stage, the authors first utilize a render-enhanced auto-encoder to encode 3D meshes into a compact latent space. Subsequently, an image-to-shape diffusion model is trained, incorporating geometric alignment and generative rendering augmentation to address challenges related to image-shape misalignment and the model's generalization capability.\nIn the texture generation stage, the paper establishes a reference attention-based multi-view generator, which is subsequently followed by a PBR decomposer to extract PBR components, along with a UV-space inpainter to complete the rendering of occluded areas."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Robustness of PBR material decomposition? During the texture generation stage, the method initially produces multi-view shaded images, which are subsequently decomposed into their PBR components. However, the variability in light and shadow effects within these shaded images can be substantial. I am particularly interested in the robustness of the PBR decomposer. Specifically, I am curious to know whether the decomposer can effectively manage scenarios involving more intricate lighting conditions.\n\n2. PBR material generation results on more complex metal objects? The paper has demonstrated promising PBR material generation results on various metal objects, including a teapot and a roaster. I am intrigued by the potential outcomes of PBR generation on more complex metal objects, such as a game asset axe or a detailed representation of Iron Man."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024meshgen,\ntitle={MeshGen: Generating {PBR} Textured Mesh with Render-Enhanced Auto-Encoder and Generative Data Augmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0zGvf2yRMQ},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we present MeshGen, an advanced image-to-3D pipeline designed to generate high-quality 3D objects with physically based rendering (PBR) textures. Existing methods struggle with issues such as poor auto-encoder performance, limited training datasets, misalignment between input images and 3D shapes, and inconsistent image-based PBR texturing. MeshGen addresses these limitations through several key innovations. First, we introduce a render-enhanced point-to-shape auto-encoder that compresses 3D shapes into a compact latent space, guided by perceptual loss. A 3D-native diffusion model is then established to directly learn the distribution of 3D shapes within this latent space. To mitigate data scarcity and image-shape misalignment, we propose geometric alignment augmentation and generative rendering augmentation, enhancing the diffusion model's controllability and generalization ability. Following shape generation, MeshGen applies a reference attention-based multi-view ControlNet for image-consistent appearance synthesis, complemented by a PBR decomposer to separate PBR channels. Extensive experiments demonstrate that MeshGen significantly enhances both shape and texture generation compared to previous methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"3D Generation",
"Texture Generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a0343c0597f627fe6785409ecaaad9d62e8b31a3.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/00ac5c75561117d3535be1def028b309f6302df6.zip"
},
"title": {
"value": "MeshGen: Generating PBR Textured Mesh with Render-Enhanced Auto-Encoder and Generative Data Augmentation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
0zRuk3QdiH | Multi-Shot Character Consistency for Text-to-Video Generation | main | Active | text to video;subject consistency;video personalization;motion alignment;feature injection;extended attention | applications to computer vision, audio, language, and other modalities | 5;5;5;8 | 4;3;4;4 | 2;3;3;3 | 2;2;2;3 | 1;2;2;4 | 5.75 | 3.75 | 2.75 | 2.25 | 2.25 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Does the overall generation quality decrease after the proposed method?\n2. How does the motion quality changed after the proposed method?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The method is tuning-free and does not require any further training.\n2. The results outperform baseline methods. Some of the provided visual results look good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper targets the problem of character consistency in text-to-video generation. The authors propose a training-free method to solve this problem. They find the query in the attention encodes the information of both motion and identify, which leads to the trade-off between motion dynamics and identity consistency. The experimental model used is VideoCrafter2. To solve the trade-off problem, they propose a new query injection method. Specifically, they share features between different video clips. Then, they replace the Q (query) with those from the original generation (to maintain motion from the original generation). After that, they leverage the flow map from vanilla keyframes to guide the Q injection. Their results achieve the character consistency while keeping the original motion dynamics and text alignment. The text alignment is evaluated via user study. The overall metrics for evaluation are three aspects: motion degree, id consistency, and motion text alignment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper is relatively hard to follow regarding the details of the method part.\n2. Lack of novelty: 1. The id presevering mechanism is built upon the SDSA. the SDSA is adopted from ConsiStory [Tewel et al. 2024] with two minor modifications: (1) The attention not attend to each other with all frames from different clips, but one single frame from each clip. (2) The mask estimation use ClipSeg, rather than estimated from the cross attention. 2. The motion preserving is leveraging TokenFlow [Geyer et al. 2023] to inject the motion based on the flow from original keyframes. Thus, the method is like a A+B combination with some minor modifications.\n3. The key insight \"self-attention query features (O) encode both motion and identity\" lack experimental results to demonstrate.\n4. The results are not perfect, e.g., inconsistent hairstyles in the 3rd row of Figure 1.\n5. The evaluation does not contain the overall video generation quality and the qualitative semantic alignment scores. \n6. Minor formate issues like inconsistent figure reference: Figure 1 and Fig. 4; And strange line break at line 244 and line 291."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The paper only demonstrates the performance of a single character across different videos, and the reviewer is curious about how the proposed method performs with multiple characters.\n2. The prompt in the paper provides overly detailed descriptions of the character. Would a more concise description impact character consistency? For example, replace the \"Cinematic, middle-aged female athlete\" in Fig.8 with \"A woman\"."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper introduces a training-free method to ensure charactoer consistency and motion adherence in producing multi-shot video sequences.\n2. This paper presents a two-phase query injection strategy to balance encoding motion and identity.\n3. A benchmark and evalution protocol are proposed to evaluate consistency of video generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents Video Storyboarding, a training-free method that enhances pre-trained text-to-video models to generate multiple shots with consistent characters while maintaining high video quality and responsiveness to text prompts. By leveraging self-attention query features that capture motion and identity, the method addresses the trade-off between character consistency and video dynamics through a novel query injection strategy. Experimental results show significant improvements in character consistency and motion quality, offering insights into the video generation process and the interplay of structure and motion in diffusion models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The conducted experiments are not comprehensive, including two aspects: (a) The paper only provides several comparison samples. (b) The paper misses some important baseline methods, e.g., VSTAR[1]\n2. Although the purpose of the paper is to maintain consistency of characters across different video clips, the results are not particularly good. For example, in Fig.3, the color and style of clothes change across different video shots.\n\n[1] Li, Yumeng, William Beluch, Margret Keuper, Dan Zhang, and Anna Khoreva. \"VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis.\" arXiv preprint arXiv:2403.13501 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Overall, this is a well-designed and rigorously evaluated method that represents a significant advancement in generating coherent multi-shot video sequences. The authors have done a commendable job in addressing the complex challenge of maintaining character consistency while preserving natural motion.\n\nSome questions for the authors:\n\n1. Have you explored any strategies to further improve the balance between identity preservation and motion quality? Are there other techniques beyond query injection that could be investigated?\n2. How do you envision this approach scaling to longer video sequences? What additional challenges might arise, and how could the method be adapted to handle them?\n3. The user study results showed that the ConsiS Im2Vid baseline achieved the highest set consistency among the baselines. Can you comment on the strengths of this approach and how it might be combined or compared with your Video Storyboarding method?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Framewise Subject-Driven Self-Attention to maintain consistency without compromising motion\n2. Novel two-phase query injection strategy to balance identity preservation and motion quality\n3. Adaptation of refinement feature injection to both conditional and unconditional denoising steps"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method called \"Video Storyboarding\" to generate multi-shot videos with consistent characters across different scenes while preserving motion quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Approach limited to short video clips, unsure how it would scale to longer videos\n2. Balancing identity preservation and motion quality is still challenging, with potential tradeoffs"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No Ethics Concerns"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. **Inconsistencies in Object Appearance and Action**: In the ablation study on query preservation (`ablation_q`), inconsistencies persist. For example, in the first video, the right hand of the Muppet character appears red, while in the third shot, it is not. Additionally, although the Muppet is intended to perform aerobics in a Sesame Street setting, it merely flicks its hand briefly, failing to convey the intended action sequence.\n\n2. **Static Object Issues in ConsiStory Component Ablation Study**: In the ablation study on ConsiStory components for video generation, the rabbit character intended to surf, train, and ride a bike appears mostly static in the first and third shots. This raises the question of whether these issues stem from limitations in the base model’s dynamic capabilities. If so, would using models with stronger dynamic performance, such as Dynamic Crafter or CogVideo, potentially improve motion consistency and address these static object limitations?\n\nIf the video dynamic problem is addressed, I am willing to increase my score."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. **Training-Free Approach for Subject Consistency Across Shots**: This work offers a training-free method for generating subjects with consistent identity across varying scenarios and shot transitions, which is valuable for practical applications where maintaining coherence in subject appearance is essential.\n\n2. **Novel Insights on Self-Attention Query Features**: The authors provide fresh insights into the role of self-attention query features, demonstrating that these features effectively capture both motion and identity.\n\n3. **Query-Preservation and Q-flow Techniques**: By preserving query features during early denoising and applying a tokenflow-inspired approach to select keyframes, the method achieves partial injection of query features to adjacent frames. Although it draws heavily from ConsisStory and TokenFlow, this approach has demonstrated effectiveness in enhancing subject consistency and motion dynamics to a certain extent."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to generate multi-shot videos with consistent characters in a zero-shot manner. They claim that there is a trade-off between preserving character identity and video dynamics, thereby designing a two-phase approach, Q-preservation and Q-Flow, to balance the two respects."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Limited Novelty in Video Storyboarding**: The innovation of the proposed video storyboarding approach is limited. The primary method relies on frame-wise SDSA, which largely mirrors the approach used in ConsiStory. The only notable difference lies in the mask source, utilizing CLIPseg and OTSU segmentation rather than cross-attention. \n\n2. **Poor Writing and Project Organization**: The paper's writing and the project page's layout hinder comprehension, making it difficult for readers to follow the key contributions the authors intend to convey.\n\n3. **Minimal Improvement over Baseline Models**: The generated video storyboarding results appear similar to those produced by existing video generation baselines like Videocrafter2 or TokenFlow encoder, with little noticeable difference in output quality.\n\n4. **Lack of Motion Dynamics**: The method demonstrates limited motion dynamics. In most video segments, the objects remain static, and in every case, the object consistently occupies the center of the frame, resulting in rigid, uninspired visuals.\n\n5. **Overclaiming the Benchmark**: The authors’ claim of establishing a benchmark based on a dataset of only 30 videos, each containing 5 video shots, is unsubstantiated. This dataset is insufficiently sized and lacks diversity, with evaluations limited to character consistency and dynamic degree, providing a narrow view that does not comprehensively assess the model's capabilities."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A training-free approach for generating video shots of the same characters, preserving identity and motion to prompt agreement."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024multishot,\ntitle={Multi-Shot Character Consistency for Text-to-Video Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0zRuk3QdiH},\nnote={under review}\n}"
},
"abstract": {
"value": "Text-to-video models have made significant strides in generating short video clips from textual descriptions. Yet, a significant challenge remains: generating several video shots of the same characters, preserving their identity without hurting video quality, dynamics, and responsiveness to text prompts. We present Video Storyboarding, a training-free method to enable pretrained text-to-video models to generate multiple shots with consistent characters, by sharing features between them. Our key insight is that self-attention query features (Q) encode both motion and identity. This creates a hard-to-avoid trade-off between preserving character identity and making videos dynamic, when features are shared. To address this issue, we introduce a novel query injection strategy that balances identity preservation and natural motion retention. This approach improves upon naive consistency techniques applied to videos, which often struggle to maintain this delicate equilibrium. Our experiments demonstrate significant improvements in character consistency across scenes while maintaining high-quality motion and text alignment. These results offer insights into critical stages of video generation and the interplay of structure and motion in video diffusion models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"text to video",
"subject consistency",
"video personalization",
"motion alignment",
"feature injection",
"extended attention"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/170f02322ab92d2cea34d111e32da1bbf49332aa.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/24468342d85c3fc4676090987d5d5d77386b4f11.zip"
},
"title": {
"value": "Multi-Shot Character Consistency for Text-to-Video Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
0zZEbHLTwf | DeepFDM: A scientific computing method for Neural Partial Differential Equation (PDE) operators | main | Active | Partial Differential Equations;neural operators;solution operators;interpretable models;out of distribution;dataset shift;physical models | applications to physical sciences (physics, chemistry, biology, etc.) | 3;3;3;5 | 5;4;4;3 | 2;2;2;2 | 2;1;1;2 | 3;3;1;3 | 3.5 | 4 | 2 | 1.5 | 2.5 | -0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why do the lines in Fig. 5 start from different initial points? It looks like the authors use different initializations, which is unfair for comparison.\n2. What is the motivation for using Hellinger distance, not the KL divergence, for example? KL also admits closed-form for the distance between multivariate Gaussians."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Fair benchmarking of the neural PDE solvers and evaluation of their robustness to the input data is very important for understanding the current state of this field and identifying the gaps in the current SOTA methods.\n2. The proposed DeepFDM method provides more accurate predictions than competitors\n3. The benchmarking procedure is well-described and could be used in other works for evaluation of the new neural PDE solvers,"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The manuscript considers the problem of benchmarking neural PDE solvers and analysis of the robustness with respect to the diverse distributions. The authors propose DeepFDM, a benchmark method, and the procedure to generate train/test data for benchmarking with quantified shifts in distributions. The main idea of the DeepFDM method is to represent finite difference approximation of a particular type of PDEs through a convolutional neural network that parameterizes variable coefficients. Therefore, given a ground-truth input/output pair, such a model fits the target coefficients over the used grid. The second part of the work is the procedure to generate data with a controlled distribution shift that helps evaluate the trained model's robustness to input data out of the distribution where training data was generated. In experiments, DeepFDM shows better robustness to input data distribution shifts for the broad classes of equations than competitors while requiring fewer trainable parameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The main weakness of this work is that the authors combined two different contributions in a single study: a dataset generation procedure for benchmarking neural PDE solvers and a DeepFDM method for fitting the PDE coefficients. \n2. The idea of parameterizing the finite-difference method via CNN is not new and has already appeared in other works like the smoothing operator in the multigrid methodб https://arxiv.org/abs/2102.12071 \n3. The proposed method's scalability is not discussed or compared with competitors.\n4. The presentation of the problem statement is confusing since the authors start not from the inverse problem of coefficient reconstruction but from the solution reconstruction problem."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. How did you condition the neural PDE solvers on the coordinate-dependent PDE parameters? By adding the spatial coordinates to the solver inputs?\n2. How did you create the spatially varying PDE parameters? Using the same method as generating the initial condition?\n3. What did your data look like exactly? You mentioned 1D and 2D problems. Which PDE in Tab. 1 is 1D, which is 2D? How large was the dataset? How large were the spatial and temporal resolutions?\n4. What learning rate did you use? What optimizer? Did you train the models autoregressively or with 1-step errors only?\n5. How did you introduce the distribution shift? Did you increase or decrease the standard deviation of the PDE parameters? How many basis functions N did you use in the beginning? Did all of them have the same standard deviation?\n6. Why is the Hellinger distance between the parameters generating the initial conditions a good measure for the distribution shift?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "A novel way of comparing neural PDE solvers against numerical methods, which is in a sense more fair to the neural PDE solvers (if the neural PDE solvers are trained on real-word data)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper compares the performance of an inverse design method combined with a numerical solver against neural PDE solvers. The authors assume they are given data generated from a known PDE family but with unknown, spatially dependent coefficients. In this problem setting, the paper proposes to compare neural PDE solvers against numerical simulators. Since the PDE parameters are unknown, they are estimated by minimizing the difference between the output of a differentiable numerical solver and the given data. The experiments show that the proposed model converges quicker than the considered neural PDE solvers and achieves lower errors both on in- and out-of-distribution data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper is not very clear in the type of problem it approaches. The writing could be improved to make the definition of the problem more easy to understand.\n2. The usual setup in neural PDE solvers is that the PDE parameters are known. In this setting, neural PDE solvers have already been compared to numerical methods. The authors could better motivate their specific choice of problem definition (i.e., unknown, spatially-dependent PDE parameters).\n3. The method is only a useful baseline if the neural PDE solver is trained on real-world data. When the neural PDE solver is used as a surrogate for a numerical solver, the PDE parameters would be known (since they would have been used to generate the training data).\n4. There is no inference time evaluation. Faster inference is one of the main reasons for utilizing a neural PDE surrogate instead of a numerical method like the one considered in the paper.\n5. Many experimental and model details are missing (see questions)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "You need to compare your methods fairly with both ResNet and Unet, as well as FNO, since they represent different categories—traditional neural networks versus neural operators.\n\nPlease discuss the trade-offs between finite difference and automatic differentiation in your specific context, and to provide justification for the choice of FD and AD. There is considerable evidence that automatic differentiation outperforms FD in terms of training loss.\n\nCan you explain why DeepFDM doesn't show oscillation in Fig. 5, unlike the other methods?\nHow does the computational cost and training time of DeepFDM compare to the other approaches?\nGiven that Fig. 5 doesn't show significant improvement, can you clarify what advantages DeepFDM offers in terms of training dynamics?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper highlights a need for benchmarking in PDE solutions, pointing out that while neural networks can flexibly solve a variety of equations, there’s limited systematic comparison with established numerical methods. This motivation is well-founded, especially in scientific and engineering fields that demand rigorous performance metrics.\n\n DeepFDM seems to target both in-distribution (ID) and out-of-distribution (OOD) performance, providing a structured method for generating training and test data that reflects distribution shifts. This contribution is valuable since robust OOD performance is crucial for practical applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents DeepFDM, a benchmark approach for learning PDE solution operators that combines the structure of traditional PDE solvers with neural networks. By leveraging the strengths of scientific computing, DeepFDM offers interpretability and aims to enhance accuracy and generalization within a specific class of PDEs, rather than acting as a replacement for neural PDE solvers.\n\nWhile DeepFDM is designed specifically for certain types of PDEs, it shows limited generalization to other PDE classes, reducing its applicability in diverse scenarios. Additionally, the paper lacks a detailed analysis explaining why DeepFDM outperforms traditional methods, which weakens the justification of its advantages. Providing a rigorous theoretical analysis with established approaches would strengthen the work and clarify the specific benefits of DeepFDM in terms of accuracy and generalization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "DeepFDM is presented as a benchmark method; however, its applicability is limited to a specific class of partial differential equations (PDEs). The paper does not sufficiently discuss how this restriction affects DeepFDM's generalizability, particularly in scenarios that require flexibility across various forms of PDEs, such as nonlinear PDEs and complex boundary conditions.\n\nFor instance, in the case of hyperbolic equations with shock locations, finite difference methods (FDM) may struggle to accurately capture the discontinuities inherent in these solutions. This limitation could significantly impact the performance and reliability of DeepFDM when applied to a broader range of PDE types.\n\nPlease explicitly state the objectives and justify the choice of comparison methods in the context of those objectives. More specifically,\nWhy do the authors compare DeepFDM to both neural networks like ResNet and Unet, as well as neural operators like FNO? It’s unclear whether the authors aim to solve individual instances of PDEs or to learn a solution operator."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you explain your definition of the benchmarking method? What makes DeepFDM a benchmarking method for PDE operator learning?\n- Do you want to focus on the inverse problem or forward problem? Could you explain how you make sure it is fair to compare with FNO, U-Net, and ResNet?\n- Could you provide a detailed description on datasets and training process?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper explores benchmarking neural operator methods and OOD performance of neural PDE operators, which is meaningful in scientific computing. \n\nFor one family of PDEs, it provides a neural network based solver with coefficients inferred."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The goal of this work is to design a benchmark method for learning PDE solution operators based on numerical PDE solvers. The authors proposed DeepFDM, which focuses on one class of PDEs and takes advantage of the structure of PDEs. DeepFDM learns the coefficients of the PDEs, and distribution shifts using the Hellinger distance are quantified. The results are compared with FNO, U-Net, and ResNet."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The motivation of choosing the known family of time dependent PDEs with periodic boundary conditions, and bounded coefficients is not clear to me. The problem setup seems very restricted, and the method is only applicable to learn from initial conditions.\n- It should be made clear that if the method is data-driven/known PDE, PDE solver/operator learning in the beginning. According to my understanding, the PDE needs to be known to use the finite differences solver. DeepFDM learns an operator from the initial condition for a specific family of PDEs to the solution at next time step, and the iteratively solve for a longer time. The problem setup should be more rigorous. \n- The literature review in Section 2.1 is not well organized or well-written. The papers of PDE discovery, PINN and operator learning are mentioned without a focus. Some claims are not correct and language is vague. For example, “Lu et al. (2019) propose the DeepONet architecture, which learns PDE solution operators. However, in this case, the PDE is fully known and the PDE residual is included in the loss.” It is not correct. There is no PDE known in vanilla data-driven DeepONet. The authors may refer to Physics-informed DeepONet. “Neural PDE operators aim to learn to solve a given PDE from data, without assuming that the form of the PDE is known.” This claim is conflicting with the above point. \n- One main issue is that it ‘s not fair to compare DeepFDM with FNO, U-Net, and ResNet, since the PDE structure is known and of course it can perform better than pure data-driven methods. This makes the results not convincing. \n- There is an existing paper on distribution shift quantification: M. Zhu, H. Zhang, A. Jiao, G. E. Karniadakis, & L. Lu. Reliable extrapolation of deep neural operators informed by physics or sparse observations. Computer Methods in Applied Mechanics and Engineering, 412, 116064, 2023.\n- Some notations are clearly defined. For example, m in the dataset, A, A* and \\hat{A}.\n- There are no metrics for coefficient fields if the author considers solving the inverse problem.\n- I don't see Appendix B in the manuscript. \n- \"In this case, the solution is generated on a higher resolution grid, and then coarsened (upsampled).\" It should be \"downsampled\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduct a benchmark scientific computing approach to PDE operator learning, and a benchmark method for OOD dataset generation for PDEs"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024deepfdm,\ntitle={Deep{FDM}: A scientific computing method for Neural Partial Differential Equation ({PDE}) operators},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0zZEbHLTwf},\nnote={under review}\n}"
},
"abstract": {
"value": "Solving Partial Differential Equations (PDE) has long been a critical challenge in many scientific and engineering domains. Recently, neural networks have shown great promise in solving PDEs by learning solution operators from data, offering a flexible and adaptive alternative to traditional numerical solvers. Despite these advancements, there is still a need for systematic benchmarking of neural operator methods against conventional approaches and for the development of datasets representing diverse distributions for robust evaluation.\nIn this paper, we introduce DeepFDM, a benchmark method for learning PDE solution operators based on numerical PDE solvers. \nDeepFDM leverages the structure of the PDE, in order to achieve better accuracy and generalization compared to neural solvers. It is designed as a solver for a specific class of PDEs and not as a replacement for neural solvers. Moreover, because DeepFDM learns the coefficients of the PDEs, it offers inherent interpretability. We also introduce a principled method for generating training and test data for PDE solutions, allowing for a quantifiable measure of distribution shifts. This method provides a structured approach to evaluate the out-of-distribution (OOD) performance of neural PDE operators. \nOur work sets a foundation for future comparisons of neural operator methods with traditional scientific computing approaches, providing a rigorous framework for performance benchmarking, at the level of the data and at the level of the neural solver."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Partial Differential Equations",
"neural operators",
"solution operators",
"interpretable models",
"out of distribution",
"dataset shift",
"physical models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/082d598b1e71de2bf2d513cbb6b3a70cf36f6c31.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DeepFDM: A scientific computing method for Neural Partial Differential Equation (PDE) operators"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
0ziGSo4uWp | TimeCAT: Hierarchical Context-Aware Transformer with Dynamic Grouping for Time Series Forecasting | main | Withdraw | Time Series;Context-Aware;Transformer | learning on time series and dynamical systems | Yun Cheng | ~Yun_Cheng3 | 3;3;5 | 3;4;4 | 3;1;3 | 3;2;3 | 2;1;2 | 3.666667 | 3.666667 | 2.333333 | 2.666667 | 1.666667 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- How is $\\tilde{X}$ downsampled to obtain $\\tilde{X}'$? \n- Time-series forecasting experiments are often sensitive to hyperparameters, such as learning rates and batch sizes. How were these parameters chosen?\n- Could you provide a comparison of the actual training times for the models listed in Table 2?\n- Could you include results for cases when $G=1$ and when $G$ is large (e.g., $G=16$, $G=32$)?\n- There are many unnecessary horizontal spaces between equations and sentences (e.g., L222, L226, and L301). These should be removed."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. This paper is well-motivated.\n2. The proposed model, TimeCAT, outperforms prior models in several time-series forecasting benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes TimeCAT, a transformer-based model for time-series forecasting that leverages hierarchical dependencies among fixed-length patches, dynamically grouped patches, and a global token. Specifically, TimeCAT first forms dynamic groups of input patches, then captures both fine-grained and coarse-grained information through hierarchical dependencies using Context-Aware Mixing Blocks. Experimental results demonstrate that TimeCAT outperforms previous models on several time-series forecasting benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation quality of this paper is unsatisfactory.\n - The method is not clearly described throughout the paper. The descriptions are overly wordy, with many unnecessary and confusing notations.\n - For instance, from group ratios $\\mathbf{r} \\in \\mathbb{R}^G$, how are group indices obtained, and how are they optimized? Such indices are often assigned through discretization, making it unclear how they can be optimized using gradient-based methods. In particular, it is unclear how to compute gradients for $\\mathbf{W}\\_g$ and $\\mathbf{b}\\_g$. Additionally, as group size is calculated using the ceiling function, the optimization of the embedding $\\mathbf{l}\\_{s_i}$ is also unclear.\n - Other confusing notations are as follows: What are $\\mathbf{g}'\\_{n,i}$ in Eq(3) and $\\tilde{\\mathbf{g}}\\_{n,i}$ in Eq (7)? Is $RD$ a single hyperparameter or a product of two hyperparameters, $R$ and $D$?\n - Notational consistency is also an issue, with symbols such as $\\textbf{X}$ vs $X$ and $x$ vs $\\tilde{x}$ vs $\\hat{x}$. The frequent use of accents and subscripts/superscripts could easily confuse readers.\n2. Claims are not well supported.\n - The paper emphasizes the importance of the dynamic grouping mechanism. However, the experiments show that $G=2$ is sufficient to achieve good results. This very small number of groups does not adequately demonstrate the necessity of grouping for solving the problem, as all groups may still be too coarse.\n - Additionally, the grouping is determined by a single linear transformation of the input matrix, which raises doubts about the quality of the grouping.\n - Why does training become unstable without Eq (12) and (15)? Furthermore, there is no ablation study to justify the inclusion of this gradient detachment technique.\n - What is the rationale for the order of operations in the context-aware mixing block? One could simply apply self-attention across all tokens to capture intra-group and inter-group dependencies. If $G$ is as small as used in this paper (e.g., $G=2$), the computational complexity is not significantly greater than that of TimeCAT. The hierarchical design lacks both a clear explanation and experimental validation.\n - Why does Figure 5 highlight the effectiveness of the grouping strategy? For example, I cannot see any alignment between Figures 5(a) and 5(b), and the t-SNE plots in Figures 5(c)-(e) show no meaningful pattern. Why should distinct clusters of global tokens reflect effective separation of variables and high-level interactions? A more detailed explanation would be helpful."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The writing of the article needs improvement. In Equation 3, what does g' mean?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. TimeCAT's hierarchical structure, with its focus on intra-group, inter-group, and global interactions, provides a comprehensive framework for capturing both local and global temporal patterns. This multi-scale modeling is a significant advancement in the field.\n2. The paper demonstrates a substantial reduction in computational complexity by applying self-attention within groups rather than across the entire sequence. This efficiency gain is crucial for handling longer sequences and larger datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "TimeCAT is designed to address the limitations of existing Transformer-based models that struggle with capturing complex temporal dependencies and suffer from computational inefficiencies, particularly with long sequences. The core innovation of TimeCAT is its dynamic grouping mechanism, which segments input sequences into semantically coherent groups, enabling efficient hierarchical mixing at different levels of context. This approach facilitates the modeling of both local patterns within groups and global trends across the entire sequence."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The main experiments in the article are limited. It might be worth considering adding short-term experiments and incorporating new datasets. For example, there are many new datasets available here: https://huggingface.co/datasets/Salesforce/lotsa_data.\n2. How does the model’s performance change as the input length increases?\n3. Wouldn’t using the downsampled version of x to perform group division result in information loss?\n4. The spacing between the image and the title is too large."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I have a few suggestions which may improve the quality of the paper:\n(a) Please revise Section 3 to enhance its readability.\n(b) Include a qualitative comparison to demonstrate improvements in computational complexity in terms of parameters, FLOPs, and memory.\n(c) I recommend including transformer-based baselines that aim to improve efficiency."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The proposed TimeCAT model was evaluated against benchmarks, and its accuracy shows promising results compared to recent state-of-the-art baselines. The Context-Aware Mixing Block enhances information exchange. In particular, the grouping mechanism allows the transformer to capture dependencies between highly related patches, thereby eliminating unnecessary computations between less related patches and improving efficiency. This mechanism demonstrates potential for effectively extracting information from Multivariate Time Series (MTS) data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This manuscript introduces a framework that employs a hierarchical context-aware transformer, TimeCAT, designed to dynamically group time series patches and efficiently capture dependencies. TimeCAT was compared with recent state-of-the-art methods, and the results demonstrate promising accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The hyperparameter path size, \\(P\\), is crucial for capturing temporal dependencies and reducing computational complexity. This manuscript notes that in scenarios where \\(P\\) is significantly greater than \\(G\\), increasing \\(G\\) leads to higher computational savings. However, a detailed discussion on the size of \\(P\\) is absent. A parameter sensitivity study regarding \\(P\\), along with a table similar to Table 3 to highlight the computational savings, would strengthen the manuscript. Additionally, providing a specific example that compares the complexity reduction in terms of parameters, FLOPs, and memory would be beneficial.\n\nThe clarity of the context-aware mixing workflow could be improved by detailing the dimensions of all intermediate tensors.\n\nWhile the efficiency of TimeCAT is highlighted as a key contribution, the manuscript does not compare it with other efficient transformer-based MTS forecasting baselines. This comparison would provide a more comprehensive evaluation of TimeCAT's performance and efficiency."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\ncheng2024timecat,\ntitle={Time{CAT}: Hierarchical Context-Aware Transformer with Dynamic Grouping for Time Series Forecasting},\nauthor={Yun Cheng},\nyear={2024},\nurl={https://openreview.net/forum?id=0ziGSo4uWp}\n}"
},
"abstract": {
"value": "Transformer-based models have achieved significant success in time series forecasting by modeling global dependencies through self-attention mechanisms. However, these models often rely on fixed patch settings with locality constraints, tokenizing time series into spatially connected sub-series. This approach can hinder the capture of semantic relationships and lead to computational inefficiencies, especially when dealing with long sequences with complex temporal dependencies. \nIn this work, we introduce \\textbf{TimeCAT}—a \\underline{Time} series \\underline{C}ontext-\\underline{A}ware \\underline{T}ransformer that dynamically groups input sequences into semantically coherent groups, enabling efficient modeling of both local and global dependencies. By appending group and global tokens, TimeCAT facilitates fine-grained information exchange through a novel \\emph{Context-Aware Mixing Block}, which utilizes self-attention and MLP mixing operations. This hierarchical approach efficiently models long sequences by processing inputs in structured contexts, reducing computational overhead without sacrificing accuracy.\nExperiments on several challenging real-world datasets demonstrate that TimeCAT achieves consistent state-of-the-art performance, significantly improving forecasting accuracy and computational efficiency over existing methods. This advancement enhances the Transformer family with improved performance, generalization ability, and better utilization of sequence information."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Yun_Cheng3"
]
},
"authors": {
"value": [
"Yun Cheng"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Time Series",
"Context-Aware",
"Transformer"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "cheng|timecat_hierarchical_contextaware_transformer_with_dynamic_grouping_for_time_series_forecasting"
},
"pdf": {
"value": "/pdf/35ba6b859749a6330a375c222f72e8d34c0e6912.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TimeCAT: Hierarchical Context-Aware Transformer with Dynamic Grouping for Time Series Forecasting"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
0zmHFyZwkA | Hierarchical Graph Learners for Cardinality Estimation | main | Active | Cardinality Estimation;Many small models;Graph Hash;Group-by-template;Fast Learning | learning on graphs and other geometries & topologies | 3;5;5 | 4;4;5 | 1;3;3 | 1;2;2 | 1;2;2 | 4.333333 | 4.333333 | 2.333333 | 1.666667 | 1.666667 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- It is unclear how the plan tree is constructed. For example, is it correct to interpret that (A ? B) ? C and A ? (B ? C) are non-isomorphic plans?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- S1: The evaluation experiments used six types of workloads, including up to five-table join queries, and achieved a higher Q-error than MSCN (CIDR2019). It was confirmed that the fine-grained model, H1, achieved the fastest convergence in training and the highest accuracy.\n\n- S2: Since machine learning-based methods often underperform in high-percentile cases, the fallback mechanism is beneficial in practice. As shown in Table 4, the deeper hierarchical fallback mechanism achieves a higher Q-error.\n\n- S3: As a query-driven approach, the featurizer’s ability to characterize predicates is useful."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a workload-driven approach for cardinality estimation aiming at workloads containig repetative queries. The proposal utilizes multiple templatizer to derive hierarchical cardinality estimation models, taking a query plan tree as input and obtaining data with different granularity. It employs general predictors like PostgreSQL to perform cardinality estimation at the lowest level."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- W1: In addition to Q-error, P-error should be also evaluated, which is becoming a standard metric.\n\n- W2: As shown in Table 3, H1 demonstrates the highest performance, so a structure of H1 -> PostgreSQL seems optimal, making the multiple hierarchy setup (in Equation 15) unnecessary. The authors should clarify the benefit of using multiple hierarchy levels. \n\n- W3: Although the proposal adopts a workload-driven approach, recent trends favor data-driven or hybrid approaches. Notably, data-driven approaches have the advantage of robustness for unknown queries. Combining a workload-driven approach with data-driven methods could enhance accuracy in cases where prediction errors are large. While a workload-driven approach has the advantage of faster inference time, it is not obvious workload-driven approach alone is useful. \n\n- W4: In Section 2.5, the statement \"All graphs whose templates are isomorphic share the same model\" is based on graph isomorphism as defined in Definition 1, relying on edge information only. The templatizer, H1, thus does not use graph attribute information. However, Section 3.1 states, \"Hence, query graphs found in the same H1 template differ only by the constant values,\" which appears contradictory.\n\n- W5: Section 3.3 includes a calculation for the hash size, such as s(#T1), but according to Equation 6, the hash length is fixed at h, so the case distinction in Equation 10 does not seem valid.\n\n- W6: Although Cardbench is used as the evaluation benchmark, it is proposed in an arXiv paper and is not yet a standard benchmark. It would be better to use the widely accepted join order benchmark or explain the strong justification for using Cardbench."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please refer to W1-W4."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "S1. The authors tackle an important issue in analytical databases: cardinality estimation for repetitive queries.\n\nS2. The authors propose a cardinality estimator, which leverages hierarchical localized models that are trained online."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a supervised cardinality estimation method that enhances accuracy for workloads with repetitive queries (i.e., queries repeated in templates with only the constant parameters changed) by leveraging hierarchical, online localized models. This approach transforms each SQL query into a Directed Acyclic Graph (DAG) representation, groups isomorphic DAGs into the same partition, and trains an online model for each partition as queries are executed. Grouping is conducted hierarchically at multiple levels of granularity, ranging from fine-grained (i.e., queries varying only in constant terms are placed in the same partition) to coarse-grained (e.g., queries varying only in constants, operator types, and column names are placed in the same partition). During runtime, given a query, the method begins with the fine-grained model and moves to coarser-grained models until it finds a confident model for the query. If no suitable model exists, it defaults to a learned or traditional model. Using the CardBench benchmark, the authors demonstrate that this method yields more accurate cardinality estimates compared to competitors such as PostgreSQL and MSCN."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. The Experiments section needs to be improved.\n- The authors should compare their method with state-of-the-art (SOTA) data-driven cardinality estimation methods such as [1]. Currently, the comparison is limited to MSCN, a query-driven method proposed in 2019 [2], and PostgreSQL, a traditional estimator.\n- The “Imperfect Admission Experiments” in Section 4 seems unfair, as the q-error percentiles of each method are reported on different query subsets: PostgreSQL is evaluated on the entire query set, MSCN on a subset excluding disjunctions, while the proposed method seems to be evaluated on a subset of simple queries (e.g., repetitive queries varying only in constant terms).\n- The authors should report end-to-end time (including both planning and query execution time), as shown in [1,3], along with its breakdown for further clarity.\n- Reporting training time and model size would provide further insights into the method’s practical feasibility.\n- The authors should clarify why the q-error is higher in certain intervals with larger training sample sizes in Figure 4.\n- The experimental setup requires a more thorough explanation, particularly regarding how query repetitiveness was generated and its extent.\n\n\nW2. Motivation is inadequate.\n- The authors should explain why existing learned estimators struggle with repetitive workloads to highlight the necessity for their proposed method.\n- The authors should clarify why a hierarchical model structure is effective in improving the accuracy of cardinality estimation.\n\nW3. The presentation needs to be improved.\n- The process of generating a DAG from a query needs further explanation. If the DAG refers to a query plan, the authors should specify which query plan is used, as multiple plans can exist for a single query.\n- There are some undefined terms, such as d_{\\psi} in Section 2.\n- There are inconsistent notations throughout the paper. For instance, “query graph” and “query plan” are used interchangeably.\n- Numerous typos appear throughout the paper, such as “geoping” and “hases” in Section 5.\n\nW4. There are some misstatements regarding existing work\n- The authors seem to overstate the limitations of existing methods. They claim that “NN-based estimators perform well if they are trained with large amounts of query samples,” which is true specifically for query-driven learned estimators, not all NN-based methods.\n- The authors state that “50% of the real world clusters have more than 90% queries repeated in templates (only changing the constant parameters),” citing [4]. However, according to [4], the correct value is 80%, not 90%.\n\n[1] Kim, Kyoungmin, et al. \"Asm: Harmonizing autoregressive model, sampling, and multi-dimensional statistics merging for cardinality estimation.\" Proceedings of the ACM on Management of Data 2.1 (2024): 1-27.\n[2] Kipf, Andreas, et al. “Learned cardinalities: Estimating correlated joins with deep learning.” In Biennial Conference on Innovative Data Systems Research, 2019.\n[3] Wu, Ziniu, et al. \"FactorJoin: a new cardinality estimation framework for join queries.\" Proceedings of the ACM on Management of Data 1.1 (2023): 1-27.\n[4] van Renen, Alexander, et al. \"Why tpc is not enough: An analysis of the amazon redshift fleet.\" Proceedings of the VLDB Endowment 17.11 (2024): 3694-3706."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Q1. The addition of mean absolute error (MAE) and relative prediction error (RPE) would allow for a more comprehensive evaluation of the accuracy of different cardinality estimators.\nQ2. It should include a broader variety and greater number of baseline methods by introducing more advanced cardinality estimation approaches. Adding comparisons with data-driven cardinality estimation methods or experiments against paradigmatic methods like QueryFormer would enhance the analysis.\nQ3. It should include comparative experiments under different workloads. Additionally, experiments on cardinality estimation with lower query redundancy should be added to provide a more comprehensive evaluation.\nQ4. Could you clarify what specific models are used at each level of cardinality estimation in this paper? This detail is not adequately explained in the manuscript."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "S1. Hierarchical cardinality estimation methods have significant advantages for cardinality estimation methods in the presence of a large number of repeated queries. They allow for the training of separate cardinality estimators for each query type, thereby saving training costs and improving the accuracy of cardinality estimation.\nS2. The method presented in this paper demonstrates strong cardinality estimation performance, and it also exhibits good convergence stability through hierarchical training.\nS3. Compared to traditional cardinality estimators, the method proposed in this paper achieves faster convergence speed with lower overhead, making it suitable for practical industrial applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To address the issue of repeated queries in cardinality estimation, this paper proposes an on-line cardinality estimation method. Unlike traditional cardinality estimation approaches that rely on a single model, this study introduces a hierarchical cardinality estimation framework. Queries are categorized into three levels, with different structural classifications applied at each level. For each level, distinct estimator models are trained based on various classification templates, and evaluations are conducted hierarchically. Additionally, these models utilize an extension of Merkle-Trees to hash directed acyclic graph (DAG) query plans. Finally, an ensemble learning method is used to statistically aggregate the results and produce the final cardinality estimates. Compared to traditional cardinality estimators and the query-based cost estimation method MSCN, this approach achieves superior results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. This paper employs only the Q-Error as an evaluation metric, which can assess the stability of the cardinality estimator but does not provide an intuitive measure of its accuracy. The addition of mean absolute error (MAE) and relative prediction error (RPE) would allow for a more comprehensive evaluation of the accuracy of different cardinality estimators.\nW2. This paper compares only with two relatively outdated query-based cardinality estimation methods, MSCN and MSCN+. It should include a broader variety and greater number of baseline methods by introducing more advanced cardinality estimation approaches. Adding comparisons with data-driven cardinality estimation methods or experiments against paradigmatic methods like QueryFormer would enhance the analysis.\nW3. The experimental workload in this paper lacks clarification regarding query redundancy. It should include comparative experiments under different workloads. Additionally, experiments on cardinality estimation with lower query redundancy should be added to provide a more comprehensive evaluation."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Given many labeled graphs, each representing database query, where graph label is cardinality, we group graphs by graph-structure and learn simple model per group (within group, feature dimension is constant)."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024hierarchical,\ntitle={Hierarchical Graph Learners for Cardinality Estimation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=0zmHFyZwkA},\nnote={under review}\n}"
},
"abstract": {
"value": "Cardinality estimation -- the task of estimating the number of records that a database query will return -- is core to performance optimization in modern database systems. Traditional optimizers used in commercial systems use heuristics that can lead to large errors. Recently, neural network based models have been proposed that outperform the traditional optimizers. These neural network based estimators perform well if they are trained with large amounts of query samples. In this work, we observe that data warehouse workloads contain highly repetitive queries, and propose a hierarchy of localized on-line models to target these repetitive queries. At the core, these models use an extension of Merkle-Trees to hash query plans which are directed acyclic graphs. The hash values can divisively partition a large set of graphs into many sets, each containing few (whole) graphs. We learn an online model for each partition of the hierarchy. No upfront training is needed; on-line models learn as the queries are executed. When a new query comes, we check the partitions it is hashed to and if no such local model was sufficiently confident along the hierarchy, we fall-back onto a default model at the root. Our experimental results show that not only our hierarchical on-line models perform better than the traditional optimizers, they also outperform neural models, with robust errors rates at the tail."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Cardinality Estimation",
"Many small models",
"Graph Hash",
"Group-by-template",
"Fast Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a797ce27036a1f3593a57a960711d98c47e24fbc.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Hierarchical Graph Learners for Cardinality Estimation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
107ZsHD8h7 | Autoformulation of Mathematical Optimization Models Using LLMs | main | Active | Large Language Models;optimization modeling | applications to robotics, autonomy, planning | 5;5;5;6 | 4;3;3;4 | 2;2;3;3 | 2;2;3;3 | 3;2;3;3 | 5.25 | 3.5 | 2.5 | 2.5 | 2.75 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you justify why optimality gap and computational efficiency, which depends on solver, are also included in the evalutation metric?\n2. What is the relationship of Types I to Types III problems with the rest of this paper?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The approach to utilize LLM both as hypothesis generators and evaluators seems novel. \n2. A pruning technique based on Satisfiability Modulo Theories (SMT) solvers is introduced to eliminate redundant, trivially equivalent formulations, which can improve the search efficiency in my understanding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is along the popular direction of using LLM to automatically fomulate mathematical optimization problems. It introduces a formal definition of autoformulation and identifies three main challenages. To address these challenges, the authors propose a method using large language models (LLMs) within a Monte-Carlo Tree Search (MCTS) framework. LLMs are utilized both as hypothesis generators and evaluators, while MCTS incrementally explores the formulation space. Additionally, a pruning technique based on Satisfiability Modulo Theories (SMT) solvers is introduced to eliminate redundant, trivially equivalent formulations. The method was tested on benchmarks (NL4OPT and IndustryOR) and show superior results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The concept of autoformulation defined in this paper is broad, including mathematical formulation and computational formulation, and optimality gap and computational efficiency. However, the main contribution of this paper is on mathematical formulation if I understand correctly. The rest in the proposed concept is not addressed in this paper.\n2. I do not see why adding optimality gap and computational efficiency to the evaluation metric. They're solver dependent and is not the responsbility of formulation.\n3. The authors defined Types I to Types III problems, but I do not see how they are related to the main contribution of this paper.\nOverall, I feel like the concept proposed in this paper is too broad and distract readers from the main contribution of this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Section 5.2, paragraph (2) ‘Getting Local Scores,’ how does the DFS expand the tree? Does it expand the child with the highest prior score first, or does it expand a random model generated by the LLM?\n\n2. How should Figure 5 be interpreted? Section 5.4 doesn’t seem directly related.\n\n3. How is exploration of the hypothesis space ensured? Figure 4 shows that pruning is effective, but does this imply that the LLM fails to generate diverse partial formulations?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Leveraging LLMs for both hypothesis generation and evaluation enables verification of partial problems.\n2. Using MCTS to efficiently navigate the large hypothesis space, along with introducing the SMT pruning procedure, saves computational resources and focuses on unique, meaningful formulations.\n3. Strong experimental results compared to baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a framework to automate the creation of mathematical optimization models from natural language descriptions. This process is essential in operations research but traditionally requires significant expertise. There are three main challenges for autoformulation: defining a vast hypothesis space, navigating this space efficiently, and ensuring formulation correctness. To address these, the authors integrate LLMs within an MCTS framework, using LLMs to generate potential formulations and evaluate correctness. They also apply a pruning technique with SMT solvers to eliminate redundant formulations. Experiments on benchmarks for linear and mixed-integer programming show this approach is both accurate and efficient, potentially making optimization modeling more accessible."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The same LLM is used for both hypothesis generation and verification, which introduces a high correlation between generation and evaluation stages. This could limit the objectivity and diversity of the verification.\n\n2. Lacks a clear explanation of how the LLM is guided to explore the hypothesis space with sufficient diversity.\n\n3. It would be beneficial to provide more insight into partial problem formulation at intermediate steps. Improved verification of partial models could enhance accuracy and reduce runtime for complex problems.\n\n\nMinor issue: The font size in the figures is too small"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Could you clarify how the developed system will be utilized by domain scientists or in other potential use cases?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper provides a clear and compelling description of the motivation behind the research, along with a well-articulated explanation of the proposed method.\n- It includes an extensive experimental comparison that effectively demonstrates the performance and advantages of the proposed approach relative to existing methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper identifies three core challenges: defining the vast, problem-dependent hypothesis space, efficiently searching this space under uncertainty, and evaluating the correctness of formulations. \n\nTo tackle these issues, the authors propose a novel method that leverages Large Language Models (LLMs) within a Monte-Carlo Tree Search framework. LLMs function as both hypothesis generators and evaluators of formulation correctness, while a pruning technique is employed to eliminate trivial equivalent formulations. \n\nEmpirical evaluations demonstrate that this approach significantly improves the efficiency and accuracy of formulating optimization models for linear and mixed-integer programming problems, making it accessible to users without deep optimization expertise."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In my experience with Operations Research, many papers focus on formulating new problems into linear programming or mixed-integer linear programming. These papers typically begin with a clear textual definition of the problem, followed by rigorous proofs, given the critical applications involved. However, I find that this work may not be particularly useful for practitioners in the field, as they still need to undertake similar formulation efforts themselves. The primary benefit seems to lie in providing students with examples for their coursework rather than advancing practical applications for researchers and professionals.\n\n- In line 128, the quote from the textbook stating, \"Once this formulation is done, solving the problem is ... (almost) technology,\" highlights a distinction from the problem formulation presented in this paper. The textbook emphasizes identifying the context of the problem, extracting the decision variables, and compiling all relevant real-world constraints, which cannot be adequately captured through mere textual descriptions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Questions:\n1. Why having a probabilistic model $p_\\psi$ with complex joint optimization in Section 2.1? (See Weaknesses 1 for details)\n\nSuggestions:\n1. Highlight the relation and difference between this paper and other works combining LLMs with MCTS.\n2. Align the problem definition and the methods. If the quality of computational representation is not sufficiently addressed in this paper, the problem definition may not be so complex. (In my opinion, a method which fit the current problem definition will be, for example, bi-level finetuning of two LLMs $p_\\phi$ and $p_\\psi$ to maximize $Q$)\n3. It may worth fine-tuning the model with MCTS-based self enhancement in the future (check arXiv:2309.17179 for an example)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Good writing, easy to follow.\n2. Specific improvements on MCTS to fit the optimization scenario (merging trivially equivalent formulations, including solver signals to rewards, etc.)\n3. Sufficient experimental details."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a MCTS-enhanced LLM approach, to transform descriptions of certain types of optimization problems into formal optimization models consisting of (1) variables, (2) objective, (3) equality constraints and (4) inequality constraints. The MCTS has a depth of 4 corresponding to the four components of an optimization model. When expanding a node, LLMs are used to generate a set of candidate formulations, and trivally identical formulations are pruned by SMT solvers to improve efficiency. The expanded nodes will be assigned a prior by an LLM-based ranking evaluation. The reward at the terminal node is a combination of LLM evaluation and whether optimality is reached from solvers. Experimental results show that the proposed approach outperforms other non-MCTS approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The rationality of the definition of \"autoformulation\" is not sufficiently backed by the main part of the paper. The connection between the problem setting and the proposed method is not very clear. For the definition, two transformations are highlighted (mathematical formulation model $p_\\phi$ and computational representation model $p_\\psi$), and \"autoformulation\" is defined to jointly optimize $(\\phi, \\psi)$ so that the expection of a quality function $Q$ is maximized. This problem setting is a bit different from other \"LLM+OR\" papers. However, in the main part of the paper, no clear effort is paid on optimizing parameters, neither $\\phi$ nor $\\psi$ (GPT-4 is used without fine-tuning). I feel that the main objective of this paper is still the same as other papers, proposing a $P_{\\phi\\text{-LLM}}^\\text{improved}$ that empirically performs better than a vanilla use of $P_{\\phi\\text{-LLM}}$. So I am a bit confused why bother to propose such a definition. Specifically, it seems unneeded to have a probabilistic model $p_\\psi$ with complex joint optimization, if the computational representation step can be simply done by a deterministic parser (line 241) or commertial packages (line 194).\n2. There are already lots of literatures working on combining LLMs with MCTS to solve general or domain-specific problems (see arXiv:2309.17179, arXiv:2402.03289, arXiv:2409.09584, arXiv:2406.07394, arXiv:2402.08147 for some examples), in both training and inference stage, which may unfortunately raise the bar for another \"LLM+MCTS\" paper to appear on top conferences. It is good to have LLM+MCTS applied on OR problems, but more unique features might be required to distinguish this paper. This paper has paid some effort on it (see Strength 2), but apart from these adaptations, the whole LLM+MCTS pipeline seems quite standard, and only applied at the inference stage.\n3. While LLM contains general knowledge of optimization, the proposed method is limited to (reformulated) convex problems (type I and II in this paper)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024autoformulation,\ntitle={Autoformulation of Mathematical Optimization Models Using {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=107ZsHD8h7},\nnote={under review}\n}"
},
"abstract": {
"value": "Mathematical optimization is fundamental to decision-making across diverse domains, from operations research to healthcare. Yet, translating real-world problems into optimization models remains a formidable challenge, often demanding specialized expertise. This paper formally introduces the concept of *autoformulation*---an automated approach to creating optimization models from natural language descriptions for commercial solvers.\nWe identify the three core challenges of autoformulation: (1) defining the vast, problem-dependent hypothesis space, (2) efficiently searching this space under uncertainty, and (3) evaluating formulation correctness (ensuring a formulation accurately represents the problem).\nTo address these challenges, we introduce a novel method leveraging *Large Language Models* (LLMs) within a *Monte-Carlo Tree Search* framework. This approach systematically explores the space of possible formulations by exploiting the hierarchical nature of optimization modeling. LLMs serve two key roles: as dynamic formulation hypothesis generators and as evaluators of formulation correctness. To enhance search efficiency, we introduce a pruning technique to remove trivially equivalent formulations. \nEmpirical evaluations across benchmarks containing linear and mixed-integer programming problems demonstrate our method's superior performance. Additionally, we observe significant efficiency gains from employing LLMs for correctness evaluation and from our pruning techniques."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models",
"optimization modeling"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6da87802a6b8d8f636375a46bc11a1eecc11f30d.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Autoformulation of Mathematical Optimization Models Using LLMs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
10DtLPsdro | Factor Graph-based Interpretable Neural Networks | main | Active | interpretable neural network;factor graph;perturbation;explanation rectification;graph learning | interpretability and explainable AI | 5;5;5;6;6 | 4;4;3;4;3 | 3;3;3;3;3 | 3;3;2;2;3 | 2;3;2;3;3 | 5.4 | 3.6 | 3 | 2.6 | 2.6 | -0.166667 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Defining the logic rule set seems expensive. Would it be difficult to construct a factor graph by connecting concepts and categories with a bipartite complete graph and estimating the weights $w_i$?\n- How do you distinguish between coexistence and exclusion in the factor graph?\n- It would be better to describe the specific estimation algorithm of $w_i$ in the Appendix.\n- Line 336-337: What is \"attributional training\"?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed method enables both the detection of logical errors and the correction of these errors within a single framework.\n- Compared to other concept-based interpretable neural networks, the proposed method achieves higher comprehensiveness in explanations, regardless of whether the perturbations are known or unknown."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed a method using factor graphs to correct errors in concept-based explanations caused by perturbations. \nSpecifically, it constructs a factor graph using predefined logical rules between concepts or between concepts and categories. \nThis graph helps identify logical errors in the output from concept-based interpretable neural networks by evaluating the likelihood of these errors. \nAdditionally, by leveraging the factor graph, it is possible to correct these logical errors in the explanations. \nExperimental comparisons on three datasets demonstrate that the proposed method outperforms existing concept-based approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the proposed method assumes that explanations change due to perturbations without affecting predictions, this seems unrealistic. Particularly in interpretable neural networks with a concept bottleneck structure, as assumed in this study, changes in the concepts outputted by the neural network would naturally lead to changes in predictions, which undermines this assumption.\n- The proposed method requires predefined logic rules between concepts and categories. If these rules are already known, wouldn’t it be possible to detect inconsistencies between concepts and predictions without the need for the factor graph? The advantage of using a factor graph is unclear.\n- As noted in the minor comments below, there is room for improvement in the writing.\n\nMinor comments:\n- The explanation of Figure 3 is insufficient.\n- There is no reference to Figure 4."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Can the authors provide a comparison against one of the methods injecting knowledge (prior or learnt)?\n- Scalability: The two limitations that have been reported (domain knowledge changes, correct prediction categories) are non-negligible. How could this method be extended to face these limitations?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- **Good presentation and structure** The paper is well structured and easy to read. The mathematical definition of the proposed method is clear and well-defined. \n- **Nice experimental campaign**: Extensive experiments on three datasets (CUB, MIMIC-III EWS, and Synthetic-MNIST) demonstrate the superior performance of AGAIN over the compared baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces AGAIN (fActor GrAph-based Interpretable Neural Network), which generates comprehensible explanations for neural network decisions even under unknown perturbations. Unlike traditional adversarial training methods, AGAIN incorporates logical rules through a factor graph to identify and rectify explanatory logical errors during inference. The model is validated on three datasets: CUB, MIMIC-III EWS, and Synthetic-MNIST, demonstrating superior performance compared to existing baselines"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Major Issues\n\n- **Novelty, Related work and Compared methods**: the main issue with the current paper is that it only considers methods injecting knowledge into the models by means of factor graphs. However, the integration of knowledge into model predictions has been fully studied under different point of views: probabilistic (e.g., DeepProblog [1] and all its variants), logic constraints (see the survey of Giunchiglia et al. [2]). Also, it is not the first method defending against adversarial attack with knowledge and without employing adversarial training. Some of these methods have been already employed to defend against adversarial attacks, such as [3-4]. [5] is a survey entirely dedicated to enhancing interpretability and adversarial robustness with prior knowledge. [6] has shown in the context of concept-based models that it can learn logic rules at training time and use them at inference time to defend against adversarial attacks. This is also reflected in the experimental campaign that is extensive but does not consider any methods injecting prior knowledge to defend against adversarial attacks. The only compared methods are CBMs or adversarial-trained versions of the same models. \n\n\n- **Paper positioning and Preliminaries**: the method provides explanations and a defence mechanism that is based on concept predictions; thus, it applies only to concept-based models. Most of the compared methods also belongs to this niche. While this is fully acceptable, explicit references to concept-based models only appears in Section 3-4. Therefore, I think it should state earlier that this is the focus of the paper, as most of the related work mentioned in the paper does not focus on concept-based explanations. Furthermore, there is no explicit mentions to concept-based models in the preliminaries. The “Interpretable Neural Network” paragraph should include citations to this literature and explain concept-based models.\n\n## Minor Issues\n- P.2 “[…] even if the adversarial samples are available, retraining is only effective for at most one perturbation type” I think this statement is quite strong, and although it may have been proved for some attacks in Tan & Tian, 2023, I don’t think it is a general assumption that we can make. I think this sentence should be soften to “retraining is effective only for few perturbation types at the same time”. \n- P.2 “[…] to ensure the expectation of the potential function is in its maximum value”. It is not clear at this point what is the potential function. Consider rephrasing this sentence.\n- P.2 “The explanations that are further regenerated align with the exogenous logic.” Not clear in this case what are the exogenous factors. \n- P.2-3: The term \"defenses against comprehensibility\" seems a bit misleading. It implies that the goal is to prevent explanations from being understandable, which is not the case. Instead, the focus is on defending the comprehensibility of explanations against perturbations. “defenses of comprehensibility” colud be a more appropriate definition.\n\n\n[1] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, Luc De Raedt: DeepProbLog: Neural Probabilistic Logic Programming. NeurIPS 2018: 3753-3763\n\n[2] Giunchiglia, E., Stoian, M. C., & Lukasiewicz, T. (7 2022). Deep Learning with Logical Constraints. In L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22 (pp. 5478–5485). 
doi:10.24963/ijcai.2022/767\n\n[3] Yin, M., Li, S., Cai, Z., Song, C., Asif, M. S., Roy-Chowdhury, A. K., & Krishnamurthy, S. V. (2021). Exploiting multi-object relationships for detecting adversarial attacks in complex scenes. In proceedings of the IEEE/CVF international conference on computer vision (pp. 7858-7867).\n\n[4] Melacci, Stefano, et al. \"Domain knowledge alleviates adversarial attacks in multi-label classifiers.\" IEEE Transactions on Pattern Analysis and Machine Intelligence 44.12 (2021): 9944-9959.\n\n[5] Mumuni, Fuseini, and Alhassan Mumuni. \"Improving deep learning with prior knowledge and cognitive models: A survey on enhancing interpretability, adversarial robustness and zero-shot learning.\" Cognitive Systems Research (2023): 101188.\n\n[6] Ciravegna, Gabriele, et al. \"Logic explained networks.\" Artificial Intelligence 314 (2023): 103822."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "How is the reasoning conducted on G? How is the probability estimation implemented in Equation 1? Please provide more details."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper proposes a novel and interesting idea that uses the logical correctness of explanation to detect and defense against noises and adversaries.\n2. The three stage framework is reasonable to me. \n3. The examples used in the paper are intuitive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes AGAIN (fActor GrAph-based Interpretable Neural Network), a new approach to maintain comprehensible neural network explanations when inputs are affected by unknown perturbations. AGAIN builds factor graphs from explanations, and integrates logical rules through the graphs. The system detects adversaries by identifying violations of real-world logic, and uses an interactive intervention strategy to fix incorrect explanations. Tested on three datasets (CUB, MIMIC-III EWS, and Synthetic-MNIST), AGAIN outperformed existing methods in generating comprehensible explanations under both known and unknown perturbations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The concept bottleneck model is not easy to scale, as it requires manual annotations of concepts.\n2. The implementation details of Section 4.2 is not very clear (the introduction is too conceptual). For example, is $\\mathcal{G}$ a graph or a model? At the beginning I thought $\\mathcal{G}$ is a graph, but in Line 221 it \"reasons about a conditional probability\".\n3. The datasets used in this work are not very strong. I doubt if the work is applicable to real-world situations. At least, the datasets used cannot reflect adversarial scenarios in practice."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The authors employed black-box perturbations to test robustness. Adding input noise could provide more convincing evidence.\n2. I am concerned about the potential impact of large factor graphs on computational efficiency. I would like to see an analysis on this problem.\n3. Currently, the algorithm explores all intervention options to select the optimal one. Is it possible to employ a simplified strategy, such as a heuristic or greedy algorithm, to reduce computational load?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper presents an innovative method, AGAIN, that combines factor graphs with concept-level explanations to improve model interpretability under unknown perturbations.\n2. The authors evaluate AGAIN across multiple datasets and baseline models, providing a broad view of its effectiveness.\n3. The paper is clear and well-structured."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes AGAIN, a neural network model that generates comprehensible explanations under unknown perturbations by integrating logical rules directly during inference, rather than relying on adversarial training. Using a factor graph to identify and rectify logical inconsistencies, AGAIN demonstrates superior interpretability and robustness compared to existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The factor graph requires predefined logical rules, which could be challenging to construct or generalize across different domains.\n2. The algorithm’s process of exploring all possible intervention options to find the optimal solution could create computational overhead."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper offers a well-motivated and clear presentation of a unique approach. The use of a factor graph to handle perturbations is innovative and highly relevant to the research community. Furthermore, the authors have provided sufficient theoretical support and empirical evidence to justify the effectiveness of their approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a novel method called AGAIN to generate comprehensible explanations under unknown perturbations by employing a factor graph approach. The method presents a significant contribution to the field, addressing an important gap with a new perspective. The paper is well-structured, and the authors have provided thorough theoretical justifications and experimental analyses. These analyses effectively demonstrate the superiority of the proposed method over existing ones."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Clarification on Figure 3:\n In Figure 3, does the final output include hat{c}_{re} and prediction? Does the hat{c}_{re} mean the interpretation? Additionally, the figure lacks clarity on how hc is generated, which is crucial for understanding the method. Adding details on hc generation would make the figure more comprehensive.\n \n2. Comparison Metrics:\n The authors compare their method with all baselines using the LSM metric. However, they only compare their method with CBM in terms of accuracy (P-ACC and E-ACC). It would be beneficial to extend the accuracy comparison to include all baselines to provide a more complete evaluation of the method’s performance.\n \n3. Figure 7 Interpretation and Readability:\n In Figure 7, for each example, the authors provide two bar charts. Does the left bar chart represent the initial interpretation, and the right bar chart represent the interpretation combined with the factor graph? Clarification on this aspect would enhance the understanding of the figure. Additionally, some symbols and text in Figure 7 are overlapping.\n \n4. Inconsistency in Line 463 and Table 5:\n The authors mention in line 463 that their method is compared with two baselines on the Synthetic-MNIST dataset. However, Table 5 lists four baselines. Furthermore, \"ProbCBM\" in Table 5 should be corrected to \"ProCBM\" for consistency. It is recommended that the authors proofread the paper to eliminate such inconsistencies and typographical errors."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024factor,\ntitle={Factor Graph-based Interpretable Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=10DtLPsdro},\nnote={under review}\n}"
},
"abstract": {
"value": "Comprehensible neural network explanations are foundations for a better understanding of decisions, especially when the input data are infused with malicious perturbations. Existing solutions generally mitigate the impact of perturbations through adversarial training, yet they fail to generate comprehensible explanations under unknown perturbations. To address this challenge, we propose AGAIN, a fActor GrAph-based Interpretable neural Network, which is capable of generating comprehensible explanations under unknown perturbations. Instead of retraining like previous solutions, the proposed AGAIN directly integrates logical rules by which logical errors in explanations are identified and rectified during inference. Specifically, we construct the factor graph to express logical rules between explanations and categories. By treating logical rules as exogenous knowledge, AGAIN can identify incomprehensible explanations that violate real-world logic. Furthermore, we propose an interactive intervention switch strategy rectifying explanations based on the logical guidance from the factor graph without learning perturbations, which overcomes the inherent limitation of adversarial training-based methods in defending only against known perturbations. Additionally, we theoretically demonstrate the effectiveness of employing factor graph by proving that the comprehensibility of explanations is strongly correlated with factor graph. Extensive experiments are conducted on three datasets and experimental results illustrate the superior performance of AGAIN compared to state-of-the-art baselines."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"interpretable neural network",
"factor graph",
"perturbation",
"explanation rectification",
"graph learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e27cefe47e843cdc9ae47f185ccbf3238aac30a7.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Factor Graph-based Interpretable Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
10JOlFIPjt | In vivo cell-type and brain region classification via multimodal contrastive learning | main | Active | contrastive learning;electrophysiology;extracellular;multimodal;neuroscience;cell type;brain region;Neuropixels;deep learning | applications to neuroscience & cognitive science | 5;6;8;8 | 4;5;3;3 | 3;3;4;4 | 3;2;3;4 | 3;3;4;4 | 6.75 | 3.75 | 3.5 | 3 | 3.5 | -0.754337 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "The waveform template is restricted to one channel with maximal amplitude and multi-channel template results are shown in Supplement E. What exactly is the definition of a channel in this context?\n\nWhy was additive Gaussian noise chosen as the sole data augmentation? Can a brief rationale for this specific choice be included in the paper? The authors demonstrate that adding two template augmentations: amplitude jitter and electrode dropout does not significantly improve the performance of the two downstream classification tasks. How do other data augmentation types impact the performance and results?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "This is a very well-written paper with clear organization of the figures and sound presentation of the data sets used, methods applied, and results obtained. The authors apply a successful contrastive learning method to develop a new framework that outperforms comparable state-of-the-art methods in both the cell-type classification task and the less widely-explored brain-region classification task. The task relies on two electrophysiological recording modalities: spiking activity and EAPs, which are more accessible, and the decent classification performances come with minimal fine-tuning. NEMO is able to differentiate between VIP and SST cells, which is highly valued in systems neuroscience, and its ability to yield data embeddings that lead to separable classification regions is impressive. The method described in this paper will be particularly helpful and useful to systems neuroscientists interested in applying this technique with the goal of decoding the neural circuitry underlying multiple biological functions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel application of contrastive learning to solve two important problems in systems neuroscience by incorporating just two electrophysiological recording modalities: spiking activity and extracellular action potentials (EAPs). The authors developed a new framework called Neuronal Embeddings via Multimodal Contrastive Learning (NEMO) by combining the well-established Contrastive Language-Image Pretraining (CLIP) framework with task-specific data augmentations and encoders. The authors demonstrate the multimodality as well as the power and utility of NEMO by evaluating its performance on two very different downstream tasks:\n\n1.\tcell-type classification among parvalbumin (PV), somatostatin (SST), and vasoactive intestinal peptide (VIP) inhibitory interneurons using opto-tagged Neuropixels Ultra (NP Ultra) recordings from the mouse visual cortex, and\n\n2.\tbrain-region classification among 10 broad areas using the public International Brain Laboratory (IBL) brain-wide map data set.\n\nThis paper’s novelty mainly stems from the utilization of a paired data set that combines an autocorrelogram (ACG) image of every neuron’s spiking activity and a waveform template of the neurons’ EAPs, and from the application of two separate encoders for the aforementioned two modalities. In both cell-type and brain-region classification tasks, the authors show that NEMO outperforms the state-of-the-art multimodal cell-type embedding methods including PhysMAP and VAE-based methods as well as a fully supervised method in terms of balanced accuracies and macro-averaged F1 scores with minimal fine-tuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Detailed graphical representations of the architectures used in the authors’ method may help the readers understand the details of NEMO better. A more comprehensive description, such as including explanations of 10D and 500D in the VAE baseline versions’ latent spaces of the encoder architectures, would have further aided clarity to the method’s explanations.\n\nThe authors restrict the number of example neurons, whose recorded activities are inputted to the overall architecture, to five neurons without an explanation of why the input neuron number was kept low. Providing a rationale for limiting the number of input neurons to five as well as an explanation of how the results of the authors' method change with varying input neuron number would be greatly appreciated.\n\nThe authors state that they fixed the hyperparameters for all methods across the two data sets used in the experiments due to the limited number of labels for evaluation for the cell-type classification task. It is unclear whether this choice led to fair performance comparisons among the state-of-the-art methods. It would help to know that separate hyperparameter optimization among the different methods and data sets would not yield different results. Providing a brief analysis on the sensitivity of the results and the overall performance to variations in specific hyperparameters will notably strengthen the claims made in this paper.\n\nMinor comments:\n\nThere seems to be a citation error or a missing preposition in Section 1 when the authors cite IBL et al.\n\nRadford et al. 2021 to Radford et al., 2021 in Section 4\n\nTable 6 is non-existent in Section 6.1. The authors may be referring to Table 1.\n\nThere seems to be a citation error or a missing preposition in Section 6.3 when the authors cite Chen et al. (2020).\n\nFigure 5 (b) caption's word \"then\" should be changed to \"than.\"\n\nIn Section 7, the phrase \"should also be useful these down-stream tasks\" should be changed to \"should also be useful in these down-stream tasks.\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.The authors emphasized “in vivo”, however, are there any results about the computational efficiency of the NEMO model that can support it? Does it support real-time processing?\n\n2.I would expect a validation of the data augmentation strategy. It is understandable that the construction of ACG images is computationally expensive. But the authors are encouraged to validate the data augmentation strategy adopted in the study, augmentations directly for the ACG images, is reasonable by showing a couple examples.\n\n3.The authors compared NEMO with the VAE-based method. However, in my opinion, it seems that a critical comparison is missing: the comparison between naive models, specifically, NEMO without fine-tuning and the VAE-based method without fine-tuning. This comparison would highlight the representational power of the two methods ."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "As mentioned above, this study is among the earliest efforts to apply contrastive learning for joint modeling of extracellular action potential data and spiking activity data. Most key components in the proposed analytical framework are fetched from previous work (e.g., CLIP contrastive learning and the ACG encoder), increasing the reproducibility of the study. In addition, the study was well presented and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors proposed a multimodal contrastive learning approach for joint modeling of extracellular action potential data and spiking activity data. The proposed approach can be fine-tuned for different downstream tasks, including in vivo cell-type and brain region classification. More specifically, the author applied the classic contrastive learning framework established in CLIP on extracellular action potential data and spiking activity data. Although the theoretical innovation is relatively limited, the authors made the earliest efforts (as far as I know) to apply contrastive learning for joint modeling of extracellular action potential data and spiking activity data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "As mentioned above, the theoretically contribution of the study is relatively limited. The readers may expect to see some components that are specifically designed with consideration for the unique characteristics of the data and the problem being addressed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- The VAE training is unclear to me. Do you jointly embed waveforms and autocorrelograms, or do you use two separate encoders/decoders with a shared latent space (which would require cross-decoding, i.e. encode waveform and decoder autocorrelogram)?\n- How important are the data augmentations? Can you provide an ablation experiment for that?\n- Waveforms and autocorrelograms are two reasonable choices for input modalities. However they are not the only conceivable choices. Have you thought/tried other choices or thought about learning an encoding of spiking activity directly?\n- What are the results when you cluster on the latent embeddings directly instead of running UMAP first? How stable is the clustering? I.e. if you train two models of NEMO from different seeds and then cluster neurons, how consistently are two neurons assigned to the same cluster (as measured by adjusted rand index or similar)?\n- Can you provide a definition for the modularity index?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The paper is well written, the figures are well made, descriptions are generally clear.\n- The paper contains extensive additional material for more details.\n- The training and experimental setup seems to be carefully chosen and sound.\n- Limitations are discussed at the end."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces NEMO (Neuronal Embeddings via MultimOdal contrastive learning), a method for classifying neurons by their cell type and brain region location using electrophysiological data. NEMO uses CLIP-like contrastive learning to jointly analyze two types of neural data: the shape of neural electrical signals (waveforms) and patterns of neural activity over time (autocorrelograms). \n\nThe authors evaluated NEMO on two datasets: an opto-tagged mouse visual cortex dataset for cell-type classification and the International Brain Laboratory dataset for brain region classification. In comparative experiments, NEMO achieved higher classification accuracy than existing methods including PhysMAP and VAE-based approaches. The authors also demonstrated that using multiple units improved performance, and that the method maintained effectiveness even with limited labeled training data. The paper includes ablation studies examining the importance of joint versus independent learning of the two data modalities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I only found minor weaknesses. \n\n- Although, the authors did a great job providing necessary details, sometimes the training specifics for the control models and the clustering analyses were a bit too short in my opinion. It would be great if you could provide a bit more detail on this.\n- It would be great to see an ablation for the data augmentation to see how important that is (see questions).\n\n**Minor (do not expect you to respond to this)**\n\n- Figure 3 typo in legend “Supervise” instead of “Supervised”\n- Supplementary Figure 7 is blurred likely due to plotting settings\n- As multi-unit has a particular meaning in neuronscience, I find the wording “multi-unit brain region classification”. I think you mean “multi-neuron” here. Unless I misunderstood and you are really using multi-unit activity, I would change the wording."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Figure 1a, the text \"Neuropixels 1.0\" should be replaced with \"Neuropixels Ultra\" since your VIP/SST/PV data comes from NP Ultra, not NP 1.0, which is used in the IBL dataset for classifying brain regions. Also, please update the inset to show the schematic of NP Ultra instead of NP 1.0.\n\n2. Figure 3b is not mentioned in the text. It could be referenced between L350 and L361.\n\n3. Figure 3c is also not mentioned in the text. It might fit well between L362 and L366.\n\n4. Consider changing the title of Figure 3b and 3c from \"unit\" to \"neuron\" to be consistent with the terminology used in the main text.\n\n5. In the PhysMAP paper that you benchmarked, they applied three public datasets from S1, A1, and V1/Hippocampus. Have you tested these datasets? While I am not requesting additional experiments, if you have tested them, it would be helpful to include the results in the supplementary materials.\n\n6. Please specify how you computed the two primary metrics: macro-averaged F1 score and balanced accuracy. Does the first metric equal the unweighted mean of all the per-cell-type F1 scores? Does the second metric equal the unweighted mean of sensitivity (True Positive) and specificity (True Negative)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The writing is clear, easy to understand, and follows a smooth logical flow, with almost no typos and well-presented figures.\n\n2. The paper provides a thorough review of relevant literature in this field. I have closely followed the cell-type classification area, and all the papers I am aware of (and some I was not) have been accurately cited, except for one (see Weakness). Notably, the authors benchmarked two very recent models that are still in preprint format.\n\n3. This is the second work to use contrastive learning for cell-type classification and the first to combine two modalities.\n\n4. The multimodal approach outperforms single-modal models, which is consistent with previous studies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors used multimodal (spike waveform + firing pattern) contrastive learning to classify three types of inhibitory neurons (PV/SST/VIP) and ten different brain regions. The proposed NEMO model is based on the widely used CLIP (Contrastive Language-Image Pre-Training) model. NEMO outperforms two previous multimodal models: PhysMAP and VAE."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In the best-case scenario (cell type classification, L344, Figure 2c), NEMO outperforms VAE by 11%. However, in brain region classification, the improvement is minimal. For example, in Figure 3e, comparing the deep orange (NEMO) to the deep blue (VAE) bars, the difference in balanced accuracy is less than 0.05. Are these differences statistically significant?\n\n2. Joint training shows little to no benefit over independent training. For example, in Figure 5b, comparing the deep orange (NEMO) to the deep violet (SimCLR) bars, the difference in balanced accuracy is around 0.01. Additionally, in Supplementary Table 9, independent NEMO performs either better (0.84 vs. 0.83) or similarly (0.83 vs. 0.84, 0.87 vs. 0.88) to joint NEMO. Are these differences statistically significant?\n\n3. To my knowledge, there is no neuroscience evidence suggesting a strong pairwise correlation between spike waveform and firing patterns. For example, layer 5 pyramidal neurons and cortical PV neurons both fire a large number of spikes (both spontaneously and evoked), but their spike waveforms are broad and narrow, respectively (Cortical connectivity and sensory coding, KD Harris, TD Mrsic-Flogel - Nature, 2013). Additionally, burst firing can be evoked in both excitatory (Chattering cells: superficial pyramidal neurons contributing to the generation of synchronous oscillations in the visual cortex. CM Gray, DA McCormick - Science, 1996) and SST neurons (Somatostatin-expressing neurons in cortical networks, Urban-Ciecko, AL Barth - Nature Reviews Neuroscience, 2016). This is fundamentally different from the relationship between an image and its description in CLIP. In other words, the word \"puppy\" and an image of a \"puppy\" represent the same concept, but a broad spike could be associated with either burst or dense firing, depending on whether the neuron is located in layer 2/3 or layer 5.\n\n4. This is the second paper to use contrastive learning for cell-type classification. The first is CEED (Vishnubhotla et al., 2023), which used an unsupervised model (SimCLR) to classify cell types and benchmarked against WaveMap (the single-modality version of PhysMAP). While the CEED work does not significantly compromise the novelty of this study, it should be more clearly acknowledged. The current citation, \"Contrastive learning has been applied to raw electrophysiological recordings (Vishnubhotla et al., 2024),\" is inappropriate."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024in,\ntitle={In vivo cell-type and brain region classification via multimodal contrastive learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=10JOlFIPjt},\nnote={under review}\n}"
},
"abstract": {
"value": "Current electrophysiological approaches can track the activity of many neurons, yet it is usually unknown which cell-types or brain areas are being recorded without further molecular or histological analysis. Developing accurate and scalable algorithms for identifying the cell-type and brain region of recorded neurons is thus crucial for improving our understanding of neural computation. In this work, we develop a multimodal contrastive learning approach for neural data that can be fine-tuned for different downstream tasks, including inference of cell-type and brain location. We utilize multimodal contrastive learning to jointly embed the activity autocorrelations and extracellular waveforms of individual neurons. We demonstrate that our embedding approach, Neuronal Embeddings via MultimOdal Contrastive Learning (NEMO), paired with supervised fine-tuning, achieves state-of-the-art cell-type classification for an opto-tagged visual cortex dataset and for brain region classification of the public International Brain Laboratory brain-wide map dataset. Our method represents a promising step towards accurate cell-type and brain region classification from electrophysiological recordings."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"contrastive learning",
"electrophysiology",
"extracellular",
"multimodal",
"neuroscience",
"cell type",
"brain region",
"Neuropixels",
"deep learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/430bdaf44a0512940ae8575e2683f9503e0bc3a0.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "In vivo cell-type and brain region classification via multimodal contrastive learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
10kBEqYKKN | Impact of Prompt on Latent Representations in LLMs | main | Active | Explainability;Representation analysis;LLM;prompting;zero-shot | interpretability and explainable AI | 3;3;3 | 4;4;4 | 2;1;1 | 1;1;1 | 1;2;2 | 3 | 4 | 1.333333 | 1 | 1.666667 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The idea of studying the effects of prompts through the geometry of latent representations is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the effect of prompts on the representation of the EOS token across prompts and LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Studying only the representation of the EOS token seems to be an oversimplification. While this representation technically can depend on all of the context/generation, it might not attend to its meaningful parts, thus failing to capture interesting patterns.\n- The paper lacks clear/practical insights besides observing that the EOS token representation does depend on the prompt in some way.\n- For an empirically oriented work, studying only binary sentiment classification datasets is insufficient. Hypotheses should be verified across a broader range of tasks, including open-ended generation.\n\nOther:\n- Many typos throughout the paper (missing spaces and periods, inconsistent usage of citet and citep, etc.)\n- IsoScore is not defined in the paper. Only a high-level explanation is provided.\n- RIS is not carefully defined - is k' the same as c? \"value of k is equal to itself\" is not a clear statement."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "(1) What are the motivations of grouping the prompts and how the groups of prompts correlate to the focus of the model generation? Why do the authors use the last layer rather than the embedding layer for clustering the prompts? \n\n(2) What is the mathematical formulation of the Isoscore? What does it used for in the experiments?\n\n(3) Several studies, such as PromptRobust [1], focus on evaluating prompt robustness. How does this approach offer advantages over existing methods?\n\n(4) The paper offers limited technical contributions and focuses narrowly on the classification task. Could the work be applied to generation task? \n\n[1] Zhu, K., Wang, J., Zhou, J., Wang, Z., Chen, H., Wang, Y., ... & Xie, X. (2023). Promptbench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper tries to address the intriguing question of how prompt representations relate to model performance, with a specific focus on how prompt formulation impacts performance robustness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates how prompts affect the intrinsic geometry of LLMs. The authors explore two research questions: (1) whether prompts alter the intrinsic dimensionality of representations, and (2) whether prompts can be grouped by their impact on model performance and vector representations using clustering methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's format is somewhat disorganized, with unexpected line breaks and misplaced punctuation. Also, some figures like figure 3 do not have illustrations, making it hard to understand."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Why are we interested in isotropy of the EOS token hidden states?\n\n- Can we use the isoscore as a selection criterion for picking a good prompt? This possibility is hinted at in the introduction \"Our objective is to establish a direct correlation between the prompts, latent representations, and the model performance.\" but not deeply explored. The results sound negative on this front---\"we cannot link the IsoScore to the performance of the model and prompt\". How does this finding relate to the cited works that show isotropy is a good property for embedding models?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper studies a general and interesting question of how LLM internals (such as hidden states) evolve/change when certain parts of the input (such as the prompt) change. Further understanding of this process would be useful for several downstream applications, such as (as shown in prior work) detecting adversarial prompt injection attacks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies how variations in the prompt change the distribution of hidden states in LLMs for zero-shot binary classification. The last hidden state representation is extracted at each layer for a variety of different prompts. The authors plot how the IsoScore of the hidden states evolves across layers. These hidden states are then clustered, based on which the authors conclude that \"[the] clustering tends to focus on other characteristics than the prompt itself.\""
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The biggest issue with the draft is the presentation of the results. There are numerous spelling, writing, and formatting errors, a subset of which I listed below. The first two paragraphs of the introduction are unnecessary background for the ICLR audience. The related work section on LLMs is not particularly related to the focus of this paper (understanding how prompts change internal model representations). That section is missing citations to LogitLens and its derivatives (e.g. https://arxiv.org/pdf/2303.08112), which also study how changes in the prompt change model internals. For example, that paper develops classifiers (based on features extracted from model internals) for detecting prompt injection. I.e., they already studied this paper's \"hypothesis 2\" that \"The geometrical characteristics are sufficiently discriminating to facilitate the grouping or separation of prompts and comprehend how the model processes them (HP2).\" The details of very relevant parts of the paper, such as a brief description of the IsoScore algorithm, are missing. Isotropy is never formally defined and the draft never explains why it is a desirable property for decoder-only models (beyond references to other work studying isotropy in the context of embedding models). The details of precisely how the IsoScore is computed are missing to the point where I'm not sure the results of the paper are reproducible.\n\nI think minor writing and presentation issues are not disqualifying, but in this case they are pervasive and make it difficult to judge the technical aspects of the paper on merit.\n\nSubset of writing issues:\n- L16 \"heuristics rules\" --> \"heuristic rules\"\n- L24 \"their impact\" --> \"its impact\"\n\n- first two paragraph of the intro are unnecessary bg for the ICLR audience.\n\n- L53 starts \"However\" but is not contradicting a previous point\n\n- L67 what does \"intrinsic\" mean here?\n- L67 double period\n- L69 missing period.\n\n- L74 \"LLMs representations leveraging prompting\" --> \"LLM representations that leverage prompting\"\n\n- L101 \"section 4\" --> \"Section 4\".\n\n- L123, 137, L286 have \\citet that should be \\citep\n\n- L174 \"information, Therefore\" --> \"information. Therefore,\"\n\n- L178 \"Dimensionnality\"\n\n- L184 \"PCA get some limitations\"\n\n- L214-L215 $k$ not in latex formatting\n\n- L284 \"We describe, as follow,\"\n\n- L290, L306 \"for instance :\", \"be written :\"\n\n- L303 \"we constraint the model\"\n\n- L317 \"First analyse results of isoscore are discussed\"\n\n- L322 \"Since IsoScore is a recent algorithms\"\n\n- L323 \"accross\"\n\n- L483 \"This allows us give answers\""
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Geometrical analysis of prompt impact on latent representation in LLMs"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024impact,\ntitle={Impact of Prompt on Latent Representations in {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=10kBEqYKKN},\nnote={under review}\n}"
},
"abstract": {
"value": "The effectiveness of zero-shot learning frameworks, particularly in Large Language Models (LLMs), has lately shown tremendous improvement. Nonetheless, zero-shot performance critically depends on the prompt quality. Scientific literature has been prolific in proposing methods to select, create, and evaluate prompts from a language or performance perspective, changing their phrasing or creating them following heuristics rules. While these approaches are intuitive, they are insufficient in unveiling the internal mechanisms of Large Language Models. In this work, we propose exploring the impact of prompts on the latent representations of auto-regressive transformer models considering a zero-shot setting. We focus on the geometrical properties of prompts' inner representation at different stages of the model. Experiments conducted give insights into how prompt characteristics influence the structure and distribution of vector representations in generative models. We focus on binary classification tasks on which prompting methods have shown robust performance and show that prompt formulation has indeed an influence on latent representation. However, their impact is dependent on the model family. Using clustering methods, we show that even though prompts are similar in natural language, surprisingly, their representations can differ. This is highly model-dependent, demonstrating the need for more precise analysis."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Explainability",
"Representation analysis",
"LLM",
"prompting",
"zero-shot"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/632c9289cdb828739212f20f08e123d679b624e5.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Impact of Prompt on Latent Representations in LLMs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
10vaHIOdEe | One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs | main | Active | Graph Pretraining; Cross-domain Graph Learning | learning on graphs and other geometries & topologies | 3;3;5;5 | 4;5;3;4 | 2;1;2;3 | 2;2;2;3 | 2;3;2;3 | 4 | 4 | 2 | 2.25 | 2.5 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The paper focuses on and attempts to address a crucial yet highly challenging problem in the field of graph analysis—constructing a Graph Foundation Model (GFM).\n2. On commonly used graph datasets, the model OMOG presented in the paper achieves relatively good performance in both zero-shot and few-shot settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes the OMOG (One Model for One Graph) framework, which advances graph learning by pre-training a unique model for each graph within a model bank. By creating a bank of expert models, each pre-trained for a specific dataset, OMOG selects relevant experts for inference using gate modules tailored to the target graph’s domain. This approach mitigates negative transfer issues common in multi-domain pre-training, and it performs effectively in zero-shot and few-shot tasks like link prediction and node classification, showing its potential for cross-domain graph tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper does not clearly explain the differences from other MOE-based methods, such as GraphAlign and AnyGraph. The approach seems very similar to these methods, leaving it unclear what specific advantages OMOG has over them and why it achieves improved performance.\n2. A core idea of OMOG is that each pre-training dataset requires a dedicated expert. This approach poses challenges for scalability: as the volume of pre-training data increases, the model grows linearly with the data, which is detrimental to pre-training efficiency.\n3. Why is the expert model specifically a Transformer? How would the performance change if other models, such as GNN, Graph Transformer, or MLP, were used instead? Additionally, prior to entering the experts, the features and structure are fused through SGC. Why couldn’t this fusion step be incorporated within the experts themselves? After all, different graphs may require varying levels of neighbor aggregation.\n4. The core part for achieving zero-shot in this paper relies on calculating the similarity between label embeddings and prediction embeddings to obtain the final label prediction. In fact, most models that work under few-shot settings can be adapted to zero-shot using a similar approach. Consequently, Table 1 lacks several relevant baselines, such as GraphAlign, GCOPE, and GraphMAE.\n5. Do all experts contribute to downstream performance improvements? In Figure 6, while the number of experts is adjusted, the full set of pre-training data is still used to train the gating mechanism. Could you vary the number of pre-training datasets to examine how this affects downstream performance?\n6. Although this paper discusses GFM, which should be applicable to various downstream tasks, there is still an absence of experiments on graph-level tasks, such as graph classification or graph regression.\n7. Some parts of the paper lack clarity. For example, in Section 4.5, the phrase ‘select 10 samples from each dataset’ is ambiguous. Does this refer to selecting 10 nodes, subgraphs, or something else?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the time complexity or running time of the proposed method given that the proposed method needs to pretrained the model on several graphs from different domains?\n2. In line 207, the authors mention that the node-level feature are randomly maked, resulting in two maked views. Are these two masked views mutually exclusive? For instance, given a 10-d feature matrix, the first masked view is generated by masked 5 features and the second view is generated by masking the rest 5 features?\n3. How do you ensure that the generator only generates a matrix to mask the domain irrelevant features such that the filter features are domain-related?\n4. The authors mention that one key issue is negative transfer. Since the knowledge in the proposed methods is extracted from graphs in different domains, it inevitably increases the probability of facing the negative transfer issue. How does the proposed method address this issue? The top-k strategy seems to only filter out the low confidant knowledge, while it can not directly alleviate the negative transfer issue as the irrelevant knowledge might be included in the top k expert models. \n5. I try to reproduce the experimental results, but there is no instruction and datasets available in the provide GitHub link. The readme seems to be empty. Could you provide the datasets and the instruction to reproduce the results?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The presentation of this paper is good and most parts of the paper are clear.\n2. This paper proposes a novel cross-domain pretraining framework.\n3. The experimental results demonstrate the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel cross-domain pretraining framework called \"one model for one graph,\" by pretraining a bank of expert models and using a gating function to choose a subset of experts to effectively integrate prior model knowledge."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The intuition of the generator and filter in Pretraining the gate module is not clear.\n2. The authors lack the discussion about the difference between the proposed method and the mixture-of-experts based methods. \n3. I am concerned about the negative transfer issue in the proposed method. Since the knowledge in the proposed methods is extracted from graphs in different domains (and most of them are irrelevant), it inevitably increases the probability of facing the negative transfer issue. How does the proposed method address this issue? The top-k strategy seems to only filter out the low confidant knowledge, while it can not directly alleviate the negative transfer issue as the irrelevant knowledge might be included in the top k expert models. \n4. I try to reproduce the experimental results, but there is no instruction and datasets available in the provide GitHub link."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see Weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tBy pretraining individual models for each graph dataset, OMOG effectively addresses the feature and structural heterogeneity found across diverse graphs.\n2.\tOMOG’s model bank allows the easy addition of new expert models without retraining the entire system, providing flexibility to expand the pretraining bank with new data and adapt quickly to novel domains."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes OMOG, an innovative cross-domain graph learning framework designed to enhance the adaptability and performance of Graph Neural Networks (GNNs) across various domains. By training a distinct expert model for each pre-training graph and employing adaptive gating functions during inference, OMOG dynamically selects relevant experts for unseen graphs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe primary motivation for adopting the “one model for one graph” approach is to alleviate the negative transfer limitations observed in the “one model for all graphs” method. It would be beneficial to provide comparisons and discussions on how this method differs from prior approaches that aim to reduce negative transfer through better pretraining data selection as [1].\n2.\tIt is recommended to identify which models, pretrained on specific graphs, are selected as the best match for various test graphs, with explanations for these selections. Additionally, I’m curious about whether pretraining data from different domains can contribute effectively or if only similar/same-domain data is more beneficial would strengthen the analysis. A case study is recommended to evaluate whether the proposed gating strategy actually mitigate issues stemming from conflicts across pre-training data from diverse domains.\n3.\tIt would be valuable to explore whether this pipeline could be adapted to other self-supervised learning approaches for pretraining graph-specific experts, and additional ablation studies are expected.\n4.\tOMOG’s design requires a separate model for each dataset and can result in a large model bank when many datasets are involved. Will it lead to high storage costs and maintenance overhead, especially in resource-constrained environments? A complexity analysis would also be helpful to understand OMOG’s computational feasibility at scale.\n\n[1] Better with less: a data-active perspective on pre-training graph neural networks. In NIPS '23."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What is the means of t in Equation (3)? How do you ensure that each expert has different parameters? Based on Equation (3), it seems that each expert has the same input and parameters, leading to identical outputs. What, then, is the purpose of having multiple experts?\n2. In line 241, the mask matrix seems to be replaced by an MLP. Why are the node embeddings transformed by the MLP considered to be negative embeddings?\n3. In Equation (4), is f_center the average output of a single graph across multiple experts, or the average output of multiple source graphs through their respective experts? How does this function as an anchor point for training the gate?\n4. What are the parameters of the Gate in Equation (5)? Why is the Gate used before the Expert?\n5. According to the inference process, an unseen graph activates the top k experts based on the highest correlation between each expert's output and the output average. How do you ensure that the pre-trained graph domain encompasses a sufficiently heterogeneous feature space to handle all potentially unseen graph domains?\n6. How is the total number of experts determined, especially when the number of graph domains during pre-training and testing is uncertain?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The approach of building an expert model for each graph intuitively addresses the issue of negative transfer in graph pre-training.\n2. The problem addressed in this paper is a crucial part of foundational research on graph models and is currently a topic of significant interest among researchers.\n3. The experiments involve graph data from multiple domains and include comparisons with recent, noteworthy methods in graph transfer learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes OMOG to pretrain one model for one graph in cross-domain transformation. OMOG uses SGC as message aggregation before experts learning, and uses the contrastive method for expert and gate training. In the inference stage, OMOG activates the top-k of experts to infer node labels. The experimental results proved the effectiveness of the method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. According to the method statement, graphs from different domains are required to have input spaces of the same size, which seems difficult to satisfy with real-world data.\n2. The construction of multiple experts for input graphs appears to be relatively naive, as it merely involves repeating several encoders and using similarity ranking with a central vector for averaging activation."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "The work proposes a new framework to boost knowledge transfer during cross-domain graph pretraining."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024one,\ntitle={One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=10vaHIOdEe},\nnote={under review}\n}"
},
"abstract": {
"value": "Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns, achieving successes across different domains. However, existing GNNs require careful domain-specific architecture designs and training from scratch on each dataset, leading to an expertise-intensive process with difficulty in generalizing across graphs from different domains. Therefore, it can be hard for practitioners to infer which GNN model can generalize well to graphs from their domains. To address this challenge, we propose a novel cross-domain pretraining framework, \"one model for one graph,\" which overcomes the limitations of previous approaches that failed to use a single GNN to capture diverse graph patterns across domains with significant gaps. Specifically, we pretrain a bank of expert models, with each one corresponding to a specific dataset. When inferring to a new graph, gating functions choose a subset of experts to effectively integrate prior model knowledge while avoiding negative transfer. Extensive experiments consistently demonstrate the superiority of our proposed method on both link prediction and node classification tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Graph Pretraining; Cross-domain Graph Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d68cf29a660bd027219ea8dc1f02e556d61af29d.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
11xgiMEI5o | OmniRe: Omni Urban Scene Reconstruction | main | Active | Gaussians Splatting;Neural Rendering;Dynamic Scene Reconstruction;Autonomous Driving | applications to robotics, autonomy, planning | 6;8;8 | 3;4;5 | 3;3;4 | 3;3;4 | 3;3;4 | 7.333333 | 4 | 3.333333 | 3.333333 | 3.333333 | 0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I wonder for items like causality and new synthesis if an approach more configurable could take place now that they have separated the pedestrians from the road. \n\nThinking of something like this \nWang, Cheng Yao, et al. \"CityLifeSim: A High-Fidelity Pedestrian and Vehicle Simulation with Complex Behaviors.\" 2022 IEEE 2nd International Conference on Intelligent Reality (ICIR). IEEE, 2022.\n\nWhere the data is later attached to an engine like Carla or airsim"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Comprehensive Dynamic Modeling: OmniRe can handle various actors in urban settings, unlike most previous methods that focus mainly on vehicles.\n\nScene Graphs and Gaussian Splatting: The system uses 3D Gaussian splatting for detailed scene and object rendering, including control over each object.\n\nHuman Behavior Simulation: Through SMPL modeling, OmniRe accurately reconstructs human motions, even in cluttered environments, enabling simulations of interactions between pedestrians and vehicles.\n\nState-of-the-Art Performance: Extensive testing on datasets like Waymo and others show OmniRe significantly outperforms existing methods in terms of visual fidelity and reconstruction accuracy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "OmniRe is a framework designed to create high-fidelity digital twins of dynamic urban scenes for simulations, particularly for applications in autonomous driving. OmniRe goes beyond vehicle modeling to support diverse dynamic actors like pedestrians and cyclists, enabling complex simulations that reflect real-world scenarios. It utilizes Gaussian Scene Graphs with multiple representations, allowing detailed and editable scene reconstructions with both rigid (e.g., vehicles) and non-rigid (e.g., pedestrians) actors."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are several limitations in OmniRe approach, which are correctly identified in the paper too.\n\nLighting Effects: OmniRe doesn’t model lighting variations explicitly. This can lead to visual inconsistencies when combining scene elements with differing lighting conditions, which may reduce realism in certain simulations. Addressing this would require additional modeling of lighting dynamics.\n\nNovel View Synthesis Limitations: OmniRe’s per-scene optimization approach struggles to generate satisfactory results when the camera view deviates significantly from the training trajectories. This could be a limitation for scenarios requiring a wide range of viewing angles, such as free navigation through the reconstructed scenes. The authors suggest incorporating data-driven priors or generative models as future work to address this.\n\nComputational Complexity: While the method achieves high-quality reconstructions, the complexity of the Gaussian Scene Graph and the joint optimization of multiple parameters (pose, appearance, etc.) require substantial computational resources. Training time per scene, though optimized for an RTX 4090 GPU, could still pose scalability issues for large datasets or continuous real-time simulation needs.\n\nChallenges with Real-Time Adaptability: The method’s reliance on SMPL modeling for human actors and per-node deformation fields, though effective, might introduce delays in real-time applications, particularly if scenes are highly dynamic or involve many non-rigid actors."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The paper is well-organized and easy to follow.\n2. The proposed method for in-the-wild human representation is straightforward yet crucial for driving scene reconstruction.\n3. Both quantitative and qualitative experiments effectively support the claims made in the introduction, with OmniRe achieving state-of-the-art results across various experimental settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces OmniRe, a novel approach for urban scene reconstruction that focuses on dynamic actors, including vehicles, pedestrians, and cyclists. OmniRe employs a Gaussian Scene Graph-based framework to model both static and dynamic objects. To address the limitations of previous methods in reconstructing non-rigid human models, OmniRe integrates SMPL for in-the-wild human representation, allowing for joint-level control. Extensive evaluations across several driving datasets demonstrate OmniRe's superior performance compared to baseline methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Handling Occlusions and Complex Dynamics: OmniRe addresses in-the-wild challenges, yet the performance might be limited by severe occlusions and overlapping actors in complex urban scenes. Further refinement or integration of advanced occlusion handling techniques could enhance reconstruction fidelity.\n2. Performance in Specific Urban Scenes: For specialized scenarios, such as highways (with fast-moving vehicles), nighttime environments, and adverse weather conditions, does OmniRe maintain high reconstruction quality under these challenging conditions?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. GS-based methods generally perform well in scenarios with static environments or low vehicle speeds, as demonstrated by most of the demos on the project page. However, I am curious about the reconstruction performance of this approach in situations where the ego vehicle is moving at higher speeds.\n2. I wonder about the computational cost of reconstructing a complete segment in the Waymo dataset, as the entire pipeline seems a bit complex.\n3. Why does it seem that the reconstruction quality of NuPlan is significantly worse than that of other datasets?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is clearly written, with illustrative figures that are easy to understand. The experiments are comprehensive.\n2. Modeling dynamic objects and simulating interactive behaviors are essential for closed-loop simulation in autonomous driving systems.\n3. This work is highly engineering-oriented and demonstrates impressive results. Additionally, the authors have committed to open-sourcing the code, which will have significant value in advancing autonomous driving simulation in the future."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces OmniRe, a comprehensive framework for dynamic urban scene reconstruction. It leverages neural scene graphs with Gaussian representations to unify the reconstruction of static backgrounds, moving vehicles, and non-rigidly dynamic actors. Additionally, it incorporates specialized designs for human modeling. The effectiveness of the approach is demonstrated across multiple datasets, showcasing superior performance in both reconstruction quality and novel view synthesis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "As mentioned by the authors in the limitations section, there are still two key shortcomings: 1. The lack of lighting modeling results in unnatural object insertions. 2. The synthesis of new viewpoints is constrained to the original trajectory, limiting the approach from achieving fully free-trajectory digital reconstruction."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024omnire,\ntitle={OmniRe: Omni Urban Scene Reconstruction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=11xgiMEI5o},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce OmniRe, a comprehensive system for efficiently creating high-fidelity digital twins of dynamic real-world scenes from on-device logs. Recent methods using neural fields or Gaussian Splatting primarily focus on vehicles, hindering a holistic framework for all dynamic foregrounds demanded by downstream applications, e.g., the simulation of human behavior. OmniRe extends beyond vehicle modeling to enable accurate, full-length reconstruction of diverse dynamic objects in urban scenes. Our approach builds scene graphs on 3DGS and constructs multiple Gaussian representations in canonical spaces that model various dynamic actors, including vehicles, pedestrians, cyclists, and others. OmniRe allows holistically reconstructing any dynamic object in the scene, enabling advanced simulations (~60 Hz) that include human-participated scenarios, such as pedestrian behavior simulation and human-vehicle interaction. This comprehensive simulation capability is unmatched by existing methods. Extensive evaluations on the Waymo dataset show that our approach outperforms prior state-of-the-art methods quantitatively and qualitatively by a large margin. We further extend our results to 5 additional popular driving datasets to demonstrate its generalizability on common urban scenes. We will make the code and data publicly available."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Gaussians Splatting",
"Neural Rendering",
"Dynamic Scene Reconstruction",
"Autonomous Driving"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1b615681674ee6446128cbfd3f1fe6afc4ce2693.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a6075c5ab9b491cfda578768d4b601e58dc6dc5b.zip"
},
"title": {
"value": "OmniRe: Omni Urban Scene Reconstruction"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
12B3jBTL0V | Modeling the Human Visual System: Comparative Insights from Response-Optimized and Task-Optimized Vision Models, Language Models, and different Readout Mechanisms | main | Active | Neuro AI;vision;deep neural networks;representations;fMRI encoding | applications to neuroscience & cognitive science | 3;3;5;6 | 3;4;3;4 | 2;3;3;3 | 2;3;3;4 | 2;1;3;2 | 4.25 | 3.5 | 2.75 | 3 | 2 | 0.19245 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "My primary suggestion for strengthening this paper is more or less solely for the authors to lean further into its greatest strength -- and to further explicate or justify the expert methods that differentiate this work from so many others attempting to tackle similar questions. Needless to say, perhaps, there are a number of ways the authors could do this. Below are a few different “options” that (I hope) seem reasonable given the constraints of the current review. The authors should feel free to choose however many / whichever of these seems most feasible or intuitive. For me, at least, almost any movement along these vectors is movement that would increase the value of this work for the target audience it seems intended for:\n- “Beyond accuracy”: The primary justification of the authors’ “novel readout mechanism” is the general increase in accuracy it provides over other methods. But the emphasis on accuracy as the primary advantage rings a bit hollow if a major part of the goal here is to gain insight into the structure of representation in biological cortex. There are many alternative ways (e.g. data augmentation, denoising, nonlinearities) -- even “hacky” ones (e.g. smaller cross-validation splits) that one could use to increase the predictive accuracy of model readout mechanisms. What demonstrable advantage does the “semantic spatial transformer” readout have over readout methods with respect to the theoretical questions at play here? (An example answer: “ordinal least squares or regularized regression-style readouts do not preserve spatial information -- therefore making certain areas of the brain appear to be more transform-invariant than they likely are in reality. Here’s a metric that operationalizes the probability of transform-invariance in the fMRI data without models. And here is a side-by-side comparison of the transform-invariance we estimate with ridge regression and our STN readout, respectively.”)\n- “Semantics” without language model confounds: There are a number of issues (again, beyond the scope of this paper, but nonetheless relevant) with the use of language models as predictive models of visual fMRI data -- including the fact the inputs to these models (tokenized words) are already proto-symbolic at the time of their initial injection into the candidate representational models that embed them (and are thus more abstract by default than the pixels injected into vision models); and also, an increasing “convergence” between vision and language models [1] that suggests a sort of “default” alignment between these systems attributable (most likely, it seems) to biases in their training data. How to get at questions of “semantics” without over-interpreting language models? One way, perhaps, is to reconsider the brain data itself: It has been suggested by [2] that certain kinds of transformations on the underlying brain data to be modeled (e.g. aggregating across multiple neurons in the case of electrophysiology) can instantiate properties like linearly-separable category boundaries not otherwise apparent without those transformations. If something like this is done on the features of vision models (e.g. averaging across multiple images of the same visual concept), do vision models begin to look more “semantic”? 
Perhaps the semantic spatial transformer could be used to unveil precisely the kinds of feature transformations that occur along the gradient from early to late visual cortex.\n- “densifying” the “single” captions: The authors claim that the localized semantic descriptions inherent to their “dense” captioning method unveil a noticeable midpoint between early, more spatiotopic representations and later, “globally” abstract representations. But is this really about the local tagging of an image’s subparts? Providing more comprehensive “single captions” of the full image that includes more extensive specification of details might close the gap between the dense captioning method and the global captioning -- but in a way that obviates the need to manually subdivide the image. In short, adding further detail (with and without explicitly spatial language) seems like an important control for downstream interpretation of this result.\n\n[1] Huh, M., Cheung, B., Wang, T., & Isola, P. (2024). The platonic representation hypothesis. arXiv preprint arXiv:2405.07987.\n[2] Vinken, K., Prince, J. S., Konkle, T., & Livingstone, M. S. (2023). The neural code for “face cells” is not face-specific. Science Advances, 9(35), eadg1736."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The use of deep neural network models to predict and understand the structure of representation in the biological visual system is a practice rife with heretofore unanswered, but deeply foundational questions as to how it should be done. Bucking a trend that far too often recycles canonical, but relatively unscrutinized methods to new models or new brain data, this submission is impressive not just for the fact that it tackles these questions head-on, but tackles so many of them simultaneously -- and does so (mostly) without losing the forest for the trees. For this alone, I applaud the authors and can recommend that this paper be accepted."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors present a comprehensive suite of analyses comparing vision / language DNN models to human fMRI data. Using a novel “readout” mechanism designed explicitly to account for space in the mapping of DNN embeddings to brain activity, the authors report localizing 3 sub-regions in the human visual cortex that respond differentially to spatial and semantic information."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My major concern here (and one that I admit is not fully within the authors control, but which clarifying updates or different narrative focus could nonetheless address) is the lingering doubt as to whether even these newer, more expertly designed methods actually do give us any meaningful new “insights” about the biological system they’re nominally designed to give us insights about. An overly reductionist summary of the “findings” of this analysis with respect to the human visual brain could well be that they simply provide more evidence for what is already a amply established gradient of increasingly “abstract” visual information from early (more view-dependent) areas (where smaller, localized receptive fields and retinotopy are the dominant representational motifs) to later (less view-dependent) visual areas (where -- depending on which side of the ventral / dorsal divide those areas are closer to -- you begin to get “representations” that evoke “object categories”, “navigational affordances”, or “conceptual semantics”). And while much debate does remain as to many of the details here, it seems (to me at least) that the existence of this gradient is more or less a common consensus."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses section for the questions."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "I think overall, the authors' thorough experimentation is the greatest strength of this paper:\n\n* **Systematic Comparison:** They do a reasonably systematic comparison, comparing a diverse range of models and readout mechanisms, which offers valuable insights.\n* **Novel Readout Mechanism:** They propose (in the context of fMRI encoders) a novel readout mechanism—the using the previously proposed spatial transformer with differentiable bilinear sampling —and show that it indeed improves prediction accuracy. This is a significant contribution in the context of fMRI encoders.\n* **Identification of Cortical Regions:** They identify three cortical regions, largely aligned with prior hypotheses about visual cortical functionality. This further strengthens existing theories.\n* **Good Discussion of Prior Work:** The authors do a reasonably good job in discussing prior fMRI encoder literature, effectively contextualizing their research. This demonstrates a solid understanding of the field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Using fMRI data from the Natural Scenes Dataset, the authors investigate how different encoder backbones and readout mechanisms predict neural responses in the human visual system. \n\nThey compare a range of models, including those optimized for visual recognition (e.g., AlexNet, ResNet), neural response prediction, and language or vision-language tasks (e.g., CLIP, MPNET). Furthermore, they explore various readout mechanisms to map model activations to fMRI signals, introducing a novel approach (in the context of fMRI encoders) called the Semantic Spatial Transformer readout.\n\nThey find that:\n\n1. Response-optimized models perform best in early visual areas (V1-V4): This suggests that these areas prioritize perceptual features not readily captured by linguistic descriptions or task-specific training.\n\n2. Task-optimized and language models do better in higher visual areas: This indicates a shift towards semantic processing in these regions. Large language model embeddings, particularly those using detailed contextual descriptions, prove highly effective.\n\n3. Semantic Spatial Transformer readout improves performance: This novel readout consistently outperforms existing methods like linear, Gaussian, and factorized readouts, boosting accuracy by 3-23%. This improvement stems from its ability to learn stimulus-specific modulations of receptive fields and feature maps."
},
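Editorial aside: for readers unfamiliar with spatial-transformer-style readouts, the sketch below illustrates the general idea described in the summary above: a small localization network predicts a stimulus-specific affine transform, the backbone feature map is warped and resampled with that transform, and per-voxel spatial and channel weights map the result to voxel responses. This is a hedged toy under assumed shapes; the class and parameter names (`SpatialTransformerReadout`, `n_voxels`, `grid_size`) are hypothetical, and it is not the submission's actual Semantic Spatial Transformer readout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformerReadout(nn.Module):
    """Toy spatial-transformer readout: predicts an affine warp of the feature
    map from the stimulus itself, then applies per-voxel spatial pooling and a
    linear map over channels. Hypothetical sketch, not the paper's model."""

    def __init__(self, in_channels: int, n_voxels: int, grid_size: int = 7):
        super().__init__()
        self.grid_size = grid_size
        # Small network that predicts 6 affine parameters per stimulus.
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(in_channels * 16, 6),
        )
        # Initialize the localization head to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))
        # Per-voxel spatial weights over the warped grid, plus channel weights.
        self.spatial = nn.Parameter(torch.randn(n_voxels, grid_size * grid_size) * 0.01)
        self.channel = nn.Parameter(torch.randn(n_voxels, in_channels) * 0.01)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) activations from some backbone layer.
        b, c, _, _ = feats.shape
        theta = self.loc(feats).view(b, 2, 3)            # stimulus-specific affine
        grid = F.affine_grid(theta, (b, c, self.grid_size, self.grid_size),
                             align_corners=False)
        warped = F.grid_sample(feats, grid, align_corners=False)  # (B, C, g, g)
        warped = warped.flatten(2)                        # (B, C, g*g)
        # "Where": pool with per-voxel spatial weights; "what": channel weights.
        pooled = torch.einsum('bcs,vs->bvc', warped, self.spatial)
        return (pooled * self.channel).sum(-1)            # (B, n_voxels)
```

The point of the design is that the sampling grid, and hence each voxel's effective receptive field, becomes a function of the stimulus rather than being fixed, which is one way to read the "stimulus-specific modulations of receptive fields and feature maps" mentioned above.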
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The presentation of this paper could be *significantly* improved. I think the presentation quality of this paper does not match the quality of other ICLR papers I am currently reviewing or have reviewed in past years, or ICLR papers that have been accepted in prior years.\n\nThe figures are unclear and lack consistent formatting, notation often unexplained, and significant wasted space.\n\nMy specific concerns are below:\n1. Figure 1 -> This figure is very cluttered and very confusing. Why are the subfigure legends (A, and B) placed so randomly?\n2. Figure 1 -> What is the 'What' weight matrix, where does it come from? It is obvious if you are familiar with fMRI encoder literature, but describing the interaction between the weight matrix and green using a tensor product $\\otimes$ seems very misleading. This tensor product is also used in the upper part as well, which is deeply misleading, and instead should be expressed as a transposed matrix product. These symbols have a meaning and without redefinition this is pretty confusing.\n3. Figure 1 -> Why is the task optimized framework placed together with the response optimized framework without any clarification of which is which? \n4. Figure 1 -> Why is the dense captioning output also part of the response optimized framework with a rotation equivariant network?\n5. Where are the dense captions coming from? `Line 210` says `An image of size 424 ∗ 424 is divided into grids of size 53 ∗ 53`, but does not otherwise clarify the origin of the captions anywhere in the paper. What model is used here?\n6. `Line 224` please avoid vector matrix products. This assumes row vectors which is not standard.\n7. `Line 237` what are the shapes of the output of function $V_c(x,y)$? Could you describe this sampling in more detail?\n8. Spatial transformer section, `Line 269` it is unclear what $\\theta_2$ is. This is a really important part of the paper and a key claimed contribution. Could the authors mathematically clarify how $\\theta_2$ plays a role?\n9. `Line 286`, what is `AT`?\n10. `Line 297` are the models not voxel wise models?\n11. Figure 2A, the bottom descriptions of the models is very confusing. How is the \"Language Encoder\" being used with \"Semantic Spatial Transformer Readouts\"? \n12. Figure 2B, please use a proper categorical color map for discrete data.\n13. Minor -- Figure 3B, extra space before optimized\n14. Figure 3, how are you defining \"better predicted by Task Optimized\"? Is this the best of ResNet50 or AlexNet? Why use these models when CLIP has been shown to be the best model of visual responses?\n15. All flatmaps have significant wasted space.\n16. Lack of analysis of the spatial transformer networks. While the paper claims STNs as a significant contribution, there is no visualization of the affine parameters for each voxel. Do the affine parameters focus on population receptive fields that are provided in NSD?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "-\tWhy are the models in Figure 2b shown on a continuous color bar?\n-\tWhy were only a subset of the NSD subjects used?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-\tCompare response optimized and task optimized models directly\n-\tCompared many different model-brain mapping functions\n- Present a new metric for model-brain mapping"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper compares task-optimized vision and language models to brain response-optimized networks on the natural scenes fMRI dataset (NSD), and introduce a new readout mechanism for model-brain mapping based on spatial receptive field and semantic content. They find that response-optimized networks provide the best match to early visual regions, while task optimized vision-language models better match high-level visual regions. They also find their readout mechanism using spatial transformers improves model to brain mapping (though only marginally).\n\nThe comparison between response and task optimized models is interesting, but overall the results provide only a marginal advance in our understanding and computational models of visual cortex. The spatial transformer network readout is novel, but it is not entirely clear what the value of this contribution is. It provides slightly better performance over other methods, but is much more complex (involves integrating an additional Resnet-50 module to weight the channels of each model) and provides only minimal quantitative gains over much simpler methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-\tThe models tested varied along many factors making it difficult to draw strong conclusions about the role of response vs. task-optimization or vision vs. language in model’s performance. For response vs. task, these points could have been made more compelling by training the same architecture on both task and neural responses\n-\tThe biggest issue is that the major findings of this paper have been shown previously (also on the same dataset). Prior work with vision and language models (e.g., Doerig et al.) that showed semantic content is more important for high than low-level visual regions, and Khosla & Wehbe which introduced the response optimized model used here and already showed it predicted NSD responses.\n-\tThe utility of the STN was not well motivated. Figure 2 suggests that the different readout mechanisms provide largely similar results with only minor quantitative differences. This slight boost is unsurprising given how much more complex the spatial transformer network is compared to other readouts."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Why did the authors only use 4 of the 8 participants from NSD? \n\nFigure 1A is confusing. I don’t follow how each of the different readout methods are shown here. Better labels would be very helpful."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Literature review is comprehensive, and overall, the paper is clearly written. \n\nThe paper is not highly original building on prior readout methods, and recent work conducting large-scale benchmarking of AI models against the brain. However, the addition of dense image captions to extract representations from the language models is a nice contribution to the literature on LM alignment with visual cortex. I have some reservations mentioned below that are impacting my score, but if addressed, I think the paper may constitute a meaningful contribution to the literature."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors aim to evaluate the extent to which LLMs (based on single or dense image captions) predict activity in high-level visual cortex relative to ImageNet-pretrained vision models or neural-response optimized vision models. They introduce a novel readout method that shows higher performance in predicting neural responses relative to linear regression and two other readout methods. They find three distinct regions of visual cortex that are better predicted by vision models, LLMs based on dense captioned images, or LLMs for single image captions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper needs to better justify why a different readout method is necessary. The authors state that the predominant readout method is linear ridge regression, which has high computational and memory demands, but representational similarity analysis (RSA) is nearly as commonly used in the human literature and is less computationally intensive (Kriegeskorte et al, 2008). More importantly, however, the reason that the NeuroAI field tends to rely on linear regression as a readout is based on the logic that we are interested in evaluating the similarity of the representations up to a linear transformation (in representation space) without introducing non-linearities in the readout method. The authors should provide better justification for why a novel readout method is needed within that framework. \n\nThe Semantic Spatial Transformer has greater improvement relative to Ridge regression for the vision model than the language model (Figure 2), and vision models are found to better predict more voxels in high-level visual cortex using the Semantic Spatial Transformer readout (Figure 4) than when using Ridge regression (Figure 5). To me, it is a problem that the readout method does not provide uniform improvements across models. This suggests to me that the readout method is introducing a bias in the conclusions. However, I welcome rebuttal on why this logic is faulty. \n\nFigure 4D shows three regions that respond more to vision model, single captions language model, or dense caption language model, but this is binarizing the difference between each pair of models. However, the claims of these three distinct regions would be strengthened by showing that high-level visual voxels, for example, have additional explained variance by the single caption language model after accounting for the variance explained by vision and dense caption models."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a novel Spatial Transformer readout method that enhances accuracy (3-23%), identify 3 brain regions responsive to varying information content; and analyze various neural network models to evaluate their performance several brain regions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024modeling,\ntitle={Modeling the Human Visual System: Comparative Insights from Response-Optimized and Task-Optimized Vision Models, Language Models, and different Readout Mechanisms},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=12B3jBTL0V},\nnote={under review}\n}"
},
"abstract": {
"value": "Over the past decade, predictive modeling of neural responses in the primate visual system has advanced significantly, largely driven by various deep neural network approaches. These include models optimized directly for visual recognition, cross-modal alignment through contrastive objectives, neural response prediction from scratch, and large language model embeddings. Likewise, different readout mechanisms—ranging from fully linear to spatial-feature factorized methods—have been explored for mapping network activations to neural responses. Despite the diversity of these approaches, it remains unclear which method performs best across different visual regions. In this study, we systematically compare these approaches for modeling the human visual system and investigate alternative strategies to improve response predictions. Our findings reveal that for early to mid-level visual areas, response-optimized models with visual inputs offer superior prediction accuracy, while for higher visual regions, embeddings from Large Language Models (LLMs) based on detailed contextual descriptions of images and task optimized models pretrained on large vision datasets provide the best fit. Through comparative analysis of these modeling approaches, we identified three distinct regions in the visual cortex: one sensitive primarily to perceptual features of the input that are not captured by linguistic descriptions, another attuned to fine-grained visual details representing semantic information, and a third responsive to abstract, global meanings aligned with linguistic content. We also highlight the critical role of readout mechanisms, proposing a novel scheme that modulates receptive fields and feature maps based on semantic content, resulting in an accuracy boost of 3-23\\% over existing SOTAs for all models and brain regions. Together, these findings offer key insights into building more precise models of the visual system."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Neuro AI",
"vision",
"deep neural networks",
"representations",
"fMRI encoding"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4eb3563f0a891e67101a29eda4a56ba2099e55af.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/35dfabd8266a7daedc718bfe74b77f52e23be8a9.zip"
},
"title": {
"value": "Modeling the Human Visual System: Comparative Insights from Response-Optimized and Task-Optimized Vision Models, Language Models, and different Readout Mechanisms"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
12gMsxpu4G | OCS+: Improving PTQ with Outlier Translation | main | Active | Post Training Quantization | optimization | 3;5;5 | 5;5;4 | 3;3;3 | 1;3;2 | 3;2;3 | 4.333333 | 4.666667 | 3 | 2 | 2.666667 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In Table 1, are those 3 results attained from the same weight parameters and clipping range? Otherwise, are the clipping ranges of the results different?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper highlights the problem of previous work, OCS.\n- The paper shows performance gain compared to OCS.\n- Experimental design and multifaceted analysis of the proposal are commendable.\n- Under the situations that the paper assumes (e.g., the target hardware and bit settings are fixed), significant performance improvements are expected with additional computation overhead."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Outliers while quantization are one of the representative elements that harm the performance of neural networks.\nWhat makes things worse is that target hardware is usually fixed and hard to change settings such as bit precision.\nThus, if the outlier problem becomes severe and the performance of the quantized model deteriorates, it may be difficult to resolve the problem by using a higher bit.\n\nThere already exists a prior work that tries to resolve this problem which is called OCS.\nBy halving the activation values of outlier channels and duplicating those channels as additional channels, OCS successfully alleviates the outlier problem.\nHowever, OCS causes another problem; rounding error of inliers.\n\nTo mitigate both problems (clipping error of outliers and rounding error of inliers) simultaneously, the paper proposes OCS+.\nThe paper adopts translation instead of halving operation.\nBy doing so, it achieves the same functional effect equivalent to using one more bit with moderate computational overhead.\nWith various experiments, the paper shows that OCS+ outperforms other previous works, even with the same number of channels added by OCS+."
},
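Editorial aside: as background for the halve-and-duplicate mechanism of OCS described in the summary above, here is a minimal NumPy sketch for a single linear layer. The outlier channel's activation is halved and appended as an extra channel, and the matching weight column is duplicated, so the pre-quantization output is unchanged while the outlier's per-channel magnitude is halved. The function name and the simple max-magnitude outlier criterion are placeholder assumptions, not the OCS or OCS+ reference implementation.

```python
import numpy as np

def ocs_split_channels(x, W, num_split=1):
    """Halve-and-duplicate the largest-magnitude activation channels (OCS-style).

    x : (C_in,)        activation vector for one sample
    W : (C_out, C_in)  weights of the following linear layer
    Returns (x_new, W_new) such that W_new @ x_new == W @ x, but the split
    channels now carry half their original magnitude, easing clipping-based
    quantization of outliers.
    """
    x_new, W_new = x.copy(), W.copy()
    # Toy outlier criterion: channels with the largest absolute activation.
    outliers = np.argsort(np.abs(x))[-num_split:]
    for c in outliers:
        x_new[c] = x[c] / 2.0                      # halve the outlier channel
        x_new = np.append(x_new, x[c] / 2.0)       # ...and duplicate it
        W_new = np.hstack([W_new, W[:, c:c + 1]])  # duplicate matching weight column
    return x_new, W_new

# Quick check that the layer output is preserved before quantization.
rng = np.random.default_rng(0)
x = rng.normal(size=8)
x[3] = 10.0                                        # plant an outlier
W = rng.normal(size=(4, 8))
x2, W2 = ocs_split_channels(x, W, num_split=1)
assert np.allclose(W @ x, W2 @ x2)
```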
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Several problems that OCS already has can be the same problems of OCS+.\n - The computational overhead due to additional channels \n - The purpose of quantization is to run a large model on limited resources. Therefore, additional computational overhead induced by OCS+ has a worse impact on hardware that is hard to adjust bit precision, which is the target of OCS+.\n- The proportion of channels suffering the outlier problem and outlier channel ID can differ according to inputs. Analysis of channel sensitivity with different inputs can be a good experiment."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The proposed OCS+ preserves the outlier activations without sacrificing the precision of regular inliers, which allows for a theoretical increase in presentational power from b-bits to (b+1)-bits under the same hardware constraints.\n2.OCS+ is based on the offline mathematical transformations, which does not require additional training or hardware re-design. \n3.Experimental results show OCS+ achieves an improvement over the previous methods both on CNN and ViTs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a post-training quantization method named OCS+, which aimed at saving outlier activation without affecting the regular inliers. Based on the offline transformation with weight and activation, it does not require additional training. Experimental results show that the proposed method works for both CNN and ViTs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.The presentation need to be improved, such as Page 4.\n2.The actual speed-up need to be evaluated since OCS+ introduces additional computational costs.\n3.Some typos: such as \nLine 93 and Line 99 OCS+-.\nLine 36 quant parameters\nLine 253 translate down?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am wondering what the point of extremely low-bit quantization is, such as 2-bit. Does this extremely low-bit make any practical use? Could the author provide some insight into this?\n\nI currently turn to reject this paper for its novelty and experiment comparison."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper provided comprehensive experimental results. \n\n2. This paper has clear logic that makes their method easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduced OCS+, a PTQ for solving the outlier in the quantization process. First, this paper demonstrated that outliers are non-trivial. Motivated by OCS, the original version of this paper, OCS+ is introduced. OCS+ duplicates the important (outlier) activation channel and corresponding weight. Thus, OCS+ achieves high performance with the costs of more computation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of necessary fair comparison. Actually, OCS+ introduces higher computation costs since it increases the weight and activations by 50%. It added 50% extra computation. This makes OCS+ less attractive since the compared methods do not incur extra computation. Also, some related descriptions such as the analysis of FLOP (compared with previous methods)are lacking in the paper.\n\n2. How to select the important activation channels missed in the paper?\n\n3. The implementation details of Table 7 are missing. What are the platform and running software?\n\n4. Running time comparison with previous methods that do not incur extra computation such as BRECQ?\n\n5. According to the paper, the important activation channels seem to be selected on the fly. Does this operation incur more inference time costs?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ocs,\ntitle={{OCS}+: Improving {PTQ} with Outlier Translation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=12gMsxpu4G},\nnote={under review}\n}"
},
"abstract": {
"value": "Post-training quantization (PTQ) is an effective technique for accelerating DNN model inference, where activations typically follow a bell-shaped distribution. Since commodity hardware employs a linear quantization grid and limited quantization levels, prior PTQs optimize a clipping threshold to minimize overall quantization error, which excludes outliers from the bell-shaped data. However, outliers are non-trivial for low-bit and lightweight models. Thus OCS (Zhao et al.,2019) proposed to save outliers by halving and duplicating. However, in activation quantization, the original OCS sacrifices the precision of the regular inliers, leading to severe accuracy degradation. To address this, we propose OCS+ to save outlier activation without affecting the regular inliers. Consequently, OCS+ theoretically achieves one-bit higher representation under the predefined bitwidth hardware. OCS+ is based on offline mathematical transformation, thus it does not require additional training or re-design works on hardware. Experiments over CNNs and ViTs demonstrate OCS+ significantly outperforms OCS and help improve current PTQ SOTAs, e.g., OCS+ improves the current SOTAs by 12.73\\% in Acc@1 for W2A2 MobileNet-v2. The code will be released."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Post Training Quantization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7d125a4597d2307e999bab894df2a745e250865f.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "OCS+: Improving PTQ with Outlier Translation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
12iSWNLDzj | Text To Stealthy Adversarial Face Masks | main | Withdraw | facial recognition;adversarial accessories;diffusion models;adversarial benchmarks | alignment, fairness, safety, privacy, and societal considerations | Ben Lewis;Thomas Moyse;James Parkinson;Elizabeth Telford;Callum Whitfield;Ranko Lazic | ~Ben_Lewis1;~Thomas_Moyse1;~James_Parkinson1;~Elizabeth_Telford1;~Callum_Whitfield1;~Ranko_Lazic1 | 3;3;3;3 | 5;4;4;4 | 2;2;3;1 | 2;2;2;2 | 1;2;3;2 | 3 | 4.25 | 2 | 2 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We thank the reviewers for their work."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. It is recommended that the authors create an adversarial mask to test the attack effectiveness of DAFR in the real world.\n2. It is suggested that the authors add an ablation study of 𝐻: conduct an experiment comparing performance with and without optimizing over multiple images of the attacker.\n3. Does text prompt have an effect on the results?\n4. This paper should include a framework diagram to illustrate the DAFR method, which would enhance readability."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. This paper addresses an important issue, namely the security of face recognition models, which is meaningful for both academia and industry.\n2. The paper leverages the ability of diffusion models to generate realistic and natural images to create adversarial face masks, which is an meaningful design."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on adversarial attacks on face recognition models using face masks, claiming that existing methods lack in attack stealthiness. It proposes a diffusion-based adversarial face mask generation method, titled DAFR. DAFR controls the generation of the diffusion model using adversarial guidance. Additionally, this paper builds a benchmark to evaluate the performance of existing adversarial accessories."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of real-world attack experiments. This paper focuses on the stealthiness of adversarial attacks; however, discussing stealthiness without considering real-world attack implementation seems meaningless. For instance, digital adversarial attack methods can use subtle perturbations undetectable to the human eye, achieving stealthy attacks. The reason existing methods lack stealthiness is due to the need for higher perturbation intensity to achieve real-world attacks. Discussing attack stealthiness without addressing real-world implementation is thus unconvincing. \n2. Evaluation of stealthiness. Stealthiness is a subjective evaluation dimension, and it is unclear if the quantitative metric CMMD used in this paper matches human perception. I suggest adding a user study to support the paper's claims of improved stealthiness.\n3. Lack of ablation experiments. There is insufficient analysis of the effectiveness of key components. For example, to improve robustness, the authors optimize the adversarial pattern on a set of images of the attacker, 𝐻; however, it remains unclear if this design actually improves robustness.\n4. Missing essential references: Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition, CVPR 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "It is an interesting work on a relevant problem. The method performs well over previous baselines and including 3D rendering bridges the gap towards real world applicability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method to generate adversarial face masks that can fool face recognition systems in a white box setting. They specifically borrow the adversarial attack framework from AdvDiff."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The attack is a simple adaptation of previous approaches i.e. AdvDiff. \n2. What is R in the Algorithm 1? Should it be $h_i$ instead of $h_n$? \n3. The generation process is firstly done in a white-box setting, while this in itself is not problematic the authors have not included any results on transferability to see whether it can be used for another model? \n4. The attack is not robust against state-of-the-art facial recognition models such as ArcFace. The stealth to attack success rate trade-off for R100 is quite large. SASMask achieves almost the same T-CMMD and M-CMMD scores with significantly higher SR. \n5. Line 208: Which dataset was the ArcFace model trained on? \n6. The test set consisting of 300 images is not statistically significant. \n7. A white mask has a SR 1000 of 0.7083 against the fine-tuned F100 model. Is this scenario even worth studying? This simply means that the model does not perform well to begin with. \n8. Is the attack success impacted by the facial pose?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "How does DAFR perform in the physical world (3D robustness) and against black-box models (attack transferability)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Applying text-to-image diffusion models to optimizing adversarial accessories is new.\n2. Experiments are conducted on multiple datasets, models, text-guided styles, and with various metrics.\n3. The proposed attack outperforms the baselines regarding stealthiness. \n4. The attempt to form a benchmark is promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to use diffusion models to generate adversarial accessories (a mask) for attacking facial recognition. Two important attack properties, i.e., robustness (resilient to changes in viewing angles and environmental conditions) and stealthiness (do not attract suspicion by, for example, incorporating obvious facial features), are considered."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- This paper has claimed to improve the attack in both robustness and stealthiness. However, the evaluations in the main body are limited to the stealthiness (and the attack strength in the common 2D digital setting). Although some robustness results are added to the Appendix (Table 6), those results show that the proposed attack is worse than the baselines. In addition, no physical, 3D experiments are considered. \n- As acknowledged by the authors, the idea of using diffusion models to generate adversarial examples is not new. However, it is not clearly stated what the specific (technical) challenge is for generating adversarial masks, compared to other forms of perturbations, such as the makeups. Without identifying such challenges, the technical novelty of this paper is not clear.\n- It is interesting to report the results for different text prompts. However, since different prompts lead to dramatically different attack performance (see Table 4), it would be necessary to understand the relation between the prompt and the performance and finally learn to select the optimal one.\n- Presentation issues:\nThis paper contains lots of technical details but lacks an overview of the whole attack pipeline (maybe in the form of a flowchart).\nBefore introducing their method, the paper should include foundational knowledge related to adversarial masks and clarify the meanings of the mathematical symbols. For instance, the meanings of x and I in Algorithm 1 are not defined."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The literature review is thorough and effectively encompasses key works relevant to the field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose Diffusion Attack against Facial Recognition (DAFR), a method capable of generating robust and stealthy face masks for dodging recognition systems using the guidance of textual prompts to determine the style. A new benchmark is also presented, the Face Accessory Attack Benchmark (FAAB) which includes a set of standardized tests and procedures to evaluate the performance of accessories."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Motivation: While I recognize the significance of prior works highlighting face masks as a potential threat vector during the COVID-19 pandemic, their prevalence has markedly declined since then. As a result, I find it challenging to accept the notion that face masks, irrespective of printed patterns, can currently be considered a genuinely stealthy accessory.\n2. Writing quality: the submission is mainly held back by the writing quality and lack of clarity in most sections. These are mainly focused around the method section (Section 2), which seems a bit unprofessional, lacking formulaic descriptions of relevant preliminaries and the attack’s full pipeline (e.g., how the components interact with each other) and overall extremely unorganized. I suggest the authors reorganize (e.g., split the section into subsections, each describing the main steps of the attack) and improve (formalize the entire pipeline) this section for better clarity of the novelties they propose.\n3. Novelty: The proposed method appears to lack substantial originality, as it primarily combines existing approaches adapted to this specific task, without introducing significant additional contributions. For instance, the mask rendering technique for facial images is adopted from Zolfi et al. (2022), while the diffusion-based adversarial attack approach is drawn from Dai et al. (2023), with adjustments made for the face recognition domain. Although the authors mention the use of textual prompts to control style, the method section lacks a clear methodological explanation of this aspect, including details on how prompts are selected and their impact on the attack's objectives.\n4. Evaluation: While the authors assess their attack against various baselines and across multiple models and datasets, several critical aspects remain unaddressed:\n- Shallow Analysis – The authors predominantly present empirical results without delving into deeper insights, examining edge cases, or discussing unexpected findings. \n- Missing Ablation Studies – For instance, Equation 1 claims that a scaling function is superior to a static value, yet no comparative analysis is provided to validate this assertion. \n- Lack of Transferability Experiments – A crucial aspect of adversarial attacks is their transferability to models beyond those on which they were trained. Testing transferability could offer valuable insights into the practicality of the proposed attack. \n- Absence of Real-World Experiments – Although digital evaluations form the core of the paper, an attack using a practical accessory would benefit from real-world testing to demonstrate efficacy beyond digital scenarios. \n- Implementation Details – The section includes an excess of low-level details (specific steps for each decision), which detracts from the key information. I recommend prioritizing content between the main paper and the appendix, allowing for more space for additional experiments, such as those in Sections D, E, and F of the appendix. \n- Results – The results across most models do not consistently outperform the baselines (notably SASMask). Ideally, the proposed method should exhibit at least comparable performance to baselines while enhancing stealthiness. The current configuration seems to prioritize stealthiness at the expense of attack success.\n\nMinor comments:\n1. Line 89 – CMMD is not introduced until line 446 (not even a reference to it).\n2. Line 125 – unclear why $f$ is mentioned here.\n3. Line 137 – unclear what $R$ is.\n4. 
Line 301 – table 3 is too far from where it was mentioned in the text. Maybe the table could be split to better fit in the flow of the text."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a diffusion based adversarial face mask attack, and a benchmark for evaluating face mask attacks."
},
"_bibtex": {
"value": "@misc{\nlewis2024text,\ntitle={Text To Stealthy Adversarial Face Masks},\nauthor={Ben Lewis and Thomas Moyse and James Parkinson and Elizabeth Telford and Callum Whitfield and Ranko Lazic},\nyear={2024},\nurl={https://openreview.net/forum?id=12iSWNLDzj}\n}"
},
"abstract": {
"value": "Recent studies have demonstrated that modern facial recognition systems, which are based on deep neural networks, are vulnerable to adversarial attacks, including the use of accessories, makeup patterns, or precision lighting. However, developing attacks that are both robust (resilient to changes in viewing angles and environmental conditions) and stealthy (do not attract suspicion by, for example, incorporating obvious facial features) remains a significant challenge. In this context, we introduce a novel diffusion-based method (DAFR) capable of generating robust and stealthy face masks for dodging recognition systems (where the system fails to identify the attacker). Specifically our approach is capable of producing high-fidelity printable textures using the guidance of textual prompts to determine the style. This method can also be adapted for impersonation purposes, where the system misidentifies the attacker as a specific other individual. Finally, we address a gap in the existing literature by presenting a comprehensive benchmark (FAAB) for evaluating adversarial accessories in three dimensions, assessing their robustness and stealthiness."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Ben_Lewis1",
"~Thomas_Moyse1",
"~James_Parkinson1",
"~Elizabeth_Telford1",
"~Callum_Whitfield1",
"~Ranko_Lazic1"
]
},
"authors": {
"value": [
"Ben Lewis",
"Thomas Moyse",
"James Parkinson",
"Elizabeth Telford",
"Callum Whitfield",
"Ranko Lazic"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"facial recognition",
"adversarial accessories",
"diffusion models",
"adversarial benchmarks"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "lewis|text_to_stealthy_adversarial_face_masks"
},
"pdf": {
"value": "/pdf/4433a437b437976e7c40bdd6156a2df66dd7f0eb.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/1e755401d06d1021e2d8e8fa0f3b8e040ba8d2e9.zip"
},
"title": {
"value": "Text To Stealthy Adversarial Face Masks"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
13G5KXm98a | Voronoi Tessellation-based Confidence Decision Boundary Visualization to Enhance Understanding of Active Learning | main | Active | Decision Boundary;Visualization;Active Learning | other topics in machine learning (i.e., none of the above) | 3;3;5;6 | 4;4;3;3 | 2;2;2;3 | 2;2;3;3 | 3;2;2;3 | 4.25 | 3.5 | 2.25 | 2.5 | 2.5 | -0.96225 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I don't quite understand how the 2D feature map is constructed. It is said that it directly comes from the neural network model, but how? Could the authors provide more details?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. By using Voronoi tessellation and introducing a ridge confidence metric, the paper provides a more detailed and interpretable visualization of decision boundaries, offering insights into the complexity and uncertainty of boundary regions.\n1. The authors compare multiple AL sampling methods (e.g., entropy, margin, BALD dropout, KMeans) on distinct datasets, providing valuable insights into the behaviors and trade-offs of each approach in different scenarios.\n1. The visualization effectively demonstrates how models handle uncertainty due to limited training samples and noisy regions, which is beneficial for identifying optimal query strategies for different types of uncertainties.\n1. While focused on active learning, the proposed visualization technique has potential utility in other areas of machine learning that require understanding complex decision boundaries in high-dimensional spaces."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a Voronoi tessellation-based visualization method for active learning (AL) to address the limitations of traditional visualizations, which often lack detail in uncertain or sparse boundary areas. By using a ridge confidence metric with Voronoi cells, the approach offers a clearer, more informative view of decision boundaries, aiding in analyzing different AL sampling strategies. Tested on MNIST and CIFAR-10 datasets, the method reveals unique sampling behaviors across entropy, margin, and diversity-based strategies, helping clarify how each handles uncertainty and impacts model learning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This paper lacks a review of related work. It's important to examine the literature in the related fields, especially other visualization methods developed for active learning.\n1. While the work in the paper is valuable, it also lacks of the baseline methods. The paper demonstrates the effectiveness of Voronoi tessellation and ridge confidence in active learning throught several case studies, but it does not prove how much better it is compared with other visualization methods. I suggest a quantitative analysis with different visualization methods, like a user study.\n1. Section 3 is over lengthy. The authors should consider breaking it down. Subsection 3.1 and Subsection 3.2 should be independent sections.\n1. The discussion of the 2D feature map is missing. How much does space distortion affect the Voronoi tesselletion construction? The neighbors in the original space are not always the same in the feature space. How reliable is the decision boundary in this case? Since everything is visualized in the feature map, it's essential to give a comprehensive discussion on the feature map itself.\n1. The visualization strategies are informative, but the implementation in the plots are really hard to follow. I have several suggestions for improvement:\n - Figure 2, 3, 5, 6, 7, and 8 are visualization results, but they are too small and dense to read, which is fatal for a visualization paper. At least the authors should put high resolution images in the appendix.\n - In figure 5, the colormap of confidence interval should be different from that of the classes. I suggest different levels of gray to show the confidence interval\n - In the visualization plots, the scatter points, Voronoi ridges, and cells are crowded and overlapping, causing severe visual clutter. Actually, it's not necessary to show all the information in a single plot at once. It only makes the plot massive. One way to display a large quantity of information is to enable user interactions. For example, enable users to choose classes they are interested in, add a slider to show different levels of confidence, brush to zoom in on a decision boundary. \n - The Error Detection visualization results are interesting. It gives more information on how the active learning model behaves. I suggest putting multiple training iterations of Error Detection plots in the appendix to visualize its evolution over time. \n\nWhile I believe this work makes good contribution, I'm afraid the authors cannot resolve my concerns in a reasonable time. Therefore, I recommend a weak reject."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- It is unclear why one can assess the confidence of the predicted ridges with the representative Voronoi center points $\\mathbf{p_i}$ and $\\mathbf{p_j}$ (equation 2). Although each ridge can be represented by its representative point $\\mathbf{p}$, the predicted probabilities close to the decision boundary may differ strongly from their center. Could you please clarify why using the point $\\mathbf{p}$ gives an accurate prediction of the confidence?\n- How did you decide on which snapshots to visualize?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper sheds light on the crucial topic of getting insights into machine learning algorithms and provides visual results that highlight apparent differences between sampling strategies.\n- The chosen approach of this paper is generalizable to various machine learning models performing prediction tasks.\n- The authors provide extensive visualizations for each sampling strategy and compare their characteristics with the help of side-by-side visualizations.\n- The paper examines a high number of sampling strategies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The proposed work deals with the challenging issue of getting insights into sampling strategies of active learning. Addressing this issue, the authors developed a novel visualization approach: The feature space of the samples is projected to a 2D plane and segmented into Voronoi cells. Ridges of the Voronoi cells are used to construct the decision boundary. Additionally, the confidence of this decision boundary is assessed by leveraging predicted probabilities of samples for each ridge. Thus, the confidence of the decision boundary varies locally. The authors apply the confidence decision boundary to visualize the learning and sampling behavior of eight active learning strategies on MNIST and CIFAR-10. Using various other visualizations, they qualitatively depict the characteristics of the sampling strategies and conclude that uncertainty caused by insufficient training samples may be effectively tackled by sampling. In contrast, uncertainty in noisy regions may be hard to tackle."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper contains errors in writing. For example, the sentence \"Since points on either side of the predicted ridges belong to different predicted classes, the confidence of the predictions vary, different sections of the decision boundary carry varying degrees of informative value due to differences in prediction confidence.\" in lines 110-113 should be corrected. \n- Visualizations lack proper color encoding. For example, the decision boundary, a continuous value, is represented with a qualitative color scale (see Figure 5).\n- It remains unclear which snapshots of the feature space were taken for visual elaboration. For example, it is not described why the third round of training was used to inspect the decision boundary in Figure 5. Thus, the authors' conclusions are only based on single visual findings, making them hard to retrace.\n- A clear methodology for selecting snapshots for inspection would improve the clarity and usefulness of the authors' approach. Furthermore, the evaluation relies on observations of a single training run - a more extensive evaluation with a higher number of training runs per sampling strategy would be a good starting point.\n- Some visualizations lack proper annotations, making them hard to understand. Figure 4 especially needs annotations on what the different colors encode. Similarly, which training rounds do we see in Figure 2?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "One aspect I am particularly concerned about is the scalability of this visualization. Currently, the data volume is relatively small, but with a large number of categories, how can the scalability of this visualization be ensured? I would suggest that the authors provide more discussion and experimental results on this point. When the dataset is large, the entire Voronoi becomes complex, with very small cells and densely packed ridges that obscure each other, making it visually unfriendly. I doubt whether users other than authors can derive similar insights from such dense and non-interactive visualizations. The authors could provide quantitative and qualitative user surveys to demonstrate the effectiveness and usability of their method. The authors could consider further optimizations, such as clustering before segmentation or refining the segmentation rules, rather than just using Voronoi tessellation.\nThe visualization method seems to lack interactivity. Providing an interactive tool would enhance the appeal of this work. Additionally, the authors do not explicitly mention which dimensionality reduction method was used. Different dimensionality reduction techniques may affect subsequent Voronoi and decision boundary construction. The authors should provide explanations and evaluations of this aspect."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "I’m glad to see that some work at the ICLR conference focuses on data visualization. The insights provided by data visualization can better help users understand some of the underlying aspects behind the models. Additionally, visualization on the existing dataset is able to provide better insights and differentiation capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the confidence decision boundary visualization method, which provides an interpretable comparison of various active learning strategies, yielding valuable insights. The experimental design and discussion are thorough.\nI’m glad to see that some work at the ICLR conference focuses on data visualization. The insights provided by data visualization can better help users understand some of the underlying aspects behind the models. Additionally, visualization on the existing dataset is able to provide better insights and differentiation capabilities.\n\nOne aspect I am particularly concerned about is the scalability of this visualization. Currently, the data volume is relatively small, but with a large number of categories, how can the scalability of this visualization be ensured? I would suggest that the authors provide more discussion and experimental results on this point. When the dataset is large, the entire Voronoi becomes complex, with very small cells and densely packed ridges that obscure each other, making it visually unfriendly. I doubt whether users other than authors can derive similar insights from such dense and non-interactive visualizations. The authors could provide quantitative and qualitative user surveys to demonstrate the effectiveness and usability of their method. The authors could consider further optimizations, such as clustering before segmentation or refining the segmentation rules, rather than just using Voronoi tessellation.\nThe visualization method seems to lack interactivity. Providing an interactive tool would enhance the appeal of this work. Additionally, the authors do not explicitly mention which dimensionality reduction method was used. Different dimensionality reduction techniques may affect subsequent Voronoi and decision boundary construction. The authors should provide explanations and evaluations of this aspect."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One aspect I am particularly concerned about is the scalability of this visualization. Currently, the data volume is relatively small, but with a large number of categories, how can the scalability of this visualization be ensured? I would suggest that the authors provide more discussion and experimental results on this point."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Decision Boundary Representation: How do the fixed Voronoi tessellation results relate to the updated feature vectors during training? Can you explain how this approach ensures an accurate representation of the model's decision boundaries throughout the training process?\n2. Generalizability: How does the method work on more complicated datasets, such as ImageNet (with more classes) and CUB-200-2011 (fine-grained classification tasks)\n2. Boundary Effectiveness: Can you elaborate on how the Voronoi tessellation approach adds value beyond insights obtainable from scatterplots? What specific contributions does it provide in terms of understanding boundary effectiveness that traditional methods do not?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Comprehensive Comparison of Active Learning Strategies: The evaluation includes a systematic comparison of various active learning strategies. These potentially contribute to the broader discourse on active learning methods and offer practical guidance for future research in this area.\n2. Effective Use of Visualization Techniques: The paper incorporates visualization methods to facilitate the analysis of data and models. This makes the analysis and findings more accessible and intuitive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a Voronoi tessellation-based method to visualize decision boundaries in active learning. By highlighting ridges between cells with differing predictions, this method provides a clearer view of the decision boundaries. In addition, it introduces a ridge confidence metric to quantify prediction uncertainty. Experiments on MNIST and CIFAR-10 illustrate the effectiveness of this approach for analyzing various active learning strategies, providing insights into the behavior and effectiveness of different sampling techniques."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Concerns about Decision Boundary Generation**. From the figures, the projection results and the Voronoi tesselation results appear to be fixed across multiple rounds. However, as feature vectors update during training, the projection results and the decision boundaries should also be updated. It is essential to clarify how well these fixed tessellations with the updated predictions capture the model's behavior. In addition, the use of 2D projections to represent high-dimensional decision boundaries raises concerns, as results can vary significantly based on the selected projection method and parameters.\n2. **Insufficient evaluation**. While this paper compares different active learning strategies and summarizes some insights, this evaluation is not sufficient and rigorous. On the one hand, the evaluation is only conducted on MNIST and CIFAR-10, which only contain a few classes with substantial differences between them. It remains uncertain how the proposed method performs with more classes or finer distinctions. On the other hand, the evaluation of boundary effectiveness is inadequate. Many insights, such as oversampling on the boundary region in the uncertainty-based methods, can be identified from scatterplots alone, making the Voronoi approach seem unnecessary.\n3. **Omission of Relevant Literature**. The paper overlooks significant works that utilize Voronoi tessellation for visualizing sample boundaries. For instance, Chen et al. [*] use it to visualize the samples and boundaries in the analysis of label propagation. Including such references would enhance the contextual foundation of the research.\n[*] Interactive Graph Construction for Graph-Based Semi-Supervised Learning. TVCG 2021."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024voronoi,\ntitle={Voronoi Tessellation-based Confidence Decision Boundary Visualization to Enhance Understanding of Active Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=13G5KXm98a},\nnote={under review}\n}"
},
"abstract": {
"value": "The current visualizations used in active learning are quite basic, making it difficult for researchers to effectively observe and analyze the practical performance of different sampling strategies. To address this issue, we introduce a more informative visual evaluation approach observation metric, the confidence decision boundary, which is generated through Voronoi tessellation and evaluated using ridge confidence, a newly proposed measure. This approach enhances the information content in boundary regions where data distribution is sparse. Based on the confidence decision boundary, we conducted a series of visualizations to evaluate various active learning query strategies. These visualizations are able to capture nuanced variations regarding how models based on different strategies perform sampling, the characteristics of points selected by various methods, and the impact of newly sampled points on the model. This enables a much deeper understanding of the underlying mechanisms of existing query strategies."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Decision Boundary",
"Visualization",
"Active Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/750db6dff0610ce08dd03633f85849e0c286beeb.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Voronoi Tessellation-based Confidence Decision Boundary Visualization to Enhance Understanding of Active Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
13PclvlVBa | EEGMamba: Bidirectional State Space Model with Mixture of Experts for EEG Multi-task Classification | main | Active | EEG Classification;State Space Models;Mixture of Experts;Brain-Computer Interfaces | applications to neuroscience & cognitive science | 3;3;5;6;6 | 4;4;4;4;4 | 2;2;3;3;3 | 2;2;2;3;3 | 3;3;3;3;3 | 4.6 | 4 | 2.6 | 2.4 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could you elaborate on the choice of baseline models? How does EEGMamba’s performance compare with multi-task EEG models specifically designed for generalization across diverse tasks?\n\nHow does the ST-Adaptive module capture interactions between channels and temporal features, which are critical in EEG data? Have you considered modeling these dependencies more explicitly?\n\nHow would EEGMamba perform on out-of-distribution tasks or tasks not seen during training? Have you tested its ability to generalize to entirely new EEG task types?\n\nCould you clarify how this work builds on or differs from previous applications of SSMs or Mamba models in EEG classification? Adding this context could help readers better understand the specific contributions of EEGMamba.\n\nCan you evaluate this on longer contexts to better get a sense for the necessity and usefulness for Mamba?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "EEGMamba achieves state-of-the-art performance across multiple EEG tasks, demonstrating robust multi-task capability. Its bidirectional Mamba blocks enable efficient handling of long sequences, avoiding the high memory demands of Transformer models. The flexible ST-Adaptive module supports EEG signals of varied lengths and channels, and the task-aware Mixture of Experts (MoE) enhances task-specific accuracy, reducing interference across tasks. This adaptability and strong generalization across datasets position EEGMamba as a versatile model for EEG classification."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents EEGMamba, a model tailored for multi-task EEG classification. It aims to overcome the limitations of existing models in terms of computational complexity and generalization across tasks with varying signal lengths and channel counts. EEGMamba integrates three main innovations: the Spatio-Temporal-Adaptive (ST-Adaptive) module for unified feature extraction, Bidirectional Mamba to balance accuracy and computational efficiency, and a Task-aware Mixture of Experts (MoE) to handle the differences and similarities across EEG tasks. Evaluated across eight public datasets and covering seizure detection, emotion recognition, sleep stage classification, and motor imagery, EEGMamba demonstrates strong performance and adaptability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "EEGMamba’s evaluation lacks comparison the newest baseline models (notably LaBRaM from ICLR 2024), which limits the interpretability of its reported gains. The Spatio-Temporal-Adaptive (ST-Adaptive) module, while flexible for varying signal lengths and channel counts, may not adequately capture complex channel-time dependencies crucial in EEG data. Furthermore, training the ST module on a per-task basis could lead to representations that are too specialized, reducing generalizability, particularly for out-of-distribution (OOD) tasks, which were not included in the study. Additionally, the paper lacks a related works section detailing prior applications of SSMs or Mamba in EEG classification, making it difficult to contextualize EEGMamba’s specific advancements in this space."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In Section 4.2's experimental comparison, was the model trained using all the datasets at once? Could there be interactions between the datasets? Would training the model on each dataset separately improve the performance metrics? Have you conducted any experiments on this? I'm quite interested."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "### **Originality**\nThe originality of EEGMamba lies in its novel approach to EEG classification through the integration of bidirectional Mamba, Spatio-Temporal-Adaptive (ST-Adaptive) modules, and task-aware Mixture of Experts (MoE). The innovative combination of these elements addresses the computational complexity and variability in signal length and channels, which are critical challenges in EEG classification. This creativity, particularly in applying multitask learning to EEG signals, represents a significant advancement in the field.\n\n### **Quality**\nThe quality of this work is demonstrated through rigorous evaluations on multiple publicly available EEG datasets. EEGMamba's superior performance in seizure detection, emotion recognition, sleep quality, and emotion recovery highlights its robustness and effectiveness. The model's ability to handle long sequences and adapt to different feature extraction tasks while maintaining high accuracy and fast inference speed.\n\n### **Clarity**\nThe authors provide a detailed description of the EEGMamba architecture and its components. The step-by-step explanation of how the bidirectional Mamba, ST-Adaptive module, and task-aware MoE are integrated and function together contributes to a well-structured and coherent narrative.\n\n### **Significance**\nThe model's design, which allows for the efficient capture of both task-specific and general features, has the potential to transform how EEG data is processed and analyzed. This can lead to more accurate and comprehensive analyses of complex brain signal data, benefiting various applications such as medical diagnostics and cognitive research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces EEGMamba, a novel EEG classification network designed for multitask learning. It integrates spatiotemporal adaptive (ST-Adaptive) modules, bidirectional Mamba, and a mixture of experts (MoE) approach. This addresses challenges in EEG classification, such as computational complexity and variations in signal length and channels. The model efficiently handles long sequences and adapts to feature extraction while capturing both task-specific and general features. Evaluations on multiple public EEG datasets demonstrate that EEGMamba outperforms existing models in seizure detection, emotion recognition, sleep quality, and emotion recovery."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Overall, the experiments in this paper are comprehensive; however, in Section 4.1, the discussion on single-channel and multi-channel models only compares memory usage and inference speed, without evaluating the impact of multi-channel models on performance metrics. Additionally, the t-SNE visualization lacks layer-by-layer analysis of the model's influence on clustering results, which does not adequately demonstrate the feature extraction capability of each layer. It is recommended to visualize feature clustering by module."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Plz go and check weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "EEGMamba achieves superior memory efficiency and faster inference speed than traditional transformer-based models, especially on longer EEG sequences, thus proving its practical value. \n\nThe proposed model demonstrates an approach to handling EEG data of varying lengths and channels, incorporating a class token for temporal adaptability and a task-aware MoE to distinguish task-specific features."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper \"EEGMAMBA: Bidirectional State Space Model with Mixture of Experts for EEG Multi-Task Classification\" presents EEGMamba, a multi-task learning framework for EEG classification tasks, addressing challenges related to signal length and channel variability. EEGMamba integrates a Spatio-Temporal-Adaptive (ST-Adaptive) module, bidirectional Mamba blocks, and a task-aware Mixture of Experts (MoE) to enhance adaptability and task-specific processing across diverse EEG datasets. The ST-Adaptive module standardizes data of various lengths and channel numbers, while the bidirectional Mamba captures temporal dependencies in EEG sequences, and the task-aware MoE module selects experts based on task, enhancing classification accuracy and generalization. Tested on eight datasets spanning four task types (seizure detection, emotion recognition, sleep stage classification, and motor imagery), EEGMamba achieves state-of-the-art results, demonstrating superior efficiency, memory usage, and accuracy across tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited Contribution: The author claims that EEGmamba is the first universal EEG classification network to effectively implement multi-task learning for EEG applications. However, several established methods are available that can be applied to multi-task EEG learning, as cited in [1][2][3].\n\n2. Inappropriate Comparisons: The choice of baselines for comparison lacks relevance. For instance, using AttnSleep [3] as a baseline for seizure detection/emotion recognition is incongruous, as it is specifically designed for sleep staging. Additionally, the author does not include multi-task learning methods as baselines, instead comparing against smaller models tailored to individual tasks. For fair assessment, each specific task should be compared to the most relevant model designed for that purpose. To evaluate cross-task capabilities, comparisons should involve multi-task learning methods like those in [1][2][3][5][6].\n\n3. Efficiency Evaluation: The author should provide quantitative evidence demonstrating the proposed method's efficiency advantages over previous approaches.\n\n\n\n[1] Jiang, W., Zhao, L., & Lu, B. L. Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI. In The Twelfth International Conference on Learning Representations.\n\n[2] Chen, Y., Ren, K., Song, K., Wang, Y., Wang, Y., Li, D., & Qiu, L. (2024). EEGFormer: Towards transferable and interpretable large-scale EEG foundation model. arXiv preprint arXiv:2401.10278.\n\n[3] Jiang, W. B., Wang, Y., Lu, B. L., & Li, D. (2024). NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals. arXiv preprint arXiv:2409.00101.\n\n[4] Eldele, E., Chen, Z., Liu, C., Wu, M., Kwoh, C. K., Li, X., & Guan, C. (2021). An attention-based deep learning approach for sleep stage classification with single-channel EEG. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 809-818.\n\n[5] Zhang, D., Yuan, Z., Yang, Y., Chen, J., Wang, J., & Li, Y. (2024). Brant: Foundation model for intracranial neural signal. Advances in Neural Information Processing Systems, 36.\n\n[6] Wang, C., Subramaniam, V., Yaari, A. U., Kreiman, G., Katz, B., Cases, I., & Barbu, A. (2023). BrainBERT: Self-supervised representation learning for intracranial recordings. arXiv preprint arXiv:2302.14367."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Introduction: (line 49-50):\n\nThe authors claim that CNNs are unable to handle long EEG signals, citing three papers: (Sakhavi etal.,2018), (Thuwajit etal.,2021) and (Schirrmeister etal.,2017). However, none of these studies provide evidence to support such a conclusion. In fact, Thuwajit et al. (2021) proposed EEGWavenet, which utilizes a multiscale CNN-based spatiotemporal feature extraction module. This module gradually increases its receptive field to match the length of the input EEG signal, indicating that CNNs can handle the global input sequences in certain contexts. While the authors' claim is not entirely substantiated, it raises interesting questions that I would like to see addressed:\n\na. For EEG classification, what defines a \"long\" signal in terms of sample size? At what point does a short receptive field cause CNN performance to degrade?\n\nb. Is global sequence modelling truly necessary for long-term EEG signals? From a neuroscience perspective, how much does brain activity from, say, 10 seconds ago, affect the current state? Which types of tasks specifically require such long-term modelling?\n\n2. section 2.2 ST-Adaptive Module:\n\nThe authors propose a spatio-temporal-adaptive module that transforms arbitrary EEG inputs into a uniform feature dimension, as depicted in Figure 3. However, I have several concerns:\n\na. Scalability to New Datasets: Is this approach scalable to new datasets once the model is trained? Given that different Conv1d layers are used to transform varying numbers of EEG channels into a common hidden state in the Spatial-Adaptive Convolution, how flexible is this method for new tasks where the number of channels may differ from the training datasets? Even if the number of channels is the same, the channels themselves vary. It appears that the model learns dataset-specific spatial filters rather than a truly global spatial representation, which may limit its generalizability to new datasets, which is a big issue for a backbone model, since people would like to use it on different tasks later on.\n\nb. Tokenize Layer: In the tokenize layer, two branches of CNN are used: one with a short kernel (15) and another with a long kernel (49). Was this choice based on experimental results? Why are there only two branches, and why were kernel sizes 15 and 49 specifically chosen? Since there’s no discussion of this configuration in the ablation study, it’s unclear whether this is the most optimal setup.\n\n3. section 3.2 Data Division:\n\nIn this study, the 5-fold cross-validation experiment was implemented as leave-subject-out, which is not a common approach in the BCI field due to the significant subject variability. This evaluation approach faces two challenges: (1) developing a population model trained on a group of subjects, and (2) addressing subject transfer by evaluating the model on unseen subjects. This raises several concerns:\n\na. Subject Variability Impact: In tasks with minimal subject variability, such as seizure detection and sleep stage detection, the classification results are high, as shown in Figure 1. However, tasks like motor imagery and other BCI tasks exhibit high subject variability, which severely impacts model performance. This is evident in the low accuracies achieved on the BCI-IV-2a task (44%) and the SEED task (57.2%), which are insufficient for practical BCI applications. 
A discussion on this performance discrepancy is necessary to demonstrate how the proposed model addresses these challenges and whether it shows superiority in domains with high subject variability.\n\nb. Performance Discrepancy with Benchmark Models: Many benchmark models, such as EEG Conformer (Song et al., 2022), were not designed to handle population transfer. EEG Conformer, for instance, was trained in a subject-specific manner and achieved state-of-the-art performance on the same BCI-IV-2a (78.66%) and SEED (95.30%) datasets. However, in this study, the EEG Conformer’s performance dropped significantly to 35.2% and below 50%, respectively. Could the authors explain this stark performance difference? Is it primarily due to the leave-subject-out evaluation setting? If so, would the proposed EEGMamba model also retain its high performance if evaluated in a subject-specific manner like EEG Conformer?\n\nc. Use of Separate Test Sets: Datasets like BCI-IV-2a include a separate test set specifically intended for evaluation. In this study, it is unclear whether the authors utilized this test set. Since many studies report performance on this designated test set, comparing the classification accuracy reported here with those from other studies may not be straightforward. Clarification on whether the official test set was used, or if an alternative test split was applied, is needed to ensure a fair comparison with prior work.\n\n4. section 4.1 Single-Task EEGMamba Performance Comparison\n\nLine 377: The authors mention the memory and inference time challenges of transformer models when handling long sequences. Similar to my previous concern, what qualifies as a \"long\" sequence in terms of signal length for transformers to encounter this bottleneck? Is there a real-world application where such long sequences need to be tackled? Many EEG tasks are only a few seconds in length, so it would be helpful to clarify the practical need for handling significantly longer sequences.\n\n5. section 4.2 EEGMamba for EEG Multi-Task Classification\n\nWhile the idea of training a single model to perform well across multiple datasets is interesting, its practical application is unclear. From Figure 1, the single-task models appear to achieve similar or even better performance on 5 out of the 8 benchmarked datasets (Siena, BCI-IV-2a, Shu, SEED, CHB-MIT). This raises a few questions:\n\na. Benefit of Multi-Task Training: Is there a demonstrable benefit to multi-task training? Specifically, is there any statistical difference between the performance of the single-task model and the multi-task model on the datasets? It would be helpful to clarify whether multi-task training consistently improves performance or if its benefits are marginal.\n\n6. Lines 454-456:\n\nThe authors argue that the model only needs to be trained once. However, the analysis was performed offline on 8 selected public datasets solely. To assess whether this model can be applied in real-world scenarios, the following questions need to be addressed:\n\na. Cross-Session Issues: In practice, many EEG-based applications require short calibration sessions to adjust for cross-session variability in subject-dependent models. Does the proposed model also require such calibration? If calibration is needed, would this involve further training or fine-tuning of the pre-trained model? In the case of the multi-task model, would calibration require data from other tasks as well, or could it be done independently?\n\nb. 
Generalization to New Datasets: How well does the model perform on a new dataset that belongs to one of the pre-trained tasks? If the model only needs to be trained once, does this mean it can be directly applied to similar tasks without additional training? For example, how would the model handle BCI-IV-2B, which has only 3 channels but is still a motor imagery task, or another sleep stage classification dataset? If yes, how would the model manage inconsistencies in the number of channels, and what performance can be expected? If not, wouldn’t this imply that researchers would still need to retrain the model, making it not different from other models?\n\nb. Domain-Specific Advantages: If multi-task training is advantageous, is this benefit domain-specific? For example, should a researcher developing a motor imagery decoder expect better results from multi-task training? How can researchers determine whether a single-task or multi-task approach will yield better performance for a specific domain or task?\n\nc. Practicality of Multi-Task Training: In practice, most researchers focus on specific tasks and experiments, collecting data for single tasks. Does this multi-task approach suggest that researchers in the future should also record additional tasks or rely on public datasets to improve performance? Or is there a scenario where it would make sense for a model to simultaneously classify motor imagery, emotion, sleep stages, and seizure events? More guidance on when and why to use multi-task training would be valuable.\n\n7. section 4.4 Ablation Study:\n\nIn Figure 6, it appears that the performances of the different configurations, except for the single-directional Mamba, are quite similar. Is there a statistically significant difference between these configurations? It would be helpful to include a discussion on whether the variations in performance are meaningful or simply within the margin of error.\n\n8. Conclusion lines 526-527:\n\nThe authors claim that EEGMamba is the first model to truly implement multi-task learning for EEG applications. However, I am curious about how EEGMamba differs from a simpler approach, such as using a shared backbone model as a feature extractor with separate classification heads for different tasks, as done in [1]. Could the authors clarify the key differences between the proposed model and such an approach using MoE?\n\n[1] Xie Y, Wang K, Meng J, Yue J, Meng L, Yi W, Jung TP, Xu M, Ming D. Cross-dataset transfer learning for motor imagery signal classification via multi-task learning and pre-training. J Neural Eng. 2023 Oct 20;20(5). doi: 10.1088/1741-2552/acfe9c. PMID: 37774694.\n\n9. Conclusion line 531:\n\nThe authors claim that the proposed model can better learn the commonalities among EEG signals from different tasks. However, what specific commonalities are being referred to? Is there any interpretation or evidence to support this claim? 
It would be helpful to understand how these commonalities are identified and whether the model offers any insight into them.\n\nAddressing the above questions could help clarify specific weaknesses and improve the overall impact of the study, more specifically:\n\nQuestion 5: Answering this question would clarify Weakness Point 1, providing more insight into the motivation for multi-task training.\n\nQuestions 1, 2(a), and 6: Answering these would address Weakness Points 2 and 3, which would significantly enhance the study's potential impact and score by clarifying practical applications and the model’s performance on motor imagery datasets.\n\nQuestions 4 and 7: These would address Weakness Point 4 by supporting the comparisons with a statistical validation."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors successfully introduce the new Mamba architecture to EEG decoding, achieving strong results across 8 public datasets.\n\n2. Mamba demonstrates memory efficiency and fast inference, making it advantageous for real-world applications.\n\n3. The model effectively addresses the multi-task classification problem, showcasing the feasibility of training a single model for multiple downstream tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce EEGMamba, a model designed for multi-task EEG classification. EEGMamba consists of an ST-Adaptive module that learns spatial filters for each task, transforming EEG inputs with varying channel counts into a uniform feature space. The module then tokenizes the data using both small and large kernels to capture short-term and long-term features. These tokens are processed by a BiMamba backbone with task-aware Mixture of Experts (MoE) layers, enabling the model to capture both task-specific and shared features. Finally, each task has a dedicated classification head.\n\nEEGMamba allows for multi-task EEG classification in a single training session. The authors evaluated the model’s performance against five other models across eight public datasets, covering tasks such as epilepsy detection, sleep stage classification, emotion recognition, and motor imagery. The experiments used a 5-fold cross-validation approach, where specific subjects were reserved for the test set in each fold. Results show that EEGMamba outperformed competing models under this evaluation, and it demonstrated efficient memory usage and inference speed, particularly with long-sequence data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation for multi-task training is somewhat unclear. The authors should clarify why multi-task training is necessary and how it can be beneficial for specific applications.\n\n2. Certain questions remain unanswered regarding the practical use of the proposed model, which limits its potential impact. For instance, under the current evaluation scheme, it is unclear how the model would generalize to new datasets or subjects, particularly when new datasets vary in channel count. Does introducing a new dataset require retraining or additional multi-\n\ntask training even if it is single-task? Additionally, what would be the training strategy for developing a subject-dependent model within the multi-task framework if only one task is available from the subject? To address this, the authors could consider testing the model on an additional dataset to evaluate whether the pre-trained model can transfer effectively. If it cannot, are there still advantages to the EEGMamba multi-task approach?\n\n3. The authors should discuss the relatively low classification accuracy on the motor imagery datasets, which is currently too low for practical motor imagery classification. If this is due to the evaluation setting, additional experiments with per-subject models should be conducted to assess performance, and these results should be compared to other models.\n\n4. Statistical tests are needed to confirm whether the observed differences between models or modules are significant. For instance, statistical analysis should be conducted for the results in Figure 1 comparing EEGMamba and Single-task EEGMamba, as well as for the different ablation models in Figure 6."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could you please specify how the EEG tasks are encoded into task tokens?\n2. I noticed that the DEAP dataset was utilized in this study, but only data from four electrodes were selected. What is the rationale behind this choice? Additionally, regarding the binary classification on the DEAP dataset, does it pertain to valence, arousal, liking, or dominance? Furthermore, in Table 1, the authors provide the optimal segment lengths for all datasets. What references were used to determine these durations? I observed that, unlike most existing works that employ shorter segments of 1s, 2s, or 4s for the DEAP and SEED datasets, this paper utilizes segment lengths of 60s and 20s. What is the reasoning for selecting such unusually long data lengths?\n3. This work employs five-fold cross-validation for data partitioning, which does not appear to be a commonly used EEG dataset partitioning method. What is the rationale or basis for this choice?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper presents a novel multi-task EEG classification model that incorporates the Bidirectional Mamba and MoE modules, offering a structurally innovative approach. Experiments were conducted on eight datasets, covering a wide range of downstream EEG tasks. The paper is clearly written and easy to follow, with a particularly well-explained method section."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To address the issues of quadratic computational complexity in handling long-term dependencies in EEG classification models, and the lack of cross-task generalization as most models are designed for single tasks, this paper proposes the EEGMamba model. The model introduces the ST-Adaptive module to address the problem of varying EEG signal lengths and channel numbers across different datasets. It also proposes the Bidirectional Mamba to improve computational efficiency and incorporates the MoE (Mixture of Experts) module to simultaneously capture both the commonalities and differences in EEG signals. Relevant experiments were conducted on eight datasets across four EEG tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper proposes a multi-task EEG classification model aimed at performing classification across multiple datasets (downstream tasks) using a single model. However, the experimental results do not demonstrate a clear advantage over other single-task models, making it difficult to convincingly argue for the benefits and necessity of the multi-task approach. Additionally, the ablation study results for the various modules lack significant and consistent differences, making it challenging to prove the effectiveness of each module. Moreover, the motivation for using the Bidirectional Mamba is insufficiently justified. The ST-Adaptive method, proposed to address the issue of varying EEG channel numbers across datasets, is essentially a module integration, lacking in innovation.\n1. In this work, the ST-Adaptive module applies a different one-dimensional convolution for each task, transforming the varying original channel numbers into a fixed number of $D$ channels. However, this approach does not seem to fully achieve the concept of being \"adaptive.\" If a new task emerges with a number of channels outside the predefined range of $C_0$ to $C_N$ in the model, how would this be addressed? This raises concerns about the generalizability and flexibility of the current method in handling unforeseen tasks with different channel configurations.\n2. The purpose of using Mamba in this paper is to reduce computational complexity, conserve resources, and improve efficiency. Generally, the ultimate goal of improving efficiency in EEG models is to achieve real-time recognition. Given that EEG signals, like natural language, possess temporal characteristics, theoretically, a unidirectional Mamba would better meet this requirement, as it only requires past data rather than future information. The motivation for employing Mamba, especially Bidirectional Mamba, in this work is not sufficiently clear or logically aligned with this objective.\n3. In Section 4.1, the authors present the performance of the single-task EEGMamba and other transformer-based models concerning memory usage and inference speed as the sequence length increases. However, it is not clearly evident from Figure 4 that EEGMamba demonstrates a significant advantage over the other methods. Additionally, while it is acknowledged that memory usage increases and inference speed decreases with longer sequence lengths for both single-channel and multi-channel scenarios, the authors do not specify the actual sequence lengths employed in the current eight EEG tasks. This omission lacks a reference point, making it difficult to ascertain whether EEGMamba exhibits superior performance. Furthermore, as indicated in Appendix I, EEGNet appears to perform better in terms of memory usage and inference speed, while also demonstrating commendable performance across various datasets. This further undermines the effectiveness of the proposed method in this paper.\n4. In Section 4.2, the authors present the performance of EEGMamba in multi-task classification. However, I observe that EEGMamba does not demonstrate a significant advantage over the baseline models, and in many datasets, its performance is inferior to that of other single-task models, indicating that the multi-task approach does not facilitate mutual enhancement among tasks. Therefore, I question the necessity of employing a single model to address multiple tasks rather than utilizing several smaller models for different tasks, which might yield better results. 
The existing findings lack persuasiveness and do not adequately support the motivations for this work or the claims regarding the strong generalization capabilities of the proposed multi-task model.\n5. Regarding Figure 5, I observe that, apart from the sleep stage task, where there is a considerable variation in the activation probabilities of different experts, the activation probabilities for the other tasks across the eight experts are generally quite uniform. This uniformity makes it challenging to demonstrate a particular preference for any specific expert. How can the effectiveness and necessity of the MoE approach be substantiated under these circumstances?\n6. Figure 6 presents the ablation study results for the various modules of EEGMamba. However, the data indicate that these modules appear to have minimal discernible impact, as the experimental results across the Siena, CHB-MIT, and SHHS datasets show little variation. This raises concerns regarding the ability to substantiate the effectiveness of each module."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024eegmamba,\ntitle={{EEGM}amba: Bidirectional State Space Model with Mixture of Experts for {EEG} Multi-task Classification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=13PclvlVBa},\nnote={under review}\n}"
},
"abstract": {
"value": "In recent years, with the development of deep learning, electroencephalogram (EEG) classification networks have achieved certain progress. Transformer-based models can perform well in capturing long-term dependencies in EEG signals. However, their quadratic computational complexity poses a substantial computational challenge. Moreover, most EEG classification models are only suitable for single tasks and struggle with generalization across different tasks, particularly when faced with variations in signal length and channel count. In this paper, we introduce EEGMamba, the first universal EEG classification network to truly implement multi-task learning for EEG applications. EEGMamba seamlessly integrates the Spatio-Temporal-Adaptive (ST-Adaptive) module, bidirectional Mamba, and Mixture of Experts (MoE) into a unified framework. The proposed ST-Adaptive module performs unified feature extraction on EEG signals of different lengths and channel counts through spatial-adaptive convolution and incorporates a class token to achieve temporal-adaptability. Moreover, we design a bidirectional Mamba particularly suitable for EEG signals for further feature extraction, balancing high accuracy, fast inference speed, and efficient memory-usage in processing long EEG signals. To enhance the processing of EEG data across multiple tasks, we introduce task-aware MoE with a universal expert, effectively capturing both differences and commonalities among EEG data from different tasks. We evaluate our model on eight publicly available EEG datasets, and the experimental results demonstrate its superior performance in four types of tasks: seizure detection, emotion recognition, sleep stage classification, and motor imagery. The code is set to be released soon."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"EEG Classification",
"State Space Models",
"Mixture of Experts",
"Brain-Computer Interfaces"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/da4a2c7a865952271bb3f9234697bb1f0679448a.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "EEGMamba: Bidirectional State Space Model with Mixture of Experts for EEG Multi-task Classification"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
14E7S17hFv | Counterintuitive RL: The Hidden Value of Acting Bad | main | Active | Counterintuitive;reinforcement learning | reinforcement learning | 3;3;5;6 | 3;4;2;4 | 2;1;2;2 | 2;2;3;3 | 3;2;3;3 | 4.25 | 3.25 | 1.75 | 2.5 | 2.75 | -0.058026 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It has been mentioned that \"minimizing the state-action value function in early training ...\". Does the algorithm considers actions with minimum value \"only\" in early training and does the value of $\\epsilon$ in Algorithm 1 gradually reaches to zero? What is $e$ in algorithm 1?\n\n2. Is there any motivating factor to cluster the games in figure 4 or is it just because of the value range?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. In my view, the main strength of this work lies in the presented theoretical assessment. It shows that the minimum-action value leads to higher temporal difference (TD) than random actions and the difference in the TD is equal to the disadvantage gap. Such finding reveals the underlying importance of the bad actions that may help in accelerate the learning which if often ignored. \n\n2. This work nicely formalizes and defines the relevant concepts, and then gradually presents the core propositions. I have found the paper easy to follow. Also, the detailing of the propositions for both single and double Q-learning is helpful for the reader."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work considers the problem of experience collection through behavior policy in RL. They argue the benefit of leveraging extremum actions to learn optimal policy and outlined an algorithm that collects and uses such samples. They theoretically show that how actions with minimum value could be helpful. Experimental validation of the approach has been conducted using the Atari game environment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the paper presents several experimental results on ALE, it lacks experiments across different benchmarks. It needs rigorous validation to uphold the claim.\n\n2. Comparison with more recent and effective exploration techniques are missing. \n\n2. Some part of the writing needs improvement. For example, it is very hard to identify how figure 2 and 5 differ."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I see the following minor issues and make some auggestions:\n\nLine 50: I wouldn't call $\\epsilon$-greedy \"naive and standard technique\"\n\nLine 96: comma at the end, not a full stop\n\nLine 113: I believe you could use different subscript for $\\theta$ to differentiate the gradient step from the environment time step.\n\nLine 123: full stop at the end of eqn\n\nLine 124: \"a family of algorithms have been proposed based on counting state visitations\" what are these algorithms? I would strongly recommend citing \"R-max – A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning\" and \"Reinforcement Learning in Finite MDPs: PAC Analysis\" here.\n\nLine 155: Why is there an $s' \\sim \\mathcal T(s, \\hat{a})$ in the first expectation? I don't see any dependence on $s'$ in the term inside the bracket. Same question for later versions of smoothness too.\n\nProof of Proposition 3.4: could you expand on the second inequality please?\n\nLine 264: I am unclear on what is the \"information gained\" here? Is it in an information theoretic sense or in terms of optimizaing the loss?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper provides a good set of experimental results in the low data regime. The method requires fairly simple changes to existing algorithms and it tends to improve performance while being so. The paper is largely well written and I was able to follow along easily."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a new algorithm which explores by choosing the worst action as estimated by the neural q function. They demonstrate the efficacy of this in low data regimes for double DWN and compare it to vanilla epsilon greedy based double DQN."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I see a few important issues that need addressing before I can raise my score.\n\nIn **Proposition 3.4 and 3.6** you start with statements for state $s_t$, which is random variable corresponding to state at time $t$ but in the inequality on the RHS you somehow have $\\mathcal D(s)$ for a fixed state $s$. I am unclear as to where this $s$ is coming from. Moreover, I believe this would significantly complicate the proofs because you will have to account for the time step or \"loosen\" the lower bound because you will have to take some kind of infimum.\n\nWhile I am able to follow intuitively why you would benefit from taking the value minimizing action, I believe you should also include a **comment on the estimated regret** for this choice. It might be beneficial for a randomly initialized $Q$-function in a game setting but we must consider cases where large negative rewards are \"harmful\" to the agent. This is one of the reasons why minimizing regret is important to both theoreticians and practitioners.\n\nIn the experimental section I would be interested to see **comparison with two other papers**: \"Exploration with random network distillation\" and \"Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning\". Former is based on quantifying how novel a data point is and the latter is directly related to optimistically choosing an action based on pseudo counts. You refer to the inability of doing count based exploration (as done in tabular setting) in your paper but these works are doing some form of counts based exploration. For reference, in lines 126-128 you write\n>> incorporating these count-based methods in high-dimensional state representation MDPs requires substantial complexity including training additional deep neural networks to estimate counts or other uncertainty metrics\n\nI would expect some comparison to how much better these more complex methods are."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Section 5 suggests that experiments were also done with Double DQN (with PER), however all I could find were learning curves for QRDQN, and Table 1 with results for DDQN. Generally, figuring out what results are based on which basis algorithm was challenging as I had to go back to the text several times without success. Could you please clarify what basis agent/algorithm each plot/table is based on. E.g. Figure 3's caption says \"MaxMin TD Learning and canonical temporal difference learning in [ALE]...\". The canonical TD learning method actually implies more of the TD($\\lambda$) class of methods for prediction than a deep RL method such as DDQN or QRDQN. So please specify.\n\n2. Why are the curves for Atari 200M truncated in some cases? (Could be beneficial to add the performance curves for the full length of the experiments.)\n\n3. What was the reasoning behind choosing the specific subset of games for the Atari 200M experiments. \n\n4. Can you comment on the counterexample that I've mentioned in the \"Weaknesses\" section? What is your view on it? (Perhaps experimenting with such a setting would be useful.)\n\nMinor suggestions:\n- Line 87: \"MDP [...] contains continuous set of states\"; I believe this intro is incorrect and also not applicable to the setting of this paper. In Atari and Chain MDP, states are in fact discrete. In Atari, pixel values are discrete, yielding a discrete combinatorial set of states.\n\n- Line 89: The definition corresponds to the *expected* reward function. \n\n- Line 90: The PMF-based definition of the policy does not hold fully for the continuous-state definition of the MDP. But this will be fine if Line 87 is changed to discrete set of states. \n\n- Line 102: I believe the second expectation is mis-specified and in fact is not needed. \n\n- Line 108: \"In deep reinforcement learning, the state space or the action space is large enough that it is not possible to learn and\nstore the state-action values in a tabular form.\"; state-action spaces being large is not a property of DRL. I think better phrasing would be in this line: domains often tackled with DRL tend to have large state and/or action spaces.\n\n- Definition 3.3 seems to be formalized such that $\\theta$ is a random variable of the expectation, but the wording seems to imply that $Q_\\theta$ is a given.\n\n- Would be good to have a visualization of the Chain MDP for ease of readability. Also, what was the number of states $N$?\n\n- Number of environment interactions are not equal to the number of frames in Atari 2600 tasks, because of frame skipping of > 1 used. As such, the X axis labels should change to number of frames.\n\n- The proposed approach is only compatible with discrete-action Q-based methods. That is to say, methods like DDPG cannot utilize it. I think it would be good to mention this somewhere."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "**Interesting problem scenario:**\nConsidering exploration strategies that are directly useful for structural/temporal credit assignment is an interesting area to focus on in approximate/deep RL.\n\n**Analysis tools around acting uniformly vs. Q-minimizing actions after parameter initialization:**\nI found the approach of the propositions to analyze the impact of acting uniformly vs. taking the Q-minimizing action on the TD error to be interesting.\n\n**Experimental testbeds:** \nThe choice of testbeds, ranging from a toy MDP problem to 100K and 200M Atari benchmarks is reasonable. \n\n**Evaluation metrics:**\nReporting Median and 80% aggregate measures, using 5 seeds per each method in Atari for DQN-based methods, and reporting standard error of the mean return are all reasonable choices. However, statistical measures introduced by Agrawal et al. (2021) [arXiv:2108.13264] would have been a step up."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper argues for an alternative exploration strategy to $\\epsilon$-greedy for deep/approximate value-based RL methods (mostly those founded on Q-learning) in which instead of sampling uniformly at random with probability $\\epsilon$, their approach samples actions based on $min_a Q(s, a)$. Algorithmically, the TD update rule of the basis algorithm remains intact. The authors argue that experiences generated by this method of acting have meaningful implications during experience replay/consolidation by TD methods (in particular, deep Q-learning family of algorithms). As such, the authors frame their proposal as a TD approach, in what they call MaxMin TD learning.\n\nThey examine the learning performance on a few tasks of the Atari suite (200M frames), full set of the Atari 100K benchmark, and an illustrative Chain MDP task. The key Atari results are based on the combination of their MaxMin TD learning with QRDQN (a distributional RL algorithm) in comparison with QRDQN with $\\epsilon$-greedy, where MaxMin TD variant achieves higher AUC on both the Median and 80th Percentile aggregate measures."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Framing the approach as a TD method, as opposed to an exploration strategy:**\nFraming of the approach as a TD algorithm is not justifiable. A strategic exploration approach could facilitate credit assignment, but categorizing them as a TD approach is rarely ever useful in my view. The proposed method only touches experience generation and not experience consolidation and in this way, I see it as best described as a behavior/exploration strategy. Also, the baselines in question are Noisy Nets and $\\epsilon$-greedy, which are both known as exploration/behavior strategies.\n\n- **Propositions and proofs do not deliver** a full picture of what's going on, unlike the claims for theoretical foundations on par with those existing in tabular settings. \n\n- The approach of only choosing actions from $max Q$ and $min Q$ can easily be shown to introduce bias in a simple counterexample. Say in a multiarmed bandit, action *a* is initialized to the minimal value at random (wrt. to the other initialized actions' values) but as it happens it's *true* Q value is lower than the initialized value. Let's assume also that action *b* is initialized to the maximal value at random (wrt. to the other initialized actions' values) and its corresponding *true* action value is higher than all other initialized actions' values. Note that, even if we use functional approximation (e.g. a neural network) to solve this problem, with parameters shared between the Q estimators for the various actions, it can easily end up being the case that no other actions are experienced during the course of training interactions. This would hold even despite the fact that neither of actions *a* and *b* would be the Q-minimizing or the Q-maximizing actions, respectively, wrt. the *true* Q function.\n\n- Consider the example above again: The TD error reaches zero/near-zero for the Q-minimizing and Q-maximizing actions after a few updates (where Q is the current estimator and not the true Q-function). However, all other actions will have a much higher TD error since no updates is performed on them. This effectively shows that while the results of the Propositions in the paper could hold probabilistically assuming uniform initialization of the outputs of the Q-estimator network, they do not hold on a case by case setting nor do would they hold after several updates (after the uniformity of the outputs is no longer the case).\n \n- Results on Atari 100K are significant, but not on Atari 200M experiments (especially given the fact that the plots for the latter are truncated earlier than 200M frames; e.g. StarGunner is truncated at 70M frames). This likely has ties to my argument above. MaxMin TD could help at the beginning of training (assuming settings like my counterexample occur less commonly in practice / in these tasks), but would not be able to reach higher final performances after enough training of a good baseline on the task. \n\n- I think any benefit emerging from MaxMin TD could have ties to epistemic uncertainty minimization. I think discussions, detailed analysis, and comparisons with approaches directly purposed to do so (incl. bootstrapped DQN of Osband et al., 2016) would have been beneficial."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1: Could the authors clarify the decay rate for the hyperparameter $\\textit{Exploration epsilon decay frame fraction}$? It seems that $\\epsilon$ decreases every 0.8% of the total interaction budget. How was this value selected?\n\nQ2: Could you provide more details on how NoisyNetworks was implemented in the experiments? Clarifying architecture choices and how this was selected would be useful, as it apparently allows for a range of configurations.\n\nQ3: In Figure 1, constant 2 appears to balance exploration and exploitation in the UCB algorithm. Has this constant been optimized for the problem, and if so, could the results for varying values be shown? I'd like to see results for values such as [0.5, 1, 2, 3] as done for the epsilon value.\n\nQ4: Could the authors clarify the intended interpretation of Figure 4 for the reader? What is exactly the meaning of TD error being stable or suffering from a drop during environment interactions? What are \"high\" negative or positive values in this case?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors tested their approach using both DDQN and a more recent approach that represents the possible returns as distribution (QRDQN), which shows the flexibility of applying MaxMin TD Learning over a variety of different algorithms. Moreover, the tested environment is the well-known Atari benchmark, offering tasks with various characteristics. The paper is relatively easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose MaxMin TD Learning, an algorithm that alternates between optimistic and pessimistic strategies for action sampling and Q-estimation updates. This approach addresses sample inefficiency in deep reinforcement learning (RL), though the method offers only incremental novelty. The authors provide both theoretical and empirical analysis, evaluating their method on the popular Atari benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* *Marginal novelty*: The proposed method introduces limited novelty since exploring different selection criteria based on Q-estimations has been previously explored with ensembles [1, 2]. Additionally, similar work addressing the optimistic/pessimistic policy update trade-off exists using a more task-dependent strategy [3]. A Related Works section would help clarify where the proposed method advances existing literature. Furthermore, count-based exploration strategies should be referenced in the Background section for completeness.\n\n* *Evaluation in high-dimensional MDPs*: The evaluation lacks depth, particularly concerning high-dimensional MDPs. MaxMin TD Learning is designed to enhance sample efficiency via exploration, yet it is compared against a standard $\\epsilon$-greedy strategy, which performs well given a larger interaction budget and appropriately tuned decay factors. Limiting the interaction budget significantly impacts $\\epsilon$ decay and policy performance, and it appears that the decay factor used here converges too rapidly to its minimum (see Q1). I would recommend including experiments with varying $\\epsilon$ values, especially in the 200-million-frame setting. Additionally, while the 100k benchmark used the NoisyNetworks exploration strategy, it was absent in the 200-million-frame experiments.\n\n* *Fair comparison*: A more balanced comparison would be to benchmark MaxMin TD Learning against alternative approaches designed to enhance sample efficiency as seen in [4, 5]. The authors could emphasize the benefit of MaxMin TD Learning, such as enabling effective learning without requiring prior logged data or a guide policy, which could potentially lead to distribution shifts.\n\n**General remarks**\n\n- In the phrase “Thus, in high-dimensional complex MDPs…”, the citation of Kakade (2003) seems out of place, as deep reinforcement learning was developed later.\n\n- The second question raised saying that the goal is to achieve a *zero cost* experience collection seems infeasible in the context of exploration since interactions with the environment have an inherently associated cost. I think the authors suggest *zero additional cost*\n\n- I suggest having a single Reference section.\n\n**References**\n\n[1] Lan, Qingfeng, et al. \"Maxmin q-learning: Controlling the estimation bias of q-learning.\" arXiv preprint arXiv:2002.06487 (2020).\n\n[2] Agarwal, Rishabh, Dale Schuurmans, and Mohammad Norouzi. \"An optimistic perspective on offline reinforcement learning.\" International conference on machine learning. PMLR, 2020.\n\n[3] Moskovitz, Ted, et al. \"Tactical optimism and pessimism for deep reinforcement learning.\" Advances in Neural Information Processing Systems 34 (2021): 12849-12863.\n\n[4] Yao, Yao, et al. \"Sample efficient reinforcement learning via model-ensemble exploration and exploitation.\" 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.\n\n[5] Uchendu, Ikechukwu, et al. \"Jump-start reinforcement learning.\" International Conference on Machine Learning. PMLR, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024counterintuitive,\ntitle={Counterintuitive {RL}: The Hidden Value of Acting Bad},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=14E7S17hFv},\nnote={under review}\n}"
},
"abstract": {
"value": "Learning to make sequential decisions solely from interacting with an environment without any supervision has been achieved by the initial installation of deep neural networks as function approximators to represent and learn a value function in high-dimensional MDPs. Reinforcement learning policies face exponentially growing state spaces in experience collection in high dimensional MDPs resulting in a dichotomy between computational complexity and policy success. In our paper we focus on the agent’s interaction with the environment in a high-dimensional MDP during the learning phase and we introduce a theoretically-founded novel method based on experiences obtained through extremum actions. Our analysis and method provides a theoretical basis for effective, accelerated and efficient experience collection, and further comes with zero additional computational cost while leading to significant acceleration of training in deep reinforcement learning. We conduct extensive experiments in the Arcade Learning Environment with high-dimensional state representation MDPs. We demonstrate that our technique improves the human normalized median scores of Arcade Learning Environment by 248% in the low-data regime."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Counterintuitive",
"reinforcement learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e75480b8caf35924aac053b8d7439244b4b39661.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/d6a230ee09fcc32b21b8e227e1f6b6335f201f4d.zip"
},
"title": {
"value": "Counterintuitive RL: The Hidden Value of Acting Bad"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
14fFV0chUS | TRACE: Temporal Grounding Video LLM via Causal Event Modeling | main | Active | video large language model;video temporal grounding | applications to computer vision, audio, language, and other modalities | 5;5;6;6 | 4;3;4;3 | 3;3;3;3 | 3;2;3;2 | 3;2;3;3 | 5.5 | 3.5 | 3 | 2.5 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "No additional questions. Please see the \"Weaknesses\" section for areas needing clarification.\n\n### Recommendations for Improvement:\n- **Refine Prompt Design Explanation:** Providing specific strategies or insights on prompt design tailored for VTG tasks would enhance the paper's originality and usefulness for future researchers.\n \n- **Explore Custom Scene Parsing Techniques:** Introducing refined parsing methods could strengthen TRACE's robustness and accuracy in multi-modal alignment.\n\nThis structured feedback should provide the authors with a comprehensive view of the strengths and areas for enhancement in their paper on TRACE."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper proposes TRACE, a framework leveraging causal event modeling to generate structured video representations through large language models (LLMs). This approach addresses structural gaps in video data, making it valuable for multi-modal research and practical applications in video analysis.\n \n2. TRACE maximizes the potential of pre-trained LLMs by adopting causal event modeling, which decomposes video inputs into frames and aligns them with textual prompts. The temporal segmentation and alignment methods allow videos to be broken down into events with associated timestamps, salient scores, and captions. This granularity is crucial for precise video event parsing and presents a significant step forward in video understanding with LLMs.\n\n3.TRACE outperforms the existing Video-LLMs on three pivotal Video Temporal Grounding (VTG) benchmarks—Charades-STA, QV Highlights, and YouCook2—underscoring its efficacy and robustness in handling video temporal grounding tasks. This achievement underscores TRACE's ability to accurately capture and model the intricate temporal dynamics across a spectrum of video datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new method for Video Temporal Grounding (VTG) tasks, named TRACE. TRACE uses a causal event modeling framework to represent videos as a sequence of events with timestamps, salient scores, and textual descriptions. The paper designs a task-interleaved video large language model to address the limitations of traditional video LLMs in handling the inherent structure of videos. The TRACE model utilizes different encoders and decoding heads to process visual frames, timestamps, and text inputs, enabling more effective event sequencing and causal modeling. Experimental results demonstrate that TRACE achieves state-of-the-art zero-shot performance on various VTG tasks and, after fine-tuning, can match the performance of traditional non-generative, task-specific models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tWhile causal event modeling is presented as a core contribution of this work, the related work section does not address any prior research on similar methodologies. It would be helpful to clarify whether comparable approaches have been explored in the field of video understanding, or if this approach is entirely novel within this domain. Providing this context could strengthen the argument for the method’s originality and situate it more clearly within existing research.\n\n2.\tIt is unclear whether compressing visual features to 8 tokens is sufficient for preserving critical information in complex video scenes. The paper does not provide an analysis or experimental results on the trade-off between the number of tokens and model performance, which would be valuable in understanding the potential impact of this compression choice.\n\n3.\tThere are several grammatical and spelling errors throughout the manuscript, which impact readability and may detract from the paper’s clarity. For example: Line 22: \"processes\" should be corrected to \"process\". Line 44-45: The phrase \"...which,...\" should be rephrased, and \"lacks\" should be changed to \"which lack\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The presentation and illustration are quite clear and easy to follow.\n2. The motivation of causal event modeling is quite intuitive and the design is straightforward and yet effective.\n3. The zero-shot performance is superior compared to previous video LLM methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a task-interleaved video LLM, TRACE, which incorporates a newly-designed causal event modeling framework for VTG task. The TRACE employs multiple encoders for different inputs, while the task tokens are arranged in an interleaved manner. TRACE demonstrates SOTA performance on various VTG datasets compared to previous video LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the paper compares TRACE with other video LLMs, it presents limited comparison and may not adequately address how it stands against traditional non-generative and task-specific models.\n2. The extent to which TRACE can be applied to other types of video tasks beyond VTG is unclear. Its design may be highly specialized, which could limit its applicability across diverse video understanding tasks. Authors should present more results on other video-understanding tasks since the design seems generalizable by building such causal event relations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) Video Temporal Grounding (VTG) is a crucial task, yet current Video-LLMs underperform in this area. Techniques aimed at improving temporal grounding for these models are highly valuable to advance the field.\n2) The causal event modeling framework fits well with the next-token prediction paradigm of large language models (LLMs), offering an intuitive way to model video structures in sequential tasks.\n3) TRACE demonstrates consistent performance improvements over prior Video-LLMs across three key VTG benchmarks (Charades-STA, QVHighlights, and YouCook2), underscoring its effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the task of Video Temporal Grounding (VTG) and introduces TRACE, a task-interleaved Video-LLM designed for enhanced VTG performance. The authors highlight limitations in current Video-LLMs, which rely solely on natural language generation without considering the inherent temporal structure of videos. To address this, they propose a novel causal event modeling framework, decomposing videos into sequences of events defined by timestamps, salient scores, and textual captions. Extensive experiments on datasets such as Charades-STA, QVHighlights, and YouCook2 demonstrate the superior zero-shot performance of TRACE compared to existing Video-LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) While the motivation for TRACE is clear, the use of multiple task-specific heads may limit the model’s generalization. A primary appeal of Video-LLMs lies in their ability to handle a variety of tasks without specific fine-tuning. TRACE’s focus on VTG may narrow its versatility, making it less effective for general video understanding tasks. In most cases, lightweight VTG-specific models with stronger performance could be more suitable for VTG scenarios.\n2) Some clarity is not clear. For example, the paper does not adequately explain slot-based compression, which is not a widely known technique. Moreover, compressing each frame to just 8 visual tokens might lead to significant information loss, raising concerns about the trade-off between efficiency and accuracy.\n3) It is unclear whether the same set of number tokens is used for both timestamps and scores. If so, this could blend the two types of information, contradicting the authors' claim (lines 45–46) that the model preserves the distinct structure of video events."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My main concern is with the autoregressive modeling approach, and if the authors can provide a reasonable explanation, I am willing to consider raising my score, as I believe this work could provide inspiration for future work."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The author first employs a causal modeling method in the grounding of VLLM, achieving causal probability modeling through the sequence of input tokens. This approach will provide inspiration for future work on video understanding tasks using VLLM."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new Video Temporal Grounding (VTG) method that addresses the shortcomings of existing LLM in handling VTG tasks by modeling causal events."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Autoregressive modeling.**\n\nOne of my major concerns is that the authors have only used the earlier events $e_{1:k-1}$ in their modeling of causal relationships between events through autoregression, without incorporating the equally known $e_{k+1:K}$. I believe this approach may be unreasonable since it is likely that the same events may occur earlier while the current event is different due to unrelated pretexts. However, this issue can be avoided by modeling different subsequent events simultaneously. Besides, most current video understanding researchers have modeled multiple events by utilizing all contextual events that occur before and after them [1-4]. This may require the authors to provide further explanation.\n\n[1] Liang C, et al. Visual abductive reasoning[C]. CVPR 2022.\n\n[2] Lam T E, et al. CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes. NeurIPS 2024.\n\n[3] Chen T, et al. MECD: Unlocking Multi-Event Causal Discovery in Video Reasoning. NeurIPS 2024.\n\n[4] Du Y, et al. Towards Event-oriented Long Video Understanding. ArXiv 2024.\n\n2. **Inference speed.**\n\nThe authors have adopted a form similar to autoregression, and I would like to understand if there is a time overhead in comparing their model's inference speed to that of current mainstream LLMs.\n\n3. **LLM backbone.**\n\nI noticed that the authors used Mistral-7B as the LLM backbone, however, in other comparison methods, Timechat used LLaMA-2, while HawkEye, Momentor, and VTimeLLM used Vicuna. I would like to know if the authors have conducted experiments with LLaMA-2 or Vicuna as the LLM backbone, to ensure that the superior performance is not due to the better LLM backbone but rather the causal modeling."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper aims to address the mismatch between video structure and video LLMs on video temporal grounding tasks, and propose a causal event modeling framework and the TRACE model as a solution."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024trace,\ntitle={{TRACE}: Temporal Grounding Video {LLM} via Causal Event Modeling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=14fFV0chUS},\nnote={under review}\n}"
},
"abstract": {
"value": "Video Temporal Grounding (VTG) is a crucial capability for video understanding models and plays a vital role in downstream tasks such as video browsing and editing. \nTo effectively handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend in employing video LLMs for VTG tasks. However, current video LLM-based methods rely exclusively on natural language generation, lacking the ability to model the clear structure inherent in videos, which restricts their effectiveness in tackling VTG tasks. To address this issue, this paper first formally introduces causal event modeling framework, which represents videos as sequences of events, and predict the current event using previous events, video inputs, and textural instructions. Each event consists of three components: timestamps, salient scores, and textual captions. We then propose a novel task-interleaved video LLM called TRACE to effectively implement the causal event modeling framework in practice. \nThe TRACE processes visual frames, timestamps, salient scores, and text as distinct tasks, employing various encoders and decoding heads for each. Task tokens are arranged in an interleaved sequence according to the causal event modeling framework's formulation.\nExtensive experiments on various VTG tasks and datasets demonstrate the superior performance of TRACE compared to state-of-the-art video LLMs. Our model and code will be made publicly available."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"video large language model",
"video temporal grounding"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e7113f54c3401de1964e3a6b909dacc1224df0f7.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TRACE: Temporal Grounding Video LLM via Causal Event Modeling"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
15ASUbzg0N | AVID: Adapting Video Diffusion Models to World Models | main | Active | world models;video diffusion;black box adaptation;controllable video generation | applications to computer vision, audio, language, and other modalities | 5;5;6;6 | 3;4;3;3 | 3;2;3;2 | 2;2;3;3 | 2;3;4;2 | 5.5 | 3.25 | 2.5 | 2.5 | 2.75 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper has a nice motivation: how does one adapt existing foundation models in order to add an action conditioning to them, so as to make it more relevant and useful for embodied robotics applications\n- The paper writing is clear; first the limitations of prior work are built up and then a solution is proposed"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a mechanism for adapting current image-conditioned video diffusion models to action-conditioned video diffusion models. They do this by training an additional UNet after the standard video diffusion UNet which predicts an adjustment to the noise output by the standard UNet. Because of this setup, their \"adapter\" does not need access to the parameters of the pretrained video diffusion model. Experiments show that this kind of noise adaptation helps for some metrics and does not for some others."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The way the paper starts with the motivation near L51-52 is a bit misleading. The paper actually cannot fix the issues in L51-52 because they still assume access to the internal inference pipeline of these closed-source model, because if I understand it correctly, this method needs access to a diffusion model's noise prediction at each of the N reverse diffusion steps that happens at inference. For closed source models, this information is not available.\n- The performance gain in the quantitative metrics is not substantial. The metrics where the proposed method shines are mostly photometric quantities. It is not clear if the error margin between prior work and this work just results from the standard deviation or variance of the models. I think a better reflection of the proposed approach would have come from an application to a downstream robot task (maybe manipulation) that would evaluate a robot in action. PSNR and other photometric errors with the shown gain do not say much about the performance of the method.\n- The paper heavily leans on the limitations of the POE approach and that is how a learnable adapter is motivated but qualitatively there is no comparison to that approach (even though POE is slightly better than the action conditioned diffusion across some metrics and settings)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Regarding W1, can authors elaborate more on how and why this analysis motivates design choices in methodology of AVID?\n2. In the ablation study of “No mask” (Table 3), is the adapter trained with $\\epsilon_{\\text{final}}$ given in Equation 5 or the adapter not being able to output a mask?\n3. Since the mask is an important component in design choices of AVID, could author visualize what the mask looks like in different timesteps of diffusion denoising process, which corresponds to Figure 4d?\n4. Since AVID performs two diffusion denoising process, does this increase inference time and thus limit the scope of downstream applications of synthetic videos generated from this approach?\n5. Regarding W3, is it technically possible to apply this approach to domains other than world modeling?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well written and easy to follow\n2. The main idea of training a lightweight adapter for action-labeled domains is reasonable. It balances finetuning efficiency and task performance.\n3. Baseline comparisons are comprehensive. Authors compared to many alternative baselines to demonstrate effectiveness of their approach. Authors provide qualitative visualizations for quality of generated videos and usefulness of learned masks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the problem setting of action-conditioned video generation. It proposes to adapt pre-trained video generation model to action labeled dataset without access to parameters from pre-trained models. The goal is to add action information as conditioning to pre-trained models for more accurate video predictions. Authors also analyze limitations in previous related work, production of expert, under a specific case. The proposed approach, AVID, trains an adapter in smaller size on action labeled video datasets. It takes noise predictions outputs from pretrained models and action information as input, and learns to output a mask that is used to combine outputs from pre-trained model and adapter. The adapter is trained with reconstruction loss between final output from both models and ground truth on domain-specific datasets. Authors conducted experiments on two robotics datasets, Procgen and RT1, and compared proposed approach to several baselines that have full access and do not assume access to pretrained model parameters. Experiments results demonstrate that AVID outperforms baselines in generating more realistic videos and better quality given action information on these domains."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation in Section 3.2 is a little unclear. It is hard to connect analysis about limitations of previous work [1] to motivations of the proposed approach\n2. The novelty is somewhat limited. The main difference from previous work is to have domain-specific adapter output an element-wise mask that is used to combine noise predictions from pre-trained model and adapter.\n3. The experimental domains are only two datasets within action-conditioned world modeling\n\n[1] Yang, Mengjiao, et al. \"Probabilistic adaptation of text-to-video models.\" arXiv preprint arXiv:2306.01872 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors propose a novel method to condition pre-trained video diffusion models on action sequences without access to the pre-trained model's parameters.\n2. The authors mathematically highlight the limitations of the adaptation method proposed in \"Probabilistic Adaptation of Text-to-Video Models\" and this other approach.\n3. The authors demonstrate that their adaptation method has better action consistency compared to the other approach, using a new metric that they introduce. \n4. The authors also propose multiple baselines to compare against their proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "1. The authors propose a novel method to condition pre-trained video diffusion models on action sequences without access to the pre-trained model's parameters.\n2. The authors demonstrate that their adaptation method is superior to the method proposed in \"Probabilistic Adaptation of Text-to-Video Models\" and mathematically highlight the limitations of this other approach.\n3. The authors use different pre-trained base models and two different video domains, games and robotics to quantitatively evaluate their proposed method against the above adaptation approach and some other proposed baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Table 2, Action conditioned diffusion has a better Action Error Ratio compared to the proposed approach for all three (small, medium, large) variants. While the authors do note this as a limitation, this needs to be explained/investigated more. If it is better to just train an action conditioned diffusion model from scratch why should there be a need to adapt pre-trained models ?\n\n2. Instead of using the action embedding to just scale and shift the t-th frame feature, have the authors explored using cross-attention layers directly with the action embedding sequence similar to language conditioning ? Are there any specific challenges that prohibit such an approach ?\n\n3. It would be interesting to see results for each task type in RT-1 . Are there tasks that are much harder to model than others and what does that tell us about the approach ?\n\n4. Some video visualisations of the generated videos (especially for robotics) would also be very useful to judge the effectiveness of the approach. Are the videos temporally consistent visually ?\n\n5. Why is IRAsim's Action error ratio empty in Table 7 ? is it not possible to evaluate the Action Error Ratio of IRAsim ?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Most questions and suggestions are detailed in the 'Weaknesses' section.\n\n**Limitations of Naive Adaptation of Yang et al.**\n\nCan the authors please highlight the exact source of discrepancy between the derivation in Yang et al. to the derivation presented in this section? Do you claim that there is an error in their derivation? Alternatively, are there different assumptions in your setting where their derivation does not hold?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Summarized points:\n- Tackle an interesting problem in the path for scaling robot learning\n- Clear and well written paper\n- Good positioning in related work\n- Comparison to relevant baselines\n- Thorough analysis of results\n- Detailed appendix"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a method for leveraging diffusion models pretrained on large-scale action-free video data for training action-conditioned diffusion world models on smaller action-labeled data from domains of interest. The motivation for training these world models is to solve downstream sequential decision-making tasks. The proposed method’s main novelty is that it requires access to some intermediate calculations of the pretrained diffusion model but not to its parameters. The proposed method, AVID, trains an adapter network which is conditioned on the pretrained model’s noise prediction and optimizes a denoising loss that incorporates both noise predictions from the pretrained model and the adapter’s output using a learned mask. The author’s evaluate world model performance on a real-robot and a video game domain based on multiple perceptual metrics as well as an action-prediction-based metric. Baselines include various diffusion-based methods some of which require full access to model parameters. The proposed method either outperforms or is comparable to baselines in most of the evaluated metrics while not requiring access to pretrained model parameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Summarized points:\n- Intermediate model calculations are not necessarily more likely to be accessible than the model parameters\n- Limitation analysis of previous work (Section 3.2) does not clearly motivate the author’s specific choice of solution\n- Main motivation is sequential decision-making but evaluation metrics do not assess the world models’ efficacy in solving such tasks\n- It is not clear from the experimental results that training from scratch is not preferable to the proposed method for downstream sequential decision-making\n\n**Evaluation - Metrics**\n\nThe main motivation of your method is to accommodate sequential decision-making but evaluation metrics do not assess the world models’ efficacy in policy learning or planning.\nAll metrics excluding ‘Action Error Ratio’ are perceptual metrics that may be dominated by aspects of the videos that are not important for control. For this reason, I believe the most interesting and relevant metric out of the ones you display in your evaluation is the ‘Action Error Ratio’. Your evaluation could benefit from including additional metrics that are a better proxy for the world model’s usefulness in sequential decision-making. In the Procgen dataset for example, you may want to measure the ability to predict the reward from the generated frames as well as the actions.\n\nI understand that evaluating the world models by actually using them to solve a sequential decision-making task may not be straightforward. Doing this for the RT1 dataset would be hard for multiple reasons, but it may be more feasible for the Procgen environments. One possible evaluation pipeline is training a separate model to predict the reward from a given frame and then use the cross-entropy method (CEM) or a similar sampling-based planning algorithm with model predictive control (MPC) on top of the world model to maximize the sum of rewards in the prediction horizon. Any decision-making algorithm you choose doesn’t have to be SOTA to demonstrate the point that a given world model is better than the other for this purpose.\n\nWhat is the accuracy of the action predictor on each dataset? I believe this is important in order to validate the use of the ‘Action Error Ratio’ metric and that this information should at least be in the appendix.\n\n**Evaluation - Baselines**\n\nWhy do you tune baseline hyperparameters based on FVD and not based on e.g. normalized evaluation metrics? I find this choice puzzling since you explicitly write in the results section that this metric is less suitable than others in the setting of action-conditioned video generation.\n\nHow do you choose which baselines out of the 8 you suggested appear in the result tables?\n\nCan the authors please explain what is the purpose of the ‘Full’ row in the result tables?\n\n**Evaluation - Results**\n\nIt is not clear from the experimental results that training from scratch is not preferable to the proposed method for downstream sequential decision-making, a point that is also suggested in the limitations section and is mostly based on the ‘Action Error Ratio’ metric. This is not to say that it clearly is not beneficial. 
I suggest adding a discussion about the differences in performance in the two domains which would incorporate further insights as to when and why training an adapter is preferable to training from scratch.\n\n**Evaluation - Ablation Study**\n\n*Mask ablation*: It is not clear from your results that the learned mask has performance benefits and can’t be ‘absorbed’ into the adapter noise prediction, especially since it hurts performance on one dataset and doesn’t in the other.\nHow do you explain the difference in the effects of the mask on performance in each dataset? I think a discussion with respect to factors like the relationship between pre-training and fine-tuning data in each dataset and with respect to the results presented in Figure 4d could shed more light on this matter.\n\n*Conditioning ablation*: I think the method and/or ablation section can benefit from an explanation or intuition behind why conditioning on the pretrained model’s output is beneficial, given that the pretrained output is already accounted for in the objective.\n\n*Request for ablation*: As I see it, the fundamental difference between the proposed method and the PoE baseline is that the parameters of the adapter network are trained on the denoising loss containing noise predictions from both the pretrained network and the adapter network. Therefore an interesting ablation would be combining both the NM and NC ablations."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Adapting pretrained video diffusion models to action-conditioned world models without finetuning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024avid,\ntitle={{AVID}: Adapting Video Diffusion Models to World Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=15ASUbzg0N},\nnote={under review}\n}"
},
"abstract": {
"value": "Large-scale generative models have achieved remarkable success in a number of domains. However, for sequential decision-making problems, such as robotics, action-labelled data is often scarce and therefore scaling-up foundation models for decision-making remains a challenge. A potential solution lies in leveraging widely-available unlabelled videos to train world models that simulate the consequences of actions. If the world model is accurate, it can be used to optimize decision-making in downstream tasks. Image-to-video diffusion models are already capable of generating highly realistic synthetic videos. However, these models are not action-conditioned, and the most powerful models are closed source which means they cannot be finetuned. In this work, we propose to adapt pretrained video diffusion models to action-conditioned world models, without access to the parameters of the pretrained model. Our approach, AVID, trains an adapter on a small domain-specific dataset of action-labelled videos. AVID uses a learnt mask to modify the intermediate outputs of the pretrained model and generate accurate action-conditioned videos. We evaluate AVID on video game and real-world robotics data, and show that it outperforms existing baselines for diffusion model adaptation. Our results demonstrate that if utilized correctly, pretrained video models have the potential to be powerful tools for embodied AI."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"world models",
"video diffusion",
"black box adaptation",
"controllable video generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a6cdded6b6e19d7584670b99a59986474e06f242.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/25ac6509dd36691b1b407555902a403283ba7d12.zip"
},
"title": {
"value": "AVID: Adapting Video Diffusion Models to World Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
15UetYngA7 | FuseChat: Knowledge Fusion of Chat Models | main | Active | Model Fusion;Large Language Models | foundation or frontier models, including LLMs | 5;6;6 | 4;5;4 | 2;3;3 | 2;3;3 | 3;2;4 | 5.666667 | 4.333333 | 2.666667 | 2.666667 | 3 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. For Figure 3, I wonder if pivot LLM is the original OpenChat-3.5-7B or OpenChat-3.5-7B after fusing training. I also wonder if Target LLM is the OpenChat-3.5-7B after fusing training or the final FuseChat model. Please clarify these.\n2. I wonder if you could categorize the samples in your training data into domains by MT-Bench and see how the distribution is in your training set."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper studies an interesting question of how to fuse multiple chat LLMs into a potent chat LLM. The paper is well-written and well-organized. \n2. The paper has extensive experiments to investigate the effectiveness of their proposed framework and each component in their framework.\n3. Their fusion method is also computation-friendly, which doesn't require additional training or dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new framework, FuseChat, to fuse diverse LLMs into a single LLM capable of performing various tasks. They first apply pairwise knowledge fusion on source chat LLMs to create multiple target LLMs with identical structures. To fuse models with different vocabulary, they introduce a statistics-based token alignment approach to obtain probabilistic distribution matrices. Then, they fuse all the target LLMs within the parameter space by utilizing the proposed new merging method, SCE. In their experiments, they conducted extensive experiments to investigate their framework with diverse source LLMs and evaluation datasets. They also offered a number of model analyses and ablation studies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. They didn't provide a significance test to show if their proposed method significantly outperforms their baselines (e.g. FuseLLM/OpenChat-3.5-7B Multi) or not. Because the improvement in some tasks is small, it would be better to show whether the improvement is significant. \n2. Table 1's caption needs to be improved. It would be helpful if they clarified what bold font and underscore mean in their table and what the percentage means."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tHave the authors considered individual model distillation instead of pairwise fusion, given the MinCE choice in Equation 2? \n2.\tWhat is the rationale behind the 0.9/0.1 weight distribution in Equation 4? \n3.\tCan the authors provide statistical significance tests for the improvements over Pairwise Fusion?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThe motivation is practical and significant, offering a cost-effective solution for integrating capabilities of different heterogeneous LLMs without training new models from scratch.\n\n2. The two-stage framework effectively combines heterogeneous model knowledge through distillation into homogeneous models followed by parameter merging, with a well-designed token alignment strategy.\n\n3. Comprehensive experiments validate the framework's effectiveness, showing competitive performance against different methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces FUSECHAT, a framework designed for the knowledge fusion of chat-based large language models (LLMs). The proposed fuse-and-merge framework integrates the strengths of diverse LLMs through lightweight continual training while avoiding the high cost and potential redundancy associated with developing new LLMs from scratch. Experimental results indicate that the proposed model outperforms existing methods across AlpacaEval and MT-Bench."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper's technical contribution appears somewhat limited. The approach can be viewed as a combination of pairwise FuseLLM and model merging (similar to TIES-Merging), both of which have been previously established as effective methods. The improved performance, while notable, follows logically from the combination of these known techniques, making the technical innovation less impressive than desired.\n2. Several claims in the paper require further clarification. For instance, the statement on line 92 of the Introduction that \"FUSELLM limits its exploration to source LLMs of the same size as the target LLM\" appears inconsistent with FUSELLM's design, which can handle different-sized source models. Furthermore, FUSECHAT doesn't present special designs for distilling from differently-sized source models. Additionally, the choice of MinCE for the Fusion function in Equation 2 reduces to single-model distillation of the model with lower CE score in each pair, raising questions about the necessity of the pairwise approach.\n3. There are concerns regarding experimental details. The combination weight is 0.9 in Equation 4, which means only 0.1 weight is assigned to distillation loss. Compared to 0.9 for SFT, this setting potentially undermines the significance of the distillation process. Moreover, the modest performance difference between FUSECHAT and Pairwise Fusion shown in Table 1 warrants statistical significance testing to validate the improvements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. First, the challenges of knowledge fusion tasks and the contributions of this paper should be introduced in the Introduction section.\n2. The Method section should highlight the work done by the author. Extensive introduction of work that is not their own will make the article appear less innovative, and you can add formulas to further explain Token Alignment. \n3. The introduction of the SCE algorithm is too short, and the reasons for the use of some steps are not introduced, such as the Calculate and Erase steps.\n4. Added explanations for poor experimental results in the experimental section, for example, Target LLM performs worse than Pivot LLM and Source LLM in some dimensions."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "In general, the logic of the article is good, and the abstract, main text, and conclusions are consistent. The experiments are sufficiently convincing. The author summarizes the previous work from multiple aspects in the related work section."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduce a fuse-and-merge framework called FUSECHAT, which includes two stages. Pairwise knowledge fusion using a pivot LLM and token alignment to generate target LLMs with identical structure and size, and merging these models via SCE method, which determines merging coefficients based on parameter updates."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In the Introduction section, there is insufficient explanation of the challenges faced by FUSECHAT. It is not enough to just explain the advantages of knowledge fusion, but the complexity of the work should also be highlighted.\n2. The contribution of the work done in this paper is not explained in the Introduction section. \n3. The method section uses too many narrative words and lacks specific formula expressions, which increases the difficulty for readers to understand the article. \n4. In the experiment section, there is a lack of explanation for the adverse results in the experiment."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "In this work, we propose a fuse-and-merge framework for knowledge fusion of structurally and scale-varied chat LLMs to integrate their collective knowledge and individual strengths into a more potent chat LLM, resulting in FuseChat."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024fusechat,\ntitle={FuseChat: Knowledge Fusion of Chat Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=15UetYngA7},\nnote={under review}\n}"
},
"abstract": {
"value": "While training large language models (LLMs) from scratch can indeed lead to models with distinct capabilities and strengths, it incurs substantial costs and may lead to redundancy in competencies. Knowledge fusion aims to integrate existing LLMs of diverse architectures and capabilities into a more potent LLM through lightweight continual training, thereby reducing the need for costly LLM development. In this work, we propose a new framework for the knowledge fusion of chat LLMs through two main stages, resulting in FuseChat. Firstly, we conduct pairwise knowledge fusion on source chat LLMs of varying structures and scales to create multiple target LLMs with identical structure and size via lightweight fine-tuning. During this process, a statistics-based token alignment approach is introduced as the cornerstone for fusing LLMs with different structures. Secondly, we merge these target LLMs within the parameter space, where we propose a novel method for determining the merging coefficients based on the magnitude of parameter updates before and after fine-tuning. We implement and validate FuseChat using six prominent chat LLMs with diverse architectures and scales, including OpenChat-3.5-7B, Starling-LM-7B-alpha, NH2-SOLAR-10.7B, InternLM2-Chat-20B, Mixtral-8x7B-Instruct, and Qwen-1.5-Chat-72B. Experimental results on two instruction-following benchmarks, AlpacaEval 2.0 and MT-Bench, demonstrate the superiority of FuseChat-7B over baselines of various sizes. Our model is even comparable to the larger Mixtral-8x7B-Instruct and approaches GPT-3.5-Turbo-1106 on MT-Bench."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Model Fusion",
"Large Language Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/129f5d0cbd1af8519a0f0e8e5f6e009d00f5c7fd.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/70909a90c768d4664f54918eff8a8c7125d2ffba.zip"
},
"title": {
"value": "FuseChat: Knowledge Fusion of Chat Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
15dVqf7VXR | Learning with User-Level Local Differential Privacy | main | Active | Local differential privacy;minimax | learning theory | 3;5;6 | 4;3;4 | 2;2;3 | 2;2;3 | 3;2;3 | 4.666667 | 3.666667 | 2.333333 | 2.333333 | 2.666667 | -0.188982 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why is the case of $\\epsilon > 1$ considered interesting for LDP studies?\n\n2. In Proposition 1 (2), to ensure user-level $\\epsilon$-LDP from item-level $\\epsilon$-LDP, if we randomly pick a sample from each user, why is it stated as ''$n$ users with $m$ samples per user'' instead of ''$n$ users with $1$ sample per user''?\n\n3. For Definition 1, could you explain in detail why the definition of user-level $\\epsilon$-LDP does not ensure item-level $\\epsilon$-LDP?\n\n4. For Theorem~1, I am unable to understand why is it said that the mean squared error will never converge to zero with increasing $m$ if $n$ is fixed.\n\n5. What does $n_0$ represent in Equation (6)?\n\n6. For the stochastic optimization problem, why is only the bounded gradient case considered? Why can't the private mean estimation over unbounded support developed in the paper be used for the unbounded gradient case, which seems more interesting and important in practice?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper tackles significant learning problems under user-level local differential privacy (LDP) constraints and establishes several tight lower and upper bounds."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper first analyzes the mean estimation problem and then extends the findings to stochastic optimization, classification, and regression. Specifically, the authors propose adaptive strategies to achieve optimal performance across all privacy levels. They also derive information-theoretic lower bounds, demonstrating that the proposed methods are minimax optimal up to logarithmic factors. Notably, unlike the central DP model, where user-level DP generally leads to slower convergence, the results show that, under the local DP model, convergence rates are nearly identical between user-level and item-level cases for distributions with bounded support. For heavy-tailed distributions, the user-level rate is even faster than the item-level rate."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Some statements throughout the paper are somewhat unclear, which can make parts of the presentation difficult to follow.\n\nFor the stochastic optimization problem, only the bounded gradient case and strongly convex objective functions are considered, which may not be sufficiently practical for broader applications."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is very organized and presents its results in a clear manner.\n\n2. Matching information-theoretic lower bounds are also derived which enhances the completeness of this work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper examines user-level privacy in a distributed setting, particularly in user-level local differential privacy (ULDP). The authors analyze mean estimation and its applications in stochastic optimization, classification, and regression, proposing adaptive strategies that optimize performance across various privacy levels. The authors claim that unlike in the central model, the convergence rates for user-level and item-level privacy are nearly equivalent in local models, with user-level privacy yielding even faster rates for heavy-tailed distributions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper studies ULDP on various problem settings: mean estimation, stochastic optimization, classification and regression. It is clear from Table 1 how the proposed rates in ULDP is different from the rates in item-level LDP. However, some relevant papers appear to be missing from the references. For example, [1], [2] and [3] \n\n[1]: Li, Bo, Wei Wang, and Peng Ye. \"Improved Bounds for Pure Private Agnostic Learning: Item-Level and User-Level Privacy.\" arXiv preprint arXiv:2407.20640 (2024).\n\n[2]: Cummings, Rachel, et al. \"Mean estimation with user-level privacy under data heterogeneity.\" Advances in Neural Information Processing Systems 35 (2022): 29139-29151.\n\n[3]: Charles, Zachary, et al. \"Fine-tuning large language models with user-level differential privacy.\" arXiv preprint arXiv:2407.07737 (2024).\n\n\nBesides, on line 132 and 133\" Moreover, we also provide the first analysis on nonparametric classification and regression problems under user-level ϵ-LDP\" is not accurate. To the best of my knowledge, [4] also studies regression in the ULDP setting under sparsity constraint. From my perspective, sparse estimation problem in LDP model ([5], [6]) might also could also be a valuable addition to the related work section.\n\n[4]: Ma, Yuheng, Ke Jia, and Hanfang Yang. \"Better Locally Private Sparse Estimation Given Multiple Samples Per User.\" arXiv preprint arXiv:2408.04313 (2024).\n\n[5]: Zhu, Liyang, et al. \"Improved Analysis of Sparse Linear Regression in Local Differential Privacy Model.\" arXiv preprint arXiv:2310.07367 (2023). \n\n[6]: Zhou, Mingxun, et al. \"Locally differentially private sparse vector aggregation.\" 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The authors highlight the regime where \\eps > 1 in the introduction. Yet, it is unclear how this regime is handled in the proof of the lower bound. For example, in the proof of Theorem 2, how do we get from Eq. 62 to Eq. 64, if \\eps > 1? I understand that the current proof holds for \\eps < 1. Similar questions exist in the proof of Theorems 6 and 7. \n\n2. Following the above, it would be useful to highlight the difference in the lower bound proof for item-level and user-level LDP, especially the regime when \\eps >1. \n\n3. It seems to me that the current local model is **non-interactive**? Can the authors comment on the **interactive** model? That is, can the proposed algorithms be easily extended to the interactive model?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "+ It addressed user-level DP, which is a relatively less explored but extremely relevant area\n+ It studied a wide variety of tasks (mean estimation, stochastic optimization, nonparametric classification and regression)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the problem of achieving user-level local differential privacy (LDP) across various statistical tasks, including mean estimation, stochastic optimization, classification, and regression. By tailoring privacy mechanisms to different privacy levels, the authors propose algorithms that attain optimal performance rates under user-level LDP, achieving minimax optimality up to logarithmic factors. Unlike the central model, where user-level privacy often implies slower convergence, the local model yields convergence rates comparable to item-level LDP, with even faster rates in heavy-tailed distributions. This work provides both theoretical bounds and adaptive strategies, expanding the scope of user-level LDP applications in distributed systems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Technical novelty is unclear\n- Some proof is unclear"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A systematic study on user-level local diferential privacy on various tasks including mean estimation, stochastic optimization, classification and regression"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning with User-Level Local Differential Privacy},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=15dVqf7VXR},\nnote={under review}\n}"
},
"abstract": {
"value": "User-level privacy is important in distributed systems. Previous research primarily focuses on the central model, while the local models have received much less attention. Under the central model, user-level DP is strictly stronger than the item-level one. However, under the local model, the relationship between user-level and item-level LDP becomes more complex, thus the analysis is crucially different. In this paper, we first analyze the mean estimation problem and then apply it to stochastic optimization, classification, and regression. In particular, we propose adaptive strategies to achieve optimal performance at all privacy levels. Moreover, we also obtain information-theoretic lower bounds, which show that the proposed methods are minimax optimal up to logarithmic factors. Unlike the central DP model, where user-level DP always leads to slower convergence, our result shows that under the local model, the convergence rates are nearly the same between user-level and item-level cases for distributions with bounded support. For heavy-tailed distributions, the user-level rate is even faster than the item-level one."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Local differential privacy",
"minimax"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a9ae0d50e13ee4fc6188b1059291ff9ce7b533be.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learning with User-Level Local Differential Privacy"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
15lk4nBXYb | CCM-DiT: Camera-pose Controllable Method for DiT-based Video Generation | main | Active | Video Generation;Diffusion Models | generative models | 3;3;3;3 | 3;5;3;4 | 2;3;2;2 | 2;2;2;1 | 1;2;2;1 | 3 | 3.75 | 2.25 | 1.75 | 1.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The idea of converting camera poses into pixel-wise embeddings is novel, which allow the video generation model to effectively understand the camera motion.\n\n2. This paper studies three common ways of incorporating camera pose embedding into a DiT model, which could be useful for future work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an approach to enable controlling camera pose for video generation based on Diffusion Transformer (DiT). It converts pixel-wise motion field based on Plucker coordinates into a sparse motion field, which is then injected into the temporal attention part of DiT. LoRA is used to fine-tune a pre-trained DiT model (Open-Sora). Experimental results on the RealEstate10K dataset are reported."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed sparse motion encoding module seems almost identical to the standard pixel-wise Plucker embedding. Compared with Eq. (3), the only difference in Eq. (4) is the embedding computation are performed on a set of sparse locations controlled by $s_x$ and $s_y$. It is not clear how the camera \"motion\" is encoded. Does the proposed approach convert the pixel-wise motion vectors shown in Fig. 4 into embeddings? \n\n2. A lot of definitions are not clear and math symbols are not used rigorously in the presentation, making the paper hard to follow and understand.\n\n- a) The Sparse Motion Encoding Module and Temporal Attention Injection Module are not shown in Fig. 1 at all.\n\n- b) In line #179 on page 4, how is $RT$ defined? Is it matrix multiplication similar to $RK^{-1}$?\n\n- c) In line #195 on page 4, it says $F_s\\in \\mathbb{R}^{L\\times M\\times N}$? According to the definition in Eq.(4), the channel dimension shouldn't be 1?\n\n- d) In Fig. 2, $c_p$, $c_s$, and $c_l$ are not defined in the main text. And the shape of $c_l$ is not clearly explained either.\n\n- e) In Fig. 3, what are $s$, $p$, and $p_m$?\n\n3. The first two items of the claim contributions of the paper are essentially identical. Both of them are about incorporating camera poses into a DiT model.\n\n4. Only visual results of simple camera motion (zoom in, zoom out, and roundabout) are shown in the paper. No supplementary results are available. It is therefore hard to gauge the effectiveness of the proposed approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Could the authors provide a user study to assess the quality?\n\n- Could the authors upload the generated videos and visual comparisons with baselines?\n\n- Could the authors provide the implementation details of cross-attention and adaptive normalization? Including where these computations happen in relation to temporal attention computation. (In Figure 3, the injection happens between two temporal attention layers. This figure is wrong to me as exactly one temporal attention should exist either before or after the injection.)\n\n- Could the authors provide the reasons why the models are trained on only 16-frame videos?\n\n- Could the authors detail how the model is extended to 72 frames given the model is trained on 16 frames?\n\n- The authors mention that \"object movement tends to be limited to small-scale motions\". I think this can be a big issue. Could the authors provide a detailed comparison with other baselines?\n\n- Is the motivation for introducing sparse sampling of the motion field for computational efficiency? The authors lately argue that sparse sampling improves the results, but I am not convinced why the performance is improved. Could the authors provide more detailed reasons behind that?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- To the best of my knowledge, this is the first work that tackles camera-motion-controlled video generation with open-source DiT (i.e. opensora).\n\n- The idea of sparsely sampling motion fields before inputting them into the VAE encoder is new.\n\n- They demonstrate that adaptive normalization for conditioning camera motions is the effective strategy for camera-conditioned video generation for the first time, which is consistent with the results demonstrated in trajectory-controlled generation (e.g. Tora). \n\n- They quantitatively demonstrate that the generated videos have better motion and visual quality for 72-frame generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an approach to training a DiT-based video diffusion model (OpenSora) to control the camera motions of generated videos. \nThe camera motion, which is represented by Plücker coordinates, is first sparsely sampled (downsampled by x40) and then encoded into the \"motion latent\" via VAE encoder (MegViT-v2, inspired by Tora). Finally, motion latent is injected into the temporal attention layer of DiT via adaptive normalization. The model is finetuned on 16-frame videos from the RealEstate10K dataset. The authors demonstrate that visual quality and motion accuracy (FID, FVD, CamMC) outperformed baselines for 72 frame generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Novelty:\n\n- As the authors have already acknowledged, the idea of using Plucker coordinates has already been introduced (e.g., VD3D). Additionally, the use of a VAE encoder and adaptive normalization has already been introduced by Tora. Following that, the main technical contribution is introducing a sparsely sampled motion field. The author argues that sparsely sampled motion fields contribute to performance improvement, but the authors fail to provide details results (e.g., visual results, more ablation study in Table 2, what about x1?) nor detailed motivation. Additionally, choosing this downsample factor seems heuristic with no intuitive justifications. I would appreciate it if the authors could provide more ablation studies and technical motivations for applying sparsely sampled motion fields.\n\nExperiments:\n\n- Although the model is trained on a 16-frame dataset, the model performs worse than other baselines for 16-frame generation. Additionally, the motivation for training only on 16-frame videos is unclear, given the model is tasked to generate longer-frame videos during the experiments. I would appreciate it if the authors could provide more explanations for this decision.\n\n- The authors did not provide sufficient qualitative results or user study, where the superiority of their method is not convincing. \n\nClarity:\n\n- The paper lacks implementation details. For instance, I am not sure how adaptive normalization and cross-attention are performed exactly. Figure 3 seems inaccurate because the injection happens between two temporal attention layers, where the temporal attention layer should exist in only one of these locations. Please see the questions below for more requests for clarification."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Besides the questions in the first point of weakness section, I have the following questions.\n1. The MagVit2 model is designed to be able to compress the images and videos in a single model. Using some padding, the first image is treated as a separate image, thus the training of MagVit2 VAE is 17 frames (line 17) is reasonable. But, in line 259, the author said \"we extract 16-frames...\", I want to know how those 16 frame are padded and what is the output of the VAE encoder?\n2. Can the author provide the reconstruction results, using l1 loss, for the reconstructed sparse Plücker embedding?\n3. Whether the motion degree of objects of the whole scene degraded after adding the camera control-related modules? \n4. In line 303, the author state that they use the different resolution for different video generation models, can the FID, FVD fully reflect the ranking of different models?\n5. In the visualization results, the camera trajectories seems too simple, focusing on panning and zooming. I remember there are some more complex camera trajectories in RealEstate10K dataset, can the author provide some quantitative or qualitative results on those complex camera trajectories?\n6. How to calculate the CamMC metric for SVD, AnimateDiff, OpenSora model, since they cannot take the camera trajectories as input."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper train a VAE model to temporally downsample the Plücker embeddings, easing the conflict of different temporal length between Plücker embedding and the latent features. \n2. The visualization results demonstrate the effectiveness of the proposed method on some simple camera trajectories, like paning and zooming."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims at controlling the camera viewpoints of the videos generated by the DiT-based video diffusion models. To achieve the precise camera viewpoint control, this paper utilizes the Plücker embedding as the camera representation. The Plücker embeddings are per-frame spatial maps, while the DiT-based diffusion (like OpenSora) do some downsamples in the temporal dimension. To deal with this conflict, this paper proposes a Sparse Motion Encoding Module to temporally downsample the Plücker embeddings, with the same ratio as the OpenSora VAE. This Sparse Motion Encoding Module is implemented by a MagVit2 like causal VAE. The generated latent motion is injected into the temporal attention layer of OpenSora using an adaptive normalization layer. Experiments demonstrate the superiority of proposed method on both short and long video generation (with camera control) task. Some ablation studies also prove the effectiveness of the proposed Sparse Motion Encoding Module and Temporal Attention Injection Module."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation of Sparse Motion Encoding Module is not well presented. Why the VAE model is used to compress the Plücker embedding? The encoder of VAE can be used to compress the Plücker embedding, what is the decoder used for? Besides, generally, the VAE model will not bring too much extra computation, and it can be reused once encoded, why this module **sparsely** sample some Plücker motion vector?\n2. The writing is not good for this manuscript. For example, there are some typos, like the **MegVit-v2** in Line 198, the inconsistency of OpenSora and Open-Sora. Besides, some details are missing, like what is the input of the Sparse Motion Encoding Module. The rows 2 and 4 in Figure 4 does not provide too much information.\n3. The experiments is not very convincing. See Question part."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. In Fig 1, I am curious how to convert the text instruction \"Zoom-in\" to the motion field.\n2. Eq (3) is unclear. Is P_{x,y} the Plucker embedding? But how to get R, K and t from the dataset? Does the dataset provide such information?\n3. What is the major difference between this paper and the motionCtrl [1]? A detailed comparison of the proposed method with MotionCtrl, highlighting key differences in approach, architecture, and performance, would be helpful.\n\n[1] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. 2024. MotionCtrl: A Unified and Flexible Motion Controller for Video Generation. In ACM SIGGRAPH 2024 Conference Papers (SIGGRAPH '24). Association for Computing Machinery, New York, NY, USA, Article 114, 1–11. https://doi.org/10.1145/3641519.3657518"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors add the camera pose in the video generation, which is an interesting point."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a DiT-based video generation method that embeds the camera poses as controllable signals. They use LoRA to fine-tune the attention layer parameters in the training. RealEstate10K dataset is used in the evaluation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Many details are missing in Fig 2. I do not find any explanations of what does frame pose mean and how to get the frame pose. How to convert frame pose to camera pose is also very unclear. It would be helpful to provide a step-by-step description of how to extract and process the pose information from the dataset, including definitions of frame pose and camera pose, and the conversion process between them.\n2. The method is only evaluated on a single dataset, which is not sufficient to verify the effectiveness of the method. For example, authors can test on videos from WebVid and HD-VILA following MotionCtrl [1] paper.\n\n[1] Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. 2024. MotionCtrl: A Unified and Flexible Motion Controller for Video Generation. In ACM SIGGRAPH 2024 Conference Papers (SIGGRAPH '24). Association for Computing Machinery, New York, NY, USA, Article 114, 1–11. https://doi.org/10.1145/3641519.3657518"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ccmdit,\ntitle={{CCM}-DiT: Camera-pose Controllable Method for DiT-based Video Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=15lk4nBXYb},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite the significant advancements made by Diffusion Transformer (DiT)-based methods in video generation, there remains a notable gap with camera-pose perspectives. Existing works such as OpenSora do not adhere precisely to anticipated trajectories, thereby limiting the utility in downstream applications such as content creation.\nTherefore, we introduce a novelty approach that achieves fine-grained control by embedding sparse camera-pose information into the temporal self-attention layers. We employ LoRA to minimize the impact on the original attention layer parameters during fine-tuning and enhance the supervision of camera-pose in the loss function.\nAfter fine-tuning the OpenSora’s ST-DiT framework on the RealEstate10K dataset, experiments demonstrate that our method outperforms LDM-based methods for long video generation, while maintaining optimal performance in trajectory consistency and object consistency."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Video Generation",
"Diffusion Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1614a2503c1fc1701e82d661c9176a0928e52260.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "CCM-DiT: Camera-pose Controllable Method for DiT-based Video Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
16O8GCm8Wn | Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances | main | Active | AI Security;Watermark;Diffusion Model;Image Editing | generative models | 5;6;6;6;6 | 4;3;4;4;2 | 2;3;2;3;2 | 2;3;3;3;2 | 3;3;3;3;3 | 5.8 | 3.4 | 2.4 | 2.6 | 3 | -0.375 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is clearly written and organized, with effective figures explaining both W-Bench and VINE.\n\n- The paper provides rigorous evaluations, testing VINE and eleven other watermarking models on diverse editing techniques."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents VINE, a watermarking method designed to withstand various image editing techniques enabled by advanced generative models. It also introduces W-Bench, a benchmark that evaluates watermark robustness against multiple types of edits, making it a valuable resource for watermarking research."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- EditGuard is primarily designed for editing detection, not robust watermarking, and it was not tested with its most robust configuration. This impacts the fairness of the evaluation, as EditGuard’s focus and strengths differ from VINE’s intended use."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Although the watermarking against Image Editing is interesting and novel, I cannot get the value of this task. Can you elaborate the perspective of this task?\n2. The author hypothesizes that a powerful generative prior can facilitate embedding information more invisibly while enhancing robustness (Line 249). Why hypothesize that? What are the assumptions based on?\n3. What is the purpose of finetuning VINE-B to VINE-R using Instruct-Pix2Pix? (Line 323)\n4. Why is the resolution not unified? (Line 1042) \n5. Is VINE only work on the Image Editing task? What about other common watermarking tasks?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe proposed method is easy yet effective. The combination of different losses is reasonable.\n2.\tThe validation of watermarking patterns in high-frequency bands after image editing and blurring is solid.\n3.\tThe experimental results show the proposed watermarking method is robust enough against multiple image editing methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces W-Bench, the first comprehensive benchmark designed to evaluate the robustness of watermarking methods against a wide range of image editing techniques, including image regeneration, global editing, local editing, and image-to-video generation. Authors reveal that image editing and blurring distortion predominantly remove watermarking patterns in high-frequency bands, while those in low-frequency bands remain less affected. Based on this, distortions are used as surrogate attacks to overcome the challenges of using T2I models during training and to enhance the robustness of the watermark. The authors approach the watermark encoder as a conditional generative model and introduce two techniques to adapt SDXL-Turbo, a pretrained one-step T2I model, for the watermarking task. Experimental results demonstrate that VINE is robust against multiple image editing methods while maintaining high image quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThis paper lacks the validation of hypotheses in Line 249.\n2.\tThe task of watermarking against Image Editing seems worthless.\n3.\tThe watermarking pattern existing in high-frequency bands after image blurring is not a new discovery. However, the author spends too much text on it."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I have doubts about the results in Figure 5(a). The experimental results show that 250-step noise in image regeneration can significantly disrupt the watermark(bit acc). Does this mean that global image editing (e.g., SDedit, prompt2prompt) with 250 steps can also completely remove the watermark? If so, I believe this result does not demonstrate robustness, as global image editing often uses even more denoising steps."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Comprehensive Evaluation Framework: W-Bench covers a variety of image editing techniques, providing a comprehensive platform for assessing the robustness of watermarking methods.\n\n2. Innovative Use of Generative Priors: VINE embeds watermarks by adapting pretrained large-scale generative models, making the embedding more imperceptible and robust.\n\n3. This task is innovative, focusing on watermarking that is robust against image editing methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new evaluation benchmark, W-Bench, designed to test the robustness of image watermarking methods under image editing supported by large-scale generative models. W-Bench includes image regeneration, global editing, local editing, and image-to-video generation. The authors also propose VINE, a watermarking method utilizing generative priors to enhance the robustness and visual quality of watermark embedding. Experiments show that VINE outperforms existing watermarking methods across various image editing techniques."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "TreeRing, Gaussian Shading, and RingID, which add watermarks in the frequency domain of the initial noise, are generally considered robust against image editing (e.g., prompt2prompt) and regeneration. This paper lacks this crucial comparison. If these methods are also robust to image editing, the contribution of this paper may be diminished.\n\nReference:\n1. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust\n2. Ringid: Rethinking tree-ring watermarking for enhanced multi-key identification\n3. Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the reason for choosing only these four types of image editing methods (image regeneration, global editing, local editing, and image-to-video generation) to evaluate the image watermarking robustness, against image editing? \n2. What is the motivation for using SDXL-Turbo as the generative prior for watermark encoding? If it is just to avoid multi-step sampling, there should be lots of one-step generative models to choose from, for example, the SDXS [2]. \n\n[2] Song, Yuda, Zehao Sun, and Xuanwu Yin. \"SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions.\" arXiv preprint arXiv:2403.16627 (2024)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper focuses on the image watermark robustness against image editing, which is important but has rarely been explored.\n2. The proposed benchmark includes different types of image editing approaches, rendering it comprehensive to some extent.\n3. The proposed SDXL-Turbo-based robust image watermarking method is novel, and the experiments demonstrate its effectiveness.\n4. The paper is overall well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an image watermarking benchmark, specifically aiming to evaluate the watermark robustness against four image editing methods. In addition, an image watermarking that is robust against image editing is proposed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The benchmark only considers four types of image editing methods (image regeneration, global editing, local editing, and image-to-video generation). Other image editing methods such as style transfer are not considered.\n2. Only one image-to-video generation method is included in the benchmark. The robustness against other image-to-video generation methods such as [1] is not evaluated.\n\n\n[1] Hu, Yaosi, Chong Luo, and Zhenzhong Chen. \"Make it move: controllable image-to-video generation with text descriptions.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.Figure 6 in the appendix shows that VINE exhibits higher brightness in the central region, providing evidence for why the proposed watermarking method demonstrates strong robustness against image editing. If the author can thoroughly elucidate the principles underlying this phenomenon, it may address the previously mentioned issue of \"a disconnect between the author's analysis of watermark robustness and the design of the watermark model.\"\n\n2.The experimental results demonstrate that the proposed watermarking method, VINE, significantly enhances robustness against various image editing techniques. Has the author considered using representative image editing as an attack template, incorporating the associated attack loss as one of the objective functions during the training phase? Alternatively, how might integrating the specific effects of image editing on watermarks into the design of the watermarking model influence the results of the watermarking algorithm?\n\n3. In the experimental section, some of the differences between the subjective experimental results are difficult to discern visually. The author could consider selecting a subset of images and enlarging specific regions to facilitate reader comprehension."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper presents the first holistic benchmark that incorporates four types of image editing techniques to assess the robustness of watermarking methods. This is significant for evaluating the robustness of future watermarking methods, as it helps to promote the standardization and comprehensiveness of robustness assessments. By addressing a critical gap in evaluating watermark resilience against sophisticated transformations enabled by modern generative models, this work encourages researchers in the field of image watermarking to focus on the robustness of their methods against emerging image editing technologies, including image regeneration, global editing, local editing, and image-to-video generation. Overall, the paper is clearly articulated and well-supported."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper evaluates eleven watermarking methods against prevalent image editing techniques and demonstrates that most methods fail to detect watermarks after such edits. It also introduces a watermarking model based on SDXL-Turbo, which exhibits high robustness against these editing methods while maintaining high image quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper explains the reasons behind the watermarking algorithm's resistance to image editing from the perspective of the frequency domain. It notes that the watermarking methods exhibiting high robustness against image editing in certain scenarios display prominent patterns in the low-frequency bands, which aligns with the general understanding of watermark robustness. However, the paper primarily focuses on the robustness of watermarking methods against image editing techniques based on generative models. Therefore, summarizing the unique effects of such image editing techniques on the watermark is more meaningful.\n2. We observe that the proposed watermarking method, VINE, shows higher brightness in the central region of the frequency domain, which corresponds to the author's analysis of watermark robustness. However, the paper does not clarify why this watermarking model based on SDXL-Turbo exhibits such characteristics, leading to the author's specific design of the watermark algorithm. In other words, there seems to be a disconnect between the author's analysis of watermark robustness and the design of the watermark model."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present the first comprehensive benchmark for evaluating the robustness of eleven watermarking methods against prevalent image editing techniques and propose a watermarking model based on SDXL-Turbo that remains robust to these editing methods."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024robust,\ntitle={Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=16O8GCm8Wn},\nnote={under review}\n}"
},
"abstract": {
"value": "Current image watermarking methods are vulnerable to advanced image editing techniques enabled by large-scale text-to-image models. These models can distort embedded watermarks during editing, posing significant challenges to copyright protection. In this work, we introduce W-Bench, the first comprehensive benchmark designed to evaluate the robustness of watermarking methods against a wide range of image editing techniques, including image regeneration, global editing, local editing, and image-to-video generation. Through extensive evaluations of eleven representative watermarking methods against prevalent editing techniques, we demonstrate that most methods fail to detect watermarks after such edits. To address this limitation, we propose VINE, a watermarking method that significantly enhances robustness against various image editing techniques while maintaining high image quality. Our approach involves two key innovations: (1) we analyze the frequency characteristics of image editing and identify that blurring distortions exhibit similar frequency properties, which allows us to use them as surrogate attacks during training to bolster watermark robustness; (2) we leverage a large-scale pretrained diffusion model SDXL-Turbo, adapting it for the watermarking task to achieve more imperceptible and robust watermark embedding. Experimental results show that our method achieves outstanding watermarking performance under various image editing techniques, outperforming existing methods in both image quality and robustness. Our model and benchmark will be publicly available."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"AI Security",
"Watermark",
"Diffusion Model",
"Image Editing"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e9740056026febb22ac684c0b11d0b966af36c8b.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
16kG5aNleS | Transformer Meets Twicing: Harnessing Unattended Residual Information | main | Active | transformers;self-attention;oversmoothing;nonlocal smoothing;nonparametric regression | foundation or frontier models, including LLMs | 3;5;5;6 | 3;3;4;2 | 3;3;3;3 | 2;2;2;2 | 3;3;3;3 | 4.75 | 3 | 3 | 2 | 3 | -0.324443 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I would be grateful if the authors could respond and address the weaknesses. I am willing to increase my score if the authors could address the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is relatively easy to follow and well-written.\n2. The proposed \"Twicing Attention\" is simple and easy to implement.\n3. Theoretical motivation and the mathematical details behind their motivation and choices have been provided."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The over-smoothing problem in Transformers is a well-known phenomenon, where the outputs of different attention layers in a Transformer model are highly similar. This paper introduces Twicing Attention to address this problem, which uses low-pass NLM smoothing filters to tackle this problem. The core idea can be phrased as, instead of using the standard attention matrix $A$, to use $2A - A^2$."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper compensates for the simplicity of the core idea by over-explaining and being overly verbose. For example, most of the material on pages 7-8 can be summarised in 2-3 paragraphs. Even Algorithm 1 on page 8 is redundant and too verbose. The algorithm's objective is clear and simple: to compute $2A - A^2$. I don't think one needs 12 lines to explain that.\n2. Instead, the paper could have added to its contribution through a more thorough study. E.g., one avenue for improvement would be to consider other candidates besides the $2A - A^2$ and then compare them in the considered benchmarks"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My question lies in the efficiency comparison (Tab. 4). Despite the fact that Twicing has the same complexity of $O(N^2 d)$ as claimed in the paper, it still increases the overhead by an additional 50% due to the extra matrix multiplication in line 7, Alg. 1. However, Tab. 4 indicates that implementing Twicing or not will not incur big difference on both speed and GFLOPs. What is the reason behind that? I would appreciate a more detailed efficiency analysis & comparison in the rebuttal phase if possible."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Novelty: The authors introduce the Twicing Attention mechanism to slow down the eigenvalue decay associated with representational collapse.\n2. Theoretical Contribution: the authors provide mathematical validation for the Twicing Attention’s capability to retain information across layers.\n3. Experiments: the authors evaluate their proposed methods on both language models and vision models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The self-attention mechanism's representational capacity diminishes significantly across layers, and this oversmoothing effect is reducing overall performance. This paper introduces Twicing Attention, a novel mechanism that connects self-attention computations with low-pass non-local means smoothing filters. By employing a kernel twicing procedure, it alleviates the low-pass effects of NLM smoothing while preserving meaningful information from residuals. Twicing Attention offers slower decay of representational capacity and improved accuracy across different data modalities. Significant performance improvement brought by Twicing attention is observed in multiple tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited improvement: The gains in clean data settings (such as on ImageNet in Tab. 1) are modest.\n2. Lack of comparison: the work does not compare its method with alternative solutions that address oversmoothing, such as regularization strategies.\n3. Lack of ablations: the authors are suggested to consider applying the proposed method at different layer depths or intervals and evaluate their difference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. For pure curiosity, I would like to ask what the authors think the performance of this method would be in more extreme cases, which in this case refers to two main scenarios: first, the performance on LLMs with a very large number of parameters. Second, on non-classical Transformer structures, such as Linear Transformer and other analogs."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written, with clear expression of formulae and symbols, and will be highly readable.\n2. The authors discuss the recurring problem of decay of representational capacity in Transformer, which has also been recognized as a manifestation of rank collapse in other studies. Instead of directly continuing the study of related work on rank collapse, the authors start with the NLM and try to gradually restore the cause of this phenomenon and again based on the proposed method that can alleviate this problem, the research angle is more interesting and also quite theoretical significance.\n3. The author's description of the solution is complete and accompanied by thorough proofs, the process is clear and easy to understand, and the work done is very informative."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper propose the Twicing Attention, a novel attention mechanism that uses kernel twicing procedure in nonparametric regression to achieve slower decay of representational capacity and improved accuracy across various data modalities and tasks. And the design of this module builds on the study of the connection between self-attention and NLM smoothing filters. The method was tested on a public dataset, yielding promising results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Admittedly, the authors' work is very rich and makes a very profound contribution at the theoretical level, but in my humble opinion, the authors' approach serves as a skillful level of reconciliation that moderates the rank collapse in depth, whereas a similar reconciliation skill is actually not uncommon in rank collapse-related research directions. I am not accusing the authors of not being innovative enough, but I hope that the authors can go further at the theoretical level and expand the sequence model that can really reduce the loss of information unlike the classical Transformer.\n2. The author's research is more profound, but the experiments are less adequate, with too few test datasets and too few comparison methods. I tend to think that this is the result of too much time constraints, and I hope that the author will add more datasets as well as other experiments on the Transformer if there is enough time."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Visualizations on earlier layers and more heads of the transformers would help to strengthen your claim.\n- Please refer to the weakness.\n- I am open to increase my score if you alleviate my concerns."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Theoretical foundation: The paper provides analysis connecting self-attention to NLM filters. \n- Fluent presentation flow: Messages of this paper are presented well, with well-demonstrated background knowledge and motivation.\n- Empirical validation: The authors provide visualizations of attention heatmaps, which validates their claim that their method preserve the diversity of token representations"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the Twicing Attention mechanism, drawing inspiration from established connections between self-attention and low-pass non-local means (NLM) smoothing filters. The authors demonstrate two key advantages of their proposed method: 1) a theoretically proven slower decay of representational capacity across transformer layers, and 2) improved performance on both vision and language tasks across multiple datasets. The paper's primary contribution lies in its theoretical framework. It first establishes that representation collapse in transformers stems from the inherent low-pass characteristics of NLM filters. The authors then provide proof showing that the twicing formulation ($2A^2-A$) offers superior theoretical properties compared to standard attention ($A$), particularly in preserving token diversity and meaningful feature representations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Narrow Problem Framing**: The paper's central premise regarding \"representation collapse\" in transformers warrants closer scrutiny. Recent research has demonstrated that this phenomenon is not inherent to transformer architectures. For instance, DINO(Caron et al., 2021) demonstrates that self-supervised training can produce well-structured, diverse token representations in Vision Transformers. Furthermore, Darcet et al. (2024) provide evidence that apparent \"collapse\" may actually reflect a more nuanced information distribution pattern, where artifacts in attention heatmaps encode global information while non-artifact tokens maintain distinct representations, albeit with lower similarity to the CLS token.\n- **Additional computational cost and marginal empirical improvements**: Performance increase in Table 4 is in trade of computational cost. Hardly can engineers be convinced to substitute the original attention with the proposed one.\n- **Limited Evaluation Scope**: The authors report the empirical performance on classification tasks for vision models. Yet dense tasks such as segmentation are more direct and effective in evaluating the structure of patch representations produced by the method."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel attention mechanism to enhance expressive power of transformers by leveraging residual information."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024transformer,\ntitle={Transformer Meets Twicing: Harnessing Unattended Residual Information},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=16kG5aNleS},\nnote={under review}\n}"
},
"abstract": {
"value": "Transformer-based deep learning models have achieved state-of-the-art performance across numerous language and vision tasks. While the self-attention mechanism, a core component of transformers, has proven capable of handling complex data patterns, it has been observed that the representational capacity of the attention matrix degrades significantly across transformer layers, thereby hurting its overall performance. In this work, we leverage the connection between self-attention computations and low-pass non-local means (NLM) smoothing filters and propose the Twicing Attention, a novel attention mechanism that uses *kernel twicing procedure* in nonparametric regression to alleviate the low-pass behavior of associated NLM smoothing with compelling theoretical guarantees. This approach enables the extraction and reuse of meaningful information retained in the residuals following the imperfect smoothing operation at each layer. Our proposed method offers two key advantages over standard self-attention: 1) a provably slower decay of representational capacity and 2) improved accuracy across various data modalities and tasks. We empirically demonstrate the performance gains of our model over baseline transformers on multiple tasks and benchmarks, including image classification and language modeling, on both clean and corrupted data."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"transformers",
"self-attention",
"oversmoothing",
"nonlocal smoothing",
"nonparametric regression"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/465d7006fcae05f2c55e553fa21c9e7843e3fdd4.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/feb0fa1c50098dc8e35348fc610d43ac06bcf816.zip"
},
"title": {
"value": "Transformer Meets Twicing: Harnessing Unattended Residual Information"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1762Fbr4HK | Deep Generative Modeling for Identification of Noisy, Non-Stationary Dynamical Systems | main | Active | system identification;non-autonomous differential equations;dynamical systems;variational inference;variational autoencoders;SINDy;sparse regression;uncertainty quantification;latent variable discovery;biophysics applications;biology;neuroscience | learning on time series and dynamical systems | 3;5;5;6 | 4;4;4;4 | 2;3;3;3 | 2;2;3;3 | 3;2;2;3 | 4.75 | 4 | 2.75 | 2.5 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- For the switch signals (Fig. 2 a-c, also Fig. 3A low noise setting), the inferred ODE parameter time series seem to exhibit high frequency oscillations on top of the correct switch-like dynamics. Is there an intuitive explanation why the encoder-decoder architecture struggles in inferring the correct switching dynamics and how this could be addressed?\n- Results of Fig. 3B look rather weak to me, can the authors report Pearson’s $r$ of noise lvl vs. std?\n- I’m confused by section 4.6 & Fig. 7; How exactly does the dynamic SINDy approach compare now to the proposed baseline methods based on SLDS and (vanilla?) SINDy with a group sparsity norm? I think Fig. 7 would be much clearer if the authors would find a design to compare all comparison methods side-by-side.\n- ll. 409-411: Can the authors provide references for the mentioned studies?\n- How do other methods like reservoir computing compare to the dynamic SINDy approach qualitatively and quantitatively in the settings discussed in the manuscript (see e.g. [1])?\n- How does the approach perform on e.g. benchmarks used in [2], which exhibit different bifurcations than the ones discussed in this paper?\n\nI am very happy to increase my score if the authors adequately address my concerns and questions.\n\nReferences:\n\n[1] Köglmayr, Daniel, and Christoph Räth. \"Extrapolating tipping points and simulating non-stationary dynamics of complex systems using efficient machine learning.\" Scientific Reports 14.1 (2024): 507.\n\n[2] Patel, Dhruvit, and Edward Ott. \"Using machine learning to anticipate tipping points and extrapolate to post-tipping dynamics of non-stationary dynamical systems.\" Chaos: An Interdisciplinary Journal of Nonlinear Science 33.2 (2023)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Learning a parsimonious representation of non-autonomous DS is extremely important and relevant in many scientific disciplines, which makes the approach very promising.\n- I think it is highly interesting that the encoder-decoder architecture is able to predict the ODE parameters with this level of fidelity in an unsupervised fashion (as there is no direct reconstruction loss for the ODE parameter time series involved in the loss function (7)).\n- The method is tested against other baselines and also on a real-world dataset (C. elegans)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes ‘dynamic sparse identification of nonlinear dynamics’ (dynamic SINDy), a deep learning framework for identifying governing equations in noisy, non-stationary and nonlinear dynamical systems (DS). By combining variational autoencoders (VAEs) and previous work on SINDy, it enables unsupervised inference of the underlying ODE systems’ parameters while extracting a global and parsimonious nonlinear dynamical model. The approach is validated on both synthetic and real-world data and is compared to other methods in the field, demonstrating great potential for scientific machine learning community."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The authors should stick to the ICLR style guide and use the 'author, year' reference style instead of mere numerical numbers (i.e. APA style instead of IEEE). This increases readability and helps the reader to understand the train of thought of the authors, as one directly sees on which work the authors base certain statements.\n- Center box in Fig. 1B is in parts hard to read as (font) sizes vary a lot. I think it would be better to shrink down Fig. 1A a touch and to increase size of Fig. 1B, especially as it describes the main framework of the manuscript.\n- I also think the figure group titles ('suptitles') above Fig. 1, 2, and 4 are superfluous and their message should be put into the figure caption. This would create additional space (e.g. to compensate for the change in referencing style).\n- I think ‘dynamic HyperSINDy’ deserves a bit more attention in the main text, which lacks explanation on how this approach really works. Explaining this method in the supplement makes the corresponding results a bit hard to read and almost forces the reader to read the supplement section 1.2.2.\n- All of the employed (benchmark) datasets are fairly low dimensional (2-3D). The authors do not address the scalability of the method to high dimensional systems (which can not be sufficiently described by the first few PCA components). I think this is a major drawback, as this setting is highly relevant to many real-world systems.\n\nMinor details:\n- typo: Fig. 3B y-axis label say “approxiate std”\n- l. 353 It just says 6A and 6B, while the authors probably reference Fig. 5A and 5B? Also l. 360 it says 6C instead of 5C.\n- typo: supplement l. 262 it says weight decay of 1e5 (I assume 1e-5?)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "have the authors applied their method to any datasets requiring a higher dimensional latent space to see how quality of the learned dynamical system scales with dimensionality of the latent space? \n\nhave the authors considered comparisons to a method more adept at handling more smooth like transitions of dynamics such as [1] which considers a smoothly switching latent system. \n\n[1] Kurle, Richard, et al. \"Deep rao-blackwellised particle filters for time series forecasting.\" Advances in Neural Information Processing Systems 33 (2020): 15371-15382."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "dynamical system reconstruction or system identification is an important topics, and learning more interpretable models of system dynamics, such as time varying ode representations like the authors, has broad application to many fields. additionally, the authors consider a comprehensive amount of toy experiments — i appreciate that the authors considered experiments with several time varying motifs (i.e. switching, sigmoid, switch, fourier) and show that their method can recover the time varying coefficients with a calibrated measure of uncertainty."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "the authors introduce dynamic sindy — an extension of the sindy for uncovering nonstationary dynamical systems from data. the authors use a time-series VAE architecture to map their data to time-varying coefficients that are linearly combined with a fixed library of basis functions to produce an estimate of the data derivative. they conduct experiments using several toy datasets showing that their dynamic sindy can recover the coefficients of time-varying dynamical systems, even in the case that the entire system is not observed. finally, they show on low-dimensional representations of c. elegans neural recordings, that their method recovers a representative dynamical system of the first principal component."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "i found the separation between what has previously been done in the literature and what are the authors exact main novel contributions to be unclear; for example, more precise statements about the differences with hypersindy (around line 096) would have been very helpful. in its current form, it is hard to parse from the manuscript what their exact technical advances are.\n\na lot of real-estate in the paper is taken up by the experimental plots. often i found the amount of information conveyed by the plots disproportionate to the amount of space they take up — making more compact figures seems like it would work to the authors advantage. additionally, information could be conveyed better i.e. thick lines and their ordering (i.e. green/blue lines in Fig 2d are not clear). Fig 1B has a lot of small labels and the zoomed out view of timeVAE does not feel like it helps much."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Variational inference is known to provide overconfident uncertainty estimates because of the KL term encouraging mode-covering behavior. Can the authors discuss this further and the impact of this on the method?\n- How robust are the identified parameters? Is there a quantitative relationship between the robustness and (1) the size of the library, (2) the level of noise in the system?\n- It is stated that the method can only deal with non-stationarity arising from separable time-varying variables. Can the authors elaborate more on why this is the case?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The authors address an important and often overlooked problem in modeling time series data: learning interpretable models from non-stationary dynamical systems.\n- The problem and the proposed method are presented clearly, and the paper is generally easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose DynamicSINDY, an approach that aims to learn sparse ODEs with time-varying coefficients from noisy, non-stationary time series data. The authors achieve this by combining SINDy with sequential VAEs to probabilistically infer the coefficients and their time-varying values. The method is evaluated on three synthetic datasets and a calcium imaging dataset of C. elegans."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the motivation of the problem is clear, I find that the conducted experiments fail to convince the reader of the method's impact in real-world settings. The experiments mainly focus on synthetic datasets that are artificially created to fit this problem and avoid most challenges often encountered in real-world datasets (high dimensionality, non-Gaussian noise, large space of possible coefficients). I think the paper would be greatly benefit if the authors can demonstrate the method in such contexts. \n\n- The C. elegans dataset used to demonstrate the method is quite simple, and the method is only applied to low-dimensional representations obtained via PCA (which in this case is enough to explain the data). The impact of using DynamicSINDY in this case is not well motivated, and the obtained results don't add any much scientific insights, especially since identifiability is not discussed.\n\n- While this is not necessarily always a weakness, the proposed method is a straightforward combination of two existing approaches. Taken together with the limited experiments section and the lack of significant technical innovations, I find the overall contribution of the paper in its current form rather limited."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What can one do with the uncertainty? Is it necessary for accurate estimate? Would the uncertainty provide insights for scientific questions? Suggestion: elaborate more or showcase scenarios where the uncertainty is useful vs. method.\n- How does the method perform for noise driven dynamics? e.g. system with multiple meta-stable points or line attractors.\n- What scientific insights the proposed method could offer for the systems that don't have particular a priori form of ODEs? If we don't know u(t) switches, would it discover that?\n- Line 461 typo: rLDS\n- Suggestion: put Dynamics SINDy reconstruction together with SLDS result in Fig. 7."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Modeling Time-Varying Coefficients: Dynamic SINDy employs deep generative models, specifically variational autoencoders (VAEs), to learn the time-varying nature of ODE coefficients. This enables the identification of non-autonomous systems that exhibit complex dynamics.\n- Uncertainty Quantification: The use of VAEs allows dynamic SINDy to quantify uncertainty in the estimated ODE coefficients. \n- Latent Variable Discovery: Dynamic SINDy can effectively uncover hidden (latent) variables that influence the observed dynamics. This is demonstrated through an example with the Lotka-Volterra equations, where only prey population data is available.\n- Application to Real-World Data: The paper validates dynamic SINDy's capabilities on both synthetic and real-world data, i.e. the C. elegans data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents dynamic SINDy, a machine learning method designed to identify the governing equations of noisy, nonlinear, and non-autonomous dynamical systems from data. Dynamic SINDy combines variational inference with sparse identification of nonlinear dynamics to identify time-varying coefficients in sparse ordinary differential equations. The method is particularly valuable for non-stationary systems with changing parameters over time. The contributions include\n- Modeling Time-Varying Coefficients: Dynamic SINDy employs deep generative models, specifically variational autoencoders (VAEs), to learn the time-varying nature of ODE coefficients. This enables the identification of non-autonomous systems that exhibit complex dynamics.\n- Uncertainty Quantification: The use of VAEs allows dynamic SINDy to quantify uncertainty in the estimated ODE coefficients. This is crucial for understanding the reliability of the identified model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Uncertainty Quantification: though sec 4.2 shows the estimated standard deviation follows the truth, it only confirms the accuracy of estimation. Nonetheless, it is unclear what use the uncertainty could provide.\n- The examples are mostly using systems that driven by mean/input processes. It's unclear how the proposed method would perform for noise driven dynamics.\n- In the C. elegans example, the assumed form depends heavily on prior knowledge (dimensionality, input) on the dynamics. Though the proposed method has shown very good performance, it's unclear what scientific insights the proposed method could offer especially for the systems that people have limited knowledge about."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a method for identifying non-autonomous differential equations and discovering latent variables in dynamic systems, validated on synthetic data (e.g., the Lorenz system) and applied to neuronal activity data."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024deep,\ntitle={Deep Generative Modeling for Identification of Noisy, Non-Stationary Dynamical Systems},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1762Fbr4HK},\nnote={under review}\n}"
},
"abstract": {
"value": "An important challenge in many fields of science and engineering is making sense of time-dependent measurement data by recovering governing equations in the form of differential equations. We focus on finding parsimonious ordinary differential equation (ODE) models for nonlinear, noisy, and non-autonomous dynamical systems and propose a machine learning method for data-driven system identification. While many methods tackle noisy and limited data, non-stationarity – where differential equation parameters change over time – has received less attention. Our method, dynamic SINDy, combines variational inference with SINDy (sparse identification of nonlinear dynamics) to model time-varying coefficients of sparse ODEs. This framework allows for uncertainty quantification of ODE coefficients,\nexpanding on previous methods for autonomous systems. These coefficients are then interpreted as latent variables and added to the system to obtain an autonomous dynamical model. We validate our approach using synthetic data, including nonlinear oscillators and the Lorenz system, and apply it to neuronal activity data from C. elegans. Dynamic SINDy uncovers a global nonlinear model, showing it can\nhandle real, noisy, and chaotic datasets. We aim to apply our method to a wide range of problems, specifically to dynamic systems where complex parametric time dependencies are expected."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"system identification",
"non-autonomous differential equations",
"dynamical systems",
"variational inference",
"variational autoencoders",
"SINDy",
"sparse regression",
"uncertainty quantification",
"latent variable discovery",
"biophysics applications",
"biology",
"neuroscience"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1131bdc48d0fc37fdb392559d4c95a6c5a799148.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8bcd2f7a9a63c6091074f573e7b45c130491bc0c.zip"
},
"title": {
"value": "Deep Generative Modeling for Identification of Noisy, Non-Stationary Dynamical Systems"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
17U3nlco2r | ChebyNet: Boosting Neural Network Fitting and Efficiency through Chebyshev Polynomial Layer Connections | main | Active | DNN;Chebyshev Polynomial | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;3;3;5;5 | 4;4;3;3;4 | 2;1;2;3;3 | 3;2;2;2;2 | 1;2;2;3;3 | 3.8 | 3.6 | 2.2 | 2.2 | 2.2 | -0.166667 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the questions raised above in the weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The idea of multiplying Chebyshev polynomials of inputs to features as element-wise (Hadamard) product is novel.\n\n2) Experiments show that such ChebyNet of using high order (1,...,9) Chebyshev polynomials may often improve the precision over the plain networks and be often better than using ordinary polynomials (PolyNet)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies a new approach for interactions between non-adjacent layers in neural networks. Existing interactions among non-adjacent layers are typically studied as additive shortcuts as ResNets, dense connections, and attention mechanisms. This paper aims to bring a new type of interactions between nonadjacent network layers, by multiplying Chebyshev polynomials of inputs (up to a downsampling) to features as element-wise or Hadamard products. Since the 0-order Chebyshev polynomial is the identity, such a scheme can be regarded as an extension of existing network features $f(x)$ to those multiplied element-wisely by sums of high order Chebyshev polynomials of inputs, i.e. $f(x) \\circ \\sum_{i=0}^n L_i(g(x))$, where $L_i$ is defined as the Chebyshev polynomials of the first kind recursively and $g(x)$ is a downsampling operation to align the dimensionality of input $x$ with the feature $f(x)$. \n\nThe motivation of exploiting Chebyshev polynomials roughly lies in the fact that their roots, Chebyshev nodes, actually provide a tight bound in polynomial interpolation of continuous functions that minimizes the Runge oscillation phenomenon. Moreover, Chebyshev polynomials in the first kind has a recursive representation which can be easily implemented with deep neural networks. \n\nThe utility of such a construction is demonstrated by several experiments: low dimensional numerical function approximation, MNIST image generation using UNet-diffusion, learning of some dynamical systems (2-body, 3-body, and real pendulum problem), UNet image segmentation (ACDC and BraTS19), and image classification (Cifar-10 and Cifar-100)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The reproducible codes are not provided with the paper anonymously. Since it is mainly an experimental paper for ICLR, reproducible research is necessary to evaluate the experimental results. \n\n2) The motivation of designing ChebyNet architecture seems not clear enough. Among the various possibility of non-adjacent layer interactions, why do the authors choose elementwise product between the sums of Chebyshev polynomials of inputs and features? It seems to me that the 0-th order Chebyshev polynomial recovers the original networks. But what about high order polynomials? Why not additive form? Why not using weighted sum of polynomials while the weights could be tuned?\n\n3) One motivation of exploiting Chebyshev polynomials roughly lies in the fact that their roots, Chebyshev nodes, actually provide a tight bound in polynomial interpolation of continuous functions that minimizes the Runge oscillation phenomenon. Does this property lead to any particular consideration of constructing ChebyNet architecture? Moreover, why does the recursive formulation provide superior numerical stability?\n\n4) In the performance metrics, margins of improvement over the baseline are sometimes small and we are not sure if the improvements are significant. Since this is the first kind of experiments for the proposed methods, it would be better to include certain error bars to account for randomness in evaluations. \n\n5) Figure 5 shows the differences between Cheby-CNN and Poly-CNN in image classification, highlighting the negative correlations on the low orders (0-2) Cheby-CNN. The authors suggest that \"The strong correlation among high-order features suggests that low-order features are already sufficient for representing the underlying information, indicating potential for parameter compression\". However, from Table 5, middle to high order polynomials seem with high performance as well. Is there a principle in polynomial order selections? By the way, in the last row of this table, some number like 77.1, 76.8 seems missing the highlighted bold font as they are higher than the baseline."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses, particularly the motivation for using Chebyshev polynomials to enhance DNNs. How does this method compare to simpler residual or dense connections in terms of inter-layer interaction benefits?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tChebyNet is adaptable and can be seamlessly integrated into multiple existing architectures, such as UNet and HNN, with minimal implementation complexity.\n2.\tThe approach is tested on a range of tasks and exhibits performance gains in most cases, supporting its practical efficacy.\n3.\tChebyNet shows a robust capacity for approximating various mathematical functions, with Table 6 in the appendix indicating superior performance over MLP and Poly-MLP in approximating elementary functions like sign(x) and tanh."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces ChebyNet, a novel architecture aimed at enhancing neural networks by fostering connections between non-adjacent layers, an area typically underexplored in conventional networks. The core motivation is the limited interaction between distant layers, which can constrain a network's capacity to model complex functions. ChebyNet addresses this by employing Chebyshev polynomial basis functions to augment layer connections, which are then fused with outputs, effectively enhancing the network’s representational power. The proposed method is evaluated across various tasks, including regression, image generation on MNIST, and classification, demonstrating that ChebyNet is versatile and improves performance in numerous settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe motivation for ChebyNet could be further clarified. While the paper states that existing networks lack inter-layer connections, there are established models, like DenseNet, that enhance layer interactions. Thus, the benefit of Chebyshev polynomial-based connections versus simpler dense, residual or pyramid connections remains unclear.\n2.\tFigure 1 could be refined for clarity, as it currently suggests that polynomial connections link the network’s input and output directly, whereas, according to the text part, these connections are applied within layers.\n3.\tThe method’s evaluation is limited to small-scale datasets. Testing on larger benchmarks, such as ImageNet, would provide a more compelling demonstration of its scalability.\n4.\tWhile ChebyNet is posited to improve non-adjacent layer interactions, the paper lacks strong empirical / theoretical evidence to substantiate this claim fully.\n5.\tThe paper claims that Chebyshev connections can enhance the efficiency of DNNs; however, no experiments are provided to validate this claim. To my knowledge, additional connections may introduce extra memory and I/O overhead during inference. Supplementary experiments demonstrating the efficiency benefits would strengthen the paper.\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "please refer to weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is well-crafted, effectively illustrating the concepts and experimental results.\nTo validate the proposed method, the authors conduct extensive experiments across various tasks, including image classification and image segmentation. The outcomes of these experiments confirm the efficacy of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the author addresses the issue that previous neural network architectures fail to explicitly model the relationships between different layers. To mitigate this, the author introduces ChebyNet, which models the recursive and polynomial relationships between layers. To validate the proposed method, the authors conduct several experiments across various tasks. The results demonstrate that incorporating relationships between layers enhances performance and suggests a promising direction for network structural design."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Despite the demonstrated experimental improvements, I have several concerns regarding the proposed method. Firstly, could the authors provide an analysis of the memory usage, inference time, and training time of the proposed method? I am interested in determining whether it requires additional resources to train the model. Additionally, the use of MNIST and CIFAR datasets might not be sufficient to thoroughly validate the method; could the authors present results on larger datasets? Furthermore, could the authors discuss the robustness of the proposed method? While modeling the relationship between different layers may increase the capacity of the model, it could also increase the risk of overfitting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Clear method section, with a straightforward explanation of the architectures used would go a long way in understanding the significance of the experiments.\n\nWhat was the reason behind the choice of experiments and baseline architectures? \n\nWhy are Chebishev polynomials particularly good? This is not really clear from the text. Are they a fundamental ingredient that all modern architectures should use to propel their performance further?\n\nCould you run some experiments regarding the precise type of polynomials used, or more clear ablations on where and how the polynomials were applied?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Originality\nThis work explores an under-explored direction of research. Prior work has not carried out such an extended study.\n\nQuality\nThe method has been applied to many different tasks, trying to show the potential applications to several areas where deep learning is traditionally applied.\n\nSignificance\nSpecific problems will require specific biases, and Chebyshev polynomials are indeed an interesting way to provide useful modeling biases to the available architectures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Current architectures used in the field of deep learning are limited in the modeling of relationships between non-adjacent layers. Skip-connections are popular additive methods for connecting non-adjacent layers, and prior work has explored the use of polynomial functions to establish relationships between layers. In this work, Chebyshev polynomials of high order have been applied to several modern architectures and evaluated on several tasks. The experiments show that adding Chebyshev polynomials to the architectures can help improve performance slightly when compared with certain baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The contribution is not clear. The first contribution claimed states that ChebyNet is introduced. However, several architectures are apparently used in the experiments, where Chebyshev polynomials are somehow applied to existing architectures to attempt improvements in their performance. Please clarify the first contribution. The second contribution claims that ChebyNet consistently surpasses existing networks, however, in the experiments it is clear that this is only true for certain hyperparameter choices (which are not clear).\n\nThe methodology is far from being clear or reproducible. The polynomials are described, but how they are applied to the network architecture is never explained in sufficient detail to allow an expert to reproduce the results obtained. No details on the architecture structure, no pseudocode of the implementation, no details of the optimizer used for each experiment (it is mentioned only for one) or the learning rate, weight decay, or other details on data augmentation, and so on.\n\nThere are writing issues with the manuscript, please read it over again and fix grammatical and typographical mistakes (e.g. Implimentation as the title of section 3.3).\n\nNumerical function approximation loss un Figure 2 shows Loss against Order. What does Order mean for an MLP? Please, again, do not place results and experiments without explaining what was done. Why is the MLP failing to fit a quadratic function? Was the error achieved exactly 0? This might not be surprising given that polynomials are part of the architecture itself, but would have been interesting a more in-depth discussion.\n\nThe FID obtained on MNIST appears very high for both the Cheby-UNet, and the baseline UNet. More details on the hyperparameters used would help understand the performance. The quality of the samples also appears qualitatively worse than a simple UNet-based implementation of diffusion available on GitHub (https://github.com/bot66/MNISTDiffusion). More details are necessary.\nThe use of FID as a metric is not sufficient. As the objective is showing the ability of the architecture to fit complex functions, the log-likelihood would have also been an important metric to display, as it more closely shows the ability of a model to fit complex functions. MNIST is not enough, and at least CIFAR10 should have been used. I would also suggest CelebA, which appears more complex but is actually quite simple compared to CIFAR10 for a generative model.\n\nThere was no discussion or acknowledgment of the limitation which comes from using models with different parameter count. From the text it is not clear whether the parameters count was kept constant, or if at least the comparison could be considered fair in all experiments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- How do you integrate the Chebyshev connections into complex architectures like ResNet or UNet? Can you provide more concrete details?\n\n- Given that Equation 4 diverges from the typical polynomial sequence, what justifies calling the method polynomial-based? Is the benefit truly coming from the polynomial structure or just from additional learned connections?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Novel Use of Polynomials: Applying Chebyshev polynomial connections for inter-layer relationships is an interesting twist that brings more flexibility to the model's structure.\n2. Versatility Across Tasks: The method shows improvements across different tasks, suggesting it has general applicability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes ChebyNet, a neural network architecture that uses Chebyshev polynomial connections to boost the network's fitting capabilities and efficiency. The idea is to go beyond typical additive shortcuts, adding both recursive connections between adjacent layers and polynomial-based relationships between non-adjacent layers. The authors demonstrate the effectiveness of ChebyNet on various tasks, like function approximation, image classification, and semantic segmentation, showing that it often outperforms standard networks with fewer parameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Implementation: The implementation details are unclear. In the methodology section, Equations 3 and 4 outline the connectivity patterns between layers, but there is no specific guidance on how to apply these connections to complex architectures like UNet (as shown in Tables 1 and 4), or ResNet and MobileNet (as shown in Table 5). This raise serious problems when I want to to dive in the details of the paper. Those details are also not incorporated in the Appendix as well.\n\n2. Unclear Use of Polynomials: The method appears to focus on a recursive layer connectivity similar to Chebyshev polynomials, but it doesn't actually involve using polynomials. Equation 4 resembles Equation 3 but starts with a different initial condition, leading to entirely different sequences in the recursion.\n\n3. Computation: While the paper claims the efficiency (Line 88-89, \"fewer parameters and reduced computational overhead\"), there is no actually discussion on the real computation gain with respect to different applications. From my understanding, with increased connectivity, there is a likelihood of higher computational costs, which is why architectures like DenseNet, despite their strong performance, are not widely adopted in real-world applications. The paper does not sufficiently discuss how ChebyNet handles the potential slowdown due to the additional polynomial connections.\n\n4. Limited Baseline Comparisons: The paper mainly introduce a new type of connectivity of layers, which is more on par for ResNet and DenseNet. However, the comparisons are mostly against basic versions of popular models. Adding comparisons with more sophisticated connectivity strategies would strengthen the results and make the findings more convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024chebynet,\ntitle={ChebyNet: Boosting Neural Network Fitting and Efficiency through Chebyshev Polynomial Layer Connections},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=17U3nlco2r},\nnote={under review}\n}"
},
"abstract": {
"value": "Traditional deep neural networks (DNNs) predominantly adhere to a similar design paradigm. Even with the incorporation of additive shortcuts, they lack explicit modeling of relationships between non-adjacent layers. Consequently, this paradigm constrains the fitting capabilities of existing DNNs. To address this issue, we propose ChebyNet, a novel network paradigm to build Chebyshev polynomial connections between general network layers. Specifically, we establish recursive relationship among adjacent layers and polynomial relationship between non-adjacent layers to construct ChebyNet, which improves representation capabilities of the network. Experimentally, we comprehensively evaluate ChebyNet on diverse tasks, including function approximation, semantic segmentation, and visual recognition. Across all these tasks, ChebyNet consistently outperforms traditional neural networks under identical training conditions, demonstrating superior efficiency and fitting properties. Our findings underscore the potential of polynomial-based layer connections to significantly enhance neural network performance, offering a promising direction for future deep learning architectures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"DNN",
"Chebyshev Polynomial"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b20c6578ac4c4c2b94a3a4f78b642688629f7836.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "ChebyNet: Boosting Neural Network Fitting and Efficiency through Chebyshev Polynomial Layer Connections"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
17idjbdHVW | A Computation and Communication Efficient Projection-free Algorithm for Decentralized Constrained Optimization | main | Active | Decentralized stochastic optimization;variance reduction;Frank-Wolfe method | optimization | 3;5;6 | 5;3;4 | 2;3;3 | 2;2;3 | 2;2;3 | 4.666667 | 4 | 2.666667 | 2.333333 | 2.333333 | -0.654654 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Should the sample complexity in the non-convex case be $\\mathcal{O}(n + \\sqrt{n/m}L^2\\varepsilon^{-2})$? Letting $m = 1$, the problem reduces to the centralized finite-sum setting, where the sample complexity should be $\\mathcal{O}(n + \\sqrt{n}\\varepsilon^{-2})$ or $\\mathcal{O}(n\\varepsilon^{-2})$, as shown in [1].\n\n2. In Table 1, is a direct comparison of convergence rates with [2] appropriate? Specifically, this paper addresses a finite-sum problem, whereas [2] deals with an online setting. Since DVRGTFW cannot be directly applied to the online problem, such a comparison may be inappropriate. The authors should at least point out the differences in settings when making these comparisons.\n\n3. Finally, there are some minor issues, such as typos. \n- The Lyapunov functions defined in L.739 use the symbols $\\Phi$ and $\\Psi$ , but in several places in the following proofs, they are written as $\\phi$ and $\\psi$ (L.994, L.1069, L.1076, L.1082, and L.1085).\n- L.818. ``fastMix'' should be ``FastMix''.\n- The paper [1] has been accepted in ICML and the reference should be updated.\n\n---\nReferences\n\n[1] Aleksandr Beznosikov, David Dobre, and Gauthier Gidel. Sarah frank-wolfe: Methods for constrained optimization with best rates and practical features. In ICML, 2024.\n\n[2] Hoang Huy Nguyen, Yan Li, and Tuo Zhao. Stochastic constrained decentralized optimization for machine learning with fewer data oracles: a gradient sliding approach. arXiv preprint arXiv:2404.02511, 2024."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper shows better theoretical convergence results compared to previous works. Specifically, by incorporating techniques such as gradient tracking and multi-consensus, it extends constrained finite-sum algorithms to the decentralized setting. The convergence of DVRGTFW is analyzed using Lyapunov functions, theoretically establishing improved sample and communication complexities, which is also validated by numerical experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the decentralized constrained finite-sum optimization problem and provides a projection-free algorithm called DVRGTFW. In the convex and non-convex cases, the sample complexities $\\mathcal{O}(n+\\sqrt{n/m}L\\varepsilon^{-1})$ and $\\mathcal{O}(\\sqrt{n/m}L^2\\varepsilon^{-2})$ are established, respectively. Numerical experiments validate the performance of the algorithm."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While improved theoretical results are established for decentralized Frank-Wolfe method, the techniques are overall similar to existing ones."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see Weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper is well written. It is easy to follow.\n\n2. The literature review is good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper develops a decentralized stochastic Frank-Wolfe algorithm and establishes its convergence rate for both convex and nonconvex constrained problems. The experiment demonstrates the effectiveness of the proposed algorithm."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty is limited. Decentralized unconstrained optimization has been well studied. This paper tries to extend those algorithms to constrained problem, where the feasible set is bounded. However, this extension is trivial. In particular, due to the bounded feasible set, it is trivial to bound the gradient variance. Actually, the proof for frank-wolfe algorithm is much easier than the unconstrained counterpart.\n\n2. As mentioned in this paper, there are some existing decentralized Frank-wolfe algorithms for DR-submodular optimization problems. What is the difference between those algorithms and this paper? Are there any unique challenges compared to those algorithms? It would be good if the authors could discuss these critical points to show the contribution of this paper. \n\n3. FastMix is a not very common communication method. It would be good to provide some background for this method. For example, in standard gradient tracking method, it is well known that $\\bar{v}_t=\\bar{y}_t$. Does FastMix also have this property? It seems the authors directly use $\\bar{v}_t=\\bar{y}_t$ in the proof. \n\n4. It would be good to provide more details about the proof. For example, how to get the third step in Line 764? It is not very clear. \n\n5. How does the heterogeneity affect the convergence rate? \n\n6. Why does IFO not depend on the spectral gap? Any explanation?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In table 1, when $m = 1$, we should recover the complexities in the centralized setting in the convex/non-convex setting, however, for the proposed algorithm, the reviewer does not understand why it matches the bounds given in [Beznosikov et al., 2023], for example, in the convex case the table suggests $\\tilde{\\mathcal{O}}(n + \\frac{\\sqrt{n}}{\\varepsilon})$, while [Beznosikov et al., 2023] gives $\\tilde{\\mathcal{O}}(n + \\frac{1}{\\varepsilon})$.\n\n2. What is the output of Algorithm 2 FastMix? \n\n3. Is it possible to further improve the communication complexity of the algorithm so that it matches the optimal bounds?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The author manages to combine the technique of variance reduction and gradient tracking to Frank-Wolfe algorithm in the decentralized setting, convergence analysis in both convex case and non-convex case are provided, illustrating the effectiveness of the proposed algorithm DVRGTFW.\n\n2. The proposed algorithm achieves best-known incremental first order oracle complexities both in the convex case and in the non-convex case, and near optimal communication complexity in the non-convex case.\n\n3. The paper offers numerical experiments to validate the theory presented in the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors proposed to combine the Frank-Wolfe algorithm with variance reduction as well as gradient tracking in the decentralized setting, resulting in the algorithm DVRGTFW. Convergence analysis in the convex and non-convex case are provided with numerical experiments conducted to further support the theory provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Though the results are interesting, the proposed method appears to be primarily a combination of established techniques, such as variance reduction, gradient tracking, and the Frank-Wolfe algorithm. As a result, the novelty of the approach may be somewhat limited.\n\n2. If I am not mistaken, the communication complexity for DVRGTFW is not better than existing methods in the convex case given its extra dependence on $\\sqrt{mn}$ as it is demonstrated in Table 1, which is a limitation of the algorithm.\n\n3. I recommend that the authors do a thorough check of the paper as there are many typos, some of them are confusing, such examples include:\n- At line 92, ''develop communication and communication efficient'';\n- At line 114, $m = 0$;\n- At line 222, $x_0 \\in \\mathbb{R}^d$,\n- There are also some notations used without introduction in the paper.\n\n4. In some of the numerical experiments, the proposed algorithm is not better than existing algorithm for an unclear reason."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A Computation and Communication Efficient Projection-free Algorithm for Decentralized Constrained Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=17idjbdHVW},\nnote={under review}\n}"
},
"abstract": {
"value": "Decentralized constrained optimization problems arise in numerous real-world applications, where a major challenge lies in the computational complexity of projecting onto complex sets, especially in large-scale systems. \nThe projection-free method, Frank-Wolfe (FW), is popular for the constrained optimization problem with complex sets due to its efficiency in tackling the projection process. \nHowever, when applying FW methods to decentralized constrained finite-sum optimization problems, previous studies provide suboptimal incremental first-order oracle (IFO) bounds in both convex and non-convex settings. \nIn this paper, we propose a stochastic algorithm named Decentralized Variance Reduction Gradient Tracking Frank-Wolfe ($\\texttt{DVRGTFW}$), which incorporates the techniques of variance reduction, gradient tracking, and multi-consensus in the FW update to obtain tight bounds. \nWe present a novel convergence analysis, diverging from previous decentralized FW methods, and demonstrating $\\tilde{\\mathcal{O}}(n+\\sqrt{\\frac{n}{m}}L\\varepsilon^{-1})$ and $\\mathcal{O}(\\sqrt{\\frac{n}{m}}L^2\\varepsilon^{-2})$ IFO complexity bounds in convex and non-convex settings, respectively. \nTo the best of our knowledge, these bounds are the best achieved in the literature to date. Besides, in the non-convex case, $\\texttt{DVRGTFW}$ achieves $\\mathcal{O}(\\frac{L^2\\varepsilon^{-2}}{\\sqrt{1-\\lambda_2(W)}})$ communication complexity which is closed to the lower bound $\\Omega(\\frac{L\\varepsilon^{-2}}{\\sqrt{1-\\lambda_2(W)}})$. \nEmpirical results validate the convergence properties of $\\texttt{DVRGTFW}$ and highlight its superior performance over other related methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Decentralized stochastic optimization",
"variance reduction",
"Frank-Wolfe method"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5142fd7eb6fb6597fd7b673fe652db27d7cc9c04.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/169b2e472d47302502590ff2ef43ee7d3cf667df.zip"
},
"title": {
"value": "A Computation and Communication Efficient Projection-free Algorithm for Decentralized Constrained Optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1959usnw3Z | Chordal Graph Sampling-Based Mini-batch Training Algorithm for Large Graphs | main | Active | Large scale dataset;Graph neural networks | learning on graphs and other geometries & topologies | 3;3;3;3 | 3;3;4;4 | 2;1;2;2 | 1;1;2;1 | 1;1;3;2 | 3 | 3.5 | 1.75 | 1.25 | 1.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper is easy to understand.\n\n2. Experiments are conducted on three new datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper builds on cluster-gcn, using chordal graph partition instead of the metis algorithm. The performance of CGST was tested on three large-scale datasets. Overall, the novelty of this paper is limited, and its performance is relatively average."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty of this paper is extremely limited. The main difference from Cluster-GCN is the use of a different graph partition algorithm. Moreover, the graph partition algorithm used in this paper is not original. Additionally, the random aggregation technique mentioned in section 4.3 is also used by Cluster-GCN. The only difference is that edges between different clusters have been removed.\n2. The experimental results indicate that CGST's performance is suboptimal. Although the accuracy is sufficiently good, as a work on scalable training, the memory usage and training time performance are worse than the baselines.\n3. This paper does not discuss any work related to scalable training from 2022 to 2024.\n4. This paper contains many typos.\n5. This paper does not compare with baselines on standard datasets."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Algorithm 1, how to get the input clique tree? What's the complexity to construct such a tree? Considering to arbitrarily select an edge at each epoch, how to guarantee balanced partition?\n2. What are the strengths to partition a graph into balanced chordal graph over other balanced partition?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The organization is good to follow.\n2. The authors find a new potential of training large-scale graph."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on training GNNs on large graphs, and proposes to separate the whole graph into several balanced chordal graph. The authors try to maintain main information of the original graph structure."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Potential violation of double-blind policy. For the link at line 387, 'LICENSE' contains \"Copyright (c) 2024 Su ziyang\", where \"Su ziyang\" is a name implying one of the authors. Besides, this link contains the LATEX file of the submission not the code.\n2. Section 3 is about one page, but only contains some well-known knowledge.\n3. As shown in Figure 1, the authors argue that previous mini-batch methods suffer from information loss because of removed nodes and edges. However as shown in Figure 2, the proposed model also does not consider the nodes between different cliques.\n4. The baselines are too old, where the authors do not provide the citation for SAGN.\n5. As shown in Table 1, the proposed model cannot achieve the best memory usage and training time in all three datasets. Considering this paper studies large-scale training, these two metrics are very important.\n6. I strongly suggest the authors further check the writing:\n- Section 3 \"PREMILINARY\"\n- What's $\\mathcal{O}$ in Definition 2."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Q1: Please discuss the difference between your paper and ClusterGCN.\n\nQ2: In Line 94, \"Under extensive experiments on four real-world datasets...\" , where are the fourth dataset?\n\nQ3: Section 2.3 has a title of \"GNN decoupling\", but the main text is about attention and skip-connection. How these concepts are related to GNN decoupling?\n\nQ4: In Line 359, \"We select six baselines to evaluate the performance...\", where are the sixth baseline?\n\nQ5: In Line 388, \"Codes are available at...\", there is no implementation code in this link, here is the tex source. It is normal not to provide the code during the review stage, but please do not deceive the reviewers.\n\nQ6: In Line 517, \"Case study...\", this is not case study."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "S1: The scalability of GNNs is an important research problem.\n\nS2: The format looks fine."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes CGST. CGST includes a balanced chordal graph partition module and a batch random aggregation module to improve performance on node classification tasks while maintaining main information."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1: Apart from the graph partition method, I don't see any difference between this paper and ClusterGCN. And the graph partition method is adopted from existing work.\n\nW2: The experiment setting is strange. The authors do not use common graph datasets. The results are also unsatisfied, as a scalable method, CGST performs poorly both in terms of memory and time.\n\nW3: Parts of this paper were written in LLM, e.g., Line 321, what is \"cluster graph spatial transformer\"?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Experiments are conducted on three new datasets, which is inconsistent with previous work. Testing on new datasets is commendable.\n\n2. This paper is easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a GNN training technique based on subgraph sampling, which is based on chordal subgraph partition. The authors tested the performance of CGST training on GCN across three large graphs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In the introduction, the figure 1 is inappropriate. Methods like CLUSTER-GCN and GAS use METIS to partition graphs, which does not result in some nodes being removed and unable to appear in the training batches.\n2. The author frequently mentions that chordal subgraph partition is a major contribution, but I notice that the work in section 4.2 originates from [1]. This significantly undermines the novelty and amount of work in this paper. The author should provide an accurate explanation and description of this.\n3. There are significant problems in the experimental section of the paper, which completely fails to meet the acceptance standards of ICLR. The author should provide experimental results for a variety of GNNs, not just limit to GCN. In terms of experimental results, CGST is also not ideal in terms of Mem usage and training time. Moreover, the author should provide experimental results for commonly used datasets, such as Products.\n\nOverall, I think this paper has significant deficiencies, especially in the experimental section.\n\n[1] Jungho Ahn,Lars Jaffke,O-joung Kwon, and Paloma TLima. Well-partitionedchordalgraphs. Discrete Mathematics."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024chordal,\ntitle={Chordal Graph Sampling-Based Mini-batch Training Algorithm for Large Graphs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1959usnw3Z},\nnote={under review}\n}"
},
"abstract": {
"value": "Graph Neural Networks (GNNs) are powerful models for learning representations of attributed graphs. To scale GNNs to large graphs, many methods use various techniques, such as sampling and decoupling, to alleviate the “neighbor explosion” problem during mini-batch training. However, these sampling-based mini-batch training methods often suffer from greater information loss than decoupling-based methods or full-batch GCNs. Besides, most original segmentation methods for large graphs usually lose a large number of edges, resulting in suboptimal performance when performing mini-batch training. Therefore, we propose a Chordal Graph Sampling-based mini-batch Training algorithm for GNNs on large-scale graph datasets, called CGST. CGST includes a balanced chordal graph partition module and a batch random aggregation module to improve performance on node classification tasks while maintaining main information of the original graph structure. Experiments on three large-scale graph datasets prove the effectiveness of CGST."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large scale dataset",
"Graph neural networks"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/40f2d0d451a620b7f727b6f81b26953617033ea3.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Chordal Graph Sampling-Based Mini-batch Training Algorithm for Large Graphs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
19QWQSsbOA | Multi-scale Conditional Generative Modeling for Microscopic Image Restoration | main | Active | Microscopic Image Restoration;Generative Model | generative models | 3;5;6;6 | 5;4;4;5 | 2;1;3;3 | 2;2;3;2 | 1;2;3;3 | 5 | 4.5 | 2.25 | 2.25 | 2.25 | -0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The anonymity of the authors is compromised, as this paper is available on arXiv at https://arxiv.org/abs/2407.05259."
},
"flag_for_ethics_review": {
"value": [
"Yes, Other reasons (please specify below)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors provide more detailed explanations regarding the choice and role of each loss term in Equation 18 and explain how they determined the relative weighting (λ, ν, α values) between the terms.\n2. Could the authors provide a comparison of training time and the number of training parameters for MSCGM versus other models?\n3. Could the authors to provide a detailed algorithm or pseudocode for MSCGM training, similar to what they provided for BBDP Algorithm."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. MSCGM’s wavelet-based decomposition and conditional modeling shows substantial improvements in sampling speed and better reconstruction quality.\n2. By adapting the generative approach to frequency characteristics, MSCGM enhances detail in restored images, especially in high-frequency components crucial for microscopy images.\n3. The authors presented a new loss function."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a multi-scale conditional generative model (MSCGM) aimed at enhancing microscopic image restoration by combining wavelet transforms and a Brownian Bridge diffusion process. The authors leverage multi-scale wavelet transforms to efficiently model low- and high-frequency image components, significantly improving the generation quality and speed of image restoration compared to traditional diffusion models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Equation 18 combines multiple objectives—L2 loss, Structural Similarity Index Measure (SSIM), and Wasserstein distance—but the rationale behind each component’s inclusion is not fully explained. Additionally, the roles and relative importance of the scaling parameters λ, ν, and α are unclear. \n2. The training procedure for MSCGM is not explicitly described. Unlike the clear training steps outlined for BBDP, MSCGM lacks a step-by-step description of its training pipeline. \n3. While Table 1 compares MSCGM with other models in terms of PSNR, SSIM, and sampling time, it does not include training time or the number of trainable parameters for each method. Without these metrics, it is challenging to gauge MSCGM’s overall computational cost relative to other approaches. Including such details would provide a more comprehensive view of the model’s efficiency.\n4. In Section 4.2, the authors state that FID is considered as an evaluation metric. However, this metric is not included in Table 1. As FID is widely used in assessing generative models for image quality, its inclusion would offer further insights into MSCGM’s performance in distributional similarity to real images.\n5. Equations from 4 to 15 are borrowed from BBDP paper. It is better to include them under the Preliminaries section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see my concerns in Weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The authors recognize the loss of detail in LDM, a known issue, and apply it to the microscopic image restoration context, an interesting direction.\n2. They introduce the novel idea that the Brownian bridge stochastic process could effectively integrate conditional images."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a multi-scale conditional generative model (MSCGM) for image restoration, incorporating multi-scale wavelet transforms and a Brownian bridge stochastic process. The wavelet transform is included due to its reversibility, which maintains information integrity in the latent diffusion space, in contrast to traditional Latent Diffusion Models (LDM). The Brownian bridge stochastic process is leveraged to introduce conditional images in both forward and reverse processes. While the authors aim to address microscopic image restoration, the motivation and results in the paper do not consistently support this focus."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Lack of Consistency:** The paper lacks organization and clarity. Although the title emphasizes \"Microscopic Image Restoration,\" the experiments primarily focus on \"Natural Image Super-resolution\" and \"Low-light Natural Image Enhancement.\" Only a small subset of results explores microscopic images. If the model is intended for general image restoration, it would be more accurate to propose it as a ‘unified image restoration’ model. I suggest the authors either refocus their experiments more heavily on microscopic image restoration to align with the title, or broaden the title to reflect the wider scope of image restoration tasks covered in the paper.\n \n2. **Introduction Needs Refinement:** The introduction lacks a clear problem definition and research motivation. The first two paragraphs provide a broad overview of diffusion processes that diverges from the paper’s focus. The discussion on latent diffusion downsampling is a well-known issue and could be alleviated by higher resolutions. The authors should clearly articulate why microscopic images especially require the multi-scale wavelet transform in the introduction. Please include a discussion of how their approach compares to or builds upon these existing wavelet-based diffusion models in the Introduction, highlighting any key differences or improvements.\n \n3. **Lack of Acknowledgment of Prior Work:** The paper does not credit previous studies applying wavelet transforms in diffusion models, which could mislead readers into believing the concept originated here. Papers like \"Wavelet Diffusion Models are Fast and Scalable Image Generators (CVPR 2023)\" and \"Training Generative Image Super-Resolution Models by Wavelet-Domain Losses Enables Better Control of Artifacts (CVPR 2024)\" are directly related and should be cited with comparisons to clarify this study’s contributions.\n \n4. **Figure 1 Illustration Issues:** The paper title focuses on \"Microscopic Image Restoration,\" yet Figure 1 uses natural images. Including examples of microscopic images to show the degradations introduced by LDM and Refusion compared to MSCGM would enhance clarity.\n \n5. **Methodology Development Clarity:** The description of the wavelet transform on page 4 is overly general, with key details moved to the appendix. Clear explanations of any novel model designs or algorithmic adaptations should be provided in the main text.\n \n6. **Quality of Mathematical Presentation:** Symbols in the equations are used without proper declarations or explanations. Inconsistent symbols, like the variable for the normal distribution \\( N \\), further detract from clarity.\n \n7. **Algorithm 1 Lack of Context:** Algorithm 1 on page 5 is underdeveloped. Symbols are not defined before use, and the algorithm lacks defined input-output requirements.\n \n8. **Figure 2 Diagram Confusion:** Figure 2 is difficult to interpret. The illustration doesn’t clearly label network modules, workflow processes, or shared parameters (only a line is shown), which fails to clarify the model structure effectively.\n \n9. **Lack of Dataset Information:** The results section includes evaluations of microscopic images, but there’s no description of the dataset. Is it public or private? What is the image count? Without these details, readers cannot analyze or reproduce the results. Please provide a detailed description of the microscopic image dataset used, including its source, size, and any preprocessing steps applied.\n \n10. 
**Insufficient Ablation Studies:** Results provide only a simple comparison with LDM, without deeper exploration of MSCGM’s components or ablation studies to justify the performance benefits of each module.\n \n11. **Unconvincing Model Performance:** The model’s performance requires further validation through comparison with advanced models. Numerous diffusion-based image restoration models from 2024 exist, yet none are used for comparison. This weakens the paper’s credibility. Key diffusion-based image restoration works worth considering include: \n - RDDM ([link](https://cvpr.thecvf.com/virtual/2024/poster/31373)) \n - HIR-Diff ([link](https://cvpr.thecvf.com/virtual/2024/poster/29665)) \n - WF-Diff ([link](https://cvpr.thecvf.com/virtual/2024/poster/30059)) \n - DeqIR ([link](https://cvpr.thecvf.com/virtual/2024/poster/31759)) \n - GDP ([link](https://cvpr.thecvf.com/virtual/2023/poster/22095))"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Especially considering that inference time is one of the main benefits, why was it not compared to models with fewer step counts or at least an in-depth analysis of how step counts influence the SOTA model performance? E.g. [2], or other methods that can be applied to the problem domain?\n\n[2] Phung, Hao, Quan Dao, and Anh Tran. \"Wavelet diffusion models are fast and scalable image generators.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "• Integrating the Brownian Bridge Diffusion Process (BBDP) with adversarial learning in a multi-scale wavelet diffusion framework is innovative, enhancing image quality and stability. \n\n• The model achieves notable speed improvements, delivering faster inference without sacrificing image quality. \n\n• Performance remains consistent across diverse experiments, demonstrating robustness on both microscopy and natural images."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a novel multi-scale generative model that leverages the Brownian Bridge process within the wavelet domain. This approach enhances inference speed while maintaining high image quality and diversity. The model is further integrated with computational microscopy workflows, expanding its applicability. The authors evaluate its performance on both microscopy and natural image datasets, demonstrating that it achieves slightly better results compared to existing methods such as IR_SDE, Refusion, and BBDM, with the added advantage of faster inference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "• The paper lacks a clear motivation for applying this model to computational microscopy workflows. The rationale for this specific application is unclear and lacks context, the relevance to microscopy appears out of place. A discussion on how this functionality benefits microscopy would help justify this direction and clarify its practical utility.\n\n• The primary advantage of this method is its reduced inference time; however, the paper lacks a direct comparison with other methods that similarly aim to improve efficiency. Including such a comparison would provide valuable context and help quantify the benefits more clearly.\n\n• The general evaluation lacks depth and is missing ablation studies. \n\n• There appear to be configuration issues with the comparison methods. For instance, IR-SDE [1] is cited as requiring 100 steps, but the authors use 1000, which significantly prolongs inference time. With the correct configuration (100 steps), the inference time should drop from 32 seconds to approximately 3 seconds.\n\n• The choice of metrics is limited and somewhat inadequate for a super-resolution task. Relying solely on PSNR and SSIM may overlook important aspects of image quality. Including pixel-based metrics would provide a more comprehensive evaluation and might show shortcomings of the proposed method.\n\n\n[1] Luo, Ziwei, et al. \"Image restoration with mean-reverting stochastic differential equations.\" arXiv preprint arXiv:2301.11699 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The paper demonstrated that the low-frequency coefficients in higher scales show Gaussian tendency and thus applied this to BBDP. The idea is novel and well hypothesized, but it would be helpful if other DM methods, such as IR-SDE and ReFusion methods that are implemented on 4x super-resolution experiment, are also tested on microscopy image dataset. Only CMSR (GAN: non-diffusion model), is compared at the moment, not showing the effectiveness of proposed near-Gaussianity assumption.\nSimilarly, applying BBDM to full resolution image does not seem to be fair comparison. Since many works demonstrated the effectiveness of multi-scale diffusion models, BBDM should be implemented in a same manner as the proposed method to prove the superiority of WT instead of other compression technique. Please conduct an ablation study that replaces WT with simple down-sampling.\nIs there any specific reason why the proposed work adopted BBDM which was initially designed for image translation where input and target domains are different? Super-resolution tasks seem to have similar domains for input and target. Justify the choice of BBDM for super-resolution."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper analyzed the characteristics of microscopic images and proposed adequate methodology to address the sparsity and non-Gaussianity. Since the wavelet transformation divides the image into two subbands (high- and low- frequency coefficients) losslessly, handling each subband in a different manner is original. \nThe contribution of the work is clear and well demonstrated. \nIn addition, the work could be further applied to different modality images where sparse or non-Gaussianity exist."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed a multi-scaled generative model that uses a diffusion model (DM) for low-frequency image and a GAN for high frequency images. The wavelet transform provides multi-scale images without lossy encoding process. The lossless compression is particularly important for microscopic imaging where high-frequency component are sparse and non-Gaussian. Additionally, the authors showed the near-Gaussian property of low-frequency component and thus employed Brownian Bridge Diffusion Process (BBDP). The idea of employing different networks (DM and GAN) to different resolutions according to the characteristics of microscopic dataset is novel. The proposed MSCGM (multi-scale conditional generative model) showed improved super-resolution result with fast inference time."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the idea of the paper is novel, the effectiveness of the work has not been thoroughly assessed. The use of WT and the superiority of the proposed method compared to conventional method should be further evaluated. The specific comments are described in Questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024multiscale,\ntitle={Multi-scale Conditional Generative Modeling for Microscopic Image Restoration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=19QWQSsbOA},\nnote={under review}\n}"
},
"abstract": {
"value": "The advance of diffusion-based generative models in recent years has revolutionized state-of-the-art (SOTA) techniques in a wide variety of image analysis and synthesis tasks, whereas their adaptation on image restoration, particularly within computational microscopy remains theoretically and empirically underexplored. In this research, we introduce a multi-scale generative model that enhances conditional image restoration through a novel exploitation of the Brownian Bridge process within wavelet domain. By initiating the Brownian Bridge diffusion process specifically at the lowest-frequency subband and applying generative adversarial networks at subsequent multi-scale high-frequency subbands in the wavelet domain, our method provides significant acceleration during training and sampling while sustaining a high image generation quality and diversity on par with SOTA diffusion models. Experimental results on various computational microscopy and imaging tasks confirm our method's robust performance and its considerable reduction in its sampling steps and time. This pioneering technique offers an efficient image restoration framework that harmonizes efficiency with quality, signifying a major stride in incorporating cutting-edge generative models into computational microscopy workflows."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Microscopic Image Restoration",
"Generative Model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ac6a5e380b3c32ce287cea704ff46dbafc11bd8f.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/6167139b18178ef2b1f5d714b6a6d63a86c51a1b.zip"
},
"title": {
"value": "Multi-scale Conditional Generative Modeling for Microscopic Image Restoration"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
19ufhreGTj | Understanding Dimensional Collapse in Cross-Modal Feature Distillation | main | Active | knowledge distillation;feature distillation;cross-modal learning;dimensional collapse | learning theory | 3;5;6;6;6 | 4;4;3;3;3 | 4;2;3;3;3 | 2;1;2;3;3 | 3;3;3;3;3 | 5.2 | 3.4 | 3 | 2.2 | 3 | -0.840168 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the cons part. I will moderately raise my score if the authors can provide further experimental results and answer my questions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-written, which is quite easy to follow.\n2. The author sufficiently demonstrates that modality gap can cause dimensional collapse, leading to suboptimal performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the author mainly propose to solve the problem of dimensional collapse caused by modality gap in cross-modal knowlegde distillation task. Firstly, the author demonstrates the impact of modality gap on cross-modal features theoretically and empirically. To combat with this issue, a Cross-modal Information Bottleneck Approximation (CIBA) framework is proposed that extracts modality-general features through a bottleneck structure, meanwhile aligning teacher and student features with an additional loss. Experiments on several datasets demonstrates the performance of CIBA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My main concern is about the generalizability of the method. As mentioned in problem statements and limitations, the theorem is established on linear extractor, which is inconsistent with practical applications where non-linear encoders are widely applied. Under such conditions, the proposed concept of modality-general and modality-specific parts could be vague since the better capacity of the encoders. I can truly understand the difficulty of theorical provement with non-linear extractors, while the direct application of CIBA seems to be accessible. Can you provide further results of CIBA compared to SOTAs with more powerful encoders on current datasets (RAVDESS and MM-IMDB) to prove the superiority?\n2. In the method part, a bottleneck structure is utilized to capture mutual modality information. From my point of view, the dimension of the bottleneck feature may be a crucial parameter affecting the granularity of the extracted information. Performance seems to be fluctuant with chaning values of the param according to Fig.5(c). Can you provide more ablation on this param on more datasets? How do you choose the best bottleneck dimension?\n3. The author mainly focus on the introduction and demonstration of modality gap's impact on dimensional collapse, while the introduction of method seems to be ordinary and unremarkable. Besides, since the information bottleneck structure was proposed by earlier research, and the proposed loss is a direct combination of generation loss and KL loss, the novelty of the paper is somehow limited."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does the performance of CIBA compare when the assumption of orthogonality between modality-general and modality-specific features is relaxed? \n\nHow does CIBA differ from the previous CMKD approaches?\n\nHow sensitive is the CIBA framework to the choice of hyperparameters, particularly the dimension of the bottleneck feature (H)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper proposes a novel cross-modal knowledge distillation (CMKD) method by focusing on the issue of dimensional collapse in cross-modal feature distillation. The concept of modality gap and its impact on the efficacy of feature distillation is a fresh approach to understanding the limitations of CMKD. The proposal of the Cross-modal Information Bottleneck Approximation (CIBA) scheme is creative and addresses a significant problem in transferring knowledge across different modalities.\n\nThe paper is well-written. The figures and tables are clear and effectively support the textual content."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper titled \"Understanding Dimensional Collapse in Cross-Modal Feature Distillation\" investigates the challenges of transferring knowledge across different modalities in multi-modal neural networks, specifically focusing on the problem of dimensional collapse in cross-modal feature distillation (CMFD). The authors hypothesize that the modality gap between the teacher and student models leads to dimensional collapse in the student's feature space, which degrades the quality of knowledge distillation. To address this, they propose a novel framework called Cross-modal Information Bottleneck Approximation (CIBA), which aims to extract and transfer modality-general features from the teacher model to sub-dimensions of the student model's features. The paper empirically demonstrates that CIBA effectively reduces dimensional collapse and improves performance on various real-world multi-modal datasets, including RAVDESS (Audio-Image), MM-IMDB (Image-Text), and nuScenes (LiDAR-Camera). The key contributions of the paper are the theoretical and empirical investigation of the modality gap's impact on CMKD, the proposal of the CIBA framework, and the validation of its effectiveness across different modalities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The contribution is incremental.\n\nI feel the information bottleneck approximation idea has been used extensively.\n\nWhile the proposed method is shown to outperform baseline approaches, it is unclear how it compares to the most recent and advanced techniques in the field, i.e., DML[1], DKD[2], DIST[3], C2KD[4].\n\n[1] Ying Zhang, Tao Xiang, Timothy M. Hospedales, and Huchuan Lu. Deep mutual learning. In CVPR, 2018. 1, 3, 6, 7.\n\n[2] Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, and Jiajun Liang. Decoupled knowledge distillation. In CVPR, 2022. 1, 2, 3, 4, 6, 7.\n\n[3] Tao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu. Knowledge distillation from a stronger teacher. In NeurIPS, 2022. 1, 2, 6, 7, 8.\n\n[4] Huo F, Xu W, Guo J, et al. C2KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 16006-16015."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The theoretical and empirical investigation of the \"modality gap\"—the distributional shifts between different modalities—and its detrimental effect on CMKD, specifically leading to dimensional collapse in the student model’s feature space.\n\n2. CIBA extracts modality-general features from the teacher model and transfers them to sub-dimensions of the student’s features. This method mitigates the dimensional collapse, ensuring more robust and effective knowledge transfer."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tries to address the challenges associated with deploying multi-modal neural networks in real-world applications, specifically focusing on the constraints of limited computing resources and complex sensor configurations. The authors explore Cross-Modal Knowledge Distillation (CMKD) as a solution for transferring knowledge from a pretrained teacher model to a more deployable student model tailored to a target modality. Despite the advancements in CMKD across various domains, the paper identifies a gap in understanding how distributional shifts between modalities—referred to as the modality gap—affect the efficacy of feature distillation. The study hypothesizes and empirically demonstrates that a significant modality gap leads to dimensional collapse within the student model's feature space, undermining performance. To mitigate this issue, the authors introduce the Cross-modal Information Bottleneck Approximation (CIBA) scheme, designed to extract and transfer modality-general features from the teacher model effectively. Experimental results on diverse real-world multi-modal datasets confirm that the proposed CIBA method successfully reduces dimensional collapse in the student model, resulting in enhanced performance. This work contributes a deeper understanding of the interplay between modality gaps and knowledge transfer in CMKD, offering a practical solution to improve the deployment of multi-modal neural networks under resource constraints."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Since RAVDESS is a relatively small size dataset. Do you try to work on VGGSound?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe authors successfully propose and validate that the modality gap between the teacher and student models can lead to dimensional collapse in the student’s feature space.\n\n2.\tA novel Cross-modal Information Bottleneck Approximation (CIBA) scheme is introduced to extract and transfer modality-general features from the teacher model.\n\n3.\tExperimental results across various loss functions and tasks provide strong evidence for the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the relationship between distributional shifts across modalities and their impact on the effectiveness of cross-modal knowledge distillation (CMKD), specifically addressing the issue of cross-modal feature distillation. The authors hypothesize and validate that the modality gap between the teacher and student models may lead to dimensional collapse in the student’s feature space. To address this, they propose a Cross-modal Information Bottleneck Approximation (CIBA) scheme aimed at extracting and transferring modality-general features from the teacher model. Experimental results demonstrate that the proposed distillation strategy effectively mitigates dimensional collapse in the student model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe work is predicated on the assumption of linear feature extractors; however, in practical applications, most feature extractors are non-linear.\n\n2.\tIn the MM-IMDB dataset, the observed improvement is marginal. Could you please provide a more detailed explanation for this finding?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethics review needed."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How is the Figure 1 formulated? The manuscript does not mention the details. I think it is important for the motivation of modality-general and modality-specific knowledge analysis.\n2. How about directly apply unimodal knowledge distillation on crossmodal knowledge distillation? Could the proposed method be integrated into SOTA methds?\n\n[r1] Zhang L, Shi Y, Shi Z, et al. Task-oriented feature distillation[J]. Advances in Neural Information Processing Systems, 2020, 33: 14759-14771. \n\n[r2] Huang T, You S, Wang F, et al. Knowledge distillation from a stronger teacher[J]. Advances in Neural Information Processing Systems, 2022, 35: 33716-33727."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Research about cross-modal knowledge distillation (CMKD) on the feature view is an important topic for multimodal learning and knowledge distillation. This paper analyses the dimensional collapse induced by modality gap and propose Cross-modal Information Bottleneck Approximation (CIBA) to disentangle the general and specific knowledge, which is novel and practical.\n2. Utilizing the Mean Squared Error (MSE) loss for feature distillation (FD) is reasonable and suitable for the subsequent theoretical analysis.\n3. This work is a good extension of the modality focusing hypothesis, and gives a solid analysis and detailed solutions.\n4. This work is well written and organized. Extensive experiments on Audio-Image, Image-Text, and LiDAR-Camera crossmodal transfer are conducted."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the problem of dimensional collapse in cross-modal feature distillation (CMFD), where a student model trained on one modality aims to mimic the feature representations of a teacher model trained on a different modality. The authors hypothesize that the distributional shift, or \"modality gap\", between the teacher and student modalities leads to the student's feature space collapsing to only capture the modality-general features, resulting in suboptimal distillation performance. To address this issue, the authors provide in-depth analysis on how distributional shifts across different modalities and propose a Cross-modal Information Bottleneck Approximation (CIBA) scheme that extracts and transfers the modality-general features from the teacher to the student, allowing the student to effectively span both modality-general and modality-specific feature spaces."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Modality gap is widely studies in multimodal learning, and this paper does not give a review of previous modality gap analysis. Moreover, the cross-modal knowledge distillation on logit-level method [r1] is not mentioned and analysed.\n[r1] Huo F, Xu W, Guo J, et al. C2KD: Bridging the Modality Gap for Cross-Modal Knowledge Distillation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 16006-16015.\n2. The RAVDESS, MM-IMDB, and nuScenes have limited class categories. Large-scale experiments like conducting experiments on VGG-Sound (or subset) will make the paper more convincing.\n3. Related works about the 'Cross-modal knowledge distillation' are somewhat out-of-date, only one paper published in 2023 is mentioned.\n4. The proposed method is somewhat similar to online distillation [r1] and task-oriented feature distillation [r2]. How about the performance of directly employing task-oriented feature distillation [r2] on cross-modal feature distillation?\n[r2]Zhang L, Shi Y, Shi Z, et al. Task-oriented feature distillation[J]. Advances in Neural Information Processing Systems, 2020, 33: 14759-14771."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We investigate the distributional shifts across different modalities that ultimately lead to dimensional collapse in cross-modal knowledge distillation, then propose a methodology to address it."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding Dimensional Collapse in Cross-Modal Feature Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=19ufhreGTj},\nnote={under review}\n}"
},
"abstract": {
"value": "To overcome limited computing resources and the complexity of sensor configurations in deploying multi-modal neural networks in real-world applications, cross-modal knowledge distillation (CMKD) aims to transfer valuable information from a pretrained teacher model to a deployable student model with the target modality. Despite the successful applications of CMKD in various fields, our understanding of knowledge transfer across different modalities remains insufficient to fully explain the efficacy of feature distillation. In this work, we investigate the relationship between the distributional shifts across modalities, referred to as the modality gap, and its impact on the effectiveness of CMKD, particularly focusing on the problem of cross-modal feature distillation. We first hypothesize and empirically validate that the modality gap between the teacher and student causes dimensional collapse in the student's feature space. To prevent such inefficiency, we propose a Cross-modal Information Bottleneck Approximation (CIBA) scheme aimed at extracting and transferring modality-general features from the teacher model. Lastly, we experimentally demonstrate that our distillation strategy effectively reduces the dimensional collapse in the student model, thereby achieving improved performance for various real-world multi-modal datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"knowledge distillation",
"feature distillation",
"cross-modal learning",
"dimensional collapse"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b8809095347811a1d25d10c0d3171e5f6bc8a055.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Understanding Dimensional Collapse in Cross-Modal Feature Distillation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ABhAZCoGr | DYSTIL: Dynamic Strategy Induction with Large Language Models for Reinforcement Learning | main | Active | Neurosymbolic Systems;Reinforcement Learning;Large Language Models;Strategy | neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.) | 3;3;5 | 4;4;4 | 2;1;2 | 2;2;2 | 3;2;3 | 3.666667 | 4 | 1.666667 | 2 | 2.666667 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "In the introduction, it is mentioned that BC+RL can't enable an RL agent tot acquire higher level abstractions and understanding of the RL task. This is not only very loosely defined, but likely not true. Do the authors mean that an RL agent wouldn't acquire an understanding that can be translated in language? This is very different than the claims being made.\n\nWhy use GPT-4o for generating strategies? How does Llama 3.1 compare? It would be much more preferable to have it be the same family of models.\n\nThe word \"neuro-symbolic\" is used to characterize the method, but is it really a neuro-symbolic method? To me it just seems like the neural network is additionally conditioned on language. This qualifier seems a bit of stretch.\n\n[1] Deep Reinforcement Learning at the Edge of the Statistical Precipice, Agrawal et al., 2022\n\n[2] Deep reinforcement learning that matter, Henderson et al., 2018\n\n[3] Code as Reward: Empowering Reinforcement Learning with VLMs, Venuto et al., 2024"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper identifies an important area for research, that is, how to combine expert data and reinforcement learning. The proposed approach is different from some of the more traditional ways of leveraging both kinds of data, building on the strengths of LLMs to devise high level strategies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a way to leverage expert data for behavior cloning and RL to craft policies that are conditioned on a set of strategies which hopefully encode generalizable behavior. The authors claim that existing methods on BC+RL suffer from important issues (poor generalization, sample efficiency and interpretability), which the proposed approach can address. In particular, the authors train an open source LLM on expert data through behavior cloning by conditioning the policy on strategies that are devised by a teacher LLM (GPT-4o). The model is then trained with RL data, leading to a new list of strategies, which is then used to further guide the agent. The strategies are selected by verifying whether they help the model achieve higher performance. On a set of four environments, the paper shows that the proposed approach improves upon previous baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The empirical setup brings a lot of questions. A major red flag is that there are no error bars at all and no mention of the number of seeds. Please see the rich literature on the subject of statistical relevance in decision making [1, 2]. For the tasks themselves, it is not clear why some choices are made. For example, is the max_steps=60 the default number? In the codebase of Minigrid I can see that the default value is set of 100, so an explanation would be necessary. \n\nAnother important area of doubt is concerning the strategy for updating the list of strategies. Currently, this is a complicated method that relies on evolving the strategy list with respect to performance. Why is such a complex method used? How sensitive is it to the different hyperparameters? How does it compare to simply asking GPT-4o for a new list of strategies? These are key questions that are completely unanswered.\n\nThe authors claim that generalization is a limitation of BC+RL, yet the paper does not show any experiments on generalization. This would be a great opportunity for the authors to show the compositionally that is afforded by language. It would also be a great opportunity to address another important are of concern: how much does the list of strategies affect the model? How much can you steer its behavior by changing the list? At the moment, it really isn't clear that the RL agent really responds to this conditioning.\n\nThe performance numbers reported for some of these tasks seems very low, which also comes from a limited set of baselines. In particular, I would really like to see the performance of GPT-4o on these tasks. Another family of baselines would be to compare to LLMs generating goals for the RL agent [3], which is relatively close to the ideas presented here. Notice that in that paper the results are significantly better than the numbers presented here."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-written and easy to follow.\n2. The authors provide illustrations of their method, which makes it clear to how it works."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed DYSTIL which integrates LLMs into a strategy-based neuro-symbolic reinforcement learning framework. The method aims to address the generalization issue of RL."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My biggest concern is the limited novelty and experiments. There are many papers that proposed strategy-generating methods as a summary or reflection of the past trajectories, such as [1]. The authors failed to discuss the similarities and differences between their method and these works.\n2. The experiments are only conducted in several environments from Minigrid. Whether this approach can generalize and how to design each component for different tasks remains unclear. Besides, the compared baselines are limited. I strongly encourage the authors to do literature reviews and add more baselines such as [1].\n\n[1] Shinn et al., Reflexion: Language Agents with Verbal Reinforcement Learning."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can DYSTIL generalize to other language-based decision-making tasks, such as those solved by ReAct (e.g., ALFWorld)? How could you extend your framework to accommodate these tasks?\n2. In the GLAM baseline paper[1], the average success rate converges and reaches high performance (i.e., over 80%) at approximately 1e6 steps. Is there a reason you chose 1e5 steps for evaluation? What causes the discrepancy between your configuration and results compared to theirs?\n3. In the ablation study, dynamic strategy updates are removed, so there is no $\\mathcal{L}_2$ in the static strategy settings. Does this result in more iterations compared to the proposed method based on the same training frames? I also want to confirm whether $\\mathcal{L}, \\mathcal{L}_1, \\mathcal{L}_2$'s executions are all counted in training frames.\n4. Can the strategies generalize to novel tasks? For instance, would training on Unlock Pickup help in solving Key Corridor?\n\n[1] Carta et. al. \"Grounding large language models in interactive environments with online reinforcement learning\". In Proceedings of ICLR 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Quality: The paper presents ideas through clear text and figures, aiding understanding of the overall concepts.\n- Significance: This paper demonstrates that the proposed method outperforms two baselines in grid-world environments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces DYSTIL, a neuro-symbolic reinforcement learning framework that integrates large language models (LLMs) to dynamically induce and internalize textual strategies, enhancing policy generalization and sample efficiency. By leveraging LLMs to provide strategy guidance based on expert demonstrations, DYSTIL improves interpretability and transparency in reinforcement learning tasks. Empirical results demonstrate that DYSTIL outperforms state-of-the-art methods by 17.75% in success rate across complex environments like Minigrid and BabyAI."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Scalability: DYSTIL’s reliance on closed-source, SOTA LLMs (e.g., GPT-4o) raises issues of scalability, reproducibility, and accessibility, especially for the model which needs to recurrently call strategy-generating LLM for each iteration. The paper also lacks ablation studies using different LLMs, which would help clarify the flexibility of using other LLMs for this work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dystil,\ntitle={{DYSTIL}: Dynamic Strategy Induction with Large Language Models for Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ABhAZCoGr},\nnote={under review}\n}"
},
"abstract": {
"value": "Reinforcement learning from expert demonstrations has long remained a challenging research problem, and existing methods resorting to behavioral cloning plus further RL training often suffer from poor generalization, low sample efficiency, and poor model interpretability. Inspired by the strong reasoning abilities of large language models (LLMs), we propose a novel strategy-based neuro-symbolic reinforcement learning framework integrated with LLMs called DYnamic STrategy Induction with Llms for reinforcement learning (DYSTIL) to overcome these limitations. DYSTIL dynamically queries a strategy-generating LLM to induce textual strategies based on advantage estimations and expert demonstrations, and gradually internalizes induced strategies into the RL agent through policy optimization to improve its performance through boosting policy generalization and enhancing sample efficiency. It also provides a direct textual channel to observe and interpret the evolution of the policy's underlying strategies during training. We test DYSTIL over challenging RL environments from Minigrid and BabyAI, and empirically demonstrate that DYSTIL significantly outperforms state-of-the-art baseline methods by 17.75% success rate on average while also enjoying higher sample efficiency during the learning process."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Neurosymbolic Systems",
"Reinforcement Learning",
"Large Language Models",
"Strategy"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d49d8dbae375e432284685600bc8e5372389f44f.pdf"
},
"presentation": null,
"primary_area": {
"value": "neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DYSTIL: Dynamic Strategy Induction with Large Language Models for Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1AYrzmDK4V | Watermark Smoothing Attacks against Language Models | main | Active | LLM Watermark | alignment, fairness, safety, privacy, and societal considerations | 1;3;3;6 | 4;4;3;3 | 3;2;2;2 | 1;2;2;3 | 3;2;3;3 | 3.25 | 3.5 | 2.25 | 2 | 2.75 | -0.70014 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "What are the main differences between this work and that of Zhang et al. (2023)? (see Weaknesses section)"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper proposes a heuristic to estimate which tokens contribute the most to the overall watermark signal and removes the watermark by editing these tokens using another language model. The idea is interesting, and the paper empirically validates the effectiveness of their attack across different watermarks, language models, and datasets. These results clearly establish the effectiveness of the attack in practice."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an automatic method for editing watermarked text from a language model to evade watermark detection using another (weaker) language model. The paper mainly considers the \"red-green list\" watermark of Kirchenbauer et al. and variants thereof, though the techniques should presumably generalize to other watermarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper distinguishes its main contributions from prior work by arguing that prior work on automatically removing watermarks involved using language models that were at least as strong as the original watermarked language model. However, one notable exception is the work of Zhang et al. [1], who seem to also focus on removing watermarks using weaker language models. This work is cited in the present paper but not discussed in any detail. It would be great if the authors can update their paper with a discussion of how their work differs from [1]. Otherwise, the novelty/significance of the main contributions over prior work is not clear.\n\n\n[1] Zhang et al. (2023) Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models. https://arxiv.org/abs/2311.04378"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Can the authors provide a baseline that uses the local reference model to do the paraphrase attack?\n- What could be potential adaptive defenses for this attack?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- I find the proposed method very interesting and quite different from the previous work. Meanwhile, the method doesn't require a strong oracle model like a paraphrasing attack, which makes the threat model more realistic.\n- I really enjoy reading this paper, especially section 3.1, which gives readers a lot of insights.\n- The results look positive and a lot of different watermarking schemes are covered (most results are presented in the appendix)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors introduce a novel watermark-removal attack that requires only a small watermark-free reference model. The attacker first estimates the probability of the generated token at position i being in the watermark's green list, which correlates with the relative confidence of the most likely token among the top k tokens. According to the confidence score, the attacker then combines the probability distributions at position i from both the watermarked model and the reference model to sample the token. This approach effectively evades watermark detection while maintaining high text quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed method relies on using the logits/output probabilities of the watermarked model. This might limit the attack to some API models that may not return the logits/probabilities or only return top-k probabilities or even calibrated probabilities.\n- The paper uses perplexity or loss to measure the text quality, but I think it's not enough to show the quality of the text. For example, the model can generate an answer for a math question with a very low perplexity, but the answer is completely wrong. So, I think it will be more helpful if the authors can include more text quality metrics like P-SP used in [1] or even a model-based evaluation like asking a large oracle model which generation is preferable.\n- I think it's also helpful to the paper if the answers can show the results under different data distributions instead of overall c4.\n\n[1] Kirchenbauer, J., Geiping, J., Wen, Y., Shu, M., Saifullah, K., Kong, K., Fernando, K., Saha, A., Goldblum, M., & Goldstein, T. (2023). On the Reliability of Watermarks for Large Language Models. ArXiv, abs/2306.04634."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Table 1, Watermark (smoothing) has a lower perplexity than Watermark (or even Unwatermark) in some cases (e.g., Llama2-7b). In other words, the attack can even improve the quality of the text, which seems counterintuitive as the reference model is weaker. This also raises a concern about whether perplexity is the right measure to look at the quality of a text here. The authors may want to include other text quality metrics in the numerical studies.\n2. I would like to know if the authors can discuss the potential pitfalls of their methods, e.g., provide concrete examples or scenarios where their smooth attack might fail, and discuss the implications of such failures"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Many existing methods for statistical watermarking have primarily concentrated on the generation and detection of watermarks. This paper takes a different approach by examining statistical watermarking from a new perspective. This perspective is interesting and may also aid in the development of improved watermark generation and detection techniques."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work develops a smooth attack in the “green-red list” watermarking framework. The paper shows that a smooth attack makes it easier to bypass the detector while still preserving the quality of the text."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The significance level $S_t$ is unobserved and was estimated using a surrogate quantity, $c_t$. Though the authors showed that there is generally a negative correlation between $c_t$ and $S_t$, this is only a weak justification. It is possible that a small $c_t$ would correspond to a large $S_t$ in some situations, e.g., when $K$ is small. \n2. The method only applies to the “green-red list” watermarking scheme, which is known to be biased because it does not preserve the original text distribution. In contrast, there are unbiased watermarking methods (e.g., Kuditipudi et al., 2023; Aaronson, 2023). It is unclear if the proposed method applies to unbiased watermarking schemes. Perhaps the authors can provide more discussions about how their method might be adapted or extended to work with unbiased watermarking schemes.\n3. The paper lacks a rigorous theoretical analysis of the effect of the smooth attack on the text quality, e.g., bounds on how much the smoothing attack can affect certain text quality metrics."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why in Figure 1, top-p sampling (right figure) has some points with the total variation distance being 0 or 1, but top-k sampling (middle figure) does not?\n2. How many queries (prefixes) do you use for computing the bin index as described in Lines[261-266]?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The writing is easy to follow.\n\n2. Propose a smoothing attack scheme against statistical watermarking, and show that the significance level $S_t$ is highly correlated with the total variation distance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a \"smoothing attack\" that bypasses statistical watermarking in large language models (LLMs). By blending outputs from the watermarked model with a weaker reference model, it removes watermarks without impacting text quality on PPL."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The applicability of this method is limited, as obtaining a high-quality reference model is often not possible (e.g. for GPT-4). Additionally, it requires access to token logits, meaning it is not a purely black-box approach as claimed.\n\n2. In line 146. The authors overclaim that their attack is universally applicable to all statistical watermarking schemes. However, many watermarking schemes [1,2,3] do not use a green list, and their proposed method cannot be applied.\n\n3. Additional metrics are needed to better reflect the quality of the generated text. PPL tends to favor a distribution similar to that of the oracle model, which can introduce bias. It would be more informative to include straightforward metrics, such as BLEU in machine translation, to provide a clearer evaluation.\n\n4. The paper lacks key baseline results needed to demonstrate the effectiveness of the proposed method. Naive smoothing using $\\lambda \\tilde{P}(x)+(1-\\lambda) P^{ref}(x)$ can also remove the watermark while preserving part of the text quality.\n\n5. The choice of z-score threshold used in the experiments is unclear. It would be more straightforward to present the true positive rates at specific theoretical false positive rates, providing a clearer understanding of the method’s performance.\n\n6. The experimental settings for certain tests are suboptimal. For instance, in Table 2, the z-score for XSIR and SIR is too low, indicating that the watermark strength in the original watermarked model is insufficient.\n\n[1] Kuditipudi, R., Thickstun, J., Hashimoto, T. and Liang, P., 2023. Robust distortion-free watermarks for language models. arXiv preprint arXiv:2307.15593.\n\n[2] Hu, Z., Chen, L., Wu, X., Wu, Y., Zhang, H. and Huang, H., 2023. Unbiased watermark for large language models. arXiv preprint arXiv:2310.10669.\n\n[3] Dathathri, S., See, A., Ghaisas, S., Huang, P.S., McAdam, R., Welbl, J., Bachani, V., Kaskasoli, A., Stanforth, R., Matejovicova, T. and Hayes, J., 2024. Scalable watermarking for identifying large language model outputs. Nature, 634(8035), pp.818-823."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024watermark,\ntitle={Watermark Smoothing Attacks against Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1AYrzmDK4V},\nnote={under review}\n}"
},
"abstract": {
"value": "Statistical watermarking is a technique used to embed a hidden signal in the probability distribution of text generated by large language models (LLMs), enabling the attribution of the text to the originating model. We introduce the smoothing attack and show that existing statistical watermarking methods are not robust against minor modifications of text. In particular, with the help of a weaker language model, an adversary can smooth out the distribution perturbation caused by watermarks. The resulting generated text achieves comparable quality to the original (unwatermarked) model while bypassing the watermark detector. Our attack reveals a fundamental limitation of a wide range of watermarking techniques."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM Watermark"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c92dc85ed9449585d4fba912de43cd2a2ad0e683.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b4c2fe02138d48e982bdf321f62c386d64d3d5a1.zip"
},
"title": {
"value": "Watermark Smoothing Attacks against Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1BdPHbuimc | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models | main | Active | large language model;question answering;chain-of-thought | applications to computer vision, audio, language, and other modalities | 5;5;8 | 3;4;3 | 3;2;3 | 3;2;3 | 4;1;3 | 6 | 3.333333 | 2.666667 | 2.666667 | 2.666667 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Could you clarify what “imputation” refers to in Table 2? Are there results available for CoA without MRFS, and what does “w/ ROUGE” mean? My understanding was that ROUGE is used only in ASQA.\n\n2. In Table 3, could you provide separate statistics for input and output tokens, as well as the average token usage per action? This would help readers better understand the specific cost details.\n\n3. Could you elaborate on what is meant by the term “knowledge boundary”?\n\n4. Are the results of the Chain-of-Action framework directly comparable to previous studies? I noticed that this study used GPT-4, while DSP and SearchChain relied on older-generation LLMs (text-davinci-002 and gpt-3.5-turbo, respectively).\n\n5. Would it be fair and perhaps clearer to rename Sections 2.2.1 and 2.2.2 as \"Data Collection\" and \"Data Verification,\" instead of “Actions Design” and “Actions Workflow”? These alternative terms seem easier to understand and align well with the content of the corresponding subsections."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This study introduces a framework embodying the divide-and-conquer approach, effectively breaking down complex tasks into manageable components that are tackled sequentially. This structure enhances the model's ability to handle multifaceted queries with improved precision.\n\n2. The empirical results demonstrate notable improvements in both performance and efficiency, as reflected in reduced API calls and token usage compared to prior methods. These gains underscore the framework’s effectiveness and potential for cost-saving in real-world applications.\n\n3. The introduction of the multi-reference faith score (MRFS) is a contribution, which effectively identifies and mitigates information conflicts, and improves answer reliability and trustworthiness in real-time applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Chain-of-Action (CoA) framework, a novel approach to multimodal and retrieval-augmented question answering that enhances the faithfulness and reasoning quality of large language models (LLMs). CoA addresses key challenges in QA, such as unfaithful responses and weak reasoning, by decomposing questions into a series of reasoning steps or actions that systematically retrieve and verify information from various sources. The framework introduces three \"Plug-and-Play\" actions—web querying, knowledge encoding, and data analyzing—that support multimodal data integration. Additionally, a multi-reference faith score (MRFS) is proposed to resolve inconsistencies and improve response accuracy. Experimental results demonstrate CoA’s effectiveness in handling complex questions across QA benchmarks and in real-world applications, particularly in the Web3 domain."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper’s primary weakness lies in how it presents its key concepts and narrative. Many claims, such as \"multimodal,\" \"plug-and-play,\" and \"action-based\" elements, lack direct evidence or clear definitions, making it challenging to follow the core contributions. Though the pipeline is straightforward, understanding the study's actual workflow is hindered by (1) inaccurate terminology, (2) loosely connected methodology descriptions, and (3) a mix of abstract workflows and technical details.\n\n2. Certain terms are uncommon or seem misapplied, which leads to confusion. For example, terms like \"multimodal\" (when referring to text and tabular data), \"chain-of-action\" (more of a \"chain-of-data-collection-and-verification\"), \"actions design\" (data collection), \"actions workflow\" (data verification), \"node\" (sub-question), and \"knowledge boundary\" (what a model actually knows) lack clarity and could benefit from more precise definitions or alternatives.\n\n3. Question decomposition appears critical to this framework, yet there is limited discussion on decomposition strategies or comparisons with existing baselines. Further elaboration here would strengthen the paper's contributions.\n\n4. The \"plug-and-play\" feature is presented as a low-cost prompting strategy; however, integrating retrieval for each data type (e.g., web, internal knowledge) may not be straightforward. It may be worth reconsidering or refining this claim to better reflect its implementation complexity.\n\n5. The paper’s claim of multimodal data handling is unclear. If the input consists of real-time information, domain knowledge, and tabular data, it may be more accurately described as handling heterogeneous data rather than multimodal data. Additionally, if tabular data is linearized as text for LLM input, the fundamental multimodal claim weakens.\n\n6. The study does not include ablations to show the specific contribution of tabular data. Providing such analyses could clarify its impact on the framework's performance.\n\n7. Section 3.2 mentions expert evaluation on a 1 to 3 scale based on three criteria, but it lacks details on the expert recruitment process, qualifications, and any inter-rater reliability metrics. Adding these details would increase the transparency and credibility of the evaluation process."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you elaborate on the key differences between \"thoughts\" in CoT and \"actions\" in CoA? How does this change improve the overall performance? It would also be helpful if you can discuss the limitations and trade-offs between them.\n2. If the system doesn't have the ability to add additional actions like web query, does CoA still perform better than CoT. \n3. Does CoA add significant latency to QA process?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors utilize newer multimodal LLM abilities to perform actions such as web query and data analysis. The authors come up with a new QA mechanism for LLMs which uses the actions The method is called Chain of Action(CoA).\n2. The authors demonstrate that this method significantly outperforms other reasoning and QA methods on many QA datasets. \n3. The improvement of using actions over thoughts does seem to be the natural way of solving a question. This approach has significant potential for improving QA capabilities of LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a new QA retrieval mechanism called Chain of Action(CoA). When a question is asked to an LLM, there is a prompt which generates a list of actions the LLM needs to take first to effectively answer the questions. They introduce a Plug and Play approach where in case of Multimodal LLMs, the actions taken can be integrated into the application. The actions can be web query or data analysis. The paper integrates 3 such actions. The LLM then performs each of the individual action generated and then there is another query which combines information from all the actions. The LLM then gives an answer based on the newly injected information"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Based on the number of actions to be taken and what kind of \"plug\" is used for the action, the time taken to finish all actions and send out an answer might become significant. It would have been good to see the study on latency(eg. average response time) of the system because of the new method.\n2. It would be helpful to conduct an ablation study when you remove specific action types to isolate their impact on performance. This would provide clearer insights on how much this method relies on additional capabilities. \n3. Comparing CoA with and without the ability to perform additional \"plugs\" across different types of questions can be useful in understanding the impact of this method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Can the authors provide more details on how the CoA framework could be adapted for tasks involving visual or mixed data modalities?\n2. How does the framework handle discrepancies or conflicts when sources provide contradictory information?\n3. Are there plans to explore CoA's performance in real-time, fast-evolving information retrieval scenarios where data may change rapidly (e.g., live news events)?\n4. Could the use of CoA extend to tasks requiring intricate reasoning paths that involve recursive or nested logic?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- **Innovative Framework**: The CoA's structured decomposition into sub-questions and its use of domain-adaptable plug-and-play actions represent a significant advancement in enhancing the faithfulness and accuracy of LLM responses.\n- **Empirical Validation**: Demonstrated strong performance on benchmarks and real-world applications, notably outperforming existing baselines in multimodal QA tasks.\n- **Verification Mechanism**: The multi-reference faith score is an effective metric for cross-validating LLM-generated answers against external sources, enhancing reliability.\n- **Practical Impact**: Real-world implementation in a Web3 QA system showed increased user engagement and positive feedback, validating the method's applicability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents the Chain-of-Action (CoA) framework designed to improve large language models' (LLMs) performance in multimodal and retrieval-augmented question-answering (QA). CoA addresses challenges such as hallucination and weak compositional reasoning by decomposing questions into reasoning chains. This method incorporates plug-and-play actions for retrieving diverse data sources and uses a novel multi-reference faith score for verification. Empirical results show CoA outperforms other methods in public benchmarks and real-world applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the CoA approach shows strong empirical performance, its adaptability to more diverse or unstructured data modalities beyond text and tabular data remains to be proven.\n- The scalability and efficiency when integrating more complex or real-time data sources require further exploration, especially in scenarios with rapidly changing information.\n- The approach, despite its modular design, may face challenges in tasks involving higher-order reasoning or complex multi-step dependencies that are not purely fact-based."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024chainofaction,\ntitle={Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1BdPHbuimc},\nnote={under review}\n}"
},
"abstract": {
"value": "We present a Chain-of-Action (CoA) framework for multimodal and retrieval-augmented Question-Answering (QA). Compared to the literature, CoA overcomes two major challenges of current QA applications: (i) unfaithful hallucination that is inconsistent with real-time or domain facts and (ii) weak reasoning performance over compositional information. Our key contribution is a novel reasoning-retrieval mechanism that decomposes a complex question into a reasoning chain via systematic prompting and pre-designed actions. Methodologically, we propose three types of domain-adaptable `Plug-and-Play' actions for retrieving real-time information from heterogeneous sources. We also propose a multi-reference faith score to verify conflicts in the answers.\nIn addition, our system demonstrates that detecting the knowledge boundaries of LLMs can significantly reduce both LLM interaction frequency and tokens usage in QA tasks. Empirically, we exploit both public benchmarks and a Web3 case study to demonstrate the capability of CoA over other methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language model",
"question answering",
"chain-of-thought"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e4915d4d96a9ccd999db7a92c8aef32d5ae71c8f.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1BlEVFmqwn | $\text{O}_\text{2}$VIS: Occupancy-aware Object Association for Temporally Consistent Video Instance Segmentation | main | Withdraw | Video instance segmentation;Long-term memory;Temprorally consistent learning | applications to computer vision, audio, language, and other modalities | Seunghun Lee;Jiwan Seo;Minwoo Choi;Kiljoon Han;Jaehoon Jeong;Ehsan Adeli;Sang Hyun Park;Sunghoon Im | ~Seunghun_Lee3;~Jiwan_Seo2;~Minwoo_Choi1;~Kiljoon_Han1;~Jaehoon_Jeong1;~Ehsan_Adeli1;~Sang_Hyun_Park1;~Sunghoon_Im1 | 3;3;3;6;6 | 4;3;4;3;2 | 3;2;2;3;2 | 2;2;2;3;2 | 3;1;2;3;2 | 4.2 | 3.2 | 2.4 | 2.2 | 2.2 | -0.763763 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Is the effectiveness of IOM and DOA useful in other tasks such as multiple object tracking?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Application-oriented Innovation: O2VIS introduces occupancy information into the memory update process, making it possible to maintain object identity consistency more accurately in scenes where objects frequently appear and disappear. This occupancy-aware memory management strategy provides a useful enhancement for video instance segmentation.\n\n2. Empirical Support: The experimental results on various benchmark datasets show improved average precision (AP) and average recall (AR), supporting the method's effectiveness. Additionally, the ablation studies validate the contributions of IOM and DOA, strengthening the reliability of the results.\n\n3. Well-designed Component Structure: The decoupled approach in DOA separately manages existing and new objects, using occupancy-guided Hungarian matching to reduce incorrect associations. This is a practical and effective design choice."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents the O2VIS framework, aiming to improve long-term consistency in video instance segmentation. By introducing Instance Occupancy Memory (IOM) and Decoupled Object Association (DOA), this method enhances the stability of object tracking in dynamic video scenes, effectively differentiating between recurring and new objects. The paper demonstrates the approach's performance on multiple benchmark datasets, such as YouTube-VIS and OVIS, highlighting its advantages in accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited Theoretical Innovation: The calculation of foreground probability is essentially a weighted adjustment using output probabilities and does not introduce a novel computational framework or algorithm. IOM and DOA represent more of an applied enhancement of existing memory and association techniques, rather than a fundamental theoretical breakthrough, which may limit the impact at conferences focused on theoretical innovation.\n\n2. Unclear Generalizability: The method is primarily designed for video instance segmentation, and its applicability to other tasks, such as multi-object tracking, has not been demonstrated. Verifying IOM and DOA’s effectiveness in other visual tasks would strengthen the paper’s generalizability.\n\n3. Dependence on Pre-trained Model Accuracy: Since the foreground probability relies on classification outputs, errors in these outputs could lead to incorrect memory updates, potentially destabilizing tracking performance. This dependency might reduce overall system stability, particularly when applied to longer or more complex video sequences."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- It is very hard to understand Figure 1. There are barely any labels and barely any text in the caption to explain what each of the icons in the figure means. The first teaser figure should be very easy to understand and should convey an overall takeaway from the method, and not describe the method itself. \n- Can you explain how you get an object's occupancy O near L206?\n- If an object's occupancy is 0, why should it's new feature representation be added to the memory?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The motivation is well-grounded that the objects should be treated differently based on if they have been seen before"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a method for long-term tracking consistency in videos for the task of video instance segmentation. The core idea is that using the visibility or occupancy of the objects can help in associating their features correctly so as to differentiate between new and previously seen objects. Treating these two kinds of objects differently by associating them separately also helps. Experiments exist that compare state-of-the-art approaches to the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- One of the most important baselines is missing for this task is SAM2 for video instance segmentation\n- While the motivation is good, similar ideas have appeared before for tracking, for instance in DeepSORT or Detecting Invisible People. These are not segmentation approaches but should be cited to acknowledge that both of the main contributions of this paper have appeared before.\n- It seems like the ID switches metric from multi-object tracking based on bounding boxes, is what the paper wanted to improve but there is no comparison to prior approaches with that metric so it is hard to tell if their claim of long-term consistency is valid over an entire dataset."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see Weaknesses"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Innovation: The study proposes an instance occupancy memory mechanism that addresses challenges in maintaining consistency when objects disappear and reappear, making it well-suited for complex, dynamic video scenes.\nPerformance: The experimental results show that O2VIS significantly outperforms current state-of-the-art methods across multiple datasets, especially in AP scores.\nDecoupled Strategy: By implementing a decoupled association strategy for handling existing and new objects separately, the method avoids common background misalignment issues, enhancing tracking accuracy.\nComprehensive Experiments: The paper provides thorough experimental comparisons with existing VIS methods, and ablation studies validate the effectiveness of each technical component, demonstrating the contributions of IOM and DOA modules."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces O2VIS, a novel framework for video instance segmentation that enhances long-term consistency in object tracking. The framework incorporates an Instance Occupancy Memory (IOM) module and a Decoupled Object Association (DOA) strategy, effectively distinguishing between new and recurring objects across frames. By decoupling the association of existing and newly appeared objects, the method maintains stable and consistent object identities throughout videos. Experimental results demonstrate that O2VIS achieves state-of-the-art AP scores on the YouTube-VIS 2019, 2021, and 2022 datasets, setting a new benchmark for the VIS task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Insufficient Details: While the paper introduces the Occupancy-guided Hungarian Matching and Decoupled Object Association strategies, implementation details are limited. Providing pseudocode or more concrete algorithmic descriptions could enhance clarity.\nComputational Cost: The addition of IOM and DOA likely increases computational complexity, particularly due to multi-frame memory updates and association matching. It would be beneficial to quantify the computational overhead of these modules within the paper.\nGeneralizability: Experiments are currently focused on standard datasets like YouTube-VIS. The model’s performance in more challenging scenarios, such as high occlusion or rapid object movement, remains unclear.\nModel Complexity: With the integration of multiple modules, the overall model structure is complex, which may pose deployment challenges. Future work could explore simplifying the model or improving its efficiency."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1 The authors should include a comprehensive comparison of computational resources, including model parameters, inference speed, and memory usage, with existing methods. This would provide crucial context for understanding the practical trade-offs of their approach. \n\n2 Additionally, including more detailed pseudo-code for key algorithms and visualizations of memory usage patterns would enhance the technical clarity of the paper. \n\n3 Finally, an analysis of failure cases and performance on longer video sequences would provide valuable insights into the method's limitations and potential areas for improvement."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1 The paper's technical contributions are both novel and well-executed. The IOM mechanism provides an elegant solution to the challenging problem of maintaining object identity consistency, while the decoupled association strategy effectively addresses the issue of new object appearances. \n\n2 The comprehensive experimental evaluation, including extensive ablation studies, convincingly demonstrates the effectiveness of each component. The strong performance across multiple benchmarks further validates the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces O2VIS, a novel framework designed to enhance long-term consistency in video instance segmentation. The work presents two main technical innovations: an Instance Occupancy Memory (IOM) for tracking global instance features and their occupancy status, and a Decoupled Object Association (DOA) strategy that separately handles existing and new objects. The framework demonstrates state-of-the-art performance on YouTube-VIS benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1 The paper does not provide comparisons of model parameters and inference speed with existing methods, making it difficult to assess the practical implications of implementing this approach. \n\n2 There is no discussion of memory consumption or runtime benchmarks, which are crucial considerations for real-world applications. \n\n3 Some technical details, particularly regarding the IOM update mechanism and the interaction between TE and TA trackers, could be explained more thoroughly."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and easy to follow. A thorough experimental analysis has been performed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an occupancy memory and a decoupled object association to track global features of objects long term and to ensure consistent matching of new and old objects. The proposed method achieves good performance on the VIS task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited technical novelty: The paper proposes some techniques to improve VIS, but all of these techniques have been seen in other tracking/VIS works in some form or the other. For example, the IOM is similar to global track queries in trackformer [2], Hungarian matching to align current objects to a global memory in DOA has been explored before in many works.\n2. Incremental improvement: The results in table 1 and 2 often show a minimal improvement as compared to prior works. For example, on the youtube vis 2019 dataset, the method only gets a 0.2 points improvement over DVIS++ using the R50 backbone. Similar trend is observed for other datasets and other backbones. These improvements could often just come from randomness during training, so it would be nice if the authors could put error bars in the tables to demonstrate consistency. \n3. Some prior works (e.g., CAROQ [1]) use query-based propagation for global tracking. How does the proposed method compare with such a method in terms of the number of parameters involved in tracking and the tracking speed? The proposed method requires 2 networks for tracking, as opposed to 1 network in most prior works, so some comparison table on the average time taken and the parameters involved solely for tracking would also be insightful.\n4. There are some typos in the paper, e.g., a capitalized letter mid-sentence in line 47.\n\n\n[1] Choudhuri et al., Context-Aware Relative Object Queries to Unify Video Instance and Panoptic Segmentation, CVPR 2023\n[2] TrackFormer: Multi-Object Tracking with Transformers, Meinhardt et al., CVPR 2022"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose an occupancy-guided memory and temporally consistent object association pipeline."
},
"_bibtex": {
"value": "@misc{\nlee2024textotextvis,\ntitle={\\${\\textbackslash}text\\{O\\}\\_{\\textbackslash}text\\{2\\}\\${VIS}: Occupancy-aware Object Association for Temporally Consistent Video Instance Segmentation},\nauthor={Seunghun Lee and Jiwan Seo and Minwoo Choi and Kiljoon Han and Jaehoon Jeong and Ehsan Adeli and Sang Hyun Park and Sunghoon Im},\nyear={2024},\nurl={https://openreview.net/forum?id=1BlEVFmqwn}\n}"
},
"abstract": {
"value": "In this paper, we present Occupancy-aware Object Association for Video Instance Segmentation ($\\text{O}_{\\text{2}}$VIS), a new framework crafted to improve long-term consistency in instance tracking. We introduce the Instance Occupancy Memory (IOM) that tracks global instance features and their occupancy status to effectively differentiate between recurring and new objects. It ensures consistent tracking and effective management of object identities across frames, enhancing the overall performance and reliability of the VIS process. Moreover, we propose a Decoupled Object Association (DOA) strategy that handles existing and newly appeared objects separately to optimally assign indices based on occupancy. This technique enhances the accuracy of object matching and ensures stable and consistent object alignment across frames, especially useful in dynamic settings where objects frequently appear and disappear. Extensive testing and an ablation study confirm the superiority of our method over traditional methods, establishing new standards in the VIS domain."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Seunghun_Lee3",
"~Jiwan_Seo2",
"~Minwoo_Choi1",
"~Kiljoon_Han1",
"~Jaehoon_Jeong1",
"~Ehsan_Adeli1",
"~Sang_Hyun_Park1",
"~Sunghoon_Im1"
]
},
"authors": {
"value": [
"Seunghun Lee",
"Jiwan Seo",
"Minwoo Choi",
"Kiljoon Han",
"Jaehoon Jeong",
"Ehsan Adeli",
"Sang Hyun Park",
"Sunghoon Im"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Video instance segmentation",
"Long-term memory",
"Temprorally consistent learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "lee|\\texto_\\text2vis_occupancyaware_object_association_for_temporally_consistent_video_instance_segmentation"
},
"pdf": {
"value": "/pdf/2559f6b728d197bf558232ea5fc8abb942b31cc9.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "$\\text{O}_\\text{2}$VIS: Occupancy-aware Object Association for Temporally Consistent Video Instance Segmentation"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
1CIUkpoata | 6D Object Pose Tracking in Internet Videos for Robotic Manipulation | main | Active | 6DoF pose estimation;robotic manipulation from video | applications to computer vision, audio, language, and other modalities | 5;5;6;6 | 4;4;4;4 | 3;3;3;3 | 3;3;3;3 | 3;3;3;3 | 5.5 | 4 | 3 | 3 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why in Figure 1 and Figure 2, the same image has two different retrieved CAD models?\n2. Can you provide the results of the error based on the quality of the retrieved CAD model?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The task of predicting the 6D pose of internet videos without additional prior is important for a lot of downstream tasks.\n2. The whole pipeline is reasonable, fetch the similar CAD model and do rough alignment. Then further leverage the 2D tracking results to get the smoothed trajectories, that are more motion-consistent across time.\n3. The experiments on the retargeted motion on robotics further show the usefulness of the extracted smoothed trajectories."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a pipeline for extracting the 6D pose trajectory from an internet video without the need of the CAD for the specific object. The authors leverage vision features to retrieve the most similar CAD model of the object, then do per-frame alignment leveraging the same vision features of the original image and rendered from the CAD. They further estimate the rough object size using LLM and leverage 2D tracking models to get inter-frame rotation consistency. The authors conduct experiments and demonstrate their superior performance. They also show demos that their trajectory can be retargeted to guide the movement of the robot."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors demonstrate that compared to model-based methods, whose performances suffer from the inaccurate CAD mode, their method addresses the challenge. However, there is lack of experiments compared to SOTA model-based methods with their fetched CAD models (e.g. FoundationPose with their retrieved CAD model).\n2. In the 6D pose alignment part, the method applies a sapling-based trajectory to get the rotation, which potentially limits the accuracy of the rotation. In the results figure, there are some rotation errors, not sure if due to the sampling-based strategy or the DINO feature extractor.\n3. For the robotics demo, the end-effector position control is on 6D pose or only on the rotation? From the Figure 9, the translation of the end-effector seems not consistent with the original video and in the simulator"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- l. 323, are the ground-truth meshes contained in the object datasets? \n- Table 1, was the same scale estimate for the meshes used for MegaPose and GigaPose like for the proposed method? \n- Which dynamics model is used for the optimization problem in eq 4? How is tracking of the optimized trajectory implemented?\n- See additional questions in sec. \"Weaknesses\"."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed approach for detecting and estimating 6D motion of unknown objects from RGB images is novel and interesting.\n- The paper is well written and easy to follow.\n- The set of experiments demonstrate the shape retrieval and pose estimation well and also compare with state of the art methods.\n- A qualitative example is provided with a real robot which show the robot pouring from one object to another."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new approach to detect and track the 6-DoF pose of unknown objects from RGB video. The approach is motivated by robot imitation learning from internet video. The approach uses off-the-shelf open-set object detectors, foundation models for segmentation, vision-language (CLIP), and visual features (DINOv2) to detect objects, retrieve similar shapes from a database of CAD models, and matching the object image with a set of rendered views of the object CAD model to estimate 3D orientation. Experimental evaluation is performed quantititvely on YCB-Video and HOPE-Video datasets and a comparison is made with state of the art object detectors for unseen objects for which the CAD model is assumed known (MegaPose, GigaPose). Also, qualitative results on EPIC-Kitchen, and an example of executing the estimated object trajectories on a real robot are shown."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- l. 197ff, CAD model retrieval by rendering views and calculating visual features seems expensive in both, the database generation and the retrieval stage for large datasets such as Objaverse-LVIS. What is the retrieval time for these datasets and how is it implemented to make retrieval efficient?\n- l. 220ff proposes to retrieve rotation by matching to a set of rendered views. What is the choice of N in the experiments? What is the avg/std angular distance between sampled rotations?\n- l. 243ff, the way to prompt the LLM in the supplementary is an offline procedure to collect size estimates for approximately 2200 objects. In the main paper, the description reads as if the LLM is prompted for each detected object using the CLIP text classification. Please describe this more clearly. What if the detected object is not included in the offline calculated set ? \n- l. 286, was estimating the motion of the camera relative to the static background evaluated in this work ? Please clarify.\n- The optimization problem in eq 4 does not provide a description of the used system dynamics model. \n- l. 361, please write more clearly, that while a similar mesh is known, the retrieved mesh does not exactly correspond to the ground truth mesh which is an assumption used for MegaPose and GigaPose. \n- Please introduce the pCH metric formally, at least in the supplemental material. The current description is insufficient.\n- l. 519ff, the real robot experiment is rather anecdotal and lacks important details in its descriptions and quantitative evaluation (e.g., success rate). How are the observed object trajectories transfered to the real robot experiment incl. considering the change of view point and embodiment? How does the robot know where the manipulated objects are and how is this matched to the observed object motion? \n- Fig. 8, in the upper additional qualitative result, the bowl object pose is not correctly tracked. Why does the robot still turn the object in a quite different angle ?\n\nAdditional minor comments:\n- Fig. 6, rightmost real robot image seems to be a repetition of the image next to it. Was the wrong image included?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It might be great if the authors could ablate on the performance variation under different LLMs. Currently it only applies GPT-4, but it is important to know how different LLMs might influence the performance (i.e. one GPT-3.5 & one open-source LLM).\n2. What's the efficiency & cost of such pipeline when performing inference on a 1-minute Instructional videos? \n3. Using a CAD model can be costly since it requires a large database to store predefined meshes, and in open-world scenarios, finding an exact match is often unlikely. However, numerous approaches avoid relying on CAD models. For instance, \"6DGS: 6D Pose Estimation from a Single Image and a 3D Gaussian Splatting Model\" [ECCV 2024]. Have you tried experimenting with such methods? Or say, how do you envision those methods' strengths and weaknesses compared to your method.\n4. For the standard evaluation, it might be beneficial to add another dataset evaluation using different cameras, say iPhone sensor as proposed in \"Robust 6DoF Pose Estimation Against Depth Noise and a Comprehensive Evaluation on a Mobile Dataset\" to further validate the approach's generalizability."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The impact of the paper is dominant in the way that it provides an envision of enriched data for robotic manipulation without human labor force to construct the specific datasets. The methodology is intuitive and the performance enhancement is non-trivial. The paper is overall well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present a novel approach to extract temporally consistent 6D pose trajectories of manipulated objects from Internet videos to be applied with robotic manipulation task. It tackles the challenges posed by uncontrolled capture conditions, unknown object meshes, and complex object motions. Their evaluation on YCB-V and HOPE-Video datasets shows state-of-the-art performance, with successful motion transfer to a robotic manipulator in both simulated and real-world settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My primary concern lies with the methodological novelty, as the approach largely involves applying an existing pipeline to internet videos. Specifically, the use of an LLM for estimating object scale may be questionable, given potential uncertainties around its accuracy in providing a realistic scale for each object. Aside from this, the methodology essentially adapts previous methods to fit the proposed pipeline. Given these factors, I feel this work might not align with ICLR's focus but could be more suited to a robotics conference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1 With the similar CAD model retrieval, the classification can also be obtained. I wonder if it is possible to use the CAD model to perform classification directly?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1 The pose estimation method by retrieving a CAD model, aligning the retrieved CAD model with the object, and grounding the object scale with respect to the scene.\n\n2 Consistent 6D pose trajectory estimation from Internet videos and retargeting trajectories to a robotic manipulator.\n\n3 The pose estimation improvement on YCB-V and HOPEVideo datasets, and transfer from 6D object motion to a 7-axis robotic manipulator."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper pays attention on 6D pose trajectory estimation of a manipulated object from an Internet instructional video with a novel framework. The framework first predicts the 6D pose of any object by CAD model retrieval. Then the smooth 6D object trajectories are extracted and retargeted via trajectory optimization into a robotic manipulator. Experiments on YCB-V and HOPE-Video datasets demonstrate the improvements over RGB 6D pose methods. Moreover, the 6D object motion can be transferred to a 7-axis robotic manipulator."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1 The original contributions should be expressed more clearly. In the proposed method, various existing methods are employed. It is suggested to clearly distinguish the original contributions in this paper and usage of other methods. Specifically, the first contribution locates in the pose estimation method by retrieving a CAD model, aligning the retrieved CAD model, and grounding the object scale with respect to the scene. The subsequent question is that what is the original contribution, the whole pipeline or the detailed design of a particular module? The authors are suggested to express this more clearly in the revised version. For the second and third contributions, it is also recommended to present more clear expressions. \n\n2 For robotic manipulation, the running time of the pose estimation method is a key factor. The proposed method in the paper is somewhat time-consuming with 2s for detector, retrieval and scale estimation per scene and 0.2s for pose estimation per object. To further improve the paper, two suggestions are given. For one thing, the comparaions with other methods on running time are suggested to add. For another, more analysis about the running time is also preferred, such as the recommendations for accelerate the whole method."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A method to estimate 6D pose and trajectory of an object in the Internet video"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024d,\ntitle={6D Object Pose Tracking in Internet Videos for Robotic Manipulation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1CIUkpoata},\nnote={under review}\n}"
},
"abstract": {
"value": "We seek to extract a temporally consistent 6D pose trajectory of a manipulated object from an Internet instructional video. This is a challenging set-up for current 6D pose estimation methods due to uncontrolled capturing conditions, fine-grained dynamic object motions, and the fact that the exact mesh of the manipulated object is not known. To address these challenges, we present the following contributions. First, we develop a new method that estimates the 6D pose of any object in the input image without prior knowledge of the object itself. The method proceeds by (i) retrieving a CAD model similar to the depicted object from a large-scale model database, (ii) 6D aligning the retrieved CAD model with the input image, and (iii) grounding the absolute scale of the object with respect to the scene. Second, we extract smooth 6D object trajectories from Internet videos by carefully tracking the detected objects across video frames. The extracted object trajectories are then retargeted via trajectory optimization into the configuration space of a robotic manipulator. Third, we thoroughly evaluate and ablate our 6D pose estimation method on YCB-V and HOPE-Video datasets and demonstrate significant improvements over existing state-of-the-art RGB 6D pose estimation methods. Finally, we show that the 6D object motion estimated from Internet videos can be transferred to a 7-axis robotic manipulator both in a virtual simulator as well as in the real world. Additionally, we successfully apply our method to egocentric videos taken from the EPIC-KITCHENS dataset, demonstrating potential for Embodied AI applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"6DoF pose estimation",
"robotic manipulation from video"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/615dd70e5b1540a6e035285198f7dd6fdd5c5d76.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/25538edbe05f4d6d9b33e61503a69a7887b0fc84.zip"
},
"title": {
"value": "6D Object Pose Tracking in Internet Videos for Robotic Manipulation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1CLzLXSFNn | TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis | main | Active | time series;pattern machine;predictive analysis | learning on time series and dynamical systems | 5;6;10 | 4;5;4 | 1;3;4 | 1;4;4 | 3;3;4 | 7 | 4.333333 | 2.666667 | 3 | 3.333333 | -0.327327 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Q1. The proposed architecture adds significant computational cost to the internal representation of the model compared to vanilla transformers and some of the previously proposed models. It seems this does not have a significant effect on the training time compute and memory complexity of the model. Have the authors conducted any studies to compare the inference-time cost of TM++ compared to other methods? \n\nQ2. As mentioned by authors, some time series tasks (imputation, anomaly detection) benefit more from diverse representations while others like forecasting and classification benefit from consistent representation. Given this, is there any way to leverage a routing model dependent on the proposed task type, which could lower the inference-time cost of this model?\n\nQ3. MTS-Mixer (Li et. al., 2023) presents another approach to channel decomposition which similarly outperformed competing models, but they found the approach worked best with MLPs rather than attention-based models. Have the authors explored this technique for separating from attention mechanisms which could lead to further efficiency and model performance?"
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "S1. The proposed model captures both short- and long-term dependencies by transforming time series data into multi-resolution images, enabling the analysis of complex temporal and frequency-domain patterns that challenge traditional models. The authors validate this with experimental results showing the new architecture outperforms SOTA models on most standard benchmarks. The ablation study helps validate the importance of the individual parts of the architecture – the channel mixing, image decomposition, and multi-scale and multi-resolution mixing. This approach continues to validate the benefits of integrating image analysis techniques with time series tasks.\n\nS2. The architecture is flexible for supporting different kinds of time-series tasks. The hierarchical multi-scale and multi-resolution mixing modules enable the model to flexibly adapt across various time series tasks, from forecasting to anomaly detection, promoting robust and accurate performance across applications.\n\nS3. Empirical Validation: the testing in this paper on eight benchmark time series tasks, including the hyperparameter ablation results, shows TIMEMIXER++ consistently surpasses both general-purpose and task-specific models, affirming its potential as a high-performance, general-purpose solution for time series analysis. The experiments were very thorough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents TIMEMIXER++, a general-purpose model for various time series tasks, including forecasting, classification, anomaly detection, and imputation. Utilizing a multi-scale, multi-resolution framework, the proposed method transforms time series data into multi-resolution images to capture complex temporal and frequency-domain patterns, enabling flexibility across analytical applications. The model’s approach includes dual-axis attention for decomposing seasonal and trend components and hierarchical multi-scale and multi-resolution mixing to integrate patterns across scales. The proposal achieves strong performance across eight benchmark tasks, outperforming both general-purpose and task-specific models. This work contributes to advancing time series analysis with new state-of-the-art benchmarks across settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. There is little exploration of scaling of model size, which would be an interesting avenue for validating the model architecture in a zero shot setting. The current zero-shot experiments are primarily in-domain and not cross-task."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses (W2, W3, W4)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. the authors introduce a robust framework TimeMixer++ that leverages multi-resolution time imaging, multi-scale mixing, and dual-axis attention to enhance general time series analysis. They present SOTA results on four different tasks.\n2. the integration of both multi-scale and multi-resolution mixing strategies for adaptive pattern extraction demonstrates innovation.\n3. the manuscript and appendix are well-prepared, but the authors have not yet released the promised code."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents TimeMixer++, an advanced framework designed to enhance general time series analysis. TimeMixer++ integrates multi-resolution time imaging, multi-scale mixing, and dual-axis attention mechanisms to effectively capture and adapt to diverse patterns within time series data. This innovative approach allows for robust and flexible analysis across various temporal scales and resolutions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. the fonts in the figures should be enlarged for better readability. For example, in Figure 1 (right), the label \"Benchmarking model performance across representation analysis in four tasks\" appears blurred. Additionally, consider using a single set of legends for all four tasks to enhance clarity.\n2. the source code repository has not released for reproducing, i will consider raising the score if the released repository and the consistency of the results.\n3. more detail on how it compares to recent models like TimesNet and iTransformer on specific time series tasks would strengthen the paper’s claims.\n4. including a discussion on computational efficiency (e.g., FLOPs, memory usage) for different tasks could enhance the paper’s utility."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "[1] Questions follow from the points listed in the weakness section.\n[2] What is the naming of the method, i.e., TimeMixer++ in terms of? or just because both the methods target processing multi-scale time series? \n[3] What is the connection with the method TimeMixer?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The methods used in the paper (e.g., time imaging and image decomposition) are very interesting. The evaluation is comprehensive: the authors discuss long and short-term forecasting, zero-short forecasting, classification, and anomaly detection."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a time series pattern machine method called TimeMixer++ for processing multiscale time series. The method transforms time series into multi-resolution time images to enable pattern extraction with respect to temporal and frequency domains followed by three dominant modules - (1) input projection; (2) a stack of Mixerblocks; (3) output projection. Extensive experiments show the proposed method obtains improvement over the well-established competing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In terms of the forecasting results shown in Tables 3 and 4, the performance gain is negligible, and such minor improved performance certainly can be attributed to the parameter tuning, e.g., a well-tuned parameter settings for TimeMixer++ while a weak parameter settings for other competing methods. \n\nThe paper barely offers insights both theoretically and experimentally. The theoretical understanding of the improvement as well as its time imaging and multi-resolution mixing is lacking, mostly based on intuition and simply blending the models. \n\nThere are some papers that already discussed the use of frequency analysis and the frequency components extraction for model deployment (e.g., [1][2][3]) to capture the periodic patterns, and they all claim it can capture the global interaction and patterns among time series, so what is the benefits of introducing multi-resolution time imaging, and it is worthwhile to compare them in ablation study? In addition, it is encouraged to cite the papers [1][2][3] if not yet in the references. \n\n\n\nReferences:\n[1] Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors https://arxiv.org/pdf/2305.18803\n[2] TFDNet: Time-Frequency Enhanced Decomposed Network for Long-term Time Series Forecasting https://arxiv.org/abs/2308.13386\n[3] FEDNET: FREQUENCY ENHANCED DECOMPOSED NETWORK FOR OUT-OF-DISTRIBUTION TIME SERIES CLASSIFICATION https://openreview.net/forum?id=OVu9DsOjgH"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "TimeMixer++ is a time series pattern machine that employs multi-scale and multi-resolution pattern extraction to deliver SOTA performance across 8 diverse analytical tasks, including forecasting, classification, anomaly detection, and imputation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024timemixer,\ntitle={TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1CLzLXSFNn},\nnote={under review}\n}"
},
"abstract": {
"value": "Time series analysis plays a critical role in numerous applications, supporting tasks such as forecasting, classification, anomaly detection, and imputation. In this work, we present the time series pattern machine (TSPM), a model designed to excel in a broad range of time series tasks through powerful representation and pattern extraction capabilities. Traditional time series models often struggle to capture universal patterns, limiting their effectiveness across diverse tasks. To address this, we define multiple scales in the time domain and various resolutions in the frequency domain, employing various mixing strategies to extract intricate, task-adaptive time series patterns. Specifically, we introduce \\method, a general-purpose TSPM that processes multi-scale time series using (1) multi-resolution time imaging (MRTI), (2) time image decomposition (TID), (3) multi-scale mixing (MCM), and (4) multi-resolution mixing (MRM) to extract comprehensive temporal patterns. MRTI transforms multi-scale time series into multi-resolution time images, capturing patterns across both temporal and frequency domains. TID leverages dual-axis attention to extract seasonal and trend patterns, while MCM hierarchically aggregates these patterns across scales. MRM adaptively integrates all representations across resolutions. TimeMixer++ achieves state-of-the-art performance across 8 time series analytical tasks, consistently surpassing both general-purpose and task-specific models. Our work marks a promising step toward the next generation of TSPMs, paving the way for further advancements in time series analysis."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"time series",
"pattern machine",
"predictive analysis"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f2234161056e92e06729fbbd68ea8ecfb4ae47f8.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on time series and dynamical systems"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TimeMixer++: A General Time Series Pattern Machine for Universal Predictive Analysis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1CRu6bGx25 | Crack in the Armor: Universal Stability Measurement for Large Language Models | main | Active | Large Language Models;sensitivity analysis;local influence measure | foundation or frontier models, including LLMs | 3;3;5 | 4;3;3 | 2;2;3 | 2;2;3 | 2;2;3 | 3.666667 | 3.333333 | 2.333333 | 2.333333 | 2.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* How does the method presented in the paper differs from existing works?\n* Figure 3, Table 1, Figure 4: A baseline where the parameters are randomly selected is needed. What are the performances of such a baseline?\n* Table 1, Figure 3: How does the method compares to other pruning methods such as the one presented in this survey [2] \n* Table 2: How does the method compares to other model merging method such as the one presented in this survey [3] \n\n\n[2] https://arxiv.org/pdf/2308.06767\n[3] https://arxiv.org/pdf/2408.07666"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper presents interesting and novel applications of saliency maps to model quantization and model merging.\n\nThe paper strongly motivates the usage of saliency maps."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "A white-box method for identifying the most important/salient regions of an input for making a prediction is presented. The paper describes the method using the formalism of differential geometry and apply it to VLMs and LLMs. For VLMs, they apply their method to sensitivity analysis For LLMs, they apply their method to model quantization and model merging."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I currently suggest to reject this paper on the basis that I don't understand the novelty of the method and due to the lack of comparisons with the baselines. I am open to changing my mind, but I strongly encourage the authors to focus on those specific points in their response and be very clear how their method differs from existing works.\n\nNovelty of the method: Not clear how the proposed method is different from other analysis methods such as Saliancy maps [1] and adversarial perturbations. A throughout analysis of the related methods should be presented and compared against in the paper.\n\nComparison with the baselines: Several compelling applications of the method are proposed, but no comparison to existing baseline methods tacking these applications. The authors should consider comparing the presented method against relevant baselines on standard benchmarks so that the reader can assess the usefulness of the method.\n\nClarity of the paper: I found the paper to be hard to follow. The paper introduces unnecessarily abstracts notions to describe the method. I don't understand why such abstraction is needed to describe the idea presented in the paper. Moreover, a lot of terms a unnecessarily defined. For example, $l(\\omega|y,x,theta)$ could be written as $\\log P(y|x,\\theta,\\omega)$ and $f(\\omega)$ as $-P(y_\\text{pred}|x,\\theta,\\omega$ and it would make the reading clearer. Some terms like $h_j$ are not clearly defined.\n\n[1] https://arxiv.org/abs/1312.6034"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ FI offers a theoretically grounded approach to assessing parameter and input sensitivity, which can support robustness improvements across applications.\n\n + The paper provides experiments on various models, including applications of FI in quantization and model merging, demonstrating practical value."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a \"First-order Local Influence\" (FI) metric for quantifying LLM and VLM sensitivity to perturbations. By examining both internal (parameter) and external (input) perturbations, the FI metric aims to identify model weaknesses and improve robustness through selective parameter preservation. Experiments demonstrate FI’s potential in tasks like model quantization and merging."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper is positioned primarily around LLMs and VLMs, but these stability concerns are more broadly applicable to general ML. A broader contextual framing would benefit the paper.\n\n- The choice to protect high-FI parameters during distillation/model merging is questionable since some high-FI parameters might correspond to irrelevant or “nonsensical” inputs.\n\n- Prior works on Fisher Information Matrix (FIM) in pruning and parameter sensitivity (e.g., Frantar & Alistarh, 2023; Yu et al., 2024) and Sharpness-Aware Minimization (SAM) (Foret et al., 2021) are not mentioned. These are relevant for contextualizing FI's robustness contributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses.\n\n- Can you provide more details about the compute cost of your approach?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The proposed approach seems effective in identifying input regions/parameters that have a large effect on model output. \n- The authors test their approach in a range of applications. \n- I like the application of identifying sensitive parameters that should remain intact during quantization/sparsification."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a new approach for understand VLM predictions, especially relating to their robustness to perturbation. Their metric (FI) essentially estimates the change in the model output with respect to input (or parameter) perturbations. The authors test their metric by identifying for a range of images the pixels that affect model predictions the most and altering them. Furthermore, the authors test their approach with respect to input parameters by identifying crucial parameters that should be left intact during quantization, and validating that performance deteriorates less when they're not changed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper does not compare against existing baselines. \n- On the sensitivity to input pixels, how does this approach compare to Grad-CAM [1] and subsequent work? It is important to see a quantitative analysis.\n- On the sensitivity to model parameters, it would be nice to see a comparison with existing approaches, e.g., [2], ...\n- I feel there is too much going on in the paper: merging sensitivity to input images + parameters at the same time seems too much for a single project. I would suggest focusing on one and studying it in detail.\n- Sensitivity of VLMs under different prompts is interesting but requires further analysis especially as to which changes in the prompts affect the influences, the semantic closeness of images and prompts, etc. \n\n\n[1] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Selvaraju et al., 2016.\n[2] Decomposing and Editing Predictions by Modeling Model Computation, Shah et al., 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024crack,\ntitle={Crack in the Armor: Universal Stability Measurement for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1CRu6bGx25},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) and Vision Language Models (VLMs) have become essential to general artificial intelligence, demonstrating impressive capabilities in task understanding and problem-solving. The real-world functionality of these large models critically depends on their stability. However, there is still a lack of rigorous studies examining the stability of LLMs when subjected to various perturbations. \nIn this paper, we aim to address this gap by proposing a novel influence measure for LLMs. This measure is inspired by statistical methods grounded in information geometry, offering desirable invariance properties. Using this framework, we analyze the sensitivity of LLMs in response to parameter or input perturbations. \nTo evaluate the effectiveness of our approach, we conduct extensive experiments on models of varying sizes, from 1.5B to 13B parameters. The results clearly demonstrate the efficacy of our measure in identifying salient parameters and pinpointing vulnerable areas of input images that dominate model outcomes. Our research not only enhances the understanding of LLM sensitivity but also highlights the broad potential of our influence measure in optimizing models for tasks such as model quantization and model merging."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models",
"sensitivity analysis",
"local influence measure"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fc26c5826bd765c826a0c9ba426c7c5f46dcc26f.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Crack in the Armor: Universal Stability Measurement for Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1CeIRl147S | Domain-specific Benchmarking of Vision-Language Models: A Task Augmentation Framework Using Metadata | main | Active | VLM;Benchmark;Annotation;Ambiguity | datasets and benchmarks | 3;5;5 | 5;4;3 | 2;3;2 | 1;2;2 | 2;2;3 | 4.333333 | 4 | 2.333333 | 1.666667 | 2.333333 | -0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "$I(C_{i,q,m})$ in Eqn.(1) is not explained."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) The paper is well-structured and easy to follow.\n\n2) The paper thoughtfully considers building strong domain-specific VLM benchmarks while sparing human annotation costs. I agree that picking the right tasks is challenging.\n\n3) Building benchmarks on existing ones with re-annotations is a smart and efficient way to control data quality and diversity. The data curation pipeline may be helpful to the community.\n\n4) Extensive evaluation results are provided. Some observations are somewhat interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new paradigm for Vision-Language Models (VLMs) by creating multiple tasks from a single existing task, called task augmentation. This is achieved by re-annotating an existing benchmark with various tools for diverse purposes. The new paradigm is validated on the COCO and KITTI datasets. Extensive experiments on the created benchmarks are conducted, giving several interesting observations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Although the idea is smart, the applicability of the data re-annotation pipeline is unknown. Currently, it is demonstrated on COCO and KITTI where instance-level annotations are provided. It would be good to elaborate more about how to generalize the data generation pipeline.\n\n2) I do not make it clear how the proposed approach can address the challenges listed in Sec.1: domain-specific validation, picking the right tasks, balancing quantity and quality.\n\n3) The notes drawn from the evaluation results seem not new for authors. Similar conclusions can be seen in various VLM evaluation papers. \n\n4) I do not see a reason why the proposed approach can be more useful than existing evaluation benchmarks. A detailed comparison with existing ones should be presented.\n\n5) The paper lacks an analysis of the evaluation results or evaluation approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could you offer a detailed demonstration of the human effort required in each dataset creation stage? This would help in understanding the resource-efficiency and automation of the \"Automatic task augmentation\" technique.\n2. How does this benchmark compare to existing VLM benchmarks in terms of task quantity, question diversity, and problem difficulty? A thorough comparison would highlight the benefits of the proposed task augmentation method.\n3. Can you clarify the task generation method using metadata? Is this done through pre-set question templates, generated by LLMs, or manual writing? A clear description of this would be valuable for reproduction. \n4. Could you include the statistical data about the 25 tasks, such as the number of questions in each task?\n\n[1] Luo Z, Xu C, Zhao P, et al. Wizardcoder: Empowering code large language models with evol-instruct[J]. arXiv preprint arXiv:2306.08568, 2023.\n[2] Muennighoff N, Liu Q, Zebaze A, et al. Octopack: Instruction tuning code large language models[J]. arXiv preprint arXiv:2308.07124, 2023.\n[3] Shypula A, Madaan A, Zeng Y, et al. Learning performance-improving code edits[J]. arXiv preprint arXiv:2302.07867, 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper presents a new benchmark for evaluating VLMs, which contributes for the development of this field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a domain-specific benchmark for evaluating Vision-Language Models (VLMs), utilizing a task augmentation technique. The benchmark provides interesting conclusions, such as considerable model performance variations across related domains. However, the primary contribution—the automatic and efficient task augmentation technique—warrants further examination. And some important details concerning the benchmark lack clarity. In summary, I think this work makes a valuable contribution but requires further revisions for publication."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The core contribution, \"Automatic task augmentation\", as claimed in line 98, appears not to be \"automatic\" nor generally available. The dataset creation still involves considerable human efforts, including metadata annotation, rule-writing, task template design, and multi-round refinement of prompts (lines 308-309).\n2. The concept of \"Task Augmentation\", although presented as new, has been thoroughly studied in previous works [1,2,3]. These works have explored methods of generating additional tasks using metadata or simple tasks for either model evaluation or instruction tuning."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see above"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The paper addresses a critical issue: developing evaluation datasets for domain-specific benchmarking of VLMs\n\n* It includes an extensive evaluation using a diverse set of VLMs across various model sizes, enhancing the robustness of the findings.\n\n* The method demonstrates effectiveness, as even powerful models struggle with some tasks, demonstrating that the generated benchmark is challenging. \n\n* Human validation is incorporated to ensure clarity of image-question pairs and reduce ambiguity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method for repurposing existing vision datasets to new visual tasks by leveraging the same imagery and obtaining additional metadata through a combination of human input, simple heuristic rules, and pre-trained models (e.g., segmentation and depth models). The generated data is then used to evaluate a comprehensive set of existing VLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* While the authors formalized a pipeline for “task augmentation,” the concept of repurposing existing imagery from available datasets and leveraging metadata (using off-the-shelf models or human input) to evaluate different tasks or augment training sets is well-explored in prior work. For instance, see [1],[2],[3],[4] among many others. In a way or another those benchmark repurpose existing vision datasets and use either humans or off-the-shelf models to generate additional metadata or VQA type questions. \n\n* The paper initially frames itself as a method for generating validation data for domain-specific foundation models with predefined, specific purposes. However, most models evaluated are “generalist” VLMs rather than “specialist” models. This is fine but the motivation and message should be adjusted accordingly. Additionally, while the motivation includes applications in fields like pathology and autonomous driving, no data or model relevant to these high-stakes areas is evaluated. Thus, the suitability of the pipeline for evaluating such specialized tasks remains uncertain.\n\n* The writing could be further refined, as some sections take longer to convey main points. Streamlining sections such as the introduction, Section 2.2, and Section 3.3 could improve clarity and flow.\n\n* While the proposed metric evaluation may be intuitive to the authors, incorporating more widely recognized metrics alongside individual scoring for each task could improve the benchmarks' accessibility and broader adoption.\n\n* Some important figures, like Figure 4, are difficult to interpret due to crowding. Grouping models by parameter count or model family could help clarify these visuals. Models differing in parameter count by more than 10x may not need to be displayed together unless a significant point is being illustrated.\n\n* In addition to releasing the code, sharing the final generated dataset could enhance its utility for the community, potentially offering greater practical value than the code alone.\n\nOverall, I recommend that the authors improve the writing and presentation, with an emphasis on the benchmark and findings as the main focus rather than the data generation pipeline.\n\n[1] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs\n[2] SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models\n[3] Reasoning Paths with Reference Objects Elicit Quantitative Spatial Reasoning in Large Vision-Language Models\n[4] Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a task augmentation framework using metadata to create resource-efficient, domain-specific benchmarks for vision-language models, revealing that model performance varies significantly across domains, even on the same tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024domainspecific,\ntitle={Domain-specific Benchmarking of Vision-Language Models: A Task Augmentation Framework Using Metadata},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1CeIRl147S},\nnote={under review}\n}"
},
"abstract": {
"value": "The reliable and objective evaluation of AI models is essential for measuring scientific progress and translating methods into practice. However, in the nascent field of multimodal foundation models, validation has proven to be even more complex and error-prone compared to the field of narrow, task-specific AI. One open question that has not received much attention is how to set up strong vision language model (VLM) benchmarks while sparing human annotation costs. This holds specifically for domain-specific foundation models designed to serve a predefined specific purpose (e.g. pathology, autonomous driving) for which performance on test data should translate into real-life success. Given this gap in the literature, our contribution is three-fold: (1) In analogy to the concept of data augmentation in traditional ML, we propose the concept of task augmentation - a resource-efficient method for creating multiple tasks from a single existing task using metadata annotations. To this end, we use three sources to enhance existing datasets with relevant metadata: human annotators (e.g. for annotating truncation), predefined rules (e.g. for converting instance segmentations to the number of objects), and existing models (e.g. depth models to compute which object is closer to the camera). (2) We apply our task augmentation concept to several domains represented by the well-known data sets COCO (e.g. kitchen, wildlife domain) and KITTI (autonomous driving domain) datasets to generate domain-specific VLM benchmarks with highly reliable reference data. As a unique feature compared to existing benchmarks, we quantify the ambiguity of the human answer for each task for each image by acquiring human answers from a total of six raters, contributing a total of 162,946 human baseline answers to the 37,171 tasks generated on 1,704 images. (3) Finally, we use our framework to benchmark a total of 21 open and frontier closed models. Our large-scale analysis suggests that (I) model performance varies across domains, (II) open models have narrowed the gap to closed models significantly, (III) the recently released Qwen2 72B is the strongest open model, (IV) human raters outperform all VLMs by a large margin, and (V) many open models (56\\%) perform worse than the random baseline. By analyzing performance variability and relations across domains and tasks, we further show that task augmentation is a viable strategy for transforming single tasks into many and could serve as a blueprint for addressing dataset sparsity in various domains."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"VLM",
"Benchmark",
"Annotation",
"Ambiguity"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a4eb7a860608608f42edb1f867e2e28e251a875a.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/4c1e34711d39d97836414add3f633b29a35adb27.pdf"
},
"title": {
"value": "Domain-specific Benchmarking of Vision-Language Models: A Task Augmentation Framework Using Metadata"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1D3TjFidCS | Logarithmic Linear Units (LogLUs): A Novel Activation Function for Improved Convergence in Deep Neural Networks | main | Active | Activation Function;Deep Neural Networks;Optimisation | learning theory | 1;3;5;5 | 5;2;3;5 | 2;2;2;2 | 1;2;2;3 | 2;2;2;2 | 3.5 | 3.75 | 2 | 2 | 2 | -0.174078 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please check the weaknesses and respond to the comments. Here is a summary:\n\n(1). Address the unsupported claims in the paper.\n\n(2). Include more experimental results for ablation studies, more neural architectures, and more tasks."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper proposes a new activation function for deep neural networks. This is an important topic, considering the significant impact of activation function choice on deep neural network performance. The paper is clear and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work presents the new logarithmic linear unit (LogLU) activation function for deep neural networks.\nThe LogLU activation solves the problem of vanishing gradient.\nThis paper shows that LogLU outperformed the other activation functions considered."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper does not support some of its claims with enough evidence. For example: Under the abstract, we have: \"Its capability to solve fundamental yet complex non-linear tasks, such as the XOR problem, with fewer neurons demonstrates its efficiency in capturing non-linear patterns\". There is no evidence to support the claim that LogLU uses fewer neurons. You can strengthen this claim by providing the evidence to support this.\n\nUnder the conclusion, we have: \"The empirical results show that LogLU consistently outperforms traditional activation functions in terms of convergence speed, stability, accuracy, and loss reduction.\". The measure of stability is not clear in this paper. You can strengthen this by explaining how you observed the stability of the networks.\n\nThe experiments are limited and insufficient to conclude that LogLU is better than the other activation functions for deep neural networks. This paper did not address possible interaction with other components of a neural network (For example: dropout, learning rate, batch normalization, and so on). Please consider an ablation study that examines LogLU's interaction with other neural network components like dropout, batch normalization, etc.\n This work only considered some image classification tasks. This is not representative enough to generalize over all deep neural networks. For example, consider other cases such as simple generative models, language-based tasks, and so on."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1.\tSome inconsistent/undefined concepts? The loss function used in Section 3.2 seems to be binary cross entropy loss. While this might be obvious to some, the loss function was not defined prior to Section 3.2, which make the further discussion confusing. In Section 5, the authors talk about achieving greater “stability” with LogLU. Stability in what? This term in not (well-)defined in the paper. \n\n2.\tLack of error analysis/multiseed runs: The work lacks any error analysis (no error bars in plots or tables) whatsoever. Moreover, all the loss/accuracy curves were evaluated for a single seed. Showing the robustness of LogLU in a multiseed setting will enhance the efficacy of the proposed approach. \n\n3.\tExtend empirical comparison scope: Include additional activation functions, particularly the newer ones like Parametric RSigELU, ErfReLU, etc. to establish a more comprehensive benchmarking framework. Further, investigate LogLU’s performance on diverse and prominent architectures like DenseNet, ResNet, VGG, etc. to reinforce its general applicability.\n\n4.\tDetailed computational complexity analysis: A more granular breakdown of the time complexity will enhance the results of the paper. It might be worth performing time-complexity analysis for images instead of multiple realizations of a large vector of fixed size. Test and report the computation time of LogLU within different network architectures (e.g., shallow networks, ResNet, VGG) and layer types (e.g., dense layers vs. convolutional layers). This analysis can reveal how the activation function’s computational demands vary with the network’s depth, type, and layer configuration, especially for architectures optimized for speed.\n\n5.\tComparison to other methods for mitigating vanishing/exploding gradients issue: There are other successful and competitive methods for mitigating vanishing/exploding gradient problems at the architectural level such as the ResNet architecture. These tackle the gradient issue via architectural design using skip-connections and identity mapping to reformulate the CNN layers for learning residual functions, while specially engineered activation functions address it via their mathematical properties like non-saturating properties (LeakyReLU), gradient preservation (Swish, GELU) for negative inputs, incorporating learnable parameters (Parametric ReLU or PReLU) etc. While exploring architecture vs. activation function for solving gradient issue is out of the scope of this work (which focuses solely on activation functions), a detailed discussion highlighting other non-activation function based techniques for overcoming vanishing/exploding gradient problem will help with the completeness of the paper. \n\n6.\tExamine gradient flow in various conditions: Explore gradient dynamics with respect to learning rate schedules and optimizers to provide insight into how LogLU performs under different training regimes. Additionally, ablation studies on placement within specific layers could clarify LogLU’s most impactful applications.\n\n7.\tTheoretical insights on regularization effect: Since the logarithmic component potentially regularizes activations for negative inputs, discussing theoretical implications related to regularization could open new perspectives on the theoretical advantages of LogLU in avoiding overfitting."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Innovation in activation functions: The proposal of LogLU as a hybrid activation function is novel and provides an interesting alternative to traditional activation functions. The logarithmic component for negative inputs introduces a unique way to handle the dead neuron problem while also limiting gradient vanishing, especially compared to ReLU and Leaky ReLU.\n\n2. Experiments performed: The authors performed evaluations on classification benchmark datasets (Caltech 101 and Imagenette) and used InceptionV3 architecture for the classification task. The consistent improvements in Val accuracy (on Caltech 101) and convergence speed were presented in Tables 3 and 4 and Figures 4 and 5, which suggest that LogLU might be a competitive alternative to existing activation functions.\n\n3.\tPerformance on classification task: By demonstrating that LogLU can solve the XOR problem with a simplified architecture, the authors underscore LogLU’s efficiency in capturing non-linear relationships with fewer neurons, an advantage for both resource efficiency and model scalability.\n\n4.\tAddressing gradient problems: The paper discusses how LogLU mitigates the vanishing and exploding gradient problems, which are common in deeper networks due to the use of traditional activation functions. LogLU’s bounded gradient across all input values is well-explained and experimentally supported, potentially making it an optimal choice for complex neural architectures.\n\n5.\tEfficient computation: The paper also presents an analysis of computation times, demonstrating that LogLU is computationally efficient (Figure 2). LogLU achieves an average computation time significantly lower than other activation functions (except ReLu and Leaky ReLU), with performance that consistently outpaces more complex alternatives like Mish and Swish."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new activation function, Logarithmic Linear Unit (LogLU), aimed at addressing issues inherent in widely used activation functions like ReLU, Leaky ReLU, ELU, etc. LogLU uses a logarithmic function for negative inputs, allowing it to maintain active gradients even with negative inputs, potentially reducing issues like dead neurons and vanishing gradients. Experiments are conducted comparing LogLU with other established activation functions across datasets like Caltech 101 and Imagenette using the InceptionV3 architecture. The authors highlight benefits in convergence speed and accuracy, proposing LogLU as a robust alternative for deep learning models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tLack of a rigorous study/analyses: Although the paper tries to solve an important problem in deep learning based training of CNNs in the presence of vanishing/exploding gradient problem, the work done in the current version of the paper appears to be very preliminary in nature and there is a huge scope for improvement. \n\n2.\tComparison with more recent activation functions: While the paper covers popular functions like ReLU, ELU, and Swish, it could benefit from comparisons with other activation functions such as SiLU, GELU, Softplus or more recent alternatives like Parametric RSigELU (Kiliçarslan, et al, Feb 2024) and ErfReLU (Rajanand, et al, May 2024). Including such comparisons would provide a broader perspective on LogLU’s competitive positioning.\n\n3.\tAccuracy on Imagenette dataset: There does not seem to be any significant gain in performance on the Imagenette dataset, where activations such as Swish and Mish marginally beat the proposed activation function. Therefore, the claims of better performance is not applicable on this dataset. \n\n4.\tComputational Complexity Analysis: Although the authors claim computational efficiency, the complexity analysis could be strengthened. The time complexity is presented in aggregate form (average time over multiple runs), but there is limited discussion on LogLU's computational demands relative to exponential or polynomial components in activation functions like ELU or Mish, which could help enhance the claims of efficiency.\n\n5.\tScalability to other deep CNNs and datasets: While the experiments are valuable, they focus primarily on moderately sized datasets for only image classification tasks. Testing LogLU on larger datasets, such as the MNIST, CIFAR10, COCO, CelebA, Pascal VOC, SVHN, etc., and using architectures beyond InceptionV3 (e.g., ResNet or transformer-based models) could provide deeper insights into LogLU’s applicability in large-scale settings.\n\n6.\tScalability to loss functions beyond cross-entropy: Since the gradient computation depends on loss function, it would be highly valuable to assess the effectiveness of LogLU for different loss functions for the classification task. These directions were not explored in the current version of the work. \n\n7.\tScalability to tasks beyond classification: The effectiveness of LogLU on other tasks such as image segmentation, object detection or image generation, etc. remains unexplored. The work could potentially benefit from showing superior performance/computational efficiency over other activation functions in a variety of other prominent computer vision tasks.\n\n8.\tAblation Studies: The effectiveness of LogLU in specific neural network layers (e.g., convolutional layers vs. dense layers) or different learning rates and optimizers remains unexplored. Adding ablation studies could help isolate the benefits of LogLU more distinctly across various configurations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* On page 1, the manuscript states: \"Although Leaky ReLU addresses this problem by permitting small negative values, it introduces the vanishing gradient problem, limiting its effectiveness in deep networks (Maas, 2013).\" However, I believe that Leaky ReLU does not introduce the vanishing gradient problem. In fact, Leaky ReLU was proposed to mitigate issues like the dying ReLU problem by allowing a small, non-zero gradient for negative input values. Additionally, no such discussion regarding Leaky ReLU introducing vanishing gradients is found in Maas et al. (2013).\n\n* On page 5, the manuscript states that Table 1 shows the derivative of LogLU, but Table 1 does not include this information. Please update Table 1 to include the derivative of LogLU or revise the manuscript to accurately reflect the contents of Table 1.\n\n* On page 6, the term \"more controlled activations\" is ambiguous and requires clarification. The authors should provide a clear definition or explanation of what is meant by \"more controlled activations\" to enhance the reader's understanding.\n\n* The lines in the figures are difficult to distinguish. Please use more distinct colors or linestyles to enhance clarity. \n\n* On page 7, why are the model sizes different across datasets, even though Inception-V3 is used for both?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The proposed LogLU is very simple while being both continuous and differentiable. It requires less computation than modern activation functions such as Swish because it does not involve exponential computations in either the forward or backward pass, although it only requires logarithmic computation in the forward pass."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes LogLU as a new activation function, which is both continuous and differentiable.\nLogLU is empirically shown to be computationally more efficient compared to modern activation functions such as Swish or Mish, but requires slightly more computation than ReLU or Leaky ReLU.\nThe authors claim that a simple one-hidden-layer MLP with LogLU activation can learn the XOR function.\nLogLU is compared to other activation functions using the Caltech-101 and Imagenette (a simplified variant of ImageNet) datasets with the Inception-V3 architecture, demonstrating faster convergence of models with LogLU activation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The authors claim that a simple MLP with LogLU activation can learn the XOR function, highlighting this as an advantage of using LogLU. However, MLPs with other activation functions are also capable of learning the XOR function. The authors should discuss why using LogLU is more advantageous than other activation functions in the context of the XOR example.\n\n* The experimental evaluations are insufficient. At a minimum, it is necessary to compare the proposed activation function with other methods using network architectures beyond Inception-V3. Additionally, each experiment should be conducted with various random seeds to assess the variability of the outputs (loss or accuracy)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How to justify the theoretical reason for using the log function, could you give any intuition?\n2. Here are some thoughts to justify LogLU and address the theoretical side. $f=-\\log(-x+1)$ solves an instance of Monge-Ampère equation $$\\log\\det f''=2f$$ \nwhere $\\det$ is the analog in the high-dimensional case, associated with Dirichlet boundary condition $\\lim_{x\\to\\partial \\Omega} f=\\infty$ on the domain $\\Omega=(-\\infty,1)$. We can alternatively set a Neumann boundary condition $f'(0)=1$ on $\\Omega=(-\\infty,0)$ to guarantee the $C^1$ continuity. The intuition is that the logarithmic curvature is proportional to the value. The property includes self-concordance and logarithmic homogeneity.\nSee [1] in Chapter 2.3.3: properties; Chapter 2.5: universality---the log function as a canonical construction.\nSee [2] in Proposition 1.4.3: a connection with the Calabi theorem.\n\n[1] Interior point polynomial time methods in convex programming. A. Nemirovski 2004.\n\n[2] Conic optimization: affine geometry of self-concordant barriers and copositive cones. R. Hildebrand 2017."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. I believe this direction of activation search is fundamentally impactful in deep learning, because it changes the basic part of neural networks. Though the experiments are very limited, it is already a good sign that this important part works better.\n2. The paper's message is minimal, direct, and clear."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to use $-\\log(-x+1)$ in ReLU instead of $0$ when the input is negative. The performance on fine-tuning InceptionV3 on Caltech101 and Imagenette is improved over ReLU, ELU, Leaky ReLU, Swish (SiLU) and Mish."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My major concern is that the experiments are restricted to very limited data and models, so LogLU's validity is still questionable on other models and tasks.\n2. More specifically, the results would be convincing if the author could add experiments on common models, such as , ResNet, UNet, and Transformers. If LogLU works on more models I believe it will improve the paper. \n3. Another solution that could help is to ask if it is possible to find a dataset or a toy model where LogLU significantly outperforms other activations.\n3. The model has 73M parameters for Caltech 101 and 37M for Imagenette, both pre-trained on the Imagenet dataset. I don't understand why the models are both InceptionV3 but are different in size.\n4. I don't understand why the experiments only include fine-tuning, but not training from scratch."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "The Logarithmic Linear Unit (LogLU) is a novel activation function designed for deep neural networks, improving convergence speed, stability, and overall model performance"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024logarithmic,\ntitle={Logarithmic Linear Units (Log{LU}s): A Novel Activation Function for Improved Convergence in Deep Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1D3TjFidCS},\nnote={under review}\n}"
},
"abstract": {
"value": "The Logarithmic Linear Unit (LogLU) presents a novel activation function for deep neural networks by incorporating logarithmic elements into its design, introducing non-linearity that significantly enhances both training efficiency and accuracy. LogLU effectively addresses common limitations associated with widely used activation functions include ReLU, Leaky ReLU, and ELU, which suffer from issues like the dead neuron problem and vanishing gradients. By enabling neurons to remain active with negative inputs and ensuring effective gradient flow during backpropagation, LogLU promotes more efficient convergence in gradient descent. Its capability to solve fundamental yet complex non-linear tasks, such as the XOR problem, with fewer neurons demonstrates its efficiency in capturing non-linear patterns. Extensive evaluations on benchmark datasets like Caltech 101 and Imagenette, using the InceptionV3 architecture, reveal that LogLU not only accelerates convergence but also enhances model performance compared to existing activation functions. These findings underscore LogLU's potential as an effective activation function that improves both model performance and faster convergence."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Activation Function",
"Deep Neural Networks",
"Optimisation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/daa47b24c9ed168fdcd48a71eb55f57188cf2b30.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Logarithmic Linear Units (LogLUs): A Novel Activation Function for Improved Convergence in Deep Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1DEEVAl5QX | Mini-batch Submodular Maximization | main | Active | smoothed analysis;submodular maximization | optimization | 3;5;6 | 4;4;4 | 3;3;3 | 1;2;2 | 3;3;4 | 4.666667 | 4 | 3 | 1.666667 | 3.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you for the review. We address your comments below.\n\nWeaknesses:\n- \"The lunch menu optimization example, while a clear illustration, does not really motivate the problem from a practioner's perspective\"\n\nAgreed, we chose this for simplicity. However, utility maximization is an extremely natural problem. Other examples for utility maximization include: adding medical services to a healthcare package as to maximize the welfare of all patients, adding features to a website to maximize user engagement, etc...\n\n- \"There are no p-system experiments\"\n\nIndeed, we couldn't find a real-world dataset for this problem. Previous papers seems to either be completely theoretical (no experiments), or run experiments just under a cardinality constraint.\n\nQuestions:\n\nQ1) It is quite natural in welfare maximizations (e.g., many people with different preferences). Another example is finding a representative set of images (e.g., thumbnails for a video). Here N can be very large (the number of frames in the video), clearly there is plenty of redundancy, so our approach is very natural here.\n\nQ2) It is defined just above Model 1. It is the set $\\{f^i(e)\\}_{i\\in [N]}$ and in our models we assume that every $f^i(e)$ (the value of the i-th func on e) is a random variable, not the set $A_e$.\n\nQ3) We roughly followed the paper of Rafiey and Yoshida which introduced this set. They simply say that they select a set of \"popular pickup locations in the dataset\". We used k-means to pick \"popular locations\"."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you for your review. \n\nAbout the figure, basically all algorithms achieve almost the same quality when we sample enough elements.\nWe will move Section 4 before Section 3 if accepted."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you for your review. \n\nWe would like to emphasize that our main contribution is the introduction of the *uniform* sampling algorithm, observing that it outperforms other approaches empirically, and using smoothed analysis to bridge the gap between theory (no worst case analysis possible) and practice. We believe that this algorithm can be used as the first line of attack for many real-world massive datasets. The improved weighted sampling is \"nice to have\" and lays the groundwork for the smoothed analysis of the uniform sampling algorithm, but this is not our main contribution.\n\nWe address your questions below. \n\nQ1) An upper bound is not sufficient. This is because our proofs require a *multiplicative* error bound for the minibatch approach to work. Consider the following example: all functions except one are always zero ($f^i \\equiv 0, i\\neq j$), and one function, $f^j$, is upper bounded by 1. Clearly both the minibatch and the sparsifier algorithms can't optimize the sum as they will keep sampling functions that are always 0. The above example is unlikely to appear in real world applications, but it illustrates that worst-case analysis is simply not the right tool here. This is why we use smoothed analysis to explain the superior performance of uniform sampling in practice.\n\nQ2) Yes, specifically Model 2. The assumptions of Model 2 are *extremely* mild and we verify empirically that they hold for *all* of our datasets. We would like to emphasize that we only introduced smoothed analysis in this revision of the paper, while we used the same datasets in previous revisions. That is, we did not simply pick datasets where our models apply (and indeed Model 1 does not apply to all datasets)."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Our main contribution is the uniform alg + smoothed analysis."
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* If the $f_i$ are all bounded by a value $R$, could theoretical guarantees be gotten for uniform sampling?\n* Do you expect Models 1 and 2 would hold widely in applications of decomposable functions?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Exploring submodular optimization algorithms that do not view the function $f$ as simply a black box is an interesting research direction that I think deserves attention.\n- They explained their results clearly and the paper was easy to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper considers maximization of decomposable monotone submodular functions over a ground set of size $n$, meaning that the objective function $f$ is a sum of $N$ monotone submodular functions $f_1,...,f_N$. If $N$ is large, then evaluations of $f$ may be computationally demanding. Previous work on the topic (Rafiey & Yoshida, 2022; Kenneth & Krauthgamer 2023) proposes constructing a random sparsified version of $f$ that is a weighted sum of some subset of the functions, and is within a multiplicative $\\epsilon$ factor approximation on all sets. A sparsifier such as those mentioned could be constructed as a preprocessing step for an algorithm, and then the algorithm would be run using the sparsifier in place of the original function. The state of the art is that of Kenneth & Krauthgamer, where a sparsifier of $O(k^2n\\epsilon^{-2})$ functions is constructed using $O(Nn)$ oracle calls. The sparsifier is constructed by iterating over the functions, computing a probability $p_i$ for each function $f_i$ to be included, and then sampling that function with probability $p_i$ (which takes a total of $O(Nn)$ queries). Then querying the sparsifier takes $O(k^2n\\epsilon^{-2})$ function evaluations, compared to $O(N)$ function evaluations to query the original $f$. If $N$ is relatively large, the sparsifier is more efficient.\n\nInstead of computing a sparsifier as a preprocessing step for an algorithm, this paper proposes a \"mini-batch method\" (which have been used in other areas of ML) for this problem (Algorithm 3). That is, a new sparsifier is sampled every iteration of the greedy algorithm. The approach in this paper uses the same sampling probabilities $p_i$ as Kenneth & Krauthgamer, and therefore still needs the $O(Nn)$ queries as a preprocessing step to compute the $p_i$. In order to prove some of the results in their paper, they make additional assumptions on the problem setting (Models 1 and 2). Several analyses are done on the number of function queries needed for their algorithm. Finally, they include an experimental comparison of their algorithm and related works."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It seems a lot of the difficulty of these sparsification approaches is because the sampling of the $f_i$ is non-uniform, but it is still unclear to me that this is so much better than uniform sampling. According to this paper, uniform sampling does better in practice, and requires no preprocessing to compute the $p_i$ since they would be uniform. It is also stated that no theoretical bound can be gotten for uniform sampling. But if we assume that all the $f_i$ are bounded by some value $R$, why can't concentration inequalities be used to get a theoretical guarantee for the uniform approach?\n- Some of the results are dependent on assuming Models 1 or 2 (see Table 1), but it isn't clear to me that these models are realistic for applications of the problem.\n- Improvements over Kenneth and Krauthgamer mainly include the curvature of the function in the bound on the number of function queries, so the bounds are instance dependent.\n- The bounded curvature results (which don't depend on Models 1 and 2) don't use ideas that are that novel compared to related work. It seems the biggest difference from Kenneth and Krauthgamer is computing the sparsifier at each round of the greedy algorithm, and only relatively minor changes are needed to the argument of Kenneth and Krauthgamer."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "It might be better to put Section 4 before Section 3 to ensure the continuity of the analysis."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Overall, the paper is well-structured and easy to understand. The definitions and explanations are clear, and related work is discussed in sufficient detail. \n\nThe discussion on uniform and weighted sampling, along with the smoothing model, helps bridge the gap between theoretical results and the empirical performance of the algorithms. It provides insights into why an algorithm without a worst-case guarantee can still perform well in experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the problem of maximizing a non-negative, monotone, decomposable submodular function under the cardinality constraint and $p$-system constraint. It introduces the first mini-batch algorithm with weighted sampling for this problem, demonstrating that it outperforms the sparsifier-based approach both theoretically and empirically. Additionally, the authors observe that, in experiments, uniform sampling outperforms weighted sampling. To explain this outcome, they define two smoothing models. The first model provides theoretical guarantees for both the mini-batch and sparsifier algorithms on some datasets, while the second model applies only to the mini-batch algorithm but is effective across all datasets tested."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The algorithm is simple, and the analysis is quite straightforward. The technical contribution is limited. \n\nWith 12 indistinguishable lines in Figure 1, it is hard to see which algorithm with $\\beta=10^{-2}$ achieves the best performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In the introduction, you claim that \"in many of the above applications, $N$\n (the number of underlying submodular functions) is extremely large, making the\n evaluation of $F$ prohibitively slow.\" Are there realistic examples where $N\n \\gg 1000$? It's not clear to me how often we really encounter $N$ *distinct*\n personalized submodular functions.\n- What exactly is the quantity $A_e$ when you first introduce it on page 3?\n This should be made more clear. Initially, I thought it was a vector of all\n marginal values, but then in model 1 you say it's a random variable.\n- For the Uber pickups experiment, why do you use Llyod's algorithm to find\n centers instead of a data-indepedndent grid?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Uses smoothed analysis to more accurately study realistic inputs\n- Table 1 cleanyl describes the results, including a comparison with [Kenneth-Krauthgamer, ICALP 2024]\n- Draws connections to the lazier-than-lazy greedy algorithm of [Mirzasoleiman et al., AAAI 2015]\n and explains how the two ideas can be combined to reduce query complexity by a factor of $\\Theta(k)$\n- Good comprehensive set of experiments for cardinality constraints, though the\n values of $k \\le 20$ are quite small. It would be nicer to increase $k$ to see\n how fast the different algorithms converge (relatively) to lazy greedy"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies a sampling-based algorithm for faster non-negative monotone *decomposable* submodular maximization subject to\ncardinality or $p$-system constraints. In particular, it builds on work of\n[Kenneth-Krauthgamer, ICALP 2024] (please update reference in paper), which sparsifies\nand reweights the set of functions $f^{(i)}(S)$ for the input function $F(S) = \\sum_{i=1}^N f^{(i)}(S)$.\nThe goal of this paper is to eliminate the dependence on $N$, which the authors do under mild assumptions\nvia *smoothed analysis*. They also show that this is not possible in the general case with a simple pathological example.\nIn short, the main idea is to sample a subset of $f^{(i)}(S)$ functions at each step to form a\n\"mini-batch\" for approximating the full $F(S)$. The algorithm then greedily\nselect the next element based on the sampled funciton (which changes in each iteration), not $F$ itself.\n\nFurther, under the mild realistic assumptions, they prove why uniform sampling is a competitive approach,\nwhich helps explain initially surprising experimental observations.\nLastly, this work provides a clean set of experiments comparing their mini-batch sampling-based methods to\na full lazy greedy algorithm and the sparsification idea in [Kenneth-Krauthgamer, ICALP 2024]."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The lunch menu optimization example, while a clear illustration, does not\n really motivate the problem from a practioner's perspective\n- There are no $p$-system experiments\n- It is unclear if ICLR is an appropriate venue for this work. The\n non-exhaustive list of topics in the Call for Papers includes \"optimization\",\n but submodular maximization in its raw form seems one hop away from the\n target areas of ICLR (deep learning)"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Present the first mini-batch algorithm for submodular maximization; use smoothed analysis to justify performance"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024minibatch,\ntitle={Mini-batch Submodular Maximization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1DEEVAl5QX},\nnote={under review}\n}"
},
"abstract": {
"value": "We present the first *mini-batch* algorithm for maximizing a non-negative monotone *decomposable* submodular function, $F=\\sum_{i=1}^N f^i$, under a set of constraints. \nWe consider two sampling approaches: uniform and weighted. We show that mini-batch with weighted sampling improves over the state of the art sparsifier based approach both in theory and in practice. Surprisingly, we experimentally observe that uniform sampling achieves superior results to weighted sampling. However, it is *impossible* to explain this using worst-case analysis. Our main contribution is using *smoothed analysis* to provide a theoretical foundation for our experimental results. We show that, under *very mild* assumptions, uniform sampling is superior for both the mini-batch and the sparsifier approaches. We empirically verify that these assumptions hold for our datasets. Uniform sampling is simple to implement and has complexity independent of $N$, making it the perfect candidate to tackle massive real-world datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"smoothed analysis",
"submodular maximization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/644548452ada9eb3f77b2c0cf6d9d7a2258281e0.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Mini-batch Submodular Maximization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1DEHVMDBaO | Adaptive Memory Mechanism in Vision Transformer for Long-form Video Understanding | main | Active | Key-Value Cache;Vision Transformer;Video Understanding | applications to computer vision, audio, language, and other modalities | 3;3;3;5;5 | 4;4;4;3;5 | 2;1;2;3;2 | 2;1;2;3;2 | 3;2;2;2;2 | 3.8 | 4 | 2 | 2 | 2.2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Will there be a significant performance difference if the model is not pre-trained with UMT?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The method have better performance than baselines without additional cost."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses a solution for better long-form video understanding using a method named Adaptive Memory Mechanism (AMM). This method enables the Vision Transformer (ViT) to adjust its temporal receptive field dynamically depending on the input video. A memory bank is utilized to save the most important Key-Value when temporally processing the videos. The proposed method is tested on AVA and Epic-Kitchens datasets for action detection, recognition, and anticipation tasks. Experiment results show performance improvement to the ViT baselines without additional cost."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks SoTA comparisons. Is the task different from common action recognition and action detection? Multiple methods such as VideoMAE, Omnivore, or MMT have been tested on these datasets. It would be helpful if the authors could explain the difference between previous SoTAs with the proposed method, for example in parameter count or GFLOP difference.\n2. The improvement to ViT and MeMVit baselines is marginal.\n3. There is no difference in the FLOPs and Param(M) numbers compared to the baselines. Can the authors explain further the efficiency advantage achieved by the proposed method?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "* Long-form video understanding is an important video research topic and the idea of using an adaptive memory bank sounds reasonable and promising. \n* Compared to MeMViT, the results show consistent improvements though some datasets only have marginal gain."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an adaptive memory method to improve the existing memory-augmented methods for long-form video understanding. The method is based on MeMViT but makes the memory bank adaptive to support the adaptive temporal receptive field. The experiments are conducted on Ava and Epic-Kitchens dataset with the comparison with ViT and MeMViT."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* One of the main motivations of the paper is to retain embeddings instead of discarding memory when the memory limit is reached. However, based on the experiments, it's unclear if the effective receptive field of AMViT is indeed larger than MeMViT through the proposed adaptive memory module. Are they still using the same memory bank size?\n* In the model section, the paper presents two new modules, including Input-aware selective module (ISM) and Adaptive Memory mechanism(AMM). However, there are no ablations to validate the individual effectiveness of these modules.\n* How do we select parameters for MeMViT? Some parameters for MeMViT (Table 6) are not defined, e.g, memory bank size. Is it the same as AMViT? Given the authors are reproducing MeMViT with a different backbone, how the results compare to the original paper.\n* In Table 1, it's unclear why all the three methods are having the same FLOPs and parameters given MeMviT and AMViT has additional memory bank modules. It's also better to conduct run-time comparison.\n* The experiments are also missing a system-level comparison with the current SOTA results on the benchmarks."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Does the KV Cache in this paper retain the gradient?\n2. This paper focuses on a pure vision model with enhanced memory design. However, the ViT-only architecture is capable of a limited range of video-related tasks. Is it possible to integrate it with video-language models to achieve wider range of video tasks to exert more impact on the community?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The analysis of the limited temporal receptive field in long-term video understanding makes sense, and the motivation is clear.\n2. The method is simple and intuitive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to enhance ViT for long-term video understanding. The authors design a memory bank to store historical information and develop input-aware adaptive memory selection to retrieve the relevant information to assist long-term analysis. The experiments show that the architecture demonstrates satisfactory performance with high efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experiments are limited. Only AVA and Epic-Kitchens are reported. Results on more video datasets are required to verify the effectiveness of the adaptive memory design. Besides, the performance improvements are marginal.\n2. The memory bank is recurrently updated by adaptive selection. Is it possible that in a long video, the content in the middle of the video is not closely related to the beginning, and only relevant content appears towards the end? However, during the memory bank update process, the tokens of the earlier video content were already discarded."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "When comparing with MeMViT, your model uses the memory bank and the selected Q-V cache, while MeMViT only uses Q-V cache. Have you ensured that the number of embeddings in both model is consistent? Specifically, does the size of the memory bank plus the size of the selected Q-V cache match the size of the unselected Q-V cache?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.Long-form video understanding is an important research task, and the author has provided a reasonable solution.\n\n2.The paper is well-written, making it easy to read."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an Adaptive Memory Mechanism (AMM) for Vision Transformer (ViT) in long-form video understanding. It addresses the issue of selecting an optimal Temporal Receptive Field by allowing ViT to adjust TRF dynamically. Instead of directly discarding early Key-Value cache, AMM uses a Memory Bank to retain important embeddings from the Key-Value cache based on attention scores. Experiments on AVA and Epic-Kitchens show the advantages of AMM in action recognition, anticipation, and detection tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.The novelty of memory bank is limited. Many studies have explored how to utilize memory to retain important historical information and how to dynamically update memory. For example, Xmem[1] prioritizes retaining the most frequently used candidates. MA-LLM[2] and MovieChat[3] merge the two most similar candidates based on similarity once the memory bank capacity is exceeded. The innovations and advantages of the memory bank proposed in this paper compared to these methods are unclear.\n\n2.The fairness of the experiment is in question. When comparing with the baseline model MeMViT, the authors replaced the backbone of MeMViT from MViT to UMT. This seems to have led to a decline in the performance of the baseline model. For example, in the EPIC-KITCHEN-100 action recognition task, the performance reported in the original paper on MeMViT was 48.4%, while the performance presented in this paper is 43.03%. The authors should maintain the same settings as MeMViT for the experiments to make the results more credible.\n\n3.The performance improvement is limited. Compared to the baseline model MeMViT, the performance improvement is less than 1% in all experiments.\n\n4.Lacks of comparison with the latest methods. This article only presents comparisons with ViT and MeMViT. Some recent methods are missing, such as MAT[4] and MC-ViT[5].\n\n5.Lacks of necessary ablation studies. (2) This paper uses an input-aware selective module to prevent redundant embeddings from being retained, and uses a memory bank to retain useful embeddings. However, there are no ablation experiments to demonstrate the effectiveness of these two components individually. (2) The lack of ablation experiments on the memory bank update method. For example, comparing the update of the memory bank using attention score of class tokens proposed in this paper with previous methods (see weakness 1) and First-In-First-Out (FIFO).\n\n[1] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model, ECCV 2022\n[2] MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding, CVPR 2024\n[3] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding, CVPR2024\n[4] Memory-and-Anticipation Transformer for Online Action Understanding, ICCV 2023\n[5] Memory Consolidation Enables Long-Context Video Understanding, arxiv 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please revise the Weaknesses section point by point. This is a paper with great potential. If the authors can provide additional responses to certain issues, discuss related work more thoroughly, and include more experiments and observations, I would be very happy to raise my score."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Long-form video understanding is an important task, and efficiency is indeed a crucial metric in this context.\n\n2. The proposed method can reduce both training and inference costs.\n\n3. Introducing a memory bank to handle long sequence inputs is intuitive and reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an Adaptive Memory Mechanism (AMM) to improve Vision Transformers (ViT) for long-form video understanding. AMM dynamically adjusts the Temporal Receptive Field (TRF) based on video content, overcoming limitations of fixed TRF approaches that either lose key information or increase computational costs. Experiments show that AMViT, integrating AMM, outperforms existing models like MeMViT in tasks such as action recognition, anticipation, and detection, while reducing computational overhead, validated on datasets like AVA and Epic-Kitchens."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. (important) The number of benchmarks (only 2) and baselines (also only 2) compared seems somewhat limited. Adding more experiments would make the paper more convincing.\n\n2. (important) Although the authors emphasize that the new architecture is designed for long-form video, this aspect is not discussed in the experimental section. Are the benchmarks presented in the paper truly for long videos, and what is the average input length? It would have been better if the authors had conducted more detailed evaluations on benchmarks like MovieChat-1K [1] or LongVideoBench [2].\n\n3. The writing and figures in the paper need improvement, especially regarding the notation for memory. There are too many subscripts and superscripts, along with the extensive use of qkv notations, which made it take me three times longer to understand the entire paper. \n\n[1] Song, Enxin, et al. \"Moviechat: From dense token to sparse memory for long video understanding.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[2] Wu, Haoning, et al. \"Longvideobench: A benchmark for long-context interleaved video-language understanding.\" arXiv preprint arXiv:2407.15754 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce the Adaptive Memory Vision Transformer (AMViT), which dynamically adjusts its Temporal Receptive Field using an Adaptive Memory Mechanism for effectiveness and efficiency improvement in long-form video understanding."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024adaptive,\ntitle={Adaptive Memory Mechanism in Vision Transformer for Long-form Video Understanding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1DEHVMDBaO},\nnote={under review}\n}"
},
"abstract": {
"value": "In long-form video understanding, selecting an optimal Temporal Receptive Field (TRF) is crucial for Vision Transformer (ViT) models due to the dynamic nature of diverse video motion contents, which varies in duration and velocity. A short TRF can result in loss of critical information, while a long TRF may decrease ViT's performance and computational efficiency caused by the unrelated contents in videos and the quadratic complexity of the attention mechanism. To tackle this issue, we introduce Adaptive Memory Mechanism (AMM) that enables ViT to adjust its TRF dynamically in response to the video's dynamic contents. Instead of discarding Key-Value (KV) Cache from the earliest inference when the settings limit is reached, our approach uses a Memory Bank (MB) to retain the most important embeddings from the Key-Value Cache that would otherwise be discarded in memory-augmented methods. The selection is based on the attention score calculated between the Class Token (CLS) in current iteration and the KV Cache in previous iterations. We demonstrate that Adaptive Memory Vision Transformer (AMViT) outperforms existing methods across a diverse array of tasks (action recognition, action anticipation, and action detection)."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Key-Value Cache",
"Vision Transformer",
"Video Understanding"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/89a290cc5b5541967c05c287db67008301f18a46.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Adaptive Memory Mechanism in Vision Transformer for Long-form Video Understanding"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1DIdt2YOPw | Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations | main | Active | LLMs;uncertainty;abstention;correctness;hallucinations;safety | alignment, fairness, safety, privacy, and societal considerations | 3;3;3;5;5 | 5;5;5;4;4 | 2;3;4;3;2 | 2;1;2;3;2 | 3;3;4;3;2 | 3.8 | 4.6 | 2.8 | 2 | 3 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- To be clear, in these experiments, the model might not actually abstain, right? You would have to calculate these metrics and then hide the response from the user if it were deemed unacceptable, right?\n- It was hard to tell if there was a proposed training or inference method from reading the intro. It took me a while to realize that this was more of an analysis paper, showing how these metrics could be used for filtering model outputs.\n- Sec. 3 probably doesn’t need to take up as much space is it currently does (people should know what NLL is), but at the same time it could give more understanding into the metrics (computing actual entropy over samples is hard, so you compute predictive entropy).\n- L.377 typo “unanswerable vs. unanswerable”\n- L.471 “abstaining to answer”→ “abstaining from answering”"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- Important: The core idea of the paper is sensible, relating uncertainty metrics to abstention in order to improve factuality and safety of model responses.\n- Important: The experiment design is sound and the chosen metrics are reasonable. The experiments include many relevant datasets for measuring knowledge, hallucination, and safety.\n- Of some importance: The paper is fairly well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies how token-level and semantic uncertainty metrics on generated LLM text relate to accuracy on knowledge-intensive tasks, hallucination on unanswerable questions, and response safety on adversarial/malicious question datasets. Among the uncertainty metrics explored is one based on on counting hedge words in model responses. Experiments show that these uncertainty metrics are useful for model abstention in order to improve correctness of model generations, reduce hallucination, and increase response safety (at the cost of an increased abstention rate). Experiments are conducted across many relevant datasets using Llama 2 models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Very important: The novelty of the work is quite limited in my view. The high level conclusion that uncertainty is useful for abstention has already been thoroughly explored. What I see as this paper’s contributions beyond this observation are: (1) measuring in-dialogue uncertainty can help with abstention, specifically for factuality/hallucination; (2) uncertainty can help with safety, as it turns out that responses on AutoDAN-like datasets are more likely to be unsafe if they are uncertain. I don’t think the paper claims much beyond this. So a further issue with the novelty here is that (1) has already been shown, more or less, in https://arxiv.org/abs/2405.21028. The (2) result is interesting but I do not think it is a large enough result for a full paper, and it is not explored in much depth beyond one paragraph in this paper.\n- Important: The measurement of in-dialogue uncertainty, even if useful, is a heuristic that does not feel particularly generalizable, especially compared to other model-based measurements of in-dialogue confidence."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How can we practically account for all possible hedge words for every use case? Some prompts might even require responses to include hedge words; seems like a lot of finetuning and engineering effort to incorporate this uncertainty metric\n- I'm not sure I agree with hallucinations being only considered for unanswerable questions. LLMs definitely hallucinate in other situations. How extendable are these findings?\n- Statistical uncertainty metrics perform at more or less the same level. What should the reader take away from all these results?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Well-written paper with a clear walkthrough over the different problems and uncertainty metric considerations\n- Interesting idea of using in-dialogue uncertainty as a measure of response uncertainty\n- Clear description of experiments, metrics, and results; strong scientific method"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Paper shows that abstention based on different measures of uncertainty for different types of prompts works well. Specifically, for correctness and safety, statistical uncertainty-based abstention helps improve correctness and reduce unsafe responses. For hallucinations, abstention based on in-dialogue uncertainty (coined by authors as the inclusion of phrases such as \"I don't know\" in model responses) helps reduce hallucinations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It should not come as a surprise that using uncertainty metrics helps LLMs abstain when they should not engage with the prompt, as shown in Kadavath (2022) and multiple other papers cited in the related works. The core contributions of this paper can be boiled down to the introduction InDU (which also was inspired by an existing paper by Islam (2020)) and when to use each kind of uncertainty, both of which seem more fitting for, e.g., an appendix in Kadavath's paper, especially since this reads more like a survey paper of implementation details than novel ideas or concepts\n- Minor: various typos such as \"In-Dialogoe\" in Introduction, Islam et al. without year in 3.2"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- It would be worth discussing the tradeoff between abstention and usability further. \n- In-Dialogue Uncertainty is given an acronym but the acronym isn't used. It's also misspelled on L053."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- **Well written and clearly organized**: the paper is easy to follow, the writing is clear, and the questions being tested are clear. \n- **In-dialogue uncertainty metric is new**: As far as I can tell, past work has not proposed counting the number of hedge words as a method of confidence estimation. \n- **Sufficient datasets examined**: The authors do a good job of testing multiple datasets to make their point."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on abstention and uncertainty in LLMs, benchmarking how useful different uncertainty estimates are across three broad tasks.\nThese tasks are correctness, unanswerable questions, and safety. \nCorrectness is evaluated against standard QA data (TriviaQA, SciQA, CoQA, StrategyQA, and GSM8K).\nUnanswerable vs answerable questions are sourced from the SelfAware dataset and SciQA. \nAdversarial examples are sourced from AttaQ and AutoDAN. \nThe authors examine negative log-likelihood, predictive entropy, semantic entropy, and In-Dialogue Uncertainty, which is the number of hedge tokens present in the output. \nAll experiments were run on Llama2.\nAcross different tasks, the authors find that different uncertainty estimates lead to better or worse calibration, with no one method consistently outperforming the others. \nThe authors show that thresholding uncertainty scores can lead to better correctness, safety, and less hallucination on unanswerable questions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Limited novelty:** The novelty of the paper is pretty limited. From the abstract/intro, it seems like the main contribution of the paper is in showing that abstention based on uncertainty can improve results. This result is not new (see next point about missing related work). Moreover, the primary methodological novelty in this work is In-Dialogue Uncertainty, which is a fairly small contribution and does not consistently provide benefits in all settings. The Discussion presents a more nuanced view of the contribution (i.e. framing this paper as a survey of confidence estimation methods and showing that there isn't one method that consistently does well.) This framing would have been more novel but then I would have expected to see more different uncertainty estimation methods tested. \n- **Missing related work:** This paper misses a large chunk of the related work on abstention and confidence estimation from the last 2 years, focusing on older work. Examples:\n\t- https://arxiv.org/pdf/2407.18418\n\t- https://arxiv.org/abs/2308.13387\n\t- https://arxiv.org/abs/2311.09677\n\t- https://arxiv.org/abs/2404.00474\n\t- https://arxiv.org/abs/2405.21028\n\t- https://arxiv.org/abs/2401.06730\n\t- https://aclanthology.org/2024.naacl-long.301/\n\n- **Outdated models**: It's not clear why the authors only conduct experiments on Llama2, when there are many newer and more performant models available (even in the same family). To make a strong claim about when different estimation methods work and don't work, I would have expected to see more open-source models tested. \n- **No unified method**: one way this paper could have been made more compelling is if it presented a unified estimation method/recipe that worked well across settings. Currently, the paper does not have any such unified method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "This paper utilizes human evaluations to conduct a safety assessment of the model's outputs. The authors state, \"We validate the effectiveness of fuzzy exact match by comparing it with human evaluations on 200 samples each from TriviaQA and SciQA.\" However, details regarding the background and diversity of these 200 individuals remain unclear, as well as whether these evaluations comply with IRB requirements."
},
"flag_for_ethics_review": {
"value": [
"Yes, Potentially harmful insights, methodologies and applications",
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The motivation behind this work is compelling and relevant for the deployment and real-world application of LLMs. However, while the authors highlight the benefits of combining RLHF with uncertainty to enhance performance, reviewers suggest that additional validation experiments, particularly in the areas of hallucination and safety, would strengthen the claims.\n2. The paper underscores the importance of uncertainty for abstention, demonstrating that incorporating uncertainty can improve various aspects of model performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors investigate the potential of uncertainty-based abstention to improve performance in question-answering tasks, specifically focusing on correctness, hallucinations, and safety scenarios. They analyze two types of uncertainty—statistical uncertainty and in-dialogue uncertainty—and examine the effects of RLHF on these uncertainties."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the reviewers appreciate the study's motivation, they raise concerns regarding the experimental setup. For instance, in the hallucination settings, tests are only conducted on the SelfAware dataset. It would be beneficial to include additional datasets to more comprehensively evaluate the method's effectiveness in reducing hallucinations, especially given that current approaches primarily rely on Retrieval-Augmented Generation (RAG) [1].\n\n2. In the safety setting, the reviewers are interested in seeing how the uncertainty mechanism performs across a broader range of evaluation datasets. For example, PKU-SafeRLHF [3] provides safe, decoupled preferences and red-teaming prompts; how does the proposed approach perform on safety measures in these rigorous evaluations via case by case gpt-4 evaluation?\n\n3. The reviewers are not fully convinced by the claim that \"our experiments demonstrate that RLHF fine-tuning not only aligns the model with safety but also enhances its uncertainty awareness in relation to safety.\" RLHF alone does not guarantee model safety, particularly when the preference data distribution is uncertain. For instance, the GPT-4 technical report highlights that while RLHF helps align model responses with user intent, models may still exhibit brittle or undesired behaviors on both safe and unsafe inputs, especially when labeler instructions during reward model data collection are underspecified. Reviewers suggest that the authors provide a more detailed discussion on this aspect and include comparisons with models specifically designed for safety alignment, such as RLCD [2] and Safe RLHF [3].\n\n4. Regarding evaluation, the authors rely primarily on statistical measures, such as keyword-based approaches. However, this static evaluation method may fall short of detecting nuanced harmful responses, such as those involving emotional abuse. Additionally, Llama Guard’s performance drops in non-OOD (Out-of-Distribution) scenarios. Reviewers recommend including case-by-case GPT-4 evaluations to directly assess the safety of two responses, providing a more granular safety evaluation.\n\n[1] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection \n[2] RLCD: Reinforcement Learning from Contrastive Distillation for Language Model Alignment \n[3] PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "\"we recommend that practitioners differentiate between different types of questions before they decide whether to use statistical uncertainty or verbalized uncertainty for abstention...\" Could you explain why the experiment you conducted supports this claim?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Studied topic is timely and important. \n- Paper outlines multiple applications of the proposed method.\n- Paper is well-written and easy to follow. \n- Experiments test diverse uncertainty measures\n- Experiments consider different model sizes and variants."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes uncertainty-based methods for abstaining from providing incorrect, hallucinated, or unsafe answers. It considers probabilistic uncertainty (log likelihood, entropy, and semantic uncertainty) as well as verbal uncertainty. Experiments with Llama2 models across various question-answering and adversarial-prompting benchmarks demonstrate that (1) the considered uncertainty measures contain information about whether an answer is incorrect, hallucinated, or unsafe, and (2) abstention based on these measures is effective."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- There exist more sophisticated approaches to these problems but they are not compared empirically [1, 2]. [1] build classifiers based on hidden representations, and [2] constructs ensembles using LLM prompting. Authors mention the weaknesses of prompting and fine-tuning in related work but do not demonstrate them through experiments. In fact, the experiments do not seem to concern distribution shift so there is no reason not to compare with those methods. \n- Related work is missing some recent work showing similar results (e.g., [1,2,3]).\n- Experiment sections mostly discuss observations but does not attempt to explain the observed phenemena. \n- Some parts of the experiment section are unclear or can be further improved. Specifically, in figure 3, \"statistical uncertainty\" should be replaced with a specific measure and model (e.g., entropy). It is also missing model names. The plots need to have baseline curves to clearly illustrate improvements. \n\n[1] https://arxiv.org/abs/2304.13734\n\n[2] https://arxiv.org/abs/2402.00367\n\n[3] https://arxiv.org/abs/2402.13213"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Abstention based on the right form of uncertainty improves correctness, hallucinations and safety in LLMs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024uncertaintybased,\ntitle={Uncertainty-Based Abstention in {LLM}s Improves Safety and Reduces Hallucinations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1DIdt2YOPw},\nnote={under review}\n}"
},
"abstract": {
"value": "A major barrier to the practical deployment of large language models (LLMs) is their lack of reliability. Three situations where this is particularly apparent are correctness, hallucinations when given unanswerable questions, and safety where responses are harmful or offensive. In all three cases, models should ideally abstain from responding---much like humans refrain from answering questions when uncertain. Inspired by analogous approaches in classification, this study explores the feasibility and efficacy of LLMs abstaining when uncertain in the domain of question-answering. We investigate two kinds of uncertainties, statistical uncertainty metrics and a distinct verbalized measure, termed as In Dialogue Uncertainty (InDU), measuring hedge words such as `I don't know' in responses. Using these uncertainty measures combined with models with and without reinforcement learning with human feedback (RLHF), we show in all three situations, abstention based on the right kind of uncertainty measure can boost the reliability of LLMs. By abstaining for a few highly uncertain samples we improve correctness by up to 8\\%, avoid 50\\% of hallucinations by correctly identifying unanswerable questions, and in particular increase safety by 70-99\\% with almost no additional computational overhead."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLMs",
"uncertainty",
"abstention",
"correctness",
"hallucinations",
"safety"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/66216b5fd69ab306b3caa73bd9d7d3a92e0c88b6.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1DVgysiIt7 | Improved Diffusion-based Generative Model with Better Adversarial Robustness | main | Active | Generative Model; Adversarial Robustness; Diffusion Model; Distributional Robustness Optimization | generative models | 5;6;6;6 | 4;4;3;4 | 3;3;3;3 | 3;3;3;3 | 4;2;3;3 | 5.75 | 3.75 | 3 | 3 | 3 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "as above"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper present theories to show that DRO can help address the distribution matching problem in training and testing diffusion models.\n\n2. The improvement over baselines on Cifar and Imagenet64 show that DRO is useful."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to introduce DRO to address the distribution matching problem at training diffusion model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There is no qualitative comparisons. Authors mainly conduct experiments on Cifar, ImageNet and Laion dataset. It would be better to put some images for more direct comparisons. In addition, the code is not provided. \n\n2. The efficiency comparison. I am wondering how much overhead it brings to adopt eq 14 instead of the classical denoising objective. I am expecting that it is quite large.\n\nI am giving score of 6 based on the prerequisite that above two concerns are answered during rebuttal."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. some ablation studies for different perturbation levels $\\alpha$ should be given.\n2. Some discussions about different perturbation methods ($\\ell_1$, $\\ell_2$, or $\\ell_\\infty$) should be discussed."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper performs a theoretical analysis of diffusion models and identifies the distribution mismatch problem.\n\n2. This paper further builds a connection between the distribution robust optimization and adversarial learning for diffusion models, and develops an adversarial training method for diffusion models.\n\n3. This paper conducts efficient adversarial training methods on both diffusion models and consistency models in many tasks. Experimental results demonstrate the effectiveness of the developed algorithms."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the training of unconditional diffusion model. In particular, in order to achieve a better generation quality and enable robust learning of the score network, this paper develops a DRO-based method, and prove the DRO objective in training diffusion models can be formulated as an adversarial learning problem. The paper also identifies a similar mismatch issue in the recently proposed consistency model (CM) and demonstrates that AT can address this problem as well. The authors propose efficient AT for both DPM and CM, with empirical studies confirming the effectiveness of AT in enhancing diffusion-based models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In general, the algorithm developed in this paper is motivated by the distribution mismatch along the diffusion path. However, there is no experimental results to justify the motivation, there are also no experimental results to verify that the DRO framework can indeed help mitigate the distribution mismatch problem. \n\n2. Proposition 2 has already been discovered in existing theoretical papers [1], see their section 3.1. The authors should comment on this point around Proposition 2.\n\n3. The advantage of ADM-AT is not that significant compared with the ADM method, a more detailed ablation study or theoretical analysis on using adversarial noise or random Gaussian noise should be added.\n\n4. Some statements are not clearly presented. For instance, the description of ADM is not given, the norm notations $\\|\\|$ are abused, should that be $\\ell_1$, $\\ell_2$, or $\\ell_\\infty$?\n \n\n[1] Chen, Lee, and Lu, Improved Analysis of Score-based Generative Modeling: User-Friendly Bounds under Minimal Smoothness Assumptions, ICML 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "The paper is a good one in general. I like how the problem is formulated and how the solution is derived. However, given the current evaluation (see the weakness), I am not fully convinced the proposed method is an effective way to deal with the problem. I would like to see how the authors respond to my concerns."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Identifying and formulating the distribution mismatch problem in diffusion model is an important problem in practice.\n2. The proposed solution is elegant, supported by sufficient theoretical analysis. The derivations of the solution is clear and sound.\n3. The writing is fairly clear."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper identifies the distribution mismatch problem in the training and sampling processes. Consequently, they propose a distributionally robust optimization procedure in the training to bridge the gap. The authors apply the method to both diffusion models and the consistent model, and demonstrate the effectiveness of the proposed method on several benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern on this paper is the evaluation. Currently the proposed method is only evaluated using the ADM model. I wonder whether the effectiveness on more advanced model such as the stable diffusion still holds?\n\nFurthermore, the authors only use FID score as the evaluation metric, while it is easy to evaluate the results using other metrics such as IS, sFID, precision, recall, as done in the ADM paper. Why these metrics are not included?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. As the weakness above, for Table 1 and Table 2, more NFEs should also be verified.\n\n2. Why not also try generation using consistency models on benchmark datasets such as CIFAR10 and ImageNet, which can be more common and convincing?\n\n3. Derivations in supplementary material should be checked carefully and written with more details. \n\n4. Why efficient AT can improve performance compared with PGD is a bit confusing. Intuitively, PGD should be more accurate to find $\\delta_t$, thus more deep insights should be provided here."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation for mitigating distribution mismatching is clear and important for efficient sampling. \n\n2. This paper provides strong theoretical support for implementing adversarial training to correct distribution mismatching, making this method convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper points out the distribution mismatching problem in traditional training of diffusion-based models (DPM) and proposes to conduct efficient adversarial training (AT) during the training of DPM to mitigate this problem. Theoretical analysis is strong enough to support its argument and experiments also verify the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental results may not be enough, for example, for Table 1 and Table 2, more NFEs should also be verified, although this method can improve efficient sampling, whether is adaptable and robust for more denoising steps should also be verified. \n\n2. Some complex derivations in supplementary material are too brief to understand, such as Eq(30) and Eq(59-62), I'm not sure if there are any typos in them, I suggest checking the equations carefully and modifying them."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024improved,\ntitle={Improved Diffusion-based Generative Model with Better Adversarial Robustness},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1DVgysiIt7},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion Probabilistic Models (DPMs) have achieved considerable success in generation. However, its training and sampling processes are confronted with the problem of distribution mismatch. During the denoising process, the input data distributions of the model are different during the training and inference stages, which makes the model potentially generate inaccurate data. To obviate this, we conduct an analysis of the training objective of DPM, and theoretically prove that the mismatch can be mitigated by Distributionally Robust Optimization (DRO), which is equivalent to conducting robustness-driven Adversarial Training (AT) on the DPM. Furthermore, for the recently proposed consistency model (CM), which distills the inference process of the DPM, we prove that its training objective similarly faces the mismatch issue. Fortunately, such a problem is also mitigated by AT. Thereafter, we propose to conduct efficient AT on both DPM and CM. Finally, a series of empirical studies verify the effectiveness of AT in diffusion-based models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Generative Model; Adversarial Robustness; Diffusion Model; Distributional Robustness Optimization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/35452c9c02243416f0b7374c1ed3cdb12e86bc86.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Improved Diffusion-based Generative Model with Better Adversarial Robustness"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1EEst6oDU7 | Informing Reinforcement Learning Agents by Grounding Language to Markov Decision Processes | main | Active | Language Grounding;RLang;RL;Formal Language;LLM | reinforcement learning | 3;5;6;6 | 3;4;3;3 | 3;3;3;3 | 1;2;2;2 | 3;3;3;3 | 5 | 3.25 | 3 | 1.75 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We appreciate the reviewer’s insightful feedback and constructive suggestions. We have addressed their concerns below.\n\n**Weaknesses**\n\nRegarding Weakness 1:\n\nThe reviewer correctly points out that we only use Q-Learning-based RL methods in this work. Our goal in this work, however, is to demonstrate how language can be used to inform a tabula rasa agent, and our choice of DynaQ was motivated by the fact that by choosing a discrete tabular agent, we were better able to isolate the impacts of our language grounding approach on learning in comparison to more modern deep learning methods. Integrating natural language advice into such deep RL algorithms is a pressing and interesting area that we leave open for future work.\n\nRegarding Weakness 2:\n\nWe appreciate the reviewer’s suggestions about including LLM + RL baselines, but we point out that the suggested baseline the reviewer discusses is essentially how the RLang-DynaQ-Plan agent works—in this case, RLang acts as an intermediary to convert action plans into a policy that can be run by the RL agent. Regarding comparisons to other agents, the goal of this work is to demonstrate that grounding language to every component of an MDP can improve performance, and our experiments demonstrate this. We don’t claim that this is the most efficient way to ground language, and we welcome follow-up works that can ground language without using RLang.\n\nRegarding Weakness 3:\n\nThe topics the reviewer mentions have been the subject of existing works, but the focus of our work is precisely on how to ground human-given language in RL. The assumption of language for an environment is part of the problem statement we aim to solve, and not a limitation.\n\nRegarding Weakness 4:\n\nWe believe that the symbol-grounding via VLM demonstration in section 4.3 shows that VLMs can be used to obviate the need for some human annotations. The symbol-grounding problem is outside of the scope of this work, but we invite the reviewer to elaborate on what kind of qualitative results would aid a reader in finding this method convincing.\n\n**Questions**\n\nRegarding Question 1:\n\nThis is a good question that was also raised by Reviewer 3iuj. The combined agent is worse than the plan-informed agent due to the effect advice, which decreases performance due to non-determinism in the VirtualHome simulator which is an unintended bug and not a feature of the environment. We note this in the description for Figure 6.\n\nRegarding Question 2:\n\nTypo fixed, thanks!\n\nRegarding Question 3:\n\nThis is an important question and we thank the reviewer for raising it. Our approach relies on RLang, a formal symbolic language, for capturing language advice. While natural language itself is symbolic and discrete, MDPs may not be, and this raises an important question on how to bridge the gap between a symbolic syntax and a continuous semantics, or meaning. One possible solution could be to represent relevant symbolic action and perception abstractions with continuous functions. For example, representing a “Kick” skill with a Dynamic Movement Primitive or a “is_open()” predicate with a Convolutional Neural Network. This question suggests many open avenues for future work.\n\nWe again thank the reviewer for their questions and comments, and welcome any additional feedback."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Regarding Weakness 3:\n\nYour point that the results of this method are somewhat finicky is well-taken. We have addressed some of your concerns by adding a paragraph at the end of section 4.2 to discuss how various kinds of advice impact the performance of agents:\n\n> The impact of each kind of advice (e.g. plans, policies, transitions, and rewards) varied across tasks in the VirtualHome and Minigrid experiments, with some environments benefiting primarily from plan-centric advice and others benefiting most from policy advice. In virtually all cases, model-centric advice---about transitions and rewards---was less valuable than other forms of advice. We suggest that this discrepancy is due to how useful model-based advice is in comparison to explicit policy and planning advice. While policy and planning advice describe which actions to take in a given context, model-based advice was often used to suggest which actions \\textit{not} to take, relying on the underlying learning agent to find the best action. Furthermore, model-based advice was useful less of the time, i.e. in fewer states. This is best illustrated by comparing the relative performance of effect-enabled RLang-Dyna-Q agents with policy and plan-enabled agents in the MidMazeLava Experiment in Figure \\ref{fig:midmazelava} and the FoodSafety Experiment in Figure \\ref{fig:foodsafety}. The model-based advice in the first experiment is to avoid lava, which there are many opportunities to walk into, resulting in the performance of the effect-enabled agent closer to the plan and policy-enabled agents. By comparison, the model-based advice in the second experiment is more niche, accounting only for a handful of transitions, and the effect-enabled agent correspondingly performs closer to baseline Dyna-Q than to the plan and policy-enabled agents.\n> \n\nWe note that this has bumped the user study table to the appendix.\n\nRegarding your specific comparison of the 5th and 10th piece of advice in the user study, we note that the advices are semantically different in a crucial sense: that the 10th piece of advice makes no mention of the grey door, which must be opened before going to the red door, while the 5th piece of advice explicitly addresses opening the grey door. We agree that this seems like a small difference, but when grounding the advice to an executable plan it makes a meaningful difference. We have included the following sentence at the end of section 4.4 to address your concern about the final piece of user advice:\n\n> Failures also occurred when users specified plans whose pre-conditions were not met at the start state of the environment and failed to execute (e.g. the last piece of advice suggests to go to the room with the red key, but the agent cannot visit the room without first opening the grey door).\n> \n\n**Questions**\n\nRegarding Question 1:\n\nWe don’t compare to conventional RL methods, RLang on its own, or LLMs because the goal of this work is precisely to leverage human advice for RL. We compare our agent (RLang-DynaQ) to a structurally-identical agent (DynaQ) that does not use language advice. We don’t claim to perform competitively against modern deep RL methods, as the center of our work is on language grounding, not maximizing agent performance.\n\nRegarding Question 2:\n\nWe cut training off after a small number of episodes because that is all it takes to demonstrate that language-informed agents can learn faster than uninformed agents. 
We also note that the reward charts used in this paper do not plot average episodic reward on the Y-axis, they plot cumulative reward, i.e., when the reward curves achieve a stable slope, it means the agents have converged, yielding the same amount of reward on each time step. In most figures in the paper, the slope of agent rewards is stable or only slowly changing by the end of each experiment.\n\nRegarding Question 3:\n\nWe address this question in our response to Weakness 3.\n\nWe again thank the reviewer for their comments and questions, and welcome any additional feedback."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Part 2 of our Initial Response"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We are grateful to the reviewer for their careful evaluation and helpful comments. Please find our responses to the critiques below.\n\n**Weaknesses**\n\nRegarding Weakness 1:\n\nUpon revisiting the method-related sections of the paper, we agree with your point that the paper heavily relies on the cited RLang paper, so we have added the following sentence at the end of section 3.1 to help explain to the reader how RLang-DynaQ works:\n\n> Similar to Dyna-Q, RLang-Dyna-Q leverages the Bellman update rule to update Q-values using rollouts collected both from environment interaction and from simulated interaction, which is generated from a partial model of the environment that is learned over time. However, RLang-Dyna-Q also leverages a partial model given by an RLang program to generate simulated rollouts before learning begins (see Algorithm \\ref{alg:rlang-dynaq}, our modifications to Dyna-Q are in blue).\n> \n\nAnd also have added the following sentence at the end of section 3 before section 3.1 to explain what the original RLang work does:\n\n> These programs are compiled using RLang's compiler into Python functions corresponding to transition functions, reward functions, policies, and plans that can be leveraged by a learning agent.\n> \n\nWe hope that these adjustments make this work feel more stand-alone.\n\nRegarding Weakness 2:\n\nWe appreciate the reviewer’s comments on our usage of DynaQ and how we may compare our work to other baseline agents. However, we argue that DynaQ is reasonable tabular RL algorithm to base our RLang-enabled agent on and compare it to due to its simplicity and stature as an early discrete, model-based RL algorithm that was one of the first of its kind to learn from model-based simulated rollouts. Our goal in this work is to demonstrate how language can be used to inform a tabula rasa agent, and our choice of DynaQ was motivated by the fact that by choosing a discrete tabular agent, we were better able to isolate the impacts of our language grounding approach on learning, which may have been difficult to separate from an RL algorithm using more modern deep learning methods. Integrating natural language advice into such deep RL algorithms is a pressing and interesting area that we leave open for future work."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Part 1 of our Initial Response"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We sincerely thank the reviewer for their valuable feedback and thoughtful comments. We address their comments below.\n\n**Weaknesses**\n\nRegarding Weakness 1:\n\nYour point that the dense, verbose, and low-level expert advice is potentially at a similar level of abstraction to the pre-defined symbols is well-taken. However, we don’t suggest that this should be surprising, and argue that more granular advice is objectively easier to operationalize than more abstract advice. Such advice relies less on the common-sense capabilities of LLMs and more on their capacity to perform approximate machine translation.\n\nRegarding Weakness 2:\n\nWe agree with your suggestion to elucidate how various kinds of advice impact performance, and have included the following paragraph at the end of section 4.2 to address these questions. Unfortunately this will displace the table of results for the user study to the appendix, but we believe this analysis is more central to the thrust of the paper.\n\n> The impact of each kind of advice (e.g. plans, policies, transitions, and rewards) varied across tasks in the VirtualHome and Minigrid experiments, with some environments benefiting primarily from plan-centric advice and others benefiting most from policy advice. In virtually all cases, model-centric advice---about transitions and rewards---was less valuable than other forms of advice. We suggest that this discrepancy is due to how useful model-based advice is in comparison to explicit policy and planning advice. While policy and planning advice describe which actions to take in a given context, model-based advice was often used to suggest which actions \\textit{not} to take, relying on the underlying learning agent to find the best action. Furthermore, model-based advice was useful less of the time, i.e. in fewer states. This is best illustrated by comparing the relative performance of effect-enabled RLang-Dyna-Q agents with policy and plan-enabled agents in the MidMazeLava Experiment in Figure \\ref{fig:midmazelava} and the FoodSafety Experiment in Figure \\ref{fig:foodsafety}. The model-based advice in the first experiment is to avoid lava, which there are many opportunities to walk into, resulting in the performance of the effect-enabled agent closer to the plan and policy-enabled agents. By comparison, the model-based advice in the second experiment is more niche, accounting only for a handful of transitions, and the effect-enabled agent correspondingly performs closer to baseline Dyna-Q than to the plan and policy-enabled agents.\n> \n\nRegarding Weaknesses 3 and 4:\n\nWe have increased the fonts in those figures from size 14 to 17 for smaller text and from 20 to 24 for titles and axes labels. We hope they are more legible now! We have also made the text casing match in Figure 7.\n\n**Questions**\n\nRegarding Question 1:\n\nExpert advice was given by two students who were familiar with the environments, including the action (i.e. a go-to skill) and perception space of the agent (e.g. that the agent sees the world in terms of objects and primitive predicates).\n\nWe added an aside to a sentence in the third paragraph in section 4.1 explaining a bit about human experts:\n\n> For each environment, we collected multiple pieces of natural language advice from human experts---people familiar with both the environment and how the agent interacts with it via perception and action, i.e. 
the skills the agent has access to and the fact that its perception consists of objects and simple predicates\n> \n\nRegarding Question 2:\n\nWe do not have ablations for HardMaze. In contrast with the other MiniGrid environments, DynaQ was not able to achieve any reward in this environment due to its long-horizon nature. Our goal with this experiment was to show that language advice could make the difference between some reward and none at all — as you may notice, the returns were relatively modest compared to other environments, but significant.\n\nRegarding Question 3:\n\nThe combined agent is worse than the plan-informed agent due to the effect advice, which decreases performance due to non-determinism in the VirtualHome simulator which is an unintended bug and not a feature of the environment. We note this in the description for Figure 6.\n\nWe again thank the reviewer and welcome any additional questions, comments, and concerns they may have."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you for your insightful observations and questions. We've responded to them below.\n\nRegarding Weakness 1 and Question 1:\n\nThis is a good observation, a more integrated system might train a language model and an RL agent end-to-end. For the goals of this paper, however, in-context translation sufficed to show that language could be used to inform RL agents. We acknowledge this interesting research direction and would be excited to see it addressed in future work.\n\nRegarding Weakness 2 and Question 2:\n\nThe reviewer raises an important question about the expressivity of natural language and RLang, and how variations in expressivity between the two languages can make effective translation difficult. These concepts have been explored somewhat in the machine translation and semantic parsing communities. A potential solution—as the reviewer suggests—might be to introduce another intermediate representation for natural language such as LambdaDCS, a general logical form, and compile this language into RLang, an MDP-theoretic specification language. We note, however, that RLang enables the introduction of novel syntax via Vocabulary Files, which we have leveraged in these experiments to increase the expressivity of RLang itself, bypassing the need for another intermediate language. In future work, we hope to automate this process of semantic expansion so that more language may be grounded using this methodology.\n\nWe welcome any additional questions or critiques."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Any idea why the DynaQ baseline doesn’t work in Figure 6’s experiment?\n2. Typo in line 164.\n3. If we are going to extend the algorithm to high-dimensional continuous RL problem, what could be the biggest challenges?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The writing is overall good and easy to follow.\n2. The idea of translating natural language advice to RLang and using RLang to generate synthetic transitions makes sense.\n3. The writing flow in the experiment is great – sec 4.1 and 4.2 present two effective cases with assumptions on semantically-meaningful labels, while sec 4.3 also presents efforts to try to address this assumption. Also, user study has been completed in Table 2."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a framework to leverage natural language-based advice to accelerate RL learning process. The RLang-Dyna-Q algorithm extends the original RLang framework and combines it with the traditional Dyna-Q. Empirical results over two sets of experiments help verify the effectiveness of propose algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Only Q-learning-based RL is tested in the experiment. More advanced and modern RL algorithms are needed to show the generality, e.g. PPO.\n2. More LLM + RL baselines are needed. There are a few simple alternatives to directly leverage LLM to process natural language advice to help RL training. For example, what if we don’t use any RLang-based program, and only treat LLM’s as the generator for actions and transitions?\n3. Another important assumption (and limitation) in the paper is that each environment will be provided with human-annotated natural language advice. This is a strong prior compared with all RL baselines. The author needs to discuss more about this assumption and whether we can use any other ways to bypass the need for human labels. For example, could LLMs directly generate advice for any given environment?\n4. More qualitative results are needed for section 4.3 (a demo is not enough)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Why not compare to conventional RL methods (e.g., PPO with a small neural network), to RLang itself, or to LLMs that generate code for plans? \n* Why cut training off at 30-75 episodes, which is quite a small budget given that these are not expensive or safety-critical domains? It seems that one argument for RLang-Dyna-Q is that it could be is significantly more efficient than modern RL baselines by leveraging human advice, but if so then this should be shown by empirical comparisons (e.g., how many episodes does each method require to achieve maximum returns?).\n* What differentiates good vs. bad advice for RLang-Dyna-Q? The user study provides great insight into the effects of different natural language prompts for the method. However, at times the prompts appear semantically identical, but they yield different results."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The paper is well written and provides a clear overview of the motivation, problem setting, and proposed solution.\n* The paper proposes a blend of conventional planning literature and formal specification with the advancement of LLMs, leading to a significant improvement over conventional tabular RL solutions.\n* The authors conduct a small scale user study, which solicits and leverages advice from untrained human coaches for a planning task."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This pager introduces RLang-Dyna-Q, an extension of prior work, RLang, that transforms natural language into a formally specified language for solving tasks. Rather than focusing on policy-centric translations, as in much prior work, the authors observe that much of the advice or coaching offered by human experts will come in the form of statements that describe a transition function (e.g., \"If you miss, you will fall down\"), a reward function (e.g., \"You get 10 points when you touch the key\"), or a plan (e.g. \"Go to X, then pick up Y, then return to Z\"). RLang-Dyna-Q is a combination of Dyna-Q and RLang that uses the learned world model/transition functions of RLang to further refine the Q function learned in Dyna-Q. \n\nThe proposed RLang-Dyna-Q algorithm is compared to Dyna-Q and to a random policy in a handful of tabular RL domains, showing that it significantly outperforms Dyna-Q. The authors also perform an ablation study in which they test only subsets of RLang-Dyna-Q (only policy advice, only transition advice, or only plan advice). Finally, the authors conduct a small user study in which 10 undergraduate students provide advice to warm start an RLang-Dyna-Q, with each student contributing one piece of advice, and 5/10 pieces of advice leading to policy improvements over the baseline, un-advised policy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The method is not entirely clear, particularly given the heavy reliance on prior work (RLang) in this paper. It is not clear how the Q table relates to the RLang vocabulary files or RLang declarations, and this information must be obtained by referring to and reading prior work (meaning that RLang-Dyna-Q does not entirely stand on its own, but feels more like the \"second half\" of the RLang work).\n* The results for RLang-Dyna-Q are not very convincing, and the comparison to a method that is nearly three decades old is also not very convincing. Comparisons to more modern RL baselines would improve the work. In particular, comparing to an LLM that generates Python/programming solutions seems like a very important baseline (even if there is no RL refinement, it would be useful to simply see to what extent an advanced LLM can solve these tabular domains out-of-the-box).\n* The advice required to make RLang-Dyna-Q actually improve over baselines seems very particular. For example, looking at the advice in Figures 3-6, there is a combination of plans, general advice, and state transition advice. There is not a discussion or written analysis on what types of advice work best, or why. Similarly, the success of different types of advice seems extremely finicky. Comparing advice from participants 5 and 10 in the user study, the written advice is nearly identical. However, the performance deltas are quite significant (from a 33% increase to just a 2% increase)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How was the expert advice (textboxes on page 8) collected for the main experiments (who are the experts, how are they trained, what's the protocol)?\n2. Do you have ablation studies for HardMaze in Figure 4?\n3. Why is RLang-Dyna-Q-combined worse than RLang-Dyna-Q-plan curve in Figure 6?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. I appreciate the range of experiments included, as well as a comparison with SayCan in the appendix.\n2. I also enjoy reading the section on user studies, and section 4.3 on automated semantic labeling for disambiguation advice. In general, I agree that gradually removing the need for expert annotations is important, let it be dense natural language advice, crafted vocabulary, or RLang grounding files in this case.\n3. This is an important research topic and the contribution is contextualized nicely.\n4. Most of the paper is quite clear. A few improvements can be made - see weakness 3."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the problem of leveraging natural language advice for training effective/efficient RL agents. The authors hypothesize that some natural language advice is more suitable to be grounded in reward function while others are better captured in transition functions. They further suggest that leveraging natural language advice by grounding them in a formal language (RLang) that is capable of describing all aspects of the system, is better than advising a subset of the system. \n\nThe authors adapt Dyna-Q to work with grounded advice in RLang. They evaluate this method in Minigrid and VirtualHome with mostly positive results to support their hypothesis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Expert advice seems much more dense, verbose, and low-level (almost program-like) than non-expert advice. It is not completely surprising to me that LLMs can ground them to predefined symbols that are approximately defined on a similar level of abstraction.\n2. It might help to have a paragraph discussing results and how advice on effect/policy/plan each contributes to the combined policy. Are they the same? Is it task-dependent? I think this can help better justify that an approach to encode \"information about every element of an MDP\" is necessary.\n\n(The two concerns above are why I gave 2 for contribution and not higher. Would be happy to improve scores if they are addressed)\n\n3. Stylistic nit-picking: could you please increase the font size in Figure 1, and reward curves in Figure 2-6? The text in Figure 7 looks much better. Perhaps capitalize \"Perception\" in the title in the left figure for consistency. Consistent legend colors and orders for different methods on page 8 would improve comparability across figures.\n4. Broken reference on line 164."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How to go beyond in-context learning?\n2. How to handle the inexpressiveness of human language for RLang?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The algorithm automates language-to-MDP component translation, and streamlines the process of learning human decision-making for robots\n2. The authors conducted extensive experiments and described the algorithm clearly"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed RLang-Dyna-Q, which can ground any language advice to all components in MDP, compared to grounding only to individual components before. The solution uses in-context learning to first select the grounding type, then translate the advice to RLang program. The enhancement outperforms prior approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In-context learning limits the capability enhancement of the language model. It might be better if we could make the LM trainable and train the language model and the RL system end-to-end\n2. Human language might not be expressive enough to be translated to RLang. In the experiment section, it stated that some advice cannot be converted to RLang. Could we have a more natural intermediate representation for the advice and agent?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We ground language to MDPs by translating advice to RLang programs using an LLM."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024informing,\ntitle={Informing Reinforcement Learning Agents by Grounding Language to Markov Decision Processes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1EEst6oDU7},\nnote={under review}\n}"
},
"abstract": {
"value": "While significant efforts have been made to leverage natural language to accelerate reinforcement learning, utilizing diverse forms of language efficiently remains unsolved. Existing methods focus on mapping natural language to individual elements of MDPs such as reward functions or policies, but such approaches limit the scope of language they consider to make such mappings possible. We present an approach for leveraging general language advice by translating sentences to a grounded formal language for expressing information about *every* element of an MDP and its solution including policies, plans, reward functions, and transition functions. We also introduce a new model-based reinforcement learning algorithm, RLang-Dyna-Q, capable of leveraging all such advice, and demonstrate in two sets of experiments that grounding language to every element of an MDP leads to significant performance gains."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Language Grounding",
"RLang",
"RL",
"Formal Language",
"LLM"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/bb8bd368236f547e4fce40cb7ad2d089070605af.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/52e17e1880d785c49c9562aeab9cc627b6a52d4e.zip"
},
"title": {
"value": "Informing Reinforcement Learning Agents by Grounding Language to Markov Decision Processes"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1EJIax7ekV | Reinforcement Learning from Wild Animal Videos | main | Active | Legged Locomotion;Imitation Learning from Videos;Reinforcement Learning | applications to robotics, autonomy, planning | 3;5;5 | 4;4;4 | 2;2;2 | 2;3;2 | 3;3;4 | 4.333333 | 4 | 2 | 2.333333 | 3.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper studies an interesting problem of learning reward models from videos\n- The proposed approach is interesting and in a good direction\n- The paper is well written and the presentation is clear"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper trains a supervised video classification model on a dataset of wild animal videos (walking, running, standing, and jumping). It then uses the video model classifications as rewards to train a policy to control a quadroped robot in simulation. The policy is then transferred onto a quadroped robot in the real world."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Position of the paper (title, abstract, intro) is a bit misleading. It suggests that the reward function would come purely form videos. However, the approach uses a number of hand-designed reward terms such as air time, symmetry, and terminations. I think that this is ok but the positioning of the paper should be updated to reflect that. In the current version of the approach, the video model serves only as part of the overall reward function.\n- The results are promising but overall limited. Looking at the supplementary materials video it looks like the learnt skills do not quite match the desired behaviors, \"keeping still\" seems to be moving and \"running\" does not seem to be running.\n- It would be good to ablate the impact of each of the reward terms. The current version of the manuscript includes the symmetry loss ablation which shows that the symmetry term plays a considerable role."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. If we do not use the trained classifier as a rewarder, but instead manually assign simple rewards to encourage the quadruped robot to stay still, walk forward, or jump, how effectively can the robot learn these skills?\n2. How would the method perform if the output criterion of the classifier is only the movement of the robot's center of mass?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Learning quadruped robot locomotion skills from existing wild animal locomotion is a good inspiration. \n2. The task setup and experimental details are described clearly in the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Reinforcement Learning from Wild Animal Videos (RLWAV), a novel method for training quadruped robots to perform locomotion skills by observing wild animal videos. The authors train a video classifier on a large dataset of labeled animal videos and use the classification scores to provide rewards for RL training. RLWAV avoids the need for reference trajectories or hand-designed rewards by transferring learned skills from animals to robots. The method is validated both in simulation and on a real quadruped robot (Solo-12), demonstrating the transfer of skills such as walking and jumping."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The current ablation study of the classifier training set is inadequate, making it hard to determine whether the method effectively utilizes cross-embodiment skills acquired from a diverse range of wild animal videos. The ablation should encompass factors such as the size of the training set and the number of different types of animals included in it.\n2. While we anticipate gaining insights into four-legged movement skills from wild animal datasets, the only information we can provide the robot is the output of a classifier. This classifier appears to be able to achieve its task by focusing primarily on the background and the animal's torso, neglecting the movement of the four legs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The paper claims that the reward model can help policy learn well in simulation and then successfully transfer to real. However, to me the \"transfer to real\" part seems orthogonal to the reward model itself. Could author explain why better reward model can lead to better sim-to-real transfer? For example, if we use a hand crafted reward function in the same setup and learn a policy in sim, can it also transfer to real? My impression is the answer should be yes."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The idea is novel.\n2. The paper is well written. Easy to follow.\n3. Experiments and ablation among its own algorithm shows effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an interesting idea to learn reward function for locomotion through wild animal video. The authors curate a dataset of 8.7 wild animal videos, train a video classifier and then use it as a reward model to train RL policy to control robot for locomotion. The multi-skill policy can be trained in a physical simulator and transfer to real world."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It seems the paper lacks comparison to some baseline or other works. For example, can we compare the results in sim w/ some hand crafted reward models? Then you can compare sample efficiency of the proposed method.\n2. Would like to know how large the animal dataset needs to be to make it work. This work uses 8.7K videos. Do we need more or it can work w/ less? Can we add an ablation on it?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We learn legged robot locomotion skills by watching thousands of wild animal videos."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024reinforcement,\ntitle={Reinforcement Learning from Wild Animal Videos},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1EJIax7ekV},\nnote={under review}\n}"
},
"abstract": {
"value": "We propose to learn legged robot locomotion skills by watching thousands of wild animal videos from the internet, such as those featured in nature documentaries. Indeed, such videos offer a rich and diverse collection of plausible motion examples, which could inform how robots should move. To achieve this, we introduce Reinforcement Learning from Wild Animal Videos (RLWAV), a method to ground these motions into physical robots. We first train a video classifier on a large-scale animal video dataset to recognize actions from RGB clips of animals in their natural habitats. We then train a multi-skill policy to control a robot in a physics simulator, using the classification score of a third-person camera capturing videos of the robot's movements as a reward for reinforcement learning. Finally, we directly transfer the learned policy to a real quadruped Solo. Remarkably, despite the extreme gap in both domain and embodiment between animals in the wild and robots, our approach enables the policy to learn diverse skills such as walking, jumping, and keeping still, without relying on reference trajectories nor hand-designed rewards."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Legged Locomotion",
"Imitation Learning from Videos",
"Reinforcement Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/65fc54722f40faea3d1f119342db02d232ac6333.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/3a2036007b97a7809a8f044e2cbbfef249332f8c.zip"
},
"title": {
"value": "Reinforcement Learning from Wild Animal Videos"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1EnpStvBU8 | Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models | main | Active | high-resolution adaptation;multimodal large language models | applications to computer vision, audio, language, and other modalities | 5;5;6;6;6 | 4;4;4;4;4 | 3;3;3;3;3 | 2;2;3;3;3 | 3;3;3;3;3 | 5.6 | 4 | 3 | 2.6 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In section4.3(line 258), the statement, global average pooling is confusion, is the features are pooled into 1 global token? If so, it seems to be not consistent with figures. Please clarify the exact dimensions of fv after global average pooling.\n2. In Table 1, resizing LLaVA-1.5 to 672 pix achieves close performance with 768pix version of LLaVA-HR, is there a direct comparision between 768-pix version of them?\n3. In table 2, there is an ablation of \"tune vision\" referring to finetune vision encoder. However, I think the vision encoder in LLaVA-1.5 is fixed, can you provide a detailed description about this. For example, implementation and aim of tuning vision encoder.\n4. LLaVA-HR is proposed to process input resolution of 1024, what if input images larger than 1024. Is there any extended experiments for even larger images such as 4K ones.\n5. What do you mean by \"stages\" in vision transformers? And, currently only final features from ConvNext is utilized, is there any experiments of multi-stage feature integration for that of CNN encoder?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This work proposed a novel method to increase resolutions of MLLMs, which is an important problem in the field and critical in fine-grained vision tasks. Without large modification of training recipe and computational cost of its baseline, LLaVA-1.5. \nEvalutions are conducted on many existing benchmarks and performance of LLaVA-HR is quite impressive. Besides, the computational cost involved is quite small compared with related works."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to enhance MLLM by enlarging resolution of input images. By combining features from ViT and a CNN encoder through an adapter, performances of MLLM are improved a lot. Meanwhile, fusing high-resolution features from convolution-based encoder into low-resolution features from transformer-based encoder does not increase vision tokens to LLM decoder, so that additional computational cost is low. Proposed LLaVA-HR increases effective resolution for MLLM to 1024 and outperforms concurrent MLLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please see as in questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. There are several micro-designs in the Mixture-of-Resolution Adapter, including $\\mathcal{F}_l$, $\\mathcal{F}_h$, and the gate function. Why do we choose a conv layer for $\\mathcal{F}_l$, an MLP layer for $\\mathcal{F}_h$? Are these layers and functions necessary? Please provide some analyses.\n\n2. In the Mixture-of-Resolution Adapter, the authors choose the addition operation to fuse features of different resolutions. (Deformable) Cross Attention is also an option. I wonder which method is better?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written and easy to follow.\n2. The comparison of MRA and other high-resolution adaptation solutions is clear, highlighting the effectiveness of the dual visual pathways.\n3. The experiments are well-conducted and quite comprehensive.\n4. The study demonstrates strong performance on most datasets compared with other MLLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose the Mixture-of-Resolution Adaptation method to embed the high-resolution features into the low-resolution pathway. The MRA enhances the visual perception ability in MLLMs, and allow them to benefit from high-resolution visual inputs with reduced computational cost. Extensive experiments demonstrate the effectiveness of the MRA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Table 1, the MRA is compared to other high-resolution adaptation methods that use a single visual pathway. However, the introduction of a new visual encoder in the MRA raises concerns about the fairness of this comparison. Could the authors provide a baseline that uses dual visual pathways without the MR-Adapter?\n2. The analyses of the MRA’s architecture and design details are insufficient, particularly regarding $\\mathcal{F}_l$, $\\mathcal{F}_h$, and the gate function. Could the authors provide ablation studies on these components?\n3. The main novelty of the paper appears to be the Mixture-of-Resolution Adapter. While the application of dual visual pathways for high-resolution adaptation in MLLMs is innovative, the overall contribution of the paper seems somewhat insufficient. If MR-Adapter could integrate a wider variety of low- and high- resolution visual encoders, its contribution would be significantly enhanced."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "> ### 1. Clarification on Visual Encoder Notation\n\nIn line 206, it states that $F_{I_l}$ and $F_{I_h}$ are visual encoders for high- and low-resolution images, which seems to be a typo. The correct notation should reflect that $F_{I_l}$ and $F_{I_h}$ correspond specifically to low- and high-resolution encoders, respectively.\n\n> ### 2. MR-Adapter Placement in ViT Architecture\n\nFigure 2 shows the MR-Adapter is applied starting from the second stage of the ViT architecture. Does this mean the initial stage of the ViT does not utilize high-resolution features? Clarifying this could help illustrate the feature extraction flow more clearly.\n\n> ### 3. Implementation of LLaVA-1.5-448\n\nFor LLaVA-1.5-448, only the image resolution is modified at the fine-tuning stage. Have you considered modifying the visual backbone from ViT-336 to ViT-448 and retraining it for both pre-training and fine-tuning? This comparison could provide insight into performance differences when using higher resolution throughout the model’s entire training process.\n\n> ### 4. $SEED^{img}$ Performance Comparison\n\nCould you provide the $SEED^{img}$ performance for LLaVA-1.5, LLaVA-1.5-448, and LLaVA-NeXT? This metric would help evaluate relative image-processing capabilities across these models."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written and easy to follow.\n\n2. Figures 2 and 3 are effectively designed and enhance understanding of the framework.\n\n3. The ablation study is solid to reveal the contribution of component."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new approach for efficient multimodal large language models (MLLMs) by addressing the high computational cost of processing high-resolution images. The authors introduce Mixture-of-Resolution Adaptation (MRA), a method that combines both low- and high-resolution visual features to enhance model efficiency without compromising visual recognition quality. MRA uses two visual pathways: one for low-resolution and one for high-resolution images, with novel mixture-of-resolution adapters (MR-Adapters) that embed high-resolution information into the low-resolution pathway. This design significantly reduces input sequence length and computational load.\n\nThe authors apply MRA to the LLaVA model, resulting in an improved version called LLaVA-HR, which demonstrates superior performance across 15 out of 17 vision-language (VL) tasks, including a 5.2% increase in accuracy on TextVQA. Furthermore, LLaVA-HR maintains efficient training and inference times, showing improvements over LLaVA-NeXT."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "> ### 1. LImited performance imprvement.\n\nThe performance gains with MRA are modest. The low-resolution branch operates at 448×448, so the appropriate baseline is LLaVA-1.5 with 448-pixel resizing. Compared to this baseline, the improvements MRA achieves are minimal (e.g., +0.7 on VQA v2, +31 on MME, and +0.8 on POPE). Training cost and inference speed are also similar between MRA and LLaVA-1.5-448, reducing the practical benefit.\n\n> ### 2. Limited novelty\n\nThe dual-pathway, high-and-low-resolution approach isn’t particularly new. Similar strategies have been explored in other works, such as Mini-Gemini and CogAgent, yet the authors do not compare their method with these models. Explicitly differentiating MRA from these approaches would help clarify its unique contributions.\n\n> ### 3. Limited generalizability\n\nThe authors apply MRA solely to LLaVA-1.5. Expanding the evaluation to other MLLMs, like Qwen-VL, would strengthen claims of the method’s generalizability across architectures.\n\n\n[1] CogAgent: A Visual Language Model for GUI Agents\n[2] Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "As mentioned, in Table 1, it seems that there is no significant gap between ‘Avg. Pooling’ and the proposed MRA for the VQAv2 task, which is perplexing. The paper should explain the experimental phenomenon.\n2. The paper should carry out a qualitative experiment between the proposed MRA and the model variant in Table 2."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper aims to explore the high-resolution adaptation for MLLMs, which is crucial and engaging.\n2. The paper is well written and easy to follow.\n3. The paper is well motivated and the proposed MRA appears reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the efficient high-resolution adaptation for multimodal large language models (MLLMs) and proposes a mixture-of-resolution adaptation (MRA) method for MLLMs. To be specific, the proposed MRA employs two visual pathways for images of different resolutions, where high-resolution visual information is embedded into the low-resolution pathway via the mixture-of-resolution adapters. Besides, the paper conducts extensive experiments to verify the effectiveness of the proposed model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As demonstrated in Table 1, it seems that there is no significant gap between ‘Avg. Pooling’ and the proposed MRA for the VQAv2 task, which is perplexing. The paper should explain the experimental phenomenon.\n2. The paper should carry out a qualitative experiment between the proposed MRA and the model variant in Table 2.\n3. The paper fails to clarify the version of LLaVA-1.5 used in Figure 4."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Unlike previous strategies that divide high-resolution images into sub-images, this paper introduces an innovative dual visual pathway structure, offering a fresh perspective for high-resolution adaptation. The MR-Adapter effectively embeds high-resolution information into the low-resolution pathway, introducing a new adaptation mechanism within the visual processing framework of MLLMs. This design overcomes the efficiency limitations of traditional high-resolution processing.\n\n- The paper conducts extensive experiments across multiple vision-language tasks, providing a range of comparisons, with promising results.\n\n- The writing is clear and easy to follow. It effectively highlights MRA's performance gains and efficiency advantages across different tasks, helping readers fully understand the model’s effectiveness and strengths."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an novel high-resolution adaptation method for multimodal large language models (MLLMs), termed Mixture-of-Resolution Adaptation (MRA). MRA employs a dual visual pathway design to process high- and low-resolution images simultaneously from both macro and micro perspectives, while integrating high-resolution information into the low-resolution pathway through the Mixture-of-Resolution Adapter (MR-Adapter). This approach reduces the number of visual tokens while preserving rich visual semantics, significantly enhancing the model's visual descriptive power."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The processing of both low-resolution and high-resolution images in the paper is mainly square-based, such as 448x448 and 1024x1024. Is there any adaptation mechanism for handling images with different aspect ratios? Would processing high-resolution images in a way that matches the input image's aspect ratio lead to better performance?\n\n2. For high-resolution image inputs, we are more focused on improvements in OCR-related tasks. The results for OCRVQA in Table 5 don’t seem to be the best. Additionally, Table 6 only presents results for LLaVA-HR+, but it lacks results for LLaVA-HR-7B, LLaVA-HR-13B, and LLaVA-HR-X with less training data. It would be helpful to include these results to better illustrate the impact of MRA on OCR-related tasks.\n\n3. Could the authors further explain why the MR-Adapter is inserted in the last 3 stages? What is the design principle behind this decision? Could it be inserted in the earlier stages instead?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024feast,\ntitle={Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1EnpStvBU8},\nnote={under review}\n}"
},
"abstract": {
"value": "In existing multimodal large language models (MLLMs), image resolution plays a significant role for granular visual recognition. However, directly increasing image resolution leads to expensive computational cost for MLLMs. In this paper, we reveal that a combination of low- and high-resolution visual features can efficiently mitigate this shortcoming. Based on this principle, we propose a novel and efficient method for MLLMs, termed Mixture-of-Resolution Adaptation (MRA). In particular, MRA adopts two visual pathways for images of different resolutions, where high-resolution visual information is embedded into the low-resolution pathway via the novel mixture-of-resolution adapters (MR-Adapters). This design also greatly reduces the input sequence length of MLLMs. To validate MRA, we apply it to a recent MLLM called LLaVA, and term the new model LLaVA-HR. We conduct extensive experiments on 17 vision-language (VL) tasks, which show that LLaVA-HR outperforms existing MLLMs on 15 VL tasks,e.g., +5.2% on TextVQA. More importantly, both training and inference of LLaVA-HR remain efficient with MRA, e.g., 20 training hours and faster inference speed than LLaVA-NeXT. Source codes are anonymously released at: https://anonymous.4open.science/r/LLaVA-HR-4BB5."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"high-resolution adaptation",
"multimodal large language models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/676cc4f5d7fc493af918857ac67509339da9a126.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Feast Your Eyes: Mixture-of-Resolution Adaptation for Multimodal Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Euu8FPr3d | Unsupervised Multi-Agent Diversity With Wasserstein Distance | main | Active | Multi-Agent Reinforcement Learning;Multi-Agent diversity;Cooperation;Wasserstein Distance | reinforcement learning | 3;5;5;5 | 5;3;5;4 | 2;3;3;3 | 2;3;2;2 | 4;2;3;3 | 4.5 | 4.25 | 2.75 | 2.25 | 3 | -0.522233 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Beyond the major concerns I have listed, there are the following questions: \n1. Can this method be applied to agents in continuous action spaces, and to multi-agents with different action spaces? \n2. Regarding the discussion in lines 171-173 of the paper, can the authors provide an example to illustrate this point, and why wouldn't the WD directly become zero?\n3. Compared to vanilla methods like QMIX and QTRAN, WMAD will undoubtedly introduce additional computational overhead. If WMAD appears to require fewer timesteps to achieve comparable performance levels, but demands more CPU/GPU computation time (or real time), could this impact its practical use? Have there been any experiments conducted to assess the extent of this additional computational load?\n\n4. WMAD chooses the Euclidean distance as the cost function to compute the Wasserstein distance. I am curious about the results if the Euclidean distance were used directly as the intrinsic reward."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is clearly articulated and well-structured, with the discussion on MI-based methods being particularly enlightening.\n2. The discussion in the experimental section is comprehensive, with a thorough design of ablation studies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel approach to multi-agent policy diversity within the MARL domain. Firstly, the paper provides a detailed analysis of the shortcomings of current diversity methods based on mutual information. Subsequently, it leverages a CPC-based next-step prediction method to facilitate the learning of distinguishable representations of agent trajectories. Furthermore, it introduces a method for rapidly calculating the Wasserstein distance in multi-agent systems, which is integrated into practical MARL algorithms in the form of intrinsic rewards. Finally, the effectiveness of the proposed method is validated on the Pac-Men, SMAC/SMACv2 environments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the paper is well-written, I have major concerns regarding the novelty of the paper. \n1. The first is the introduction of Wasserstein Distance (WD) to quantify the policy diversity (as represented by trajectories) among agents, where there has already been related work in the MARL domain, which may not represent a significant innovation. For example, work [1] introduces the concept of system neural Diversity based on WD, and work [2] proposes a policy distance concept also based on WD by learning representations of policy distributions. \n2. The second concern is about translating the diversity's WD into intrinsic rewards to encourage diversity. In fact, methods purely encouraging diversity are not limited to intrinsic rewards or objective functions but also include controlling the network structure. Work [3] has even gone beyond merely encouraging diversity to being able to control the diversity of a multi-agent system to a specific value. Therefore, this work might not be novel enough to match the ICLR community. \n3. Correspondingly, there are concerns regarding the selection of baselines. Since this is a method encouraging multi-agent diversity, why has it only been compared with MI-based methods? Baselines should include MARL diversity-related but MI-unrelated methods. For example, RODE [4], ADMN[5], and the previously mentioned methods?\n\n*I hope the authors can understand my concerns and address them together with the following questions.*\n\n\n[1] Bettini M, Shankar A, et al. “System neural diversity: measuring behavioral heterogeneity in multi-agent learning”[J]. arXiv preprint arXiv:2305.02128, 2023.\n\n[2] Hu T, Pu Z, Ai X, et al. “Measuring Policy Distance for Multi-Agent Reinforcement Learning” International Conference on Autonomous Agents and Multiagent Systems (2024)\n\n[3] Bettini M, Kortvelesy, et al. “Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning” International Conference on Machine\nLearning (2024) \n\n[4] T. Wang, T. Gupta, A. Mahajan, et al. “RODE: Learning Roles to Decompose Multi-Agent Tasks” International Conference on Learning Representations (2021)\n\n[5]Yu Y, Yin Q, Zhang J, et al. “ADMN: Agent-Driven Modular Network for Dynamic Parameter Sharing in Cooperative Multi-Agent Reinforcement Learning”\nInternational Joint Conference on Artificial Intelligence (2024)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weaknesses. Look forward to more explanation."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors propose SMAC to promote exploration by using Wasserstein distance to evaluate the difference in trajectories of different agents, which is more reasonable than mutual information.\n2. Experimental results show that the proposed algorithm WMAD is much better than baseline algorithms."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In order to promote exploration in multi-agent reinforcement learning, the authors propose a method WMAD to maximize the difference in agents’ trajectories. The difference in trajectories is represented by Wasserstein distance, which is calculated with latent variables. Extensive experiments are conducted to show the superiority of WMAD in tasks including Pac-Men and SMAC."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the use of Wasserstein distance is better than mutual information, it seems that the idea of using Wasserstein distance to enhance the difference in trajectories of agents has been proposed, such as “Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning”.\n2. The authors claim that the proposed algorithm WMAD achieve SOTA with better exploration, while the baseline algorithms in experiments are not specifically designed for exploration. Baselines are fundamental MARL algorithms and mutual information-based exploration algorithms. Other kinds of exploration methods are missing, such as “Episodic multi-agent reinforcement learning with curiosity-driven exploration”.\n3. It seems that the results of baselines are much worse than those in original papers, such as MAVEN in 6h_vs_8z (super hard) and corridor (super hard)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I hope the authors can provide comparison results between WMAD and baselines in terms of the number of parameters and training costs.\n2. Please see Weaknesses 3."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors describe the framework and implementation of WMAD in detail, which makes the method easy to understand.\n2. The motivation of this paper is very clear: First, promote multi-agent diversity for exploration; Second, improve previous mutual-information-based approaches.\n3. The visualization of the visited area strongly demonstrates the effectiveness of WMAD in promoting diversity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the issue of diversity in cooperative multi-agent reinforcement learning (MARL). As parameter sharing in MARL often leads to homogeneous behaviors and limited exploration, some previous methods promote identity-aware multi-agent diversity by mutual information (MI). The authors point out the drawbacks of MI and replace it with Wasserstein Distance. The Wasserstein Multi-Agent Diversity (WMAD) uses the Wasserstein distance between the trajectory distributions of different agents as an intrinsic reward to facilitate exploration. The authors conducted experiments on Pac-Men, SMAC and SMACv2 to test the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty is relatively limited. As the authors mentioned, there are already many works about promoting diversity for enhanced exploration. WMAD follows them and replaces the metric of diversity with Wasserstein distance.\n2. According to Figure 4(d), the diversity of agents' trajectories is improved significantly. However, how would WMAD perform in scenarios that require homogeneous behaviors (e.g., focus fire on the same enemy in SMAC)? I think the authors need to include results or discussion on WMAD's performance in such scenarios.\n3. The experiment results in SMAC and SMACv2 are very significant. However, it is worth noting that the learning rate is set to 0.005 and the batch size is 128. The exploration rate is also tuned. These settings are proven to significantly improve the performance of QMIX in *pymarl2* [1]. Therefore, I wonder how the other baselines are implemented, and I'm concerned about the fairness of the experiments. Maybe the authors could clarify the implementation and hyperparameter settings of other baselines. Furthermore, It would be better to provide the results of WMAD under the hyperparameter settings in *pymarl* and *pymarl2*, respectively.\n\n[1] Hu J, Jiang S, Harding S A, et al. Rethinking the implementation tricks and monotonicity constraint in cooperative multi-agent reinforcement learning[J]. arXiv preprint arXiv:2102.03479, 2021."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Have you explored alternative cost functions tailored to specific tasks, and if so, how did they impact the results?\n\n- Although you adopted a kernel-based method to reduce computational costs, what challenges did you face in scaling the method to larger multi-agent systems? Will this limit the proposed method's scalability?\n\n- How sensitive is your method to the selection of hyperparameters, such as the kernel width or the coefficient for intrinsic rewards? Did you conduct any sensitivity analysis to understand their impact?\n\n- Since the effectiveness of your method heavily relies on CPC for trajectory representation, how robust is the learned representation to noise or perturbations in agent observations?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper introduces a new approach by using the Wasserstein distance to promote agent diversity, addressing the limitations of mutual information-based methods in encouraging effective exploration.\n\n- Using contrastive predictive coding for learning distinguishable trajectory representations enhances the ability to measure differences between agents’ behaviors.\n\n- The method is evaluated across multiple challenging multi-agent environments (like Pac-Men, SMAC, and SMACv2), demonstrating consistent outperformance over baseline methods. The proposed approach is integrated with MARL algorithms like QMIX, showing its practical applicability and potential to improve real-world cooperative learning tasks.\n\n- The paper is organized clearly and it is easy to follow its core idea."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Wasserstein Multi-Agent Diversity (WMAD), a new method for promoting exploration in Multi-Agent Reinforcement Learning (MARL). Unlike mutual information-based approaches, WMAD maximizes the Wasserstein distance between agents' trajectory distributions to encourage diverse behaviors. The method leverages Contrastive Predictive Coding (CPC) to learn trajectory representations and introduces a nearest neighbor intrinsic reward based on the Wasserstein distance. WMAD achieves more diverse policies and better exploration, outperforming state-of-the-art methods in complex multi-agent tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The Wasserstein distance relies on an appropriate cost function to measure trajectory differences, and the paper uses a simple Euclidean distance without exploring task-specific alternatives, which may limit the method’s adaptability.\n\n- Although the paper employs kernel-based techniques to reduce costs, computing the Wasserstein distance for every pair of agents in large-scale multi-agent systems can still be computationally intensive, making the system not scalable. \n\n- The paper does not thoroughly explore the sensitivity of the method to key parameters, such as the choice of kernel or the weighting of intrinsic rewards, which could affect generalizability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unsupervised,\ntitle={Unsupervised Multi-Agent Diversity With Wasserstein Distance},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Euu8FPr3d},\nnote={under review}\n}"
},
"abstract": {
"value": "In cooperative Multi-Agent Reinforcement Learning (MARL), agents sharing policy network parameters are observed to learn similar behaviors, which impedes efficient exploration and easily results in the local optimum of cooperative policies. In order to encourage multi-agent diversity, many recent efforts have contributed to distinguishing different trajectories by maximizing the mutual information objective, given agent identities. Despite their successes, these mutual information-based methods do not necessarily promote exploration. To encourage multi-agent diversity and sufficient exploration, we propose a novel Wasserstein Multi-Agent Diversity (WMAD) exploration method that maximizes the Wasserstein distance between the trajectory distributions of different agents in a latent representation space. Since the Wasserstein distance is defined over two distributions, we further extend it to learn diverse policies for multiple agents. We empirically evaluate our method in various challenging multi-agent tasks and demonstrate its superior performance and sufficient exploration compared to existing state-of-the-art methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-Agent Reinforcement Learning",
"Multi-Agent diversity",
"Cooperation",
"Wasserstein Distance"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a9f0cd6a9ed84e008fae0bc782684a9ad36b4622.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f4ae5dbb8f1e602ca27e8d1ec3f3bf6272b2734b.zip"
},
"title": {
"value": "Unsupervised Multi-Agent Diversity With Wasserstein Distance"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1ExfUpmIW4 | Towards Robust and Cost-Efficient Knowledge Unlearning for Large Language Models | main | Active | Machine Unlearning;Large Language Models;Low-rank Adaptation | alignment, fairness, safety, privacy, and societal considerations | 5;5;5;6 | 3;3;3;3 | 3;3;2;3 | 3;3;3;2 | 2;3;2;3 | 5.25 | 3 | 2.75 | 2.75 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In Figure 3, why are some data points missing? \n2. It is better to add legends in Figures 3 and 4 to improve the clarity.\n3. It is better to define “Model Utility” within the paper instead of referring readers to other papers.\n4. For the Hinge loss equation in Line 233, since the probability p(.) is in the range of (0,1), the second item within max() function is always larger than 0, right? If so, IHL is to reduce the probability of true tokens but to increase the probability of other tokens, right?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The motivation is clearly explained. \n2. Extensive experiments have been conducted to prove the effectiveness of the proposed methods. \n3. The theoretical analysis strengthens the rationale of the proposed methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a framework to remove sensitive information from LLMs without retraining them from scratch. Recognizing the limitations of common unlearning methods like Gradient Ascent (GA), which risks instability and unintended forgetting, the authors introduce two new techniques. The Inverted Hinge Loss (IHL) method enhances stability by suppressing unwanted tokens with the next most likely alternative, while the Fisher-weighted Initialization of Low-rank Adapters (FILA) uses Fisher information to initialize LoRA adapters, selectively targeting parameters associated with unwanted information to optimize unlearning. This dual approach was evaluated on the Training Data Extraction Challenge and TOFU benchmark with models such as GPT-Neo, Phi-1.5B, and Llama2-7B, achieving efficient unlearning with minimal loss to the model’s reasoning and generative capabilities, and demonstrating improved performance over existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. One of the most significant contributions of this paper is the proposal of Inverse Hard Loss (IHL), which claims to increase the probability of the second-best token only. However, it is not clear why IHL does not affect the probability of other tokens. Based on the definition of IHL in Lines 233, the probability of all other tokens is impacted. As such, IHL can only address problem 1 (Line 224) but cannot address problems 2 and 3 of GA (Lines 224 ~ 226).\n2. In Figures 3 and 5, the unlearning performance of employing only IHL (represented in green) does not outperform the GD baseline (depicted in blue), which undermines the effectiveness of IHL. \n3. The main results only use GPT-neo models, which are old models. It is better to use more recent models like Llama and Mistral models to make it more practically useful. It is also inconsistent to use different models for main results and analysis. \n4. There are no ablations studies for the following settings: 1) full parameter fine-tuning with IHL; 2) LoRA + FILA only; 3) GD + LoRA + FILA."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How robust are the proposed methods to changes in the data distribution of the forget set? For instance, if the forget set contains highly diverse or outlier data, would the unlearning process still be effective?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Authors analyze the derivatives of GA and highlight its shortcomings, the motivation is clear and the theoretical foundation strengthens the rationale for the proposed methods.\n2. The introduction of IHL addresses the instability issues of GA by focusing gradient updates on a minimal number of viable replacements for the ground-truth token. This results in a more controlled and stable unlearning process.\n3. The proposed strategies are effective. The authors evaluate the methods on multiple datasets and multiple model sizes. This comprehensive evaluation demonstrates the robustness and generalizability of the proposed methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper works on machine unlearning in LLMs, particularly focusing on the challenges of removing specific data instances from a model's memory without retraining from scratch. The authors propose two strategies: Inverted Hinge Loss (IHL) and Fisher-Initialization of Low-rank Adapters (FILA). IHL is designed to replace the unbounded negative cross-entropy loss in gradient ascent with a more stable and efficient loss function. FILA aims to initialize low-rank adapters in a way that prioritizes the removal of unwanted information and accelerates the unlearning process. Extensive experiments validates that the proposed methods significantly outperform existing baselines in efficiency and post-unlearning performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The intuition and connection between the proposed methods, IHL and Fisher-Initialization of FILA, appear somewhat weak. This makes the paper feel like it is stacking two separate tricks rather than offering a unified and coherent approach. A more systematic linkage between these methods would enhance the overall coherence and impact of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "What is the reason for deriving the unlearning mechanism of GA from the formulas in lines 217-220?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "### Originality\n- This paper points out the shortcomings of Gradient Ascent (GA) by analyzing its inverse. \n- This paper proposes two strategies to improve these shortcomings. \n- This paper demonstrates the effectiveness of their improvements on two datasets.\n\n### Clarity\n- The structure of this paper is clear, and most of the content is explained clearly.\n\n### Significance\n- This paper provides insights into knowledge unlearning through the analysis of Gradient Ascent (GA)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors identify the limitations of current unlearning methods (e.g., Gradient Ascent (GA)), which can lead to unstable optimization and catastrophic forgetting of retrained knowledge. To overcome these challenges, the paper introduces two novel techniques for robust and efficient unlearning:\n\n1. **Inverted Hinge loss (HIL):** This new loss function suppresses unwanted tokens while maintaining fluency by boosting the probability of the next most likely token.\n\n2. **Fisher-Initialization of Low-rank Adapters (FILA):** Developed through low-rank approximation weighted with relative Fisher information, this method focuses updates on parameters critical for removing targeted knowledge.\n\nThe paper demonstrates the effectiveness of these techniques through experiments on the Training Data Extraction Challenge dataset using GPT-Neo models and on the TOFU benchmark with Phi-1.5B and Llama2-7B models. The proposed approach successfully removes sensitive information while maintaining the reasoning and generative capabilities of the models with minimal impact on performance.\n\nIn summary, this paper provides innovative solutions to the drawback of GA and demonstrates the effectiveness of the solutions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- This paper lacks the state-of-the-art knowledge unlearning baselines (such as [1][2]). Although the main goal of the paper is to address the shortcomings of GA, incorporating the state-of-the-art knowledge unlearning for comparison would make it more convincing.\n\n- Some descriptions are not clear enough. For example, lines 221-223 should include more explanation for the reasons. The authors should explain in detail why GA increases the prediction score for all other tokens $v \\neq x_t$ in the vocabulary.\n\n- From the experimental results, when only IHL is used, the performance is worse than the original GA. Does this contradict the paper's claim that IHL is designed to address the shortcomings of GA and the analysis of derivatives of GA?\n\n- The paper devotes too much content to the background of knowledge unlearning in the Abstract and in the first paragraph of the Introduction. Since knowledge unlearning is a common problem, I believe it is unnecessary to describe it in such detail. The main content should focus on describing the research conducted in this paper. Specifically, Figure 1 should illustrate the proposed approach rather than the knowledge unlearning problem.\n\n[1] Zhang R, Lin L, Bai Y, et al. Negative preference optimization: From catastrophic collapse to effective unlearning[J]. arXiv preprint arXiv:2404.05868, 2024.\n\n[2] Gao C, Wang L, Weng C, et al. Practical unlearning for large language models[J]. arXiv preprint arXiv:2407.10223, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In Introduction Section (Line 72-74), can you explain more about the reason that low-rankness can be beneficial in stabilizing optimization and preventing catastrophic forgetting? Clearer illustration would be better here.\n\n2. In Table 1, why you record results from running different epochs? Does it mean the method reaches the optimal with these epochs?\n\n3. In Experiments Section, why different LLMs are used for those two tasks? Have you evaluated more popular and larger LLMs such as Llama3.1-8B? I suggest giving explanation of the strategy and purpose of model selection."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper makes a good contribution to knowledge unlearning of LLMs through improving tuning stability and efficiency with Inverted Hinge Loss and Fisher-Initialization of Low-rank Adapters, respectively. The proposed method is valuable of managing unnecessary forgetting and unbounded optimization in typical Gradient Ascent strategies.\n2. The paper is generally well-written, particularly in formula derivation and clear explanation about IHL and FILA solve the weaknesses of GA in Section 3.3 and Section 3.4.\n3. The experiments and analysis of the paper is comprehensive, with great illustration on performance evaluation using high quality charts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper focus on the problem of unstable optimization, catastrophic forgetting, and computational cost from Gradient Ascent for LLM unlearning, and propose two novel techniques, including the Inverted Hinge Loss and Fisher Information weighted initialization of LoRA adapters, for robust and efficient unlearning for LLMs. Experiments on two tasks with different LLMs show that the proposed methods enable faster and more stable LoRA-based LLM unlearning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Introduction Section (Line 51-53), you mention GA and its shortcomings, I think a better way of writing here would be providing a brief overview of 2-3 other key knowledge unlearning approaches beyond GA, and summarize 1-2 common shortcomings across these methods that motivate the proposed approach. GA should be only one of those existing typical methods.\n\n2. In Introduction Section (Line 76), you mention the application of LoRA to LLM unlearning remains unexplored, however, there are some existing studies using LoRA for LLM unlearning, including Practical Unlearning for Large Language Models (https://arxiv.org/abs/2407.10223) and Machine Unlearning in Large Language Models (https://arxiv.org/abs/2405.15152v1). It would be better to briefly summarize (in 1-2 sentences each) how LoRA was used for unlearning in these two papers, and then explain how their proposed approach differs or improves upon these methods.\n\n3. Some important content is missing. In Introduction Section, a lack of clear summarization of contributions in the paper, making readers difficult to capture the important points of the study. Besides, in Related Work Section, a brief comparison between your proposed method and other relevant studies should be presented to better emphasize the advantages of your work. \n\n4. In Section 3.3 (Line 223-227), you should provide a more detailed explanation of how you arrived at these hypotheses. There is still a gap between GA motivation and its weaknesses. To make the illustration more convincible here, a better way would be providing a specific example or mathematical derivation showing how the GA loss function leads to one of the stated problems (e.g., unbounded optimization or inefficient gradient updates).\n\n5. In Section 3.4 (Line 265), a basic and brief description of Fisher Information is necessary here for better understanding of the reason you employ it to address the importance quantification you mentioned before.\n\n6. In Table 1, to make the table more readable, the data of important results should be highlighted via bolding the best result in each row or using color coding to show relative performance between methods, in order to show either your FILA LoRA performs much better than traditional LoRA, or it can approach the performance of full fine-tuning.\n\n7. There are some writing flaws:\n* Some sentences are too long to follow and comprehend, such as Line 89-93.\n* In Line 419-421, there are two \"Second\" in these two sentences, making them difficult to be understood.\n* The capitalization in the text should be more consistent. For instance, you use lowercase \"l\" in \"Inverted Hinge loss\" at Line 20 and Line 161, but uppercase \"L\" in \"Inverted Hinge Loss\" at Line 82. All uppercase would be better."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel loss function and LoRA initialization scheme for robust and efficient LLM unlearning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Robust and Cost-Efficient Knowledge Unlearning for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1ExfUpmIW4},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora. However, this poses risk of privacy and copyright violations, highlighting the need for efficient machine unlearning methods that remove sensitive data without retraining from scratch. While Gradient Ascent (GA) is commonly used to unlearn by reducing the likelihood of generating unwanted content, it leads to unstable optimization and catastrophic forgetting of retrained knowledge. We also find that combining GA with low-rank adaptation results in poor trade-offs between computational cost and generative performance. To address these challenges, we propose two novel techniques for robust and efficient unlearning for LLMs. First, we introduce Inverted Hinge loss, which suppresses unwanted tokens while maintaining fluency by boosting the probability of the next most likely token. Second, we develop a data-adaptive initialization for LoRA adapters via low-rank approximation weighted with relative Fisher information, thereby focusing updates on parameters critical for removing targeted knowledge. Experiments on the Training Data Extraction Challenge dataset using GPT-Neo models as well as on the TOFU benchmark with Phi-1.5B and Llama2-7B models demonstrate that our approach effectively removes sensitive information while maintaining reasoning and generative capabilities with minimal impact."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Machine Unlearning",
"Large Language Models",
"Low-rank Adaptation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fd54b49958e65e9cd9c984361c58a23abd66ce3c.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/6b29b619023c912af8e21adc85479f11e75d0773.zip"
},
"title": {
"value": "Towards Robust and Cost-Efficient Knowledge Unlearning for Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1F8xTfv6ah | Advancing Out-of-Distribution Detection via Local Neuroplasticity | main | Active | Out-of-Distribution Detection;Local Neuroplasticity;Kolmogorov-Arnold Networks | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;5;5;6 | 3;4;4;4 | 3;3;2;3 | 2;3;2;3 | 3;3;3;3 | 5.25 | 3.75 | 2.75 | 2.5 | 3 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Questions and suggestions:\n\nMajor: \nTesting of approach on other large-scale datasets would be beneficial, consider other leaderboards from openOOD like ImageNet-200, 1K.\nThe choice of the K-means clustering approach looks quite arbitrary for initial data splitting. Why not use other clustering approaches like DBScan, Spectral, Agglomerative or even Gausian mixture? I believe, K-means choice should be justified here.\n\nOne can assume a dataset with a lot of natural clusters (like ImageNet-1K) will require a lot of time for training KANs. Show that the approach is actually scalable, robust, and not computationally burdensome in case of a large number of clusters.\n\nThe robustness of clustering approach is not evident for the case of regression task due to the poor internal separability of data clusters. I suggest adding one example of OOD detection where the training dataset is directly related to the regression task. \n\nThe method looks strongly backbone dependent and may be poorly working for the plethora of practical tasks where the good backbone feature extractor is not known. Is it possible to exemplify the method robustness for the case of the absence of backbone preprocessor? \nProbably, some classic ML tabular datasets (e.g. from sklearn) could be useful here.\n\n“Importantly, our experiments show that the previous methods suffer from a non-optimal InD dataset size” - this statement requires more experimental support. Currently, the method superiority was shown only for the CIFAR-10 dataset. \n\nMinor:\nLine 183 (figure caption): “- “(e) InD score S(x)∀x ∈ [−1,1] “ - why the InD score can take negative values? The original formula (5) contains absolute value operation brackets. Is this the typo? \n\nLine 187: “A simple, yet effective approach is to split the dataset based on class labels.” - It is not obvious how to train KANs in case of such splitting. One can imagine a situation where positive class is OOD for a KAN trained on samples of negative class, and the maximization scoring procedure identifies positive class as an OOD. This point should be clarified or rephrased.\n\nI’m interested if the method will be robust for the case of NaN-enriched data samples? It is not a request for an additional analysis but rather an interesting point for the discussion of method limitations."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The described method is clearly defined and is easy to reproduce.\n\nThe method is validated across image and tabular medical data benchmarks, demonstrating improved performance and robustness compared to other state-of-the-art OOD detectors. \n\nThe findings highlight KANs' potential in enhancing model reliability across diverse environments by maintaining high detection accuracy, even with a relatively small training dataset.\n\nThe results (although not on all datasets) look promising in terms of different OOD detection accuracy especially for the case of low number of training samples."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new out-of-distribution (OOD) detection method leveraging Kolmogorov-Arnold Networks (KANs), which utilize “local neuroplasticity” to differentiate in-distribution (InD) data from OOD data via comparing the activation patterns of a trained KAN against an untrained counterpart. KANs stand out due to their spline-based architecture, which preserve specific network regions during training, aiding in the OOD detection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Despite the clarity, some steps of the approach implementation look like ad-hoc tricks for improving the method’s performance without developing a deep intuition why a particular step is better than alternatives (please, see questions below for details).\n\nThe fact that not all datasets (leaderboards) from the OpenOOD were used for testing the approach, along with the obtained not perfect results on CIFAR-100, suggest that the datasets were selected manually. The authors need to prove absence of any selection bias. \n\nI am strongly concerned about the scalability of the proposed method, which requires splitting the training dataset into a number of subsets and fitting a model per a subset (see comments below).\n\nThe method resembles feature-preprocessor (backbone-dependent), being not applicable to the case where a good feature extractor is not known."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please answer the points raised in the questions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. It introduces an innovative approach to OOD detection, offering fresh ideas and a unique viewpoint that advances the current understanding of OOD detection techniques.\n2. The paper effectively harness the neuroplasticity characteristic of KANs, ensuring that learning new tasks only affects the network regions activated by the training data, effective motivation for OOD detection.\n3. The paper includes thorough experiments on standard benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel OOD detection method that leverages the unique local neuroplasticity of Kolmogorov-Arnold Networks (KANs). By comparing the activation patterns of a trained KAN against its untrained counterpart, the method identifies OOD samples across diverse benchmarks, including computer vision and tabular medical data. Experimental results demonstrate that the KAN-based approach outperforms existing methods and shows resilience to variations in in-distribution dataset sizes. This robust, adaptable approach makes KANs a promising tool for enhancing the reliability of ML systems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the core idea is clear, the method appears loosely structured. Specifically, the role of multiplying location-specific information with regions activated by InD samples to achieve the delta function (used in the score function) is unclear (e.g., Eqn 5). Additionally, no study is provided to analyze these aspects, leaving parts of the methodology unexplored.\n2. The paper does not present or discuss the generalization performance of models when KANs are incorporated into the training scheme. \n3. Results on CIFAR-100 indicate minimal advantage over existing methods, as the improvements in detection performance appear statistically insignificant.\n4. Including a discussion on the computational cost of the proposed method would strengthen the paper. Given that the approach involves dividing the dataset into different groups, insights into computational efficiency would enhance understanding of the method’s practicality."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- It is not clear how different KAN$_i$'s are trained. It would be\n good to explain this a bit more in depth.\n- Authors state that the method can be seamlessly integrated with any\n pre-trained model. I do not really understand this. Doesn't one need\n to use KAN model for this?\n- How are the pre-trained backbones used for KAN? Does one use the\n features extracted from these networks and build classifiers and\n regressors with KAN architecture?\n- Authors state that hyperparameters are tuned using a validation\n set. How much do the trained hyperparameters generalize to OOD types\n unseen in the validation set?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The topic is very relevant.\n+ The idea is novel and quite intuitive.\n+ The results are motivating. Even though this is not the best\n performing all around, it is one of the top algorithms.\n+ Authors do a great job explaining the method as well as motivating\n the approach.\n+ Large set of experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Authors utilize Kolmogorov Arnold Networks (KAN) for out of\ndistribution detection. The main idea is to leverage the fact that KAN\nuses BSplines as non-linear functions. Feature values that appear\nwithin InD, if they are concentrated in certain part of the feature\nspace - which is $\\mathbb{R}$ in this case, will only modify certain\nBSpline coefficients. In this scenario when a feature value that is\ndifferent than the InD comes, the BSpline coefficients at those\nlocations will not have been modified during training. Hence, the\ndifference in activation between trained and untrained network will be\nlow. Experiments with benchmark datasets and comparisons with large\nset of alternatives are presented."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The model - due to KANs - is heavily univariate. While authors do\n dataset partitioning to alleviate the problem, I do not see how they\n can actually do so. Unsupervised combinations of features are\n mentioned, however, their applicability also raises questions.\n- Partitioning the dataset requires having multiple trained models,\n which limits the applicability of the approach for large scale\n problems.\n- KANs are interesting but most recent work do not use these\n networks. This naturally limits the applicability of the approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Line 43 has wrong citation \n\nYou mention that the hyperpareter search can be quite challenging. How did you decide for the parameter space especially regarding number of epochs, learning rate, partitionings?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Originality: Given that KANs are a novel type of architecture the research is a very current \n- The method is evaluated on image and tabular data, demonstrating feasibility across different domains. \n- Performance: The performance on the benchmarks is convincing and demonstrates superiority over a vast set of previous methods \n- Exhaustive experimentation on toy datasets including multiple important ablations that erase questions (such as stochasticity)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose to use Kolmogorov-Arnold Networks (KAN) for out-of-distribution detection. The key advantage of KANs is their plasticity which results in avoiding catastrophic forgetting. The authors show that this property can be leveraged to detect OOD samples. \n\nThe method demonstrates good performance on small datasets, but the proposed method does not properly address the shortcomings of the KAN architecture, and the method was not validated in terms of scalability to realistic problems. Overall I rate weak reject."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major: \n- Scalability: No experiments demonstrate the method's scalability to larger images or real-world problems. \n- Insufficient capturing of joint distribution: I believe the partitioning problem of KANs is very severe. While the problem is mentioned I believe it is not properly addressed. Essentially, by partitioning the dataset you are just scaling the problem down to subclasses. What if the l-shaped differences, that you mention in Table 2, appear on an intra-class level instead of a class level? While this may work for toy data if the data is sufficiently separable using k-means or class labels directly, I doubt it will work for more difficult problems such as MVTech. \n- The influence of Model capacity is unclear: KANs are known for their improvements in lack of catastrophic forgetting. How does the model size influence this. Additionally, if KANs treat features individually, the difficulty of the problem and the necessary capacity of the method scales drastically with the image size."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A novel method leveraging the local neuroplasticity of Kolmogorov-Arnold Networks for OOD detection"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024advancing,\ntitle={Advancing Out-of-Distribution Detection via Local Neuroplasticity},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1F8xTfv6ah},\nnote={under review}\n}"
},
"abstract": {
"value": "In the domain of machine learning, the assumption that training and test data share the same distribution is often violated in real-world scenarios, requiring effective out-of-distribution (OOD) detection. \nThis paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs). \nUnlike traditional multilayer perceptrons, KANs exhibit local plasticity, allowing them to preserve learned information while adapting to new tasks. \nOur method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples. \nWe validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques. \nThese results underscore the potential of KANs in enhancing the reliability of machine learning systems in diverse environments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Out-of-Distribution Detection",
"Local Neuroplasticity",
"Kolmogorov-Arnold Networks"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/675a9e37ab905ea1c8fb1e86c71ee7a67d7a0323.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/cd69aa443ab34097403cabac28a0c62906662dcd.zip"
},
"title": {
"value": "Advancing Out-of-Distribution Detection via Local Neuroplasticity"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1FY1apsMxc | LLM as GNN: Graph Vocabulary Learning for Graph Foundation Model | main | Active | large language model;foundation model;graph neural networks | foundation or frontier models, including LLMs | 3;3;3;5 | 5;4;4;5 | 2;1;2;2 | 2;1;2;2 | 2;3;2;3 | 3.5 | 4.5 | 1.75 | 1.75 | 2.5 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The discussion on generating graph vocabulary remains incomplete. Specifically, which module is responsible for creating the graph vocabulary: the graph understanding module or the graph inference module? Based on Figure 4 and the data flow depicted, it appears that the graph vocabulary is generated before the predictor LLM. However, the paper discusses this in Section 4.2, which focuses on the graph inference module. Could it be that the graph vocabulary is constructed through another process between the two modules?\n- The distinction between a regular node textual feature and a language-based ID is unclear. In the case study, the language-based ID seems to be a rephrasing of the input textual features. How, do these language-based IDs address the OOV problem if they only replicate a rephrased version of input?\n- The representation of node features in graphs like Cora and PubMed as text is not addressed. Given that these graphs contain high-dimensional features, creating an optimal textual representation from them is challenging. How are these features conveyed in the text domain?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper introduces a novel approach by employing multilayer LLMs to replicate the message-passing process of GNNs.\n- The authors tackle an important issue of OOV tokens in LLMs when applied to graph tasks.\n- Experimental results demonstrate the superiority of their model compared to existing LLM + GNN architectures.\n- The paper does a comprehensive review of the current methods and the baselines are state-of-the-art."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a graph foundational model. The proposed method employs one LLM to replicate a GNN’s workflow through prompting and another LLM for fitting downstream tasks. Specifically, they replicate the message-passing and local aggregation processes of a multilayer GNN by summarizing input using an LLM, prompting sampled one-hop neighborhoods of nodes to the LLM, and prompting it for aggregation across the neighborhoods. To mitigate the problem of out-of-vocabulary (OOV) faced by LLMs observing unseen nodes, they introduce graph vocabulary learning by making language-based IDs. This graph vocabulary is used for making prompts for the inferencing LLM. Finally, to increase generalization the LLM of inference module is fine-tuned on different tasks and datasets by multi-instruction learning"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the proposed method demonstrates clear superiority over current state-of-the-art techniques, there are several significant concerns regarding the paper that need to be addressed:\n\n- While the use of multiple layers of LLMs to replicate message passing across multi-hop structures is a novel approach, the fundamental concept of prompting LLMs for message passing in 1-hop neighborhoods is well-explored with similar methods [1, 3]. The authors should clearly distinguish their “graph understanding” module from existing techniques. Notably, their method incorporates nodes sequentially, which raises practical concerns for large graphs due to the maximum input length limitations of LLMs and the associated performance decline when handling long sequences. For example, showing how long-range dependencies and long sequences can be captured by their understanding module would discriminate from previous works. Additionally, the authors do not provide a clear example of a template showing how a node's textual representation is constructed. The examples in Figure 3 and the Appendix are not sufficiently informative and differ significantly from the case study provided. A concrete example that aligns with the templates outlined in the paper would greatly enhance understanding of the method. Concerning this, the authors would provide special tokens, words, phrases or more details of matching input text to a unified template.\n- The prompt examples provided in the paper, along with the case study, illustrate that the graph understanding module summarizes the input text sequentially across multiple rounds. However, in GNNs, information is propagated through message passing rather than summarized. As a result, the LLM has not effectively replicated the message-passing and aggregation processes. Additionally, because the graph understanding module utilizes a non-deterministic LLM, some nodes and their associated feature and structural information may be lost across multiple layers. Consequently, retrieving the embeddings of all nodes after several rounds of prompting becomes challenging. The paper does not address how this information preservation is ensured, especially since the output of the n-th layer of the LLM is expected to represent all input nodes. For example, to address information loss due to the non-deterministic nature of LLMs authors would keep records of node representations after each round of prompting LLM and do an overall aggregation. \n- The generalization of the LLM module for inference might be limited to the tasks and datasets used for fine-tuning which is far from a universal vocabulary as claimed in the paper. Also, this type of multi-task learning is also studied with GNN modules trained on different tasks and datasets as proposed in [1, 2, 3]. Authors would provide evidence or arguments supporting their claim of a \"universal vocabulary\", particularly in comparison to existing multi-task learning approaches with GNNs.\n- The term \"prompt-based GNNs\" can be misleading, as the underlying model is actually an LLM, not a GNN, and there are fundamental differences between GNN-based models and LLMs. This confusion is further compounded by the visualization in Figure 2, which portrays the current work as a combination of a GNN and an LLM, despite the absence of a GNN module in the model. To enhance clarity, it would be beneficial to revise the terminology and the visualization to better reflect the model's true nature. 
For example, they can call their method a \"prompt-based information propagation\" and also remove the \"GNN\" block from the figure and keep the single LLM. \n- In the \"Data Description\" section, the paper states that the Cora, Citeseer, and PubMed datasets are \"introduced\" by this work. This wording is misleading, as these datasets are not contributions of the paper. Authors would instead use alternatives like \"used\", \"utilized\", \"evaluated/experiments on\", etc.\n- The explanations of some components in the proposed method, particularly in the sections on \"Graph Vocabulary Learning\" and \"GNN Replication with LLMs,\" are overly detailed, which can detract from the paper's fluency. The authors should consider summarizing these sections to better highlight the main contributions. Also, the detailed explanations can be moved to an appendix or supplementary material.\n\n\n[1] Hao Liu, Jiarui Feng, Lecheng Kong, Ningyue Liang, Dacheng Tao, Yixin Chen, & Muhan Zhang (2024). One For All: Towards Training One Graph Model For All Classification Tasks. In The Twelfth International Conference on Learning Representations.\n\n[2] Sun, X., Cheng, H., Li, J., Liu, B., & Guan, J. (2023). All in One: Multi-Task Prompting for Graph Neural Networks. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 2120–2131). Association for Computing Machinery.\n\n[3] Bahare Fatemi, Jonathan Halcrow, & Bryan Perozzi (2024). Talk like a Graph: Encoding Graphs for Large Language Models. In The Twelfth International Conference on Learning Representations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How the node is mapping into the LLM's vocabulary (eq5)? How many tokens are need for each node?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "It is a good idea to prompt the LLM to simulate message passing in a GNN. The design of feature transformation, message passing, and message aggregation all sound reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a graph foundation model, PromptGFM. Specifically, it includes a graph understanding module where the LLM is prompted to perform 'message passing,' similar to that in GNNs. Additionally, there is a Graph Inference Module, in which each node in the graph is mapped to text tokens, ensuring expressiveness, transferability, and scalability"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I am not convinced that this qualifies as a graph foundation model. It is an interesting exploration of how to integrate LLMs and GNNs. However, from a methodological perspective, since PromptGFM prompts the LLM to simulate message passing, it requires text-only attributes and is unable to handle non-textual graphs. In the experiments, the inter-domain ability was demonstrated by transferring from Cora/Citeseer/Arxiv to PubMed. However, this is not a typical inter-domain setting, as they are all citation graphs. There might be shared patterns within citation graphs that contribute to the observed 'inter-domain' ability. I would need more experiments to be convinced of the 'inter-domain' ability.\n- Lack of strong baseline models. Table 1 didn't include those strong baseline modesl, such as TAPE[1].\n\n[1] He, Xiaoxin, Xavier Bresson, Thomas Laurent, Adam Perold, Yann LeCun, and Bryan Hooi. \"Harnessing explanations: Llm-to-lm interpreter for enhanced text-attributed graph representation learning.\" arXiv preprint arXiv:2305.19523 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Are node labels in the NC task encoded via text or somehow else? \n- Are LLMs asked to predict a text label of a node label or select one of K options? \n- How many negative samples were used in the LP task for PromptGFM and GNN baselines? \n- What is the size of T5 fine-tuned on the NC and LP tasks (there are many options)? \n- Was it a full fine-tune or LoRA?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "S1. Intra- and inter-domain transferability experiments might be of certain interest."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces PromptGFM, an LLM-based approach to perform node classification and link prediction on text-attributed graphs (TAGs) only. First, PromptGFM uses one LLM to summarize textual node features (like a title and abstract of a paper) into a “Language-based ID” string. Second, another LLM is fine-tuned on prompts with this textual node ID and a sub-sample of IDs of its one-hop neighbors to “perform neighbor aggregation” and predict a label for node classification or node ID for link prediction."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**W1.** The paper completely lacks theoretical and experimental support for its elaborated claims such as “meticulously design a series of prompts to align with the GNN workflow at the finest granularity”, “faithfully reproduce the message passing paradigm of GNNs”, “concise yet meaningful representations”, “we capture semantic and structural information through the prompt-based GNN”, “generate expressive representations”.\n\nThere is no formal study, proofs, or experimental analysis of how LLM prompts like “please aggregate the following neighbors” can ever capture the math of message passing or its results. Or how “meaningful” or “expressive” the LLM representations are compared to GNNs (there is a whole body of literature on the theory of GNN expressiveness that might have been of help). Perhaps the biggest mistake in claiming the alignment with GNNs is the fact that GNNs are permutation-invariant models whereas all autoregressive LLMs are by default *permutation-variant*, that is, the result of “Please aggregate <node 1> and <node 2>” is very likely be different from “Please aggregate <node 2> and <node 1>” (at least due to positional encodings which will featurize the nodes differently). Constructing a few prompts and claiming “faithful reproduction” without any formal study or theoretical guarantees is not enough to sell such claims.\n\nSimilarly, claiming in the ablations (Section 5.3) that PromptGFM suffers from over-smoothing after 4 “message-passing layer” prompting steps has no theoretical or experimental evidence - since there is no notion of a discrete node in PromptGFM (a node is represented with several tokens from the LLM vocab), then what is getting oversmoothed? There are rather rigorous mathematical analyses of oversmoothing [1,2] that measure the distance between node representations of different layers - it would require evidence of a similar phenomenon in scattered tokenized LLM representations to claim oversmoothing in this case. \n\n[1] Oono, Suzuki. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. ICLR 2020\n[2] Southern, Di Giovanni, et al. Understanding Virtual Nodes: Oversmoothing, Oversquashing, and Node Heterogeneity\n\n**W2.** The paper largely oversells its technical contributions, namely, the Graph Understanding Module that replicates message passing (see **W1** why it does not, there is no evidence of such replication); and the universal graph vocabulary which works “across all the graphs and tasks” - in fact, PromptGFM does not propose any new vocabulary for encoding graphs and just relies on the existing LLM token vocabularies and textual descriptions of nodes. If input graphs do not have textual features (a common case for non-citation graphs), PromptGFM appears to be of questionable value as a graph foundation model. The paper repeatedly claims that “existing methods treat nodes as OOV tokens” whereas the vast majority of “LLMs for graphs” approaches (including compared OFA or GraphText) do exactly the same as PromptGFM and use textual node features as part of an LLM prompt.\n\n**W3**. The experimental agenda is rather underwhelming and raises a lot of questions about the practical applicability of PromptGFM. 
\n\n* Only 4 standard citation datasets for node classification (NC) and link prediction (LP);\n* Link prediction experiments employ GNN baselines unsuited for this task - the authors are aware of the benchmark by Chen et al, 2024 which consists of 20 node/link/graph-level tasks and used much stronger baselines like BUDDY for link prediction;\n* Comparing billion-sized components of PromptGFM (GPT 3.5 Turbo for Language Node IDs + fine-tuned T5 for actual tasks) which need several GPUs and high monetary inference costs even for small standard graph datasets like Cora vs much smaller GNNs (often 100-1000x smaller) that run for free even on CPUs presents quite a myopic and biased perspective on the advantages of LLMs for graph learning tasks;\n* It is hard to quantify the importance of reported results when many important experimental details are missing. Are node labels in the NC task encoded via text or somehow else? Are LLMs asked to predict a text label of a node label or select one of K options? How many negative samples were used in the LP task? What is the size of T5 fine-tuned on the NC and LP tasks (there are many options)? Was it a full fine-tune or LoRA?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Novel attempt at LLM-GNN unification using natural language as a graph vocabulary.\n2. Promising results on basic benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents PromptGFM, an approach for integrating LLMs with GNNs by developing a language-based graph vocabulary. It aims to resolve limitations in current GNN-LLM architectures and demonstrates competitive performance in node classification and link prediction."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper claims applicability to all graph types, though it only demonstrates effectiveness on text-attributed graphs.\n2. Lacks evidence that the method generalizes to non-textual graphs, which is critical given the claim of a \"universal\" graph model.\n3. How does the model handle graphs without inherent text attributes?\n4. Can the authors provide clarity on novel contributions beyond combining existing techniques?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper presents a graph foundation model grounded in graph vocabulary learning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024llm,\ntitle={{LLM} as {GNN}: Graph Vocabulary Learning for Graph Foundation Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1FY1apsMxc},\nnote={under review}\n}"
},
"abstract": {
"value": "Graphs typically exhibit distinctive structure and domain-specific knowledge, motivating the development of a Graph Foundation Model (GFM) capable of generalizing across various graphs and tasks. While recent efforts have focused on combining the strengths of Large Language Models (LLMs) and Graph Neural Networks (GNNs), they often struggle to maximize mutual benefit due to the decoupled architectures. Moreover, existing methods assign out-of-vocabulary (OOV) tokens to nodes, which are incompatible with the natural language vocabulary for task-oriented prompt generation, hindering knowledge transfer in GFM. In this paper, we introduce PromptGFM, a versatile GFM grounded in graph vocabulary learning, comprising two key components: (1) Graph Understanding Module, which explicitly replicates the finest GNN workflow in the language space using LLMs, enabling seamless GNN-LLM integration and elegant graph-text alignment; (2) Graph Inference Module, where we establish a novel language-based graph vocabulary to ensure expressiveness, transferability, and scalability. This vocabulary enables the generation of readable instructions for LLM inference, resolving modality incompatibility and facilitating positive transfer. Extensive experiments demonstrate the superiority of PromptGFM in node classification and link prediction, along with its strong transferability across different datasets and tasks. The code is available at \\url{https://anonymous.4open.science/r/PromptGFM}."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language model",
"foundation model",
"graph neural networks"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/89f7a2a4946eba694ab04868ef8cdcb47f73f8b1.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "LLM as GNN: Graph Vocabulary Learning for Graph Foundation Model"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Ffzgglq2I | Binary Reward Labeling: Bridging Offline Preference and Reward-Based Reinforcement Learning | main | Active | Preference based reinforcement learning; Offline reinforcement learning | reinforcement learning | 3;3;3;5 | 3;2;4;4 | 3;2;2;2 | 3;2;2;2 | 2;1;2;2 | 3.5 | 3.25 | 2.25 | 2.25 | 1.75 | 0.522233 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the Weakness section."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper introduces a simple and unique method for translating preference feedback into a format that can be used by standard offline RL algorithms, which is a significant step forward in the field of PBRL.\n\n- The authors provide a theoretical analysis that connects their framework with existing PBRL techniques, providing an interesting point of view and adding depth to the understanding of how preference information can be utilized in RL."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel framework aimed at bridging the gap between offline preference-based reinforcement learning (PBRL) and standard offline reward-based reinforcement learning (RL). The authors propose a method called Binary Reward Labeling (BRL), which transforms preference feedback into scalar rewards, allowing the application of any reward-based offline RL algorithm to datasets with reward labels. The key insight is simply relabel the reward function with $\\pm 1$ using preference labels. The paper provides theoretical connections between PBRL techniques and the proposed framework combined with specific offline RL algorithms. Empirical tests on preference datasets based on the D4RL benchmark demonstrate that the framework's performance is comparable to training on datasets with actual rewards and superior to recent PBRL baselines in many cases."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper suffers from poor writing quality and formatting issues, which detract from the overall presentation and readability. For example, in Definition 4.2, there should be a period after \"reward modeling in model-based approaches,\" and the comma should not appear at the start of a line. The subtitle \"Offline standard RL algorithms are model-based.\" in Section 4.2 can be misleading.\n\n- The soundness of the proposed method is questionable. While the $\\pm 1$ reward labeling is theoretically correct, it is usually not a good choice to overfit the preference dataset. Having a more rigorous analysis under the function approximation scenario would be nice.\n\n- The paper needs some benchmarks and baselines to validate the effectiveness of the proposed method. For benchmarks, The D4RL benchmark is known to be insensitive to the accuracy of the reward function [1], and adding benchmarks like Meta-World would greatly strengthen the paper. Also, there are some recent works on offline PbRL that have a strong performance, like [2,3], and BRL should be compared with them.\n\n\nReferences\n\n[1] Li, Anqi, et al. \"Survival instinct in offline reinforcement learning.\" Advances in neural information processing systems 36 (2024).\n\n[2] Kim, Changyeon, et al. \"Preference transformer: Modeling human preferences using transformers for rl.\" arXiv preprint arXiv:2303.00957 (2023).\n\n[3] Zhang, Zhilong, et al. \"Flow to better: Offline preference-based reinforcement learning via preferred trajectory generation.\" The Twelfth International Conference on Learning Representations. 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can you evaluate your algorithms on various domains? For example, Antmaze, Kitichen and Adroit?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This work investigate an important problem and conduct the theoretical analysis for the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This manuscript introduces a novel framework aimed at addressing the challenge of transferring knowledge from reward-based to preference-based offline reinforcement learning (PBRL). The authors highlight that while offline RL has gained practical significance, most research has been limited to scalar reward feedback, leaving a gap in understanding how to apply offline RL techniques to preference-based settings. The proposed solution involves converting preference feedback into scalar rewards through binary reward labeling (BRL), which allows the application of any reward-based offline RL algorithms to datasets with these labels. This approach minimizes information loss during the transition from preference to scalar rewards. The paper establishes theoretical connections between recent PBRL techniques and the proposed framework when combined with specific offline RL algorithms, suggesting that the framework can yield new and more efficient offline PBRL algorithms. Empirical tests on preference datasets from the D4RL benchmark demonstrate that the framework's performance, when combined with various efficient reward-based offline RL algorithms, is often comparable to training on datasets with actual rewards and superior to recent PBRL baselines in most cases."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing of this work is not good. I get confused for many spaces. What is link function? What is link-loss function? The writing of Section 4.1 is very confusing and incomprehensible. The pseudocode is too concise.\n\n2. Missing a lot of baseline algorithms. For example, OPRL [1] and PT [2].\n\n[1] Shin, Daniel, Anca D. Dragan, and Daniel S. Brown. \"Benchmarks and algorithms for offline preference-based reward learning.\" arXiv preprint arXiv:2301.01392 (2023).\n\n[2] Kim, Changyeon, et al. \"Preference transformer: Modeling human preferences using transformers for rl.\" arXiv preprint arXiv:2303.00957 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Why does the binary reward outperform BT model? Will the empirical results still hold in more complex tasks such as Meta-World?\n2. How do baseline methods such as Preference Transformer perform on the benchmarks?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The problem of reward labeling from preference labels is a fundamental challenge in offline PBRL.\n2. The performance improvement is impressive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper discusses the problem of acquiring a reward function from offline preference datasets. The authors claim that binary reward labelling is sufficient for solving this problem. Results on D4RL demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Presentation is poor. The citations are poorly formatted and hard to read.\n2. Lack of discussion about comparison against the commonly used BT reward model. The contribution is poorly justified.\n3. The authors claim that \"For the baseline methods, to the best of our knowledge, no existing empirical study works in exactly\nthe standard offline PBRL setting considered in our work\". However, there have been massive studies on offline preference-based RL, such as PreferenceTransformer (https://arxiv.org/pdf/2303.00957) and OPRL (https://arxiv.org/pdf/2301.01392) and can be readily adopted into the experiment framework.\n4. (https://proceedings.neurips.cc/paper_files/paper/2023/file/c3e969ea20542a6a11e6caeac736a0b9-Paper-Conference.pdf) reveals that D4RL tasks are not sensitive to reward labels. So the empirical results may not be convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1) Please authors further clarify the motivation of this paper. (This is the main question)\n\n2) How does the algorithm perform in cases where trajectories overlap and labels are inconsistent? The author could discuss how their theoretical results might extend to or be limited by scenarios with overlapping trajectories.\n\n3) What are the advantages of the binary-encoding-based reward model compared to the traditional reward model?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) Theoretical: In the case of non-overlapping trajectories, the relationship between the binary-encoding-based reward model and the traditional reward model is established.\n\n2) Experimental: The performance of the algorithm is simulated under both overlapping and non-overlapping trajectory scenarios."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a binary-encoding-based reward model learning method for preference-based reinforcement learning. The method demonstrates superior performance in both overlapping and non-overlapping trajectory scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Writing: The sections on related work and theoretical foundations are overly redundant. Some statements, particularly in the introduction, are inaccurately expressed. For example, current offline PbRL methods primarily focus on reward model learning, rather than on the policy learning aspect itself. For example, in lines 47-49 and 72-76 of the paper.\n\n2) Motivation: The motivation of the paper is unclear. The authors state that the main goal is to develop a framework to bridge the gap between PbRL and standard RL, allowing a standard offline RL algorithm to address the PbRL problem. However, the primary motivation behind PbRL is to resolve the challenge of setting rewards in standard RL. The difficulty in PbRL lies in accurately learning rewards from human preferences, which is not a problem that standard offline RL addresses. The author could approach this from the perspective of overlapping (or similar) trajectories and inconsistent labels, which might lead to a more effective explanation.\n\n3) Theory: Theoretical 4.5 only considers the case of non-overlapping trajectories and does not account for the scenario of overlapping trajectories with inconsistent labels.\n\n4) Experiments: The dataset is limited, with experiments conducted solely in the mujoco tasks. The paper does not compare results with cutting-edge PbRL methods, such as PT ( Preference transformer: Modeling human preferences using transformers for rl)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024binary,\ntitle={Binary Reward Labeling: Bridging Offline Preference and Reward-Based Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Ffzgglq2I},\nnote={under review}\n}"
},
"abstract": {
"value": "Offline reinforcement learning has become one of the most practical RL settings. However, most existing works on offline RL focus on the standard setting with scalar reward feedback. It remains unknown how to universally transfer the existing rich understanding of offline RL from the reward-based to the preference-based setting. In this work, we propose a general framework to bridge this gap. Our key insight is transforming preference feedback to scalar rewards via binary reward labeling (BRL), and then any reward-based offline RL algorithms can be applied to the dataset with the reward labels. The information loss during the feedback signal transition is minimized with binary reward labeling in the practical learning scenarios. We theoretically show the connection between several recent PBRL techniques and our framework combined with specific offline RL algorithms. By combining reward labeling with different algorithms, our framework can lead to new and potentially more efficient offline PBRL algorithms. We empirically test our framework on preference datasets based on the standard D4RL benchmark. When combined with a variety of efficient reward-based offline RL algorithms, the learning result achieved under our framework is comparable to training the same algorithm on the dataset with actual rewards in many cases and better than the recent PBRL baselines in most cases."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Preference based reinforcement learning; Offline reinforcement learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9d707d7297f2999fedb344ecb95199dc5acab694.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/be2e2fc3e795b5acc9e3f594e53b4f9762df8e25.zip"
},
"title": {
"value": "Binary Reward Labeling: Bridging Offline Preference and Reward-Based Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1FiMrJxPAM | A Super-Aligned Driving Generalist Is Your Cockpit | main | Active | Driving Cockpit; Super alined; Driving Generalist | applications to robotics, autonomy, planning | 1;3;3;5;5;6 | 4;4;3;3;5;3 | 1;3;2;3;2;2 | 2;2;3;2;3;2 | 1;2;1;3;2;2 | 3.833333 | 3.666667 | 2.166667 | 2.333333 | 1.833333 | -0.177998 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "Not applicable"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The novelty of the contribution.\n2. In Section 4.3, the model uses visual tokenization of physiological signals to infer emotions. How does the model account for individual differences in physiological baselines or external factors (e.g., physical activity, environmental conditions) that might affect these signals?\n3. Why was ResNet18 chosen over more advanced models like ResNet50, EfficientNet, or transformer-based architectures? Did you conduct any initial tests with these models?\n4. Would you consider performing ablation studies comparing ResNet18 with more powerful feature extractors to evaluate improvements in capturing behavioral and environmental nuances?\n5. How does ResNet18 perform in capturing temporal dependencies in sequential data, particularly for tasks requiring context awareness over time, such as fatigue tracking?\n6. Given the current use of a concatenation-based fusion approach, have you explored other fusion techniques, such as attention-based fusion or cross-attention mechanisms, to maximize the complementary data from RGB, NIR, and depth inputs? Have you considered ablation studies to evaluate the impact of each modality independently?\n7. How does the model handle or prioritize input from non-verbal cues compared to language-based cues in dynamic driving contexts?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Sage Deer integrates multi-modal and multi-view data sources, combining RGB, NIR, and depth cameras to achieve a highly adaptive and personalized intelligent cockpit system. The model’s use of Retrieval-Augmented Generation (RAG) allows it to pull relevant context-specific information from external sources, enhancing the system’s real-time responsiveness and ability to deliver highly accurate, personalized interactions aligned with individual driver preferences. This capacity for personalization goes beyond standard cockpit systems, as Sage Deer monitors physiological, emotional, and behavioural states to tailor responses to the driver's unique profile, significantly boosting both user engagement and safety.\n\nThe fusion of diverse sensor data enables Sage Deer to accurately perceive and interpret complex, dynamic conditions within and outside the vehicle, making it capable of maintaining performance under varying lighting and environmental scenarios. Its robust, real-time capabilities show substantial potential for practical applications in ADAS, offering intelligent, responsive support that adapts continuously to real-world challenges. Sage Deer’s architecture sets a new standard for intelligent cockpit systems, bringing together advanced AI components to enhance driver experience and overall vehicle safety in ways that align with the evolving demands of autonomous and semi-autonomous vehicles."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Sage Deer, a multi-modal, multi-view framework for intelligent driving cockpits, designed to provide personalized, context-aware assistance. It integrates RGB, NIR, and depth cameras, and captures diverse data on driver states such as physiology, emotion, and behaviour which enables comprehensive monitoring and real-time response. This data is processed through a language model, allowing for nuanced comprehension and interaction capabilities.\n\nThe system’s architecture relies on three core components: retrieval-augmented generation (RAG), multi-modal fusion, and expert knowledge incorporation. RAG allows Sage Deer to retrieve relevant external information, tailoring responses to user preferences. Multi-modal fusion combines data from various camera views, enhancing the model's understanding of the environment and driver states. Expert knowledge fusion further refines Sage Deer’s outputs by integrating specialized insights into physiological and emotional monitoring, optimizing its response relevance and accuracy.\n\nExperimental results demonstrate Sage Deer’s effectiveness in multitasking and adapting to diverse user needs, providing a benchmark for intelligent cockpit design. By aligning AI capabilities with user-centered safety requirements, Sage Deer advances the potential of personalized driver assistance systems, positioning itself as a foundational technology for future ADAS applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper needs improvement in writing. There are mistakes and citation errors. See at the end of this message.\n\nThe main issue is the novelty. It seems to combine multiple models to improve intelligent driving. This contribution is not good enough for a conference like ICLR.\n\nIn (Section 4.3), the authors mentioned that the model relies on visual tokenization of physiological data, such as heart rate and blood oxygen levels, to infer emotional or behavioural states. However, this approach assumes direct correlations with emotions, potentially leading to inaccuracies. While the authors have cited studies in psychophysiology suggesting links between physiological signals and emotions, the real-world application requires greater nuance. Authors should discuss impact on signals due to factors like individual baselines, environmental conditions, and physical activity.\n\nThe model’s use of a pre-trained ResNet18 for tokenizing RGB, NIR, and depth inputs may lack the capacity to capture the complex nuances needed for an intelligent cockpit system. To address this, the authors should conduct ablation studies comparing ResNet18 with advanced models like ResNet50, EfficientNet, ViT, and Swin Transformer to assess improvements in accuracy and robustness. Additionally, the current concatenation-based fusion strategy may underutilize the complementary data from multi-modal inputs. Testing different fusion techniques, such as attention-based and cross-attention methods, could identify more effective integration approaches. Further analysis of each modality’s impact would clarify the significance of RGB, NIR, and depth data, while transformer-based models could improve temporal understanding for tasks like fatigue tracking.\n\nThe reliance on a language model for contextual understanding may oversimplify dynamic driving scenarios, missing essential non-verbal cues for real-time safety. Ablation studies could address this by comparing language-only input to multi-modal input (e.g., visual, physiological, behavioural data) to assess non-verbal contributions to accuracy in safety-critical tasks. Testing each modality individually would highlight their impact while comparing the language model with and without RAG would clarify RAG’s role in context accuracy.\n\nWriting:\nLine 37: He -> it\nLine 50: s possess?\nLine 53: with s?\nLine 61: reference a?\nLine 66: repeat of Sima et al.\nLine 188: Beachmarking -> Benchmarking"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See weaknesses. Please provide more details as much as possible."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is the first to construct a unified dataset for MLLMs in intelligent driving cockpit. A multi-task dataset is provided and an MLLM is trained on the dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to build a super-aligned and generalist driving agent, called sage deer for the intelligent driving cockpit. A new dataset is constructed for many tasks, e.g., physiological estimation, emotional estimation, gesture estimation, body motion estimation, driving behavior estimation, driving behavior detection, and driving decision-making. An MLLM is trained for unified tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The dataset construction part is extremely lacking in details, including data curation, GPT4 labeling, etc. The paper states in many places that the details are in supplementary materials, but the supplementary materials have not been submitted. Besides, the contribution of \"An intelligent driving cockpit super alignment evaluation protocol involving generalization ability for different needs was established\" cannot be well-established in the paper.\n- The qualitative results are very limited. Only in Fig. 4, some conversations are provided, and from this figure, we cannot know the full ability of the model.\n- The writing of the paper was very hasty. many sentences are not clear and typos are everywhere, e.g., \"serves as the interface for human interaction with s\" in L053, and \"Tokenizing Multi-Model\" in L257."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Strength:\n\n-\tSeamlessly integrates data from various sensors (RGB, NIR, depth cameras) and multiple perspectives, enabling comprehensive environmental understanding.\n\n-\tEmploys a unique mechanism that combines an updatable knowledge base with a large language model, enabling contextually relevant responses without extensive fine-tuning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This research presents \"Sage Deer,\" an innovative super-aligned and generalist driving agent designed to enhance intelligent cockpit systems. The proposed framework addresses the challenges of personalized driving experiences and comprehensive omni-modal information processing through several key innovations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weakness:\n-\tThe authors should provide more comprehensive information about the model architecture, including specifics such as the choice of LLM with its size and so on.\n\n-\tIn Figure 3, a \"Pre-trained Video Encoder\" is depicted, whereas Section 4.2 mentions the use of an \"ImageNet pre-trained ResNet18.\" Are these referring to the same component? Additionally, how does this encoder handle other modalities? Lastly, how many tokens does the encoder output? Providing more detailed explanations would enhance understanding.\n\n-\tIn Section 4.1, the author introduces specific start and end symbols to denote different modalities. Are these symbols newly added special tokens to the LLM's vocabulary? If so, how are these tokens initialized? Since the LLM remains frozen and is not further trained, how does the pretrained model recognize these new tokens?\n\n-\tIn Section 5.2, the maximum sentence length is set to 64. How was this value determined? Since text sentences are processed by a tokenizer, why not base this parameter on the number of tokens instead? Were any experiments conducted to evaluate the impact of this choice on performance or the training and inference computational budget?\n\n-\tThe sequence of tables and figures should be adjusted for consistency. For instance, Table 2 is only mentioned in Section 5.5, while Tables 3 and 4 are referenced earlier in the document before Table 2.\n\n-\tThe manuscript requires improved writing quality, as numerous typographical errors are present. For example, on line 414, \"model\" should be corrected to \"figure,\" and on line 261, a space is needed between the text and the equation.\n\n\nThe manuscript currently contains several typographical and writing errors, as well as some missing details, which is not ready for submission. I believe it would benefit from further revisions to address these issues and ensure it meets the standards required for submission to ICLR."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I recommend this paper be flagged for an ethics review due to concerns related to the proposed dataset, which includes human subjects. Key ethical considerations include:\n\n1) The dataset's inclusion of human data raises concerns about compliance with copyright laws, data protection standards, and consent protocols under regulations like GDPR.\n\n2) The use of human data requires careful consideration of ethical research practices, including whether informed consent was obtained, how the data will be stored, and the responsible handling and potential release of this data.\n\nTo ensure an ethically sound review, an ethics reviewer with expertise in privacy, legal compliance, and responsible research practices would be most suitable."
},
"flag_for_ethics_review": {
"value": [
"Yes, Legal compliance (e.g., GDPR, copyright, terms of use)",
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses section.\n\nAdditionally, real-world dataset construction rarely captures abnormal behaviors. How, then, does training on the proposed dataset support effective human behavior anomaly detection?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The motivation to combine 3D scene perception with both visual and physiological signals from humans is clear and compelling. However, an ablation study on each signal and modality would enhance understanding of their individual contributions.\n2. A dataset is established, incorporating multi-modal, multi-view sensor data along with QA pairs for evaluating the LLM’s scene understanding and reasoning capabilities.\n3. The proposed method shows improved overall performance on the provided dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a multi-modal LLM designed for human and scene understanding in autonomous driving. It integrates multi-view image and multi-modal inputs, using a retrieval-augmented generation strategy to improve test-time adaptation. For evaluation, a multi-modal, multi-view dataset for driving perception is introduced. The proposed method outperforms standard multi-modal LLM models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Overall, I regret to say that this submission appears of low quality, with numerous errors suggesting it was submitted without thorough proofreading—a potential disservice to the reviewers.\n\n1. There are numerous typographical and formatting errors, including typos, incorrect notations, capitalization issues, and incomplete sentences. Examples include:\n - L050: \"technologies, s possess\"\n - L053: '\"with s,\"'\n - L208: \"supplement the captain\"\n - L261, L265: \"the language spaceemface ∈ C × L.\", \"emrgb ∈ C × L, < RGB bos > emrgb < RGB cos >.\"\n - L271, L277: \"emfront ∈ C × L,\"\n - L288: \"framework ash shown in Fig. 3\"\n - L307: \"The Relationship Between Physiological State and Emotion: Classical\"\n - L315-L317: \"other tasks, including: The Relationship Between Physiological State and Behavior…\" (repeated thrice)\n\n2. The proposed method lacks novelty, as it is essentially a multimodal LLM with RAG, without any specific design tailored for the target task. Additionally, key methodological details, such as training strategies, specific model architectures, and hyperparameters, are missing.\n\n3. Experimental analysis is limited. In-depth experimentation and analysis are needed to substantiate the claimed benefits of using a multimodal approach.\n\n4. The dataset setup is unclear. Since the captions are generated by open-source VLMs, please clarify the measures taken to ensure their quality.\n\n5. The related work citations do not consistently support the claims made. For instance, L308 references \"Classical studies in psychophysiology (e.g., James-Lange, Schachter-Singer)…” without sufficient context.\n\n6. The appendix section is empty. Please remove the placeholder text: \"You may include other additional sections here.\"\n\n7. Finally, as the dataset includes human subjects, please provide an ethics statement to address concerns regarding its use."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The concept of Sage Deer as a super-aligned, generalist agent offers a fresh approach to intelligent cockpit systems, adapting in real-time to individual user preferences.\n2. The tailored application of a retrieval-augmented generation framework for the driving domain is a notable contribution, enabling efficient and adaptive responses to evolving user needs.\n3. The development of a large-scale benchmark using a variety of datasets (AIDE, DMD, and others) to assess the system's decision-making and perception capabilities adds rigor and depth to the system’s evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces \"Sage Deer,\" an intelligent driving cockpit system aimed at meeting personalized user needs through a multi-modal framework and a retrieval-augmented generation mechanism."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. How are the various inputs (e.g., visual, physiological) integrated to influence real-time driving decisions? \n2. Could the paper delve deeper into how user interactions are managed, especially in complex scenarios? Are there any limitations to the system’s ability to interpret nuanced or less common user behaviors?\n3. There are a few errors: for example, the purpose of \"s\" in lines 50 and 53 is unclear, “Out View Caption” is duplicated in Figure 2, and “Accurate labele” contains a spelling error."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1、Can you explain why the paper chose ResNet18 as the pre-trained model instead of a more powerful option?\n2、With multimodal data and personalized preferences present, how does RAG decide which information to prioritize for retrieval? Is there a specific prioritization or weighting system?\n3、Could Sage Deer be compared more thoroughly with other recent intelligent driving agents, like DriveGPT or DriveLM, to provide a deeper understanding of its performance?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1、The paper introduce an interesting problem, emphasizing the importance of individual preferences in enhancing the driving experience.\n2、The RAG framework addresses the need for flexible, personalized responses without extensive model fine-tuning.\n3、The proposed method was evaluated on multiple driving datasets for its generalist and super-aligned performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to leverage multimodal data and LLMs to understand driver physiology, emotions, and behaviors in real-time. The authors use a RAG framework combined with expert knowledge integration to provide personalized feedback."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1、The paper shows limited novelty, like RAG, are pre-existing approaches.\n2、The \"Expert Knowledge Fusion\" section isn’t clearly explained. Adding pseudocode or a flowchart could make it easier to follow.\n3、The paper lacks ablation studies to verify the effectiveness of individual modules, such as physiological indicators and expert knowledge fusion."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A Super-Aligned Driving Generalist Is Your Cockpit},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1FiMrJxPAM},\nnote={under review}\n}"
},
"abstract": {
"value": "The intelligent driving cockpit, an important part of intelligent driving, needs to match different users' comfort, interaction, and safety needs. This paper aims to build a \\textbf{s}uper-\\textbf{a}ligned and \\textbf{ge}neralist \\textbf{dr}iving agent, \\textbf{sage deer}. Sage Deer achieves two highlights: (1) Super alignment: It achieves different reactions according to different people's preferences and biases. (2) Generalist: It can understand the user's physiological indicators, facial emotions, hand movements, body movements, driving scenarios, and behavioral decisions. (3) Multimodal: He can understand RGB, NIR, and depth video to build more robust perception, understanding, and reasoning. To achieve the above requirements, we design retrieval-enhanced multimodal frameworks. We collected multiple data sets and built a large-scale benchmark. This benchmark measures the sage deer's perceptual decision-making ability and the super alignment's accuracy."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Driving Cockpit; Super alined; Driving Generalist"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c864c2a67fe5fa01faaa1498a975f8671c0af9e0.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "A Super-Aligned Driving Generalist Is Your Cockpit"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1GIVx7COef | Event-aided Dense and Continuous Point Tracking | main | Active | event camera;dense point tracking;continuous motion;motion representation | applications to computer vision, audio, language, and other modalities | 3;5;5 | 4;4;4 | 2;2;2 | 2;2;2 | 1;3;2 | 4.333333 | 4 | 2 | 2 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Please explain how the events are used for the local correlation construction and how the M^{local} is computed.\n\n- Please explain what are the warp and fusion operations in (1).\n\n- Please explain how the global motion representations M^{global} are used.\n\n- Please explain how events are obtained for the training data."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- While combining event data with RGB data had been used in previous work in optical flow, this is the first work using event data for long-range point tracking.\n\n- The authors conduct experimental evaluations on standard point-tracking benchmarks and report SOTA results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a model for dense point tracking from a multimodal input consisting of RGB frames captured by a standard shutter camera and events coming from an event camera.\n\nIn order to represent the motion, the method proposes to parametrize point trajectories with B-Splines, predicting therefore the control points {P_i}_i=1...Nc to recover the curves {T_t}.\n\nTheir proposed \"local motion etimation\" model operates on pairs of adjacent frames I_t and I_{t+1} as well as the events happening between these adjacent frames E_{t->t+1}; and predicts local trajectories {T_{t->t+1}} between these adjacent frames. Then, these trajectories are sequentially combined to obtain {T_{1->t}}, a process which is sometimes aided by the current global motion representation M^{global} in case of occlusions (eq. 1). This global motion representation M^{global} is iteratively updated using the local motion representations M^{local} extracted by the \"local motion estimation\" module.\n\nThe model is trained on the synthetic MOVI-F dataset with 10k 7-frame videos during 500k steps, and evaluated on CVO-test and TAPVid-DAVIS. For the evaluation datasets, events are simulated using vid2e.\n\nQuantitative results show the proposed model can obtain SOTA performance on TAPVid-DAVIS and CVO point-tracking benchmarks, as well as in the DSEC optical flow leaderboard.\n\nThe authors also present ablation experiments for their global motion aggregation, curve representation and input data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The method is not fully understandable nor reproducible with the details given. Figure 1 gives the reader the best guess about how the method works, but it is not really clear how the events are processed by the model, how the local motion representations are obtained, what is the trajectory decoder (L247) and how the global representations are are used for the final trajectory predictions.\n\n- It's not clear how the events are obtained for the synthetic MOVI-F training data.\n\n- Overall the paper is poorly written and difficult to understand. There are errors that show it was not carefully proofread, there are organization issues, and there are notation issues. For example, sec 3.1 speaks about the global motion representation without having introduced it. The notation in sec. 3.1 is also difficult to follow. For instance, it's not clear what the \"initial current global trajectory\" T^init_{1->t+1} means and how it is used, as it doesn't appear in any equation. There are also no details about that the Warp and Fusion operations in eq (1) are."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How does the framework handle sparsity or noise in the event data? Since real-world event cameras often produce sparse or noisy data, it would be valuable to understand the robustness of the proposed method in these conditions.\n- Some parts of the article are a bit vague and need more explanation. For example, the paper mentions an occlusion handling strategy but lacks quantitative evidence of its effectiveness. Could the authors provide more information on how occlusions are managed and evaluated?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The method addresses a key limitation in point tracking by integrating event cameras, effectively combining their high temporal sensitivity with the spatial information of traditional video.\n- The proposed multi-frame iterative streaming process for motion aggregation is well-designed and enables the model to adapt to variable video lengths.\n- The paper provides comprehensive experiments on simulated and real-world datasets. The results convincingly demonstrate the advantages of using event data for fine-grained and continuous tracking, with significant improvements over baseline methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel event-aided dense and continuous point tracking framework (EDCPT) that integrates the strengths of both image and event data to achieve high-resolution motion tracking in video sequences. The method proposes a multi-frame aggregation strategy for dense point tracking, leveraging event cameras to address temporal limitations in conventional video data. Through this approach, EDCPT can capture temporally continuous point trajectories, which is validated by experiments showing significant performance gains over existing state-of-the-art methods in dense tracking tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The reliance on event data limits the framework's flexibility, as it may not perform optimally without event camera input. This restricts its applicability to setups where event cameras are available.\n- Some technical assumptions are not fully supported by the results. For instance, while the multi-frame aggregation is shown to improve performance, there is limited analysis of its specific contribution compared to simpler aggregation techniques.\n- The framework’s computational cost, especially given the use of multi-frame input and dense tracking, could make it challenging for use in real-time applications, which is not fully addressed in the paper.\n- The EDCPT framework is computationally demanding, limiting its real-time applicability in scenarios where immediate results are required. Additionally, its reliance on event cameras restricts its use to specific hardware configurations, reducing its flexibility. While the proposed method is validated on benchmark datasets, further testing in a broader range of real-world applications would strengthen the claims of generalizability. Finally, the integration of event data introduces complexity in the framework, which may pose challenges in deployment and necessitate robust calibration and setup procedures for optimal performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The author needs to emphasize the distinct aspects of the framework compared to existing methods beyond merely adding events. Currently, these differences are hard to identify. \n\n- Demonstrating the framework’s effectiveness through computational cost analysis would support that it’s more than just a parameter-heavy approach and instead an efficient method.\n\n- While real-world experiments would be ideal, it’s understandable that this may be infeasible within the given timeframe."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The writing in this paper is very straightforward, making it easy to read even for those new to events or point tracking. The structure is well-composed, with sufficient references, which is commendable.\n\n- Another strength is in the evaluation on multiple datasets—not only on synthetic ones but also on real-world event datasets. This aligns with the evaluation protocols of many prior studies, which adds credibility.\n\n- Various supplementary materials and extensive experimental data also enhance the paper's quality."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper performs dense point tracking to estimate large motion in long video sequences using events. To solve this task effectively, a multi-frame iterative framework is proposed, which estimates inter-frame motion and uses an aggregation method to estimate global motion. This approach was evaluated on multiple datasets, highlighting its strengths and opening a new field in motion estimation for event cameras."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The novelty of the proposed method is difficult to discern. It appears to be a straightforward adaptation of prior point tracking methods with event stacking. For instance, compared to methods like FlowTrack or DOT, it’s challenging to see any distinctive or novel aspects. Specifically, the approach of estimating local motion and then accumulating it is an old technique, commonly used in optical flow and dense tracking.\n\n- Another drawback is the lack of an inference time comparison, which is a common benchmark in prior protocols (e.g., in the FlowTrack paper). While comparing all methods on both synthetic and real datasets may be impractical, comparison with some representative studies is essential.\n\n- Since there’s no dedicated event-based dense tracking dataset, the authors rely on synthetic datasets and evaluate event-based optical flow in real-world settings. However, this does not truly reflect event-based dense tracking, which is a significant weakness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024eventaided,\ntitle={Event-aided Dense and Continuous Point Tracking},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1GIVx7COef},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent point tracking methods have made great strides in recovering the trajectories of any point (especially key points) in long video sequences associated with large motions. \nHowever, the spatial and temporal granularity of point trajectories remains constrained by limited motion estimation accuracy and video frame rate. \nLeveraging the high temporal resolution motion sensitivity of event cameras, we introduce event data for the first time to recover spatially dense and temporally continuous trajectories of any point at any time. \nSpecifically, we define the dense and continuous point trajectory representation as estimating multiple control points of curves for each pixel and model the movement of sparse events triggered along continuous point trajectories. \nBuilding on this, we propose a novel multi-frame iterative streaming framework that first estimates local inter-frame motion representations from two consecutive frames and inter-frame events, then aggregates them into a global long-term motion representation to utilize input video and event data with an arbitrary number of frames. \nExtensive experiments on simulated and real-world data demonstrate the significant improvement of our framework over state-of-the-art methods and the crucial role of introducing events for modeling continuous point trajectories."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"event camera",
"dense point tracking",
"continuous motion",
"motion representation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4f010e7f0fe20f823608c7fab8bc19fa0d7d57e4.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e40d29adef01291cd782bf645b71a01b6f986062.zip"
},
"title": {
"value": "Event-aided Dense and Continuous Point Tracking"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1GPN2oa7P7 | ClipGrader: Leveraging Vision-Language Models for Robust Label Quality Assessment in Object Detection | main | Active | label quality;clip;object detection | applications to computer vision, audio, language, and other modalities | 3;3;3;6;6 | 5;5;3;4;5 | 3;2;2;3;3 | 2;1;2;3;3 | 2;3;2;3;4 | 4.2 | 4.4 | 2.6 | 2.2 | 2.8 | 0.102062 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- The process of synthesizing unrealistic bounding boxes needs more clarification. What defines a realistic distribution of incorrect boxes? Even preliminary insights would be valuable.\n- While CLIPGraders shows promise in evaluating pseudo-labels, can it also improve noisy human annotations? A small-scale study exploring this application would strengthen the paper."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The idea to use visual prompts in CLIP to evaluate detection labels is novel\n- The performances with large enough training data sounds strong (low false positive rates)\n- When using CLIP-grader to improve pseudo-labels, the performances improvements persists. It is non-trivial to translate the performances improvements from labels to model performances in a data-centric manner."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to re-purpose CLIP to evaluate object detection label qualities. CLIP is firstly introduced to align image-level semantics and image captions. On the contrary, this papers leverage visual prompt to promote awareness in certain image regions. The experimental results show that the CLIP-grader achieve non-trivial performances even with 1% COCO data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Lack of baselines of CLIP-grader when evaluated recall and false positive rates (Table 1)\n- Lack of deeper analysis of the tail classes and small bounding boxes, which are considered much more important in object detection.\n- Limited zero-shot performances in Sec. 4.2"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Based on the weaknesses, I have the following questions?\n\nCould the proposed CLIPGrader achieves performance gain on other detection datasets rather than COCO/LVIS?\n\nWill the performance gain mostly from CLIP, rather than the proposed strategy? If so, applying a modern light-weight VLM will be better?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "In general, the proposed method is somehow simple, also the author claimed that the proposed method can be helpful in downstream tasks, which can benefit the process of downstream object detection learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces ClipGrader, a novel framework that leverages vision-language models to automatically assess the accuracy of bounding box annotations in object detection datasets. It employs CLIP (Contrastive Language-Image Pre-training) to evaluate both class label correctness and spatial precision of bounding boxes, offering a scalable solution for enhancing annotation quality control. ClipGrader demonstrates high accuracy and robustness, achieving 91% accuracy on COCO with a 1.8% false positive rate, and effectively maintaining performance even when trained on a subset of the data. ClipGrader's can help downstream object detection tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "However, I think the proposed method has severe drawbacks as follows:\n\n1. The proposed method is simple and lack novelty. The author only applies a simple classification strategy and a simple contrastive learning pipeline. It is just a simple application of a pre-trained CLIP model. Neither the author proposes a new paradigm (contrastive learning with multiple positive pairs is a common practice in previous papers, especially object detection paper), nor the author has proposed new arch/algorithm to train a grader.\n\n2. For object detection datasets, the author only shows performances on COCO and LVIS datasets. COCO and LVIS share the same data sources. Then the performance, even the few-shot performance doesn't convince me here. The author could show results on downstream object detection benchmarks rather than the COCO source, for example, OpenImages and some small datasets to verify the effectiveness of the proposed method. i.e., on VOC dataset and some auto-matic driving datasets like video surveillance datasets. \n\n3. For the downstream SSOD teacher, though the CLIPGrader is also trained on 10% of the COCO data, CLIP is trained on multiple data sources, which can not prevent the leakage of the data. The author could find better ways to verify the effectiveness of CLIPGrader, i.e., find the noisy annotations in COCO (with ratio and visualization)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The idea to utilize CLIP to improve some downstream tasks like object detection is interesting, and this paper is overall good. It would be better to add some baseline results for more comprehensive comparison and validate the proposed method on more complex datasets."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-organized and easy to follow, different components are illustrated in detail, making the framework comprehensible and logically structured.\n2. Adapting CLIP to assess bounding box quality is innovative and addresses a real challenge in maintaining large-scale object detection datasets. This repurposing of CLIP as a “grader” rather than a classifier or detector is novel and promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces ClipGrader, a framework for automatically assessing bounding box quality in object detection datasets by adapting CLIP, a vision-language model, to evaluate label accuracy and spatial alignment of bounding boxes. Evaluation on COCO and LVIS datasets demonstrates the effectiveness of ClipGrader."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the ablation studies (Section 4.3) are detailed, adding a quantitative comparison with simpler baseline methods, such as non-CLIP-based label grading techniques or direct confidence-based bounding box assessments, would strengthen the claim of ClipGrader’s superiority.\n2. As mentioned in the introduction (Section 1), widely used datasets such as COCO are subject to label errors and ClipGrader can be used to assess object detection annotations, it would be better to use ClipGrader to filter incorrect annotations in COCO training set and show some performance gains on the test set to validate the usefulness of the proposed method.\n3. It’s mentioned in Section 3.3 that “we found that model size significantly impacts performance, with the largest CLIP model yielding significantly better results”, it would be better to have some quantitative comparison between different sizes of CLIP models.\n4. The majority of evaluations are conducted on datasets with well-defined and distinct classes (COCO, LVIS). Testing ClipGrader on more complex datasets, such as OpenImages, where bounding box quality varies more, would be better to show its generalizability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses above for a detailed list of questions."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "– The paper is well-written and easy to follow\n\n– The proposed method seems to be quite effective at assessing label quality"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Proposes CLIPGrader, an approach to fine-tune CLIP to judge the quality (correctness of box position and label) of detection bounding boxes.The approach is shown to achieve high accuracy at label assessment on COCO and LVIS and shown to improve the performance of semi-supervised object detection methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "– The paper makes limited technical contributions. It’s main contribution – empirically showing that CLIP can be fine-tuned to assess label quality – is interesting but in my opinion not substantial. The proposed finetuning strategy is a straightforward generalization of the original CLIP objective (and bears similarity to the supervised contrastive loss [A], which the paper should cite).\n\n– The motivation of the paper is somewhat weak. Why is CLIP well-suited to label assessment, beyond being a popular multimodal model? Why not use a specialized approach like RegionCLIP [B], which has been designed with detection in mind? \n\n– The problem formulation is also somewhat contrived. Why treat label quality assessment as a classification problem rather than regressing to the correct coordinates (or a delta therefrom)? Regression would alleviate the need to arbitrarily define “good” and “bad” bounding boxes, and allow for more fine-grained metrics (like AP). \n\n– The paper primarily measures accuracy based on test-set label assessment accuracy. A far more helpful measure would be performance of detection methods that factor in the predicted label quality (maybe by loss weighting, or pseudolabel filtering). While the paper does include a single experiment in Sec 4.4 on semi-supervised object detection, I think a comprehensive set of additional experiments is required to verify that the proposed task and model is actually useful for a real-world task.\n\n– The paper studies a synthetic label assessment setup, as the “bad” bounding boxes are generated by randomly perturbing bounding boxes. While this is reasonable as a starting point, the paper would be strengthened with experiments on “in the wild” datasets (eg. by having humans annotate ground truth boxes in an existing evaluation set as “good” and “bad”). This is particularly important since prior work has shown that labeling errors in detection are not always random and in-fact can be class and annotation protocol dependent [C].\n\n– The dataset contains examples of good and bad bounding boxes for each class as well as background boxes, but does not include examples of good bounding boxes but for the wrong class? How does the translate to the model’s performance?\n\n[A] Khosla, Prannay, et al. \"Supervised contrastive learning.\" NeurIPS 2020\n\n[B] Zhong, Yiwu, et al. \"Regionclip: Region-based language-image pretraining.\" CVPR 2022\n\n[C] Liao, Yuan-Hong, et al. \"Transferring Labels to Solve Annotation Mismatches Across Object Detection Datasets.\" ICLR 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the Weaknesses. \n\n#"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper leverages vision-language models for label quality assessment is valuable. \n2. The experimental results show the potential ability for dataset refinement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "High-quality annotations are essential for object detection tasks. This paper propose to leverage vision-language models (e.g. CLIP) to automatically assess the accuracy of bounding box annotations. The author tried a lot of ways, including prompt engineering, changes in model size, and different model fine-tuning strategies. The final results demonstrate that the proposed approach can identify errors in existing COCO annotations, highlighting its potential for dataset refinement."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Employing pre-trained models to relabel or denoise dataset is not novel. A large amount of literature has demonstrated the ability of multimodal models. \n2. The contribution of this paper is limited. \n3. The experimental results are not novel. As the CLIP model has been trained on a large number of text-image datasets, including the COCO dataset used in this experiment. I think this paper is more of a good attempt in engineering, leaning towards a technical report."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "train clip to assess label quality(correctness of class labels and spatial accuracy of bounding boxes) in object detection"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024clipgrader,\ntitle={ClipGrader: Leveraging Vision-Language Models for Robust Label Quality Assessment in Object Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1GPN2oa7P7},\nnote={under review}\n}"
},
"abstract": {
"value": "High-quality annotations are essential for object detection models, but ensuring label accuracy — especially for bounding boxes — remains both challenging and costly. This paper introduces ClipGrader, a novel approach that leverages vision-language models to automatically assess the accuracy of bounding box annotations. By adapting CLIP (Contrastive Language-Image Pre-training) to evaluate both class label correctness and spatial precision of bounding box, ClipGrader offers an effective solution for grading object detection labels. Tested on modified object detection datasets with artificially disturbed bounding boxes, ClipGrader achieves 91\\% accuracy on COCO with a 1.8% false positive rate. Moreover, it maintains 87% accuracy with a 2.1% false positive rate when trained on just 10% of the COCO data. ClipGrader also scales effectively to larger datasets such as LVIS, achieving 79% accuracy across 1,203 classes. Our experiments demonstrate ClipGrader’s ability to identify errors in existing COCO annotations, highlighting its potential for dataset refinement. When integrated into a semi-supervised object detection (SSOD) model, ClipGrader readily improves the pseudo label quality, helping achieve higher mAP (mean Average Precision) throughout the training process. ClipGrader thus provides a scalable AI-assisted tool for enhancing annotation quality control and verifying annotations in large-scale object detection datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"label quality",
"clip",
"object detection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/179b7edcd9635409b90516b5e1fedada4c6f27a4.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "ClipGrader: Leveraging Vision-Language Models for Robust Label Quality Assessment in Object Detection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1GTARJhxtq | Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models | main | Active | Data;Data Filtering;Data Pruning;Pretraining;Perplexity;Large Language Model;LLM | foundation or frontier models, including LLMs | 3;5;6;8 | 3;5;4;3 | 2;3;3;3 | 2;3;3;3 | 3;3;4;3 | 5.5 | 3.75 | 2.75 | 2.75 | 3.25 | -0.083624 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How do you expect the results to scale on models larger than 3B parameters? \n\nHow does models' performance change on domains which are pruned the most?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The goal, and the process, and algorithm are defined and presented very clearly. Experiments cover multiple settings, with different model sizes and training algorithms. \nThe proposed method is super useful for researchers who investigate practical techniques for data curation, with insightful empirical results. \nExperiments include two very different dataset distributions, the Pile dataset and Dolma. The work shows thorough experiments for various selection rates and perplexity criteria, presenting strong evidence about settings in which perplexity pruning does and does not work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes that smaller language models effectively prune large datasets in a way that benefits the training of much larger model. Applying perplexity-based pruning techniques, they explore using a small model to filter high-quality subsets of data for training larger models. This approach is interesting because it’s a cost-effective alternative to using large models for pruning, and is applicable in real settings. The findings indicate benefits for downstream accuracy and training efficiency.\n\nThe paper demonstrates that a 125m parameter model can successfully prune data for large models and improve downstream task performance. The paper shows empirical results testing on The Pile and Dolma, two datasets with very different domain structures.\nThey also study the two settings of over-training and data-constrained setups and provide additional insights."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Authors claim that datasets pruning increases the proportion of general domain data from web-scraped domains, and decreases the proportion of specific and technical domains. But it is unclear and counter intuitive why training on general domain data improves performance of models on benchmarks. I think the paper lacks analysis to explain this observation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- A quantized model may lead to better inference efficiency while calculating the perplexity. Was this considered while running the experiments?\n- High perplexity selection will also inevitably lead to the inclusion of a significant portion of the noisier examples in the overall dataset. How can we determine the proportion of such examples in the final dataset and exclude them reliably?\n- Minor typo (line 66): perplexity-basd -> perplexity-based\n- It would be useful to include the following closely related data pruning works in the related work section:\n - https://arxiv.org/abs/2403.07384\n - https://arxiv.org/abs/2402.09668"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper addresses an important problem of pruning the pre-training datasets to enable efficient training of LLMs.\n- The experiments are thorough and cover different dimensions of perplexity-based pruning. \n- The paper is well-written and the results are presented clearly. \n- The findings are significant, as they show that perplexity-based data filtering can not only reduce the size of the pre-training datasets, it also leads to better performance on certain downstream tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a perplexity-based pruning method for reducing the size of pre-training datasets. The effect of pruning is evaluated through the performance on downstream tasks as well. Two datasets are used for evaluation: Pile and Dogma. The pruning efficacy is determined for over-trained and data-constrained regimes as well."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper does not currently cover the computational complexity of the proposed pruning procedure. A few important questions that need to be considered in this regard:\n - How do the computational requirements for perplexity-based pruning increase with the size of the dataset to be pruned?\n - How does the cost of computing perplexity (before pruning) amortize over the efficiency improvements achieved while pretraining the model on the pruned datasets? \n- A discussion for choosing the right perplexity pruning method (low, medium, high) for the dataset should be included for the practitioners. From the experimental results, we can see that high perplexity selection performs better on Pile while medium perplexity selection is better for dolma. Can we extract any patterns from these results and other experiments that can be generalized to other datasets? \n - For example, prior theory on data pruning for vision tasks shows that the optimal pruning strategy changes depending on the amount of initial data. When data is abundant, the better pruning strategy is to keep harder examples. In contrast, for smaller datasets, keeping the easier examples leads to better performance. [1] \n- The results show that test set perplexity may not always be a sound metric for evaluating a pruning strategy and that downstream evaluation is necessary. What should be the cheapest way of conducting the downstream evaluation of the correct perplexity pruning method, i.e., the one that can yield reliable results at a minimal cost? For example, could there be a small set of representative downstream tasks or metrics that could serve as efficient proxies for full downstream evaluation?\n\nReferences:\n\n[1] https://arxiv.org/abs/2206.14486"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Fig.4 is interesting, but I'm not sure how Fig. 3 is relevant in practice - could you clarify?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The method is well motivated. Except for some uncommon terminology that is explained in later sections like \"non-standard training regime\", \"over-training\" (which is not over-fitting) the paper is clearly written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors filter LLM pre-training data by using the perplexity of a smaller language model. They demonstrate that dataset filtering improves the [initial] learning curve of LLM pre-training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "L186 suggests that the final models are (pre-)trained for a fixed number of steps, no matter the dataset size. This sets the stage for dataset filtering, since training on the full dataset may go through fewer epochs. It would be interesting to train for long enough to show convergence in the plots in Fig. 1. The story would be more convincing if there is an offset between the blue and red curves even after convergence. In fact, the \"over-training\" experiment in Sec. 3.4 shows diminishing gains, so I can imagine that they disappear fully at some point. The method would still have merits (steeper pre-training curve), just not the ones claimed in the paper.\n\nNovelty. Perplexity-based pruning and countless variations of it are well-studied. The authors set their work apart from prior work in L058, but neither of the arguments (i)-(iii) (evaluation on downstream task, exploration of domain compositions, \"non-standard\" evaluation regimes) strike me as particularly strong.\n\nI don't think that Algorithm 1 is really helping clarity. 1-2 normal equations would be just as expressive and more concise."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* L290: \"These results show that while the higher quality data resulting from perplexity-based data pruning does still lead to an improvement in downstream performance in the over-trained regime, there is not a relative increase in downstream improvement over the baseline when over-training.\" It would be good to understand why this is the case since there are no repeats. \n* L314: \"That training on repeated perplexity-pruned data leads to diminishing gains after four repetitions post- pruning suggests that the higher quality data resulting from pruning does not change the point for which repeating data yields diminishing improvements in performance.\" This sentence is confusing and should be reworded.\n* In section 4.2, the paper presents results showing that the pruning affects data composition such that some domains (e.g. web) are oversampled compared to others (e.g. pubmed). It would be useful to perform additional analysis to understand why this is the case e.g. is it possible that the training split (L113) resulted in a smaller proportion of these domains for the reference dataset?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Describes a simple approach to improve the performance of large language models using perplexity based data filtration using a smaller reference model.\n* Presents useful results e.g. 1) filtration criteria varies by dataset type and 2) test set perplexity is not a good indicator of the downstream task performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates whether a small model can be used to perform perplexity based data selection for a larger model. The key findings are that 1) a reference model with 30x fewer parameters compared to the larger model can be used to identify a subset of the training data which can improve the performance of the larger model relative to no pruning. 2) the filtered data subset can speed up training of the larger model, 2) the improvements carry over to some extent to over training and data constrained regimes, 3) ideal pruning criteria can vary by dataset e.g. for Pile, a high perplexity subset performs better while for Dolma, a medium perplexity subset works the best. The paper shows that test data perplexity is not a good indicator of the downstream task performance when using perplexity based pruning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The main results (Table 1) do not include a random baseline i.e. what is the performance of a model trained on a subset of the data which has a similar size as the perplexity filtered buckets but is selected randomly?\n* The paper does not contain ablations on the size of the reference model and sensitivity of the results to the random split (L113) used for training the reference model. Though exploring this space is computationally expensive, it may be useful to present 1-2 additional data points.\n* It would be good to see some additional analysis to understand why a high perplexity set works better for one domain while a medium perplexity set works better for others."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We demonstrate that the perplexity of a small language model can be used to prune the dataset that a significantly larger language model is trained on."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024perplexed,\ntitle={Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1GTARJhxtq},\nnote={under review}\n}"
},
"abstract": {
"value": "In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a larger model can yield high-quality data, we investigate whether smaller models can be used for perplexity-based pruning and how pruning is affected by the domain composition of the data being pruned. We demonstrate that for multiple dataset compositions, perplexity-based pruning of pretraining data can significantly improve downstream task performance: pruning based on perplexities computed with a 125 million parameter model improves the average performance on downstream tasks of a 3 billion parameter model by up to 2.04 and achieves up to a 1.45× reduction in pretraining steps to reach commensurate baseline performance. Furthermore, we demonstrate that such perplexity-based data pruning also yields downstream performance gains in the over-trained and data-constrained regimes."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Data",
"Data Filtering",
"Data Pruning",
"Pretraining",
"Perplexity",
"Large Language Model",
"LLM"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/96d5d71930a535d10fdb22c78b69b7a985e564d0.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1H90Gb9rJ9 | Optimizing Neural Network Representations of Boolean Networks | main | Active | Neural Networks;Boolean Networks;Lossless Optimization;Integer Linear Programming;NPN Classification | optimization | 5;6;6;8 | 4;3;1;2 | 2;3;2;4 | 2;3;3;3 | 3;3;2;4 | 6.25 | 2.5 | 2.75 | 2.75 | 3 | -0.512989 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Please provide comparison with traditional DAG optimization methods for a fair comparison."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper proposes to speedup the optimization of neural network representation of Boolean functions and consider architecture constraints. Instead of solving each subproblems independently, the paper finds solutions of each NPN class and exploit shared optimized representations of two-layer sub-networks. The optimization of sub networks is modeled as finding a polynomial with fewer monomials or monomial degrees. In architecture aware lossless optimization part, the level constraints are considered when performing sub-network replacement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to optimize neural network representations of Boolean networks by improving the efficiency with NPN classification of sub-networks and considering objective in sub-networks during optimization. It achieves up to 5.9x speedup than the caching solution and reduces neurons and connections of final representations by 60% and 70% respectively."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The scientific novelty is limited. NPN classification and level constraints based DAG optimization are common techniques used in logic synthesis tools and neural network compilers. \n2. The k-input LUT technology mapping lacks fair comparison with other traditional DAG optimization methods such as ABC (Boolean DAG optimization tools including technology indepedent and technology dependent optimization) and AI compilers like Google XLA.\n3. Only two-layer sub-NN optimization is considered which is relatively too local for better neurons and level optimization."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* how well does it scale, as solving integer LP can be more demanding than solving LPs. \n* Computation complexity wiase, how does it compare against SOTAs?\n* What's the impat of NPN equivalence lasses on the optimization process?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper provides a solid theoretical foundation for the proposed approach, such as the equation introduction of new concepts like NPN and detailed analysis of the underlying optimization problem.\n* Evaluation looks comprehensive, including various bnchmarks, to demonstrate the effectiveness especialy in reducing network size and improving optimization time. \n* The proposed method looks interesting and novel as it combines multiple techniques including MP-based mapping, NPN equivalence classes and objec-aware optmization, to optimize representations of BNs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new approach to optimizing neural network representations of Boolean networks. The authors propose a technique compressing NNs via minimizing the MP representation of each Boolean function in the network."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please see questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See Weaknesses"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is very well organized and written, providing the necessary background as well as the required proofs to support the claims of the authors. The experimental results clearly reflect the contribution of this work.\n\nThe proposed method outperforms the current state-of-the-art in terms of decreasing the size of neural networks, which means the number of connections and neurons, while simultaneously preserving the equivalent functionality. The proposed lossless compression, along with the objective-aware optimization, resulting in a faster and more efficient solution than the state-of-the-art.\n\nThe paper establishes a novel framework for lossless optimization techniques for neural network representation of boolean networks that can provide further advantages to neurosymbolic AI."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors present an optimization framework for deriving representations of neural networks of Boolean networks. The overall goal of the proposed framework is to overcome known limitations of current state-of-the-art methodologies that result in suboptimal neural networks, in terms of number of neurons and connections, which hinders their application in real-case scenarios. More specifically, the proposed method introduces a lossless technique for optimizing neurons and connections needed in a two-layer NN representation of a Boolean function. This is achieved by establishing an optimization problem that transforms the pruning of the network architecture into monomial reduction tasks for a given polynomial. The lossless functionality between the Minimized Multilinear Polynomial and the represented Boolean function is achieved by incorporating the heavyside threshold of the NN, with the relaxation of the optimization objective to the $l_1$-norm providing the required convexity. Due to the NP-hard nature of the proposed optimization, the authors introduce an objective-aware optimization, which is based on Negation-Permutation-Negation classification, that constructs subclasses of multilinear polynomial representations for the minimization of the Boolean functions, exploiting the shared representations among them and accelerating the NN optimization process using unique function caching. Finally, the paper provides two alternatives for optimizing the Neural Networks, one that involves all the vertices of the binary networks to the minimization of the multilinear polynomial and another that selects the subset of vertices in such a way that the depth of the resulting layer-merged neural network does not increase. The proposed method achieves significant improvements in contrast to the state-of-the-art approach in terms of optimization speed and the required connections and neurons."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed method is established in two-layer NN representation without discussing the potential generalization on the (2 + n) layer NN representation and the theoretical limits of the proposed method regarding this potential. Taking into account the non-stochastic nature of the proposed NPN transformation and the required time $O(m2^k) + e$, the proposed algorithm seems quite limited to the 2-layer NN representation. However, a further discussion of this can provide fruitful insights for future work.\n\nEven though the relaxation of the optimization objective provides the required convexity, the problem still remains NP-hard. Indeed, the proposed deterministic solution ensures the lossless functionality of the binary network with the caching solution providing significant acceleration, hindering, however, the scalability of the proposed method in target networks. To this end, I recommend further discussion of the existing stochastic methodologies in the bibliography for lossy solution, studying the accuracy-efficiency tradeoff between deterministic and non-deterministic methodologies. In my opinion, the deterministic linear programing nature of the proposed optimization method should be noted in the abstract of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. **Generalization to other Boolean network domains:** The experimental results focus primarily on digital circuits. Could the authors elaborate on the applicability of their methods to other types of Boolean networks, such as gene regulatory networks or biological networks? Are there any specific adaptations or considerations needed for these domains? Presenting results on even a small set of non-circuit BNs would greatly bolster the claim of general applicability.\n\n2. **Scalability analysis and potential optimizations:** The optimization time appears to grow considerably with BN size and K. Could the authors provide a more detailed analysis of the computational complexity of their methods? Are there any potential optimizations or algorithmic improvements that could be explored to enhance scalability, such as parallelization or more efficient data structures? A breakdown of execution time for different stages of the algorithm would help identify bottlenecks\n\n3. **Clarifying the impact of NPN transformations on the number of MMP computations:** The paper mentions that using NPN classification can reduce the number of MMP computations compared to function caching. However, the precise reduction factor ((2^k)!/2^(k+1)k!) is not immediately intuitive. Could the authors provide a more detailed explanation of how this reduction is achieved and its significance in practice, perhaps with a concrete example for a small value of k? It would be especially insightful to directly visualize how many MMP computations are saved for each benchmark circuit.\n\n4. **Connection to Neurosymbolic AI Implementations:** The paper mentions the potential of the work for neurosymbolic AI, but the link is somewhat abstract. Could the authors expand on how specifically the proposed methods could be integrated into neurosymbolic systems? For example, are there specific neurosymbolic architectures or frameworks where these optimized NN representations would be particularly beneficial? Perhaps a concrete example application scenario, even if hypothetical, could illustrate the potential."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Originality:** The paper tackles the problem of optimizing NN representations of Boolean networks from a fresh perspective. While NN compression is a well-studied area, the authors identify a specific gap in existing techniques, namely the lack of *lossless* methods applicable to the BN-to-NN mapping problem. They creatively combine ideas from Boolean function classification (NPN) and convex optimization to develop a new approach to address this gap. The concept of leveraging NPN classes to accelerate the optimization by exploiting shared representations among subproblems is particularly original and demonstrates a deep understanding of the underlying structure of the problem. Moreover, the introduction of the *leeway* concept and the architecture-aware algorithm for maintaining depth in layer-merged NNs showcases innovative thinking in adapting the optimization process to different NN architectures.\n\n**Quality:** The paper is technically sound, with rigorous mathematical formulations and proofs supporting the proposed methods. The authors carefully define the necessary concepts and notation, ensuring clarity in their technical exposition. The experimental methodology is well-designed, with appropriate benchmarks and metrics used for evaluation. The comparison against relevant baselines (naive and caching solutions) provides a strong validation of the proposed techniques. The inclusion of additional results and analysis in the appendix further reinforces the quality of the work.\n\n**Clarity:** The paper is generally well-written and organized. The introduction effectively motivates the problem and summarizes the key contributions. The background section provides the necessary context and definitions, though perhaps could benefit from a slightly higher-level motivating example early on for a broader audience. The use of figures, especially Figure 1, significantly aids in understanding the optimization problem and the proposed approach. The steps of the algorithms are clearly presented, and the results are reported in a concise and informative manner.\n\n**Significance:** The paper addresses an important problem with practical implications for various domains. Efficient BN simulation is crucial in areas like circuit verification and design automation. The proposed optimization techniques can lead to substantial reductions in NN size and faster optimization times, making NN-based BN simulation more practical and scalable. Moreover, the connection to neurosymbolic AI highlights the potential of the work for advancing this emerging field. The ability to represent symbolic systems efficiently using NNs could pave the way for new hardware architectures and algorithms that combine the strengths of both symbolic and connectionist AI approaches. The paper's focus on lossless compression is also significant from a safety and reliability perspective, as it ensures that the optimized NN representation remains functionally equivalent to the original BN."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the challenge of optimizing neural network (NN) representations of Boolean networks (BNs). The authors point out that while NNs can represent Boolean functions, the current state-of-the-art method for deriving these representations often results in suboptimal NNs with a large number of neurons and connections, leading to inefficient simulation. Existing NN compression techniques are either lossy or inapplicable due to the specific nature of the NN-based technology mapping problem.\n\nThe paper makes three key contributions. First, it proposes a novel lossless technique for optimizing two-layer NN representations of Boolean functions, focusing on reducing the number of neurons and connections while preserving functional equivalence. This is achieved by formulating the problem as a constrained minimization task and employing a convex relaxation technique to solve it. Second, the authors introduce an objective-aware optimization algorithm that leverages Negation-Permutation-Negation (NPN) classification. This algorithm exploits shared representations among the two-layer sub-networks to significantly speed up the optimization process, demonstrating a substantial speedup over naive and caching-based solutions. Third, an architecture-aware lossless optimization algorithm is proposed, targeting both unmerged and layer-merged NN architectures. This algorithm determines which sub-NNs should be minimized to achieve overall NN size reduction while optionally maintaining the depth of the layer-merged network, which is critical for latency-sensitive applications. Experimental results on benchmark BNs derived from digital circuits show significant reductions in the size of the resulting NNs, confirming the effectiveness of the proposed optimization techniques."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Impact of L1 Relaxation:** The authors acknowledge that relaxing the ℓ0-norm objective to ℓ1 might lead to suboptimal solutions. However, the paper lacks a detailed analysis of the extent to which this relaxation affects the quality of the results. Quantifying the gap between the ℓ0 and ℓ1 solutions for the benchmark BNs, or investigating alternative approximation methods for the ℓ0-norm minimization, would provide a more complete understanding of the trade-offs involved. Perhaps experiments comparing the optimized NN size obtained with ℓ1 relaxation to theoretical lower bounds achievable with ℓ0 could highlight the potential room for improvement.\n\n2. **Scalability to Larger BNs:** The experimental results suggest that the optimization time can become substantial for large BNs and higher values of K (maximum input size of LUTs). While the NPN classification algorithm offers speedups compared to caching, the paper does not thoroughly investigate the scalability limitations of the overall method. Analyzing the runtime complexity as a function of BN size and K, and potentially exploring strategies for further improving the efficiency of the optimization process (e.g., by leveraging parallelism or more sophisticated data structures), would be beneficial. Consider profiling the algorithms to pinpoint bottlenecks and focus optimization efforts.\n\n3. **Clarity and Accessibility for a Broader Audience:** Although the technical content is generally well-explained, the paper could benefit from a more intuitive and accessible introduction to the problem and its significance. Providing a high-level illustrative example that highlights the practical implications of optimizing NN representations of BNs would engage a broader readership within the ICLR community. While the paper currently focuses on a specialized audience with expertise in Boolean functions, making it more approachable for readers with a general machine learning background would enhance its impact."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024optimizing,\ntitle={Optimizing Neural Network Representations of Boolean Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1H90Gb9rJ9},\nnote={under review}\n}"
},
"abstract": {
"value": "Neural networks are known to be universal computers for Boolean functions. Recent advancements in hardware have significantly reduced matrix multiplication times, making neural network simulation both fast and efficient. Consequently, functions defined by complex Boolean networks are increasingly viable candidates for simulation through their neural network representation. Prior research has introduced a general method for deriving neural network representations of Boolean networks. However, the resulting neural networks are often suboptimal in terms of the number of neurons and connections, leading to slower simulation performance. Optimizing them while preserving functional equivalence --lossless optimization-- is an NP-hard problem, and current methods only provide lossy solutions. In this paper, we present an algorithm to optimize such neural networks in terms of neurons and connections while preserving functional equivalence. Moreover, to accelerate the compression of the neural network, we introduce an objective-aware algorithm that exploits representations that are shared among subproblems of the overall optimization. We demonstrate experimentally that we are able to reduce connections and neurons by up to 70% and 60%, respectively, in comparison to state-of-the-art. We also find that our objective-aware algorithm results in consistent speedups in optimization time, achieving up to 34.3x and 5.9x speedup relative to naive and caching solutions, respectively. Our methods are of practical relevance to applications such as high-throughput circuit simulation and placing neurosymbolic systems on the same hardware architecture."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Neural Networks",
"Boolean Networks",
"Lossless Optimization",
"Integer Linear Programming",
"NPN Classification"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b3825bd7435a5bc3838c75cfae0770e05a639cbc.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Optimizing Neural Network Representations of Boolean Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1HCN4pjTb4 | Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse | main | Active | neural collapse;gradient descent training;weight decay;balancedness | optimization | 5;6;6;8;8 | 4;3;3;4;2 | 3;3;3;4;4 | 3;3;2;4;4 | 2;3;1;4;3 | 6.6 | 3.2 | 3.4 | 3.2 | 2.6 | -0.356348 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See Weaknesses for the most concerning questions. \nAdditionaly:\n\nQ1: Line 188: \"... aproaches 1 ...\" Shouldn't it be \"... aproaches 2\" based on (3)? Is it still sufficient for orthogonality claims (can the proof be adjusted to account for it)?\n\nQ2: Proof sketch of Theorem 4.4., lines \"306\". Why and how is are \"two phases\" guaranteed to happen during GD training?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "S1: Novelty, pushing for (more) general results on timely and attractive topic of 'neural collapse' in DNN training\nS2: Striving for a theoretically solid argumentation stating necessary assumptions (in Theorems and Propositions) and as Assumtions 4.1, 4.2, 4.3 ...\nS3: Well done Introduction positioning paper within existing works (UFM, NTK and other specific results)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Dear authors, thank you for submitting your work to ICLR 2025. The paper considers the phenomenon of 'neural collapse' and attempt to extend current rather special case results to brader class of networks, specifically, a deep neural (non-linear) networks with a wide first layer, funnel architecture and several, i.e. 2+, linear layers (head) before the output. After taking several assumptions, paper shows in series of Theorems (3.1, 4.4,5.2) and Propositions (4.5, 5.1,5.3) that GD training with weight decay (under further assumptions) leads to within class variability collapse (NC1). Results are supported by experiments on MNIST and CIFAR and MLP and ResNet + MLP head."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1: Accessibility of the paper for a '15 mins' reader. The main results are formulated using (Abstract, l19) 'balancedness', (Abstract, l20) 'bounded conditioning' and other only later defined phrases and makes it hard to asses the attractivity/usefullness of paper unless reading it whole. It is recommended to rework (the Abstract at least, if no Conclusions are present) to make it clear and self-sustained.\n\nW2: Quite a few assumptions are required in general. More over thay are added along the way, e.g., Assumption 4.2, $|\\sigma(x)| \\leq |x|$, etc., (which is ok, but also narrows the applicability of the final result). Some of those are very technical (especialy those in Theorems, such as in Theorem 3.1, (2),(3), (4)) but as well Assumption 4.3, Theorem 4.4. (10) and more) and with paper specific notation to learn, e.g., $\\lambda_{3 \\rightarrow L}$. It would help paper to have an thorough discussion on applicability/limitations of these assumptions. Perhaps at expense of shortening some 'proof sketches' refering them to SM?\n\nW3: 'Discussion and Conclusion' sections are not presented. This is most likely due to space constrained and will be added in case of acceptance (?). Yet, I find it impacts the paper quality negatively. Especially in a light of the previous (W2) comments, the discussion and conclusions could have brought a critical 'back to reality' summary. Idealy it would bring more intuition for results and their application. Yet, I find it very important to have such Discussion reviewed before publishing ...\n\nW4: Some references are rather inaccurately interpreted/generalized too much perhaps. For instance the lines 399-400 \"...Thankfully, instead of simply diverging, for large η (but not too large) the parameters naturally end up at the ‘edge of stability’: the top eigenvalue of the Hessian is close to $2/\\eta$ , i.e., the threshold below which GD is stable...\" from (Cohen et all. 2021). Referenced work provides only experimental evidence for a phonomenon and only approximately, i.e., for certain settings operator norm of Hessian exceeds 'stability threshold' $2/\\eta$, etc. Than approximation used on l410, especially $O(\\epsilon_1)$ is only valid if $\\nabla^2_{\\theta} Z_L$ norm is bounded, which is ok for NTK regime, but not necessarily for large learning rate regime. Or is it?\n\nW5: Following up on W4, Proposition 5.3, and other Theorem combine NTK with large learning rate regime, which sounds dangerous. Also requirement on wide first layer, suggest a NTK limit case is required. Could authors clarify a bit more on this relation?\n\nOverall, I find it to be a solid attractive paper with technically legit reasoning, taking few shortcuts (some noted above) and with missing discussion and conclusions. I suggest authors to work on alleviating weaknesses and discussing limitations to improve contributions of this interesting work significantly."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In Figure 3 the authors' numerical results show that non-linear layers are increasingly linear, as the depth of the non-linear part increases. Could the authors provide more insights into how this observation relates to their theoretical results and the mechanisms driving this increased linearity?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "This work provides an interesting theoretical insight on the role of training algorithm in the emergence of neural collapse, which I found especially exciting, and I think it opens up new directions for understanding the generalization properties of deep learning models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an interesting theoretical advancement in understanding neural collapse in practical, end-to-end training scenarios, moving beyond the unconstrained features model. The authors provide a rigorous demonstration that neural collapse arises in networks with linear layers appended to a nonlinear backbone, given conditions of interpolation and balancedness. They show that these conditions hold for sufficiently wide networks trained with gradient descent and L2 regularization. The empirical results further support the theoretical findings, showcasing the robustness of neural collapse across various architectures and datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In my opinion, this is a solid paper and I can not think of a weakness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- What exactly is the role of these additional linear layers? Are they only required for proving NC2 and NC3?\n- Do these results suggest that neural networks with 1 non-layer and many linear layers can also exhibit neural collapse?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The authors provide the first theoretical analysis of neural collapse that does not rely on the unconstrained features model.\n- The use of additional linear layers is novel and interesting technique. To the best of my knowledge such an architecture has not been studied.\n- The results apply to deep neural networks trained with gradient descent (under a particular architecture/nonstandard activation function) as well as networks which are globally optimal for the weight decay regularized objective."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides theoretical evidence for end-to-end training of true deep neural networks. This is contrary to previous works which primarily rely on the unconstrained features model. The paper provides explicit bounds on NC1, NC2 and NC3 for both globally optimal ($\\ell_2$) regularized deep neural networks as well as neural networks trained with gradient descent. The results provide new insights on the role of additional linear layers, weight decay regularization and large learning rates with respect to neural collapse."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- My main concern with the paper is the writing. The theoretical statements are very difficult to parse this is especially the case for eqs 2-5 in Theorem 3.1. Can the authors provide a more interpretable version of this theorem (hide constants, big-O etc.) and delay the details to the appendix?\n- I have the same concern about Theorem 4.4.\n\n### Minor\n- The proof of Theorem 3.1 in the Appendix could also be more clear. For example mentioning Weyl's inequality in the first inequality of (21)\n- In the proof of Theorem 5.2 at the Appendix (line 1264) shouldn't it be\n$$\\kappa(W_L) = \\kappa(W_{L:L_1+1})^{\\frac{1}{L_2}}$$\nnot \n$$\\kappa(W_L) = \\kappa(W_{L:L_1+1})^{\\frac{1}{L_1}}$$\n- Same concern about $L_1$ vs $L_2$ in the statement of Theorem 5.2."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Are there no assumptions on the activation function $\\sigma$ needed for Theorem 3.1? None are stated in Section 3, although assumptino 4.2 appears later. It seems odd that a potentially discontinuous and horribly behaved activation function would be permitted (I imagine Lipschitz is required?)\n\n2. Assumption 4.2 seems more like a smooth leaky relu, which has appeared in prior work [Chatterjee arXiv:2203.16462, Frei et al COLT 2022]\n\n3. The authors talk about (NC1)-(NC3), what about (NC4) from the original Papyan paper?\n\n4. Are there any important differences between the proof of Theorem 4.4 and the proofs/results in Nguyen and Mondelli? I'm not familiar with that work, but it seems like it's not super necessary to have details about optimization (e.g. the PL inequality) in the main section of the paper, it distracts from what I think are the more interesting and stronger results elswhere in it (it would also give additional space to have a proper conclusion and outline what the authors think are future directions). I am assuming balancedness didn't appear in the prior analysis but does here. A few sentences describing high-level differences between proofs and ideas would be helpful. \n\n5. Also, to be clear, the optimization analysis in Sec 4 is in the NTK regime right? I didn't see this explicitly mentioned but it should be if it is known."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The authors provide a novel characterization of conditions under which neural collapse can provably occur. I am not an expert on the NC theory literature, but to my knowledge, no prior work identified balancedness and interpolation as being key factors which can allow for this to occur, and this is a pretty strong finding. Since the balancedness of post-fixed linear layers is the strongest condition (as most common neural nets do not have multiple linear layers at the end, only a single one), the findings in Section 5 about how boundedness and interpolation can suffice for NC2+NC3 are also a nice addition. The numerical findings nicely complement their theoretical ones."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors identify a set of conditions under which various neural collapse phenomena provably occur in deep neural nets. They consider neural nets which have a sequence of nonlinear layers and then a sequence of linear layers. They find that approximate interpolation, weight balanced-ness, and boundedness suffice for deriving various neural collapse phenomena. They then show that GD on networks with a \"pyramidal\" overparameterized topology (i.e., first width is >= number samples, remaining widths are decreasing), under suitable initialization and regularization, allow for one of the neural collapse phenomena to hold. They then identify conditions (near-interpolation and small-norm) which ensure that global minimizers of the loss can satisfy all of the neural collapse phenomena. Finally, they look at the neural collapse phenomena from the perspective of the edge of stability via an analysis of the properties of the hessian under EoS assumptions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There aren't any serious weaknesses to me. The strongest assumptions--namely the need for multiple linear layers in order for the NC phenomenon to occur via the main theorem--seem necessary per experiments in Figure 1. So it seems that these assumptions are strong for a fundamental reason.\n\nThe pyramidal structure assumption is a bit odd/strong, but only seems needed for the optimization result, which I don't think is the central point of the paper. \n\nThe authors don't provide a conclusion or discussion section, and it would have been useful to have comments from the authors about what they think are the weakest parts of their work and what work should be prioritized in the future. I think they could get some additional space by removing some of the details about optimization from Sec 4 since I don't think they're super important."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am wondering whether the layer balanced-ness property after training can be proved as a direct consequence of [1] in the author's setting.\n\n[1] Du, Simon S., Wei Hu, and Jason D. Lee. \"Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced.\" Advances in neural information processing systems 31 (2018)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This work goes beyond the conventional unconstrained feature model (which many previous works have worked on) and proves that neural collapse can happen after training for full-connected neural networks satisfying some architectural conditions such as pyramidal topology and smooth activation. The conditions established in Theorem 3.1 of this work under which neural collapse can happen seem very reasonable. Indeed, later, the authors proved that those conditions can be satisfied by training a neural network via gradient descent which automatically implies that neural collapse can happen after training. \n\nI am not very familiar with the prior works along this line of research such as (Kothapalli & Tirer, 2024), (Hong & Ling, 2024) and many others mentioned in the introduction and related work. Based on my knowledge on neural collapse, I think this work made some interesting contributions towards understanding this phenomenon."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies neural collapse phenomenon in training wide neural networks. The first result in this work establishes some general conditions (interpolation, balancedness between layers, well-conditioned-ness of weight matrices) such that NC1, NC2, and NC3 can hold. The second result considers training a wide neural network with gradient descent such that the aforementioned conditions hold after training, which implies neural collapse can happen after training."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The analysis critically relies on the fact that the last two layers of the neural network are linear. I can definitely see this condition makes the problem a lot easier to analyze. I am wondering how hard it is to remove such restrictions. \n2. It seems the analysis of Theorem 4.4 relies on the neural network in the NTK regime, as the pyramidal topology assumption has appeared in previous works such as (Nguyen & Mondelli, 2020). I don't regard this as a major weakness even if it turned out to be true that the networks are in the NTK regime given the contribution of this work, however, I do appreciate clarification on this."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We consider deep neural networks with at least two final linear layers, and we show that neural collapse provably holds in the end-to-end training of the model with weight decay"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024wide,\ntitle={Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1HCN4pjTb4},\nnote={under review}\n}"
},
"abstract": {
"value": "Deep neural networks (DNNs) at convergence consistently represent the training data in the last layer via a highly symmetric geometric structure referred to as neural collapse. This empirical evidence has spurred a line of theoretical research aimed at proving the emergence of neural collapse, mostly focusing on the unconstrained features model. Here, the features of the penultimate layer are free variables, which makes the model data-agnostic and, hence, puts into question its ability to capture DNN training. Our work addresses the issue, moving away from unconstrained features and studying DNNs that end with at least two linear layers. We first prove generic guarantees on neural collapse that assume (i) low training error and balancedness of the linear layers (for within-class variability collapse), and (ii) bounded conditioning of the features before the linear part (for orthogonality of class-means, as well as their alignment with weight matrices). We then show that such assumptions hold for gradient descent training with weight decay: (i) for networks with a wide first layer, we prove low training error and balancedness, and (ii) for solutions that are either nearly optimal or stable under large learning rates, we additionally prove the bounded conditioning. Taken together, our results are the first to show neural collapse in the end-to-end training of DNNs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"neural collapse",
"gradient descent training",
"weight decay",
"balancedness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ea1298d0fca925f8217de3bd556fbec936700e3d.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1HQZ4QFWi8 | Aligning Large Language Models via Self-Steering Optimization | main | Active | LLM;Alignment;Automated alignment | foundation or frontier models, including LLMs | 3;3;3;6 | 4;4;2;3 | 3;2;2;3 | 3;2;2;3 | 1;2;2;2 | 3.75 | 3.25 | 2.5 | 2.5 | 1.75 | -0.174078 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Related work (2.1) should be its own section preceding section 2.\n2. Should not use theta for loss weight (since it’s commonly used to refer to policy parameters).\n3. The problem setting is not clearly defined - should be defined in the beginning of section 2 or its own section.\n4. Line 199/200 - what does this backdoor refer to? This needs to be more clearly explained.\n5. No error bars are given in results. This is particularly because many of the results show little difference between SSO and the baselines. \n6. GSM8K iter1 of SSO seems misbolded in Table 1 - it is lower than modified PBAA iteration 1.\n7. I would argue all MATH and GSM8K (Table 1) results are within noise. AE2 is also marginal (15.0, vs 14.9 for PBAA iteration 2).\n8. Understanding why PBAA AE2 drops significantly would be an interesting contribution.\n9. A good ablation would be simply removing the self-steering term (and keeping the WPO-inspired term) to understand its impact."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper considers a very interesting idea of introducing principle-based methods into preference optimization algorithms such as DPO and IPO. Such methods, especially their iterative versions, have substantial drawbacks as identified by section 1 of this paper, and addressing them would go a long way in achieving scalable and efficient LLM alignment."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an auxiliary additive “self-steering” loss for iterative preference optimization algorithms (e.g. iterative IPO, DPO) for LLM alignment. This self-steering term is inspired from the principle-based alignment literature, and is designed to maintain a distinguishable gap between positive and negative responses despite sampling them on-policy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I found the paper to be weak in the following aspects:\n1. **Experimental results.** Many of the improvements of the method seem very incremental, or within noise (Table 1). Seeing these results, I'm not convinced that this method offers noticeable improvements over existing baselines.\n2. **Clarity.** The paper structure and writing were lacking in several areas (see below), and I found the method to be explained poorly despite its simplicity. In particular, the loss term could be explained and motivated much better in section 2.3."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can you show the improvement of SSO from the generative data and the proposed optimization objective separately?\n2. Can you further explain why using $y^-$ in Equation (2) will cause a bookdoor problem? In Equation (3), why should $x^-$ prefer $y^O$ over $y^+$?\n3. Why do you choose different base models in the experiments, e.g., the pretrained model, instruct model, and also SFT model (from Table 3)? Is the SFT model the baseline from previous experiments?\n4. In Figure 4 (a), why can we see \"IPO caused a gradually decreased accuracy\" since both the optimization methods and the data are different?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The automated alignment process reduces dependence on human annotations, making model alignment more scalable and cost-effective.\n2. The results of SSO demonstrates improvements on benchmarks for both subjective and objective tasks with effective length control.\n3. SSO can be extended to other base loss functions, e.g., IPO and DPO."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel method called Self-Steering Optimization (SSO) for automated large language models (LLMs) alignment. SSO autonomously generates high-quality preference signals based on predefined principles to aid in preference learning. The authors also propose a new optimization objective based on WPO and IPO. The method demonstrates effectiveness on Qwen2 and Llama3.1 models across multiple benchmarks compared to SFT models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The improvement of this paper is mainly based on two categories, the synthetic data and the new objective. However, in the experiments the authors do not separate them well to state their effectiveness. \n2. The clarity of this paper is not enough. The authors should provide more background on previous methods like WPO. The notations are also unclear. For example, $p^+, p^-$ from $\\mathcal{G}$ defined in Equation (1) do not appear in following contents. Meanwhile, in Section 2.3, the authors introduce multiple QA pairs for their objective without well explaining their expectations. \n3. The SFT baseline is based on basic data rather than the synthetic data. DPO/IPO with SSO data is also not compared."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* How would your method scale with smaller models?\n\n* How does SSO handle scenarios where human-like feedback is ambiguous or lacks clear contrastive principles?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The paper provides extensive benchmarking of the method and with additional experiments proving the robustness of the method.\n\n* As the paper touched upon, the method can be extended to other loses that is not IPO-based which makes it more flexible.\n\n* The method reduces reliance on costly human annotations, paving the way for more scalable training processes."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Self-Steering Optimization (SSO), an method used to align LLMs with minimal human intervention. SSO autonomously generates on-policy preference signals to guide the training of policy models without the need for manual annotation. This approach leverages predefined contrastive principles during iterative training to maintain a consistent quality gap between chosen and rejected responses. The paper validates SSO using two foundation models, Qwen2 and Llama3.1, showcasing significant performance gains across both subjective and objective benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The current design of the self-steering and weight functions is simplistic as mentioned in the limitations. \n\n* The writing is a unclear at times and things in the method section could afford some more clarity. Especially reasoning about how your method solves your fundamental problem. Right now it's offered as a solution without going into details how.\n\n* It's unclear what the author means with \"Expectations\" at section 2.3.\n\nOverall, a plan on how you will improve the clarity of the introduction where you should clearly state the problem and then how your method mend this problem would go a long way."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Questions on teh writing in the draft:\n* Several terms are not properly defined. What are principles $p^+$ ad $p^-$. Why are there only two of them? \n* What is $y^0$ and where does it come from?\n* How does $x^+$ relate to $p^+$.\n* Several ambiguous terms. What does \"accurate signal\" mean?\n* What does \"We also designed a W for learnable signals\" mean?\n\nQuestions on the method:\n* Could the authors be precise about what the delta is versus prior work? I pose this question in more detail in the weaknesses section. \n\nQuestions on Experiemnts:\n* The Modified PBAA baseline is never defined. What is it?\n* Why do we evaluate alignment methods on benchmarks like MMLU-Pro and math? Looking at the appendix, the alignment principles often have nothing to do with these benchmarks, yet they are the core means of evaluation. How can we know how helpful SSO is for alignment if the reported benchmarks are not actually concerned with alignment.\n* Why should we be able to compare PBAA-based methods and Ultrafeedback? It seems like these are just totally different datasets. Could the authors explain this?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* this paper tackles an important problem -- namely improving efficiency in the generation of alignment data.\n* the paper does evaluation across a large number of benchmarks and sceniors (though the methodology and reasoning behind them is questionable, see weaknesses.)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Self-Steering optimization, a preference finetuning method that automatically genreates perferences using contrastive pairs. SSO uses a combination of losses on automatically generated data to finetune an LLM."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Writing** \n\nThe paper is a bit hard to approach without proper background, but htis is not provided by the paper. In several places notation is not proerly defined. See the \"questions\" section as well.\n\n* I understand that the authors build on top of contrastive principles, but given an overview of this seems like necesary background.\n* More intuition on the individual loss terms is necessary.\n* Several grammar mistakes which should be corrected.\n\nThere are several unclear sentences / phrases in the paper. At present, I do not believe the writing passes the bar for publication. At the end of reading the paper, it is a bit unclear why the authors chose the specific losses / formulations used. \n\n**Delta Versus Prior work** \nIt's unclear to me what the delta versus prior work is. Granted, I am not extermely familiar with principle based alignment. However, the authors do not do a good job articulating the differences between SSO and other methods. The closest they come to doing so is at the end of Section 2.1 where it is stated that \"Additional inputs, such as principles, could lead to insufficient... we propose SSO to address these limitations\"\n\nWhat part of SSO is different than prior work? I assume prior work has the contrastive principle sampling? Is the difference then just the on-policy weighting function W? Why is this important? This also seems to be taken from WPO.\n\n\n**Experiments**\nThe experiments section is not clearly written enough for me to discern what conclusions should be made. After reading the work, I was left with several questions about the methodology and presentation of the experiments:\n* The Modified PBAA baseline is never defined. \n* it doesn't make sense to me that the authors use ultra-feedback for training, but evaluate on MMLU-Pro and math. How does alignment influence math performance? \n* Several of the results do not compare to baselines, and only present results for SSO. This includes Table 3 and Table 4"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduces Self-Steering Optimization (SSO), a novel approach that enhances model automated alignment by iteratively optimizing the learnability and accuracy of generated signals, demonstrating significant improvements."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024aligning,\ntitle={Aligning Large Language Models via Self-Steering Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1HQZ4QFWi8},\nnote={under review}\n}"
},
"abstract": {
"value": "Automated alignment develops alignment systems with minimal human intervention.\nThe key to automated alignment lies in providing learnable and accurate preference signals for preference learning without human annotation.\nIn this paper, we introduce Self-Steering Optimization ($SSO$), an algorithm that autonomously generates high-quality preference signals based on predefined principles during iterative training, eliminating the need for manual annotation. \n$SSO$ maintains the accuracy of signals by ensuring a consistent gap between chosen and rejected responses while keeping them both on-policy to suit the current policy model's learning capacity.\n$SSO$ can benefit the online and offline training of the policy model, as well as enhance the training of reward models.\nWe validate the effectiveness of $SSO$ with two foundation models, Qwen2 and Llama3.1, indicating that it provides accurate, on-policy preference signals throughout iterative training.\nWithout any manual annotation or external models, $SSO$ leads to significant performance improvements across six subjective or objective benchmarks.\nBesides, the preference data generated by $SSO$ significantly enhanced the performance of the reward model on Rewardbench.\nOur work presents a scalable approach to preference optimization, paving the way for more efficient and effective automated alignment."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"Alignment",
"Automated alignment"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/795a77964b59b763f9e7616f8b20fdc430133a81.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Aligning Large Language Models via Self-Steering Optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1IeCqgULIM | Abstracting and Refining Provably Sufficient Explanations of Neural Network Predictions | main | Active | explainability;XAI;explainable AI | interpretability and explainable AI | 5;5;6;8 | 3;3;1;3 | 3;3;3;4 | 2;2;3;3 | 3;3;4;3 | 6 | 2.5 | 3.25 | 2.5 | 3.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Did you consider some trade-off of sufficiency versus runtime?\n\n- How do you solve the issue of uniqueness of the relevant feature set?\n\n- How does this work compare to \"brute-force\" occlusion?\n\n- How did you verify the sufficiency for the heuristics-based approaches?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The work is clearly written.\n\n- The proposed approach is well-motivated.\n\n- The work makes the limitations of its approach clear.\n\n- The application of abstraction-refinement from the area of model verification\n to feature explanation in an occlusion-based context is original."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work introduces an approach to provide \"provably minimally sufficient\nexplanations\", based on \"abstraction refinement\". Provably minimally\nsufficient explanations are defined as the minimal set of features required to\nunder which the model produces the same prediction. Abstraction\nrefinement is a method from model checking and verification, which reduces the\ncomplexity of the model while keeping the prediction (winning class) constant,\nsomewhat similar to model pruning. The work includes a formal proof motivating\nits approach, and some empirical experiments analyzing the runtime and\nidentified number of minimal features, as well as a comparison to two similar\napproaches with respect to sufficiency and runtime."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Significance: The practicality of the approach is severely limited by is long\n runtime (up to 3 hours for a single attribution mask). This issue could be\n alleviated by discussing trade-offs, especially considering the approaches in\n Table 2. Comparing with these methods, SIS and Anchors, there is likely some\n optimal trade-off between sufficiency and runtime that would be valuable to\n analyze.\n\n- Contribution: The identified explanations may not necessarily be unique. A purely additive\n model with some threshold could have any combination of inputs, as long as\n the threshold is passed (i.e., the prediction does not change). Due to this,\n the identified feature attributions might not necessarily present all\n features relevant for the prediction, but rather only a subset thereof.\n A discussion of the uniqueness, and the issue of the order of removing the\n features, would be very valuable.\n\n- Novelty: There is a plethora of approaches (see, e.g., Ancona et al., 2019 for\n approximations of Shapley values) that assign relevance to features (somewhat\n different to choosing feature subsets) with this issue without constraining\n the sufficiency (i.e., the fidelity) of the model directly. These mostly\n avoid computing the Occlusion (see Zeiler 2014), which observes the\n prediction under removal of individual features, due to its infeasible\n runtime. The approach presented is very similar to occlusion-based\n approaches, as the model is reduced in order to occlude parts of the input.\n This is an important body of related work to discuss.\n\n\nReferences:\n\nAncona, M., Oztireli, C., & Gross, M. (2019, May). Explaining deep neural\nnetworks with a polynomial time algorithm for shapley value approximation. In\nInternational Conference on Machine Learning (pp. 272-281). PMLR.\n\nZeiler, M. D. (2014). Visualizing and Understanding Convolutional Networks. In European conference on computer vision/arXiv (Vol. 1311)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors provide further insights or potential solutions on how to extend the applicability of their method to more complex, state-of-the-art neural network architectures?\n2. In the experimental setup, was the computational overhead of the abstraction-refinement process compared to traditional methods quantified beyond explanation size and time? A breakdown of this could enhance the paper's impact.\n3. How does the method perform across different domains, such as vision versus text? Are there domain-specific challenges that might affect the sufficiency of explanations?\n4. Have the authors considered evaluating their method on a complex regression problem, such as the aircraft taxiing task used in Wu et al.'s work?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. Introduces a novel abstraction-refinement framework integrated with neural network verification for generating provable explanations, a unique combination in the field.\n2. Empirical results are robust, with detailed comparisons to baseline methods.\n3. The paper is well-written with precise definitions and a logical flow that enhances its readability and understanding.\n4. Addresses a critical challenge in XAI, making it possible to deploy more reliable AI systems in regulated environments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an abstraction-refinement technique to create provably sufficient and minimal explanations for neural network predictions. The method works by initially generating a smaller, abstract network through a neuron-merging technique. Neurons with similar behavior are combined, controlled by a reduction rate, which specifies the extent of abstraction. This smaller network allows faster computation of explanations, but if the explanation is insufficient, the network is iteratively refined by gradually adding neurons back until a satisfactory explanation is found.\n\nThe evaluation on datasets like MNIST, CIFAR-10, and GTSRB shows that this approach is more efficient in both computation time and explanation size than traditional verification-based methods. However, the method’s reliance on neural network verification may limit scalability, and its testing on only standard benchmarks raises questions about real-world applicability. Nonetheless, the paper’s contribution lies in using formal verification to ensure that the explanations are very sound and reliable, which is critical for safety-sensitive domains."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method’s reliance on neural network verification queries limits its scalability to real-world applications. The verification tools required here may not support the level of scalability needed for larger, state-of-the-art networks.\n2. The paper primarily tests on MNIST, CIFAR-10, and GTSRB, standard benchmarks that do not adequately test the scalability and generalizability of the method. This narrow evaluation undermines claims of efficiency and limits insights into practical, diverse applications (such as regression problem). Including a challenging regression problem, such as the aircraft taxiing task evaluated by Wu et al., would provide stronger evidence of the method's scalability and applicability in high-stakes, continuous-output domains.\n3. The abstraction process risks oversimplifying the neural network to a degree that explanations may lose meaningful detail, leading to explanations that are formally sufficient but practically uninformative.\n4.The paper’s current evaluation lacks a comprehensive set of baselines, particularly from perturbation-based and gradient-based explainability methods. Including comparisons with these widely used XAI techniques would better contextualize the capabilities of the proposed abstraction-refinement approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How well does the abstraction-refinement approach scale to more complex architectures, such as Transformers or deeper CNNs, beyond the datasets tested? Can the authors provide insights or preliminary results on its performance with larger models?\n2. The paper mentions that abstraction reduces the network size, potentially losing information. How does this information loss impact the quality or trustworthiness of the explanations? Could the authors quantify or analyze this trade-off?\n3. The experiments focus on image datasets. How does the approach generalize to other types of data, such as time-series, tabular data, or text? Are any modifications to the method necessary for non-image data?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper presents a unique abstraction-refinement technique, effectively bridging the gap between interpretability and scalability in neural network explanations. This approach is innovative in the realm of provable explanations.\n2. Unlike many heuristic-based explainability methods, this technique provides formal guarantees for explanation sufficiency, which is highly valuable in safety-critical applications requiring reliability in interpretability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an abstraction-refinement approach to efficiently generate provably sufficient explanations for neural network predictions. Traditional formal explainability methods are computationally intensive and struggle with scalability on larger models. This method simplifies the neural network by creating a smaller, abstract version, which speeds up the computation of explanations. If the explanation is insufficient, the network is gradually refined until a minimal, sufficient subset of features is identified."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper primarily demonstrates results on relatively simple models and standard datasets. Testing on more complex architectures (e.g., deep CNNs or Transformer-based models) would strengthen the claims regarding scalability and broader applicability.\n2. While the abstraction-refinement approach reduces computational load, it is still constrained by the scalability limits of neural network verification. The authors could address this limitation by discussing ongoing advancements in verification techniques and how they might enhance this method.\n3. The comparison focuses primarily on heuristic-based methods (e.g., Anchors and SIS) but lacks depth regarding alternative formal explanation methods. Adding comparisons with other provable techniques would provide a more comprehensive evaluation.\n4. The abstraction process may lead to information loss, which could affect the explanation's precision or fidelity. The paper could benefit from a more in-depth analysis of the trade-offs between explanation minimality and information retention across abstraction levels."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- How hard is it to extend these results to arbitrary modifications of the complement, therefore not limiting to an epsilon-ball perturbation?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Providing provably sufficient explanations is a relevant problem, and developing methods to compute them efficiently is certainly of great interest. The results confirm the effectiveness of the proposed methodology\n- The overall quality of the writing and presentation is very good\n- The authors provided the source code with a good description of how to use it"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to adopt an abstraction-refinement approach similar to distillation techniques for scaling up the extraction of provably sufficient explanations.\nThis is a relevant research direction, as traditional methods have scalability issues.\nResults confirm that the extracted explanations are indeed sufficient and also minimal, and the runtime shows great improvements compared to baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Corollary 1 is essential for the overall message of the paper, but the proof is not self-contained and seems more like a straightforward application of the result of [1]. The authors should make the proof more self-contained, also clarifying how it builds on top of [1].\n\n- The title suggests that the main contribution is a method providing provably sufficient explanations for neural networks. To my understanding, however, providing a provably sufficient explanation of an abstract model as per [1] is *fairly easy*, given that a sufficient explanation for any abstract model will also be sufficient for the original one. Nonetheless, this does not guarantee the minimality of the explanation, requiring the iterative refinement method proposed in Algorithm 2. I wonder, therefore, whether the main contribution lies in providing provably sufficient explanations, or in making a provably sufficient explanation also provably minimal.\n\n\n\n\n\n\n[1] Fully Automatic Neural Network Reduction for Formal Verification. Tobias Ladner and Matthias Althoff"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "An abstraction-refinement approach to derive provably sufficient explanations for neural network predictions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024abstracting,\ntitle={Abstracting and Refining Provably Sufficient Explanations of Neural Network Predictions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1IeCqgULIM},\nnote={under review}\n}"
},
"abstract": {
"value": "Despite significant advancements in post-hoc explainability techniques for neural networks, many current methods rely on approximations and heuristics and do not provide formally provable guarantees over the explanations provided. Recent work has shown that it is possible to obtain explanations with formal guarantees by identifying subsets of input features that are sufficient to determine that predictions remain unchanged by incorporating neural network verification techniques. Despite the appeal of these explanations, their computation faces significant scalability challenges. In this work, we address this gap by proposing a novel abstraction-refinement technique for efficiently computing provably sufficient explanations of neural network predictions. Our method *abstracts* the original large neural network by constructing a substantially reduced network, where a sufficient explanation of the reduced network is also *provably sufficient* for the original network, hence significantly speeding up the verification process. If the explanation is insufficient on the reduced network, we iteratively *refine* the network size (by gradually increasing it) until convergence. Our experimental results demonstrate that our approach substantially enhances the efficiency of obtaining provably sufficient explanations for neural network predictions while additionally providing a fine-grained interpretation of the network's decisions across different abstraction levels. We thus regard this work as a substantial step forward in improving the feasibility of computing explanations with formal guarantees for neural networks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"explainability",
"XAI",
"explainable AI"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/02eca24063051eeafe6ab359bdbde39b690af5f1.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/5a6782ca9bca0eacb9107367df055b906f40639d.zip"
},
"title": {
"value": "Abstracting and Refining Provably Sufficient Explanations of Neural Network Predictions"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Iq1qIsc2s | Revisiting Positional Information in Transformers in the era of Fused Attention | main | Active | Efficient Vision Transformers;Position Embeddings;CUDA | applications to computer vision, audio, language, and other modalities | 5;6;8 | 4;2;2 | 2;4;3 | 2;2;3 | 3;4;3 | 6.333333 | 2.666667 | 3 | 2.333333 | 3.333333 | -0.755929 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you expand on the justification for RoPE's superior performance compared to RPB, beyond the intuitive explanations provided?\n- Would the gains in efficiency scale to larger model size and resolution combinations?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The expansion of RoPE to images is presented clearly and makes intuitive sense\n- The paper does a great job of motivating and describing the CUDA implementation.\n- It is deeply appreciated that they go deeper and explore multiple different Rotary Positional Embeddings and report the comparisons"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes using RoPE embeddings, a popular and widely used method for LLMs for vision transformers motivated by imperial gains in accuracy and efficiency when applying to multiple models of various sizes. For this, they extend RoPE to fit image space and tackle the challenge of implementing it and studying multiple rotary positional embedding implementations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While a lot of small improvements are introduced in the method (3.3.2) the support or estimation of impact is somewhat lacking in the results.\n- While the impact of k is detailed and appreciated, the measurement of performance is limited to accuracy and makes it hard to understand the gains or sacrifices associated with the implementations.\n- The paper claims \"noteworthy gains\" across models, however the gains in Table 2 seem relatively limited (0.1-0.2) in most cases.\n- Limited novelty, while the expansion of RoPE makes sense, the novelty both in terms of method and results might be limited."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In comparing Table 4 and Table 5, the shared angles consistently outperform the non-shared angles. Why, then, did the authors choose to use non-shared angles in Axial RoPE?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is well-written and easy to follow. The figures demonstrate the ideas clearly."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the use of Rotary Position Embeddings (RoPE) in vision transformer models and provides an analysis of the differences between Relative Positional Biases (RPE) and RoPE through empirical experiments. Additionally, it proposes a fused CUDA-based implementation to improve inference speed. The paper also presents various design choices for RoPE, supported by experimental results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Unclear Contribution:** The novelty of the paper is uncertain, as RoPE has been previously applied to vision transformers. Notably, Heo et al. (2024) and Chu et al. (2024) have already explored RoPE in this context, with 2D RoPE in particular resembling Heo et al.'s work. Further discussion is needed to clarify the differences between the current approach and previous implementations. A comparison in the experimental section is also recommended. Additionally, the authors should consider reorganizing the contribution sections, as the first two contributions appear unconvincing.\n\n- **Inconclusive Results:** The paper lacks a definitive conclusion regarding the performance of 2D RoPE versus Axial RoPE. For instance, Table 4 shows that 2D RoPE outperforms Axial RoPE, warranting further discussion.\n- **Limited Generalization Testing:** The paper does not assess the generalization ability of Axial RoPE across downstream tasks (e.g., detection and segmentation). Additional experiments to showcase RoPE’s generalization potential are recommended."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Explain in detail the actual contributions of this paper and what novel ideas where brought in?\nIs Axial RoPE your contribution or has it been taken from another paper and you just did a lot of analysis on it?\nWhy do we even need RoPE if no bias gives best results?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "Instead of applying RPB which slows down the attention computation in flash attention modules, the paper devices a way to add positional embeddings before attention is computed.\nSince RoPE implementation is not dependent on which attention module is used, it can be integrated with any of the modern fast modern attention implementation unlike RPB.\nDeveloped an efficient CUDA implementation for RoPE with an easy-to-use Python wrapper.\nFound that applying RoPE to a fraction of embedding is enough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes Rotary Postional Embeddings as a replacement for relative positional bias in transformers. They shows that it is leads to better accuracy and faster implementation. The paper tries to tackle the issue of latency when RPB is used with modern attention implementations such as flash attention."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The fact that RoPE will improve performance in ViT is not a novel idea and has already been shown in \nP. Jeevan and A. Sethi, \"Resource-efficient Hybrid X-formers for Vision,\" 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2022, pp. 3555-3563, doi: 10.1109/WACV51458.2022.00361. This paper has not been cited.\nThe scope of this paper is limited to cases when fused attention kernel is used. When RPB is introduced in this case, it hampers the fast attention compute. \nMost of the paper is just a review of postional embeddings and biases. \nTable 3 shows best performance when there is no bias introduced. The paper does not explain why this is so and also why even RoPE is needed then.\nThe ablations and experiments needs to more elaborated."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We provide insights about training efficiency concerning positional embeddings in transformer-based vision models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting Positional Information in Transformers in the era of Fused Attention},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Iq1qIsc2s},\nnote={under review}\n}"
},
"abstract": {
"value": "Imparting positional information has been a crucial component in Transformers due to attention's invariance to permutation. Methods that bias attention weights, like Relative Positional Bias (RPB), have been preferred choice in more recent transformer-based architectures for vision. In parallel, fused attention has become the standard implementation for attention, largely thanks to open source solutions such as Flash Attention and FMHA. However, it is not trivial to fuse explicit biasing or masking of attention weights into a fused attention kernel without affecting its performance. In this scenario, position embeddings present themselves as a viable replacement for attention weight biases. Position embeddings are applied to the tokens directly, decoupled from the attention mechanism, thereby sidestepping the problems that arise with attention weight biases in fused kernels. In this work, inspired by the booming LLM landscape, we analyze the applicability of Rotary Position Embeddings (RoPE) as a replacement for RPBs in vision models. Unlike RPB which explicitly biases attention weights, RoPE biases the dot product inputs (query and key) directly and ahead of the attention operation. We empirically show the prowess of RoPE over RPBs in terms of accuracy and speed. We study multiple implementations of RoPE and show that it is sufficient to use only a fraction of hidden dimensions for RoPE to achieve competitive performance. We also develop a fast implementation for Axial RoPE. Together with the most performant fused attention implementations, and our fast RoPE implementation, we observe inference speedups compared to RPB with improved or similar accuracy. We foresee RoPE as a replacement for RPBs, paving the way for the widespread adoption of fused attention in transformer-based vision models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Efficient Vision Transformers",
"Position Embeddings",
"CUDA"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d3e24f5ed69a104f61923ecc0eabafbeaf2d27bb.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Revisiting Positional Information in Transformers in the era of Fused Attention"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Iu2Yte5N6 | Rapid Selection and Ordering of In-Context Demonstrations via Prompt Embedding Clustering | main | Active | in-context learning;order sensitivity;LLMs;clustering;cluster-based search;positional encoding;attention mask;serial-position effect;cluster-based search | generative models | 5;5;6 | 3;3;4 | 2;3;2 | 2;2;3 | 3;2;3 | 5.333333 | 3.333333 | 2.333333 | 2.333333 | 2.666667 | 1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* How does a variant of Figure 3b with demonstrations instead of chunks of text compare?\n* Do the prompts that share a close representation get similar scores? (what is the standard deviation ?)\n* How does the performance change with the number of intermediate demonstrations? ( Some insights are given, but more results would significantly improve the demonstration)"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Running large language models is costly, and few-shot in-context learning is a common approach to alleviate the cost. The proposed method is simple and greatly reduces search time, making a practical contribution. \n* Even though the theoretical assumptions are strong, their partial derivative analysis is original and clearly advocates for the clustering property.\n* The cluster-based search proposed by the authors is well explained."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors investigate the few-shot setting and the order of given demonstrations/examples in the prompt. Analyzing the last hidden state of the last layer of decoder-only transformers, they study the clustering property, which prompts sharing the first and last demonstration. Experiments are conducted in two domains: classification and reasoning. Each is divided into two tasks, and the classification sub-tasks are further modified into symbolic ones.\nThe explanation proposed is that this property depends highly on the causal attention mask and the positional encoding. The first demonstration clustering depends on the causal attention mask. However, the last demonstration clustering depends on a more complex interplay of the causal attention mask and the positional encoding.\nFollowing their findings, the authors propose a selection and ordering method based on the uncovered clusters. Experiments are conducted using their methods with an already-used entropy-based search. They compare their methods with an oracle and unmodified entropy methods. Their findings show that the clustering-based method while suffering a slight drop in performance, their method is more than 90% faster."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Too little evidence of clustering is given on the classification tasks, and clustering is unclear on The 2D projection (Figure 1, Figure 4).\n* Few experiments have been done varying the number of demonstrations and the pool size; it would be really beneficial to give some insight on the scaling possibility of the method. \n * A more thorough analysis of the results would be appreciated to confirm the findings, for example: Do the prompts sharing a close representation share similar scores? (what is the standard deviation ?) How does the performance change with the number of intermediate demonstrations? ( Some insights are given, but more results would greatly improve the demonstration). \n* Not enough selection methods are considered for comparison in terms of time and scores.\n* A table showing time performance and or gap with other methods is needed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "My main concerns have been listed above. I look forward to the authors' response and am willing to reopen and adjust the score upward."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Clear Argumentation: The paper is well-structured, with clear explanations that make the objectives and contributions easy to follow.\n2. Robust Proofs: The theoretical analysis is thorough, supporting the proposed mechanisms in in-context learning.\n3. Comprehensive Experiments: The experiments are detailed and varied, effectively demonstrating the method’s efficacy across multiple tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper explores the issue of demonstration order sensitivity in large language models (LLMs) during in-context learning (ICL) and uncovers a clustering phenomenon in the embedding space, where prompts with the same first and last demonstrations tend to cluster together. Through theoretical analysis and empirical evidence, the paper identifies that this clustering effect stems from the interaction of causal attention masks and positional encoding. Moreover, they propose a \"Cluster-based Search\" method that significantly reduces the computational complexity of selecting and ordering demonstrations while maintaining high model performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The models used in this study seem somewhat outdated. Models with the equivalent size should include newer architectures, such as LLaMA 3, Phi, or similar. Why were these not used?\n2. The datasets and tasks included in the study are limited. For instance, why is there no mathematical task such as GSM8k included in the paper\n3. While the authors highlight the importance of the first and last demonstrations in ICL, the figures in the paper suggest that the first demonstration may be particularly or even most significant. However, in the cluster-based method, the authors did not conduct an ablation study that uses only the first or only the last demonstration in clustering to analyze the contributions of the first and last demonstrations independently."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to my concerns in the weakness part."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The proposed idea of cluster-based search is simple yet effective for ICL.\n* The performance of the proposed method, especially efficiency improvement, is very promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studied the ordering effects of demonstrations in in-context learning (ICL) and claimed that the first and last demonstrations are the most crucial ones for effective demonstrations using both empirical and theoretical analyses. Based on this observation, this paper proposed a cluster-based search method to find out effective demonstration orders (considering only the first and last demonstrations instead of all demonstrations), which will not suffer from the efficiency issue in Exhaustive Search. The experiments showed that the proposed method achieve small drop in accuracy but significant improvement in efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Some claims are not well supported by the empirical analyses. The cluster structure of GPT-2 model in Figure 1 seems unclear, compared to the other two LLMs. Figure 3 (a) shows that the clusters also share the same second demonstrations with high percentage, and for the two bottom figures, the percentage of sharing the same second demonstrations is even higher than the percentage of sharing the same last demonstrations. These observations may be conflict with the main claim of this work. Also, the analyses about the last demonstration seem to be less convincing, e.g., lines 340-346.\n* The theoretical analyses are counter intuitive. According to Prop. 4.1, the embedding of the transformer layers will eventially the same if two promopts share the same first input token. I cannot understand this claim in the proof also, in which the authors mentioned that \"if causal attention mask is applied, then x_1(t) = x'_1(t) for all t >= 0.\" I am not sure why this assumption holds. Intuitively, if this proposition holds, I may infer that only the first demonstration will affect the performance and the last demonstration will not matter too much, which is different from the authors' claim.\n* More comprehensive experiments are required. In Table 1, the case of Random demostrations is not included. It would be useful to also compare with Random ordering as in Table 2. Also, they authors used k=4 in the experiments, it might be also important to evaluate larger k values, e.g., 10 or 20. The main claim of this paper is that the demonstrations in the middle are not very important to the performance of ICL, but using only a few demonstrations in the middle (as in the experiments) may not be as convincing as using many demonstrations in the middle."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We accelerate selection and ordering of in-context demonstrations in self-adaptive ICL settings by leveraging our newfound clustering property in prompt embedding spaces."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024rapid,\ntitle={Rapid Selection and Ordering of In-Context Demonstrations via Prompt Embedding Clustering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Iu2Yte5N6},\nnote={under review}\n}"
},
"abstract": {
"value": "While Large Language Models (LLMs) excel at in-context learning (ICL) using just a few demonstrations, their performances are sensitive to demonstration orders. The reasons behind this sensitivity remain poorly understood. In this paper, we investigate the prompt embedding space to bridge the gap between the order sensitivity of ICL with inner workings of decoder-only LLMs, uncovering the clustering property: prompts sharing the first and last demonstrations have closer embeddings. We explain this property through extensive theoretical analyses and empirical evidences. Our finding suggests that the positional encoding and the causal attention mask are key contributors to the clustering phenomenon. Leveraging this clustering insight, we introduce Cluster-based Search, a novel method that accelerates the selection and ordering of demonstrations in self-adaptive ICL settings. Our approach substantially decreases the time complexity from factorial to quadratic, saving 92% to nearly 100% execution time while maintaining comparable performance to exhaustive search."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"in-context learning",
"order sensitivity",
"LLMs",
"clustering",
"cluster-based search",
"positional encoding",
"attention mask",
"serial-position effect",
"cluster-based search"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/05717329e13a075b995d526867cf3d7cb6b002eb.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Rapid Selection and Ordering of In-Context Demonstrations via Prompt Embedding Clustering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Iuw1jcIrf | MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code | main | Active | large language model;mathematical reasoning;continued pretraining | foundation or frontier models, including LLMs | 6;8;8 | 4;4;3 | 3;4;3 | 3;4;3 | 3;4;3 | 7.333333 | 3.666667 | 3.333333 | 3.333333 | 3.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Python was chosen for code snippets; it it possible to use specialized math software language instead (e.g., Mathematica)? This is not a direct limitation of this paper, but a possible future direction."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "MathCoder2’s MathCode-Pile corpus is rigorously curated and filtered from diverse math-related sources, including web data, synthetic data, specialized code, and textbooks. This ensures relevance, reduces noise, and provides a comprehensive dataset tailored specifically for mathematical reasoning, which is essential for pretraining LLMs in this area.\n\nMathCoder2 demonstrates significant gains on multiple mathematical reasoning benchmarks, outperforming comparable models across different tasks. The improvement underscores the effectiveness of continued pretraining on the structured MathCode-Pile corpus and shows MathCoder2's potential for real-world applications in math-intensive fields."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach for enhancing mathematical reasoning in large language models (LLMs). Unlike previous models that used math-related code without detailed explanations, MathCoder2 generates mathematical code paired with natural language reasoning. This process involves filtering a large math-related dataset from web pages, synthetic sources, code, and textbooks to build a high-quality corpus called MathCode-Pile.\n\nThis dataset consists of 19.2 billion tokens and includes LaTeX-extracted mathematical expressions, conditions, results, and Python code to capture the underlying reasoning. MathCoder2 uses this corpus to significantly improve performance on various mathematical benchmarks, achieving results competitive with state-of-the-art models. Moreover, the MathCoder2 framework is fully open-source, which supports reproducibility and transparency in model training and data processing. This work sets a foundation for future research by focusing on reasoning capabilities through detailed code integration."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are no major weaknesses."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Code Execution in Evaluation: Is the Python code generated and executed during benchmark evaluations? Clarifying this would help to understand the role of Tool-Integrated Reasoning in the observed performance improvements.\n* Generalization to Formal Proofs: Can the method be extended to generate formal proofs in languages like Lean or Coq? Specifically, how well does the approach handle abstract reasoning steps that require formal verification, which might be better suited to proof assistants rather than executable Python code?\n* Independent Reasoning Steps: Would separating reasoning steps and corresponding code into independent examples still yield significant improvements? Such an ablation could help assess the criticality of their alignment in the dataset."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Originality: The paper introduces a novel method of generating and pairing Python code with natural language reasoning steps, enhancing the mathematical reasoning capabilities of large language models.\n* Quality of Dataset: The MathCode-Pile dataset, comprising 19.2B tokens, is a significant contribution, demonstrating meticulous curation from diverse sources like web data, math textbooks, and synthetic examples.\n* Significant Performance Gains: The use of this dataset leads to notable improvements across various models, including Llama-3-8B, DeepSeekMath-7B, and Code-Llama-7B, especially on benchmarks like MATH and GSM8K.\n* Detailed Methodology: The process of extracting LaTeX expressions, conditions, and results to generate corresponding Python code is well-documented, offering transparency and reproducibility.\n* Open-Source Commitment: The release of data processing and training code enhances the research community's ability to validate and build upon this work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper \"MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code\" explores enhancing the mathematical reasoning capabilities of LLMs through continued pretraining on a novel dataset called MathCode-Pile. This dataset is constructed from various sources, including math-related web data, textbooks, and synthetic data. A key contribution is the generation of paired natural language reasoning steps and Python code, aimed at improving the alignment between mathematical reasoning and executable code. The authors demonstrate significant improvements in mathematical reasoning benchmarks such as MATH and GSM8K, using models fine-tuned with MathCode-Pile. The paper also emphasizes the open-source nature of their data processing and training pipeline."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Generalizability of Code Generation: The method’s applicability to more abstract or advanced mathematical domains is unclear, particularly beyond high-school-level math.\n* Evaluation Uncertainty: It is ambiguous whether the generated Python code is executed during benchmark evaluations or merely used for pretraining, leaving questions about its practical impact.\n* Scope Limitation: The focus on grade-school-level mathematics is not explicitly emphasized, potentially misleading readers about the dataset’s broader applicability.\n* Ablation Study Depth: While the ablation studies show the value of the synthesized code, further exploration into the necessity of aligning reasoning steps with code versus treating them as independent could provide deeper insights."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "I have interested about whether the MathCode-Pile’s strong focus on mathematical reasoning might impact the model’s performance in non-mathematical domains. For example, whether this dataset would enhance the model’s general coding abilities beyond math-focused tasks."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The author combining symbolic math reasoning with executable code in the dataset, MathCode-Pile, which is noval. This innovative methodology extends prior research, making MathCode-Pile a significant resource for advanced math reasoning tasks.\n\n2. The paper is clearly organized, with a well-structured explanation of each step in the MathCode-Pile creation and model evaluation process. Figures and tables also effectively illustrate the overall data pipeline.\n\n3. This work has great significance in advancing mathematical reasoning within language models. MathCoder2, using MathCode-Pile, achieves superior results on math benchmarks, demonstrating the potential of code-paired reasoning data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes MathCode-Pile, a 19.2B-token dataset of math text and Python code. The dataset includes high-quality math-related web content, code with mathematical packages, math textbooks, and synthetic data. In addition, they present MathCoder2, a family of large language models with enhanced mathematical reasoning capabilities over MathCode-Pile."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks a analysis of potential data leakage between MathCode-Pile and evaluation benchmarks, which could artificially inflate model performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mathcoder,\ntitle={MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Iuw1jcIrf},\nnote={under review}\n}"
},
"abstract": {
"value": "Code has been shown to be effective in enhancing the mathematical reasoning abilities of large language models due to its precision and accuracy. Previous works involving continued mathematical pretraining often include code that utilizes math-related packages, which are primarily designed for fields such as engineering, machine learning, signal processing, or module testing, rather than being directly focused on mathematical reasoning. In this paper, we introduce a novel method for generating mathematical code accompanied with corresponding reasoning steps for continued pretraining. Our approach begins with the construction of a high-quality mathematical continued pretraining dataset by incorporating math-related web data, code using mathematical packages, math textbooks, and synthetic data. Next, we construct reasoning steps by extracting LaTeX expressions, the conditions needed for the expressions, and the results of the expressions from the previously collected dataset. Based on this extracted information, we generate corresponding code to accurately capture the mathematical reasoning process. Appending the generated code to each reasoning step results in data consisting of paired natural language reasoning steps and their corresponding code. Combining this data with the original dataset results in a 19.2B-token high-performing mathematical pretraining corpus, which we name MathCode-Pile. Training several popular base models with this corpus significantly improves their mathematical abilities, leading to the creation of the MathCoder2 family of models. All of our data processing and training code is open-sourced, ensuring full transparency and easy reproducibility of the entire data collection and training pipeline."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language model",
"mathematical reasoning",
"continued pretraining"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8b6e886f9c388704e549d9ac1896d45217b80132.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/d2acc3266ff8e16fc825a09c26044f43a7c2ce79.zip"
},
"title": {
"value": "MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1IuwdOI4Zb | Animate-X: Universal Character Image Animation with Enhanced Motion Representation | main | Active | Animation;Anthropomorphic;Video Generation;Pose | generative models | 5;5;6;6 | 5;5;5;4 | 2;2;3;3 | 2;2;3;3 | 2;2;3;3 | 5.5 | 4.75 | 2.5 | 2.5 | 2.5 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please address the concerns in the weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The visual results of Animate-X demonstrate notable improvements across various characters compared to existing animation methods.\n2. Comprehensive experiments and ablation studies are presented."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work presents an animation framework capable of animating anthropomorphic characters, along with an accompanying benchmark for animated anthropomorphic characters. Specifically, the framework introduces an Implicit Pose Indicator and an Explicit Pose Indicator to provide rich pose guidance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. No video samples from A2Bench are provided; only selected frames are shown in the paper. Given that the generated videos still struggle with maintaining strict logic and good spatial and temporal consistency, I question the rationale for using T2I + I2V to generate benchmark videos. Additionally, the benchmark lacks detailed information, such as video length and frame rate. Were any additional motion prompts used to generate videos from images? If so, what is their diversity and complexity?\n2. The necessity of a pose pool and the selection of an anchor pose image need clarification. What operations are involved in the \"align\" process, specifically regarding translation and rescaling? Why not use random translation and rescaling instead of relying on an anchor pose image?\n3. The effectiveness of the Implicit Pose Indicator (IPI) is also in question. The motivation for the IPI is that sparse keypoints lack image-level details, while IPI aims to retrieve richer information. However, Tables 7 and 8 indicate that Animate-X achieves comparable performance to Animate-Anyone and UniAnimate on human videos. This suggests that the IPI does not provide any benefits for human animation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See Weakness. If the authors can address all my concerns, I am willing to raise the score."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.The authors introduce A2Bench, which is helpful for the evaluation of character animation.\n2.Both qualitative and quantitative experiments are conducted to evaluate the performance of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the animation of non-human characters, addressing two main issues:\n1)Sole pose skeletons lack image-level details.\n2)Pose alignment in the self-driven reconstruction training strategy.\nTo resolve these issues, the paper introduces a Pose Indicator, comprising an Implicit Pose Indicator and an Explicit Pose Indicator. Experimental results demonstrate that the proposed Animate-X achieves effective performance in character animation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.Some parts of the writing can be quite confusing, words and sentences are bad orgnized. For example, in P5 L260, what exactly is in the pose pool? And how is it aligned with the reference?\n2.The dataset includes 9,000 independently collected videos. Could you analyze these videos, and did other baselines use the same data for training? If not, could this lead to an unfair comparison?\n3.The authors first identify the weaknesses of previous methods as a conflict between identity preservation and pose control. They further expand on this point by highlighting two specific limitations: the lack of image-level details in sole pose skeletons and pose alignment within the self-driven reconstruction training strategy. However, while the authors clearly state that differences in appearance between characters and humans can negatively impact animation, learning image-level details seems to contradict their viewpoint \"sole pose skeletons lack image-level details\", making this contribution appear more like a forced addition.\n4.Additionally, the visualization in Figure 7 provided by the authors also supports w3. The inclusion or exclusion of the IPI appears to have minimal impact on the motion of the Ref image, and with IPI, part of the foot in the Ref image is even missing. This raises doubts about the effectiveness of the IPI module and seems inconsistent with the authors' stated motivation.\n5.Pose augmentation has already been widely explored in existing methods, such as MimicMotion, which makes the innovation in this paper insufficient.\n6.This paper lacks comparisons with similar methods, such as MimicMotion, which makes the experimental results less convincing.\n[1]MimicMotion: High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could the authors provide more details on the construction of the pose pool and alignment pool, such as the pool sizes and how poses are selected from the training set?\n \n- Comparing the results in Table 4 and Table 1, Animate-X outperforms the baselines even without pose augmentation (EPI). Could the authors provide a deeper analysis of why the Implicit Pose Indicator (IPI), with fewer parameters, outperforms the reference network?\n \n- What happens if the reference pose differs significantly from the candidates in the pose pool and alignment pool? The authors should provide a robustness analysis for this scenario and consider adding a difficulty level split for A2Bench.\n \n- Could aligning the driving pose to a \"standard\" one in the pose pool further improve generation quality?\n \n- In the supplementary materials, the authors show results in various styles, yet most styles in A2Bench are in \"3D render style.\" Would it be possible to add a \"style trigger word\" in the prompt template to diversify and strengthen the benchmark?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper introduces a new augmentation method that enhances pose robustness for character animation techniques.\n \n- A novel module is proposed to integrate the driving pose with the reference image without relying on a reference network.\n \n- A new benchmark is established for evaluating anthropomorphic characters.\n \n- The quality of animation results is good, even reference characters do not have leg or arm."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper highlights that character animation models trained exclusively on human-only datasets struggle to learn motion patterns from driving videos, often leading to overfitting on the driving pose and poor generalization to anthropomorphic characters.\n\nTo address this issue, the authors propose a novel character animation framework called Animate-X, which incorporates two Pose Indicators. The Implicit Pose Indicator extracts motion and integrates it with CLIP features, while the Explicit Pose Indicator supports an augmentation pipeline during training that encourages the model to learn motion from misaligned pose sequences.\n\nAdditionally, a new benchmark is established for evaluating anthropomorphic characters. Experiments across multiple datasets demonstrate the effectiveness of the proposed method for animating anthropomorphic characters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper lacks a detailed analysis of the construction of the augmentation pool, making it difficult to reproduce the method.\n \n- There is insufficient in-depth analysis of the model design, such as why the Implicit Pose Indicator (IPI) outperforms the reference network, which has more learnable parameters.\n \n- Most styles in the A2Bench benchmark are \"3D render style\"; the benchmark should include a wider variety of visual styles."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Does this model still use any input videos in the inference stage? I am asking this question because there are no input driving videos in the “Animating anthropomorphic characters” section of the supplementary materials. Could the author explain the inference setting? If there is a corresponding driving video, it is better to also include them into the results."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation of this work is clear, it comes from an in-depth analysis of the failure cases of existing works. The alignment between the driving signal and reference image is critical to the fidelity of character animation. The authors propose an effective method to tackle this problem.\n2. The experimental results, especially the video results, are reasonable and interesting. The proposed method shows state-of-the-art performance and outperforms baselines in animating in-the-wild characters. This indicates that the training data is well utilised and shows that the proposed method helps improve the generalisation ability of the animation model.\n3. The evaluation benchmark is valuable to the research community. It can help follow-up works measure their methods comprehensively."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed Animate-X, a universal animation framework based on diffusion models. The key insight of this work is that existing image animation frameworks are only focused on the human images and fail to extract the movement pattern of the driving video, leading to the rigid retargeting of the driving pose to the target reference image. The authors propose two pose indicators to address this issue, which can capture comprehensive motion patterns from the driving video. The implicit pose indicator helps retrieve relevant features from the driving video, while the explicit one simulates the unaligned driving poses in the inference stage. To evaluate the approaches, the authors also propose a new benchmark which contains in-the-wild and unmatched driving sequence/reference image pairs. Experiments show that the proposed method outperforms state-of-the-art methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The backbone of this work remains unchanged, it is quite similar to the prior works like your reference AnimateAnyone and MagicAnimate, which makes this work a straightforward extension of existing works and thus reduces the contribution of this paper.\n2. Leveraging driving videos to boost the animation performance has been already explored in a few prior works like [1]. The implicit pose indicator is also a similar design which aims to extract comprehensive motion patterns to improve the animation performance.\n3. The explicit pose indicator is a little bit confusing because I think this module is an augmentation of the driving pose sequences. Therefore, the novelty of the proposed method is not very significant. It is reasonable that the augmentation can break the strong correspondence between the driving video and motion representation. What is the advantage of this training time rescale augmentation and over the test time pose alignment? Are there any ablation studies about this? \n4. From the results of the animation of anthropomorphic characters, the example of a banana shows that although the animation result looks like a banana, the motion precision is decreased. Therefore, I think the implicit pose indicator could harm the motion precision. The authors could conduct more experiments to study this issue.\n\n[1] X-portrait: Expressive portrait animation with hierarchical motion attention"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A universal animation framework based on LDM for various character types (collectively named X), including anthropomorphic characters."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024animatex,\ntitle={Animate-X: Universal Character Image Animation with Enhanced Motion Representation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1IuwdOI4Zb},\nnote={under review}\n}"
},
"abstract": {
"value": "Character image animation, which generates high-quality videos from a reference image and target pose sequence, has seen significant progress in recent years. However, most existing methods only apply to human figures, which usually do not generalize well on anthropomorphic characters commonly used in industries like gaming and entertainment. Our in-depth analysis suggests to attribute this limitation to their insufficient modeling of motion, which is unable to comprehend the movement pattern of the driving video, thus imposing a pose sequence rigidly onto the target character. To this end, this paper proposes $\\texttt{Animate-X}$, a universal animation framework based on LDM for various character types (collectively named $\\texttt{X}$), including anthropomorphic characters. To enhance motion representation, we introduce the Pose Indicator, which captures comprehensive motion pattern from the driving video through both implicit and explicit manner. The former leverages CLIP visual features of a driving video to extract its gist of motion, like the overall movement pattern and temporal relations among motions, while the latter strengthens the generalization of LDM by simulating possible inputs in advance that may arise during inference. Moreover, we introduce a new Animated Anthropomorphic Benchmark ($\\texttt{$A^2$Bench}$) to evaluate the performance of $\\texttt{Animate-X}$ on universal and widely applicable animation images. Extensive experiments demonstrate the superiority and effectiveness of $\\texttt{Animate-X}$ compared to state-of-the-art methods. Please use any web browser to open the $\\textit{.html}$ file in the $\\textit{Supplementary Materials}$ to view the generated videos."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Animation",
"Anthropomorphic",
"Video Generation",
"Pose"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8e2e5fdc52e01a8cc6e67373adc00cb2ab79361b.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f44793343dd661db9933d8ef586befe8d2324443.zip"
},
"title": {
"value": "Animate-X: Universal Character Image Animation with Enhanced Motion Representation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1IwoEFyErz | Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models | main | Active | diffusion Model;watermark;low-dimensional subspace;consistency;robustness | generative models | 5;5;5;5 | 5;4;5;4 | 2;2;3;3 | 2;2;2;3 | 2;3;2;1 | 5 | 4.5 | 2.5 | 2.25 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses part."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Shallow Diffuse's primary strength lies in its utilization of the low-rank property of the PMP's Jacobian matrix to minimize the visual impact of watermarks, thereby attaining visual consistency within a training-free watermark framework.\n2. The injected watermark is more robust against several image distortions than existing baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed Shallow Diffuse, a watermarking technique for diffusion models. The method is well-motivated and with proper theoretical justification. The proposed Shallow Diffuse has several key advantages compared to existing diffusion watermarks, 1) It is a training-free watermark but simultaneously maintains the consistency between watermarked and original images. 2) It is more robust than existing baselines, achieving nearly no performance drop under different robustness tests. Shallow Diffuse also considers two scenarios including the server (protect generated image) and user (protect existing image) scenarios for injecting the watermark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation of this paper is poor, for instance, the ablation studies (Appendix C) and index of experimental results (Table 4) are incomplete. Therefore, this leads to a shortage of critical ablation studies.\n2. What is the performance of multi-key identification, specifically, is it possible for the Shallow Diffuse to inject multiple watermarks and distinguish between them?\n3. The image distortions are less than that in previous studies, such as Tree-Ring, where they apply 6 distortions.\n4. Can DiffPure purify the watermarked patterns?\n5. The findings in Table 4 are confusing. It appears that employing channel averaging enhances robustness against image distortions. However, channel averaging involves averaging clean and watermarked images across specific channels. As per my understanding, this process might reduce watermark robustness. Can you explain this observation?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The table in the paper is very difficult to read clearly.\n\n2. In Table 1, for the CLIP-Score index, yours is 0.3285, which seems to be the worst. Please explain further.\n\n3. Please explain why the filter size in the Gaussian blurring is 8 × 8 and how the standard deviation is selected.\n\n4. As can be seen from Table 2, the PSNR and SSIM of most methods are very low, so it is easy for human eyes to find modification traces, which easily leads to the risk of watermarked images being maliciously broken. Please further explain the visual quality of the generated watermarked image."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The originality of this paper is not great, but its quality, clarity and significance are good. It has the support of rich theoretical basis and has advantages in theoretical proof."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a watermarking technique Shallow Diffuse. Unlike existing approaches that integrate watermarking throughout the entire diffusion sampling process, Shallow Diffuse decouples these steps by leveraging the presence of a low-dimensional subspace in the image generation process. This method ensures that a substantial portion of the watermark lies in the null space of this subspace, effectively separating it from the image generation process."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The table in the paper is not very well drawn, it is very difficult to read, especially the header. At the same time, the experimental part is not detailed enough. For example, should the comparison method reproduce the results or use the pre-training model?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Besides Tree-Ring and Stable Signature, I think there are more existing watermarking methods that introduce small perturbation to the image to embed watermark, like StegaStamp. To show the proposed method actually preserves image quality, I think the authors should compare the method with some watermarking methods that watermark the image after image generation with a small perturbation.\n2. In the robustness part, I think the regeneration attacks and some adversarial perturbation should be evaluated on the proposed method to see whether the proposed method is actually robust under various attacks.\n3. Since the authors mention the user scenario, if multiple users rewatermark the same image with the proposed method, can the watermark embeded by the specific user be detected in this circumstance?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed method has smaller impact on the generated images compared to the existing watermarking methods designed for diffusion models.\n2. Experiments are carried on several image-prompt datasets to show the effectiveness of the proposed methods.\n3. The robustness of the propsoed method is evaluated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposed a new image watermarking method that embeds a watermark into the null space of a specific step during image denoising in diffusion model. It shows that the proposed watermarking method have smaller impact on the generated images compared to the existing methods like Stable Signature and Tree-Ring. Additionally, the proposed method shows good robustness to image processing methods like JPEG and Gaussian blur."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Compared to Tree-Ring, the technical contribution of the proposed method is limited.\n2. In the experimental part, the authors mainly compare their method with the watermarking methods that embed watermark into the semantic space like Tree-Ring which changes the image a lot. More other watermarking methods should be evaluated.\n3. In the robustness part, the authors only evaluate the robustness of the proposed method on some common perturbation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Shouldn't the formula $\\hat{\\boldsymbol{x}}_{0, t}:=\\boldsymbol{f}_{\\boldsymbol{\\theta}, t}\\left(\\boldsymbol{x}_{t}+\\lambda \\Delta \\boldsymbol{x}\\right)$ on line 360 be $\\hat{\\boldsymbol{x}}_{0, t}:=\\boldsymbol{f}_{\\boldsymbol{\\theta}, t}\\left(\\boldsymbol{x}_{t}\\right)$\n\n2. Could you specify the parameters used in the Shallow Diffuse method in Section 5, such as the embedding channels and watermark radius?\n\n3. The experiments in Appendix C only provide results, could you include some analysis?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The method utilizes the local linearity of low-dimensional subspaces. As a watermarking method based on diffusion models, it maintains the consistency of generated images.\n\n2. This paper provides rigorous theoretical proof and presents a substantial number of computational formulas."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Current watermarking techniques based on diffusion models often embed watermarks directly into the initial noise,which can alter the data distribution. This paper proposes \"Shallow Diffuse,\" a method that disentangles the watermark embedding from the generation process by leveraging low-dimensional subspaces. This approach supports watermark embedding for both server-side and user-side applications while maintaining high robustness and consistency. Additionally, experiments were designed to validate robustness and conduct ablation studies across multiple datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The attack experiments are limited, consisting of only four fixed-parameter attacks, which do not demonstrate the method's robustness. For instance, the method can be viewed as a variant of treering, could experiments with additional attacks, such as rotation、regeneration be included?\n\n2. The theoretical assumptions of the method are built upon [1], but the experimental results yield a different range of t values compared to the theoretical analysis in [1]. Although this can be explained by the errors introduced by DDIM-Inv, it remains perplexing.\n\n3. The method relies on the properties of DDIM and DDIM-inverse, which may lack certain generalizability. It might not perform well for attacks executed in the latent space."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024shallow,\ntitle={Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1IwoEFyErz},\nnote={under review}\n}"
},
"abstract": {
"value": "The widespread use of AI-generated content from diffusion models has raised significant concerns regarding misinformation and copyright infringement. Watermarking is a crucial technique for identifying these AI-generated images and preventing their misuse. In this paper, we introduce *Shallow Diffuse*, a new watermarking technique that embeds robust and invisible watermarks into diffusion model outputs. Unlike existing approaches that integrate watermarking throughout the entire diffusion sampling process, *Shallow Diffuse* decouples these steps by leveraging the presence of a low-dimensional subspace in the image generation process. This method ensures that a substantial portion of the watermark lies in the null space of this subspace, effectively separating it from the image generation process. Our theoretical and empirical analyses show that this decoupling strategy greatly enhances the consistency of data generation and the detectability of the watermark. Extensive experiments further validate that our *Shallow Diffuse* outperforms existing watermarking methods in terms of robustness and consistency."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"diffusion Model",
"watermark",
"low-dimensional subspace",
"consistency",
"robustness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0c4a1e6882faaf20c6e3e457435444bf1a8c3177.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1JgWwOW3EN | BenchMol: A Multi-Modality Benchmarking Platform for Molecular Representation Learning | main | Active | Multi-Modality Learning;Benchmarks and Datasets;Drug Discovery;Molecular Representation Learning | datasets and benchmarks | 1;3;3;5;10 | 4;5;4;4;5 | 1;2;1;3;4 | 1;1;1;2;4 | 2;2;2;2;4 | 4.4 | 4.4 | 2.2 | 1.8 | 2.4 | 0.558069 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. The experimental results with the randomly initialised sequence-based models are quite intriguing and seem a bit counterintuitive, particularly as it pertains the linear probing, do you have any further intuition of what may be the underlying mechanism that provides them with such a remarkable inductive bias. Have you seen any dependence between model size and model performance in this particular setting?\n2. Some datasets can be quite dirty, a single molecule can be translated into multiple SMILES strings depending on the specific algorithm used, this leads to some datasets having the same molecule duplicated, but with different SMILES which makes it difficult to distinguish. Have you done any tests to detect these duplications (e.g., canonicalising the SMILES or using InChIKeys)?"
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "Main strengths\n---\n1. The paper tackles a really important issue in molecular representation learning, provides a sound and comprehensive benchmark of existing methods, provides a clear way to compare the strengths and weaknesses of available MRL techniques and allows for a clear and fair comparison across modalities. Further they provide the utilities for reproducing their results easy to use. This research will significantly move the field forward and constitutes a strong baseline from which new multimodal research can build upon.\n2. The paper is clearly written, results are concisely conveyed, the methodology is sound and detailed enough as to allow for the reproduction of the results.\n3. Tables and data displayed clearly demonstrate and support the main claims and insights drawn by the authors.\n4. Supplementary information is rich and comprehensive.\n\nMinor strengths\n----\n_(Details of the paper that do not have a direct bearing on my evaluation, but I think are commendable)_\n\n1. The insight regarding sequence-based models having enough inductive bias even when randomly initialised and with linear probing is highly interesting and could merit its further exploration.\n2. The experiments with multiple conformations for the image modality are really interesting, the insights drawn highly informative, and they go beyond what the scope of the paper was to give a really comprehensive evaluation of the benefits and idiosyncrasies of different modalities.\n3. Visual design in the Figures and Tables is crisp and facilitates the comprehension of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This study is a comprehensive examination of molecular representation learning (MRL) methods benchmarking, covering multiple molecular modalities (1D string representation, 1D fingerprints, 2D Graphs, 2D Images, 3D geometries, 3D geometry images, and video.\n\nThe study proposes 3 separate sets of datasets to evaluate all these modalities: MoleculeNet (pre-existing), MBANet (newly created), and StructNet (newly created). The first benchmark covers broad application in the biomolecular domain, the second benchmark evaluates the ability of MRL methods to capture basic molecular attributes, the third benchmark allows for discerning which MRL are more appropriate for which molecular types.\n\nThe study evaluates multiple MRL techniques and pre-trained models and draws 9 main insights from this extensive and large-scale examination, which I summarise as follows:\n\n1. All modalities are useful for almost every task (models from 6 modalities are the top 6 in performance).\n2. Ensembling multiple conformers can improve image-based MRLs.\n3. Sequence-based models (transformers and similar architectures) perform well even when randomly initialised and without fine-tuning, which suggests that they have good inductive biases.\n4. Geometry images and videos are resilient to different image formatting conventions (RGB vs BGR).\n5. Video modality is the best for recovering basic molecular information (MBANet benchmark).\n6. Pre-training models improve performance on recovering basic molecular information (MBANet), therefore, pre-training tasks are useful for this purpose.\n7. Performance on MBANet within MRLs leveraging the same modality is similar\n8. Modality determines whether the model is best performing at different types of molecules (StructNet benchmark).\n9. Certain pre-trained models perform worse against certain molecular types than their randomly initalised counterparts. Therefore, certain pre-training tasks might be better suited for certain molecular types and will be detrimental for others.\n\nFinally, the study presents tools for standarising multi-modal benchmarking on these datasets, provides splits for replicating results and utilities that are likely to accelerate multi-modal representation learning strategies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Main weakness\n---\nIn Appendix B1, Figure S1, the histograms for the atom counts of certain atoms like Si (c), Br (f), P (g), S (h), Cl (i), B (j), Se (k), and particularly Ge (l), seem to be completely skewed and quite limited in the independent variable values (0, 1, 2). It seems that they'd be better suited for a classification task. I'd argue that the Ge task introduces only noise to the final metric as there is only one count of value 1, the rest are value 0.\n\nTherefore, it will either be in training and will not be tested; or it will be in the testing and the model will not know that such a value is even possible. I see that the issue of transforming them into classification tasks would be that the the average of the classification and regression metrics would not make that much sense and this could be alleviated by using correlation metrics like Matthew's correlation coefficient (or a multi-class generalisation thereof) for classifcation, and Spearman's or Pearson's correlation coefficient for regression. Another alternative, probably simpler alternative, could be to remove the subtasks that are too skewed to be useful. I am not sure which option is best and I am open to the authors to explain their rationale for including them in their study. I think that, the limitation of this specific part of the benchmark should be at least acknowledged in the main text. \n\nThis is also applicable to Figure S2 - b.\n\nThis is the only major weakness in an otherwise excellent study."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The behaviour of different molecular representations is an important question, despite not being related to multi-modality. Would the authors consider an angle that examined the performance on the toy task described across molecular representations. Failures of representations to perform simple operations would be highly impactful."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The authors point out that benchmarking in molecular representation learning is littered with problems, such as unfair comparisons of methods arising from difference evaluation strategies (e.g. splitting differences) and the absence of a convenient unified benchmarking platform.\n\nThe authors perform a novel task in predicting basic attributes of molecules given their molecular representation of choice. It is surprising that this apparently simple task results in many failures, perhaps this aspect could be made a focus of the paper in the context of different molecular representations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors describe BenchMol, a benchmark for discriminative tasks in the molecular context with a focus on comparing methods for molecular representation learning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors have benchmarked methods for molecular representation, however true multi-modality comes from the underlying modality of the data, such as binary labels vs continuous labels vs 3D data vs dynamics data, and as such this benchmark is not a benchmark for multi-modal models in the way one would expect - the models themselves are single modality.\n\nSince this benchmark is not evaluating multi-modal molecular algorithms, there is no specific need addressed by this new benchmark that isn't already serviced by existing molecular benchmarks, e.g. QM9, MoleculeNet, OGB, etc."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* To summarize the findings of the paper, could you give a concise conclusion on which model/modality to choose in MRL-related tasks?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper gives a good survey for MRL methods using different modalities of molecular data.\n- The paper proposes two benchmark datasets and tests various methods on them. Interesting conclusions are made according to the results.\n- Given the numbers of experiments the paper have conducted, it is evident that the authors have put numerous efforts into this work, unifying code from different methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces BenchMol, a comprehensive and unified platform for molecular representation learning (MRL). BenchMol integrates seven major molecular modalities (fingerprint, sequence, graph, geometry, image, geometry image, and video) and evaluates 23 mainstream MRL methods across various tasks. The authors propose two new benchmarks, MBANet and StructNet, to systematically assess the performance and preferences of different modalities. Through extensive experiments, the paper provides findings into the strengths and weaknesses of various modalities and their suitability for different molecular types and tasks.\n\nThe paper is in general interesting to read, while there are a few concerns that need to addressed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The evaluations in the paper mainly focus on prediction accuracies. However, in many scenarios, such as virtual screening, computational cost is also very important. This is especially relevant in comparing methods using different modalities, while the paper completely ignores this aspect.\n- The paper's presentation quality falls short of expectations. While most contents are still comprehensible, many minor issues exist in the current version of the paper. For example, in Figure 2(b), the word wrapping of “preproc/essing” in is strange; also, what is “combing”? For another example, the paper does not provide good definitions for image, geometry image and video in the MRL context. Minor issues like these truly affect the understanding of the paper.\n- The findings in the experiments are interesting, but many of them are potentially superficial. They are often observations of the performance numbers, but fail to develop into more insightful discussions. For example, in section 5.3 “Fine-tuning of MBANet”, the paper mentions that models using the video modality significantly outperform those using other modalities. But *why* is that? The *findings* of this paper would be much more interesting if they can take one step further to develop into *insights*.\n- The design of the benchmarks seems questionable. The dataset contains only 10,000 molecules, which is a small size considering the vast chemical space. In this case, the video modality seems to be advantageous because the video models can see more frames of the molecules. For fairer comparison, models using other modalities should be also able to access the same amount of molecular conformations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "My questions is listed in the Weakness part."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The author conducts extensive experiments in evaluating property prediction performance of MRL methods on existing datasets. \n\nThe paper re-evaluate a large amount of existing molecular representation methods on moleculeNet."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces BenchMol, a comprehensive and multi-modality platform specifically designed for molecular representation learning. TThe authors introduce two novel datasets and corresponding benchmarks, namely MBANet and StructNet on newly defined MRL tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The authors propose a multi-modality benchmarking platform, yet the study predominantly focuses on the performance comparison of single-modality molecular representation learning \nmethods and missing multimodal molecular representation learning methods (E.g. [1]), which is a critical weakness point considering the data scope as introduced in this paper. \n\n2. The rationale for providing a multi-modality dataset that compares single modality MRL methods is not clear, given that existing packages such as RDKit and OpenBabel already facilitate the conversion between different modalities for a given molecule(E.g converting SMILES to 2D molecular graph). This raises questions about the contributions of the proposed benchmarks compared to readily available tools. \n\n3. It’s better to demonstrate what kind of research gap in machine learning for chemistry this paper is trying to address. What certain type of chemistry questions is this paper trying to address, that may benefit the AI4Science community. For example, in section E, what specific chemistry problems does the atom distribution prediction task try to solve? How does a correction prediction of the atom distribution can benefit the chemistry community? \n\n4. The provided link for accessing the datasets is currently non-functional, as attempts to access the OneDrive URL listed in the README file result in a 'This site can’t be reached' error. Therefore, I am not able to reproduce some of the experiments. \n\n---\n\nMinor concerns. \n\n5. The presentation of Figures S3 (Pg. 21) is somewhat disorganized, notably, the font size on the x-axis of figure c and f is inconsistent with the rest. \n\n6. The organization of the manuscript could be improved for better readability; specifically, the description of the molecular print method is positioned on Page 24, while other molecular MRL methods summarized on Page 6. In addition, it is better to put a reference or hyperlink of the MRL method within each table. \n\n---\n\nFor improving this dataset and benchmark paper, [2] can be possibly considered as a reference. \n\n[1] Wang Z, Jiang T, Wang J, et al. Multi-Modal Representation Learning for Molecular Property Prediction: Sequence, Graph, Geometry[J]. arXiv preprint arXiv:2401.03369, 2024.\n\n[2] Velez-Arce A, Huang K, Li MM, Lin X, Gao W, Fu T, Kellis M, Pentelute BL, Zitnik M. TDC-2: Multimodal Foundation for Therapeutic Science. bioRxiv [Preprint]. 2024 Jun 21:2024.06.12.598655. doi: 10.1101/2024.06.12.598655. PMID: 38948789; PMCID: PMC11212894."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well-written and easy to understand.\n2. The multi-modality methods are extensive, covering a broad range of modalities.\n3. The conclusions from the linear probing experiments on different modality methods in MoleculeNet (Section 5.2) are insightful and interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a multi-modality molecular property benchmark for molecular representation learning (MRL) methods. It generates various modalities, including fingerprint, sequence, graph, geometry, image, geometry-based image, and video, and constructs new benchmarks using data from PCQM4Mv2 and CHEMBL 34. A range of single-modality methods are evaluated on both MoleculeNet and the newly constructed benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The labels in MBANet, consisting only of atom types, bond types, and basic molecular attributes, are relatively simple and lack practical value for a comprehensive molecular property benchmark.\n2. I am curious about the rationale for splitting the data into different types (such as acyclic, complete chain, acyclic chain, macrocyclic peptide, macromolecule, and reticular) based on their 2D structural patterns. This approach implies an assumption that these distinctions are meaningful and that different modality models would clearly favor specific 2D graph patterns. However, the performance differences among various modality methods in Table 6 are minor and do not reflect the significance of such a split.\n3. The molecular image and video modalities are generated from 2D or 3D structures. It would be helpful to clarify why these modalities are important and which tasks specifically benefit from such artificial representations.\n4. Why do 3D modality-based methods, such as Uni-Mol, outperform other modalities on MoleculeNet tasks? Are there any insights or reasons behind this?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024benchmol,\ntitle={BenchMol: A Multi-Modality Benchmarking Platform for Molecular Representation Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1JgWwOW3EN},\nnote={under review}\n}"
},
"abstract": {
"value": "Molecular representation learning (MRL) plays a vital role in high-precision drug discovery. Currently, people represent molecules in multiple modalities (such as sequences, graphs, and images), and have developed many MRL methods. However, three key challenges hinder further progress in the field of multi-modal MRL: (i) Lack of systematic and unified evaluation on models of different modalities, resulting in unfair comparisons or being affected by randomness; (ii) The specific advantages between different molecular modalities are unclear; (iii) Lacking a unified multi-modal platform to integrate these multi-modal data and a large number of MRL methods. Therefore, we propose the first multi-modality MRL platform, called BenchMol, to integrate a large number of multi-modal MRL methods and evaluate them systematically and fairly. BenchMol has four attractive features: (i) Rich modalities: BenchMol supports 7 major modalities of molecules, such as fingerprint, sequence, graph, geometry, image, geometry image, and video; (ii) Comprehensive methods: BenchMol integrates 23 mainstream MRL methods to process these modalities; (iii) New benchmarks: BenchMol constructs two new benchmarks based on PCQM4Mv2 and ChEMBL 34, called MBANet and StructNet, for a more systematic evaluation. (iv) Comprehensive evaluation: evaluation covers different aspects of molecules, such as basic attributes and molecular types. Through BenchMol, we conduct large-scale research on methods of different modalities and report many insightful findings. We hope that BenchMol can help researchers quickly use multi-modal MRL methods on the one hand; and on the other hand, provide meaningful insights into multi-modal MRL and help researchers choose appropriate representations in downstream tasks. We open-sourced BenchMol in \\href{https://anonymous.4open.science/r/BenchMol}{Github}."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-Modality Learning",
"Benchmarks and Datasets",
"Drug Discovery",
"Molecular Representation Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/73582e0759b5ec3b775853e32a843aafe2fdce4c.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "BenchMol: A Multi-Modality Benchmarking Platform for Molecular Representation Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1JhSJIYX3p | Large Language Models Engineer Too Many Simple Features for Tabular Data | main | Active | LLMs;feature engineering;bias;tabular data;automated data science | foundation or frontier models, including LLMs | 3;3;3;5 | 4;5;4;4 | 2;3;2;4 | 1;1;2;2 | 2;3;2;3 | 3.5 | 4.25 | 2.75 | 1.5 | 2.5 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I’m a little confused about the experimental setup regarding operations. If I understand correctly, the authors are comparing the distribution of operators generated by the LLM and by OpenFE. If OpenFE is considered ground truth, why not compare directly to OpenFE final generated feature set? For example, rather than just counting the number of times we see the operation “GroupByThanRank”, why not look at the original features that were input into this operation?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "It is good to see more examples of evaluation of downstream LLM tasks \"in the wild\".\n\nI appreciate that the authors were rigorous in removing datasets that were thought to be memorized or in the training data of the LLM, even though they did not have access to the training data itself."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors investigate how well LLMs can engineer features for tabular datasets. Specifically, they look at the frequencies of operators, and find that there is bias toward simpler features rather than more interesting or useful ones. They also evaluate on the downstream accuracy of the models trained with and without the engineered features."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "To me, this doesn’t seem like an influential enough contribution. Not only is it tackling a very narrow problem, but it is also only evaluating a specific method for addressing that problem. While there is some prior work around using LLMs for feature engineering, I’m not convinced that this work’s method for feature engineering is necessarily representative of all the methods for using LLMs for this task. \n\nSpecifically, the authors only use one prompting strategy, on a snapshot of models at the current time that this paper is being written. A few examples of people using LLMs for feature engineering are cited (Hatch, 2024; Türkmen, 2024), but it is unclear what methods these citations used– is the author’s method the same, or inspired by them? Should data scientists conclude from this paper that they should never use LLMs for feature engineering, even if they use a different method? Overall, I think this is an interesting use case to evaluate, but the work is not broad enough to be included in ICLR.\n\nNits:\nTypo: “which is send to the LLM” → sent"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the statistical significance of the results shown in Table 3?\n2. Why aren't larger models in GPT and Gemini family not explored?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The authors experimentally show the limitations of LLMs for feature engineering. The experimental setting is convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper explores the featuring engineering powers of LLM with OpenFE as the baseline. The authors perform experiments on 27 datasets and 4 LLMs. The primary findings are the following.\n\n1. LLM perform worse than the baseline.\n2. Proprietary \"Large\" models perform worse than small \"open\" models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The conclusions of the paper are along expected lines and are not surprising. A more notable contribution would be to address the limitations.\n2. The statistical significance of the results is not provided.\n3. The term \"bias\" is too strong for the problem explored. The authors can use the word \"limitation\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why would I use an LLM for feature engineering anyway, if there are existing SOTA automated systems that already do it and perform much better? \n2. If your answer to #1 is that I probably wouldn't, then the main question about publishing this paper would be - why would I read a paper about the biases such a solution might have? There could be several answers (e.g., to inspire a similar analysis of LLM solutions for other problems) but they need to be clear within the paper. \n\nSmall issues:\n1. Please explain that the improvement in \"Predictive Performance Improvement\" is improvement compared to a system without FE earlier in the document, e.g. before table 3.\n2. While the random experiment is fun and adds to the paper, I don't think it is at all accurate to say that it tests \"whether our prompt template influenced our results\" - seeing as the prompt template itself did not change in this experiment, only the names of the features. I don't think it shows anything about the prompting strategy - but rather that the nature of the bias depends on the feature naming. \n3. Caught typo: The API usage costed -> cost"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is based on solid experimental work, testing using several LLMs and across many datasets, testing for memorization issues separately to check for bias explicitly. \nThe paper is an interesting and easy to follow read. Problematic properties of LLM solution paths for different problems are always appreciated, as we develop more and more systems that significantly rely on this tool, we must strive to understand the biases this seemingly easy fix-all solution of asking an LLM brings into our work and the times it might fail completely. It is also interesting that the large models have failed worse at adding features that helped with the downstream system's results compared to the smaller models, which did help a little."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper tests LLM bias in the task of feature engineering for training a downstream learning system. The paper uses several LLM settings across 27 datasets, to demonstrate that LLMs do indeed have bias for this task, which indeed seems to lead to poor performance compared to an existing SOTA automated feature engineering solution. Some further discussion and experimentation into the properties of the bias shows that the LLM seems to prefer simpler features when the features have meaningful names, but also doesn't sample uniformly when the features have nondescript random names."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main issue with this paper is that it is rather unclear why the usage of LLMs for this task was explored at all. It seems that when feature engineering is done by an LLM, the downstream system's performance is worse than existing SOTA systems - and sometimes even worse than doing any feature engineering at all. Frankly, it's also not a task that I would intuitively expect LLMs to be good at, as general knowledge, common sense and language knowledge is probably not what humans would use for feature engineering, but rather math/engineering skills and perhaps expert domain knowledge or lived experience - all usually less strong qualities of LLMs. The paper does not call this issue out or justify it. Usually, checking for biases of a solution might have one of two purposes: 1. call out poor performance that happens in a way that isn't expected or measured in other ways, so for example, if the system had seemingly good downstream performance, checking for biases or other issues might help guard us from using a problematic solution that looks good in our metrics. 2. try to improve the performance of the biased system by somehow mitigating the bias. It seems that option 1 in this case is unnecessary, since the LLMs have worse performance, and no actual attempt is made towards the #2 target."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1, Could you clarify the source of the original features—were they extracted or provided with the datasets?\n2, Have you considered experimenting with prompts that encourage the use of complex features, perhaps by emphasizing intricate relationships between original features?\n3, What methods were used to validate the effectiveness, fairness, and implications of the operator frequency metric?\n4, How did you account for the stochastic nature of LLM responses, where identical prompts might yield different operators and features?\n5, Would it also be informative to evaluate model performance using only the generated features, excluding original features? Maybe you can try this.\n6, Have you conducted feature-level analysis of the constructed features? Specifically:\nClassification Performance Level: Identifying dominant features in both LLM and OpenFE-generated sets\nFeature Level: Analyzing the characteristics of successful versus unsuccessful generated features\nCombining classification-level, feature-level, and operator-level analyses to strengthen conclusions about LLMs' feature engineering capabilities.\n7, A potential typo in Hypothesis 1: “HYPOTHESIS 1: FEATURE ENGINEERING WITH LARGE LANGUAGE MODELS IS BIASED TOWARD SIMPLE OPERATES.” The last word should be “OPERATORS”?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper presents a novel investigation into LLMs' feature engineering capabilities. The authors introduce an innovative evaluation metric—operator frequency distribution—which effectively quantifies the patterns in operator selection during feature construction. This metric provides valuable insights into how feature engineering tools, particularly LLMs, exhibit preferences for certain operators under different task contexts and prompt conditions. Furthermore, the study's comprehensive evaluation across 27 tabular datasets, with careful consideration for LLM memorization effects, demonstrates robust experimental design and systematic methodology."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates Large Language Models' (LLMs) capabilities in feature engineering for tabular data. The study examines four LLMs across 27 tabular classification datasets that were specifically selected to minimize potential memorization effects. For each dataset, the LLMs were tasked with recursively generating 20 new features, using prompts that contained task context, descriptions of original features, and available operators. The study benchmarks these results against OpenFE, an open-source feature engineering tool, using identical operators and original features. To evaluate the effectiveness of the engineered features, a LightGBM model was trained and tested on datasets combining original and constructed features, with classification accuracy as the performance metric. The results demonstrate that OpenFE produces more consistently beneficial feature sets for classification tasks. Through analyzing operator frequency in feature construction across both LLMs and OpenFE, the authors conclude that LLMs exhibit a bias toward simpler operators when no explicit operator preferences are specified in the prompts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's analysis lacks sufficient depth in several crucial areas. While the proposed operator frequency metric is interesting, it requires further validation in terms of:\n\nEffectiveness: There is no analysis comparing the variability and information content of features generated by simple versus complex operators.\nFairness: The operator-level analysis overlooks that identical operators applied to different features can yield vastly different outcomes, making tool comparisons based solely on operator frequency potentially misleading.\nImplications: The study lacks experimental evidence linking complex operator usage to improved classification performance.\n\nThe paper's conclusion about LLMs' preference for basic operators requires additional validation. The authors did not explore prompting strategies to encourage complex operator usage, nor did they analyze the specific features and operators suggested by LLMs.\nThe narrative structure could be improved. For instance, the abstract's discussion of LLM bias in text generation appears tangential to the core focus on feature engineering. Similarly, the section on 'Other Applications of Large Language Models for Tabular Data' would be better integrated into the literature review rather than appearing as a standalone paragraph."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We analyze the bias of LLMs when used for feature engineerng and found that LLMs are biased toward creating simple features."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024large,\ntitle={Large Language Models Engineer Too Many Simple Features for Tabular Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1JhSJIYX3p},\nnote={under review}\n}"
},
"abstract": {
"value": "Tabular machine learning problems often require time-consuming and labor-intensive feature engineering.\nRecent efforts have focused on using large language models (LLMs) to capitalize on their potential domain knowledge. \nAt the same time, researchers have observed ethically concerning negative biases in other LLM-related use cases, such as text generation. These developments motivated us to investigate whether LLMs exhibit a bias that negatively impacts the performance of feature engineering. While not ethically concerning, such a bias could hinder practitioners from fully utilizing LLMs for automated data science. \nTherefore, we propose a method to detect potential biases by detecting anomalies in the frequency of operators (e.g., adding two features) suggested by LLMs when engineering new features. Our experiments evaluate the bias of four LLMs, two big frontier and two small open-source models, across 27 tabular datasets. Our results indicate that LLMs are biased toward simple operators, such as addition, and can fail to utilize more complex operators, such as grouping followed by aggregations. Furthermore, the bias can negatively impact the predictive performance when using LLM-generated features. Our results call for mitigating bias when using LLMs for feature engineering."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLMs",
"feature engineering",
"bias",
"tabular data",
"automated data science"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/dc5d16a2eaf96859f4e64e5c9642091171026ab5.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/fec95a0ca55f91e171acb49ba5c79cdb8aee79ff.zip"
},
"title": {
"value": "Large Language Models Engineer Too Many Simple Features for Tabular Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1KLBvrYz3V | Century: A Framework and Dataset for Evaluating Historical Contextualisation of Sensitive Images | main | Active | historical;contextualisation;image;dataset;multimodal;VLM;evaluation | datasets and benchmarks | 5;6;8;8 | 4;3;3;4 | 3;3;4;4 | 2;3;4;4 | 2;3;4;3 | 6.75 | 3.5 | 3.5 | 3.25 | 3 | -0.19245 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "In my opinion:\n- The dataset could potentially be misused to train models that generate biased or harmful content related to sensitive historical events. \n- The limited representation of certain communities in the dataset could be harmful for training of future models based on this dataset, I'm not sure about inclusiveness and how to not perpetuate existing biases."
},
"flag_for_ethics_review": {
"value": [
"Yes, Discrimination / bias / fairness concerns"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Since the release of new LLMs are very frequent, I wonder what could be done to further automatise the evaluation on the dataset. \n\n- I believe the dataset could potentially be misused to train models that generate biased or harmful content related to sensitive historical events. What do you think about this aspect?\n\n- Could the limited representation of certain communities in the dataset be harmful for training of future models based on this dataset? I'm not sure about its inclusiveness and how to not perpetuate existing biases."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The proposed interdisciplinary dataset focus on historical contextualization and represent a valuable contribution to the field, addressing a crucial gap in existing evaluation methodologies.\n\n- The work should be well reproducible with the released dataset, search terms, and evaluation details. Every part of the work is well detailed and released. Authors have put significant effort into this.\n\n- The paper is well written, well structured and all parts, also detailed in the appendix, are well informative."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is about “Century” a dataset and framework designed to evaluate the ability of multi-modal models to contextualize sensitive historical images accurately, thoroughly, and objectively. To build the dataset, images were sourced with knowledge graphs, language models, and they were processed according to museum practices, considering especially recent historical events, figures, and locations, with images that may hold socio-cultural significance. The idea is to address the representation of historical nuance in generative AI and proposes an evaluation protocol for assessing models on tasks requiring socio-cultural and historical contextualization. After the construction of the Century dataset, it is tested with recent private foundation models. The paper reports that these models have difficulties addressing the complexity of historical contextualization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- the use of Wikipedia raises concerns about biases inherent of the platform. Wikipedia’s coverage of historical events is not uniform across regions or cultures, potentially leading to an overrepresentation of certain perspectives. Anyway, the limitation is acknowledged and is anyway a first step into the right direction. \n\n- the definition of \"sensitive\" is based on interpretations from museums and archives, which seems a good starting point. However, I wonder about whose perspectives are considered \"sensitive\" and who gets to define them. Maybe some input from the communities whose histories are represented in the images should be considered, but I understand the difficulty of doing that."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I would like to see some discussion on the first point in the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "I think this evaluation is important conceptually and in the application level. One expectation to a foundation model may be to generate unbiased (or not one-sided) descriptions of sensitive events, and the proposed dataset can serve as a benchmark in this regard. \n\nAlso, the paper recommends that human evaluation is still critical even though LLMs can evaluate a target model, which is fair. According to Table 3, foundation models and humans do not look consistent, and evaluation solely by the automated protocol seems insufficient. The paper seems faithful to this evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a new dataset for evaluating multimodal models’ capability to describe historical and sensitive images in terms of several criteria, including factual errors and due weight. The images in the dataset are carefully chosen so that they are sensitive, controversial, and/or commemorative. The evaluation protocol includes automated evaluation and human evaluation. The paper gives some recommendations for evaluating models with the dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I think the dataset is useful for the application level, while it’s not clear from the technical level what aspects of a model it tries to evaluate. The proposed evaluation task seems to require (1) identification of the event, person, etc. depicted in the image, (2) associating the identified entities with the corresponding historical event (so that it can give a contextualized description), and (3) describing the image in a fair and objective way. I think (1) involves the perceptual capability of a model, while (2) and perhaps (3) involves the knowledge the model has. (3) may also involve the criterion of goodness of generated description used in the training. The proposed protocol evaluates a model without being aware of these different aspects (the paper partly mentions this point in Section 5.1), which makes the interpretation of the result extremely hard. I understand that as the foundation model users rarely have knowledge about how the model is trained, it’s not straightforward to isolate these different aspects. However, without some ways to interpret the results (as described in Section 5.1 as a future application of the dataset), insights that the dataset will provide may be limited. \n\nThe paper is often hard to read. I don’t see what the dataset offers (does it contain only images or some example descriptions of events?) in the main paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Q1. Is it possible to recover the geographic and demographic distribution of the human evaluators? That data seems especially important to consider for historical contextualization."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "S1. The authors tackle a problem that many researchers shy away from or do not even consider, as historical contextualization is a complex task and has no objective ground truth. This paper is a thorough, high-quality effort to 1) help understand our models through this lens, and 2) highlight the importance of historical contextualization abilities in large vision-language models.\n\nS2. The paper is very well-written; the methods and results are presented in a straightforward manner and thoroughly-discussed.\n\nS3. Century is a diverse dataset with a decent balance across regions, content, and image type. The dataset can always be more diverse and balanced along these axes, but it is a respectable collection for evaluation given that its limitations are acknowledged."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Century, a dataset with 1,500 images of sensitive historical images (including a new method to identify images like those in the dataset). Along with Century, the authors propose an evaluation framework to measure how well models do at “historical contextualization.”"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. The evaluations are done on closed-source models, which are helpful in illuminating their capabilities given that we don’t know much about their data or architecture. However, it would be incredibly useful to benchmark open-source VLMs alongside them, as the associations with training data, architecture, etc. and historical contextualization abilities can help the community to identify how to score better on this benchmark.\n\nW2. I would love to see a more thorough limitations section. While the points covered are valid and important, there is so much nuance to the dataset, evaluation metrics, etc. The community would benefit from a paper that not only presented a useful dataset and benchmark for historical contextualization, but thoroughly (to a best approximation) enumerated the pitfalls one could fall into when maximizing performance on this benchmark, and described the demographic and geographic distribution of human evaluators.\n\nW3. Some of the figures seem to be missing legends, or at least are not clear enough in what the colors mean (Figures 6 and 11). I assume the x-axis is labeled 1-5, but the colors and lack of x-axis label are a bit confusing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Have the authors conducted any formal bias testing within the dataset? Is it possible to elaborate on potential approaches the authors have considered for addressing these biases. Understanding how these biases may clarify the power of the dataset, the impact of model outcomes, and outlining potential mitigation strategies, would further enhance the dataset’s robustness for future research.\n\nHave the authors considered ways to expand the dataset or if they envision it being used primarily for evaluation rather than training."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-articulated and clear, enhancing readability and accessibility.\n\nAddressing sensitive historical images is a compelling topic with high relevance, and the proposed framework is both innovative and thoughtfully executed.\n\nThe methodology for identifying and curating sensitive historical images, integrating knowledge graphs with language models, provides a scalable approach with potential research applications across history and AI.\n\nThe Century dataset could serve as a valuable resource for researchers working on similar challenges, including those focused on historical image representation, automated content generation, and bias mitigation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors present Century, a dataset of 1,500 sensitive historical images curated from recent history. It is generated using an automated process that combines knowledge graphs and language models, guided by criteria from museum and digital archive practices to ensure a balanced representation of global events and figures. The dataset is validated through both automated and human evaluations, demonstrating its diversity and comprehensiveness. Additionally, the authors introduce an evaluation framework to measure historical contextualization along dimensions of accuracy, thoroughness, and objectivity, applying it to assess the performance of four foundational models, with both automated metrics and human feedback supporting the results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I'm a bit concerned about the dataset scale. At 1,500 images, the dataset may be too small to train deep learning models directly, potentially limiting its use in large-scale AI training scenarios. A dataset size of more than 10K images would be a good estimation for training models. \n\nFurthermore, as a new framework, the effectiveness of Century could benefit from comparative analysis with existing datasets or similar historical image frameworks. This would provide a clearer benchmark of its strengths and limitations. If there are not closer frameworks, some related research might also help in comparison, such as the following papers for your reference: \n\nWu, Mingfang, et al. \"Automated metadata annotation: What is and is not possible with machine learning.\" Data Intelligence 5.1 (2023): 122-138.\n\nWadhawan, Rohan, et al. \"ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models.\" arXiv preprint arXiv:2401.13311 (2024).\n\nFinally, the authors candidly discuss certain biases, particularly concerning dataset distribution and generative labeling. These limitations could impact future applications, and additional mitigative strategies would strengthen the framework's applicability.\n\nMinor: It is unclear to me whether a dataset-centric paper with a focus on historical content aligns fully with ICLR’s primary scope, which typically emphasizes innovations in machine learning."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A dataset of sensitive historical images is curated and used to demonstrate historical contextualisation capabilities of SOTA multi-modal models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024century,\ntitle={Century: A Framework and Dataset for Evaluating Historical Contextualisation of Sensitive Images},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1KLBvrYz3V},\nnote={under review}\n}"
},
"abstract": {
"value": "How do multi-modal generative models describe images of recent historical events and figures, whose legacies may be nuanced, multifaceted, or contested? This task necessitates not only accurate visual recognition, but also socio-cultural knowledge and cross-modal reasoning. To address this evaluation challenge, we introduce Century -- a novel dataset of sensitive historical images. This dataset consists of 1,500 images from recent history, created through an automated method combining knowledge graphs and language models with quality and diversity criteria created from the practices of museums and digital archives. We demonstrate through automated and human evaluation that this method produces a set of images that depict events and figures that are diverse across topics and represents all regions of the world.\nWe additionally propose an evaluation framework for evaluating the historical contextualisation capabilities along dimensions of accuracy, thoroughness, and objectivity. We demonstrate this approach by using Century to evaluate four foundation models, scoring performance using both automated and human evaluation. We find that historical contextualisation of sensitive images poses a significant challenge for modern multi-modal foundation models, and offer practical recommendations for how developers can use Century to evaluate improvements to models and applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"historical",
"contextualisation",
"image",
"dataset",
"multimodal",
"VLM",
"evaluation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/27f4600653bc4f13f8ac8ca6efd0e88eca851aab.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/a137ddfdae6d5f8015f3e4590098949e366d8835.zip"
},
"title": {
"value": "Century: A Framework and Dataset for Evaluating Historical Contextualisation of Sensitive Images"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1KvYxcAihR | TMGBench: A Systematic Game Benchmark for Evaluating Strategic Reasoning Abilities of LLMs | main | Active | Large Language Models; Benchmark; Strategic Reasoning; Game Theory; Theory of Mind | datasets and benchmarks | 5;5;5;8 | 4;4;3;2 | 3;3;2;4 | 2;3;2;4 | 2;1;3;4 | 5.75 | 3.25 | 3 | 2.75 | 2.5 | -0.870388 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Regarding weakness 1:\n\n- Do you have a clear definition of tasks that require strategic reasoning, as used in this paper?\n\n- Could you explain more on how TMGBENCH addresses gaps in existing benchmarks for evaluating LLM reasoning capabilities?\n\n- What are the fundemental differences between tasks that require strategic reasoning and tasks that do not, perhaps with concrete examples?\n\nRegarding weakness 2:\n\n- Could you conduct an analysis of how different LLM characteristics (e.g., model size, architecture, training data, or objectives) correlate with performance on TMGBENCH? and why."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is well written and well organized.\n- The games included in TMGBENCH are comphrehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces TMGBENCH, a benchmark for systematically evaluating the strategic reasoning abilities of LLMs. By evaluating some LLMs on TMGBENCH, the paper identifies several flaws in LLMs’ performance, such as low accuracy rates and unstable inconsistency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I am not fully convinced there exists the need for a benchmark fo evaluating strategic reasoning abilities of LLMs. In fact, there lacks an universal definition of the ability of strategic reasoning. In other words, what are the fundemental differences between tasks that require strategic reasoning and tasks that do not?\n\n\n- If there is a clear definition of strategic reasoning, I would expect a more systematic study of existing LLMs on strategic reasoning. Why some LLMs perform better than others in terms of strategic reasoning? What are the influencing factors of LLMs? Data, Architecture, Model Size, training objectives?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- interesting that llama70B did worse on DA than 8B, why do you think this is?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Models are tested rigorously; 2,880 times for a single model in the single game tests, the complex games have a baseline of being tested 20 times, and there's testing for positional bias with the reFoToM / reSoToM prompts.\n- Extensibility: this is a great way of creating a difficult-to-overfit-to benchmark, using the synthetic data generated stories as additional \"games\" to play.\n- The metrics used (ID, BD, PAR) are comprehensive for evaluating a model's performance and good insight to how the models perform in these situations.\n- The tables and figures nicely present the findings of the experiments and are mostly given good descriptions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors create TMGBench, a game theory based benchmark for testing the strategic reasoning abilities of LLMs. They create a large number of games based on the \"Robinson-Goforth topology of 2x2 matrix games\" as well as utilizing synthetic data generation to build on top of said games for further game development. The games are then combined in a variety of ways, creating a complex structure for the LLMs to reason in. The authors then evaluate a selection of LLMs on the benchmark and report their results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper can be hard to follow at times. It would be nice to have examples of the complex games to solidify the reader's understanding. The description given for sequential games doesn't quite make sense to me, even with two introductions. And because of that, I'm not sure how well it upholds the task of \"testing for strategic reasoning\".\n- I'm not convinced that parallel forms are actually a test of strategic reasoning either, this seems closer to measuring the model's \"working memory\" and being able to keep track of the different situations at a given time step. But, this may be based on a misunderstanding of what the form is describing; it's not clear to me based on the descriptions given.\n- The prompt given for `Example of classic game: classic/111` gives me pause for the rest of the prompt generation. \"Player A and Player B are playing a game. Either of them has two choices, namely A1, A2/B1, B2.\" Is this telling the model that the choices are {A1, A2} or {B1, B2}? I assume this, but that could lead to the model being confused about the task rather than being honestly judged on the difficulty of the task.\n\n- a number of simple proofreading errors:\n\t- \"sequential form, where LLMs are required to response multiple game tasks in a row\" --> \"to respond to multiple games\"\n\t- \"As explained in Section 2.2, our benchmark are perfectly suitable\" --> your benchmark what?\n\t- \"as for practical result provide by LLMs,\" --> results provided by\n\t- \"which we expect robuster LLMs\" --> \"more robust LLMs\", I'm not sure if \"robuster\" is a word, but if it is it's not commonly used.\n\t\t- \"using CoT prompting, which is robuster\"\n\t- \"We perform 4 independent tests on each data point, covering both the classic setting and the story-based setting. Basically, we conduct 2,880 tests to generally evaluate a certain model\"\n\t\t- this is weird, \"Basically, we conduct 2,880 tests...\" these should be combined to make flow better.\n\t- \"We setup the test by divided it into several types\" --> \"by dividing it\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "It is stated that “Theoretically, using these atomic games, we can expand the framework to generate infinitely many increasingly complex game forms.” However, standard answers are required to compute the inconsistency map. The reviewer wonders how to obtain the standard answers to newly generated games?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The paper is very well-written.\n- Objectives are clear, and how those objectives are achieved by this work is well demonstrated.\n- Quantified metrics and visualisations have been used to compare LLMs on different tasks to assess their capabilities. \n- Extensive experiments were conducted to exam the failure cases and the effect of ToM. \n- Limitations were also discussed.\n- Generation pipeline was demonstrated in Appendix.\nOverall, the reviewer quite enjoyed reading this paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a benchmark TMGBENCH. TMGBENCH incorporates 144 game types based on the Robinson-Goforth topology of 2×2 games and provides three forms (sequential, parallel, and nested) to construct more complex games using those 144 game types. Several LLMs were compared on the benchmark using several quantified metrics to identify their strengths and weaknesses."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "No particular weakness was identified by the reviewer. The reviewer is not an expert in game theory or reasoning. It is quite likely that the reviewer is unfamiliar with some pieces of related work or crucial part of this work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "# originality\nModest.\n\nEvaluating LLMs in strategic reasoning games is a thoroughly investigated topic (as attested by the related work). Examining anti-symmetric reasoning patterns is a question I have not seen probed before and important to consider for this setting in general.\n\n# quality\nModest.\n\nExperiments demonstrate the benchmark can find differences among LLMs. Models fail to saturate the success criteria, particularly for more stringent requirements like perfect answering or demonstrating theory of mind. Biases based on the generated stories show there is clear room for improving LLM context sensitivity, however it is not clear how much this could be mitigated by different prompts for the strategic reasoning (a dimension not explored in the paper).\n\n# clarity\nModest.\n\nThe introduction was vague and hard to follow without reading the rest of the paper. Experiments are documented well. Some figures were hard to parse or could use a different presentation (notes below).\n\n# significance\nModest.\n\nThere are numerous evaluations for strategic reasoning in game theoretic games. This focuses on 2x2 games, omitting multi-agent agents or repeated/multi-turn games (excepting the composite games tested). The paper will be of some interest to the community focusing on this subset of LLM capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a benchmark for strategic reasoning comprised of all 2x2 game ordinal payoff arrangements. Additional evaluation capabilities include testing agents when reasoning on compositions of these games (in parallel, sequentially, or where one game influence the choices in a subsequent game) and reframing the games in story-based scenarios. Evaluations study open and closed source LLMs on this benchmark, assessing: how well they produce optimal choices, the extent to which they exhibit asymmetrically biased responses when payoff matrices are flipped, and using theory of mind to improve performance. The results demonstrate that existing LLMs do not saturate the benchmark, have varying degrees of bias based on the payoff structure and story framing, and struggle to leverage theory of mind to improve results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Note: These weaknesses are phrased as questions to facilitate discussion.\n\n# originality\nHow do the games in this benchmark cover those not covered in the \"game theory\" subsection of the cited paper \"A Survey of Strategic Reasoning with Large Language Models\"? Or for the \"Societal Behavior\" examples that include theory of mind?\n\n\n# quality\n\nThe experiments should include statistical tests when claiming differences among model types. At least in the cases where multiple runs were possible or multiple scenarios are being aggregated (for example, in Table 1 and Figure 5). Many claims seem plausible, but the tests are there to provide rigor.\n\nThe paper would benefit from evaluating the concern stated in the introduction that there is scenario leakage of common game forms. Was there evidence of scenario leakage based on the games in Robinson-Goforth topology results? Do the games most likely to be leaked (like Prisoner's Dilemma) demonstrate substantial performance differences relative to other games?\n\n\n# clarity \n\nThe introduction could be clearer on details later explained in the paper. Examples: \n- \"performance variability marked by coefficients\" - Coefficients of what?\n- \"marked by an asymmetric pattern\" - What asymmetric pattern?\n\nFigure 6 is hard to read. It might be better plotted by showing the differences in scores between the classic and story-based settings instead.\n\n# significance\n\nWhat are the key insights we learn from TGMBench that were not revealed in prior benchmarks? This is not very clearly articulated in the paper and would help establish it's originality and significance. As TGMBench is a benchmark, the value it provides derives in exposing LLM capabilities that are not already apparent in alternatives."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024tmgbench,\ntitle={{TMGB}ench: A Systematic Game Benchmark for Evaluating Strategic Reasoning Abilities of {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1KvYxcAihR},\nnote={under review}\n}"
},
"abstract": {
"value": "The rapid advancement of large language models (LLMs) has accelerated their application in reasoning, with strategic reasoning drawing increasing attention.\nTo evaluate the strategic reasoning capabilities of LLMs, game theory, with its concise structure, has become the preferred approach for many researchers.\nHowever, current research typically focuses on a limited selection of games, resulting in low coverage of game types. \nAdditionally, classic game scenarios carry risks of data leakage, and the benchmarks used often lack extensibility, rendering them inadequate for evaluating state-of-the-art models.\nTo address these challenges, we propose TMGBench, a benchmark characterized by comprehensive game type coverage, novel and diverse scenarios, and flexible game organization. \nSpecifically, we incorporate all 144 game types summarized by the Robinson-Goforth topology of 2×2 games, which are constructed as classic games in our benchmark. \nFurthermore, we employ synthetic data generation techniques to create diverse, higher-quality game scenarios through topic guidance and human inspection for each classic game, which we refer to as story-based games.\nLastly, to provide a sustainable evaluation framework adaptable to increasingly powerful LLMs, we treat the aforementioned games as atomic units and organize them into more complex forms through sequential, parallel, and nested structures.\nWe conducted a comprehensive evaluation of mainstream LLMs, covering tests on rational reasoning, reasoning robustness, Theory-of-Mind capabilities, and reasoning in complex game forms. \nThe results revealed that \nLLMs still have flaws in the accuracy and consistency of strategic reasoning processes, and their levels of mastery over Theory-of-Mind also vary.\nAdditionally, o1-mini, the latest reasoning model from OpenAI, was also evaluated across the sequential, parallel, and nested game structures and reached accuracy rates of 66.6\\%, 60.0\\%, and 70.0\\%, respectively, highlighting the challenges posed by TMGBench."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models; Benchmark; Strategic Reasoning; Game Theory; Theory of Mind"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/36a8d4cac9f00aec8403f596fd16b3bc00b197b9.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "TMGBench: A Systematic Game Benchmark for Evaluating Strategic Reasoning Abilities of LLMs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1L52bHEL5d | Test-Time Adaptation for Combating Missing Modalities in Egocentric Videos | main | Active | missing modality;test-time adaptation | transfer learning, meta learning, and lifelong learning | 5;5;6;6 | 4;3;4;4 | 3;3;3;3 | 2;3;3;3 | 3;4;3;2 | 5.5 | 3.75 | 3 | 2.75 | 3 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. If this approach faces extreme examples, such as a video showing a calm street while the audio is an explosion, will this model mislead the baseline model into the wrong direction?\n2. You might consider adding extra blocks to the model, so that if updates are needed, only the added portions need to be updated. Alternatively, updating part of the model's structure could prevent the significant latency introduced by updating the entire system."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper redefines the modality missing problem as a test-time adaptation (TTA) issue, emphasizing the challenges of modality absence faced in online multimodal tasks. This is indeed an urgent problem that needs to be addressed for many online multimodal tasks.\n\nThe proposed MiDl method effectively enhances the baseline model's ability to handle missing modalities, serving as a solution for the modality missing problem in multimodal online tasks. This approach can act as a supplement when facing modality absence in such tasks. For instance, if modalities are functioning normally, this pipeline may not be used; however, when a modality is missing, the proposed solution can improve the baseline model's capability to handle the missing modalities. Additionally, normal inputs and prediction results can serve as supplementary information when modalities are insufficient.\n\nThe experiments presented in the paper are comprehensive, demonstrating that the \nmethod is independent of modality selection, baseline models, and model frameworks, thereby proving the robustness of the proposed solution."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To tackle the issue of modality missing in real-time tasks, this framework offers an online self-supervised learning method called MiDl. MiDl uses mutual information and KL divergence as loss functions to optimize the model in real time, enabling the baseline model to better handle inputs with missing modalities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. \"First, the prediction of should be invariant to the modality source . Ideally, f_{\\theta} should output the same prediction under both complete and incomplete modality, hence satisfying the following equality: (i)...,\" The underlying assumption of the approach is controversial. The task will degenerate into a modality distillation problem if this assumption holds,. Is there a more reasonable way to phrase this?\n2. Implementing this method in real-world production could introduce significant computational overhead and latency. Normal models can be accelerated through techniques like compression and distillation, but this approach involves updating model weights, requiring the retention of the complete model, making it difficult to deploy directly in practice.\n3. Could you include experiments demonstrating the approach's decision-making in more complex online scenarios? The experiments provided in the paper do not represent the best use case for this method; its most suitable application is in online scenarios, so experiments in these contexts would better support the results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weakenesses section. I highly encourage the authors to directly include the code for the verification of reproducibility."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Overall, the paper is interesting and easy to follow. \n- The formulation of the test-time adaptation for tackling missing modality without the need for model retraining is indeed novel and can be foreseen to be applied to various applications which contains multi-modal information.\n- Although the method itself is not complex and consists of components already used for various tasks for multi-modal learning and egocentric video analysis, they are leveraged in MiDl with strong motivation which are intuitive and reasonable. The extended discussion also offers a deeper understanding over the formulation of MiDl. This method could prove to be a good starting point for subsequent discussions in the research community.\n- The authors also provide a comprehensive analysis over the performance of MiDl in the formulated task, and also benchmarked previous methods such as SHOT, TENT under the same setting, which also provides a further insight into the challenges and possible methods to further tackle the task.\n\nIn general, this paper is relatively well presented with a simple yet highly motivated method for an interesting formulation of a realistic challenge."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles on missing modalities in egocentric videos without the need to retrain models by formulating this challenge as a test-time adaptation task. The authors proposed MiDl which minimizes the mutual information between prediction and the modality, with the incorporation of self-distillation to maintain performance when all modalities are available. The author benchmarked several methods under such problem formulation, demonstrating a descent performance when part of the modality are missing in two egocentric video datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are a few minor concerns remaining in the paper, mainly on the clarity and possible extension in discussion of the proposed method. I would like the authors to consider the following concerns if possible:\n1. On Page 4, Line 197-198, the author states that \"$f_\\theta$ should retain high performance in predicting data with complete modality, which is generally satisfied for $f_{\\theta_0}$\". Does this imply that the non-adapted pretrained model must be pretrained with all modalities available? What if the pretrained model is only trained with a single modality (e.g., only with visual information without the audio information which is rather common in video models)?\n2. It is observed that there is a large drop when $1-p_{AV}=100$, where none of the data contain both modalities. What would be a possible approach to mitigate this drop in performance. It is observed that the drop for MiDl is significantly more severe than that of SHOT.\n3. The current method only touches upon the case for two modalities (audio and video), is it expandable towards more modalities. Also, are there limitations for the possible types of modalities or it can be any modalities as long as they are obtained from the same set of data?\n4. The experiments are performed for each dataset with a drop in the primary modality, what would be the result if the secondary modality is dropped with the same probability?\n5. Lastly, the code is currently NOT available, which means that the reproducibility of the result is not verified."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "While the technical aspects and experimental results are generally strong, there are areas for improvement in the motivation, clarity of presentation, and some experimental details. \n\nI presented many questions and suggestions in the weaknesses suggestions. In particular, I would suggest the authors focus on the concerns about the motivation and the experiments aligning with that motivation. My comments regarding notation and small fixes are merely suggestions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Introduction:\n1. The second and third paragraphs effectively identify the gap in the literature and provide a robust overview of the proposed solution.\n\nRelated Works:\n\n2. Related works is concise and relevant\n\nMissing Modality as Test Time Adaptation:\n\n3. This section is well-written and easily comprehensible.\n\nExperiments:\n\n4. It is commendable that the experiments were repeated 5 times with reported standard deviations in Table 11; the results appear experimentally robust and convincing.\n5. The Ego4d Warmup result on 100% missing rate in the original distribution is an interesting finding with potentially strong applications.\n\nAnalysis on MiDL:\n\n6. The Components of MiDl section is strong, and the ablations are empirically sound."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach to handling missing modalities in multimodal learning using test-time adaptation. The method, MiDl, shows promising results across the EPIC kitchen and EPIC sounds datasets, and the method is motivated by the theoretical intuition of minimizing the mutual information between the predicted and available modality. The authors also provide some interesting analysis of the model through long-term adaptation, out-of-distribution warmup, and various ablation experiments. \n\nThis review follows the sections of the paper."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Introduction:\n1. The motivation presented in the first paragraph is weak. For instance, in Line 40, could you provide an example or application where inference must occur on a redacted modality? I can think of the application of blurring faces in images or deleting private details in medical records, but it's unclear when only one modality would be completely removed for privacy reasons while the other remains intact for the same data instance. Additionally, the relevance of using cheaper modalities to missing modalities is not apparent (line 40). It would be particularly convincing if the motivation aligned with the tested dataset. For example, if using Epic Kitchens, perhaps a scenario involving a humanoid with a malfunctioning audio sensor, or smart glasses with an obscured camera due to steam or food spillage could be considered.\n2. Contribution (3) appears to be describing MiDl, which is already covered in contribution (2). I would recommend reassessing what could constitute a third distinct contribution from your work.\n3.Figure 1 requires improvement. The concept of a \"potential performance trajectory\" needs clarification - is this your hypothesis? This graphic would be more persuasive if it depicted your actual no-adaptation baseline and your TTA method. The purpose of the black line in the middle of the graph is unclear.\n\nProposed Solution:\n\n4. The notation in eq (1) lacks precision. It is not evident that f(x;m) and m are random variables. The output of f is a distribution. Are you considering this as a random variable, with the value being the indices and the probability of those values the logits? Consider introducing a random variable Y ~ f(x;m). Also, consider using capital letter notation (e.g. \"M\") for the random variables. Furthermore, how can you evaluate the KL if x ~ S only has the modality m, not AV? Later in this section, it becomes apparent that you only update the model on complete instances. This limitation/assumption should be made clearer in the introduction or Takeaways subsection. This method would only be applicable for testing data that includes some multimodal instances.\n5. At last line of page 4, is $x_t$ a single sample? Do you mean samples $x_0 \\dots x_t$?\n\nExperiments:\n\n6. Additional details could enhance the reproducibility of this work. Was any hyperparameter tuning conducted for MiDl? Section B.1 mentions the recommended hyperparameters for the baseline but doesn't specify how they were determined for MiDl. Moreover, what were the proportions of the train/val/test split?\n7. In section 5.3 LTA: You allow the model to retrain on some of the unlabeled training data. Why not gather $S_{in}$ from the validation set? In this setting, is MiDl trained on $D \\ S_{in}$? Or is $S_{in}$ still included in the labeled dataset before test time and then used without labels during test time?\n8. Can this method be applied to instances where either modality is missing, e.g., P = {.3,.3,.3}? It would be great to see results for experiment with such a ratio. Currently, it may be the case that the model learns to leverage ONLY the modality that is consistently present in both the complete modality test case and the missing modality test case. In this scenario, would a given unimodal model for the always-present modality perform optimally? Table 1 could be improved by clarifying what is meant by \"Unimodal\" and why it is only present at the 50% missing rate. 
For Epic Sounds, is the unimodal the always-present modality (video)?\n\nAnalysis on MiDL:\n\n9. Both architecture choices are transformer-based. It would have been more convincing to see a greater diversity of architectures (such as a convolution backbone). Instead of presenting different missing rates as columns in Table 3, it would have been preferable to see different architectures/methods as the columns with a fixed missing rate (perhaps 50%).\n10. Given that the main motivation was to avoid retraining an existing method on a large dataset to perform missing modality adaptation, the results would have been more convincing if the authors had either used an existing model+dataset and just performed adaptation, as opposed to training from scratch an existing method. Alternatively, they could have tested with a very large dataset that was computationally expensive. The omnivore pretraining test is good. Did you train the model from scratch on your dataset or use an existing model and apply MiDl?\n11. In Table 6, shouldn't 55.2 in the Dl column be bolded?\n12. I thought the motivation was that retraining on the train set is computationally expensive, and TTA will prevent that? It's good that you acknowledge the computational requirements of MiDl, but then in the abstract, you shouldn't state: \"Current methods, while effective, often necessitate retraining the model... making them computationally intensive.\" Alternatively, compare your inference computation here with the amount of computation required to retrain training data (to get sufficient performance)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors include comparisons with other approaches specifically addressing the missing modality issue, such as those proposed by Dai et al. (2024), Lee et al. (2019), and Wang et al. (2023)?\n\n2. Given the importance of this task, would the authors consider expanding the benchmark by including more test-time adaptation (TTA) approaches, such as Chen et al. (2022), Wang et al. (2020), Yuan et al. (2023), and Niu et al. (2022), and analyzing performance across different clusters of approaches? Including missing modality approaches as baselines may also strengthen the benchmark.\n\n3. Could the authors provide qualitative results comparing their approach to the baseline, along with a failure case analysis to offer insights into scenarios where the method may fall short?\n\n4. Has the generalizability of the proposed approach been tested on datasets beyond EPIC Kitchen? If not, would the authors consider verifying the performance on additional datasets?\n\n5. Could the authors use TSNE visualization on the latent space to illustrate how the proposed supervision affects feature learning? Specifically, visualizing changes over different epochs in comparison to the baseline might provide additional insights."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Missing modality issue is important for test time adaptation of ego centric action recognition. This task will contribute to the community.\n\n2. Method section is clearly written and easy to follow.\n\n3. Compared with the baseline, the proposed approach show good performance on this new task."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors focus on an important task which is test time adaptation for egocentric video action recognition under missing modality. The authors validate existing work of TTA on this new task and propose a new method MiD1 to enhance the robustness of the learned features. The performance of the proposed method is evaluated on the EpicKitchen sound and video dataset"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of the comparison with other approaches specifically targeted at missing modality issue. \n\na. Dai, Y., Chen, H., Du, J., Wang, R., Chen, S., Wang, H., & Lee, C. H. (2024). A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 27445-27455).\n\nb. Lee, H. C., Lin, C. Y., Hsu, P. C., & Hsu, W. H. (2019, May). Audio feature generation for missing modality problem in video action recognition. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 3956-3960). IEEE.\n\nc. Wang, H., Chen, Y., Ma, C., Avery, J., Hull, L., & Carneiro, G. (2023). Multi-modal learning with missing modality via shared-specific feature modelling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 15878-15887).\n\n\n2. The authors are suggested to enlarge the benchmarks. I aggree that this task is an important task, however the experiments are limited in this paper which will be harmful to its soundness. The authors could enrich the benchmark using more existing TTA approaches, e.g., d,e,f, and g, and try to provide an analysis on the performance on different cluster of approaches. Missing modality works can also serve as good baselines to enrich the benchmark.\n\nd. Chen, D., Wang, D., Darrell, T., & Ebrahimi, S. (2022). Contrastive test-time adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 295-305).\n\ne. Wang, D., Shelhamer, E., Liu, S., Olshausen, B., & Darrell, T. (2020). Tent: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726.\n\nf. Yuan, L., Xie, B., & Li, S. (2023). Robust test-time adaptation in dynamic scenarios. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 15922-15932).\n\ng. Niu, S., Wu, J., Zhang, Y., Chen, Y., Zheng, S., Zhao, P., & Tan, M. (2022, June). Efficient test-time model adaptation without forgetting. In International conference on machine learning (pp. 16888-16905). PMLR.\n\n3. No qualitative reustls. The authors are suggested to provide some qualitative results when comparing their approach with the baseline approach. Some failure case analysis will be helpful.\n\n4. The performance of the proposed approach is only verified on EPIC Kitchen, generaliyability to other dataset can be an issue.\n\n5. TSNE visualization on the latent space will be helpful to see how the proposed supervision help during the feature learning procedure. The authors could visualize the changes for different epoches compared with its baseline."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a Test-Time Adaptation (TTA) method specifically tailored for the problem of missing modalities in multimodal egocentric recognition. Our method outperforms other TTA baselines when modalities are missing at test time."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024testtime,\ntitle={Test-Time Adaptation for Combating Missing Modalities in Egocentric Videos},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1L52bHEL5d},\nnote={under review}\n}"
},
"abstract": {
"value": "Understanding videos that contain multiple modalities is crucial, especially in egocentric videos, where combining various sensory inputs significantly improves tasks like action recognition and moment localization. However, real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues. Current methods, while effective, often necessitate retraining the model entirely to handle missing modalities, making them computationally intensive, particularly with large training datasets. In this study, we propose a novel approach to address this issue at test time without requiring retraining. We frame the problem as a test-time adaptation task, where the model adjusts to the available unlabeled data at test time. Our method, MiDl~(Mutual information with self-Distillation), encourages the model to be insensitive to the specific modality source present during testing by minimizing the mutual information between the prediction and the available modality. Additionally, we incorporate self-distillation to maintain the model's original performance when both modalities are available. MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time. Through experiments with various pretrained models and datasets, MiDl demonstrates substantial performance improvement without the need for retraining."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"missing modality",
"test-time adaptation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/098e173dd3fb3600a85a5ef352cfa40ea6e8584b.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Test-Time Adaptation for Combating Missing Modalities in Egocentric Videos"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1L9vdc7BB5 | ADAPT: Adaptive Prompt Tuning for Pre-Trained Vision-Language Models | main | Active | Prompt Tuning; Multimodality; Vision-Language Models; Network Pruning | applications to computer vision, audio, language, and other modalities | 3;5;5;6 | 5;4;2;5 | 2;3;2;2 | 2;3;2;2 | 3;1;3;2 | 4.75 | 4 | 2.25 | 2.25 | 2.25 | -0.187317 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Could the authors provide more details about the scoring function used to determine token importance during pruning? Were any alternative scoring mechanisms considered, and if so, why was the current approach chosen?\nHow does ADAPT ensure stability during the pruning process, especially given the highly heterogeneous prompt lengths across different layers? Are there any safeguards in place to avoid over-pruning, where the model could lose important contextual information?\nThe evaluation on 11 datasets showed varying degrees of performance, with some datasets exhibiting reduced accuracy compared to the baseline. Could the authors elaborate on the potential reasons behind these inconsistencies and suggest strategies that could mitigate these issues in future iterations of ADAPT?\nGiven the independence of the pruning processes for the text and image branches, is there any mechanism in place to maintain synchronization between the two branches during training? If not, could this lead to potential issues in multimodal understanding?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors propose a novel adaptive prompt tuning approach, ADAPT, that effectively reduces the number of parameters needed for pre-trained vision-language models while maintaining competitive performance across a variety of downstream tasks. This efficiency is a notable contribution to prompt-based fine-tuning methods.\nBy leveraging an iterative pruning mechanism, ADAPT dynamically adjusts the prompt lengths for different layers, enabling a flexible solution that outperforms traditional fixed-length prompt tuning methods, particularly in scenarios that require task-specific adaptations.\nThe approach is validated on 11 diverse datasets, covering different vision-language tasks. This broad evaluation demonstrates the adaptability and applicability of ADAPT across a wide range of contexts.\nThe pruning process used by ADAPT results in heterogeneous context lengths, automatically determining the optimal prompt length at each layer, which is an improvement over manually designed prompts that tend to be homogeneous and less efficient."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To address the limitations of fixed-length prompt tuning approaches for pre-trained vision-language models, the authors propose ADAPT, an adaptive prompt tuning method that dynamically determines optimal prompt lengths during fine-tuning. By employing an iterative pruning strategy, ADAPT identifies and removes less relevant prompt tokens at each layer, allowing efficient parameter usage while maintaining model performance. The authors evaluate ADAPT across 11 benchmark datasets, demonstrating that the method significantly reduces the number of parameters required while achieving competitive or improved accuracy. This adaptive approach highlights the benefits of automatic context length adjustment compared to manually designed fixed-length prompts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "ADAPT shows significant performance degradation in certain categories, such as the Pets class, where it fails to rank even in the top three. It is regrettable that the authors did not conduct further discussion and research on this issue.\nThe highly heterogeneous prompt lengths determined by the pruning mechanism could make the model harder to implement in practical scenarios where consistency and predictability are valuable, compared to using manually fixed homogeneous prompt lengths.\nAlthough ADAPT optimizes both text and image branches independently, there is no explicit mechanism mentioned to ensure that the branches remain aligned in terms of context length adjustments. This could potentially lead to imbalances that affect the model's overall performance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "-The hyperparameter T_target controls sparsity of masks. According to Table 2, the model reaches better averaged performance when T_target is set to a larger value (the masks are less sparse). What if T_target is set to a value larger than 128? What is the upper bound of the proposed method?\n\n-Ablations on prompt depth and context length should be conducted. \n\n-To demonstrate the effectiveness of the proposed method on few-shot classification tasks, the paper should provide results on 1/2/4/8-shot training setting, similar to those reported in CoOP and other related studies."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-The paper is well-written.\n\n-Extensive experiments on 11 downstream datasets reveal the advantage of Adapt.\n\n-Adding mask to the prompts of different depth is an interesting idea."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a deep continuous prompting method dubbed Adapt that encourages heterogeneous context lengths."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Adding learnable mask to the prompts of different depth is an interesting idea. But, existing methods [1] proposed to add learnable mask to the parameters of CLIP. Adding learnable mask to parameters and add learnable mask to prompt have similar methods. Moreover, this paper did not discuss the difference between ADAPT and [1], which miss this key reference.\n\n[1] Regularized Mask Tuning: Uncovering Hidden Knowledge in Pre-trained Vision-Language Models, ICCV 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the questions in Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method, ADAPT, changes the context lengths for different transformer layers by iteratively pruning context tokens. ADAPT surpasses the SOTA method on 16-shot image classification tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper assumes that a fixed context length for prompts may lead to either redundant context tokens or insufficient context length when transferring a pre-trained model to downstream tasks. Based on this assumption, the paper proposes a method to automatically determine the prompt length."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It is unclear why the convergence of model training is determined solely by reaching T_target. T_target may vary across different training datasets, but it is set to a fixed value for all datasets. Additionally, if the mask for the text encoder is too sparse, this training target might restrict the sparsity of the mask for the image encoder.\n\nThe paper should provide a more detailed analysis of the learned binary masks. According to Figure 3, on the EuroSAT dataset, more context tokens are required in the middle layer of the image encoder, while the first layer of the text encoder requires more context tokens. An analysis of this discrepancy should be included.\n\nADAPT is trained and evaluated on the few-shot classification task, following the CoOP methodology. Thus, it should also report results under other training settings (1-shot, 2-shot, 4-shot, and 8-shot) to enable a more comprehensive comparison with state-of-the-art methods.\n\nMoreover, UPT[1] should be included for comparison, as it also introduces prompts in both the text and image encoders, similar to ADAPT.\n\n[1] Unified vision and language prompt learning."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See Weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Strength:\n+The average performance on 11 downstream datasets verifies the effectiveness of the proposed methods.\n+The proposed method shows slightly fewer FLOPs than existing methods.\n+Adaptively changing the prompt tokens is an interesting idea."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose adaptively pruning prompt tokens during the prompt tuning, rather than using fixed prompt length. They use metrics in network pruning to compute the importance scores of prompt tokens and prune less important tokens gradually."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weakness:\n1. There is more than one page to write. It looks like a paper in progress The authors should consider to include more experiments and analysis. For example, the authors can show that different datasets prefer different prompt token lengths to verify the importance of the proposed method.\n\n2. In line 377, the authors write “The result is shown in Appendix Figure 4. However, the appendix is missing. The authors should move it from the supplementary material to the end of the main paper.\n\n3. How do we determine the number of tokens to prune each each layer? \n\n4. How to set the number of prune steps rp.\n\n5. There are too many mathematical symbols, especially in Algorithm 1, making it hard to understand, even though the operation used in this paper is easy. The authors should improve this to improve the readability.\n\n6. There are only two paragraphs in the Introduction Section. The authors should consider splitting them into more paragraphs.\n\n7. The proposed methods are highly related to dynamic neural networks. The authors should discuss it and cite related papers.\n\nI think that the idea of this paper is good enough. However, the authors should improve their presentation.\n\n\nIssues:\nIn Figure1, the authors should indicate the proposed method with “Adapt (Ours)”."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024adapt,\ntitle={{ADAPT}: Adaptive Prompt Tuning for Pre-Trained Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1L9vdc7BB5},\nnote={under review}\n}"
},
"abstract": {
"value": "Prompt tuning has emerged as an effective way for parameter-efficient fine-tuning. Conventional deep prompt tuning inserts continuous prompts of a fixed context length into the input to each layer. When a pre-trained model is tailored to a specific downstream task, different layers initialized with pre-trained weights might have, depending on the distribution shift type, different levels of deviation from the optimal weights. Inserted prompts with a fixed context length might have redundant context tokens or insufficient context length. To address this issue, we propose a deep continuous prompting method dubbed Adapt that encourages heterogeneous context lengths. Context lengths are automatically determined by iteratively pruning context tokens. We use the saliency criterion for the neural network pruning to compute the importance scores of context tokens in order to determine which tokens to prune. We examine the proposed method on the pre-trained vision-language model CLIP. Extensive experiments on 11 downstream datasets reveal the advantage of Adapt: the average test accuracy increases from 79.83% to 81.70%. The highest performance gain on individual datasets is 9.63%. At the same time, the computational overheads are comparable to or smaller than baseline methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Prompt Tuning; Multimodality; Vision-Language Models; Network Pruning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/cc17d1165c28fb649aee4b0e4a47ecbe21fb545f.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/c53a327559812af58ef87103a029878977bcb009.pdf"
},
"title": {
"value": "ADAPT: Adaptive Prompt Tuning for Pre-Trained Vision-Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1MHgMGoqsH | Unifying Back-Propagation and Forward-Forward Algorithms through Model Predictive Control | main | Active | deep learning optimization;model predictive control | optimization | 3;3;3 | 3;4;3 | 2;2;2 | 2;1;2 | 2;2;2 | 3 | 3.333333 | 2 | 1.666667 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. It is not clear to me why alignment gets worse with more training (re. Fig. 2 and L400-401).\n2. Suggestion: Fig. 5 would be easier to read if the caption included a note \"lower is better\".\n3. Suggestion: Before introducing (19), referring the reader back to (17-18) might improve readability.\n4. Suggestion: The use of the word \"Objective\" in the sense of (17-18) can be confusing for the reader, seeing as BP and FF also optimize losses (or, \"objectives\")."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper offers a _fresh_ perspective, unifying BP and FF with inspiration from the MPC framework in search of a more flexible family of optimization algorithms.\n2. The results obtained (both theoretical and empirical) show some promise in terms of controlling memory and performance trade-offs via the horizon parameter $h$.\n3. Experimental setup is reasonably well-structured and conducive to conveying the main messages of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Drawing inspiration from the model predictive control framework, this work proposes a framework for integrating back-propagation (BP) and the forward-forward (FF) algorithm (Hinton, 2022) for optimizing neural networks. In this framework, layer-wise local losses are back-propagated by $h$ steps, where the horizon $h$ is a user-provided algorithm parameter that controls a memory-performance trade-off. Here, $h=1$ corresponds to the FF algorithm, while $h=T$ for a $T$-layer neural net corresponds to backprop. A theoretical result is provided showing the convergence of the loss gradient to (a scaling of) the true gradient as $h \\rightarrow T$. Assuming linear increase in memory consumption with $h$, the work also proposes a heuristic for selecting $h$ adaptively given a particular deep learning optimization problem, and hardware constraints or performance requirements. Empirical studies show the approach may be feasible for obtaining optimization algorithms that enable trading off performance for memory with more flexibility than FF, as well as another alternative (LoCo)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The connection to MPC seems like a bit of a stretch and makes the paper unnecessarily harder to digest in my opinion. That is, I found Sec. 3.2 to be needlessly long and dense; the same horizon idea could be described in simpler terms. The reason why I think the MPC connection is a bit of a stretch is that MPC applies _the optimal solution_ of the opt. problem to control the system given the trajectory cost, whereas the proposed approach takes a single gradient step.\n2. In L211-214, the comments on memory usage read as though FF and/or the proposed framework has better _complexity_ than BP (re. usage of the word \"growth\"), when in fact the complexity is the same and gains are only in terms of constants. Indeed, Fig. 3 shows that FF ($h=1$) reduces memory by some factor of 3-4x in the best case and at a huge performance discount. Given modern hardware and distributed training capabilities, this brings to question whether interpolating FF and BP is worth the effort and complication to begin with (Occam's razor).\n3. The theoretical result in Thm. 3.4 does not surprise me. Just looking at Fig. 1, one can already see that the gradients will be aligned exactly for roughly $h/T$ fraction of the parameters. Once again, I am not convinced the gravity of the result is worth the complication. Furthermore, the commentary in L270-271 seem to claim that alignment of the gradients necessarily translate to better performance, which I don't believe is true. Consider the Newton direction, which almost never aligns with the gradient, yet would likely yield much better performance than the gradient (steepest descent dir.) if it could be feasibly computed.\n4. The horizon selection algorithm requires some runs with $h=T$. If this is possible on the available hardware, why bother reducing memory usage (except maybe some atypical use cases)? \n5. Fig. 3 (right) is missing bars on memory usage, which seems awkward and raises suspicion for the reader. Note also that the linear memory demand assumption seems only to hold for eager execution (but not static execution) of the backprop framework. This information should be highlighted in the main text. Currently it's only mentioned in Appx. E.1.\n6. The same goes for the range of values considered on the x-axis of Fig. 2. The scale for the rightmost 2 plots should also go down to $\\approx 5 \\times 10^{-3}$ like the leftmost plot. \n7. Overreaching claims: e.g., L492 says \"proposed horizon selection algorithm is _more efficient than_ BP\". Careful wording is critical for maintaining scientific tone. Perhaps it's better to say something like \"more memory-efficient than BP\" or \"better at optimizing our proposed objectives in (17-18)\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed training algorithm is technically sound, which interpolates between BP and FF.\n\nThe writing is clear and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tries to provide a unified training algorithm that connects BP and Forward-Forward algorithm based on the concepts or basic formulation in Model Predictive Control (MPC). The proposed training algorithm balances the accuracy and memory usage. The theoretical analysis is based on a deep linear model, followed by a horizon selection algorithm. Experiments are conducted by considering mang commonly used deep models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.Motivation. I understand it is a doable research paper to trade off memory usage and accuracy. However, I feel it may not be necessary to sacrifice accuracy to gain memory efficiency, given that nowadays, we have relatively sufficient computation power to train large deep models, e.g., foundation models. Thus, it may be less pressing to consider this trade-off.\n\n2.Methodology. The proposed method simply borrows the concept of basic formulation of MPC without involving much technical content from MPC literature. Thus, I could not tell sufficient technical contribution in terms of methodology. Similarly, the title also seems misleading by emphasizing MPC too much.\n\n3.Theory. One apparent limitation is the authors only derive results based on deep linear models, which could be fundamentally different from modern (non-linear) deep models, such as ResNet and Transformers. Although the heuristic extensions to modern deep models in experiments validate the theoretical results, this limitation is still non-neglectable. \n\n4.Writing. The writing needs substantial improvement. Grammar errors include Line 35 (no subject) and Line 112 (not complete). The citation is also problematic, such as line 78.\n\n5.The proposed method is motivated from a mere optimization perspective without considering the generalization or learning theory, which can be fundamentally limited. For example, it seems that Figure 4 only consider the training loss instead of looking into the test loss.\n\n6.The choice of functions in Section 4 seems subjective, which is less convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* In Figure 3, what does \"full tuning\" refer to? Does this experiment involve training the models from scratch, or is it a fine-tuning process? I'm confused due to the use of \"full tuning\" in Figure 3 but \"fine tuning\" in Table D2.\n\n* What is the significance of introducing the framework through MPC? Does it help with the analysis of the method? Given that intermediate terms cancel out in equation (6), the connection to MPC seems somewhat contrived and appears to introduce unnecessary complexity without providing clear benefits in understanding.\n\n* Is the use of \"max\" in Objectives (1) and (2) a typo?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper is overall well-written and clear.\n\n* The approach of viewing deep neural network optimization through the lens of MPC is innovative and provides a fresh perspective.\n\n* Additionally, the experiments and theoretical results are well-aligned and effectively complement each other, strengthening the overall argument of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a training framework for deep neural networks formalized based on Model Predictive Control (MPC), where the horizon length can be adjusted to encompass both Back-Propagation (BP) and Forward-Forward (FF) algorithms as special cases. The framework allows for a flexible trade-off between memory usage and model performance by varying the horizon length. The authors provide an asymptotic theoretical analysis for the class of deep linear networks, demonstrating that as the horizon length approaches the total number of layers (or blocks), the gradient computed by the framework converges to that obtained using full BP. Additionally, numerical experiments validate the framework, offering both theoretical and practical insights into its performance across different models and tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The paper overlooks a significant body of prior works that address the memory limitations of BP. Notable examples include techniques like checkpointing, forward-mode automatic differentiation, forward gradients [1]. I recommend that the authors include a comparison of their MPC framework with these memory-efficient techniques, specifically highlighting how their approach differs from or improves upon these existing methods in terms of memory savings and performance.\n\n* Moreover, the time complexity of the proposed framework is not discussed. Based on my understanding, the time complexity would likely be $\\mathcal{O}((T-h+1)h)$. For middle values of $h$, which the authors suggest might balance memory and performance, the time complexity actually increases by a factor of $\\mathcal{O}(T)$. In this case, one could potentially use forward-accumulation gradients with the same time complexity and achieve better memory efficiency, while still producing gradients identical to BP (no performance loss). I suggest the authors provide a detailed analysis of the time complexity of their approach and clearly articulate the advantages of their framework compared to existing methods, particularly in terms of time and memory efficiency. This comparison would help clarify the specific benefits of the proposed approach over alternatives.\n\n* A key experiment demonstrating the practical applicability of the framework is missing, particularly one that shows it can train a model from scratch with a small drop in performance while achieving significant memory savings. Without this, it is difficult to assess whether the proposed approach is useful in practice. I suggest the authors consider adding an experiment that compares training a model from scratch using their MPC framework (with various horizon lengths) against standard backpropagation, reporting both performance metrics and memory usage. This would provide concrete evidence of the framework's practical benefits and limitations.\n\n[1] Baydin, Atılım Güneş, et al. \"Gradients without backpropagation.\" arXiv preprint arXiv:2202.08587 (2022)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a novel MPC framework for deep learning, unifying BP and Forward-Forward techniques. We analyze accuracy and memory trade-offs and validate a horizon selection algorithm with theoretical and experimental support."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unifying,\ntitle={Unifying Back-Propagation and Forward-Forward Algorithms through Model Predictive Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1MHgMGoqsH},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce a Model Predictive Control (MPC) framework for training deep neural networks,\n systematically unifying the Back-Propagation (BP)\n and Forward-Forward (FF) algorithms.\n At the same time, it gives rise to a range of\n intermediate training algorithms with varying look-forward horizons,\n leading to a performance-efficiency trade-off.\n We perform a precise analysis of this trade-off on\n a deep linear network, where the qualitative conclusions\n carry over to general networks.\n Based on our analysis, we propose a principled method to choose\n the optimization horizon based on given objectives and model specifications.\n Numerical results on various models and tasks\n demonstrate the versatility of our method."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"deep learning optimization",
"model predictive control"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/893a1a0a1bc8fa2cc07eff9640cc5c9688e29a82.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/7634ccdc33f873c43d6b81ef423687c8c91c4adf.zip"
},
"title": {
"value": "Unifying Back-Propagation and Forward-Forward Algorithms through Model Predictive Control"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1MjOlHwCE6 | Reducing Complexity of Force-Directed Graph Embedding | main | Active | Graph embedding;Force-directed;representation learning;Spring model;Reduced complexity | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 1;3;3;3 | 4;5;4;4 | 3;2;1;2 | 2;2;1;2 | 2;1;1;1 | 2.5 | 4.25 | 2 | 1.75 | 1.25 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "NA"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "NA"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The short review is as follows: the paper proposes a new set of graph embedding methods which instead of using message passing or back propagation, it uses spring model to construct graph embedding. Each nodes' positional embedding is the equilibrium state. I think there are quite a lot of paper that proposes new graph embedding methods, and in order to make a proposed method to work it needs to capture (1) global and local structure information (2) able to be learned and proactively adapted, otherwise no one would ever use the newly proposed embedding methods. From a brief walkthrough of the paper, I don't think the proposed method can be used as a way that proactively learns embeddings for nodes and graphs, which are useful for downstream tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "NA"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The overall idea is interesting and offers a complimentary perspective."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel method for computing graph embeddings using a spring model without any neural network/model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper seems incomplete."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "NA"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "NA"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "While this paper presents an interesting force-directed graph embedding approach, the manuscript feels incomplete. I recommend that the authors include more powerful baselines (e.g., DGI [1], GraphZoom [2]) and conduct evaluations on larger graphs (with over 1M nodes) to better demonstrate improvements in accuracy and scalability for their next submission.\n\n[1] Veličković et al., \"Deep graph infomax\", ICLR'19 \\\n[2] Deng et al., \"GraphZoom: A Multi-level Spectral Approach for Accurate and Scalable Graph Embedding\", ICLR'20"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "NA"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could you clarify the differences between this paper and the previous work by Lotfalizadeh et al.?\n2. Could you expand the empirical analysis and evaluation with more downstream tasks, e.g., multilabel classification or clustering?\n3. Besides improved performance on downstream tasks, what desirable qualities do the FD embeddings have? E.g., the paper mentions reflecting the topology of the graph on Line 234 as a rationale for some of your choices. Would it be possible to evaluate that with metrics such as mean average precision?\n4. Could you make your code available for reproducibility purposes?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(++) **Scalability Improvement**: The proposed complexity reduction from $O(n^2)$ to $O(n \\log(n))$ is a notable improvement.\n\n(++) **Practical Utility**: The paper demonstrates comparable or slightly better performance to some state-of-the-art graph embedding methods while offering scalability improvements, which suggests that the proposed approach has practical utility for large graph datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a force-directed graph embedding method that reduces the computational complexity of an earlier approach proposed by Lotfalizadeh et al. The authors introduce a modification to limit the force computations to $k$-hop neighborhoods and a few randomly sampled nodes, resulting in a reduction from $O(n^2)$ to $O(n \\log(n))$ complexity. This makes the proposed method potentially more scalable for large graphs while maintaining competitive performance in unsupervised graph embedding tasks like link prediction and node classification."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(----) **Limited Novelty**: The main contribution is an incremental improvement to the original method by Lotfalizadeh et al. The use of $k$-hop neighborhoods and stochastic sampling for complexity reduction, while useful, does not represent a fundamentally new idea in the context of graph representation learning. The paper offers no new theoretical contributions, insights, or analyses.\n\n(----) **Relationship to Previous Work**: The relationship to previous work, by Lotfalizadeh et al. (2023, 2024) is ambiguous. It is not clear how this work fundamentally extends the original force-directed embedding approach from these works.\n\n(--) **Limited Evaluation and Analysis**: The paper only evaluates the quality of the proposed embeddings using two downstream tasks: link prediction and node classification.\n\n(---) **Presentation Issues**: There are multiple signs that the paper is incomplete. Some examples:\n* Placeholders such as \"!!! A PICTURE TO BE INSERTED for CAMERA READY !!!\" at Line 239 and \"!!! TO BE ELABORATED ON for CAMERA READY !!!\" at Line 452. The Discussion section is empty!\n* Typographical errors such as \"topolofy\" on Line 234 and starting the sentences on Line 186 with lowercase letters.\n* Broken reference on Line 301.\n* $\\log(n^2)$ in Line 027 in the abstract should be $O(n^2)$.\n* The notation $\\mathbf{z}_{uv} = \\mathbf{z}_v - \\mathbf{z}_u$ was introduced on Line 140 to facilitate brevity, then used in equation 3, not used in equations 8 and 9, then used again in equations 13 and 14.\n* The paper mentions several well-known graph embedding techniques on Line 273, such as LINE, SDNE, DeepWalk, and Node2vec, but does not provide proper inline citations for them.\n\n(--) **Marginal Performance Improvement**: While not a deal breaker, the downstream task performance improvement on previous methods is marginal at best, as can be seen in Figures 3 and 5."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "In this work, we expand on a previous work by reducing the complexity of the force-directed graph embedding method. The idea is to reduce the number of force calculations to a limited set of node pairs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024reducing,\ntitle={Reducing Complexity of Force-Directed Graph Embedding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1MjOlHwCE6},\nnote={under review}\n}"
},
"abstract": {
"value": "Graph embedding is a critical pre-processing step that maps elements of a graph network, such as its nodes or edges, to coordinates in a $d$-dimensional space. The primary goal of the embedding process is to capture and preserve various features of the graph network, including its topology and node attributes, in the generated embedding. Maintaining these graph features in the embedding can significantly enhance the performance of the downstream machine learning tasks. In this work, we introduce a novel family of graph embedding methods that leverage kinematics principles within a spring model and $n$-body simulation framework to generate the graph embedding. The proposed method differs substantially from state-of-the-art (SOTA) methods, as it does not attempt to fit a model (such as neural networks) and eliminates the need for functions such as message passing or back-propagation. Instead, it aims to position the nodes in the embedding space such that the total net force of the system is reduced to a minimal threshold, resulting in the system reaching an equilibrium state. The spring model is designed as a linear summation of non-linear force functions, with the shortest-path distance serving as the adjusting parameter for the force factor between each node pair, and therefore, inducing the graph topology in the force functions. In this work, we attempted to reduce the complexity of the original algorithm from $\\log(n^2)$ to $n\\log(n)$, while maintaining the performance metrics at a competitive level.\nThe proposed method is intuitive, parallelizable, and highly scalable. While the primary focus of this work is on the feasibility of the Force-Directed approach, the results in unsupervised graph embeddings are comparable to or better than SOTA methods, demonstrating its potential for practical applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Graph embedding",
"Force-directed",
"representation learning",
"Spring model",
"Reduced complexity"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4bde1ead37d4894a870a34432ff7c2bedf6d4791.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Reducing Complexity of Force-Directed Graph Embedding"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1NYhrZynvC | Exact linear-rate gradient descent: optimal adaptive stepsize theory and practical use | main | Active | gradient descent;adaptive stepsize/learning rate;universal optimal choice;exact convergence rate | optimization | 1;1;3;5 | 5;4;4;3 | 1;2;2;3 | 1;2;2;2 | 1;2;2;2 | 2.5 | 4 | 2 | 1.75 | 1.75 | -0.852803 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Does Theorem 2.1 and Corollary 2.2 assumes $L$-smoothness?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper uses several examples to demonstrate the benefits of using the proposed step size rules."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies some adaptive size rules for smooth functions, including some theoretical optimal ones and practical approximations. Experiments show some advantages of these rules."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This paper lacks a comprehensive comparisons with prior art. A similar rule to the proposed step size rule is studied at least in reference [1] and revisited in [2]. For example, in [2], algorithm 2 is similar to the algorithm for practical use in this paper, and to compare rates with [2], can the authors provide details on how to interpret the term $\\Pi_{t = 0}^k \\delta_t$ in equation 3.2? \n2. The optimal choice is not shown to be optimal in detail and not fully understandable to me, i.e., in what sense this choice if optimal, does it achieve fastest global convergence rate or fastest one-step descent?\n3. The experiments in Figure 3 rely on a good guess of $\\bar{f}_0$, and this introduces another parameter for a step size rule designed for tuning free case.\n\n[1]. Boris T. Polyak. Introduction to optimization. Optimization Software, Inc., New York, 1987.\n[2]. Hazan, Elad, and Sham Kakade. \"Revisiting the Polyak step size.\" arXiv preprint arXiv:1905.00313 (2019).\n\nBased on these weakness, I think this paper can be significantly enhanced by a thorough comparison with related works and detailed explanations of the improved convergence rates."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "What are the weakest assumptions that are required about the function f, in order for you to guarantee your results hold? \n\nWhat is the relationship to the Polyak step size (e.g., paper by Hazan and Kakade)?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper considers a significant and important problem.\nThe problem is of current interest -- there are papers appearing about related topics every year.\nThe proposal of a new step size is related to well studied step sizes like Polyak step size, but it seems to have some novel aspects."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper considers selecting a stepsize for gradient descent, in particular when we cannot compute global quantities like smoothness parameters. Though there has been considerable work, including recently, on adaptive step size selection methods such as Adagrad, this paper takes a different view. The idea is to approximate the a step size that looks a lot like the Polyak step size, by quantities that can be estimated (the Polyak step size requires knowing f(x*)). \n\nThey use this step size on various experiments, including on the non-convex problem of training a 2 layer MLP."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The writing could be significantly improved. There are many examples where the writing deviates from grammatical English, even from the very beginning of the paper. For instance, “due to quantity is not a priori knowledge.” — lines 75-76. In some places this does not impede the understandability of the paper, but in others the problems with the writing indeed make it hard to properly understand what the paper is about, and what its contributions are. \n\nThe introduction is generally loose and imprecise, in areas where it should be specifying exactly what the area of contribution is, precisely because this is such a well-researched area. For example, the paper says that though there are several adaptive algorithms implemented and available, “an adaptive stepsize theory has not been established.” This is confusing, since there are many theoretical papers about AdaGrad and other adaptive step size schedules in the last few years in ML and Optimization venues (not to mention that it is also a fairly classical topic). \n\nThen we are told that their optimal stepwise yields a linear rate with factor sin^2 \\eta_k — but we do not know what \\eta_k is at this point in the paper. They they gone on to say that the theory applies to non-convex functions, but we are not told what is guaranteed in this case. At least an informal statement should be made explaining what is happening, if the authors wish to talk about it directly. \n\nProposition 2.1 says it guarantees convergence to a global optimum of GD, yet does not require in the statement that the function being optimized be convex. The proof also does not mention convexity, and indeed does not prove anything about global convergence. \n\nIn line 146, the paper says that they assume that the gradient is non-zero unless GD has already converged; but then they say that this means that it has converged to x*, but which I understand that the assumption is that they assume they are minimizing a function that has no stationary points other than the unique global optimum. \n\nThe experiments are also not particularly convincing. They need to better point to where the weaknesses are of other related methods, where this approach succeeds."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "**Q1.** Could the authors provide theoretical analysis (e.g., oracle or iteration complexity) for the proposed adaptive stepsize strategy in the case where $f(x)$ is non-convex?\n\n**Q2.** The authors mention using a commonly adopted mini-batch size of 128. Is this setting specific to ADAM? The proposed method may not directly extend to stochastic settings if it requires a dynamic estimation of $f(x^*)$."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**S1:** The paper is well-written and easy to follow, with a clear presentation of the introduction and background on line-search-free first-order methods.\n\n**S2:** This paper proves a simple line-search-free variant of gradient descent to minimize smooth convex functions. The proposed stepsize can be dynamically adjusted to capture the curvature information of the problem, allowing for faster convergence.\n\n**S3:** The paper provides a rigorous proof of the linear convergence rate under the convex settings.\n\n**S4:** The paper includes empirical comparisons with other popular optimizers, such as Adam and N-AGD."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new adaptive stepsize for gradient descent that achieves exact linear convergence rates for convex optimization.\n\nThe key contribution is a novel stepsize formula based on the gradient and objective function.\n\nThe authors provide two versions of the stepsize: a theoretical version and a practical version.\n\nThey demonstrate the efficacy of this approach through some preliminary examples."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**W1.** The theoretical analysis relies on strong assumptions, namely that the objective function is convex and the optimal objective value $f(x^*)$ is known.\n\n**W2.** The only practical solution proposed in this paper is Algorithm 1. However, the authors do not provide a theoretical analysis for it. In particular, does Algorithm 1 converge in convex settings? What is its iteration complexity in an ergodic sense when the objective function is convex and non-convex?\n\n**W3.** Additional detailed discussion and analysis are necessary and would be beneficial to further clarify and present Algorithm 1. \n1. For example, the auto-correction mechanism in Algorithm 1 explicitly requires $g(x) \\geq 0$; otherwise, $\\overline{f}_0$ may not serve as a reliable estimate of $f(x^*)$. \n2. Taking the least squares problem in Problem (3.16) as an example, when $\\alpha >0$ and $\\alpha \\approx0$, Algorithm 1 could get stuck at a point that is neither a local nor a global minimum, as the second correction in Line 322 is never invoked. This can result in a less accurate estimation of $f(x^*)$.\n\n**W4.** Other issues: \n\n1) The proposed algorithm is only suitable for deterministic optimization problems, as it requires calculating the objective function value, making it incompatible with stochastic optimization models. Comparing it with stochastic optimizers like ADAM may be unfair, as ADAM is designed for stochastic settings while the proposed method is deterministic.\n\n2) It would be beneficial for the authors to include comparisons with other leading deterministic algorithms, such as AdaGrad-Norm (AdaGrad stepsizes: Sharp convergence over nonconvex landscapes, JMLR 2020), APGM (Adaptive Proximal Gradient Methods Are Universal Without Approximation, ICML 2024), and AdaBB (Adaptive Barzilai-Borwein method for convex optimization, 2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Why are there no citations to relevant works on Polyak stepsize and tuning-free methods?\n- What are the assumptions made on f for each of the results, and do they depend on x* being unique?\n- Can you actually verify the assumptions you make on alpha_k in any way if you know in advance f or at least properties that it satisfies, e.g., Lipschitz-smoothness or gradient domination?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- Numerical experiments are performed and their plots are reported"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an adaptive stepsize selection scheme for gradient descent (GD). The main theoretical contribution is providing an expression for what is claimed to be an optimal stepsize choice, which depends on the (implicitly assumed to be unique) solution to the problem. For practical implementation, they propose approximating this with a Polyak-like stepsize estimating inf_x f(x). The authors provide convergence analysis and some numerical experiments on MNIST and quadratic optimization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The definition of the theoretical stepsize proposed depends on x* but it's not clear that x* is unique or that this stepsize is well-defined if x* is not unique. No further assumptions on the objective function f are ever stated to ensure uniqueness of x*. No discussion of what will happen if x* is not unique is given.\n\n- The practical use stepsize given is just a Polyak stepsize approximating inf f by \\bar{f}_0. Yet, no reference to Polyak is made nor to any papers studying the Polyak stepsize and related variants, which are quite numerous. In this way the discussion of related work is severely lacking.\n\n- The quality of writing is far below a level that is acceptable for publication. Many statements are mathematically incomplete (e.g., line 155 and many others) or outright incorrect (e.g., the Baillon-Hadad theorem on line 650). Many statements have implicit assumptions that are never stated and not always satisfied or verifiable (e.g., line 146 and many others). None of the convergence results make sense mathematically as there is no reason for x* to be unique - how can \\|x_k-x*\\|^2 go to 0 for two different x*?\n\n- There is no comparison of the tuning-free algorithm to other tuning-free gradient descent algorithms, of which there is a significant body of work."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We established a general adaptive stepsize theory for gradient descent, including feasible selection range, optimal choice, and convergence rate."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024exact,\ntitle={Exact linear-rate gradient descent: optimal adaptive stepsize theory and practical use},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1NYhrZynvC},\nnote={under review}\n}"
},
"abstract": {
"value": "Consider gradient descent iterations $ {x}^{k+1} = {x}^k - \\alpha_k \\nabla f ({x}^k) $. \nSuppose gradient exists and $ \\nabla f ({x}^k) \\neq {0}$.\nWe propose the following closed-form stepsize choice:\n\\begin{equation}\n\t\\alpha_k^\\star = \\frac{ \\Vert {x}^\\star - {x}^k \\Vert }{\\left\\Vert \\nabla f({x}^k) \\right\\Vert} \\cos\\eta_k , \\tag{theoretical}\n\\end{equation}\nwhere $ \\eta_k $ is the angle between vectors $ {x}^\\star - {x}^k $ and $ -\\nabla f({x}^k) $.\nIt is universally applicable and admits an exact linear convergence rate with factor $ \\sin^2\\eta_k $.\nMoreover, if $ f $ is convex and $ L $-smooth, then $ \\alpha_k^\\star \\geq {1}/{L} $.\n\nFor practical use, we approximate (can be exact) the above via \n\\begin{equation}\n\t\\alpha_{k}^\\dagger = \\gamma_0 \\cdot \\frac{ f({x}^k) - \\bar{f}_0 }{\\Vert \\nabla f (\t{x}^k ) \\Vert^2 } ,\n\t\\tag{practical use}\n\\end{equation}\nwhere $\\gamma_0 $ is a tunable parameter; $ \\bar{f}_0 $ is a guess on the smallest objective value (can be auto. updated).\nSuppose $ f $ is convex and $ \\bar{f}_0 = f ( {x}^\\star ) $, then \nany choice from $\\gamma_0 \\in (0,2] $ guarantees an exact linear-rate convergence to the optimal point.\n\nWe consider a few examples.\n(i) An $ \\mathbb{R}^2 $ quadratic program, where a well-known ill-conditioning bottleneck is addressed, with a rate strictly better than $ O(1/2^k) $. (ii) A geometric program, where an inaccurate guess $ \\bar{f}_0 $ remains powerful.\n(iii) A non-convex MNIST classification problem via neural networks, where preliminary tests show that ours admits better performance than the state-of-the-art algorithms, particularly a tune-free version is available in some settings."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"gradient descent",
"adaptive stepsize/learning rate",
"universal optimal choice",
"exact convergence rate"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/82793568484fbe7230becb00e2fbac16a140707d.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/d00ef376b8ab5b99057a300b2c4a20fd303aec57.pdf"
},
"title": {
"value": "Exact linear-rate gradient descent: optimal adaptive stepsize theory and practical use"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1NevL7zdHS | Revisiting Mode Connectivity in Neural Networks with Bezier Surface | main | Active | mode connectivity;Bézier surfaces;loss landscape;deep learning | interpretability and explainable AI | 5;5;6 | 4;4;2 | 3;3;3 | 2;2;3 | 3;3;3 | 5.333333 | 3.333333 | 3 | 2.333333 | 3 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is very well-written and clearly explains the problem as well as the techniques that aim to solve them. I like that the proposed method simply consisting of Bézier curves remains rather simple.\n2. The experiments performed show quite convincingly that the proposed method succeeds in connecting multiple minima. The authors also investigate a variety of architectures, making the results stronger."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates connecting multiple models in parameter space through constructing appropriate surfaces. It is well-known that a simple linear hyperplane does not suffice and non-linear methods are needed. To that end, the authors propose using Bézier surfaces, where four points are used to represent the model parameters and nine other points in the parametrization are subsequently optimized such that uniformly sampled surface points also have low loss. The authors show that they can construct surfaces exhibit low loss everywhere, thus succesfully connecting multiple models with a single surface. They further show that the best point on the surface outperforms all the individual models and can thus be used to merge several models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There seem to be quite a few related works missing that also explore the construction of surfaces to connect multiple minima [1, 2, 3, 4, 5]. The authors definitely need to add the listed papers to the related works and clearly articulate how theirs is different and what advantages it provides. \n2. For model merging, the authors do not seem to compare against any other method? It would be interesting to understand whether this technique allows one to leverage the diversity from all the points (that were obtained using different inits and shuffling). Standard merging always needs to be careful to end up in the same basin, and thus diversity of the points seems naturally reduced. Similarly for the output ensembling experiments, the obvious baseline of solely ensembling the four end points is missing. Does the surface really provide diversity beyond those four points? This is currently unclear with the provided experimental results.\n3. I think taking the best performing point on the entire surface is (1) a bit an unfair comparison and (2) very expensive to do as a dense grid of models needs to be evaluated on the test set. I think it would be more appropriate and efficient to compare against some sort of “mean” value on the surface. Does a Bézier curve admit a natural “centroid”? If yes, how does that one perform compared to the individual models? \n4. Another related work for model merging is [6] which explored how a given ensemble can be constructed within the same convex region, and thus also allowing to average weights while still profiting from diversity. It would be interesting to understand which approach works better. \n\n\n[1] Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling, Benton et al., 2021\n\n[2] Large Scale Structure of Neural Network Loss Landscapes\n\n[3] Loss landscape sightseeing with multi-point optimization, Skorokhodov et al., 2019\n\n[4] A deep neural network’s loss surface contains every low-dimensional pattern, Czarnecki et al., 2019\n\n[5] Examining the geometry of neural mode connecting loss subspaces, Chen et al. \n\n[6| How good is a single basin? Lion et al., 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. **Ambiguity in the Central Question About \"Both Low Loss and High Accuracy\" (Line 195)**.\nThe central question on line 195 could benefit from greater specificity regarding \"low loss\" and \"high accuracy.\" Are the authors referring to training loss and testing accuracy? Given the generalization gap, distinguishing training and testing here would provide meaningful context. If both metrics are from the same set (either training or testing), the statement may be redundant, as low loss often correlates with high accuracy on that set. Specifying if this is about generalization (low training loss translating to high testing accuracy) could substantiate the relevance of this question.\n\n2. **Scalability Concerns for Optimization of Many Parameters (θ) in Equation 6**.\nEquation 6 implies a potentially extensive optimization of numerous control points (θ values) across the Bézier surface. This approach seems computationally heavy, especially for large models with millions of parameters. Could the authors discuss the scalability of this optimization? Is there any strategy to reduce the computational load or parameterize this approach efficiently to make it viable for larger architectures?\n\n3. **Justification for Selecting Models from Specific Epochs (Figure 6)**.\nFigure 6 shows models chosen from epochs 220, 200, 180, and 160. However, it’s unclear why these specific epochs were selected or why only a single training trajectory was used. Would models from other epochs, or from different training trajectories, produce similar results? Providing a rationale for these choices or showing comparative results could help validate the generalizability of this selection process."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written and well-organized. It is easy to read.\n2. The visualizations and plots are also very clear, facilitating the understanding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores extending the concept of \"mode connectivity\" in neural networks from one-dimensional paths to two-dimensional surfaces using Bézier surfaces. Traditionally, mode connectivity demonstrates that two trained models can be connected by a low-loss path in parameter space. Here, the authors introduce a novel method to connect multiple models on a smooth, low-loss surface, broadening the potential for optimization and generalization. They detail an algorithm that constructs and optimizes Bézier surfaces to maintain low loss and high accuracy across various architectures (VGG16, ResNet18, ViT) and datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet). The study finds that nonlinear surfaces outperform simple linear interpolations, especially for model averaging and ensembling applications, ultimately enhancing performance in tasks like model merging and ensemble accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Premature Claim of \"Low-Loss Curve\" in Line 161 (Bottom of Page 3)**.\nIn line 161, the paper refers to a \"low-loss curve\" as if it were already established, though at this point, neither the method nor specific criteria for determining a \"low-loss\" property have been introduced. Could the authors clarify what they mean by \"low-loss\" here and either postpone this claim until it is better supported or define it explicitly at the outset? Additionally, grounding this concept with a preliminary explanation or notation would improve clarity.\n\n\n2. **Rationale for Defining $q_{uv}$ in Equation 8 and Substitution with Uniform Distribution**.\nThe definition of $q_{uv}$ in Equation 8 lacks an explanation of its theoretical motivation and why it can be replaced by a uniform distribution for practical purposes. What are the specific benefits of this choice, and how does this approximation impact the accuracy or reliability of the surface mode connectivity in experiments? A deeper rationale for this formulation would clarify its role in the model's performance.\n\n\n3. **Lack of Distance Quantification Between Corner Control Points in Loss Landscape Visualizations**.\nThe visualizations of the loss landscapes do not quantify or highlight the parameter distances between corner points (control points). If these control points represent very similar models with minor parameter variations, the diversity of the parameter space explored may be limited, especially when these models are trained under comparable conditions. How would the approach fare with intentionally diverse model initializations, varying training settings, or other augmentations? Such differences could test the robustness of the surface connectivity under broader training conditions.\n\n4. **Limited Impact of Experiments and Marginal Gaps in Results (e.g., Table 1)**.\nThe experimental evaluation relies primarily on relatively small datasets like CIFAR-10, CIFAR-100, and Tiny-ImageNet, which may limit the generalizability of the findings to larger, more complex datasets. Additionally, Table 1 shows only marginal improvements between the baseline and the model merging or ensembling results. Could the authors address how these findings might scale to larger datasets and discuss the significance of these marginal gaps, particularly given the computational overhead involved in the proposed approach? Expanding on the implications for practical, large-scale applications would enhance the impact of these results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the above weaknesses"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tExtending mode connectivity from curves to Bézier surfaces is a significant topic.\n2.\tThe proposed method is sound.\n3.\tWriting is good to follow.\n4.\tThe figure illustration is good in this paper"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper explores the concept of mode connectivity in neural network loss landscapes, expanding it from traditional curve-based connections to surface-based connections. This approach offers a comprehensive way to merge models, enabling applications such as model averaging and output ensembling."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tOnly evaluate the performance on small datasets. Large datasets like image-net should be included.\n2.\tLack of theoretical analysis."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting Mode Connectivity in Neural Networks with Bezier Surface},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1NevL7zdHS},\nnote={under review}\n}"
},
"abstract": {
"value": "Understanding the loss landscapes of neural networks (NNs) is critical for optimizing model performance. Previous research has identified the phenomenon of mode connectivity on curves, where two well-trained NNs can be connected by a continuous path in parameter space where the path maintains nearly constant loss. In this work, we extend the concept of mode connectivity to explore connectivity on surfaces, significantly broadening its applicability and unlocking new opportunities. While initial attempts to connect models via linear surfaces in parameter space were unsuccessful, we propose a novel optimization technique that consistently discovers Bézier surfaces with low-loss and high-accuracy connecting multiple NNs in a nonlinear manner. We further demonstrate that even without optimization, mode connectivity exists in certain cases of Bézier surfaces, where the models are carefully selected and combined linearly. This approach provides a deeper and more comprehensive understanding of the loss landscape and offers a novel way to identify models with enhanced performance for model averaging and output ensembling. We demonstrate the effectiveness of our method on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets using VGG16, ResNet18, and ViT architectures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"mode connectivity",
"Bézier surfaces",
"loss landscape",
"deep learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/87ec15997a0b186f234690662e18e420508fc7c5.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Revisiting Mode Connectivity in Neural Networks with Bezier Surface"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Njl73JKjB | Towards Principled Evaluations of Sparse Autoencoders for Interpretability and Control | main | Active | mechanistic interpretability;sparse autoencoders;evaluations | interpretability and explainable AI | 3;5;6;8 | 4;3;2;2 | 3;3;3;3 | 3;2;3;3 | 1;4;3;3 | 5.5 | 2.75 | 3 | 2.75 | 2.75 | -0.919866 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In the title of Figure 1, could you make a clear connection of the text with precise parts of visuals, to facilitate the understanding?\n\nTaking into account the size of the paragraph on related work, it should be possible to describe the related work without much terminology and thus shifting it closer to the beginning of the manuscript. This would allow the reader to better position the framework with respect to the state of the art."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "In general the manuscript is well written and \"strong\".\n\nThe question which is really to ask is how much this finding is relevant for the literature. Here I should say my expertise is perhaps too limited to provide a proper judgment."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a (principled) method allowing to create supervised dictionaries for space features, which allow for evaluating the degree of disentanglement of SAEs. The developed method is then applied to SAEs and LLMs, witnessing not only interpretable later variables, but also providing possibility of editing attributes. Metrics of sufficiency, necessity and control are used for this."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Though this has most probably costed a lot of work, the empirical validation of the proposed methodology is rather scarce. Whether the constructed dictionaries would also function for other tasks/semantics is not clear.\n\nThe mathematics are well explained, in clear and simple way.\n\nSome (not that many) parts of the manuscript I had to read several times, e.g., the title under Figure 1 or the paragraph on interpretability at the end of Section 3.2. But in general formulas aid much understanding."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weakness section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the Sparse autoencoders to capture interpretable features for the IOItask and the expeirment results show that the proposed approach achieves the best performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are several weaknesses:\n\n1. The motivation of this section should be enhanced.\n\n2. The English language should be improved.\n\n3. The main idea seems not very novel. This paper should provide a strong motivation.\n\n4. The experiment can be further improved by providing more results and analysis."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- In section 4.3, if I understand correctly, the SAE latents are found by simply optimizing/searching for the features that perform the task (move one latent to the specified counterfactual latent). This seems a bit roundabout to me - wouldn’t this propose that you need to know the features you are looking for in order to label SAE features? How would one do this searching or interpretation without access to the counterfactual activations? Wouldn’t it be more realistic to interpret or label each SAE feature and then use the features that are labelled to be relevant to the task at hand?\n- Please see the above weaknesses!"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The writing and paper structure are very clear and easy-to-follow.\n- The focus of this paper is highly relevant and interesting. The problem of finding a ground truth with which to evaluate interpretability and explainability methods has remained an issue for decades, and this work works towards solving this problem by exploring using human-generated groundtruths that have been backed up by prior work.\n- The experiments are well-defined, inutitive, and easy to understand.\n- I believe the results are interesting and useful - they reveal that task-specific SAEs are more useful in practice than full-distribution SAEs, hinting that data quality is of utmost importance when training SAEs. Further, this suggests that human priors may be useful when developing interpretability methods/SAEs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on evaluating sparse autoencoders (SAEs) for their ability to recover known ground-truth features learned by a model. To do so, the authors first train a supervised dictionary on the indirect object identification (IOI) task, for which model computation is already relatively known due to prior interpretability and circuit discovery work. Both IOI task-specific SAEs and full-distribution SAEs are trained and evaluated with respect to the supervised dictionaries to understand if SAEs allow for the same level of approximation, control, and interpretability as the supervised dictionaries. Results reveal that more recent SAE architectures improve these capabilities and task-specific SAEs are much more effective than full-distribution SAEs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I find the “Why use these attributes?” paragraph in Section 3.1 to be confusing. If prior work had not proposed the IO, S, and Pos attributes, how would one go about defining the supervised dictionary? If the evaluation pipeline described in this paper were to be used for a different task, I’m not sure whether this section would be generalizable. In particular, when there are many choices of attributes, what is the manner of choosing between them without using another interpretability method, which would then prevent the need of using an SAE in the first place?\n- It would have been significantly more convincing to me if the authors had considered more than one task in their evaluation. At the moment, it’s unclear to me how the proposed methodology and results from this work could be applied to future works that want to evaluate trained SAEs.\n- The section on interpretability (section 4.4) is also a bit confusing to me - I would find it very helpful if the authors provided interpretations of the SAE latents, and a visualization of how these features could then be used to explain the LLM’s computation on a single example. Some examples of /possible/ interpretations are provided in Appendix 7.13-7.14, but if I understand correctly these are not the actual labels of the SAE features.\n- It is my understanding that the authors wish to propose the use od supervised dictionaries an evaluatory baselines for SAEs. However, in practice, this paper reads more as an exploration of whether SAEs can recover the IOI task. While the authors discuss the limitations of hardcoding the attributes to compare SAEs against and only considering a single task and model, I believe these drawbacks fundamentally limit the work in its general utility."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Equation 2. Reason for the \"bias\" term is $E[a(p)]$? Does this mean $E[u_{IO}] + E[u_S] + E[u_{POS}] \\approx 0$ if we take the expected value on $a(p)$ and $\\hat{a}(p)$? If it is by design, can we make a comment on this design? What is a brief explanation behind this design?\n- What is F and A in F1 Score? In the text, it seems F=the set of examples activating a specific SAE latent \"f\". A=binary attribute of a prompt. It seems that the F1 Score is applied **on each SAE latent**, as described in Section 4. How do we get \"A\" in this case? How do we know which \"binary\" attribute of a prompt that the latent f corresponds to? Can we give a more detailed explanation in the text? It would be nice to include some examples of F and A (specifically A)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Thorough study in the case of IOI. The use of IOI in the study mainly because we know the \"feature\" that more or less directly affect the outputs. This minimizes the possibilities of the cases that \"SAE features\" may not coincide with the features that human understands. This was also implied in Section 4.2.\n- Extensive study on the paper that includes a lot of results in the appendix.\n- Some proposed methods such as Necessity and sufficiency, to the control method proposed can be applied to a more general case, as described in Section 4. \n- This paper also addresses a lot of the details on small nuances in evaluation of SAEs. This includes the discussion in Session 4.2, and the paragraph on \"Preventing...\" in Section 4.3."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a way to compute sparse feature dictionaries that disentangle model computations similar to an SAE fashion using a supervised method; as well as introduces several ways to evaluate these feature dictionaries from different perspective such as necessity, sufficiency and controls of the feature dictionaries towards a specific task. The author applies the work to the indirect object identification (IOI) using GPT-2 Small, and compare the feature dictionaries obtained by their method, vs some other recent SAE variants such as Gated SAE adn Top-K SAE."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Overall**\n\n- As this paper is a case study to IOI, it is somewhat restrictive. I don't think it is necessarily a weakness of the paper though as I don't think anything can be done to address this \"weakness\". It could be viewed as a stepping stone for future work as well.\n- Presentation and Readability are the main weakness of this work. For example, experiment results in Section 3 includes a lot in section 4 such as description of FullDistribution SAE, Task SAE, TopK, Gated SAE etc. There are crucial formulae and methods that should be in the main text, but instead they are in the Appendix. For instance, the necessity formula in Section 7.4 ~line 878 in the appendix.\n- The section on \"Interpretability\" such as in Section 3.2, Figure 5 and Section 4.4 are out of place. The author even mentioned that \"we stress that this experiment is of a rather exploratory and qualitative nature\". Putting these sections in the appendix would be much more coherent.\n- Section 6 on discussions and limitations is too short. There should be more discussions on the experiment results. Possibilities of applying similar techniques to a general settings (as IOI provides the \"known features\"). Including a discussion on future work may be useful.\n\n**Detailed issues**\n\n- On Sufficiency in \"Figure 3. Left\" - as the experiment is to test whether the reconstructions are sufficient, we would hope to compare the logit difference of the original and the reconstruction. The method in Figure 3 shows the ratio of the *average* logit difference with and without the intervention. This may not be the best because the averaging may hide the changes in logit difference with the intervention for each example. A simpler method like the average changes (absolutized) of the logit difference _may_ work. The authors can also opt to include a brief discussion on this so that it does not seem like they were hiding something. For example, showing the distribution of absolute changes of the logit difference in a histogram, or some statistics on it.\n- Necessity. The experiment is to test whether the reconstructions are necessary. This means that we want to show that without the reconstructions, the model cannot do so well, resulting a drop in model performance - i.e. logit difference. Thus there are three quantities\n1) The reconstruction $\\hat{a}(p)$\n2) The proposed quantity; average plus SAE error term\n3) The average.\n\nShowing necessity should be showing the difference between 1) and 2) are large. However, in the main text of the paper, it opts to show the difference between 2) and 3) only. A crucial formula and description of the necessity score directly addressing this problem is in the appendix (Section 7.4, around line 878), which in my opinion, should be in the main text.\n- \"Probing accuracy\" in \"Control\": Seems out of place? Is it referenced somewhere else in the paper? Also no results were shown? In the absence of results, this section on probing accuracy does not seem to achieve the goal of the section: \"measures the degree to which the supervised feature dictionaries disentangle the different attributes\". This is because the probe is linear, which itself can \"disentagle\" the attributes. I think it would be better to either remove this section (put this in the appendix), or show some experiment results with discussions on the way to disentangle the attributes.\n\n**Minor**\n- Section 4.3 Expressing... line ~361. edit 3 --> (3) or Equation 3.\n- Results (line ~424). 
objective 4 --> (4) or Equation 4.\n- missing parenthesis at line 434 (resp. $a(p_t)$ **)** by their...\n- broken references in appendix (line 883, 935, 1849)"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We compute and validate *supervised* sparse feature dictionaries on the IOI task, and then compare SAEs against them"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Principled Evaluations of Sparse Autoencoders for Interpretability and Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Njl73JKjB},\nnote={under review}\n}"
},
"abstract": {
"value": "Disentangling model activations into human-interpretable features is a central\nproblem in interpretability. Sparse autoencoders (SAEs) have recently attracted\nmuch attention as a scalable unsupervised approach to this problem. However, our\nimprecise understanding of ground-truth features in realistic scenarios makes it\ndifficult to measure the success of SAEs. To address this challenge, we propose\nto evaluate SAEs on specific tasks by comparing them to supervised\nfeature dictionaries computed with knowledge of the concepts relevant to the\ntask. \n\nSpecifically, we suggest that it is possible to (1) compute supervised sparse\nfeature dictionaries that disentangle model computations for a specific task;\n(2) use them to evaluate and contextualize the degree of disentanglement and\ncontrol offered by SAE latents on this task. Importantly, we can do this in a\nway that is agnostic to whether the SAEs have learned the exact ground-truth\nfeatures or a different but similarly useful representation.\n\nAs a case study, we apply this framework to the indirect object identification\n(IOI) task using GPT-2 Small, with SAEs trained on either the IOI or OpenWebText\ndatasets. We find that SAEs capture interpretable features for the IOI task, and\nthat more recent SAE variants such as Gated SAEs and Top-K SAEs are competitive\nwith supervised features in terms of disentanglement and control over the model.\nWe also exhibit, through this setup and toy models, some qualitative phenomena\nin SAE training illustrating feature splitting and the role of feature\nmagnitudes in solutions preferred by SAEs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"mechanistic interpretability",
"sparse autoencoders",
"evaluations"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/74bda01df38d1d5976268ce4df59e52b41f1e723.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Towards Principled Evaluations of Sparse Autoencoders for Interpretability and Control"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1NkrxqY4jK | Towards Understanding Safety Alignment: A Mechanistic Perspective from Safety Neurons | main | Active | Large Language Models;Mechanistic Interpretability;Safety Alignment;Neuron | interpretability and explainable AI | 3;5;6 | 4;4;3 | 3;2;3 | 2;2;3 | 2;3;3 | 4.666667 | 3.666667 | 2.666667 | 2.333333 | 2.666667 | -0.755929 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What motivated your decision to focus exclusively on MLP neurons, given that prior work has shown attention heads are crucial for refusal and safety behavior?\n\n- Have you considered validating your hypothesis about helpfulness and safety mechanism overlap using models simultaneously trained on both helpful and harmful data?\n\n- Are the probing results, primarily a negative result? If so, the section should be edited to clarify that."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The authors tested their method on a variety of model families (LLaMa2, Mistral, and Gemma), and used a variety of different datasets and cost models to evaluate safety. This helps increase confidence that the neurons are actually responsible for general safety behavior, and not just patterns present in a particular dataset/grading scheme.\n\n- The authors show that the projections of their safety neurons onto the unembedding of the model, result in different tokens than toxicity neurons identified in previous work [1]. This distinction highlights that more complex instruction-tuned models have more nuanced mechanisms for dealing with safety than simply downweighting neurons that respond with toxic content. \n\n[1] Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, & Rada Mihalcea. (2024). A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel methodology for identifying specific MLP neurons that contribute to safety alignment in large language models. The authors present two complementary techniques: inference-time activation contrasting, which identifies neurons by comparing their activation patterns between pre- and post-safety-finetuned model checkpoints; and dynamic activation patching, which employs causal interventions to quantify the extent to which the identified neurons are responsible for the model's safety behaviors.\n\nThe authors show that inference-time activation contrasting can robustly identify neurons that are causally responsible for safety behavior (as measured by dynamic activation patching), on a wide range of benchmarks.\n\nThrough extensive experimentation, the authors demonstrate several key findings. When safety neurons are patched into instruction-trained models that were finetuned for helpfulness, it increases safety but reduces helpfulness. The reverse effect is also observed, suggesting that safety and helpfulness behaviors rely on similar neural mechanisms - providing mechanistic evidence for the alignment tax hypothesis. Additionally, the identified safety neurons can be used for harmful prompt classification to prevent unsafe model outputs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The primary contribution of this work lacks sufficient novelty in the context of existing research. Prior work has already demonstrated successful localization of safety-relevant components in language models across multiple architectural levels, including neurons [1], parameters [2], residual activations [3], attention heads [4] [5], and layers [6]. While the authors occasionally reference some of these works throughout the paper, they fail to provide a comprehensive discussion of this existing research in either the related work section or the discussion.\n\n\n- The authors fail to adequately justify their focus on MLP neurons as the optimal level of abstraction for localizing safety behavior in language models. While they concentrate exclusively on neurons, prior work has demonstrated that safety behaviors emerge across multiple architectural components, particularly in attention heads and residual stream activations. The decision to analyze only neurons, while excluding these other important components, requires stronger theoretical or empirical justification. This limitation is particularly notable given that existing research has specifically identified attention heads as crucial contributors to refusal behavior [4].\n\n- The paper’s main contribution beyond identifying safety neurons is showing that helpfulness and safety training utilize similar mechanisms, which accounts for the “alignment tax” seen during safety training. However, the evidence provided in favor of this hypothesis is limited. The evidence can also be explained by dynamic activation patching not being a very good way of transferring specific mechanisms between different checkpoints. The authors should also look at models finetuned on both helpful and harmful data at the same time (HHH trained model), and test whether safety and helpful neurons still conflict.\n\n- The classification results in Section 6 are very misleading. The authors suggest that safety neurons show promise in assisting with harmfulness classification. However, the results in Appendix E suggest that safety neurons aren’t that much more useful for classifying harmfulness compared to random neurons (with random neurons being better when using 1500 neurons). This suggests that the method does not actually localize safety neurons, or that localization is not very useful for probing for harmfulness. Also, if the authors are going to claim that safety neurons are useful for building defenses that improve safety, they should compare it against similar setups such as in [3].\n\n[1] Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K. Kummerfeld, & Rada Mihalcea. (2024). A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. \n\n[2] Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, & Peter Henderson. (2024). Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications.\n\n[3] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, & Dan Hendrycks. (2023). Representation Engineering: A Top-Down Approach to AI Transparency.\n\n[4] Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, & Neel Nanda. (2024). 
Refusal in Language Models Is Mediated by a Single Direction.\n\n[5] Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Junfeng Fang, & Yongbin Li. (2024). On the Role of Attention Heads in Large Language Model Safety.\n\n[6] Shen Li, Liuyi Yao, Lan Zhang, & Yaliang Li. (2024). Safety Layers in Aligned Large Language Models: The Key to LLM Security."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "## Questions:\n* How does the proposed approach for identifying \"safety neurons\" differ from prior methods that target other types of critical neurons in LLMs?\n* Can the \"dynamic activation patching\" method be generalized to other alignment applications, such as aligning models with values beyond safety (e.g., fairness)?\n* Do you find any mechanistic insight? For example, did you observe specific patterns among the \"safety neurons\" related to particular types of safety risks, such as misinformation or toxicity?\n* For safeguard applications, what is the overhead of your proposed approach?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "## Strengths\n* The topic of LLM safety is highly relevant and timely.\n* The paper makes solid contributions by:\n * Identifying safety neurons in three open-source LLMs.\n * Proposing an effective safeguard application."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "## Summary\n\nThe authors propose methods to identify \"safety neurons\" within large language models (LLMs) that are responsible for safety behaviors. They introduce \"inference-time activation contrasting\" to pinpoint neurons active in aligned models but inactive in unaligned ones, and \"dynamic activation patching\" to assess the causal impact of these neurons on safety. These findings suggest a pathway toward more controlled and robust alignment of LLMs with human values and safety requirements."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Weaknesses\n* Novelty Concerns: The novelty of the proposed approach is unclear. Previous studies have investigated critical neurons within LLMs. The authors should clarify how their methods differ from or improve upon existing approaches.\n* Limited Discussion: The paper lacks a sufficient discussion on how the proposed methods relate to existing representation engineering techniques (https://arxiv.org/pdf/2310.01405). A deeper comparison would help contextualize their contributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Understanding the safety mechanism of LLMs is a crucial research problem.\n1. This paper focuses on various aspects of the proposed interpretability methods, including empirical observation on neurons, transferability, and potential application, making the contribution a comprehensive framework."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Focusing on the safety mechanism of LLMs, this paper proposes (1) inference-time activation contrasting, to locate safety neurons, and (2) dynamic activation patching, to evaluate their causal effects on model safety. The key observation is that only a few (5%) neurons contribute to the safety of the model. This paper also proposes applications of the observations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation of this paper can be substantially improved. Many terms are not well explained in the paper, e.g. cost scores in Table 3, $(IA)^3$ in Section 4.1\n1. The observation that a few safety neurons contribute to the safety of LLMs has already been spotted in some related work, but they are not cited and discussed.\n - On Prompt-Driven Safeguarding for Large Language Models. ICML 2024\n - Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications. ICML 2024\n3. It seems that the 3 LLMs used are already aligned for safety (at least, to a certain degree) before they are released. What is the alignment in 4.1 here?\n4. In my opinion, it would be necessary to include some advanced jailbreaking attacks for evaluation (both for the main observation and the application), since current LLMs can easily refuse to answer vanilla harmful questions.\n5. Though evaluated 3 models, I still think the model scope is quite limited, e.g. all 3 models are in 7b size, but can the conclusion generalize to larger models?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "In this paper, we interpret the mechanism behind safety alignment via neurons and analyze their properties."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Understanding Safety Alignment: A Mechanistic Perspective from Safety Neurons},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1NkrxqY4jK},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) excel in various capabilities but pose safety risks such as generating harmful content and misinformation, even after safety alignment. In this paper, we explore the inner mechanisms of safety alignment through the lens of mechanistic interpretability, focusing on identifying and analyzing *safety neurons* within LLMs that are responsible for safety behaviors. We propose *inference-time activation contrasting* to locate these neurons and *dynamic activation patching* to evaluate their causal effects on model safety. Experiments on multiple prevalent LLMs demonstrate that we can consistently identify about $5$% safety neurons, and by only patching their activations we can restore over $90$% of the safety performance across various red-teaming benchmarks without influencing general ability. The finding of safety neurons also helps explain the ''alignment tax'' phenomenon by revealing that the key neurons for model safety and helpfulness significantly overlap, yet they require different activation patterns for the same neurons. Furthermore, we demonstrate an application of our findings in safeguarding LLMs by detecting unsafe outputs before generation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models",
"Mechanistic Interpretability",
"Safety Alignment",
"Neuron"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7c3070382063a605d768af659ba75152ef8103fd.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/fb4152e30831d19293085d9c2f8c4e0fee791744.zip"
},
"title": {
"value": "Towards Understanding Safety Alignment: A Mechanistic Perspective from Safety Neurons"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1NprT9Kz0d | TexTailor: Customized Text-aligned Texturing via Effective Resampling | main | Active | 3D texture synthesis;diffusion model;resampling | applications to computer vision, audio, language, and other modalities | 3;5;5;5 | 4;5;4;4 | 3;3;3;3 | 2;2;2;2 | 1;3;3;3 | 4.5 | 4.25 | 3 | 2 | 2.5 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors clarify the novel aspects of TexTailor? The current version of TexTailor appears to be a combination of existing methods. It would be helpful if they could elaborate on any unique modifications within the resampling scheme or improvements made to accelerate the fine-tuning process.\n2. While the paper discusses performance preservation loss qualitatively, a quantitative analysis of its impact on quality would clarify its specific role. Including an ablation study of the performance preservation loss in Table 2 could better highlight its contribution to TexTailor’s performance.\n3. Since the viewpoint refinement uses a fixed threshold, how sensitive is the model’s performance to changes in this parameter?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper proposes TexTailor to address view-consistent texture synthesis by combining inpainting with resampling and fine-tuning.\n2. Method and results are presented clearly and logically, making the paper easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes TexTailor, a method for text-to-texture synthesis utilizing an inpainting approach to achieve view-consistent textures. To address common challenges in texture generation, TexTailor introduces a resampling scheme and fine-tuning to maintain texture consistency across viewpoints. Furthermore, it employs adaptive viewpoint refinement for efficient viewpoint sampling."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While effective, the approach primarily combines existing techniques, with limited emphasis on novel contributions. The paper could be strengthened by enhancing the resampling scheme or accelerating the fine-tuning phase."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Can the authors provide a user study that measures consistency, alignment, quality, and realism? This should provide a better idea on the quality of the results on the actual goals that the paper aims to achieve. \n- I believe that computational costs should be compared more explictly with previous work, so as to better understand the quality/cost pareto frontier in this line of work. \n- I suggest the paper should provide a CLIP-guided text-image alignment metric. \n- I suggest the paper should provide a more upfront discussion of its limitations.\n- The paper should include a detailed analysis of 2D texturing models.\n- The paper should include a detailed analysis of text-to-avatar models, as well as quantitative and qualitative comparisons. \n- How does the model behave with non-diffuse objects? Very few glossy, metallic, or translucent objects are shown. \n- The paper should include many more results, at least in the supplementary material.\n- Results should include standard deviations to better understand the differences between methods in terms of LPIPS, FLIP, etc."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This paper adequately identifies problems in previous methods for text-driven object texturing, including lack of texture consistency and graduality in texture changes. The origin of this problems are identified as being caused by insufficient integration, predefinition of camera positions, and autorregresion. The paper introduces changes to these methods, to enhance their quality and consistency. This is an important line of research, as these works are becoming more prevalent in the literature and industrial applications. \n- This paper tackles an important and salient problem in the literature. \n- The results shown in the paper indeed suggest that the proposed method provides less gradual changes in texture properties. In objects with different parts, TexTailor shows superior performance in assigning different texture to different parts, than previous work do. \n- The proposed method is sound, and the ideas proposed here are very well suited for the task the paper is aiming to solve. In this sense, the paper is correct as far as I am familiar with the literature and the problems in 3D content generation.\n- This paper is well written and easy to follow. The problems identified in previous work are clearly stated, and the ideas to solve them are easy to understand and very well explained. \n- Code is provided as a supplementary material, which should greatly enhance reproducibility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method for text-driven 3D object texturing. Previous work on this important topic fail in some areas, according to the paper, including: Consistency and the gradual change in textures assigned to the object. The paper aims to solve these issues, by introducing 2 ideas: First, the model leverages a resampling scheme for better integration of previously generated texture during the diffusion process, and second, the model fine-tunes a depth-aware diffusion model with these resampling textures. With these contributions, the method is said to achieve higher quality and consistency than previous work, measured on a set of datasets and perceptual metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While sound, the ideas introduced in this work are somewhat limited in scope and the paper fails to be compelling that they are particularly effective. In this sense, I am not convinced about the extent upon which these contributions will be impactful in the literature. Furthermore, the resampling scheme introduced in this paper is not new, as it is borrowed from previous work. Therefore, the ideas introduced here are not particularly novel nor signficant.\n- Insufficient results are shown on the paper. It is hard to understand the capabilities of the model with the amount of results shown here. In particular, only four results are shown in the comparisons on Objaverse, and these results are not particularly compelling (for example, in the hammer, the method assigns a metal texture to the handle and a wooden texture to the head, which is not correct and arguably a worse result than TEXTure). Only two results are shown in ShapeNet car, and the ablation study is shown exclusively on a single object. Significant more results should be provided to convince the reader that the method is more effective than previous work.\n- I am unconvinced about the metrics used in this paper. While standard for 3D object texturing work, LPIPS and FID do not adequately measure text-to-image alignment. CLIP-based metrics should be used in conjunction to the ones shown in this paper, to be more informative about how well this model is generating results aligned with the prompts. While visually more consistent than previous work, this model seems to struggle more than previous methods (particularly TEXTure and Text2Text) in asigning the correct texture to each part of the object. This is not something that LPIPS and FID can measure correctly. \n- A user study should be provided for better comparisons between methods, across a bunch of dimensions, including alignment, realism, quality, consistency, etc. \n- The quantitative results are not particularly convincing. The ablation study does not show significant improvements across the metrics used, particularly LPIPS, and without standard deviations of the errors it is hard to understand whether the improvements are actually statistically significant. Therefore, the ablation fails to convince that the proposed contributions are actually valuable and effective. \n- No results are shown on 3D human avatar texturing, which is a very closely related and relevant line of work. \n- Related to the previous point, the analysis of the related work is lacking on a set of areas. The most relevant is the work on 3D human texturing. Relevant work include: SMPLitex: A Generative Model and Dataset for 3D Human Texture Estimation from Single Image (BMVC 2023), UVMap-ID: A Controllable and Personalized UV Map Generative Model (ACMMM 2024), TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation (ECCV 2024), etc. Besides, 2D texturing methods with generative models should also be included as part of the related work. \n- Limitations are not adequately addressed or discussed. From the paper, it seems like the only limitation of the proposed method is its computational cost. However, the results shown in the paper indicate that the method is by no means perfect and it struggles with consistently assigning the appropriate texture to different parts of the object, among other limitations. These should be mentioned more explicitly. \n- Contributions are very overstated. Sentences like \"... demonstrate the superior performance of TexTailor in .... \" or \" ... 
surpases SOTA texture synthesis methods driven by language cues\" should be empirically demonstrated or removed altogether.\n- The paper suggests some reasons why previous methods fail (autorregressive inference, integration of previous information, fixed camera positions, etc), but it fails to provide adequate evidence that these actually limiting factors."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. What are the difficulties for the text-driven methods mentioned in Lines 373 and 374? \n2. Is LPIPS (Section 4.1, Evaluation metrics) a good metric to evaluate view consistency, as LPIPS is sensitive to spatial information? Given that the view angles are known, would it make more sense to reproject one of the views to another and then compute LPIPS between the projected view and the other one?\n3. What does the performance preservation loss do in Eqn. (10)? Why would it be effective at a high level? \n\nSome of the questions may have been entangled with the Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "## Motivation\nThe paper starts with an analysis of the limitations of previous methods. It hypothesizes those inconsistent results from previous methods are mainly coming from an inappropriate way of integrating information from previously synthesized textures. Given this agile insight, it tries to addresses the inconsistency issue by proposing a new approach to better use information across different viewpoints and previously synthesized textures. \n\nThe motivation of the paper is more about a technical aspect. The analysis of previous approaches makes sense. \n\n## Method\n- In Section 3.2, the problem of ControlNet for incorporating multi-views is interesting. \n- In Section 3.3, the analysis of setting viewpoints sounds interesting (Line 303-310). Using a proportion (Eqn. (12)) is an intuitive way. \n\n## Experiments\n- TexTailor outperforms the previous methods in terms of view consistency and quality, as shown in Table 1. \n- The ablation study shows a progressive improvement of each component. \n\nThe authors also show the limitation of TexTailor - the processing time could be further improved."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper focuses on **consistent texture synthesis**. The authors analyze the artifacts in the current approaches and propose a new approach, **TexTailor**, to keep synthesized textures consistent across different viewpoints. TexTailor equips with a resampling scheme for integrating previous textures, a finetuned depth-aware T2I model trained with performance preservation loss, and an adaptive viewpoint refinement strategy for inpainting. \n\nThe authors evaluate the performance of TexTailor on a subset of Objaverse dataset, and showcases that TexTailor outperforms state-of-the-art methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "What concerns me the most in this paper is the motivation behind some technical parts and its unclear writing. \n\n## Motivation\n- In Line 93, it is not clear to me why finetuning a depth-aware T2I model matters. Maybe including a brief explanation could be helpful. \n\n## Method\n- In Section 3.1, the authors propose a non-Markov process to reduce the sampling steps. However, the benefits of it is confusing to me. Would it involve a faster sampling speed? If it would, there is not result to support it. On the other hand, the authors mainly show the effects of resampling is to \"preserve the texture properties\" (Line 480). This makes me confused about the motivation of newly proposed resampling trick. \n\n## Experiments\n- It does not make sense to me the authors choose to not compare with text-driven methods (Line 373-374) just because they have \"difficulties\" when optimizing textures for \"cars\". Wouldn't it be a good chance to showcase the superiority of TexTailor? \n- The authors do not show any viewpoint-varying results in video format, making it less convincing that TexTailor achieves a good view consistency. \n- It is hard to see obvious improvement from TexTailor in Figure 5, especially comparing with Text2Tex. Perhaps including some bounding boxes and zoom-in patches would help. \n\n## Writing/Delivery\nThe writing of the papers can be further improved. For example,\n- Most of the figures in the paper are compressed, resulting in blurriness and sometimes hard to read. \n- In Fig.1, citing previous methods (i.e., Text2tex and Texture) might make readers easier to check the idea of them. \n- It is challenging for readers to digest Eqn. (6) - (8). A good strategy to improve it might be similar to what Repaint shows in their paper: demonstrating all the terms in a figure with pictures for a vivid demonstration. Current delivery of newly proposed resampling way in Section 3.1 is hard for readers to understand, especially about the main difference between it and Repaint. \n- Fig.3 does not deliver a clear message for each component. For example, simply giving readers two equations does not help them to understand what is going on. It might be helpful if the authors can name these two equations in high level."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I do not get the correlation of the third paragraph in Section I with the main content. I think the geometry conversion is not a problem to be solved in this paper. And the SDS can be directly applied on the mesh (DMTET) which does not need conversion.\n2. The authors mentioned that \" we can achieve high-quality texture with only 30 steps, significantly fewer than the 250 steps required by the original resampling method for a single view.\" Other methods like texture and text2tex only sampled no more than 50 steps for each view. Where is this \"250\" from.\n3. What is the meaning of resampling steps? Does it mean you have to sample R steps for each view at each timestep?\n4. The authors used \"resampled images near the first viewpoint to extract images of the same object from different angles in the output domain of the diffusion model\" . How to make sure that the viewpoints near the first viewpoint maintain the same style as the first view.\n5. In the loss function of Eqn. 10, the target is constraining the new noise estimation to be the same as the original noise estimation. The what is the meaning of training? The optimal case is keeping the original model unchanged.\n6. Any 3D results of the method? I prefer to see the rendered 360-degree videos of results.\n7. The attention feature injection as in [Text-Guided Texturing by Synchronized Multi-View Diffusion] can help to reduce the problem of the autoregressive inpainting. Have you tried this?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Proposed a better approach for viewpoint sampling.\n2. The performance of the proposed method is better than listed SOTAs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a novel architecture that can generate more consistent 3D texture than TEXTure and Text2Tex."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The resampling strategy is similar to the [TexFusion:Synthesizing 3D Textures with Text-Guided Image Diffusion Models] and [TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling]. Please explain the difference.\n2. In Fig. 1b, it seems that the proposed method is over-smoothed. Please explain the reason.\n3. Please answer the following questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024textailor,\ntitle={TexTailor: Customized Text-aligned Texturing via Effective Resampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1NprT9Kz0d},\nnote={under review}\n}"
},
"abstract": {
"value": "We present TexTailor, a novel method for generating consistent object textures from textual descriptions. Existing text-to-texture synthesis approaches utilize depth-aware diffusion models to progressively generate images and synthesize textures across predefined multiple viewpoints. However, these approaches lead to a gradual shift in texture properties across viewpoints due to (1) insufficient integration of previously synthesized textures at each viewpoint during the diffusion process and (2) the autoregressive nature of the texture synthesis process. Moreover, the predefined selection of camera positions, which does not account for the object's geometry, limits the effective use of texture information synthesized from different viewpoints, ultimately degrading overall texture consistency. In TexTailor, we address these issues by (1) applying a resampling scheme that repeatedly integrates information from previously synthesized textures within the diffusion process, and (2) fine-tuning a depth-aware diffusion model on these resampled textures. During this process, we observed that using only a few training images restricts the model's original ability to generate high-fidelity images aligned with the conditioning, and therefore propose an originality preservation loss to mitigate this issue. Additionally, we enhance the synthesis of natural textures by adaptively adjusting camera positions based on the object's geometry. Experiments on a subset of the Objaverse dataset and the ShapeNet car dataset demonstrate that TexTailor outperforms state-of-the-art methods in synthesizing view-consistent textures."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"3D texture synthesis",
"diffusion model",
"resampling"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/53ee5823c363ea76a71e9399f7c61e33144154d3.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/582db4190b435cc71495b0eb58301a54367723a3.zip"
},
"title": {
"value": "TexTailor: Customized Text-aligned Texturing via Effective Resampling"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Nwsqw0sTm | Open-Vocabulary Object Detection for Incomparable Spaces | main | Active | Multimodal learning;object detection | learning theory | 5;5;6 | 4;3;3 | 2;2;3 | 2;2;3 | 2;2;3 | 5.333333 | 3.333333 | 2.333333 | 2.333333 | 2.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Pls see the weeknesses above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It combines textual descriptions and visual examples to identify objects, leveraging the strengths of both modalities to improve detection accuracy.\n2. Instead of simple feature fusion, VOCAL focuses on aligning the contextual relationships between objects in text and images, which is a novel way to handle the misalignment problem in heterogeneous data. The model can adapt to new categories or unseen objects without retraining, which is a significant advantage in dynamic environments where new objects frequently appear.\n3. The evaluation shows that the model outperforms existing OVDet models, setting new benchmarks in detecting rare objects."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the challenges of open-vocabulary object detection (OVDet), where the goal is to detect objects at inference time that were not seen during training. The authors propose an approach called VOCAL (Vocabulary Alignment Classifier), which integrates visual and textual embeddings by aligning both feature-level and relational structures across these two modalities. This method aims to bridge the gap between visual and textual data, enabling robust detection even when input from one modality is weak or ambiguous."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method involves complex alignment mechanisms that could be computationally expensive and may require substantial resources for training and inference.\n2. The performance of VOCAL heavily relies on the quality of the text and image embeddings. If the embeddings are not representative, the alignment may not be effective.\n3. While the model can adapt to new categories, the scalability to a very large number of categories or extremely rare objects is not explicitly discussed and could be a challenge.\n4. Although the paper mentions cross-dataset transfer, the generalization of the model to datasets outside of the trained domain is a potential concern that may require further validation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses. My major concern is introducing much more complexity compared with previous methods."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper presents a sophisticated method for OVDet that focuses on relational alignment between visual and textual data, which is a novel approach in the field.\n\n2. VOCAL demonstrates superior performance in detecting rare objects and outperforms existing OVDet models, which is a significant achievement.\n\n3. The model demonstrates a new benchmark in detecting rare objects and outperforms existing OVDet models, which is a substantial achievement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduce an approach for open-vocabulary object detection (OVDet) that aligns relational structures across visual and textual data to enhance the detection of objects, especially unseen or rare objects. The authors propose a model called VOCAL (Vocabulary Alignment Classifier) that shifts from feature fusion to relational alignment, bridging the gap between visual and textual inputs. VOCAL leverages both text descriptions and image examples to identify objects, addressing limitations such as lexical ambiguity, lack of visual specificity, and unknown class names. The evaluation on challenging datasets shows that VOCAL outperforms existing OVDet models and even surpasses fully-supervised detectors in detecting rare objects."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The approach may be more complex and computationally intensive than simpler fusion methods, which could be a limitation in resource-constrained environments.\n\n2. The introduction of the Image and Text Encoder results in a detection process that requires more computation, and fairness compared to other OVDet methods needs to be considered.\n\n3. Some related OVDet methods are missing. For example, Distilling DETR with Visual-Linguistic Knowledge for Open-Vocabulary Object Detection ICCV 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Figure 2 attempts to demonstrate the model’s effectiveness in detecting rare categories, but the examples provided belong to either the frequent or common categories, which does not prove the model’s capability in detecting rare categories. For instance, ‘knife’, ‘skateboard’, ‘belt’, ‘pillow’, and ‘bicycle’ are all frequent categories, while ‘rhinoceros’, ‘goose’, ‘kiwi’, and ‘gull’ belong to common categories. \n2. Please refer to the weakness section."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The feature-level alignment and relational alignment for fusing textual and visual classifiers is very interesting.\n2. The weighted contextual embeddings and prototype discovery respectively optimize the methods for constructing textual and visual classifiers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel open-vocabulary detection method that utilizes both textual and visual classifiers, integrating them through feature-level alignment and relational alignment. The author conducts experiments on LVIS to demonstrate its performance on novel categories."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe method’s pipeline is similar to MMOVD[1], with only minor improvements made to the construction and fusion of classifiers. Overall, the novelty might be quite limited.\n2.\tThe method is similar to MMOVD, but lacks critical experiments comparing it with MMOVD, such as evaluations using IN-LVIS as extra data on the LVIS dataset, and MMOVD’s evaluations on cross-dataset transfer detection.\n3.\tThere are missing experiments that prove the effectiveness of the method. 1) Lack of experiments demonstrating that weighted contextual embeddings improve the performance of a text-based classifier compared to simply averaging; 2) Lack of experiments showing that using feature-level alignment and relational alignment is more effective compared to naive fusion strategies like addition.\n4.\tThe comparison experiments between V-CLS and V-Mean are not reasonable. V-CLS, compared to V-Mean, uses both the prototype discovery strategy and additional transformer blocks as the Visual Aggregator. This setup does not validate the effectiveness of the prototype discovery strategy. According to MMOVD[1], using a Visual Aggregator already performs better than directly averaging various visual embeddings. V-CLS should be compared with a Visual Aggregator that does not use the prototype discovery strategy.\n5.\tThere is a lack of hyperparameter analysis for $\\lambda$ and $\\alpha$.\n6.\tResults of open vocabulary object detection evaluations on the COCO dataset are missing.\n\n[1] Multi-Modal Classifiers for Open-Vocabulary Object Detection, ICML 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024openvocabulary,\ntitle={Open-Vocabulary Object Detection for Incomparable Spaces},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Nwsqw0sTm},\nnote={under review}\n}"
},
"abstract": {
"value": "In open-vocabulary object detection (OVDet), specifying the object of interest at inference time opens up powerful possibilities, allowing users to define new categories without retraining the model. These objects can be identified through text descriptions, image examples, or a combination of both. However, visual and textual data, while complementary, encode different data types, making direct comparison or alignment challenging. Naive fusion approaches often lead to misaligned predictions, particularly when one modality is ambiguous or incomplete. In this work, we propose an approach for OVDet that aligns relational structures across these incomparable spaces, ensuring optimal correspondence between visual and textual inputs. This shift from feature fusion to relational alignment bridges the gap between these spaces, enabling robust detection even when input from one modality is weak. Our evaluation on the challenging datasets demonstrates that our model sets a new benchmark in detecting rare objects, outperforming existing OVDet models. Additionally, we show that our multi-modal classifiers outperform single-modality models and even surpass fully-supervised detectors."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multimodal learning",
"object detection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e03f494377e96e6129529f9710195543a6078ad6.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Open-Vocabulary Object Detection for Incomparable Spaces"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1OGhJCGdcP | Learning subgoal representations from state graphs in goal-conditioned hierarchical reinforcement learning | main | Active | Reinforcement Learning;Graph Representation Learning;Hierarchical Reinforcement Learning | reinforcement learning | 3;3;3;5 | 2;3;3;4 | 1;3;2;2 | 1;2;2;2 | 2;2;2;3 | 3.5 | 3 | 2 | 1.75 | 2.25 | 0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Complexity Management: How do the authors plan to manage the increased complexity introduced by the graph encoder-decoder in practical applications? Are there any proposed strategies to simplify the implementation while retaining performance benefits?\n2. Comparison Metrics: What specific metrics do the authors plan to use in future work to compare G4RL against recent GCHRL methods? Will they consider not only performance but also computational efficiency of integration?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tInnovation: The introduction of a graph encoder-decoder offers a novel perspective on GCHRL, facilitating the online construction of state graphs that yield more effective subgoal representations.\n2.\tGeneralizability: G4RL can be integrated into any existing GCHRL algorithm, making it versatile and applicable across various contexts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel approach—Graph-Guided sub-Goal representation Generation RL (G4RL)—aimed at addressing several key issues faced by existing Goal-conditioned Hierarchical Reinforcement Learning (GCHRL) methods, including sample inefficiency and poor subgoal representations. By introducing a graph encoder-decoder architecture, G4RL effectively leverages the state graph generated during exploration to enhance the performance of existing GCHRL methods. Empirical results demonstrate performance improvements in both dense and sparse reward environments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tIncreased Complexity: Although the graph encoder-decoder adds new functionality, the added complexity does not yield a substantial performance improvement over existing HRAC methods. This raises concerns about implementation and debugging challenges without corresponding benefits.\n2.\tInsufficient Comparisons: The paper lacks comparisons with several recent GCHRL methods, which limits the assessment of the proposed approach's advancements and advantages over established techniques."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The settings and environments considered in the experiments are relatively simple. How does the method scale up?\n2. How sensitive is the method to the value of K : the number of timesteps used by the high-level policy to propose a goal? Is it same across different tasks?\n3. How many seeds were used for the experiments and how were they chosen?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper focuses on an important problem of integrating graphs with Goal-conditioned Hierarchical Reinforcement Learning and improving performance.\n2. The work provides a good motivation for the research problem and its importance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper focuses on Goal-conditioned Hierarchical Reinforcement Learning (GCHRL) setting and introduces a graph encoder-decoder that can evaluate unseen states and enhance performance. This encoder-decoder can be trained on data generated during exploration, and by leveraging the high and low-level intrinsic rewards from the graph encoder-decoder improves performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper can benefit from improving the writing and cleaning up the list of their contributions.\n2. The set of environments / task settings is limited and it would be beneficial to add more tasks.\n3. In some results, the methods are pretty similar. Running more seeds or increasing the difficulty of the experiments could be useful to pull the methods apart."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please address my questions in the weakness section"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "* The paper is clearly written and easy to follow.\n* The proposed method improves upon baseline methods in the AntMaze and AntGather tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel graph encoder-decoder approach designed to evaluate previously unseen states, which can be integrated with existing GCHRL (Graph-based Hierarchical Reinforcement Learning) methods. The proposed model is trained on state graphs generated during exploration, and the authors demonstrate its effectiveness through empirical evaluation. Results indicate improved performance in both dense and sparse reward environments, driven by multi-level intrinsic rewards derived from the graph encoder-decoder."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The proposed method does not function as a subgoal representation learning approach but rather predicts state affinity.\n* The paper lacks strong positioning within the subgoal representation learning literature. It cites only one relevant work and does not provide adequate motivation or comparison with existing methods in this area.\n* The method (G4RL) shares significant similarities with HRAC, raising several concerns: 1. G4RL constructs graphs by hard-thresholding distances in state feature space, while HRAC uses K-step affinity along trajectories. As a result, G4RL is both feature- and hyperparameter-dependent, introducing limitations. 2. HRAC applies a contrastive loss to ensure that the learned subgoal space adheres to a K-step adjacency constraint while preventing subgoals from being too close. How does G4RL regularize representation learning in the latent space? 3. What is the rationale behind combining G4RL with HRAC (i.e., HRAC-G4RL)? Does G4RL require HRAC's regularization in the latent space?\n* The evaluation is limited in several respects: 1. The method is only tested on the AntMaze and AntGather tasks. 2. It is only compared to two pre-2020 methods, HIRO and HRAC, without including more recent subgoal representation learning methods such as LESSON, HESS, and HLPS.\n* There is insufficient analysis of the method's sensitivity to hyperparameters, such as how \\epsilon depends on the environment and state space features."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How is the state representation function $\\phi$ implemented? For example, is it based on neural networks or dimensionality reduction? Please provide specific details of the implementation. \n- What impact do the values of $\\epsilon_{d}$ and parameters $\\alpha_{h}$, $\\alpha_{l}$, and $N$ have on the algorithm's performance? \n- Can it be qualitatively or quantitatively demonstrated that the graph encoder-decoder effectively extracts spatial information? \n- Has the representation of subgoal spatial distance been compared with other methods, such as [1]? Does it show advantages over these approaches?\n\nIf the author can address the aforementioned weaknesses and questions, I will consider increasing the score.\n\n[1]Park, Seohong, Tobias Kreiman, and Sergey Levine. \"Foundation Policies with Hilbert Representations.\" Forty-first International Conference on Machine Learning."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper introduces the G4RL approach with a degree of originality, and the presentation is clear, effectively explaining the proposed method in a way that is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel architecture that employs a graph encoder-decoder to summarize spatial information into subgoal representations and constructs a world model based on the state graph for the agent to generate auxiliary rewards in both the high-level and low-level policies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The primary experiments are conducted in a limited range of environments.\n- The ablation studies are insufficient, lacking a comprehensive analysis of key parameters such as $\\epsilon_{d}$, $\\alpha_{h}$, $\\alpha_{l}$, and $N$. The existing experimental results do not adequately support the significance of these parameters as stated in the methods section.\n- There is no comparison with other representation methods to demonstrate the advantages or disadvantages.\n- The learned world model is influenced by the current policy distribution, and it may not accurately reflect the actual world model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning subgoal representations from state graphs in goal-conditioned hierarchical reinforcement learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1OGhJCGdcP},\nnote={under review}\n}"
},
"abstract": {
"value": "The integration of graphs with Goal-conditioned Hierarchical Reinforcement Learning (GCHRL) has recently gained attention, as the intermediate goals (subgoals) can be effectively sampled from graphs that naturally represent the overall task structure in most RL tasks. However, some \nexisting approaches often rely on domain-specific knowledge to construct these graphs, limiting their applicability to new tasks. \nOther graph-based approaches create graphs dynamically during exploration but struggle to fully utilize them because they have problems passing the information in the graphs to newly visited states. \nAdditionally, current GCHRL methods face challenges such as sample inefficiency and poor subgoal representations. In this paper, we present a solution to these issues through the development of a graph encoder-decoder that can evaluate unseen states. \nOur proposed method, Graph-Guided sub-Goal representation Generation RL (G4RL), can be incorporated into any existing GCHRL method to enhance performance. \nWe show that the graph encoder-decoder can be effectively implemented using a network trained on the state graph generated during exploration. Empirical results indicate that leveraging high and low-level intrinsic rewards from the graph encoder-decoder significantly enhances the performance of state-of-the-art GCHRL approaches in both dense and sparse reward environments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Reinforcement Learning",
"Graph Representation Learning",
"Hierarchical Reinforcement Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e27f740d70aa6602a315e05c08830c6c469544de.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learning subgoal representations from state graphs in goal-conditioned hierarchical reinforcement learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Ogw1SHY3p | Monet: Mixture of Monosemantic Experts for Transformers | main | Active | large language models;mechanistic interpretability;monosemanticity;mixture of experts;knowledge unlearning | interpretability and explainable AI | 3;6;6;6 | 3;3;4;4 | 2;3;3;3 | 1;3;3;3 | 1;2;2;3 | 5.25 | 3.5 | 2.75 | 2.5 | 2 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How exactly were the top experts by subdomain chosen for the Gemma-2B SAEs? Note that SAEs have no notion of probability over the \"experts\", unlike the MONET model, and I could not find this addressed in the paper. Do you pass the hidden SAE activations through a softmax first? \n- What is the scale in figure 2?\n- Have you tried running the MONET features through an automated interpretability pipeline like https://github.com/EleutherAI/sae-auto-interp?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper tackles an interesting and important question for the field: instead of interpreting LLMs post-hoc, can we directly train them in a way that results in interpretable weights? \n\t- This adds to existing work, such as backpack LLMs https://arxiv.org/abs/2305.16765 and codebook features https://arxiv.org/abs/2310.17230\n- The proposed architecture is interesting, can (in principle) represent a large number of experts, and performs on par with the LLaMA baseline of roughly the same parameter count.\n- The applications to targeted erasure of knowledge are very interesting and relevant to the field.\n- The writing is clear"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a new transformer architecture that replaces MLP layers in the standard decoder-only transformer architecture with a type of sparse coding layer which encourages only a small number of hidden neurons to activate on each given input. The construction is also motivated by, and borrows ideas from, the mixture of experts (MoE) literature. The primary motivation of this new architecture is to help interpretability by building something akin to a wide Sparse Autoencoder (SAE) into the MLP layers of the decoder-only transformer architecture in a scalable way, so that we can directly train for sparse (and thus hopefully interpretable) internal activations.\n\nIn more detail:\n- the MLP layer is viewed as an associative memory, and replaced with a sparsely activating version inspired by the paper \"Large memory layers with product keys\". \n\t- The MLP layer is replaced by multiple smaller MLP subnetworks (\"experts\") that share parameters in a specific way inspired by the product idea from \"Large memory layers with product keys\" to effectively represent many experts using only a few trainable parameters. \n\t- A sparse subset of the experts is chosen to produce the final output as an expectation over these layers' outputs (similar to attention)\n\t- There are other engineering optimizations used to make the computation more efficient.\n\t- Finally, auxiliary loss terms are added, encouraging the experts to activate uniformly on average (\"load balancing\") and each token to have a highly activating expert (ambiguity loss).\n- This new architecture is trained on 100B tokens sampled from the FineWeb-Edu dataset (a subset of experiments also uses a programming dataset), using LLaMA trained on the same dataset as a baseline across approx. 850M, 1.4B and 4.1B parameters. The MONET architecture uses an effective count of $2^18=262,144$ experts. Comparisons on question-answering benchmarks such as MMLU show that the architecture performs mostly on par with the LLaMA baseline. \n- As an additional baseline, SAEs for Gemma 2B are used to patch in Gemma-2B's original activations, and the performance drop due to the SAEs is measured. \n- Some qualitative analyses of the contexts that activate a given expert subnetwork are performed.\n- The architecture is then applied to selectively delete model knowledge in three setups: subject-specific knowledge in MMLU (e.g. delete only knowledge of chemistry but not economics etc.), programming language-specific knowledge on a code dataset (e.g. delete only knowledge of Python but not Java), and purging toxic experts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The lack of detailed interpretability baselines makes it difficult to evaluate the strength of the results. \n\t- For example, the only interpretability method used as a baseline is patching reconstructions from SAEs for Gemma-2B. However, it is not reported what sparsity these SAEs achieve compared to the (effective?) sparsity of MONET. This makes it difficult to make sense of the results. \n\t- The only relevant baseline here is using SAEs at the MLP layers, because this matches the MONET setup; so, the residual stream SAEs seem irrelevant for this work?\n\t- Furthermore, SAEs are trained to reconstruct activations coming from the original model being studied, and iteratively applying the SAE reconstructions to MLP layers may take downstream activations off-distribution, leading to an accumulation of errors due to SAE composition. You may argue that this is just a drawback of the SAE paradigm that MONET avoids, and the comparison is still fair. However, from my point of view, the primary goal of SAEs is to find interesting concepts used by the model, and reconstruction is secondary to that (and being able to chain SAE reconstructions is even more secondary). So, ideally the baseline would compare the \"monosemanticity\" of MONET features vs SAE ones.\n\t- A baseline using the ordinary MLP neurons of the LLaMA model would be very valuable to make the point that MONET discovers more interpretable structure compared to the neuron basis\n- The paper would benefit from a discussion of, and comparison with, related work, such as backpack language models and codebook features.\n- Perhaps adding extra bells and whistles like instruction tuning or multimodality distracts from the main goal of the paper, which is to establish the usefulness of the new architecture for interpretability (which I believe can be achieved or falsified in a more basic setup)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I listed my question in the weaknesses section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "I like the idea of the paper. Some earlier works noticed that experts display some monosemanticity [1,2] and it is great to see this work push this idea. I also think that the set of experiments is very convincing and I believe that this work may be influential for getting more interpretable neural networks.\n\n[1] Fedus, William, Barret Zoph, and Noam Shazeer. \"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.\" Journal of Machine Learning Research 23.120 (2022): 1-39.\n\n[2] Fedus, William, Jeff Dean, and Barret Zoph. \"A review of sparse expert models in deep learning.\" arXiv preprint arXiv:2209.01667 (2022)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the use of Mixture of Experts as a way to have more interpretable models in the context of polysemanticity. They change the standard MoE architecture in that they use product key retrieval technique as a router and they have experts associated with each key. They consider two strategies to create the model: horizontal expert decomposition and vertical expert decomposition, and finally explain how to train their models (Section 3). In the experiments section (Section 4), they show that the experts display monosemanticity and that removing some experts from some domain yields significant performance degradation (Sections 5.1 and 5.2). The Monet approach also allows to purge toxic experts from the model, which is interesting from a safety perspective."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I think the main weakness of the paper is the presentation + writing, especially in Section 3. I am happy to consider improving my score if much better explanations of the method are given in Section 3.\n\n- **Section 3 should be more clear (especially the Horizontal and Vertical decomposition)**: I read the work by Lample et al. [1] for completing this review and according to my understanding, there is a unique $(u_i, v_i)$ that is associated with each key. Their approach makes sense to me.\n\n -- I am very confused why there is the mix and match (along the horizontal or the vertical) in this paper. And also, why is there any memory savings (compared to the PEER approach)? And why is each expert of dimension m (while in PEER, it is a single neuron). \n\n\n -- I also recommend the authors to do a complexity calculation like in [1], Section 3.2 to be fully transparent on the memory/computation complexities. \n\n -- I also didn’t find Figure 1 very clear, for instance it was not clear what “Top”, “bottom” or “TL”, “BL” refer to. Above all, I think that this drawing should be improved.\n\n\n- **Lack of baselines**: It is also not clear to me that a whole new architecture is needed to ensure a more interpretable model. For instance, [2,3] showed that standard MoEs display monosemanticity behaviors. Therefore, I think it is important to maybe compare the Monet method with standard MoEs. Would for instance fine-grained MoEs [4] work in this case? Is it the fact that we have a lot of experts that is responsible for more “monosemantic” experts? Or the routing strategy is responsible for it? I just want to be convinced that no simpler architecture would lead to the results obtained in Section 4.\n\n\n\n[1] Lample, Guillaume, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. \"Large memory layers with product keys.\" Advances in Neural Information Processing Systems 32 (2019).\n\n[2] Fedus, William, Barret Zoph, and Noam Shazeer. \"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.\" Journal of Machine Learning Research 23.120 (2022): 1-39.\n\n[3] Fedus, William, Jeff Dean, and Barret Zoph. \"A review of sparse expert models in deep learning.\" arXiv preprint arXiv:2209.01667 (2022).\n\n[4] Krajewski, Jakub, Jan Ludziejewski, Kamil Adamczewski, Maciej Pióro, Michał Krutul, Szymon Antoniak, Kamil Ciebiera et al. \"Scaling laws for fine-grained mixture of experts.\" arXiv preprint arXiv:2402.07871 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No ethcis concerns are needed for the paper."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What's the reason of choosing $512^2$ number of experts?\n- Are there any trade-offs for adopting Monet over traditional MoE? What is the training time comparison between Monet and LLaMA baseline models?\n\nI suggest the authors to elaborate more on the methodology section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper presents novel decomposition methods that scales traditional MoE to 262k experts.\n- The paper delivers comprehensive experimental results on the proposed model architecture.\n- The proposed method achieves good expert specialization, proven under several experimental settings"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new architecture that makes large language models more interpretable with monosemanticity. The authors develop novel decomposition methods to efficiently scale to 262K experts per layer, achieving specialists that focus on single concepts through end-to-end training. The model also enables control over model knowledge (across domains, languages, and toxicity) without degrading performance, outperforming traditional Sparse Autoencoder approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The intuition behind the architecture design is unclear.\n- The explanation in the methodology section is poor and hard to understand."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "* What does the model start with in table 3?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Simple and straightforward idea\n* The experiments on domain masking and unlearning is interesting"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose Monet. A new sMoE achitecture built on top of PEER. By pushing the notation of expert to the limit, Monet shows superior performance and unique ability to unlearn domain knowledge by simply masking out experts. Further analyses demonstrate mutual exclusivity of knowledge across experts and showcase the parametric knowledge encapsulated within individual experts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Presentation can be greatly improved. For example, figure 1 does more confusing than explaining. There is zero followup in the caption telling the readers what \"E1\", \"BL2\", \"TL2\" are. Because they are arbitrary abbreviation defined by the authors, they should be properly annotated, or simply just use the full name.\n* No proper ablations to study different choices in the architectural design and no insight is provided. For example, can we mix Horizontal Expert Decomposition and vertical expert decomposition? Which part of the changes over PEER make it superior?\n* No baseline comparison against PEER and traditional SMoE. How come these two most obvious baselines are missing? \n\nSome other minor issues:\n* citation to PEER is missing. \n* Incremental proposal on top of PEER, I am uncertain how significant the contributions are"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024monet,\ntitle={Monet: Mixture of Monosemantic Experts for Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Ogw1SHY3p},\nnote={under review}\n}"
},
"abstract": {
"value": "Understanding the internal computations of large language models (LLMs) is crucial for aligning them with human values and preventing undesirable behaviors like toxic content generation. However, mechanistic interpretability is hindered by polysemanticity—where individual neurons respond to multiple, unrelated concepts due to the superposition hypothesis. While Sparse Autoencoders (SAEs) have attempted to disentangle these features, they face limitations from imperfect reconstruction loss, which impedes LLM's performance. We introduce the Mixture of Monosemantic Experts for Transformers (Monet) architecture, which enhances interpretability by significantly increasing the number of experts to 262,144 per layer while maintaining parameter efficiency through a novel expert decomposition method. By designing the total parameters to scale proportionally to the square root of the number of experts, Monet enables effective specialization of experts. Our analyses demonstrate mutual exclusivity of knowledge across experts and showcase the parametric knowledge encapsulated within individual experts. Moreover, Monet allows robust knowledge manipulation over knowledge domains, languages, and toxicity mitigation without degrading general performance. By overcoming the limitations of SAEs and conventional Mixture-of-Experts architectures, Monet advances the mechanistic interpretability of LLMs and provides practical benefits for controlling model behavior."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language models",
"mechanistic interpretability",
"monosemanticity",
"mixture of experts",
"knowledge unlearning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/be9bc79836c96a435375fe78e776bc6e1f288e31.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/2e1c46a5628dc43a29d70540295458d78bc35157.zip"
},
"title": {
"value": "Monet: Mixture of Monosemantic Experts for Transformers"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1OkVexYLct | Revisiting the Othello World Model Hypothesis | main | Active | Othello gaming modeling;feature alignment;LLM | interpretability and explainable AI | 3;5;5 | 4;4;4 | 2;3;3 | 2;3;2 | 1;3;3 | 4.333333 | 4 | 2.666667 | 2.333333 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Why is the two-hop move generation an important benchmark in the Othello World Modeling?\n* Could you please provide detailed analysis on the difference of each language model on the Othello world modeling? Why do they show different behaviors on the task?\n* Please discuss potential implications for particular fields or research areas that might benefit from insights into how language models learn structured world representations.\n* What is the reason to revisit this hypothesis using more language models and comprehensive probings? Is the previous work not enough to show the hypothesis's validity?\n*Please discuss specific types of problems or domains where your approach might be applicable, and what challenges you anticipate in extending beyond Othello.\n* Could you show that the Othello world model encodes the rules of Othello (to determine the validity of moves) or strategy of game playing?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This work is based on previous studies on Othello World Model Hypothesis. It's an interesting study because it tries to see if language models can model the rules of the Othello game from a large amount of transcripts data. Although the hypothesis has been probed in the previous studies, authors propose to reevaluate the hypothesis with more language models and different settings.\n\nFrom the reevaluation, authors provide more evidence on the hypothesis and try to provide cross-language model latent representation on the Othello World model.\n\nAs a result, the paper could support the previous work's claims with new evidence."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, authors evaluate the Othello World Model hypothesis using different types of language models. This study is based on the previous works Li et al. (2023) and Nanda et al. (2023). The goal of this study is to reevaluate the hypothesis over multiple language models and see common representations they learnt."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The weak point of this study is to see the contribution claimed by authors as an important new contribution or extension of the previous work's claims. Although the authors tried to use multiple language models to see the difference of the modeling capability, it's not a new problem formulation because it's based on the previous works.\n\nIt's unclear why two-hop move generation is introduced as a new benchmark problem. Authors need to explain how two-hop generation provides insights beyond one-hop prediction, or to discuss potential limitations of the one-hop approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Please define world model.\n2. Please describe very precisely what the models are actually trained to do.\n3. Please provide details on how the SYNTHETIC dataset was generated exactly."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Highly topical area of research, and I think the experiments are carried out well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to add additional evidence to the Othello World Model Hypothesis by training a variety of different LLMs for predictive tasks in the game of Othello."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Under strengths above, I wrote that I **think** experiments are carried out well. Here is my main issue with the paper: many crucial details are not described in sufficient detail, and/or are too vague. I can make reasonable guesses as to the exact work that was done, and based on this, I do genuinely believe there is good work in here. But I shouldn't have to guess, and this state is not acceptable for a research paper.\n\nI will elaborate on some specific points:\n1. The entire paper revolves around the hypothesis that LLMs trained on Othello move sequences can induce a \"relevant world model\". But... I'm missing a definition of world model. I cannot judge with full certainty whether the experiments adequately support the claims, when I don't even have a crisp, clear, unambiguous definition of the hypothesis that the entire paper revolves around. I understand that \"world model\" is a relatively common phrase, but it is still crucial to define it clearly and unabmiguously.\n2. The paper does not make it clear exactly what the models are trained to do. Combined with the lacking definition of world model above, this makes things very problematic. It is not clear to me whether the models are trained to:\n\n - Given current state (implied by sequence of previous moves), predict what the next played move is / should be.\n - Given current state (implied by sequence of previous moves) and a next move, predict what the next state will be.\n - A combination of the above, or anything else.\n\nThe caption of Figure 1 talks about \"predict the next move\". The caption of Table 1 is talking about \"game state generation\". These two are two very different things. Much of the rest of the paper talks about \"move generation\", which could be predicting next move again, but could also be about predicting which moves are legal, for instance.\n\n3. There are no details whatsoever on how the SYNTHETIC dataset was generated. Which agents were used to play these games? This requires complete details on these agents (what algorithms, how much search time if they used search, on what hardware, any kind of randomisation used to ensure variety in the data, ... we need to know everything, but now we know nothing at all).\n\nOther comments:\n- Section 2 says that the work of Takizawa (2024) also looked at \"whether LLMs adopt similar ones [strategies]\", but as far as I can see, they did not do anything even remotely like that at all.\n- line 159 PLMs should be LLMs?\n- Section 3.2 refers to Tables 2 and 3, but this should be 1 and 2?\n- Caption of Figure 2 vaguely mentions \"performance\". This is not precise enough (could be accuracy, could be error rate, would lead to very different interpretations). There's also no label on the y-axis, which also does not help in this regard.\n- Line 263/264 talks about performance plateauing, but I don't see it as plateauing at all. Therefore, I also disagree with much of the analysis in the rest of the bottom of page 5. Sure, the decline in error rate becomes less steep at the end for the non-pretrained models. But they didn't fully plateau yet, and are **still** outperforming the Pretrained models **also at the very end of your x-axis**. These observations disagree with much of your conclusions here.\n- Line 462/463 mentions \"the policy of the game\". There is no such thing as a \"the policy\" of any game. We can play according to many different policies."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) What qualitatively new observations related to the internal representation of Othello games in LLMs result from the presented study? What are the high-level novel implications of the presented experiments and conclusions?\n2) Why this particular set of models has been selected? There are quite a few newer models available at the moment, both proprietary and open access.\n3) How the presented study relates to the representation abilities of MLLMs?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The paper is clearly written and easy to follow.\n2) The experiments are well-thought and lead to several new insights.\n3) The topic should be of interest to some of the ICLR community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In 2023 Li et al. (and subsequently Nanda et al. (2023)) formulated the Othello World Model Hypothesis (OWRH), claiming that GPT-2, based purely on Othello move sequence analysis, was able to infer the principles of the game, including its 64-square board representation. This paper revisits OWRH with 6 Large Language Models (LLMs) and enhanced research protocol, providing stronger evidence supporting the hypothesis than the two above-cited articles."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The novelty of the paper is limited. The underlying research concept of verifying the OWMH is not new and even though the paper leads to certain new observations, they are not surprising and do not significantly expand the existing knowledge.\n2) The selection of LLMs is somewhat outdated, since there are quite a few stronger LLMs available these days.\n3) In the era of MLLMs (Multimodal LLMs) the rationale behind the proposed research is disputable."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting the Othello World Model Hypothesis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1OkVexYLct},\nnote={under review}\n}"
},
"abstract": {
"value": "\\citet{li2023emergent} used the Othello board game as a test case for the ability of GPT-2 to induce world models, and were followed up by \\citet{nanda-etal-2023-emergent}. We briefly discuss the original experiments, expanding them to include more language models with more comprehensive probing. Specifically, we analyze sequences of Othello board states and train the model to predict the next move based on previous moves. We evaluate six language models (GPT-2, T5, Bart, Flan-T5, Mistral, and LLaMA-2) on the Othello task and conclude that these models not only learn to play Othello, but also induce the Othello board layout. We find that all models achieve up to 99% accuracy in unsupervised grounding and exhibit high similarity in the board features they learned. This provides considerably stronger evidence for the Othello World Model Hypothesis than previous works."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Othello gaming modeling",
"feature alignment",
"LLM"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9df302a6a740de6a614f27db03f2dbe109bab0f2.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/f362dfa28faa9de0db6ad62bc762c85765b9a70c.pdf"
},
"title": {
"value": "Revisiting the Othello World Model Hypothesis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1OyE9IK0kx | On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models | main | Active | Trustworthy Machine Learning;Explainability;Interpretability;Faithfulness;Large Language Models | interpretability and explainable AI | 3;3;3;5;6;8 | 3;3;4;4;4;2 | 2;3;3;2;3;3 | 1;2;2;1;3;3 | 3;2;3;2;3;4 | 4.666667 | 3.333333 | 2.666667 | 2 | 2.833333 | -0.395285 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Q1. Do you expect activation steering to be more or less effective for other models/architectures?\n- Q2. Will you be releasing code/data for how faithfulness was calculated in this particular case?\n- Q3. Do you expect your results to be consistent across how faithfulness metric was defined? So, for example, experimenting with faithfulness metric with paraphrasing vs. early answering strategy?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Overall, I thought the paper was strong and has potential for broad impact, because it connects so many concepts that are disparately considered. There has been a significant gap in evaluating *for* interventions, and this work systematically investigates the common and practical techniques for interventions.\n- S1. Comprehensive evaluation of intervention methods for a widely used technique, CoT reasoning. Since this is how many researchers as well as practitioners interact with LLMs, this work is widely applicable and can have broad impact in considerations for AI safety.\n- S2. I thought the introduction was particularly well motivated and the paper was generally well written.\n- S3. Finetuning strategies were tested with multiple sampling strategy of their design. Adding faithfulness metric to the finetuning dataset creation was a particularly convincing experimental strategy.\n- S4. Also introduces novel strategy for activation editing based on aligning on faithfulness vectors\n- S5. The paper includes salient results, with most of these methods getting partial success. ICL or activation editing seem to get either accuracy or faithfulness performance enhancements, but rarely both. It seems that more finetuning on faithful datasets can improve both more so than ICL and activation editing"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In recent years, there have been concerted effort in making language models more faithful and robust with methods such as finetuning, in-context learning and activation editing. This work investigates whether these 3 methods can make CoT reasoning more faithful. Their findings suggest that all of them achieve very limited performance improvements, with activation editing achieving only minimal improvements. Finetuning and in-context learning can be slightly more effective, though they seem to fail to generalize across tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- W1. It seems that activation editing was only experimented with LLaMA-3B. I wonder if this could have been an issue with this particular model, particularly because activation editing could have vastly different results depending on the architecture. For that reason, I think this result could be made more robust by adding other models for comparison such as Gemma or OLMo.\n\n- W2. \"Fine-tuning using most faithful explanations achieve better accuracy-faithfulness trade-offs.\" This seems like an expected result, but I wonder if this holds true across domain. If there could have been a more comprehensive strategy such as sampling by length for comparison, I wonder if there were any observable differences across domain.\n\n- W3. There's slew of methods proposed by lanham et al, but I think this paper only discusses faithfulness with respect to early answering strategy. Faithfulness metric could result in different behavior based on the metric definitions: early answering vs. adding mistakes vs. paraphrase.\n\n- W4. The faithfulness based activation editing strategy was introduced, but the results on it were not included in the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- What is the formal definition of faithful (CoT) reasoning in LLMs? Unless, I am missing something this was stated to be formally defined in line 93, but I fail to find this definition later in the manuscript."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper targets an important and timely challenge in LLM. While massive effort is dedicated towards enhancing LLM's capability to reason or even demonstrating that it can reason, it still represents a major bottleneck and prevents using LLM to create AGI. \n\nOverall, the paper reads well and is well organised. The overall contribution is more technical and focuses on empirical studies of various combined techniques implemented, resulting in comprehensive experimental evaluations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Recent advances of foundation models, in particular Large Language Models (LLMs) have demonstrated impressive performances in many natural language processing tasks. Nevertheless the capabilities of LLMs at reasoning tasks are still limited and raises significant debate (see [1]). A line of recent works proposed prompt-based techniques to improve LLM capability including, but not limited to, reasoning. Notably, the most popular techniques are: *chain-of-thought* (CoT) by adding the phrase 'think/solve step by step' at the end of the prompt, and *in-context learning* by including illustrative examples in the prompt to inspire or assist the LLM about the specific context of the query to solve; another line focuses on fine-tuning the LLM on formal reasoning benchmarks data, mathematical problems (Algebra, Geometry, calculus and so on). \n\nThis work combines the three aforementioned techniques to improve LLMs in producing what is referred to as *faithful* CoT reasoning and rational explanations to the delivered output. Moreover, it define a metric to assess the concept of faithful CoT reasoning. \n\n\n\n[1] Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, Mehrdad Farajtabar: GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models. EMNLP 2024"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper constitutes incremental research work. The proposed solution is simply combining applied techniques in LLMs, which render the contribution incremental and straightforward. Technically, the contribution lacks in rigor, and many of the applied strategies are not formally justified. \n\n- The aforementioned techniques have shown several limitations, in past works, and more importantly in many cases techniques like activation patching are deteriorating the LLMs accuracy. \n\n- Several notions and techniques that this work builds upon, are not formally defined or described earlier in the paper, making it less accessible to a broader audience."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What is the sample size of the benchmark? Correct me if I am wrong but lines 339 - 348 describe original datasets' statistics instead. \n2. When selecting N ICL demonstrations, are we considering questions' similarities or just using faithfulness as the single index? \n\nMinor:\n1. Figures' notation requires browsing around.\n2. Please avoid directly using acronyms, a full expression would be more reader-friendly. e.g. out of distribution for OoD in line 303 \n3. Please check typos in the manuscript, such as:\na. line 312, Figure 4?\nb. line 354 asking the question *without* invoking?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. **Detailed Experiment** The paper conducted thorough experiments across in-context learning, fine-tuning, and activation editing.\n\n2. **Insights from the Experiment** The empirical experiments provided meaningful insights, which might inform a better LLM alignment methodology for achieving faithful intermediate states in the future."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper systematically examines the Chain-of-Thought (CoT) behavior in large language models (LLMs) through experiments involving in-context learning, fine-tuning, and activation editing. The results indicate that activation editing had limited success, while in-context learning and fine-tuning led to only slight, non-generalizable improvements. The authors argue that the training process of these models does not prioritize faithfulness, which contributes to the generation of more self-aware content."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Novelty in methodology** is the weakness of this position paper. As author(s) stated, changes are made for referenced methods/procedures, some theoretical/numerical supports can better validate the proposal. Please let me know if the following points are biased.\n\nHere are some directions to consider:\n1. To measure faithfulness, the Area Over the Curve (AOC) metric from [1] is adopted while the paper proposed to use probability scores for each instance instead of on the dataset level. However, section 2.3.1 of [1] also stated \"AOC values are calculated as a weighted sum\", thus [1] should also work on the instance level. I suggest editing line 166 to prevent confusion if this is the case.\n2. For activation editing, this work selected top-K heads based on faithful probing results instead of top-K truth-relatedness heads in [2], they sound serving similar purposes to me. Can we compare these methods or see if they are transferable?\n\nReference\n[1] Lanham, T., Chen, A., Radhakrishnan, A., Steiner, B., Denison, C., Hernandez, D., ... & Perez, E. (2023). Measuring faithfulness in chain-of-thought reasoning. arXiv preprint arXiv:2307.13702.\n[2] Li, K., Patel, O., Viégas, F., Pfister, H., & Wattenberg, M. (2024). Inference-time intervention: Eliciting truthful answers from a language model. Advances in Neural Information Processing Systems, 36."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Insights on why faithfulness is difficult to learn, either in the form of mathematical theorems, or carefully designed experiments would be helpful."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper provided experimental results indicating that in-context learning, fine-tuning, and activation editing did not result in substantial improvement in faithfulness of chain of thought reasoning. This suggests that other techniques may be required if this type of faithfulness is required."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper did an empirical study on whether chain of thought reasoning can be made to accurately reflect the underlying reasoning done by the LLM (i.e. whether it can be made faithful) by in-context learning, fine-tuning, or activation editing. The faithfulness measurement tries to measure whether stopping the chain of thought early would results in different outcomes compared to using the full chain to answer the question; if it does not, it is an indication that the LLM already knows the answer before generating the chain and is doing post-hoc explanation of its reasoning in the chain rather than computing the answer within the chain. The study found that in-context learning, fine-tuning, and activation editing are all not successful in substantially improving faithfulness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper provides negative results -- this is fine. However, to make a strong paper, insights that are supported by evidence on why the results are negative would be helpful."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "As the authors claim that \"our work underscores the inherent difficulty in eliciting faithful CoT reasoning from LLMs, suggesting that the current array of approaches may not be sufficient to address this challenge\", I wonder what could be revealed from the evaluation about the fundamental cause of the limitation for current LLM paradigms? Further, what could be the potential way to address them?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "(1) The topic of faithful reasoning of LLMs is interesting and sounds reasonable to investigate.\n\n(2) By demonstrating the limited success of conventional strategies, the paper highlights the intrinsic difficulty of faithful reasoning in LLMs, which provides a strong basis for future exploration. \n\n(3) The presentation is generally clear and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the challenge of generating faithful Chain-of-Thought reasoning in large language models, specifically focusing on approaches like in-context learning, fine-tuning, and activation editing. While the authors highlight the importance of faithfulness in CoT reasoning for trustworthiness in high-stakes domains like healthcare, their empirical results suggest that none of these methods yield significant improvements in CoT faithfulness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) The paper evaluates standard techniques like in-context learning, fine-tuning, and activation editing to improve CoT faithfulness, but these methods have already been extensively studied in other contexts such as improving accuracy, bias reduction, and factual consistency. The paper does not present any substantial technical adaptations or theoretical contributions to these methods specifically for faithful CoT reasoning. For example, while activation editing is discussed, it largely follows the framework of existing works like Li et al. (2024) without offering any new insights. The novelty of merely applying them to faithful CoT seems limited, and the contribution does not significantly advance the field beyond the status quo.\n\n(2) The \"early answering\" metric used to evaluate faithfulness is based on whether truncating CoT reasoning affects the model's final output. However, the reason for taking it as the best way to measure faithfulness remains unclear, particularly given the complexity of CoT explanations. The measure seems too simplistic, as it fails to capture nuances in reasoning that may be faithful but not necessarily immediately reflected in the final answer. This could raise a misalignment between the metric and the goal of the research, which is to assess whether CoT explanations reflect the internal logic of the LLM.\n\n(3) Although the paper acknowledges that none of the explored methods significantly improve CoT faithfulness, it does not provide a deep analysis of why these methods fail. For example, the results show only marginal gains in faithfulness, but the paper does not dive into what specifically causes this limitation—whether it is the inherent architecture of LLMs, the quality of training data, or other factors.\n\n(4) While the paper claims to \"lay the groundwork\" for future research in trustworthy CoT reasoning, it does not propose concrete next steps or actionable insights based on the experimental findings. The conclusion merely restates that current methods are insufficient without suggesting innovative ideas or frameworks that could be explored in the future. This lack of direction limits the potential impact of the paper in advancing the field."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper tackles the issue of enhancing the faithfulness of reasoning in large language models, which is vital for applications requiring high reliability.\n\n2. The study is methodologically sound, with rigorous experiments across different models and datasets, demonstrating the limited effectiveness of current strategies in improving reasoning faithfulness.\n\n3. The findings are impactful, highlighting the need for new methodologies to make LLMs more transparent and trustworthy, which is crucial for their adoption in high-stakes domains."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper examines the difficulty of making large language models produce reasoning that accurately reflects their internal processes. It tests methods like in-context learning, fine-tuning, and activation editing and finds they only marginally improve a model's ability to produce faithful reasoning. The study concludes that current techniques are insufficient to ensure reasoning transparency in language models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The study focuses on a limited number of benchmarks. It would benefit from expanding the range of datasets to better understand how these findings generalize across different types of reasoning tasks and domains.\n\n2. The paper could benefit from a more robust theoretical framework that explains why certain strategies might improve faithfulness while others do not."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We explore approaches to improve faithfulness of CoT reasoning generated from large language models, and present their shortcomings."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024on,\ntitle={On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1OyE9IK0kx},\nnote={under review}\n}"
},
"abstract": {
"value": "As Large Language Models (LLMs) are being increasingly employed in critical domains such as healthcare, it is essential to make these models trustworthy. In this pursuit, Chain-of-Thought (CoT) prompting has emerged as a potential source of transparency in LLMs. While CoT reasoning is appealing to humans, prior studies have shown that these reasoning chains are not faithful i.e.; they do not accurately reflect the underlying LLM's behavior. Ensuring the faithfulness of LLM-generated CoT reasoning is crucial for decision-makers, who rely on them to determine if, when, and to what extent, trust the recommendations made by these models. While several works proposed strategies to enhance accuracy and truthfulness in LLMs, there has been a lack of exploration on the effectiveness of these common strategies to enhance the faithfulness of chain-of-thought (CoT) reasoning. Specifically, we explore the promise of in-context learning, fine-tuning, and activation editing to improve the faithfulness of the CoT reasoning. Our empirical analyses on benchmark tasks indicate that these strategies offer limited success in improving the faithfulness of the CoT reasoning, with only slight performance enhancements in controlled scenarios. Activation editing demonstrated minimal success, while fine-tuning and in-context learning achieved marginal improvements that failed to generalize across reasoning and truthful question-answering benchmarks. We subsequently analyse what makes faithful CoT reasoning challenging, and present findings to lay the groundwork for future research in trustworthy reasoning from LLMs. In summary, our work underscores the inherent difficulty in eliciting faithful CoT reasoning from LLMs, suggesting that the current array of approaches may not be sufficient to address this challenge."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Trustworthy Machine Learning",
"Explainability",
"Interpretability",
"Faithfulness",
"Large Language Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2f95d56be8849edab629ad64c777a7227d73d1eb.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/cf379a1ae57da762eafba539966b2e408c4114b7.zip"
},
"title": {
"value": "On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1P6AqR6xkF | ACID: A Comprehensive Dataset for AI-Created Image Detection | main | Withdraw | Computer vision;Generative Model;AI Ethics | datasets and benchmarks | Haoming Lu;Kai Wang;Bin Sun;Hovhannes Margaryan;Xingqian Xu;Humphrey Shi | ~Haoming_Lu1;~Kai_Wang10;~Bin_Sun1;~Hovhannes_Margaryan1;~Xingqian_Xu2;~Humphrey_Shi1 | 3;3;5;6 | 4;5;5;4 | 2;2;2;2 | 2;2;2;3 | 2;2;3;2 | 4.25 | 4.5 | 2 | 2.25 | 2.25 | -0.19245 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See Weaknesses"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.The target issues of the paper are meaningful and worth exploring. \n2.The motivation is clear. \n3.The paper is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper is relatively well-motivated as AI-generated image detection is a crucial issue. I also find the evaluations thorough."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.The number of images is small. Only 57693 real images and 42307 fake images. This number of images is smaller than GenImage.\n\n2.GAN-based methods are not included in this benchmark.\n\n3.Do the detectors trained on ACID benchmark perform well on real datasets? For example, the images collected from fake news on the Internet."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1: ACID Dataset: The authors present a comprehensive dataset named ACID, which contains 13 million samples sourced from over 50 different generative models and real-world scenarios. The AI-generated images in ACID are created using fine-grained text prompts, and the real-world samples are carefully selected from public data sources based on visual and caption quality, ensuring a broad representation of different image types.\n\nS2: Extensive testing on various AI detectors demonstrates the challenging nature of the ACID dataset. ACIDNet, in particular, shows impressive accuracy of 98.01% on the ACID benchmark, indicating a substantial advancement in the detection of AI-created images."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new dataset and dual-flow detection framework aimed at addressing the challenges posed by the proliferation of AI-generated images and their potential negative social impacts, such as the spread of fake news."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1: The dataset construction requires generating thousands of images for each model, which poses scalability challenges, especially for proprietary models that may not allow such extensive access.\n\nW2: The framework proposed in this paper is simply a combination, lacking innovation. For example, it combines the addition of filters in SSP with the traditional backbone + classifier approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "* This dataset contains different data sources, the authors should make sure everything is ok for, such as privacy, terms of use.\n* It could be better to add some ethical discussion on how the dataset and method could impact the community."
},
"flag_for_ethics_review": {
"value": [
"Yes, Legal compliance (e.g., GDPR, copyright, terms of use)",
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* The authors claim 13M samples for their ACID dataset, but in line 131, they claim 22M images. I don't know whether it is typo.\n* The authors regard images uploaded on online platform A before 2019 as not AI-created in line 215. But why? How can you make sure there is no generated/manipulated images before 2019?\n* For the post-processing augmentation, did the authors only employ them for training their ACIDNet? Or they also used them to organize their dataset?\n* For the simplest patch method, it is a little strange the most discriminative part of an image is the simplest part, since intuitively the more difficult part should also be more difficult to generate. Can the authors provide any proof for this claim beyond two cited previous work?\n* For comparisons in Tab.4, the authors compare on their proposed benchmark and show the superiority. Did the authors try to evaluate on other previous public benchmarks? This should provide more evidence for the performance.\n* For Tab.4, did the authors evaluate other detectors by using their pre-trained checkpoints? Or fine-tuning on the proposed datasets? We should make the comparisons as fair as possible."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The dataset collects images generated from very recent generative models, which should contribute to the related community.\n* The authors consider several different scenarios, such as the art, unnatural forgery, and post-processing photos, which are very interesting and should be discussed in this field.\n* The dataset considers many different settings, such as style, and object categories, which is also a issue unaddressed by former datasets.\n* The proposed detector baseline is effective for detecting AI-generated images, supported by their experiments.\n* The paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a large-scale new dataset called ACID, which consists of 13M samples over 50 different recent generative models and real-world scenarios. The dataset is collected from very recent generative models, such as Stable Diffusion XL, with high resolutions, object categories, and augmentation. Furthermore, the authors propose a baseline for their method termed ACIDNet, which consists of two branches: one semantic branch with ResNetXt50, and a texture branch with high-pass filters for a single simple patch. The experiments on their proposed dataset support their method' effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* How will the proposed dataset be effective or contribute to future research? Since the generative models always evolving, there will be countless new models in the future. The ACID dataset is novel enough for now, but how to make sure for the future? I acknowledge the authors should have spent enough time and effort on collecting the dataset, but it is not enough if it is just a work depending on time. Maybe there are more insights this dataset can give for related future work.\n* The dataset considers many different scenarios and settings, which is good. Therefore, it is a little confusing to follow all the different settings, category them may be better for reviewers to understand, such as for generalization, for robustness, etc.\n* For the proposed detector baseline: the resnet branch is a widely-used baseline for image classification, and the texture branch is based on the SSP and Patchcraft, which underestimate the authors own contributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In Table 3, could you provide the parameter settings or random parameter ranges for the following augmentation methods: JPEG Compression, Add Shape, Sharpness Adjustment, Rotation, Color Jitter, Gaussian Blur, and Add Noise?\n\n- In Appendix 9 of the AEROBLADE paper, it is revealed that the image storage format in the dataset can lead models to learn compression biases, significantly affecting model performance. What is the image format of your dataset? Did you use a unified image storage format?\n\n- In Table 4, the top 7 rows use pretrained models to evaluate the generalization of different models on ACID through inference, while the bottom 3 rows use different methods to train and validate on ACID. Placing these two approaches in the same table can be confusing; I recommend separating them into two tables.\n\n- Currently, generated image detection models are not limited to texture and semantic methods. CNNSpot and SSP are not the best-performing detection models. You might consider adding some baselines (e.g., ResNet50, ViT) and some new detection models: DRCT, AEROBLADE, NPR, RIGID, ZED, Fake-Inversion (the first three are open-source, and the others will be open-sourced).\n\n- In line 127, you state that \"ACIDNet consistently achieves an average accuracy of 81.1%.\" How was the 81.1% figure obtained? I only found possibly related data of 86.77% in Table 5.\n\n- In Table 5, what is the difference between \"Texture branch only\" and \"SSP (ACID)\"?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper constructs a large-scale dataset that includes images generated by a variety of generative models, enhancing the dataset's practicality and broad applicability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces the ACID dataset, comprising 13 million samples collected from over 50 different generative models and real-world sources, offering a broad range of resolutions. Alongside the dataset, the authors propose ACIDNet, a detection model that combines texture and semantic features. ACIDNet achieves 98.01% accuracy on their dataset, surpassing existing methods (e.g., SSP) by over 10%."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper exhibits some deficiencies in its writing logic. The transitions between paragraphs are not sufficiently cohesive, and the internal coherence within some paragraphs is lacking.\n\n- When describing the dataset, there is a lack of detailed statistical information about the data distribution, such as the number of generated images from different categories or various generative models.\n\n- The paper lacks comparative analysis with other existing datasets in terms of dataset construction; specifically, it could refer to the relevant practices in the GenImage paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a benchmark for AI generated image detection."
},
"_bibtex": {
"value": "@misc{\nlu2024acid,\ntitle={{ACID}: A Comprehensive Dataset for {AI}-Created Image Detection},\nauthor={Haoming Lu and Kai Wang and Bin Sun and Hovhannes Margaryan and Xingqian Xu and Humphrey Shi},\nyear={2024},\nurl={https://openreview.net/forum?id=1P6AqR6xkF}\n}"
},
"abstract": {
"value": "Generative models have demonstrated remarkable capabilities in generating photorealistic images under proper conditional guidance. Such advancements raise concerns about potential negative social impacts, such as the proliferation of fake news. In response, numerous methods have been developed to differentiate fake from real. Yet, their accuracy and reliability still need to be improved, especially when facing state-of-the-art generative models such as large diffusion models. Infrastructure-wise, the existing testing datasets are sub-optimal in terms of research dimensions and product utility due to their limited data volume and insufficient domain diversity.\nIn this work, we introduce a comprehensive new dataset, namely ACID, which consists of 13M samples sourced from over 50 different generative models versus real-world scenarios. The AI-generated images in this collection are sampled based on fine-grained text prompts and span multiple resolutions. For the real-world samples, we broadly searched public data sources and carefully filtered text-image pairs based on visual and caption quality.\nUsing ACID, we present ACIDNet, an effective framework for detecting AI-generated images. ACIDNet leverages texture features from a Single Simple Patch (SSP) branch and semantic features from a ResNeXt50 branch, and achieves overall cross-benchmark accuracy of $86.77\\%$, significantly outperforming previous methods such as SSP and CNNSpot by over $10\\%$. Both our model and dataset will be open-released to the public."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Haoming_Lu1",
"~Kai_Wang10",
"~Bin_Sun1",
"~Hovhannes_Margaryan1",
"~Xingqian_Xu2",
"~Humphrey_Shi1"
]
},
"authors": {
"value": [
"Haoming Lu",
"Kai Wang",
"Bin Sun",
"Hovhannes Margaryan",
"Xingqian Xu",
"Humphrey Shi"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Computer vision",
"Generative Model",
"AI Ethics"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "lu|acid_a_comprehensive_dataset_for_aicreated_image_detection"
},
"pdf": {
"value": "/pdf/ad35445ad3bf0766e68adaf1f6aa2e96f62a1e2f.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "ACID: A Comprehensive Dataset for AI-Created Image Detection"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
1PDz4Ny1N2 | Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation | main | Active | Jensen Gap;Recommender Systems;Max-min Fairness | alignment, fairness, safety, privacy, and societal considerations | 3;6;6;6;6 | 4;2;4;2;4 | 2;3;3;2;3 | 3;2;3;2;3 | 2;3;4;3;3 | 5.4 | 3.2 | 2.6 | 2.6 | 3 | -0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the description in Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1: Sufficient support is provided for motivation through theoretical and empirical analysis.\n\nS2: A new perspective based on group-weighted optimization is provided for MMF, and some corresponding theoretical insights are provided.\n\nS3: Combining different skeleton models on multiple datasets provides various experimental results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper, through theoretical and empirical analysis, argues that existing methods with group max-min fairness (MMF) optimization for fairness-aware recommender systems will introduce Jensen gaps during model convergence when applying mini-batch sampling. It theoretically reformulates the MMF constraint objective as a group-weighted optimization objective and proposes a FairDual algorithm to minimize the Jensen gap. The effectiveness of FairDual is verified on two public datasets combined with three skeleton recommendation models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1: The quality and clarity of the presentation need to be improved. Including some unclear statements that need to be revised, the organization of figures and tables needs to be revised, and some content coherence needs to be further revised. For example,\n* For Formula 1, the meaning of the symbol $n_i$ seems missing. Secondly, based on the description in the second paragraph of Section 3, $L_k(u)$ is used to represent the recommendation list, and $K$ identifies the list length. However, in the description in the last paragraph, the statement \"following the practice in time-aware RS, there may be users with interactions cu,i greater than the ranking size K, in which case we will only consider the least recent K interactions to represent their recent preferences\" is confusing. Why is it emphasized that there are users with more than K interactions? Why must the most recent K interactions be kept instead of other numbers?\n\n* Regarding the first paragraph of Section 4, the statement “In real-world scenarios, the number of users |U| is often large, and a mini-batch sampling strategy is often necessary due to the large computational costs. This involves dividing the |U| users into |U|/B batches and performing gradient descent methods on each batch” is also confusing. In my experience, in practice, grouping users into different batches for optimization is not usually adopted, i.e., each batch only contains a subset of users, and all interactions of each user are included. Secondly, the following description seems to use a random batch sampling. The purpose of emphasizing this aspect here is unclear.\n\n* Regarding the second paragraph of Section 4.2 and Figure 1, as far as I understand, the convergence of the same group should be examined under different batch sizes to obtain the desired observations and conclusions.\n* For the last paragraph of Section 5.2.1, the meaning of *P* is missing.\n\n* Based on the current description of Section 5.2.2, I can't find any instructions on handling the first batch since it lacks a pre-batch to compute $g$. Secondly, does the operation of sampling $Q$ items bring noise or instability? This requires more discussion and experimental analysis.\n\n* The current placement of figures and tables does not facilitate reading and needs to be placed as close to the corresponding description as possible.\n\n* I like the first half of this paper. However, I am confused about why fairness must be associated with large recommendation models (especially large language recommendation models) after the methods section. On the one hand, this makes some of the treatments required for large language recommendation models appear abruptly. On the other hand, it is not conducive to evaluating the effectiveness of the proposed solution for fairness in a more general setting.\n\nW2: The proposed FairDual lacks some deeper and more valuable insights. For example,\n* Does it have the same performance or properties for other types of loss functions?\n\n* Does it have the same behavior or properties as other fairness optimization constraints?\n\n* How does it compare to existing work regarding storage space, computational complexity, and parameter size? Some static or dynamic group weighting methods discussed in related work seem lightweight. 
Is the additional overhead worthwhile?\n\n* If it is not just about fairness at the item group level, does it apply to fairness at the user group level or even in situations where both users and items exist in groups?\n\nW3: The current experimental setup and experimental results are not convincing enough.\n* Representative datasets adopted by many previous fairness recommendation methods should be included more.\n\n* Related to the previous concerns, the current version's selection criteria for baselines are confusing and not sufficiently representative. Skeleton models should not be constrained to be recommendation models related to large language models, and more research lines of fair recommendation methods mentioned in related work should be included as baselines, especially those aimed at designing group weighting.\n\n* The current description of the implementation details is oversimplified, which is not conducive to reproducibility. Secondly, $\\lambda$ is mentioned to range from 0 to 5, but in Figure 3 it is inclusive of 0 to 10."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "**Major Concerns:**\n\n**Questions about Theoretical Analysis and Algorithm.** I have the following questions about the theoretical analysis and algorithm in this paper that need clarification:\n\n- In the proof of Theorem 1 (Appendix A), why is Problem (15), i.e., $\\min\\mathbf{1}^\\top\\hat{\\boldsymbol{A}}^\\top\\boldsymbol{w}+\\lambda\\max_{g\\in\\mathcal{G}}\\gamma_g(\\hat{\\boldsymbol{A}}^\\top\\boldsymbol{w})_g$ , equivalent to Problem (2), i.e., $\\min\\boldsymbol{b}^\\top(\\hat{\\boldsymbol{A}}^\\top\\boldsymbol{w})^{1+t}$ ? Specifically, considering the limit in the function $g(\\cdot; \\infty)$ , how is the constant $t$ determined? Providing an explicit solution for $t$ and $\\boldsymbol{b}$ is important for Theorems 1 and 2.\n- The organization of Lemma 2, Lemma 3, and Theorem 3 (Appendix D-F) is somewhat disorganized. I suggest reorganizing these proofs, e.g.:\n - Place Lemma 3 before Lemma 2, as the conclusion of Lemma 2 depends on Lemma 3.\n - Rewrite the proof of Lemma 2 to explain why the conclusion can be derived from $r(\\boldsymbol{\\mu} + c\\boldsymbol{b}) < \\infty$ (i.e., Lemma 3).\n- The group-weighted form (4) of the Group MMF-constrained objective in Theorem 3 is concise, but its weight $\\boldsymbol{s}_g$ is not. The authors should provide some intuitive explanations for the weight $\\boldsymbol{s}_g$ to better elucidate the experimental phenomena (Case study in Section 6.3). For instance, under what circumstances is $\\boldsymbol{s}_g$ larger, and when is it smaller?\n\n- In the calculation of $\\widetilde{w}$, the authors randomly sample $Q$ items and set $\\widetilde{\\boldsymbol{w}} _ b=\\sum _ {k=1}^K(\\boldsymbol{E} ^ j\\boldsymbol{e} _ {u _ b}) _ {[k]}$ (cf. line 12 in Algorithm 1, and line 358). I am primarily concerned about the bias caused by sampling-based ranking (although it does not affect the fairness bound given in Theorem 4). Can the authors provide a theoretical analysis of this bias? Alternatively, could the authors change the sampling-based ranking to random sampling of $\\boldsymbol{E}^j\\boldsymbol{e} _ {u _ b}$ , and test the impact of this bias on the convergence rate of Jensen gap?\n\n**Questions about Experiments.** I have the following concerns about the experiments:\n\n- There are only two datasets utilized in the main results (Tables 1 and 2), which is insufficient. The authors might consider adding one or two widely-used datasets, such as Amazon-Electronic, which can be processed using the same method as in Appendix H.\n- In Section 6.3 \"Experimental Analysis\", the authors find that the accuracy first increases then decreases as $\\lambda$ increases, and attribute the phenomenon to the popularity bias. Then, is it possible to apply popularity debias method to the proposed algorithm, e.g., Inverse Propensity Score (IPS)-based reweighting method?\n\n**Minor Concerns:**\n\n- Line 324, $\\hat c _{u, i} = -d(\\boldsymbol{e} _u, \\boldsymbol{e} _i)$, should there be $\\hat c _{u, i} = d(\\boldsymbol{e} _u, \\boldsymbol{e} _i)$ ?\n- Line 325, the authors should suppose that $\\boldsymbol{e}_u$ and $\\boldsymbol{e}_i$ are normalized to make sure $d(\\boldsymbol{e}_u, \\boldsymbol{e}_i) \\leq 1$ , which is relied on by the proof of Theorem 4 (cf. Line 1064).\n- Line 357, \"The $L$ items’ embeddings are denoted as ...\", the $L$ should be $Q$ ?\n- Line 979, the minus in $-\\mathcal{I}$ should be placed at the loss term."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The Group MMF studied by the authors is significant for recommendation fairness research.\n- The motivation of the paper is clear, the algorithm introduction is straightforward, and the experimental analysis is detailed.\n- The paper employs dual optimization to separate MMF and predicted score, resulting in a simple form of group-weighted BCE loss, and uses the dual mirror gradient descent algorithm for optimization, which is somewhat novel."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper analyze the Group max-min fairness (MMF) constrained optimization problem in recommendation. The authors first explain the Group MMF constrained objective as a non-linear polynomial form, indicating that the Jensen gap is non-negligible in mini-batch sampling based optimization (i.e., the gap between the overall loss and the sum of batched loss). To bridge the Jensen gap, this paper propose a dual optimization method called FairDual. Specifically, they rewrite the objective as a group-weighted BCE loss, and utilize the dual mirror gradient descent algorithm to optimize this loss. They further conduct experimental validation of FairDual's effectiveness and provide a detailed analysis of its proposed theoretical advantages."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Theoretical proofs contain parts that require clarification, and the writing of the proof needs further improvement.\n- The authors' analysis of the proposed group-weighted BCE loss is insufficient.\n- In the implementation of FairDual algorithm, sampling-based ranking may introduce bias that is not adequately discussed.\n- The authors conducted experiments on only two datasets, which may not be sufficient to demonstrate the algorithm's generalization."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThe paper provides both theoretical analysis and empirical results for motivation derivation and the effectiveness of the proposed method.\n2.\tThe experiments are conducted across large-scale recommendation models, which aligns well with the stated motivation.\n3.\tThe paper is generally well-written, with a clear structure."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper observes that the MMF-based optimization introduces a Jensen gap, which will become more pronounced when mini-batch size decreases and group size increases. The authors reformulate MMF into a group-weighted optimization, and solve its dual to minimize Jensen gap. Theoretical analysis reveals that the proposed method achieves a sub-linear convergence rate. Experiments are conducted across three recommendation models on two datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe assumption of convexity is too strong and impractical for large-scale recommendation models.\n2.\tWhy can the reported NDCG exceed 1, which is theoretically impossible? Also, please specify the number of items in the truncated list K."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- Can authors explain is there a reason that in some cases the MRR is not statistically significant as shown in Table 1 and 2? For example, the MRR in the top-5 results of RecFormer and BigRec is not significant and no improvement, while the improvement of NDCG and MMF is significant. Can authors give some insights on this observation?\n\n- In the visualization of Figure 3(c), the differences in the patterns of the two figures are not quite obvious. The classification boundary of FairDual also seems to exist. Could the authors provide some quantitative results to distinguish the different patterns, such as divergence in the two distributions?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The motivation is well established with theorical and emprical analysises of the Jesen gap in Section 4.\n- The method is solid with a guaranteed bound for Jesen gap, and the experiment also showed that the proposed method indeed has lower gap than other baselines.\n- The authors conducted complete and comprehensive evaluations, including the effictiveness, Jensen gap analysis, case study, training efficiency and parameter analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors claimed that current group max-min fairness (MMF) methods would introduce a Jensen gap between the model’s\nconvergence point and the optimal point. The authors analyzed the existence of Jensen gap theorically and emprically. Then, the authors proposed FairDual, a dual-optimization approach that guaranteed a bound for the Jesen gap. They conducted comprehensive experiments to show the effectiveness and efficiency of FairDual compared to baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Baselines. The authors mentioned that there are two types of approaches to bridge the Jensen gap, re-weighting and optimization-based methods. But the authors mostly compared only re-weighting methods in the experiments, while ignoring optimization-based methods they mentioned in the introduction part, such as Abernethy et al. 2022, Cousins 2022, Demidovich et al., 2023 and Agarwal et al., 2018. I suggest the authors to add state-of-the-art baselines in optimization-based methods mentioned in the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could the results extend to alternative fairness constraints beyond max-min fairness?\n- What is the computational complexity of the proposed algorithm and how does it compare with the other baselines?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well written and provides interesting insights. \n- The authors well motivated the problem in Section 4 where they established the existence of Jensen gap.\n- The theoretical results appear sound. The authors also provided extensive numerical evaluations of their method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors address the challenge of integrating a max-min fairness constraint, which introduces a Jensen gap between the model’s convergence point and the optimal point. They first demonstrate that using mini-batch sampling optimization strategies leads to a Jensen gap that increases as the mini-batch size decreases. To bridge this gap, the authors propose an algorithm that reformulates the original optimization problem through a re-weighting approach, leveraging dual-optimization techniques to update the weights of each group. They theoretically prove that their approach achieves a sublinear convergence rate and numerically demonstrate its effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It'd be helpful if the authors can provide more interpretations for Theorem 4, the main theoretical result and, in particular, comment on their technical contributions. What is the main technical novelty in attaining this theoretical result? A proof sketch could also be helpful. \n- Following the point above, in the numerical experiments, there appears to be some non-monotonic variation of the Jensen gap w.r.t. the batch size. I wonder if the authors can comment on why this is the case. Is this consistent with the theoretical results?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A dual gradient method is proposed to bridge the Jensen Gap to conduct recommendation tasks based on max-Min group fairness constraint."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024bridging,\ntitle={Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1PDz4Ny1N2},\nnote={under review}\n}"
},
"abstract": {
"value": "Group max-min fairness (MMF) is commonly used in fairness-aware recommender systems (RS) as an optimization objective, as it aims to protect marginalized item groups and ensures a fair competition platform. However, our theoretical analysis indicates that integrating MMF constraint violates the assumption of sample independence during optimization, causing the loss function to deviate from linear additivity. Such nonlinearity property introduces the Jensen gap between the model's convergence point and the optimal point if mini-batch sampling is applied. Both theoretical and empirical studies show that as the mini-batch size decreases and the group size increases, the Jensen gap will widen accordingly. Some methods using heuristic re-weighting or debiasing strategies have the potential to bridge the Jensen gap. However, they either lack theoretical guarantees or suffer from heavy computational costs. To overcome these limitations, we first theoretically demonstrate that the MMF-constrained objective can be essentially reformulated as a group-weighted optimization objective. Then we present an efficient and effective algorithm named FairDual, which utilizes a dual optimization technique to minimize Jensen gap. Our theoretical analysis demonstrates that FairDual can achieve a sub-linear convergence rate to the globally optimal solution and the Jensen gap can be well bounded under a mini-batch sampling strategy with random shuffle. Extensive experiments conducted using three large-scale RS backbone models on two publicly available datasets demonstrate that FairDual outperforms all baselines in terms of both accuracy and fairness."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Jensen Gap",
"Recommender Systems",
"Max-min Fairness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/98b4921f34bf70d0d0f113ae0276cd24013d692d.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9db1a57a7328b2a40473ee40d95f73a3dfe88cb0.zip"
},
"title": {
"value": "Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1PZt5nFlzH | Size-aware Compression of 3D Gaussians with Fine-grained Mixed Precision Quantization | main | Withdraw | 3D Gaussian Splatting;Mixed-precision Quantization;Compression | applications to computer vision, audio, language, and other modalities | Shuzhao Xie;Jiahang Liu;Weixiang Zhang;Shijia Ge;Sicheng Pan;Chen Tang;Yunpeng Bai;Zhi Wang | ~Shuzhao_Xie1;~Jiahang_Liu2;~Weixiang_Zhang1;~Shijia_Ge1;~Sicheng_Pan1;~Chen_Tang3;~Yunpeng_Bai1;~Zhi_Wang5 | 3;5;6;6 | 4;3;3;3 | 2;3;3;3 | 1;2;3;3 | 2;2;3;3 | 5 | 3.25 | 2.75 | 2.25 | 2.5 | -0.942809 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you to all reviewers and the Area Chair for your thoughtful and detailed feedback on our submission. We are very grateful for the time and effort each of you has dedicated to evaluating our work. Your insights have provided us with valuable directions to improve our research. After careful consideration, we have decided to withdraw the paper in order to address these suggestions more comprehensively. We look forward to using this feedback to refine our work and hope to submit an improved version in the future. Thank you again for your invaluable support and constructive input."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed method significantly reduces search time relative to existing approaches while ensuring accurate size estimation and strong compression performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a size-aware 3DGS compression approach designed to achieve accurate size estimation. Building upon the MesonGS framework, the authors first develop a size estimator to obtain precise size measurements. To enhance performance further, a mixed-precision quantization strategy that incorporates 0-1 integer linear programming and dynamic programming is proposed. Experimental results demonstrate that the proposed method achieves superior compression quality compared to existing approaches while requiring less search time."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty of this paper is limited. The overall coding architecture closely resembles that of MesonGS, with only marginal innovations. The primary contributions consist primarily of technical enhancements, specifically 0-1 integer linear programming and dynamic programming, rather than presenting novel research insights.\n2. The application of the proposed method is confined to MesonGS, which restricts its potential use cases. To demonstrate the effectiveness of the method, it would be beneficial to apply it to multiple baseline models.\n3. The performance gains attributed to the proposed method are not adequately analyzed. Given that the core idea and methodology focus on accurate size estimation, the substantial performance improvement over MesonGS (as shown in Table 2, with Mip-NeRF 360 increasing from 26.68 dB to 27.65 dB) appears insufficiently justified. A detailed analysis of the contribution of each component, including the transition from 3DGS to Scaffold-GS, the proposed mixed-precision quantization strategy, and the fine-tuning process, is warranted.\n4. There are several writing issues. For example, “PNSR” in Table 1 should be “PSNR”. Additionally, notations should be defined upon their initial appearance, such as “Ai” in Equation (4) and the “⊙” symbol in Equation (5)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "(1)In line 202, how do you obtain the average important score of anchors in detail?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1) This is a well-written paper. \n(2) The proposed method is compared with various methods. The experiments are complete and convincing \n(3) Some visualizations are helpful to understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel approach to size-aware compression of 3D Gaussians, focusing on fine-grained mixed precision quantization to optimize file size while maximizing visual quality. The authors propose a framework that includes several key components: the selection of a base model (ScaffoldGS), a compression framework (MesonGS), and a size estimator."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1)Lack of FPS comparisons"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you provide more details about your method's performance on dynamic scenes, particularly regarding temporal coherence and compression consistency between frames? \n- What are the memory-speed trade-offs in your compression pipeline, and how does the peak memory usage compare to existing methods? \n- Have you identified any quality cliffs or failure cases where the compression performance degrades significantly (e.g., minimum achievable file size, complex geometries, or detailed textures)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Formulation of the compression problem that explicitly considers target file sizes, addressing a practical need not well-handled by existing methods\n- Combination of size estimation with mixed precision quantization, offering a new approach to balancing compression and quality\n- The original use of 0-1 ILP for bit-width selection in 3D Gaussian compression, adapting techniques from neural network quantization to a new domain\n- Clear justification for design choices (e.g., choosing MesonGS over other frameworks due to size stability)\n- 100× speedup in parameter search makes the method much more practical for real-world applications"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a method for compressing 3D Gaussian models to meet target file sizes while preserving visual quality. The key contributions include a quick size estimator for compression prediction, a hierarchical mixed precision quantization approach using integer linear programming and dynamic programming, and a complete compression pipeline that finds optimal parameters 100x faster than existing methods. The approach is validated on standard datasets, showing competitive results in both compression speed and visual quality metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I'm concerned about the paper's core assumption that attribute channels are independent during quantization. This feels like a significant oversimplification without any supporting evidence. I would like to see some experiments showing whether there's a correlation between channels' quantization errors and how this impacts the final results.\n- Why only test on static scenes with a single file size target (30MB)? For a paper claiming to be \"size-aware,\" I'd expect to see results across various target sizes and more challenging scenarios like dynamic scenes. I'm particularly curious how their method handles SH coefficients under different lighting conditions.\n- The performance analysis feels incomplete. We get plenty of quality metrics, but what about memory usage during compression? Also, they mention using CUDA for speed-up but don't explain the implementation details - this kind of information is crucial for anyone trying to replicate their work.\n- The paper shows how their method works but doesn't really explain how to use it. How do we choose the step size U or the number of blocks K in practice? Table 6 shows it's robust to different K values, but I'm still wondering what values I should pick for my use case.\n- I'm worried about error propagation in their system. What happens when errors from the inter-attribute stage combine with those from the intra-attribute stage? And how does the method behave with very small target sizes? Some analysis of failure cases would really help understand the method's limitations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1) The motivation is clear and this paper is easy to follow.\n(2) Superior performance as compared with previous methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a mixed-precision quantization method for 3DGS compression. Specifically, different bit-widths are assigned to different attribute channels of the gaussians. In addition, each attribute channel is partitioned into blocks of vectors. While previous methods require completing the entire compression process to determine the compressed file size, the proposed method introduces a size estimator to determine the model size within 70 seconds. Experiments show that the proposed method improves the performance of fine-tuning-free approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) My major concern is about the marginal performance gain. In Table 2, it seems the proposed method is even inferior to HAC, especially on the Tanks&Temples dataset. As compared to HAC, our-small has similar model size but produces lower performance on all metrics. For our-large, superior performance is acheived at the cost of much larger model size. So I wonder the superiority of the proposed method as compared to HAC. I can see that the proposed method is finetuning-free, but the authors should clarify which methods are fair results as compared to the proposed method.\n(2) Following the first comment, as shown in Fig. 4, the PSNR score of the proposed method seems to be lower than that of HAC under the same size. This further shows that the proposed method does not produces either higher accuracy or better efficiency. So the effectiveness of the proposed method seems to be further highlighted.\n(3) As mix-precision quantization is one of the major contributions for the proposed method, the bit-widths for different attribute channels should be discussed, which could be an interesting point for follow-up researchers. It would be better if the bit-widths for different attribute channels under different buget can be further analyzed.\n(4) Typos: \n- Line 40: 5.2710^6?\n- MPQ is not defined in the paper"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Automatically selecting hyperparameters to compress 3D Gaussians to a target file size while maximizing visual quality"
},
"_bibtex": {
"value": "@misc{\nxie2024sizeaware,\ntitle={Size-aware Compression of 3D Gaussians with Fine-grained Mixed Precision Quantization},\nauthor={Shuzhao Xie and Jiahang Liu and Weixiang Zhang and Shijia Ge and Sicheng Pan and Chen Tang and Yunpeng Bai and Zhi Wang},\nyear={2024},\nurl={https://openreview.net/forum?id=1PZt5nFlzH}\n}"
},
"abstract": {
"value": "In this paper, we propose a method to automatically select hyperparameters to compress 3D Gaussians to a target file size while maximizing visual quality. We iteratively search for a hyperparameter configuration until the file size meets the specified budget. However, existing compression frameworks require completing the entire compression process to determine the compressed file size, which is time-consuming. To accelerate this, we design a tailored size estimator for frameworks that can determine hyperparameters without requiring fine-tuning. Although the finetuning-free frameworks are more predictable, they typically underperform compared to fine-tuning-based approaches, which utilize end-to-end differentiable structures to achieve superior results. To close this performance gap, we propose a mixed-precision quantization strategy that exploits the heterogeneity of attribute channels by compressing each channel with different bit-widths. The resulting combinatorial optimization problem is efficiently solved using 0-1 integer linear programming. Additionally, we partition each attribute channel into blocks of vectors, quantizing each vector based on the optimal bit-width determined in the previous step. The block length is then determined via dynamic programming. Our method identifies hyperparameter settings that meet the target file size within 70 seconds, outperforming state-of-the-art methods in both efficiency and quality. Extensive experiments demonstrate that our approach significantly enhances the performance of fine-tuning-free methods, with its upper-bound performance comparable to that of fine-tuning-required techniques."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Shuzhao_Xie1",
"~Jiahang_Liu2",
"~Weixiang_Zhang1",
"~Shijia_Ge1",
"~Sicheng_Pan1",
"~Chen_Tang3",
"~Yunpeng_Bai1",
"~Zhi_Wang5"
]
},
"authors": {
"value": [
"Shuzhao Xie",
"Jiahang Liu",
"Weixiang Zhang",
"Shijia Ge",
"Sicheng Pan",
"Chen Tang",
"Yunpeng Bai",
"Zhi Wang"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"3D Gaussian Splatting",
"Mixed-precision Quantization",
"Compression"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "xie|sizeaware_compression_of_3d_gaussians_with_finegrained_mixed_precision_quantization"
},
"pdf": {
"value": "/pdf/f01f65eefbc4b083e8de605fe0a9d81c6eb73fa9.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Size-aware Compression of 3D Gaussians with Fine-grained Mixed Precision Quantization"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
1Q2t6D4dK6 | Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image-Quality Metrics | main | Active | adversarial defenses;image quality assessment;adversarial attacks;image quality metrics;benchmark | datasets and benchmarks | 5;5;5;6 | 5;5;4;3 | 2;3;3;3 | 2;2;2;2 | 3;2;3;1 | 5.25 | 4.25 | 2.75 | 2 | 2.25 | -0.870388 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see Weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1) This paper gives a comprehensive comparison of multiple defense methods against IQA under a variety of attacks, and draws a few conclusions under different scenarios.\n(2) The detailed analysis of the trade-offs between performance and robustness in various defense strategies offers practical guidance for researchers and developers.\n(3) The inclusion of statistical tests and evaluations of quality scores adds robustness to the findings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper aims to benchmark and evaluate the robustness of 30 different adversarial defense methods against 14 adversarial attacks regarding IQA metrics. It emphasizes the need for effective defenses due to the unique challenges posed by preserving image quality while defending against adversarial perturbations. It presents a comprehensive analysis of the efficiency of various adversarial defense methods for image quality assessment (IQA) metric."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1)\tAlthough the paper considers 30 different defense methods, it ignores some defense methods which are tailored for IQA methods specifically such as [1]. These methods should be discussed and compared in the paper, as the goal of this paper is to discuss the defense of IQA.\n(2)\tThe paper has evaluated different defense methods under different indicators, showing a lot of charts, but it lacks the in-depth analysis about what is the reason behind the effectiveness of different defense methods which is important. For example, the defense performance on the KonIQ-1k dataset on the right of Figure 3 exceeds the other two datasets in multiple defense methods. What is the reason? Why do many attack methods in Figure 4 achieve the worst defense performance on FPR, in terms of R robustness?\n(3) More experimental details and analysis are expected. For example, in line 227, 50 images are selected from 1k images for attack, do different selections of attack images affect the performance of attack and defense? Does it affect the conclusion?\n\n[1] Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, June 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "Not Applicable"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. In section 3.2, \"clustering the KonIQA dataset by spatiotemporal complexity.\" - Could you please explain the temporal aspect of images?\n2. Given the high computational demands of certified defenses and 10 images being used? How would you expect the results to vary as you sample different sets of 10 images?\n3. Logistics questions on the leaderboard :\n How do you plan to maintain the leaderboard, and will there be mechanisms for incorporating new defense/attack techniques over time?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Major strengths : \n\n1. The paper presents the first comprehensive benchmark of 29 defense methods,14 attack methods, and 9 IQA methods for NR-IQA\n2. Multi-dimensional approach to evaluation: robustness, preserving correlation with human perception (SRCC/PLCC), and quality of the image with respect to original (PSNR, SSIM)\n3. Practical Contribution to Research and Industry -> Good work on setting up a public benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper attempts to set up benchmarking for adversarial attacks and defenses in the context of No-Reference Image Quality Assessment algorithms. The coverage of the work seems to be good—29 defense strategies, 14 attack methods, and 9 IQA methods. Lots of experiments (as needed) and results are provided, as expected from a benchmarking paper."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major weaknesses :\n\n1. The paper is a valuable resource for the IQA community. But in terms of technical novelty, it would have been nice to have an attack/defense method with some motivation for the IQA task. I feel this paper would have been more suitable for a Benchmarking Track, but I understand ICLR lacks such a track and had to submit it to the main conference track\n2. The appendix provides many results, but it is very difficult to connect them to the main text, which points to the paper's poor organization.\n3. The authors should add LPIPS to the results in Table 3 (and other similar tables) along with PSNR and SSIM. PSNR is not a perceptual metric and can be reliable, leaving SSIM as the only metric. It is better to report both SSIM and LPIPS scores. \n\nMinor :\n\n1. Paper formatting needs to improve, and content organization can also be better. For example, Fig 1 does not discuss page 8.\n2. Section 3.1 under Adversarial defenses: It is better to divide the paragraphs into different types. This will make reading the paper much easier."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The definition of attack in Equation 1 focuses solely on increasing the model's score. However, if an image already possesses the highest quality, what is the purpose of the attack on it? Why was the idea of decreasing the score of high-quality images/videos not considered?\n2. Currently, the attack methods employed are those typically used in classification problems. It would be beneficial to consider incorporating some of the attack strategies for IQA that have been proposed in recent years.\n3. Table 3 shows that many adversarial purification defenses exhibit strong defensive effects. However, these methods should be analyzed more thoroughly. For instance, purification techniques that modify the entire image, such as color quantization and median blur, should include more detailed indicators (L_2 and L_\\infty) to better reflect the extent of image modification.\n4. Most of the analysis in this paper primarily describes the data presented in the tables. It would be beneficial to include an in-depth analysis of the characteristics and connections among these various types of defense strategies, providing great guidance for future research.\n5. The abbreviations in the table should be added with full spelling in the caption to help readers understand and prevent misunderstandings. \n6. Equation 2 misses the variable x’."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The research topic of this paper is interesting and promising.\n2. The experiments in this paper are comprehensive and detailed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper systematically evaluates the effectiveness of various defense strategies for NR-IQA models, including adversarial purification, adversarial training, and certified robustness. It also examines these defenses under both adaptive and non-adaptive attack scenarios. Overall, the experiments in this paper are thorough and comprehensive, but the paper's readability could still be enhanced."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Most of the strategies are directly borrowed from classification tasks, with no new approaches tailored to the specific characteristics of IQA tasks.\n2. In addition to PSNR and SSIM, incorporating additional quantitative indicators like L_2 and L_\\infty would provide a more intuitive understanding of the differences between the original image and the purified image.\n3. The article is less readable, and many tables contain abbreviations that are not defined."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The authors should work more on Points 1, 2, and 3 in an attempt to raise the reviewer's rating."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is decently written.\n2. The problem under investigation is of practice importance as well described in the Introduction.\n3. This empirical study is comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an empirical investigation of the effectiveness of various defense techniques against adversarial attacks on image quality metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. A primary technical concern is label preservation (quality preservation in our context) amidst adversarial perturbations. This research evaluates various adversarial attacks, including those like MADC by Wang and Simoncelli, which may not preserve image quality after attacks. In such instances, the model is expected to provide a different quality prediction for the manipulated image, necessitating ground-truth quality assessment through human evaluation.\n\n2. From a conceptual perspective, the reviewer wonders how to understand certified defenses that involve voting and medianing. In classification, it is not hard to comprehend that the robustness is gained through prediction ensembling, which is again under the assumption of label preservation (see Eq. (4)). However, the quality preservation assumption is clearly not true for quality assessment. For example, consider a test image with some Gaussian blur, to perform random smoothing, we shall add Gaussian noise to it according to Eq. (4). Then, the final score is the average quality estimates of a Gaussian blurred image and a Gaussian blurred and noised image (which may be of different quality), which makes less sense to the reviewer.\n\n3. The observed effectiveness of the adversarial attacks (evidenced by an SRCC decrease from 0.611 to 0.477 in Table 3) appears inconsistent with prior research such as [Zhang et al., 2022a] (which reduces to random guessing). Given the limited success of these attacks, interpreting the defense results with similar SRCC values (ranging between 0.5 and 0.6) becomes challenging.\n\n4. Recent NR-IQA models that integrate visual and language components have not been evaluated in this study.\n\n5. The focus of this empirical study aligns more closely with image processing journals rather than a machine learning conference like ICLR, given that no new theories and algorithms are developed."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper presents a new benchmark of defense methods against adversarial attacks on image quality metrics"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024guardians,\ntitle={Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image-Quality Metrics},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Q2t6D4dK6},\nnote={under review}\n}"
},
"abstract": {
"value": "Most modern image-quality-assessment (IQA) metrics are based on neural networks, which makes the adversarial robustness of these metrics a critical concern. This paper presents the first comprehensive study of IQA defense mechanisms in response to adversarial attacks on these metrics. We systematically evaluated 29 defense strategies - including adversarial purification, adversarial training, and certified robustness - and applied 14 adversarial attack algorithms in both adaptive and nonadaptive settings to compare these defenses on nine no-reference IQA metrics. Our analysis of the differences between defenses and their applicability to IQA metrics recognizes that a defense technique should preserve IQA scores and image quality. Our proposed benchmark aims to guide the development of IQA defense methods and can evaluate new methods; the latest results are at link hidden for blind review."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"adversarial defenses",
"image quality assessment",
"adversarial attacks",
"image quality metrics",
"benchmark"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/105e922e4f4419661217d6a0347cc4b345f601bf.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/df3568e3d1f17630d1b8bdc0931598783def60fb.zip"
},
"title": {
"value": "Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image-Quality Metrics"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Qn1pMLYas | On the Cycle Consistency of Image-Text Mappings | main | Active | cycle consistency;multimodal learning;vision-language modeling;text-to-image generation;synthetic data | foundation or frontier models, including LLMs | 3;3;5;5;5 | 4;4;4;3;4 | 2;2;3;2;2 | 1;2;2;2;2 | 3;3;2;2;3 | 4.2 | 3.8 | 2.2 | 1.8 | 2.6 | -0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Recent research shows the hallucination problems in Multimodal LLM and compositional problems in T2I. How can the proposed method avoid this issue? For example, an input prompt could result in the generation of an incorrect image, which might then lead to an MLLM producing captions that are incorrect but resemble the original prompt. In this case, the cycle consistency might be high, but the actual performance should be low.\n2. What is the cycle consistency on long captions?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper focuses on interesting topics about the cycle consistency of by analyzing the cycle consistency of T2I and I2T models.\n2. It provides a comprehensive analysis of cycle consistency in image-to-text and text-to-image mappings, highlighting the correlation between cycle consistency and downstream performance in tasks such as image captioning and text-to-image generation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper analyzes the cycle consistency of image-to-text and text-to-image models. The study shows that while current models exhibit a level of cycle consistency, there is room for improvement, especially T2I models are sensitive to slight changes in prompts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the paper presents that T2I models are more sensitive to small changes in input prompts, it lacks an in-depth analysis of why different combinations of T2I and I2T models yield varying performance. For example, how does the training dataset affect the cycle consistency? How does the pre-trained model in T2I or I2T affect the cycle consistency? \n2. The paper does not sufficiently analyze why specific combinations of I2T and T2I models perform differently in terms of image and text cycle consistency. For example, BLIP2 underperforms compared to LLaVA1.6 in image cycle consistency while surpassing it in text cycle consistency.\n3. The analysis in the paper highlights that text-to-image models are highly sensitive to slight changes in prompt structure (word choice, order, and length), which can lead to inconsistencies. However, the paper stops short of proposing concrete solutions or mitigation strategies for this issue.\n4. The evaluation conducted solely on 1k MS COCO data is limited, especially since MS COCO captions often lack detailed descriptions of the images"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- When calculating cycle consistency for each modality, one of two generative models is fixed. (SDXL Turbo for image cycle consistency / LLaVA 1.5-13b for text cycle consistency) Would results show the same trend if the fixed models were changed?\n- If richer and more detailed data improves cycle consistency, can we say that recent models show better performance because they use quality data? It could lead to valuable insights if authors examined the training data characteristics of the better-performing models to see if there's a correlation with data quality, and discussed how this relates to cycle consistency performance."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper presents various quantitative analysis results on the proposed claim while also visualizing various qualitative results.\n- The authors analyze a less explored aspect of generative models and provide insights into its significance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper analyzes the cycle consistency of current image/text generative models, which represents how well the original input is preserved when it consecutively passes through two generative models. To quantify the cycle consistency of images and text, the authors use metrics that measure perceptual similarity and present results for various combinations of image and text generative models. Using several benchmarks, the authors suggest that cycle consistency alone can imply the performance of models on downstream tasks by showing a high correlation between the two, thereby eliminating the need for creating additional test sets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- There is little analysis on the differences between the models used to measure cycle consistency. The paper simply mentions that recent models perform better, without analyzing whether the differences stem from their training data, objective functions, specific architectures, etc. Authors could have provided a table summarizing these differences and discussed how these factors may contribute to the observed performance differences in cycle consistency.\n- In sections 4 and 5, it is unclear what message the authors are trying to convey. It is ambiguous how these sections relate to the cycle consistency discussed in sections 2 and 3. Authors could have better linked these sections to the overall narrative, such as explicitly stating how the divergence in text-to-image mappings (Section 4) and sensitivity in image-to-text mappings (Section 5) impact or relate to cycle consistency."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Is image-text cycle consistency a meaningful metric for model development? Should improving cycle consistency be a priority for model designers? What are the concrete applications or benefits of enhanced cycle consistency?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is well presented and easy to read.\n- The paper demonstrates extensive experiments incorporating multiple combinations of T2I and I2T models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents several intriguing phenomena regarding the cycle consistency of image-text mappings with text-to-image models and image-to-text ones. It demonstrates (1) that more advanced models achieve better cycle consistency; (2) a strong correlation between cycle consistency and tasks such as image captioning and text-to-image generation; (3) that the number of objects described by the text affects text-to-image mappings; and (4) that text-to-image models are sensitive to prompt variations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major issues:\n- Regarding Table 1 and 2, the analysis of why image-to-text models have greater impact than text-to-image models is hand-wavy. There should be more discussion on this.\n- Causality Direction: While the paper demonstrates correlation between cycle consistency and model performance (captioning/generation), it fails to address the causality direction. The improved cycle consistency is likely a consequence of better model capabilities rather than a contributing factor, which diminishes the practical utility of cycle consistency as a metric.\n- The claim that \"more descriptive text leads to a a broader distribution of generated images\" is not convincing. The experiments does not properly control the caption length. Figure 6 shows a case where the 1-tag caption exceeds the 5-tag caption in length.\n- The abstract makes several claims that aren't supported by the paper's content:\n - \"analyze in what ways they (cycle consistency) fail\": there are no such discussions in the paper.\n - \"how it affects achieving cycle consistency\": there are no such discussions in the paper.\n - \"we show possible challenges of training cycle consistent models due to the sensitivity of text-to-image models\": there are no explorations of training cycle-consistent models in the paper.\n\nMinor issues:\n- \"more descriptive text leads to a a broader distribution of generated images\" has double \"a\".\n- On the 4th line in page 4, the sentence \"Therefore, examine how a text-to-image mapping can diverge from one fixed text prompt into many different images.\" is incomplete.\n- At the end of page 8, \"Table 5\" should be \"Table 6\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "-In Table 1\\~2, the presented values alone are not enough to tell if each I2T+T2I model pair has a good cycle consistency since there's no baseline performance or threshold was suggested. Although the authors showed several cases in Figure 2\\~3, could the authors provide any kind of baseline scenario for comparison?\n\n-Since the sampling process image-to-text models can be also stochastic, could you also provide the analysis on the divergence of I2T models?\n\n-What does the analysis of the divergence and sensitivity of I2T models suggest for creating more cycle-consistent models? It would be better if there's a clearer statement about how the results on divergence and sensitivity imply about cycle consistency."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-Considering the current generative models become more challenging to inject cycle consistency because of their iterative sampling process, their behavior on cycle consistency is an interesting question.\n\n-The script is well-written and clearly presents its claim."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors explored about the multi-modal cycle consistency of current text-to-image(T2I) and image-to-text(I2T) generation models. \n\nThey paired various off-the-shelf T2I and I2T models to build a cycle model and measured the input-output difference. They found that current state-of-the-art models possess a certain level of perceptual cycle consistency, even when they're not explicitly trained with cycle consistency objectives.\nThen, they argued that as the performance of the individual T2I/I2T module increases, the cycle consistency improves.\n\nTo further analyze and find possible factors that can affect to achievement of cycle consistency, the authors suggested the concept of 'divergence' in T2I mappings. And they claimed that more detailed and informed text prompts showed more divergent output space, yet improved cycle consistency.\nFinally, the authors demonstrated that a slight perturbation of text input sometimes results in higher variation in the T2I model output, which could be a challenge to achieve better cycle consistency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-Some experiments are not well-designed, which makes the corresponding findings seem to lack contributions or doubtful.\n\n1. Section 3 stated to demonstrate \"more cycle-consistent models have better T2I/I2T downstream performance\", but its content only shows that \"better T2I/I2T models are more cycle-consistent\", which are not the same.\nIt seems too natural that combining better T2I&I2T models improves cycle-consistency of the pair, since they provide high-quality data that contains major information of the input. On the other hand, it's still questionable that satisfying cycle consistency guarantees better T2I&I2T performance.\n(e.g. A perfect Image->Text->Image reconstruction can be achieved if the I2T model writes down all pixel values in one long string. A perfect Text->Image->Text reconstruction can be achieved if the T2I model produces the image that contains the entire input text visually.)\n\n2. In Figure 6, synthesized input captions with fewer tags don't seem to actually contain less information. In the first row, the input caption for 1 Tag is very long and specific, more detailed than 2~5 Tag captions. In the second row, the 1 Tag caption already contains the info of the second tag \"reflects\". This could be the reason that the divergence decreased with fewer tags, since better cycle consistency (more tags) coming with more divergence seems counter-intuitive."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Overall, this paper is a nice empirical study on cyclical consistency of image-text mappings, but I would urge the authors to respond to the Weaknesses during the rebuttal. I am open to improving the score based on the rebuttal discussion. Looking forward to the discussion. \n\nSee Weaknesses for additional questions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Below I state the strengths and weaknesses of the model: \n\nStrengths:\n\n- The paper is a thorough empirical study on the cycle-consistency of image-text representations across most of the popular T2I and I2I models. The results though not super surprising (as they are often trained / fine-tuned on the similar training sets) are well documented and can be crucial for the community. \n- The observation regarding the correlation between image-text cyclical consistency and downstream task accuracy can be useful to quickly check the effectiveness of the model. One question regarding this: In Fig. 4, SDXL-Turbo is used as a decoder for the image reconstruction case and LLaVa-1.5-13B for text generation. How does this design choice affect the correlation between cycle consistency and downstream performance? The authors should ideally provide some ablation on this design choice."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper (Cycle Consistency of Image-Text Mappings) investigates the degree to which the image-text mappings have a cyclical consistency. Although existing models do not train for this consistency explicitly, for a subset of models this cyclic consistency is enforced. In terms of application, the authors find that the measure of cycle-consistency correlates relatively well with downstream accuracy — which can help perform quick tests on the capabilities of the model without requiring a curated benchmark. Overall, I believe that the paper is insightful, but lacks a strong application using those insights except for an approximate performance check for downstream tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weaknesses:\n- The application of using cycle-consistency as an approximate measure for downstream task accuracy is an interesting use-case; However, I believe they are proxy for only two tasks (Image captioning and T2I generation performance). To be useful in practice, I suggest the authors to add in more tasks concerning these models (e.g., VQA amongst others) and check if cycle-consistency can still be an approximate measure of task accuracy. \n- I find the Sec.(5) to be intriguing, but the authors should highlight how some of the takeaways be used towards training / fine-tuning models with better downstream capabilities. \n- The authors select CIDEr score for captioning performance measurement; Have the authors considered using a strong MLLM for measuring captioning performance and using it to measure the correlation with?\n- (This is not a weakness - but a discussion point) — Based on the insights, what do the authors think about building unified image-to-text and text-to-image models while enforcing cyclical consistency? Will it lead to better downstream performance than training these models independently."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024on,\ntitle={On the Cycle Consistency of Image-Text Mappings},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Qn1pMLYas},\nnote={under review}\n}"
},
"abstract": {
"value": "The increasing exchange of image and text in large multimodal models leads us to ask: to what degree are mappings from text to image, and back, cycle-consistent? First, we find that current image-to-text models paired with text-to-image models do achieve a degree of perceptual cycle consistency, even when these models are not trained to have this effect. However, these mappings are far from perfect, motivating us to analyze in what ways they fail. First, we observe a strong correlation between cycle consistency and downstream performance in both image captioning and text-to-image generation. Next, we investigate how divergent are text-to-image mappings as a function of the number of objects described by the text, and how it affects achieving cycle consistency. Surprisingly, we find that more descriptive text leads to a a broader distribution of generated images, but also results in overall better reconstructions. Finally, we show possible challenges of training cycle consistent models due to the sensitivity of text-to-image models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"cycle consistency",
"multimodal learning",
"vision-language modeling",
"text-to-image generation",
"synthetic data"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/663882bc87645df63f0932022806c34eab4987a5.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "On the Cycle Consistency of Image-Text Mappings"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Qpt43cqhg | Fully-inductive Node Classification on Arbitrary Graphs | main | Active | node classification;inductive generalization | learning on graphs and other geometries & topologies | 3;6;6;6 | 4;3;4;4 | 3;3;2;3 | 2;4;4;4 | 2;3;2;2 | 5.25 | 3.75 | 2.75 | 3.5 | 2.25 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Most of the questions and suggestions are already mentioned in the weaknesses section. I would like to mention some minor points here.\n\n1. I would like to see more graph operations used in LinearGNN instead of just X, AX, A^2X, (I-A)X, (I-A)^2X. For example, the Chebyshev polynomial operation, the PageRank operation, normalized Laplacian operation, etc. I think more operations could provide more diverse perspectives of the graph, and thus improve the performance of GraphAny at a little extra cost.\n\n2. I doubt the time complexity in Table 1, since pseudo-inverse is used in LinearGNN, which is computationally expensive up to O(n^2d), could the authors explain this?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper proposes a novel problem setting seemingly impractical, and provides a reasonable solution to it. Previously, I was doubtful about the feasibility of graph foundation models, since unlike in NLP and CV, graph data is more universal and diverse. The information heterogeneity between different graphs may make this fully inductive setting impossible, i.e., I didn't think the knowledge in different graphs has much in common. However, the authors provide an impressive and valid solution to this problem, which is a good contribution to the community.\n\n2. The proposed method is well-motivated and well-designed. The attention module that tackles dimensionality and permutation issues is particularly novel and interesting, with strong intuition. \n\n3. The experiments are extensive and convincing. An impressive number (31) of datasets are involved in this fully-inductive setting, and the good average score of GraphAny demonstrates its effectiveness.\n\n4. The ablation study is comprehensive and insightful. The authors provide a clear understanding of the importance of each component in GraphAny."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the problem of fully inductive node classification, where limited parameters are learned from one small graph, and inference other unseen graphs. The authors propose GraphAny, which consists of two components, one is a set of linearGNNs, and the other is a learnable attention MLP function. Using pseudo-inverse, LinearGNNs directly compute the node embeddings of corresponding linearGNN channels. Then a sophisticated attention technique which has properties of permutation-invariant and robust dimension generalization is used to combine these embeddings. The extensive experiments show that GraphAny gains significant improvements over the state-of-the-art methods in many datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. (Explainability) I didn't see any explanation of one very important question: why could the knowledge learned from one graph be transferred to another unseen and unrelated graph? The authors should provide more intuitive insights on this point. From my point of view, LinearGNNs with different graph operations may serve as probes to extract different types of intrinsic knowledge from the graph, then the permutation and dimension invariant attention module could combine this knowledge in a semantic space where the common knowledge of graphs is shared. The authors should provide more insights on this point, i.e., why it works well.\n\n2. (Experiments) Although the proposed AnyGraph shows a high average performance, it is not the best in all datasets, especially in some large datasets such as Arxiv, Reddit and Products. I don't think homophily could explain this, since AnyGraph (Arxiv) also performs poorly. The authors could provide more insights on why AnyGraph fails in these datasets, and how to possibly improve it.\n\n3. (Experiments) The transductive baselines (GCN, GAT) are not strong enough. Since the benchmark contains so many datasets ranging from highly homophily to highly heterophily, baselines [1,2,3] that could fit both homophilous and heterophilous graphs should be compared. I highly recommend the authors to add some of these baselines to make the results more convincing.\n\n\n[1] Luan, S., Hua, C., Lu, Q., Zhu, J., Zhao, M., Zhang, S., ... & Precup, D. (2022). Revisiting heterophily for graph neural networks. Advances in neural information processing systems, 35, 1362-1375.\n\n[2] Lim, D., Hohne, F., Li, X., Huang, S. L., Gupta, V., Bhalerao, O., & Lim, S. N. (2021). Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems, 34, 20887-20902.\n\n[3] Zhu, J., Rossi, R. A., Rao, A., Mai, T., Lipka, N., Ahmed, N. K., & Koutra, D. (2021, May). Graph neural networks with heterophily. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 12, pp. 11168-11176)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How do you get the attention score in equation 5? Do you just sum all elements in matrix $P_u^{i}$ in equation 10? What is the learnable weight in the attention module as shown in figure 3?\n\n2. Can you further explain the experimental setting in figure 5? What does the density mean? Since the value is in the range of [0, 1], is this value normalized?\n\n3. This paper mentions that it is always possible to cheat the fully-inductive setup by training a separate instance of existing models for each test dataset (in Line 75). However, the proposed LinearGNN operates like what it just said by training a linear layer with a graph convolution operation for a test graph and the authors called this LinearGNN a non-parametric solution, or preprocessing step (in Table 1). It's hard to convince the readers that the proposed method is a fully-inductive graph learning method. Can the authors clearly differentiate your approach from the \"cheating\" setup?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper tackles a great challenging fully-inductive graph learning task.\n2. This paper introduces an inductive attention module that satisfies permutation invariance properties and generalizes to new graphs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles the issue of fully inductive graph learning by introducing GraphAny. The proposed method consists of LinearGNN to preprocess the features following the idea of SGC and attention module to transform the feature."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation of this paper needs improvement. Many details are missing in the section of methodology. \n- The authors conduct the experimental on motivating the entropy normalization, while the experimental setup in figure 5 is not explicit. It's not suggested to specify what these methods are until the section 4.1. The authors should provide more explicit explanation of the experimental setup for Figure 5.\n- It's not clear what is the learnable parameters in the attention module and how to get the attention vector $\\alpha$. A clear description of the learnable parameters in the attention module should be added.\n- It's weird to call $y_u^{(i)}$ in equations 9 and 10 as node feature and it's more proper to describe it as label vector considering its dimensionality. \n\n2. In figure 3, the authors mention that LinearGNN is non-parametric, but LinearGNN involves the learnable weight matrix W in equation 1. It's improper to claim that LInearGNN is a non-parametric solution. The authors should revise their description of LinearGNN to avoid confusion.\n\n3. This paper mentions that it is always possible to cheat the fully-inductive setup by training a separate instance of existing models for each test dataset (in Line 75). However, the proposed LinearGNN operates like what it just said by training a linear layer with a graph convolution operation for a test graph and the authors called this LinearGNN a non-parametric solution, or preprocessing step (in Table 1). It's hard to convince the readers that the proposed method is a fully-inductive graph learning method. \n\n4. Though the authors show that GraphAny has better average performance in total 31 graphs in Table 2. However, the experimental results in Table 5 shows that GAT outperforms GraphAny in 18 out of 31 graphs, which means that the proposed method does not have advantage in the fully inductive learning setting. In addition, GAT is a baseline proposed in 2018, and many recent methods can outperform GAT in most of these graphs. \n\n5. How does the different values of t influence the performance of GraphAny on different datasets? It's better to include an ablation study on the effect of t."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1. The authors are ambitious, tackling a highly challenging and valuable problem: designing a foundational GNN model that can generalize across diverse datasets. \n\nS2. The proposed method is ingenious. The authors introduce a LinearGNN that does not require training, enabling the model to adapt to different datasets.\n\nS3. The experimental results are powerful and impressive. \n\nS4. The authors provide the complete code, along with a well-organized README file, to support their views."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors focus on addressing a challenging problem: enabling GNNs to be fully-inductive across diverse datasets. They propose a model called GraphAny. Specifically, the authors employed multiple encoders (LinearGNNs) whose parameters can be obtained analytically, allowing it to generalize across datasets with different feature and label spaces. Additionally, the authors design an attention-based, learnable MLP to capture transferable graph patterns. Extensive experiments demonstrate the model's effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. In fact, the proposed LinearGNNs seem to me more like a data preprocessing method which requires no learning to unify the feature and label spaces through analytical solutions.\n\nW2. Regarding W1, the authors’ statement in the Introduction that GraphAny is the first fully-inductive method seems somewhat over-claimed. According to the views in this paper, any model that can be solved analytically (i.e., without training) could also be seem as fully-inductive. Nonetheless, this point does not negate the contribution of the attention-based component to knowledge transfer.\n\nW3. The paper does not mention some recent methods capable of achieving the fully-inductive as described, such as GraphControl [1]. \n\nW4. I suggest that the author should provide the data split ratio for downstream test datasets (it is vaguely mentioned only in the appendix). This is a crucial setting, as if my understanding is correct, the proposed method requires a certain amount of ground-truth labels to analytically solve the parameters of LinearGNNs on test datasets.\n\nW5. Based on W4, the approach in this paper seems to be semi-supervised (or fine-tuned) on downstream tasks, meaning it has access to the same amount of labeled data as other semi-supervised algorithms like GCN. Moreover, GraphAny benefits from additional prior knowledge from other datasets (i.e., the pre-training phase), making it seemingly more advantageous compared to other algorithms in experimental settings. This stands in contrast to the authors' claim that other semi-supervised algorithms have an additional advantage over GraphAny in the experimental settings.\n\nIf LinearGNNs do not require any test dataset labels to solve the parameters (i.e. completely zero-shot scenario), then W4 and W5 would not hold. I strongly recommend that the authors add further explanations in the paper to improve reader comprehension.\n\n[1] GraphControl: Adding Conditional Control to Universal Graph Pre-trained Models for Graph Domain Transfer Learning, WWW24."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could you compare your method with the random coefficients combination of 5 different linearGNNs?\n\n2. According to your Table 2 and Figure 7, it seems that SGC1 and SGC2 occupy a dominant position( high weight and high accuracy). Could you discuss why this happens more? Could you analyze why SGC1 and SGC2 tend to get higher weights and accuracy? Does this suggest that simpler graph convolutions are more transferable? How might this insight inform future designs of inductive graph models?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. this paper raises a more general and challenging task for graph ood, that is the fully inductive node classification, which requires the model can generalize to arbitrary graphs, involving new structures, new dimensions, and semantics for their feature and label spaces.\n\n2. this paper designs a novel method GraphAny, that integrates the multiple LinearGNN predictions and learned inductive attention, which satisfies the permutation invariant and robust to dimension changes\n\n3. the paper gives comprehensive experiments and evaluation of various datasets, showing the effectiveness of their methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces GraphAny, a model designed for fully-inductive graph learning, where models must infer on new graphs with varying structures, features, and labels. GraphAny leverages LinearGNN for analytical graph inference, adaptable to diverse graph types. By integrating multiple LinearGNN predictions using learned inductive attention, GraphAny ensures robust generalization to new graphs. Empirical results demonstrate GraphAny's effectiveness, achieving a 67.26% average accuracy on 30 new graphs with minimal training data, outperforming both inductive and transductive baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The lack of baseline. This paper seems only to compare with the test-adapted GNN models as the baselines (GCN, GAT, MLP), I am not very certain if any other GNN baselines trained on the one dataset while generalizing to more datasets, such as the LLM-based GFM[1]. \n\n2. Since your method is based on the combination of 5 different linearGNNs ($ F = X$ (Linear), $F = AX$ (LinearSGC1), $F = A^2X $(LinearSGC2), $F = (I − A)X$ (LinearHGC1) and $F = (I − A)^2X$ (LinearHGC2) ), have you ever compared your method with the random coefficients combination of them? I suggest comparing GraphAny to a baseline that uses random or fixed coefficients to combine the 5 LinearGNN components. This would help isolate the benefit of the learned inductive attention mechanism.\n\n[1] One for All: Towards Training One Graph Model for All Classification Tasks"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel model that enables inductive generalization problem on unseen graph with arbitrary feature and label dimensions."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024fullyinductive,\ntitle={Fully-inductive Node Classification on Arbitrary Graphs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1Qpt43cqhg},\nnote={under review}\n}"
},
"abstract": {
"value": "One fundamental challenge in graph machine learning is generalizing to new graphs. Many existing methods following the inductive setup can generalize to test graphs with new structures, but assuming the feature and label spaces remain the same as the training ones. \nThis paper introduces the fully-inductive setup, where models should perform inference on arbitrary test graphs with new structures, feature and label spaces. We propose GraphAny as the first attempt to this challenging setup. GraphAny models inference on a new graph as an analytical solution to a LinearGNN, which can be naturally applied to graphs with any feature and label spaces. To further build a stronger model with learning capacity, we fuse multiple LinearGNN predictions with a learned inductive attention. Specifically, the attention module is carefully parameterized as a function of the entropy-normalized distance features between pairs of LinearGNN predictions to ensure generalization to new graphs. Empirically, GraphAny trained on a single Wisconsin dataset with only 120 labeled nodes can generalize to 30 new graphs with an average accuracy of 67.26%, surpassing not only all inductive baselines, but also strong transductive methods trained separately on each of the 30 test graphs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"node classification",
"inductive generalization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/413e7b331a944a220129538d90ea5dabab785c18.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/349d73aafd23dbffc50f542f349ed39f240d79e9.zip"
},
"title": {
"value": "Fully-inductive Node Classification on Arbitrary Graphs"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1Qq62mo8TW | AFFS: Adaptive Fast Frequency Selection Algorithm for Deep Learning Feature Extraction | main | Desk Reject | Discrete Cosine Transform;feature extraction;frequency domain;frequency components selection. | other topics in machine learning (i.e., none of the above) | Zilong He;Kun Xie;Xiaocan Li;Jigang Wen;Jiannong Cao;Gaogang Xie;LiangWei;Kenli Li | ~Zilong_He4;~Kun_Xie1;~Xiaocan_Li1;~Jigang_Wen1;~Jiannong_Cao1;~Gaogang_Xie2;~LiangWei1;~Kenli_Li1 | 0 | 0 | 0 | 0 | 0 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": {
"value": "For using smaller margin and narrower linespace to squeeze in more contents in 10 pages."
},
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Submission Desk Rejected by Program Chairs"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nhe2024affs,\ntitle={{AFFS}: Adaptive Fast Frequency Selection Algorithm for Deep Learning Feature Extraction},\nauthor={Zilong He and Kun Xie and Xiaocan Li and Jigang Wen and Jiannong Cao and Gaogang Xie and LiangWei and Kenli Li},\nyear={2024},\nurl={https://openreview.net/forum?id=1Qq62mo8TW}\n}"
},
"abstract": {
"value": "As deep learning(DL) advances, effective feature extraction from big data remains critical for enhancing DL model's performance. This paper proposes a method for feature extraction in the frequency domain, utilizing advantages such as concentrated signal energy and pronounced data features. However, existing frequency component selection algorithms face challenges like difficulty adapting to diverse tasks and achieving only locally optimal results with extended processing times. To address these challenges, we introduce the Adaptive Fast Frequency Selection (AFFS) algorithm, tailored for various subsequent tasks. AFFS incorporates a frequency component selection factor layer, integrating it with the subsequent DL model to select globally optimal frequency component combinations for the DL model. Additionally, we propose a fast selection algorithm to expedite the process, leveraging the experimental observation of rapid convergence of selection factor ranking. Experimental results demonstrate that AFFS achieves superior performance across three datasets and three DL models. By using AFFS to select appropriate frequency components, even though our input data size is only 10\\% of the original frequency feature, the classification accuracy of the model is improved by about 1\\%. Furthermore, the early stopping mechanism can shorten the selection process by approximately 80\\%."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Zilong_He4",
"~Kun_Xie1",
"~Xiaocan_Li1",
"~Jigang_Wen1",
"~Jiannong_Cao1",
"~Gaogang_Xie2",
"~LiangWei1",
"~Kenli_Li1"
]
},
"authors": {
"value": [
"Zilong He",
"Kun Xie",
"Xiaocan Li",
"Jigang Wen",
"Jiannong Cao",
"Gaogang Xie",
"LiangWei",
"Kenli Li"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Discrete Cosine Transform",
"feature extraction",
"frequency domain",
"frequency components selection."
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "he|affs_adaptive_fast_frequency_selection_algorithm_for_deep_learning_feature_extraction"
},
"pdf": {
"value": "/pdf/f437800c9914c5d9c8472c040f2048d1277a6edb.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "AFFS: Adaptive Fast Frequency Selection Algorithm for Deep Learning Feature Extraction"
},
"venue": {
"value": "ICLR 2025 Conference Desk Rejected Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
||||||||||
1R5BcYS8EC | SysCaps: Language Interfaces for Simulation Surrogates of Complex Systems | main | Active | surrogate models;multimodal text and timeseries models;language-interfaced regression | applications to physical sciences (physics, chemistry, biology, etc.) | 5;5;6 | 3;2;2 | 2;2;3 | 2;2;2 | 3;3;3 | 5.333333 | 2.333333 | 2.333333 | 2 | 3 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I would be interested in seeing the performance compared between time-series-centric models and current generic architectures."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The authors provide a clear explanation of their setup, flow of data across multiple components and their evaluation and analysis.\n- (I do not feel sufficiently well-acquainted with this domain to evaluate the predictive contribution or performance of the models.)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper describes a set of lightweight models to model complex energy systems, using an LLM to generate prompts and a encoder and bidirectional time-series model to predict energy consumption."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I presume the authors' choice of models is due to resource constraints and aiming for a lightweight setup, but it feels like it has multiple components when it could be a simpler setup with fewer model components. For instance, a BERT-type model could also be used for time-series prediction (as opposed to only text encoding). Similarly the two-step process of generating prompts using a separate LLM and then encoding that prompt with an encoder could be avoided by just using the LLM directly and fine-tuning it."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In addition to the points in the weakness:\n\n1. Did the authors try to just templatize the sentences rather than generating them using an LLM, how would that impact performance (i.e. rather than telling an LLM to adhere to some constraint-based template, just have a sketch sentence and fill attribute values in the given sentence)?\n2. Why wasn't RFE performed for the Wind Farm Wake modeling dataset, would performing RFE improve performance ?\n3. Would the model not further improve if the SysCaps were generated using synonyms for the attributes, did the authors see the LLM generate different synonyms for the building or wind farm dataset? \n4. Do the authors believe that training on the subset of data where the caption quality assessed by the classifier model, would improve the overall model performance?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is well-written and easy to follow. The authors motivate the problem well and empirically show improvements over 2 real-world datasets. Further, SysCaps can be used by non-expert users to understand the features of surrogate systems. The Design space exploration is insightful to show the features learned by the model."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper discusses the important challenge of building surrogate models for the prediction of simulation data. They specifically motivate the problem for complex energy systems(CES). These surrogate models often model system features as one hot vectors. The authors propose using text based descriptions to model these so-called surrogate systems with time series data. The text data is encoded as a dense embedding obtained from language models. The embedding is then fed to a bidirectional sequence encoder along with the time series data. \n\nThe paper discusses the generation of the text pertaining to the attributes of such systems and proposes an automatic evaluation strategy for the same. \n\nFor generating the captions the authors prompt an LLM with an in-context learning-based prompt that tunes the style and number of sentences. To evaluate the SysCap quality the authors train a multi-class classifier to check the attributes covered in the description generated by the LLM, using the text embedding. \n\nThe authors show how including SysCaps along with time series data leads to improved performance against baselines that perform onehot encoding over attributes. The authors further show how training a custom embedding model can aid in improving time series prediction over just using a time series-based model. They further empirically prove how the embeddings are more robust to synonyms and missing data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper claims the technique uses a pretrained text encoder for generating the embeddings, but then in section 5 mentions that the models are actually finetuned. This should be explicitly mentioned in the claims that the paper makes rather than just mentioning that a pretrained embedding is used. \n2. Further, the authors do not compare with the \"said-pretrained\" embeddings but only finetuned embeddings, and other SOTA embedding models for text encoding. \n3. The paper also claims that they train a system to evaluate the caption quality, the parameters of the said multiclass classifier are omitted from the paper.\n4. The paper claims that for the CES building energy consumption dataset, the SysCaps-kv configuration works best, and for the turbine configuration the SysCaps-nl, there should be some discussion regarding the insights drawn from both cases and why the performance for both techniques are different. \n5. The authors claim that SysCaps would be useful for non-expert users, but lack the discussion if LLM-based explanations (complementary to the work done) can also aid in explaining the system attributes for surrogate models."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Q1) What's the advantage of the proposed approach using LLMs over more traditional template-based natural language captions? (e.g. \"The building is <x> squared feet.\", etc.)\n* Q2) In Figure 1, the key-value template has only a colon to separate the key and the value. Have you tried adding a space in between? I expect \n* Q3) For the one-hot encodings, how do you deal with numeric inputs?\n* Q4) In the results in Table 3, why did you expect longer captions to have larger error? I would have had the opposite intuition as shorter captions are more likely to miss important attributes."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* S1: Provides extensive empirical evaluation of the proposed system\n* S2: The presentation is clear."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the use of natural language captions as inputs to surrogate models that simulate \"complex energy systems\". These natural language captions describe the features of the system being simulated. The task is to predict a timeseries of some variable of interest that depends on these features and some other independent variable that is fed as a time series. The paper introduces an architecture that fuses the textual description with the time series data to achieve this goal.\nThe viability of the approach and its robustness to out-of-distribution perturbations are validated with a relatively extensive empirical evaluation, including different ablations of the system (such as one-hot encoding of the features, or no features), variations on the caption lengths or replacing words with synonyms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* W1: The LightGBM baseline is underspecified. This baseline is the only one that stands as a reference point that is not an ablated version of the proposed model. However, as I understand it, LightGBM is a framework but not necessarily a model, so I don't really to which model this system is really being compared against.\n* W2: Not very clear what is the added value of the proposal of using LLMs against simply using a template-based natural language description.\n* W3: Despite the system is motivated on the potential intuitiveness of language interfaces to non-experts, no particular study is conducted on that front."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Augmenting surrogate models of complex systems with natural language interfaces."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024syscaps,\ntitle={SysCaps: Language Interfaces for Simulation Surrogates of Complex Systems},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1R5BcYS8EC},\nnote={under review}\n}"
},
"abstract": {
"value": "Surrogate models are used to predict the behavior of complex energy systems that are too expensive to simulate with traditional numerical methods. \nOur work introduces the use of language descriptions, which we call \"system captions\" or SysCaps, to interface with such surrogates. \nWe argue that interacting with surrogates through text, particularly natural language, makes these models more accessible for both experts and non-experts.\nWe introduce a lightweight multimodal text and timeseries regression model and a training pipeline that uses large language models (LLMs) to synthesize high-quality captions from simulation metadata. \nOur experiments on two real-world simulators of buildings and wind farms show that our SysCaps-augmented surrogates have better accuracy on held-out systems than traditional methods while enjoying new generalization abilities, such as handling semantically related descriptions of the same test system.\nAdditional experiments also highlight the potential of SysCaps to unlock language-driven design space exploration and to regularize training through prompt augmentation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"surrogate models",
"multimodal text and timeseries models",
"language-interfaced regression"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/de2706eabd3bc68eb7bfdb2d8584b607b5992841.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/07f94bc944995380d0b4303a019399392a63b03c.zip"
},
"title": {
"value": "SysCaps: Language Interfaces for Simulation Surrogates of Complex Systems"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1RC3KtP1jT | Archilles' Heel in Semi-open LLMs: Hiding Bottom against Recovery Attacks | main | Active | Semi-open Model;Closed-sourcing Approach | other topics in machine learning (i.e., none of the above) | 3;5;6;8 | 5;3;4;4 | 2;3;3;3 | 1;2;2;4 | 2;2;3;3 | 5.5 | 4 | 2.75 | 2.25 | 2.5 | -0.392232 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper introduces SCARA, a method that selectively closes only the bottom layers of semi-open large language models (LLMs) to enhance customizability while maintaining resilience against recovery attacks.\n2. It provides a theoretical analysis of the existence of a transition layer in transformer-based models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces SCARA, a selective closed-sourcing approach for designing semi-open large language models (LLMs) that enhance customizability while maintaining resilience against recovery attacks. The authors develop an algorithm that strategically keeps only a few bottom layers closed-source, ensuring model flexibility without compromising security. They theoretically demonstrate a \"transition layer\" within deep transformer models, showing that recovery errors in layers before this point lead to recovery failure, while errors in later layers have a limited impact. SCARA estimates the optimal number of layers to hide using a novel metric based on initial recovery loss, bypassing the need for fine-tuning. The method is applied to five models ranging from 1.3B to 70B parameters, tested across six downstream tasks and sixteen recovery benchmarks. Results show that SCARA improves downstream performance while requiring over ten times fewer closed-source parameters than baselines, achieving improvements, especially in domain-specific tasks like Financial, with 30% higher performance on Llama2-70B. SCARA maintains comparable resilience against recovery attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Unclear Motivation for Semi-Open Models:** The market is dominated by closed-source models and fully open-source models. If customization needs are already addressed by existing fine-tuning services provided for closed-source models (e.g., API-based fine-tuning on closed models like GPT-4), it would be insightful to understand the specific motivations and advantages driving the development of a semi-open architecture. \n2. **The threat model is not clear.** The threat model concerning recovery attacks on semi-open LLMs is insufficiently defined. The paper does not clearly specify the adversary's capabilities, such as the extent of access to the model's architecture, parameters, or outputs. This lack of clarity makes it challenging to assess the effectiveness of the proposed SCARA method in mitigating such threats.\n3. **Insufficient Details on SCARA's Implementation:** The description of SCARA's methodology is vague, particularly regarding the fine-tuning-free metric used to determine which layers to keep closed-source. The paper does not provide a clear explanation of how this metric is calculated, the data required, or the computational resources involved etc.\n4. **Evaluation minors:** While the authors present experimental results across multiple models and tasks, the evaluation lacks depth. The paper does not offer a comprehensive analysis of SCARA's performance compared to existing methods, nor does it explore potential trade-offs between customizability and security."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "n/a"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The evaluation seems comprehensive. It was easy to follow the problem setup and the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a method for identifying which layers in the model that, if recovered by an adversary in an iterative layer recovery process, will make subsequent layers easier to recover."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I question the threat model that the authors are introducing. I don't think there's any chance of stealing an LLM through any method that exists no matter how semi-closed/semi-open it is. The only methods that have been proposed that can do something like this specifically target the embedding layer.\n\nIt seems like the main insight of the paper, that hiding the earlier layers in the model is more impactful than hiding later layers because if an attacker wants to recover the model they'll pay an accuracy error scaling in the depth of the model past the layer they haven't yet recovered, is trivial. If you asked someone who had never heard anything about this literature of hiding layers, whether they should hide the first block or the last block, I'm certain everyone would choose to hide the first block. There's plenty of work already showing that later layers are more or less redundant and don't learn anything new. This is because attention heads in block N have the ability to learn Nth order interactions, but for N > 2, these interactions typically don't get learned and the attention heads just degenerate [1].\n\nThe actual implementation of the method is not sophisticated. It just takes this straightforward insight and turns it into a metric. But that metric is itself just \"what happened if I closed the first N layers of the model\" and then returns the first one that passes some threshold of difficulty.\n\nIt doesn't seem like the evaluation is really fair. The authors evaluate against SEM. But SEM just wants to recover the embedding and the authors are trying to show what happens if they hide the early parts of the network. This seems like an indication that this isn't a particularly realistic threat model.\n\n[1] https://arxiv.org/abs/2404.08634"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can the authors explain and clarify why the semi-open models are practically relevant?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written, and the presentation is clear and clean. \n\n2. The approach is well motivated --- it first starts from an empirical observation that closed-sourcing the first two layers offers significantly greater resilience than the last two, while the model shares similar customizability under the two cases. This implies that close sourcing the later layers may be the optimal solution for keeping the resistance to recovery attacks. The paper subsequently further formally establishes this finding with rigorous theoretical analysis, showing the existence of a transition layer such that even small recovery errors in layers before this layer can lead to recovery failure. This also intuitively makes sense --- when the attacker is asked to recover the earlier layers as opposed to the later layers, the errors in early layers will be amplified by later layers. This asymmetry is natural. \n\n3. Based on this insight, the paper also proposes an effective approach for the selectively closed-sourcing model to defend against recovery attacks. The experiment results support the effectiveness of the approach. \n\nOverall, the paper is nicely done."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the problem of how to design semi-open models (i.e., models whose weights are only partially open-sourced) that can simultaneously also be resilient to recovery attacks. The paper finds that a transition layer exists, such that even small recovery errors in layers before this layer can lead to recovery failure. Building on these insights, the paper proposes an approach called SCARA that keeps only a few bottom layers as closed-source. With this new approach, the paper shows that it is possible to improve downstream customization performance while maintaining similar resilience."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One outstanding weakness of this paper is that the threat model considered may not be practically relevant. It seems the authors coined the semi-open model's scenario, that seems not really exist in the real world. \n\nCurrently, the most common setups are either open-source or closed-source. For close-source context, when developers do want their users to customize their models, the standard practice is to deploy fine-tuning APIs (e.g., https://platform.openai.com/docs/guides/fine-tuning) rather than partially open-source a part of the model. It seems to make no sense to only open-source the first few layers of a model to enable customization. Because the customization anyway still needs the involvement of the closed-source developers --- so they can fine-tune and connect the first few layers and the later layers to really deploy the model. Then, why not just close-source all weights and directly ask the users to upload custom data, and then the closed-source developers fine-tune and deploy the model for the users, like what is being done in fine-tuning APIs? \n\nI worry that if not developers will do the partial open-sourcing like the authors of this paper consider, then the problem itself may not hold."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Is the \"attack datasets\" mentioned in Figure 2 the same as the \"recovery datasets\" discussed later in the paper?\n2. Could you clarify the formula and the loss function used for RD(I)?\n3. Could you clarify how fully-closed and semi-open models differ in practice?\n4. Could you explain more about the distinctions between FT-all, FT-closed, and SEM in Section 5.1?\n5. Can the row and column headers in the tables be made clearer by avoiding abbreviations?\n6. Could you explain more about potential future work that could be included in the paper?\n7. Could the authors clarify what the value 0.00 represents in Table 1 and Table 2?\n8. The authors discussed the impact of datasets of different lengths on the effectiveness of SCARA in the experimental section, but these datasets did not appear in the setup. Could the authors provide a detailed introduction to the composition of these datasets?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-written, featuring thorough experiments and clear explanations from both theoretical and empirical perspectives.\n2. The overall layout is visually pleasing, and the figures are diverse and effectively illustrate the content, aiding readers' understanding.\n3. It proposes a straightforward and effective method for constructing a semi-open model by hiding only a few layers, while achieving baseline-level resilience and customization performance comparable to the fully open setting.\n4. The insight regarding the existence of a transition layer contributing to resilience is particularly compelling, with a detailed theoretical explanation that enhances understanding.\n5. The authors provide comprehensive empirical validation across multiple architectures and benchmarks, covering models of various sizes (1.3B-70B), and testing customizability and recovery performance on several benchmarks. They also conducted experiments on recovery datasets of different sizes, demonstrating sufficient experimental rigor.\n6. The authors proposed additional enhancements to the original baseline, strengthening the protection of baseline SAP and highlighting SCARA’s effectiveness in preserving resilience.\n7. The authors empirically validated their theory of the transition layer’s existence and pointed out that smaller models exhibit transition layers earlier than larger models.\n8. The authors clearly identified the limitations of the SCARA method, noting its ineffectiveness on small models (OPT-350M) and its inability to defend against other adversary attacks.\n9. The proposed SCARA algorithm has clear practical applications, offering a viable solution for enhancing the customizability of semi-open models while preserving comparable resilience."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents SCARA, a method to identify decoder layers to hide in decoder-only LLMs. The authors provide theoretical proof of transition layers, where early errors are amplified in subsequent layers. They introduce RD to assess post-recovery performance when specific layers are hidden. Experiments show that SCARA, by hiding only a few layers, achieves a recovery ratio close to baselines while maintaining customization performance similar to fully open approach. The experiments also confirm the existence of transition layers in the models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. One mathematical notation in Section 4.2 is unclear. The loss function $\\ell$ for RD(I) is not specified, making it confusing.\n2. Figures 1 and 2 have minimal captions and small text, reducing readability and limiting their ability to convey insights."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "In this paper, we propose SCARA, a fine-tuning-free approach that reduces the number of closed-source layers to enhance customizability while preserving resilience to recovery attacks in semi-open models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024archilles,\ntitle={Archilles' Heel in Semi-open {LLM}s: Hiding Bottom against Recovery Attacks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1RC3KtP1jT},\nnote={under review}\n}"
},
"abstract": {
"value": "Closed-source large language models deliver strong performance but have limited downstream customizability. Semi-open models, combining both closed-source and public layers, were introduced to improve customizability. However, parameters in the closed-source layers are found vulnerable to recovery attacks. In this paper, we explore the design of semi-open models with fewer closed-source layers, aiming to increase customizability while ensuring resilience to recovery attacks. We analyze the contribution of closed-source layer to the overall resilience and theoretically prove that in a deep transformer-based model, there exists a transition layer such that even small recovery errors in layers before this layer can lead to recovery failure. \nBuilding on this, we propose \\textbf{SCARA}, a novel approach that keeps only a few bottom layer as closed-source. SCARA employs a fine-tuning-free metric to estimate the maximum number of layers that can be publicly accessible for customization. We apply it to five models (1.3B to 70B parameters) to construct semi-open models, validating their customizability on six downstream tasks and assessing their resilience against various recovery attacks on sixteen benchmarks. We compare SCARA to baselines and observe that it generally improves downstream customization performance and offers similar resilience with over \\textbf{10} times fewer closed-source parameters. We empirically investigate the existence of transition layers, analyze the effectiveness of our scheme and finally discuss its limitations."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Semi-open Model",
"Closed-sourcing Approach"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/99bb7d6b7317cda572f7e45ea1e714b48c0ac876.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Archilles' Heel in Semi-open LLMs: Hiding Bottom against Recovery Attacks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1RNSYEEpwi | Stealing User Prompts from Mixture-of-Experts Models | main | Active | Mixture-of-Experts;privacy;ml-security;information security;buffer overflow;leakage;exploit;token dropping | other topics in machine learning (i.e., none of the above) | 3;5;5 | 5;4;3 | 2;3;2 | 2;3;3 | 3;4;3 | 4.333333 | 4 | 2.333333 | 2.666667 | 3.333333 | -0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you please further discuss about how man-in-the-middle attacks can help to inject the proposed attack in LLM server?\n- Could you discuss what will happen if there are two tokens sharing the same routing path."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The study introduces a novel security concern by identifying a previously unexamined vulnerability in LLM service.\n- Experimental results demonstrate the effectiveness of the proposed attack, showing that it reliably extracts user prompts under the specified conditions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores a novel security vulnerability in Mixture-of-Experts (MoE) language models, specifically focusing on the risk of prompt leakage through the architecture's routing mechanisms.The proposed attack, an adversary manipulates expert buffers within an MoE model to extract a victim's prompt by observing how token routing and dropping affect model outputs. The study reveals that an attacker can reconstruct a user’s prompt by exploiting token-dropping patterns and guessing tokens sequentially."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The threat model assumes an attacker with significant control over the LLM server, which may not be practical in real-world settings. Additionally, token-dropping techniques are not widely used in recent LLM inference architectures, limiting the relevance of the attack to current models.\n- The attack is computationally intensive, requiring up to 1,000 tokens for each token being extracted, which may restrict its feasibility in large-scale applications.\n- The explanation of the proposed method for Recovering Target Token Routing Path lacks clarity. It is unclear how the method handles cases where two tokens share the same routing path. If two tokens follow identical paths, this could complicate the attack, as distinguishing between them based on routing alone may not be difficult."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "n/a"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Attacking deployments of MoEs is a pretty interesting idea, and stealing the data of other users who are using the inference API is sufficiently high impact that this paper may have some impact even if the threat model and attack are unrealistic / impractical.\n\nThe diagrams explained the attack quite well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper shows that if someone else's data is placed in the same batch as your data for many consecutive queries, and the model is a 2-layer MoE whose weights you have access to, and you can locally compute a forward pass on the MoE and the KV Cache, and that MoE is using cross-batch Expert-Choice Routing, and the router weights are heavily quantized in order to induce ties, and the MoE is running PyTorch TopK, then you can brute-force (with exponential query complexity) some of the tokens of the other person's query."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors acknowledge upfront that their threat model is unrealistic (line 135). \nI will add some additional reasons why the threat model is unrealistic;\n\n- Not all deployed MoEs use Expert Choice Routing. In Expert Choice Routing, typically some tokens may be dropped if they don't go to any expert because that expert is filled. Expert Choice Routing can be very bad in some settings. The alternative is Dropless MoEs, which can be implemented in a couple different ways. I'm not sure which MoEs that are deployed actually use Expert Choice Routing, but if I were to go to an inference provider and ask for Deepseek MoE or DBRX, they would be serving a Dropless MoE. So some kind of table showing \"here are the deployed MoEs that use Expert Choice Routing\" would be useful. Of course this is closed information in many places, so I don't expect the authors to try and figure out whether Gemini or GPT-4 is using this, but you can at least go to all the inference providers serving open-weights MoEs (because you need open weights MoEs for this attack to work anyways) and see which ones use expert-choice routing. As far as I can tell, it is none of them, but I would want to see this table.\n- Not all deployed MoEs would use the tie-handling mechanism that the attack relies on exploiting. The only way for a tie to occur is if two tokens have the exact same output from the router. But this does not happen even if those two tokens are actually the same, because over the course of an MoE with multiple layers, the token representations get mixed with other tokens via Attention. The authors note that they quantise the router weights to 5 bits to induce ties (line 377) but even if the router weights were quantised, you would not get ties in a multilayer model. I routed some tokens from Fineweb-CC-2014-03-04 through Mixtral 8x7B, saved the router scores, and found that there are basically no ties. If the authors could release their code that would be helpful to reproduce this tie-breaking behavior, even if it does require quantization.\n- Some deployed MoEs would use jitter, which also totally messes up the proposed algorithm. Jitter just tries to sample from a slightly perturbed distribution so now we are even less likely to see ties.\n- Not all deployed MoEs do not use the first-come-first-serve tie-breaking CUDA topk function that the authors assume they are using. For example, xAI's Grok and Gemini do not use this function. This is because the PyTorch TopK function on CUDA is absurdly memory inefficient. TRT, vLLM, etc. use other CUDA kernels for Topk that do not have this issue. Ex, NVIDIA's FasterTransformer uses this https://github.com/NVIDIA/FasterTransformer/blob/main/src/fastertransformer/kernels/sampling_topk_kernels.cu. \n- Deployed MoEs typically do not have open weights. Even if we consider an inference provider running Pytorch on CUDA to serve an open-source MoE like Deepseekv2 such as Fireworks, the inference provider's KV Cache compression mechanism (anyone serving a model is not storing the full KV Cache, they are doing something like MLA, or sparse KV Cache, or quantized, or pruned, etc etc etc) is not publicly known. And this is required for the adversary to run this attack, because the adversary needs the KV Cache locally in the same way that the model is being inferenced on the cloud.\n- If the adversary can run an open-weights MoE like Deepseek-v2 locally for many thousands of queries, they are operating with a massive amount of computational power. 
Furthermore, this attack needs the victim's data to also be present in the same batch for many queries.\n\nThe authors do not spend enough time proposing defenses; the paragraph starting on (line 484) should be expanded into a subsection. The authors had some ~30 lines remaining so it's not a matter of space constraints.\n\nThe main text of the paper is pretty much incomplete. There are too many places where the reader is forced to scroll to the Appendix and read a chunk of text in order to follow the paper. This is unfortunately becoming a common practice, but I dislike it nonetheless.\n\nThe confidence intervals seem way too large in Figure 4. It looks like all these attacks could just have 0 success rate. And this is even in the super unrealistic setting where the canaries are taking on a few values, the vocab is <10k (Gemma has vocab 256k), the model is artificially altered to make the attack work at all.\n\nThe attack is pretty unsophisticated. If I had to draw a comparison, I would say that this is like the brute-force binary search attacked proposed to extract logprobs by exploiting logit bias as proposed by Morris 2023. It's straightforward and if you don't care about efficiency it's fine, but it's not going to make an attack paper on its own. What can the community learn from the development from this attack? It has no practical implications, so there should be something about the design that is clever or inspires new ideas.\n\nThere are some minor typos (line 496) (line 837) (line 342) (line 819) (line 820)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "The paper seems premature in its current form, but I would advocate for it if a meaningful subset of the weaknesses were addressed. It would require a much more substantial evaluation, though."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Data-dependent computations are vulnerable to side-channel leakage: designers of ML systems need to learn this lesson.\n- Cool exploitation of an interesting side channel in a particular MoE architecture (+ the top-k implementation in CUDA).\n- History of computer security suggests that even seemingly impractical side channels can turn into exploitable vulnerabilities (with lots of additional research, of course)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In MoE models, individual experts process tokens in priority order; tokens with the same priority are processed in the arrival order (because of a CUDA quirk). If the buffer is almost full, the second-to-arrive token is dropped. This is a side channel: if an adversary can control the relative placement of their own and someone else's tokens in a batch, they can first fill the buffer with high-priority tokens, then switch the order between their own token and someone else's unknown token, and observe the resulting routings. If the routing is the same for both tokens, this means the adversary's token is the same as the unknown token, revealing the value of the latter. With repeated application, this can be leveraged into an extraction attack."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- As acknowledged in the submission, the setting is unrealistic. The adversary needs to (1) control the placement of target inputs in the batch, (2) repeatedly submit different orderings of the same batch to the model, and (3) observe its internal routing choices. Man-in-the-middle (mention in 3.1) might be able to do (1) -- although not entirely clear how -- but not (2) or (3). I cannot think of any setting where (2) and (3) are available to the adversary, yet the adversary is unable to directly observe inputs into the model.\n\n- Evaluation is rudimentary, just a single Mixtral model. I understand this is a proof-of-concept, but seems a little skimpy for a conference submission.\n\n- Just a single routing strategy is investigated. I do believe that other routing strategies may be similarly vulnerable, but again, seems skimpy for a conference submission.\n\n- Defences are not really explored in any depth. Randomizing top-k and/or token dropping (or other aspects) should mitigate the attack, but would it have a noticeable impact on performance / quality of the results?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a novel attack against MoE architectures that exploits Token Dropping in expert-choice routing to steal user prompts."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024stealing,\ntitle={Stealing User Prompts from Mixture-of-Experts Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1RNSYEEpwi},\nnote={under review}\n}"
},
"abstract": {
"value": "Mixture of Expert (MoE) models improve the efficiency and scalability of dense language models by \\emph{routing} each token to a small number of experts in each layer of the model. In this paper, we show how an adversary that can arrange for their queries to appear in the same batch of examples as a victim's queries can exploit expert-choice routing to the full disclosure of a victim's prompt. We successfully demonstrate the effectiveness of this attack on a two-layered Mixtral model. Our results show that we can extract the entire prompt using $\\mathcal{O}(\\text{Vocabulary size} \\times \\text{prompt length}^2)$ queries or a maximum of 100 queries per token in the setting we consider. Our work is the first of its kind data reconstruction attack that originates from in a flaw in the model architecture, as opposed to the model parameterization."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Mixture-of-Experts",
"privacy",
"ml-security",
"information security",
"buffer overflow",
"leakage",
"exploit",
"token dropping"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ecfe53d6753f9a5f82e04db814951775cfe3e75b.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Stealing User Prompts from Mixture-of-Experts Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1S7kpbfgq9 | Normalized Space Alignment: A Versatile Metric for Representation Analysis | main | Active | Deep Learning;Representation Learning;Local Intrinsic Dimensionality;Similarity Metric;Dimensionality Reduction;Interpretability | interpretability and explainable AI | 3;3;5;8 | 4;4;4;3 | 2;2;3;3 | 2;3;2;3 | 2;3;3;3 | 4.75 | 3.75 | 2.5 | 2.5 | 2.75 | -0.916949 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) How does NSA perform in extremely high-dimensional spaces where Euclidean distance is known to be problematic? Are there alternative distance metrics that could be integrated into NSA?\n2) How sensitive is NSA to parameter settings, and what are the best practices for tuning it in different applications (e.g., adversarial robustness vs. dimensionality reduction)?\n3) Given the versatility of NSA, do you envision any specific areas where its application would be limited or challenging?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) NSA can be used both as a loss function and a similarity metric across different applications \n2) NSA is designed to work efficiently in large-scale applications with a quadratic complexity that is better than some existing methods \n3) It is also effective in preserving structural characteristics and identifying vulnerabilities in neural networks, even under adversarial attacks\n4) the paper provides a thorough analysis with multiple experiments and comparisons to other methods like RTD, CKA validating NSA's effectiveness"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Normalized Space Alignment(NSA), a new manifold analysis technique designed to compare neural network representations; NSA compares pairwise distances between point clouds from the same data source but with different dimensionalities. NSA is proposed as both a differentiable loss function and a similarity metric, and it is computationally efficient. The paper demonstrated the NSA's versatility in representation analysis, structure-preserving tasks, and robustness testing against adversarial attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The reliance on Euclidean distance as a primary metric may limit performance in high dimensional spaces due to curse of dimensionality \n2) NSA is versatile but may not require careful tuning and modifications to work effectively in specific scenarios\n3) The limitations of NSA are not explored beyond high-dimensionality issue"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "__Figure 1:__ assuming the values being plotted are means, how many tests were performed? What are their standard deviations? This is especially important since the data is being subsampled for the computations, and training seeds will produce different networks.\n\n__Figure 3:__ same questions here with regards to whether the curves represent means. Stating how many repetitions and the standard deviations is important to understand the significance of these curves.\n\n__Lines 324--328__: I had trouble understanding the sensitivity test. Although the notion of testing robustness to the removal of principal components makes perfect sense to me, it was not clear how the plots in Fig. 1 demonstrated, e.g., that \"NSA is more sensitive to the removal of high variance PCs compared to RTD and CKA\". Moreover, I'm not sure how to interpret the values for \"detection threshold\", especially since the values in the main text are different than those in the figure. What are the \"baselines\" mentioned in the plots' legends?\n\n__Line 435:__ \"the latent embeddings are then tested on their ability to predict the existence of links between nodes\". How exactly are they tested on this? Are they used as inputs in another GCN? This wasn't clear to me.\n\nIn lines 197, 199, 272, surely the authors mean dissimilarity, not similarity (since they compute distances)? There are more instances throughout the paper where these metrics are called \"similarities\".\n\n__Line 497:__ \"We used a GCN along with four __robust__ GNN variants...\". Why robust? Robust to what exactly?\n\n__Line 500:__ \"by introducing perturbations ranging from 5% to 25%\". These percentages are w.r.t. what exactly? And what is the nature of these perturbations? Removing/changing links, nodes, or both?\n\n__Minor points:__\n\n- I found no pointer to Figure 3 in the main text.\n\n- Line 493: \"we applied NSA in the context of GNNs, but the method __can__ be equally effective in analyzing the robustness of other architectures\". I recommend changing __can__ to \"might\", or \"could\", unless the authors have actually tested this empirically.\n\n- Line 503: I recommend saying \"the __original__ graph\" instead of \"the _clean_ graph\"."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- NSA introduces a new approach for representation alignment with applications in dimensionality reduction, structure-preserving autoencoders, and robustness analysis, highlighting its adaptability to multiple tasks.\n\n- Its quadratic computational complexity improves on the cubic complexity of alternative metrics like RTD, making it suitable for large datasets and mini-batch processing in training.\n\n- NSA is evaluated across multiple tasks and datasets, and compared with established metrics (CKA and RTD)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Normalized Space Alignment (NSA), a novel metric to analyze neural network representations. NSA compares two point clouds (representing data structures within neural networks) by preserving both global and local structures, regardless of differing dimensionalities. It can be used to preserve representation structures in tasks such as suitable for diverse tasks such as dimensionality reduction, adversarial robustness assessment, and cross-layer representation analysis. NSA’s main advantage is its ability to efficiently preserve global and local structure across different dimensional spaces. The authors showcase NSA’s versatility by applying it across various tasks, demonstrating its computational efficiency and robustness, particularly in mini-batch processing."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "__Figure 1__: all 3 plots in the left column are the same.\n\n__Specificity assumptions:__ In section 4.1.1, the authors expect that the same layers of two networks trained on the same data and differing only in the initial weights should have high structural similarity. However, the actual layer in which similar features are learned may vary, particularly in ResNets (due to their residual connections). This is a well-known phenomenon: residual connections allow networks to adapt flexibly, enabling the model to skip certain layers or distribute features across them depending on initial weights and learning dynamics. See:\n\n[1] Veit, A., Wilber, M. J., & Belongie, S. (2016). Residual networks behave like ensembles of relatively shallow networks. Advances in neural information processing systems, 29.\n\nThus, instead of showing a single example result in Figure 1, the authors would make a stronger case if they (i) reported the average across multiple instances of the same networks; and (ii) used multiple architectures and datasets.\n\n__Equation 6__: for the units of LNSA to make sense, you should take the inverse again, after computing the mean of the inverses. That's what MacKay and Ghahramani actually do -- notice the -1 power in the formulas for their estimators ($\\hat{m}^{-1}$). You can also check this on the source code they provided. In their Fig. 2, the best curves are: \"the inverse of the average of the inverse m-hats (orange), and our preferred maximum likelihood estimator (which is just equation (7) again.\"\n\nHaving said that, I don't think you should compute the individual residuals using the Lid inverses. The residuals should keep their units of \"dimension\". How do the authors justify this?\n\n__GNSA:__ I see a problem with this dissimilarity in that it can produce large values if the geometry of the manifold changes but the topology stays the same. A classic example where this would happen is for the \"swiss roll\" dataset (https://scikit-learn.org/1.5/auto_examples/manifold/plot_swissroll.html): the GNSA value comparing the original roll and its unrolled counterpart would be very large since, although the first several nearest neighbors of a point $i$ would not change their distances much, points that are far away (following along the spiral) would become considerably farther after flattening the roll. I believe this would lead to large GNSA even though the two manifolds are topologically identical. Have the authors considered this? If they agree, I suggest a more thorough discussion on strengths and weaknesses of GNSA.\n\n__Lack of ground truth__: I believe this study would greatly benefit from using toy datasets that provide some ground truth to verify the efficacy of the method proposed. E.g., Gaussian clusters of various dimensionalities, the 1-D spiral, the 2-D S-curve, a plane with a hole; these have been classically used in the manifold learning literature. Here are a couple examples of recent papers that use interesting toy datasets as ground truth for comparing low-dimensional embeddings and dimensionality:\n\n[2] Wang, Yingfan, et al. (2021) \"Understanding how dimension reduction tools work: an empirical approach to deciphering t-SNE, UMAP, TriMAP, and PaCMAP for data visualization.\" Journal of Machine Learning Research 22.201: 1-73.\n\n[3] Dyballa, L., & Zucker, S. W. (2023). IAN: Iterated Adaptive Neighborhoods for manifold learning and dimensionality estimation. 
Neural Computation, 35(3), 453-524.\n\nIt would be informative to have some simple, intuitive examples that could be directly visualized in 2 or 3 dimensions. Such datasets could be perturbed in ways that _did_ change their topology and structural relationships vs. others that _did not_, the goal being to check whether the values produced by LNSA and GNSA would reflect the truth."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Why an access to the source code was not provided for reproducibility check purposes?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-The paper addresses an interesting problem of constructing a reasonable measure of dissimilarity of two data representations of the same dataset. \n\n-Different experiments are described in order to empirically validate the method, although no source code is provided making reproducibility check difficult."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a method (NSA) for comparing two data representations of the same dataset. NSA is a weighted sum with some tuned weights of GNSA which essentially compares the pairwise euclidian distances in the two representations of the same points, and of LNSA which is a local dissimilarity measure, based on k-NN graph. Experiments are described in order to empirically validate the expected properties of the method, although no access to the source code is provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.Dependence on Euclidean distance in high-dimensional spaces. NSA uses essentially the comparison of Euclidean distances as its measure of structural similarity. This choice can be suboptimal in high-dimensional spaces due to the \"curse of dimensionality,\" which makes Euclidean distances less informative and can lead to unreliable similarity measurements.\n\n2.An access to the source code is not provided making the paper results reproducibility check difficult.\n\n3.Lack of universality without parameter tuning. NSA's performance across different tasks relies heavily on parameter tuning and specific integration with other loss functions. The choice of k in the construction of k-nn graph is essential in the definition of LNSA. The weights in front of the local and global parts of NSA clearly lead to drastically different results depending on their values. \n\n4.No thorough guidance is provided for the choices and tuning of these hyperparameters. For example how to do 'appropriately adjusting the number of nearest neighbors considered in each mini-batch' on line 366 remains unspecified.\n\n5.High computational complexity for large datasets. Despite claims of efficiency, NSA has a quadratic computational complexity concerning the number of data points, \\( O(N^2 D + kND) \\). This can become prohibitively expensive as the dataset size grows.\n\n6.The method's focus on structural preservation might make it less effective in scenarios where functional similarity is more relevant, limiting its applicability.\n\n7.Absence of interpretability mechanisms for practical applications. Although NSA provides a structural similarity measure, it lacks interpretability features that could make its outputs more useful in real-world applications. For instance, it does not offer insights into which specific features or dimensions contribute most to the observed structural discrepancies."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Specific comments:\n* Doesn’t the computation of GNSA depend on the specific order of the point clouds? For example, comparing a_i and b_i only make sense if these below to the same datapoint, otherwise you’re comparing random elements. \n* In Sec 4.1 you claim that “a good structural similarity index should show high similarity between architecturally identical neural networks with different weight initializations”. However, different initializations produce different models and there is no reason to assume that these should have the same structures. Also, in Figure 1 all the plots on the left are exactly the same. If this is not a typo, then I also don’t believe that the experiment shows what it is claimed. Additionally, the results here should be compared to the classical methods for comparing representations like Alaa et al, How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models, Kynkäänniemi et al, Improved precision and recall metric for assessing generative models, NeurIPS 2019, Poklukar et al, Delaunay Component Analysis, ICLR 2022, Khrulkov et al, Geometry score: A method for comparing generative adversarial networks, ICML 2018, etc, which are also missing from the related work.\n* Please add details in 4.2.1. on how GSNA is even calculated. What is X and what is Y? \n* In Sec 4.3., I do not understand why an AE is used on top of the produced embeddings. In my view, a baseline should be the classification accuracy on the embeddings of the GCN or alternatively of a NSA-GCN trained model but not of a frozen GCN model with an AE attached to it. Also, as mentioned above, this experiment lacks comparison to any SOTA graph based methods which makes the applicability questionable."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Good background section on LID.\n* Good applicability of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce the Normalized Space Alignment (NSA) method for comparing two point clouds, which is based on comparing pairwise distances. The NSA consists of the local NSA, defined through the Local Intrinsic Dimensionality, and the global NSA, defined through Representational Similarity Matrices. The final NSA is defined as the weighted sum of global and local NSA. The experimental section includes experiments where NSA is used to analyze representations, as a loss in AE and for detection of adversarial attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "General concerns:\n* Despite a wide range of applications presented in the experimental section, the paper lacks comparison to relevant existing methods to really showcase the efficiency. For example, in the link prediction and adversarial attacks experiments, the method should be compared to the relevant baselines from the respective fields to be able to fairly judge the efficiency of the method.\n* Datasets used in the experiments are small and basic, and the generalization of the method is questionable. How does the method behave for large sets and more complicated cases?\n* No ablation studies are provided. For example, the method relies on the k nearest neighbors selection and I believe that the choice of k does influence the results. No experiments are provided on the robustness of k, neither is mentioned what k is actually used in the experiments. There is also no info on the balancing parameters l and g, and no ablation studies on the influence of these.\n* The definition of GNSA depends on the choice of the origin. For example, given two point clouds X and Y, the translated point clouds will have the same structure but not the same GSNA score which is problematic. Of course one could resolve this with selecting a different origin but that is not feasible in practice. \n* Figures are not well readable."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce NSA, a robust method for quantifying discrepancy between point clouds in different ambient spaces, offering improved performance and computational efficiency across a wide variety of tasks."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024normalized,\ntitle={Normalized Space Alignment: A Versatile Metric for Representation Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1S7kpbfgq9},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce a manifold analysis technique for neural network representations. Normalized Space Alignment (NSA) compares pairwise distances between two point clouds derived from the same source and having the same size, while potentially possessing differing dimensionalities. NSA can act as both an analytical tool and a differentiable loss function, providing a robust means of comparing and aligning representations across different layers and models. It satisfies the criteria necessary for both a similarity metric and a neural network loss function. We showcase NSA's versatility by illustrating its utility as a representation space analysis metric, a structure-preserving loss function, and a robustness analysis tool. NSA is not only computationally efficient but it can also approximate the global structural discrepancy during mini-batching, facilitating its use in a wide variety of neural network training paradigms."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Deep Learning",
"Representation Learning",
"Local Intrinsic Dimensionality",
"Similarity Metric",
"Dimensionality Reduction",
"Interpretability"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5db9fb2dba03f5583eafda2d7afcee4e12ca9d0e.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Normalized Space Alignment: A Versatile Metric for Representation Analysis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1S8ndwxMts | Towards Robust Evaluation of Protein Generative Models: A Systematic Analysis of Metrics | main | Active | evaluation metrics;protein;protein generative models | applications to physical sciences (physics, chemistry, biology, etc.) | 1;3;3;5 | 5;5;4;4 | 1;1;2;2 | 1;1;2;3 | 1;2;3;2 | 3 | 4.5 | 1.5 | 1.75 | 2 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Would it be possible to discuss the sensitivity-robustness trade-off more systematically & quantitatively? For instance, does it make sense to interpret the cluster elimination experiment as a strong perturbation (that a sensitive distribution similarity metric should detect) and intra-cluster diversity reduction as a weak perturbation (that distribution similarity metrics should be more or less robust to)?\n\n- Why is the CD diversity metric not compared to simpler alternatives like average pairwise distances between generated sequences?\n\n- I would like to see some reference data for run times of different metrics to support statements about the reliability-efficiency trade-off.\n\n- Section 4.4.3 should also discuss the _Density_ and _IPR Precision_ results.\n\n- The conclusion states \"We demonstrate that combining quality, diversity, and distributional similarity metrics provides the most robust assessment of generated proteins\". As far as I can tell all experiments evaluate metrics in isolation and therefore do not really support this statement. Could you please elaborate a bit more? \n\n- Figure 8 is missing.\n\n### Minor comments\n\n- line 37: missing/broken reference\n- line 43: reference seems to be incorrectly formatted\n- quotation marks should be corrected in some places (e.g. lines 73 and 81)\n- the norm in the equation in line 89 is not specified, maybe a more general notation for a distance function should be used here\n- line 103: I am not sure if I agree with the definition of _diversity_ using memorization of the training data. Samples from the training set can still be diverse. Doesn't this definition apply to _novelty_?\n- In many places, it would be preferable to change the formatting of citations (use `\\citep` instead of `\\citet`).\n- line 157: why did the notation change? Before, small letters were used to denote the folding function and inverse folding function, respectively. \n- line 214: indices are not correctly formatted\n- Figures 2 - 5: error bars should be defined in the figure legends\n- Figure 4 should be referenced in the main text.\n- Section 4.2.1: how many data points were used to calculate the correlation values? Is the raw data shown somewhere?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "### Importance\n\nUnifying benchmarking attempts for protein generative models is an extremely important open challenge. \nStudying various common evaluation metrics systematically and in a controlled setup is impactful because it can inform future developments of new methods and allow researchers to benchmark their models in a more convincing way.\nThe problem is motivated nicely and grounded in related works.\n\n### Breadth\n\nThe paper addresses three dimensions of generative model evaluation: **quality**, **diversity**, and **distributional similarity**.\nIt furthermore identifies at least two axes along which evaluation metrics should be assessed: **robustness vs sensitivity** and **reliability vs computational efficiency**.\nTogether these cover most practically relevant aspects of model evaluation in this space."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies several common evaluation metrics for protein sequence generative models covering quality, diversity and distributional similarity of samples. \nThe authors present controlled experiments and derive guidelines for a robust assessment of the performance of protein generative models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "### Clarity\n\nThe presented topic is very complex and the authors' attempt to illuminate the design space for these metrics from various angles is commendable.\nHowever, the clarity of the presentation of their results can be improved. \nThe paper introduces a lot of metrics and desirable properties thereof but the arguments are sometimes difficult to follow in the current state.\nIt could be useful to restructure the experimental results section so that each subsection (quality, diversity and distribution similarity) systematically analyses different available metrics regarding their (1) robustness-sensitivity trade-off, and (2) reliability-efficiency trade-off.\nI would define a clear, quantitative criterion for each and follow an identical structure in each subsection (quality, diversity and distribution similarity).\nThe current discussion sometimes mixes empirically supported findings with intuition-derived arguments.\n\nIn the background section, it is confusing that most of the time the paper discusses three key axes of model performance: quality, diversity and distribution similarity,\nbut in Section 2.2 it talks about an alternative set of objectives: fidelity, diversity, novelty. \nSimilarly, the paper introduces \"Interpretability\" in Section 2.3 but does not discuss this aspect in the Results section.\nI would recommend to be more consistent throughout the paper (both in terms of wording and semantics).\n\nFurthermore, the paper should define the scope of the work clearly. It only covers generative models for amino acids _sequences_ as opposed to backbone _structures_.\nThe discussion about self-consistency in Section 2.1 seems unnecessarily detailed given the concept is only used once later on (scPerplexity metric). \nWhen I arrived at this point in the manuscript I was under the impression that the paper discusses both sequence and structure generative models because self-consistency is primarily used in the evaluation of _structure_ design methods (e.g. [1]).\n\n\n\n### Analysis of diversity metrics\n\nThe analysis of diversity metrics (Section 4.3) is extremely short, and it is unclear whether the presented data in Figure 3 provides information about the _sensitivity_ or _robustness_ of the Cluster Density metric.\nThe absence of a comparison with alternative approaches additionally makes it hard to interpret the results.\n\n\n### Support every claim with empirical data\n\nA systematic evaluation of metrics should always provide empirical evidence to back up the presented conclusions. \nHere, this is missing in some cases. For instance,\n- Looking at Figure 9 I would argue there are still notable differences between AlphaFold2 and ESMFold. Rather than just assessing their correlation, it would be useful to understand how sensitive and robust each method is to sample quality differences.\n- The paper states that simple diversity metrics lack discriminative power but it does not discuss any examples in the analysis in Section 4.3.\n- The paper also mentions intrinsically disordered regions as a potential stumbling block for the pLDDT metric. While this assumption is reasonable, it is still possible that pLDDT has better discriminative power than alternative metrics in those cases, but only empirical data can provide an answer to this question.\n- Finally, statements about computational efficiency are never quantified. 
Providing concrete run times would be an important piece of information that allows readers to get an idea about the reliability-efficiency trade-off.\n\n\n\n### References\n\n[1] Yim, Jason, et al. \"SE (3) diffusion model with application to protein backbone generation.\" arXiv preprint arXiv:2302.02277 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In line 307 you say that ScPerplexity has the highest sensitivity to sample size. Firstly, this isn’t clear from the plots as it doesn’t seem to change any more than plDDT. Additionally, why does sample size matter if the ordering with respect to noise is always correct? In practice, we can fix the sample size and correctly rank different generative models.\n\n- You say one of the important aspects of a metric is its interpretability but this isn’t considered later when evaluating the metrics. Are these metrics interpretable and are there differences in interpretability between them?\n\n- You say that a good generated protein should be structurally stable. Are any of the metrics actually capturing this?\n\nminor comments\n\n- Line 37 missing reference\n\n- Quotation marks are always the wrong way up when used. For example, line 46.\n\n- Some references are missing their journal. For example, “Generating novel, designable, and diverse protein structures\nby equivariantly diffusing oriented residue clouds” was at ICML 2023."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper's background section and motivation is extremely strong. The need for reliable evaluation metrics for protein generation is convincing and some of the metrics used in the literature are clearly outlined.\n\n- The controlled experiments are well thought out and provide some useful information about the quality of the metrics.\n\n- The authors perform a rigorous set of experiments and the provided practical recommendations could be useful to the community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper analyses several metrics for protein generation on synthetic datasets with controlled properties in order to see their strengths, limitations, and practical applicability. The paper highlights that some metrics are dependant on sample size and that computationally efficient metrics can be just as effective."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The main weakness comes from evaluating the metrics in such a controlled and synthetic setting. The quality metrics are evaluated on proteins which the models (such as ESMFold) are trained on. In this case, introducing more noise is shown to cause the metrics to get worse. In practice though, we generate unseen proteins and it is not clear whether these metrics generalize to proteins they are not trained on. Additionally, it is not clear from the paper whether these metrics correlate with anything experimentally. Therefore, the evaluation of these metrics in the given scenario doesn’t seem to offer much practical insight on the usefulness of these metrics. \n\n- The authors compare different metrics and explain that there should be a tradeoff with computational efficiency. However, it is not clear how the methods actually differ in this regard. You mention a few times that scPerplexity is expensive to calculate as it involves two models but there is no figure or timing comparison given. How much slower is it and is it impractical? You also say that your proposed metrics allow for rapid evaluation. Again, how long are these proposed metrics taking and what does the term “rapid” quantitatively mean? Although computational efficiency is mentioned a lot throughout the work, and seems to be important for selecting metrics, I have no indication from the paper on how these methods actually differ in this regard and why I should use a method over another practically. To improve this, the authors could include a table or figure comparing the runtime of each metric on a standardized dataset, perhaps across different sample sizes."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Many of my questions are embedded in the weaknesses. Some more minor questions.\n\n* Line 37. What is the missing reference \"?\"\n\n* Line 88. Self-consistency is mentioned but this equation is never used. Why is this given and where is it actually used?\n\n* Line 158. The equation $-\\log p(S|G(F(S))$ is confusing. If $G(F(S))$ is the inverse folding prediction then what does it mean to conditioned $p(S| \\cdot)$ on this?\n\n* Line 230. What protein generation tasks are considered?\n\n* Line 433. How are \"state-of-the-art protein generative models\" re-trained?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "* Evaluation of robustness to several protein sequence generation metrics."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The present work attempts to provide insight into various metrics of generative models over protein sequences. They evaluate several metrics used in prior works such as predicted local distance difference test, perplexity, pseudo perplexity, self-consistency perplexity, cluster density, and multiple techniques for distributional similarity metrics. On a curated dataset, they measure robustness to random perturbations, sensitivity to sample size, and use of different protein language models to compute the metrics. Some recommendations are provided at the end of which models to use and sample size for robust evaluation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The present work contains no technical novelty or new results. Therefore the analysis and presentation needs to be of high quality. Unfortunately the presentation quality is low and the insights are novel enough for acceptance to ICLR.\n\n* First, the work claims to evaluate protein generative models but proceeds to ignore or miss protein structure generative models such as RFdiffusion [1], Chroma [2]. The work only attempts to evaluate protein sequences without consideration for generated structures. Considering the popularity and success of [1, 2], this is a major omission.\n\n* There are **no benchmarks** of generative models in this work. The experiments are conducted on artificial perturbations of known sequences and on a curated set of sequences from 5 protein families. The insights in this work cannot be believed and are of little use unless the metrics are rigorously evaluated on state-of-the-art protein generative models.\n\n* Metrics are only useful if they correspond to success in downstream applications. The metrics used in [1, 2] are accepted because they are known to correlate (albeit weakly) with experimental success [3]. None of the metrics utilized in this work are associated with success in downstream applications. Indeed we care about how well the samples capture distributions but they are auxiliary metrics and are not the primary metrics in high impact protein generative model publications.\n\n* The noise perturbations are artificial. How do we know if randomly mutating 5-30% of the sequence is a failure mode or common occurrence in existing protein generative models?\n\n* Novelty is mentioned as a important consideration but no novelty metrics are presented or discussed.\n\n* Only using 5 protein families is far too small of an evaluation set. Line 234 states the experiments are done on \"real-world generated data\" but what is actually being generated here?\n\n* Section 4.3 on diversity metric analysis is weak. The trend in Figure 3 is the expected behavior of the 50% and 95% sequence similarity threshold. There is no new insight here.\n\n* I'm not sure what new insight is provided from the noise. Figures 2 and 3 show more noise leads to all the metrics becoming worse. This is expected but there is no indication of how this transfers to commonly used protein generative models. Do protein generative models exhibit such behavior?\n\n* Section 4.4 is also weak on insights. The graphs are expected by changing the noise and RBG kernel width. It would seem to me that different downstream applications would call for different parameters and robustness. Instead, the claims here are too general and unclear how useful they are for specific downstream applications such as binder design.\n\n* I would have liked to see a ranking of protein sequence generative models such as ESM2, ProGen, T5 with the metrics provided. \n\nOverall I do not believe this work provides a careful and rigorous study of evaluating protein generative models. I recommend the authors to rethink the experiments and hypotheses they wish to test.\n\n[1] https://www.nature.com/articles/s41586-023-06415-8\n[2] https://www.nature.com/articles/s41586-023-06728-8\n[3] https://www.nature.com/articles/s41467-023-38328-5"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Similar to above weaknesses:\n1. How do those metrics behave for meaningful diverse sequences, that were note generated with random noising?\n2. Are the randomly noised sequences foldable? Have you tried to calculate the TM-score between the original sequence and the forward folded structure of a 30% noised sequence?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Originality: A systematic analysis of metric variability with protein sequence diversity is a good idea and I would recommend the authors to build on it, but incorporate several improvements: Instead of uniform probabilities for mutations it might be more meaningful to use PAM or BLOSUM matrices. These are the transition probabilities from one amino acid residue to another one (based on similar hydrophobicity, charge, polarity, size etc.). \nSignificance: The authors correctly emphasize that there is no gold-standard in the field of generative protein models on what constitutes a \"good\" protein. The topic is worth being addressed, although I don't think that this work provides a significant contribution."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work evaluates various quality, diversity and distributional similarity metrics for their ability to co-vary with random synthetic perturbations on protein amino acid sequences. The authors also evaluate differences in plddt scores for forward folded noised sequences. \nThe stated aim is to provide a systematic overview of how those metrics change with sequence randomness (noise), number of protein samples and model size to advance the evaluation of protein generative models. However, I think this works falls short of this aim. The proposed metrics are omitting the non-bijective nature of the protein structure-sequence relationship, the authors do not compare with well-established quality metrics in the field of generative protein design (e.g. self-consistency folding as a quality metric for structural fidelity, or edit-distance as a function of distributional similarity). The authors only present results on synthetically perturbed sequences, where residues are mutated with equal probability to assess perplexity and diversity. This is very different from the case of generative modeling, where diverse sequences are generated (non-randomly!) via auto-regressive sampling, any-order sampling or temperature sampling. I would recommend to generate sequences with these models, and synthetically perturbed sequences with BLOSUM or PAN transition matrices.\nI don't think that machine-learning motivated metrics, such as perplexity, or earth mover's distance are practically useful for the field of generative protein design. Useful metrics should capture if the model generates protein sequences or structures, that fold, are stable, exhibit a specific function.\nI have several concerns about the methodology, biological soundness and presentation of this work as I will outline concretely below."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Background section:\n1. The motivation of diversity in the absence of training data is confusing. The authors should discuss that the structure-sequence relationship is no bijective and a one-to-many mapping problem. There are very many sequences that fold into the same or similar structures. The true diversity of this solution space is not known, given the small size of structural data in the PDB. This diversity is likely a complex function of protein size (there are very many diverse sequences that all fold into the same alpha helix peptide), packing (internal residues less diverse, versus external residues etc. \n2. The authors mention structural stability as a measure of a \"good\" protein, but do not evaluate this property in this work, this is confusing.\n3. I find the mathematical notations (especially under \"self-consistency\" overly complicated (given they are not being used) anywhere else. \nSection 3, Metrics:\n1. Fidelity metrics: The fidelity metrics are not addressing structural fidelity in terms of structural similarity (e.g. TM-score or RMSE) in the case of forward folding. Or self-consistency TM in the case of inverse folding. \n2. In general I would recommend the authors to split metrics for different generative model types and approaches, e.g. sequence-based (e.g. LLMs), inverse folding (structure-to-sequence)\n3. In section 2.3. the authors state that metrics should be interpretable. I don't find perplexity, or pseudo-perplexity very interpretable. plddt is interpretable. I would recommend adopting metrics like edit distance or structure consistency (e.g. TMscore). I think reporting perplexity in a protein LLM is still valuable, but it's not particularly novel or insightful. I am not sure if self-consistency perplexity: -logp(S|G(F(S)) makes sense given that this protein inverse folding (G) is a one-to-many problem with an unknown and variable number of diverse solutions. And as the authors state -- the folding and inverse folding model bias might further complicate this metric.\n4. Section 3.2: The diversity defintion of cluster density at 50% and 95% is interesting, but shoudl be compared to more commonly adopted diversity metrics in the field, such as edit distance and pairwise distances. \n\nSection 4: Experiments:\n1. I like the idea of a systematic perturbation of amio acid sequences, but random noise (uniform transition probabilities) is unrealistic. I would recommend using BLOSOM or PAN matrices. Additionally to the synthetic perturbations I am missing an actual application to generative models. I would recommend using different inverse folding models, e.g. ESMIF or ProteinMPNN and generating diverse sequences with random decoding orders and temperature sampling. Currently the authors perturb the sequence in a random way which likely turns them easily into garbage (ie they would never exist in nature and fold). \n2. The random perturbations do not create meaningful biological diversity in the sequences and simply degrade their quality. As such Figures 2 and 4 are stating obvious trends: The more noise, the worse the quality/fidelity metrics. \n\nPresentation: \n1. Please review citation guidelines, current citation style is reader unfriendly.\n2. Please mark supplementary figures as such (e.g. Figure 9). \n3. Figure 8 is missing"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Systematic analysis of protein generative model evaluation metrics, revealing key insights for improved assessment practices."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Robust Evaluation of Protein Generative Models: A Systematic Analysis of Metrics},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1S8ndwxMts},\nnote={under review}\n}"
},
"abstract": {
"value": "The rapid advancement of protein generative models necessitates robust and principled methods for their evaluation and comparison. As new models of increasing complexity continue to emerge, it is crucial to ensure that the metrics used for assessment are well-understood and reliable. In this work, we conduct a systematic investigation of commonly used metrics for evaluating sequence protein generative models, focusing on quality, diversity, and distributional similarity. We examine the behavior of these metrics under various conditions, including synthetic perturbations and real-world generative models. Our analysis explores different design choices, parameters, and underlying representation models, revealing how these factors influence metric performance. We identify several challenges in applying these metrics, such as sample size dependencies, sensitivity to data distribution shifts, and computational efficiency trade-offs. By testing metrics on both synthetic datasets with controlled properties and outputs from state-of-the-art protein generators, we provide insights into each metric's strengths, limitations, and practical applicability. Based on our findings, we offer a set of practical recommendations for researchers to consider when evaluating protein generative models, aiming to contribute to the development of more robust and meaningful evaluation practices in the field of protein design."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"evaluation metrics",
"protein",
"protein generative models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4c5362bc8faed739b28d0472155a5007589305a0.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Towards Robust Evaluation of Protein Generative Models: A Systematic Analysis of Metrics"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1STZCCI8mn | CNS-Bench: Benchmarking Model Robustness Under Continuous Nuisance Shifts | main | Active | Generative models;benchmarking;computer vision | datasets and benchmarks | 3;5;5;5;6 | 3;4;4;5;3 | 2;3;3;3;3 | 1;2;2;3;3 | 1;3;3;2;3 | 4.8 | 3.8 | 2.8 | 2.2 | 2.4 | 0.218218 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "There is a lack of clarity on how the dataset handles potentially unrealistic or counter-intuitive scenarios, such as cars driving on water. How are these cases addressed? A discussion on the handling of such edge cases would improve the comprehensiveness of the dataset."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well-structured, and the proposed CNS-Bench benchmark is simple yet effective for evaluating model robustness. The authors provide comprehensive discussions along three key dimensions—architecture, number of parameters, and pre-training paradigm—giving clear insights into the paper's findings.\n- In addition to the proposed dataset for benchmarking model robustness, the authors present an annotated dataset to benchmark OOC) filtering strategies. They introduce a novel filtering mechanism that significantly improves filter accuracy, which is a notable contribution.\n- The application of LoRA sliders to compute shift levels continuously is a particularly innovative and inspiring approach. This adds an interesting methodological contribution to the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces CNS-Bench, which uses generative models for benchmarking robustness across diverse continuous nuisance shifts by applying LoRA adapters to diffusion models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The novelty of the insights presented in this paper could be more compelling. For example, in Figure 6, are there any underlying reasons or mechanisms that could provide a deeper understanding of the results? It would be beneficial to explore these further to add depth to the conclusions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Self-supervised pre-training: Why is DINOv1 using linear probing compared with other models? This seems to create an unfair comparison, as linear probing may not fully reflect the robustness of self-supervised models relative to other models in the evaluation. Could you clarify the rationale behind this comparison approach?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Instead of measuring average accuracy drop across all nuisance shifts, the authors consider evaluating model performance at specific levels of nuisance shifts, enabling a detailed analysis of failure points in vision models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel benchmark dataset for evaluating model robustness by systematically controlling individual nuisance factors. The dataset allows for a precise assessment of the failure points of vision models, based on the severity of these controlled nuisance factors. The authors find that model rankings vary with changes in shift severity, and model architecture is a key factor in robustness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Unclear contributions: The contributions listed in the paper seem overlapping. The distinctions among them are insufficiently clear. Notably, the third contribution is not visible in the main text. While the paper claims 14 distinct nuisance shifts as a key contribution, it lacks an explanation or rationale for selecting these specific shifts. Since this is a foundational aspect of the contribution, detailed descriptions should be provided in the main text, not relegated to the appendix.\n\n2. Ambiguity in benchmark superiority: The authors assert that their benchmark outperforms existing benchmarks for evaluating model robustness by incorporating nuisance shifts across multiple severity levels. However, earlier works by Hendrycks & Dietterich (2018) and Kar et al. (2022) already support multi-severity analysis for vision model failure points. Thus, the authors should clarify how their benchmark framework distinctly advances beyond these existing approaches.\n\n3. Inconsistent statements on model robustness: In line 451, the authors claim that transformers are more robust than CNNs, yet this statement seems contradicted by Fig. 6a, where ConvNext outperforms ViT and DeiT but performs slightly worse than DeiT3. This inconsistency suggests that CNNs may not always be less robust than transformers, and the statement should be re-evaluated or clarified.\n\n4. Validation of realistic nuisance shifts: While the authors argue that the benchmark includes realistic nuisance shifts, the realism of these diffusion-generated images is not substantiated. Proper validation, such as human assessment, would enhance the credibility of this claim.\n\n5. Readability of figures: The font size in several figures is too small, which detracts from readability. Increasing the font size would improve clarity for readers."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. I don’t understand the failure point concept in Section 3.2. This section may contain many symbols that are confusing, such as: $X_n(S), X(s_n),X_n(S_n)$ , and the subscripts in $s_n, c_n$.\n2. In Section 4, the paper mentions \"activate the LoRA adapters with the selected scale for the last 75% of the noise steps\". Could you provide some theoretical or empirical evidence to justify the rationale for adjusting LoRA for the last 75% of the noise steps?\n3. In Section 4.2, the paper mentions \"fine-tune ResNet-50 with our data and show more than 10% gains on ImageNet-R\". Was the data used for fine-tuning the entire CNS-Bench or a specific style within it (such as a style closely resembling ImageNet-R distribution)? In Table 3, I noticed that after fine-tuning, the model accuracy on IN/val decreased by 2.04%. I believe the results in Table 3 do not fully support the claim regarding \"the realism of generated images.”\n4. For experiment about the relation between ID and OOD accuracy in section 4.3,please further elaborate on the rationale for using the slope of the linear fit between ID and OOD accuracies and the significance represented by this slope. Why not use the linear correlation coefficient?Furthermore, please provide a more detailed analysis of the results in Figure 7, particularly elucidating the impact of the strength of nuisance on the relation between ID and OOD accuracy.\n5. Figures 6a and 6b evaluate the accuracy drop. I do not think this metric rational because the model size and performance on the ImageNet validation set may not necessarily align. This mismatch could result in accuracy drops of different models that are not directly comparable. Please provide the model's parameter count and the model accuracy on IN/val for reference or other evidence to claim rationality the accuracy drop.\n6. Figures 4 and 5 assess using accuracy, while Figure 6 employs accuracy drop. Could you standardize to a single metric for consistency throughout the text?\n7. ImageNet-C also contains images with nuisances of different strengths. What are the distinctions between CNS-Bench and ImageNet-C?\n8. Could you give some experiment details of the claim “the alignment for one given seed increases in 73% for scales s > 0 for all shifts in our benchmark” in Section 3.2?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-motivated. Understanding the robustness of models to nuisances of varying degrees is crucial.\n2. It is reasonable to generate images with gradual and continuous nuisance using Stable Diffusion and LoRA adapters.\n3. The experimental section evaluates various classifiers, providing a better understanding of the robustness capabilities of these classifiers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a benchmark, CNS-Bench, composed of synthetic images with gradual and continuous nuisance, to evaluate the robustness of classifiers in detail. The images are generated using Stable Diffusion, incorporating a wide range of individual nuisance shifts with continuous severities through LoRA adapters to diffusion models. This paper provides a detailed evaluation and analysis of various classifiers' behavior on CNS-Bench, emphasizing the advantage of utilizing generative models for benchmarking robustness across diverse continuous nuisance shifts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- If it should be a benchmark, people will want to know: who won? What's the score that I need to beat in order to be SOTA? Tables with an overall score would help. Table 1 is a step in the right direction but it's not immediately clear which model is best and which score (Accuracy? Accuracy drop?) is the benchmark score.\n- Why were those 14 \"nuisances\" chosen and not others, why 14 and not, say, 50? (Not saying that the authors should do this but asking out of curiosity)\n- What's the robustness of a failure point to random variation?\n- Is performance (accuracy) always a monotonous function of the LoRA slider strength? Are there instances when that's not the case? If so, what does it mean if there are images beyond the failure point that are again correctly recognized?\n- line 43: \"such approaches are not scalable\" - why not? If one takes a large dataset and applies cheap corruptions like the ones from ImageNet-C, should that be considered less scaleable?\n- What's the computational cost of generating the dataset?\n\nMISC:\n- Figure 7: instead of re-using colors that were used in e.g. Figure 6 with a different association, I'd recommend using different colors here to avoid confusion - ideally a sequential color palette, with the legend sorted by scale not arbitrarily. Also, label 1.5 appears twice which is probably not intentional."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Authors promise to release the dataset under a permissive licences (CC-BY-4.0); code is available from supplementary material via Google Drive.\n- I like the approach of measuring a precise failure point. In psychophysics, a related concept is called the threshold of a model - see, e.g., Figure 4 of this 2017 paper on \"object recognition when the signal gets weaker\": https://arxiv.org/pdf/1706.06969. A threshold is calculated across many samples; the failure point described in this article, in contrast, is the point where an individual test sample is no longer correctly recognized.\n- The technical approach is a nice, simple and creative application of generative diffusion models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces CNS-Bench, a benchmark for evaluating the robustness of image classifiers to what the authors call \"continuous nuisance shifts\" - essentially OOD distortions like no snow -> snow along a continuous axis. CNS-Bench uses LoRA adapters applied to diffusion models to generate images with a wide range of nuisance shifts at various severities. While in principle continuous shifts are possible, most of the article nevertheless focuses on a fixed number of shifts (5 severity levels). The authors then conducted an evaluation of few different visual image classifier families on CNS-Bench.\n\nThe paper's contributions are defined, by the authors, as follows:\n1. The creation of CNS-Bench & evaluation of models\n2. The collection of an annotated dataset for filtering (note: this is a process that becomes necessary since the approach used in the paper may alter the class label, therefore this essentially fixes an issue introduced by the approach)\n3. The publication of 14 nuisance shifts at five severity levels. (note: this is essentially part of #1)"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Nuisance shifts affect information that's not related to the nuisance concept. In Figure 22 and 23, some nuisance shifts don't achieve the desired result; e.g. the variation \"in rain\" (Fig 23f) alters/blurs the background without occluding the object through rain. **Some nuisance shifts introduce confounds**, e.g. \"in dust\" not only adds dust but also removes half of the people in the image and changes a person's shirt color from red to black. As a consequence, failures cannot be attributed to the nuisance concept itself.\n\n\n2. The approach is based on generative models, thereby introducing a **real vs. synthetic distribution shift** that may further influence results. A discussion - better yet: an analysis - of this likely confound is recommended. Without this, I'm hesitant to share the author's hope that (\"this benchmark can encourage the community to adopt generated images for evaluating the robustness of vision models.\").\n\n\n3. **The paper's main claim to fame remains a bit unclear to me**, and that's my most important concern. At the same time, this might be the biggest opportunity for improvement and clarification from which future readers might benefit. The authors propose a variety of options to choose from, but I'm not convinced (yet - happy to be convinced of the opposite). Specifically:\n- Is it about continuous shifts? If so, this can be achieved with parametric distortions too (e.g. Gaussian noise with noise strength as a continuous parameter). Furthermore, the authors end up narrowing it down to 5 severity levels anyways, which is roughly in line with the 5-8 levels from related work.\n- Is it about a large number of distortions? Probably not, since the dataset's 14 distortions are in the same ballpark as ImageNet-C (15 test + 4 validation distortions) or model-vs-human (17 distortions).\n- Is it about testing a variety of models? While a number of model families are investigated (CLIP, ConvNext, Deit, Dino, MAE, MOCO, ResNet, ViT) that's also similar to previous investigations, some of which tested a broader variety.\n- Is it about identifying failure cases? If so, when is it important to know about a specific failure case (as opposed to a model's threshold, averaged across many samples)?\n- Is it about the connection between architecture and robustness? The observation that architecture influences model robustness has been reported (extensively) by a range of previous work.\n- Is it about precise control? While strength can be controlled, the effect introduced by the nuisance can't be controlled to a level where no confounds would be introduced, as seen in Figures 22 & 23.\n- Is it about scalability? If so, why is training separate LoRA adapters for each ImageNet class and shift more scalable than existing approaches?\n- Is it about real-world nuisance shifts? If so, see concern #2 on the real vs. synthetic distribution shift.\n\nI recommend that the authors clearly state and justify what they believe is the primary novel contribution (\"claim to fame\") of their work, and how it advances the field beyond existing benchmarks and approaches."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "## Feedback\n* I would have liked to take a closer look at the images in the benchmark, but could not unzip the provided benchmark.zip file, apparently because the file was corrupted. I don't think it's an issue on my end, could you look into this?\n* I think the writing, especially in section 3.2 where the method is explained, could be improved quite a bit, also to render the paper more self-sustained - I found myself having to look up the referenced papers, even though the relevant parts could have been summarized in a few sentences. For example, how exactly the scale of the sliders works cannot be understood from this paper alone, one needs to read Gandikota et al. 2023.\n* The legend of figure 7 is broken. The label for scale 1.5 appears twice and the values are not ordered.\n* Minor point, but in figures 9 and 10 it might be better to share the y-axis for more comparability between the plots.\n\n## Questions\n1. In line 197, shouldn’t $\\theta$ have both $c_t$ and $c_+$ in the subscript, like $\\theta_{c_t, c_+}$?\n2. In figure 3, how is it possible that the difference of two cosine similarities, which should be <= 2, achieves values of up to 7.5?\n3. In line 423, you write that an explanation for the abrupt change in failure rate of the cartoon style is the ImageNet class “comic book”, but I don’t see why images would be mis-classified as comic books more for scale 1.5 than for scale 2 and higher. \n4. Do you have any way of asserting that the severity levels of different shifts and different classes are actually calibrated, i.e. that scale 2.5 of an elephant in snow is the same level of corruption as a scale 2.5 zebra in fog? Since you are training different LoRAs for the different classes, I’m not sure if this will always be the case, but it might be desirable. (I guess one could calibrate this using the CLIP-distances…?)\n5. In principle, could you combine different distribution shifts at the same time? E.g., modify the same image to both exhibit fog and snow?\n\n## Final Assessment\nOverall, I’m a bit skeptical of the relevance of the contribution of the paper (see above) and could not check how the images in the benchmark look like, qualitatively. I propose to reject for now, but I'm curious to hear the perspectives of the other reviewers and would be willing to increase my score if they deem this work relevant, or if the authors can motivate the need for continuous shifts better."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper seems technically sound and successfully combines different existing methods to achieve the stated goal of generating a benchmark of continuous distribution shifts. I appreciate the thorough analysis and sanity-checks, such as creating a large OOC-detection dataset to make sure that the proposed filtering mechanism works. The writing is mostly clear, although some questions remain (see below). As far as I can tell (although I'm not too familiar with generative models) the authors cite the relevant related work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel benchmark for evaluating the OOD robustness of vision models. The core idea is to build a system that can generate images from the training distribution, but with natural distribution shifts (like snow) applied *with continuous severity levels*, so that one can smoothly increase the degree of corruption. The authors achieve this by leveraging diffusion models conditioned on the training distribution in combination with LoRA adapters. The resulting benchmark does therefore not only yield scalar accuracy values, but performance curves for different models, relating the severity of the corruption to the drop in classification performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One fundamental weakness of the paper is the lack of motivation for why a robustness evaluation at different levels is important. I’m aware that ImageNet-C also offers different corruption levels, and I could maybe be convinced that having access to these levels is useful, but the analyses conducted here do not really achieve this: Why is it interesting at which severity level a model fails, especially given that it’s unclear whether the corruption severity levels across different shifts and different classes are properly calibrated against each other (see my question 4)? Of course, having a method of subjecting any training set to a natural distribution shift is great, but the Dataset Interface paper already achieves this. So the overall contribution of the paper is effectively “only” interpolating between uncorrupted images and fully corrupted images, but I wonder why that matters, unless the ordering of models drastically changes across the different levels. That does not seem to be the case overall, according to figure 6a, and I wonder whether the differences in figure 6b and 6c (where values are averaged over fewer trials) are statistically stable. Adding confidence intervals to these plots would help convince me that this is indeed a robust finding. But even if this were the case: If I had a dataset with a painting-corruption, how would I know what the corruption-scale of my dataset is, to then select the best model at that level? And do I really care about the minuscule differences between models (<< 1% accuracy delta) at scale 1, or would I simply select the model that does best at the maximum scale?\nWhile I appreciate that the authors included the failure cases in figure 16, they do make me wonder how reliably the method really works, and whether this unreliability might explain the weird curves in figure 6c. It would be good to also add confidence intervals to figure 3, to give a better idea of the quality of the generated images (but see my question 2 about the y-axis values of figure 3)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Benchmarking vision models using LoRA adapters for realizing continuous nuisance shifts"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024cnsbench,\ntitle={{CNS}-Bench: Benchmarking Model Robustness Under Continuous Nuisance Shifts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1STZCCI8mn},\nnote={under review}\n}"
},
"abstract": {
"value": "One important challenge in evaluating the robustness of vision models is to control individual nuisance factors independently.\nWhile some simple synthetic corruptions are commonly applied to existing models, they do not fully capture all realistic distribution shifts of real-world images. Moreover, existing generative robustness benchmarks only perform manipulations on individual nuisance shifts in one step. We demonstrate the importance of gradual and continuous nuisance shifts, as they allow evaluating the sensitivity and failure points of vision models. In particular, we introduce CNS-Bench, a Continuous Nuisance Shift Benchmark for image classifier robustness. CNS-Bench allows generating a wide range of individual nuisance shifts in continuous severities by applying LoRA adapters to diffusion models. We perform a comprehensive large-scale study to evaluate the robustness of classifiers under various nuisance shifts. Through carefully-designed comparisons and analyses, we reveal the following observations: 1) Evaluating the model performance on a continuous scale allows the identification of model failure points and a more nuanced understanding of model robustness. 2) Model rankings can change for varying severities of a shift, which is not captured when averaging the performance over all severities. 3) The architecture has a strong influence on the robustness and the failure points of a model. \nOverall, our work demonstrated the advantage of using generative models for benchmarking robustness across diverse continuous nuisance shifts in a controlled and scalable manner."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Generative models",
"benchmarking",
"computer vision"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/abb44afc7e02a4cd503643f1ccc01b2ae67eaea4.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "CNS-Bench: Benchmarking Model Robustness Under Continuous Nuisance Shifts"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1SYUKPeM12 | Aligned Better, Listen Better For Audio-Visual Large Language Models | main | Active | Audio-Visual Learning;Multimodal Large Language Models | foundation or frontier models, including LLMs | 3;5;5;8 | 4;4;5;4 | 2;3;3;3 | 2;2;3;3 | 2;3;2;3 | 5.25 | 4.25 | 2.75 | 2.5 | 2.5 | -0.080845 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We sincerely thank the Area Chairs and all reviewers (pdCN, JTeq, gDzU, f89q) for dedicating their valuable time to reviewing our paper and providing constructive feedback. We believe these comments are essential for enhancing the overall quality of this paper.\n\nWe are delighted that the reviewers appreciate our paper from various perspectives, including its **resolution of a meaningful problem** [pdCN, f89q], **sound and innovative methodology** [gDzU, f89q], **significant contribution to the community with our dataset** [pdCN, gDzU, JTeq, f89q], **sound and reasonable dataset curation** [pdCN, JTeq], **superior results on multiple benchmarks** [pdCN, JTeq, gDzU], and **comprehensive and verified ablations** [JTeq, gDzU]. These positive assessments truly motivate us.\n\nDuring the discussion phase, we will strive to clarify and integrate the feedback received. While we require some time to understand the feedback thoroughly and prepare for the discussions, we will deliver well-prepared responses as swiftly as possible to address each reviewer's concerns with thorough analysis and responses. Thank you.\n\n\n(This response is being continuously updated during the discussion phase, in order to clarify our response to all the reviewers... )"
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "General Response (continuously updating...)"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What are the effects of using pre-trained models to create a pipeline for various captioning and QA creation steps? What if any of the models hallucinated? Was there some kind of quality check done?\n\n2. I am intrigued by some of the examples of the dataset that has absolute time information such as \"What time does the train whistle blow?\" and the model providing an answer. Do these models understand the concept of time and seconds?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The problem addressed by the authors is an important one. Most video-related datasets and models indeed ignore the information present in the audio almost completely. Hence this work is an important one to fill this research gap.\n\n2. The proposed model architecture achieves better results on existing video QA datasets and the ablation studies show the importance of spatial and temporal alignment layers introduced in the architecture.\n\n3. The dataset is large-scale and can be significant to the community to advance audio-visual understanding. \n\n4. The usefulness of the dataset is shown by comparing video llama trained with and without the AVU dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new audio-visual LLM model called Dolphin and a new audio-visual dataset AVU. The authors discuss the existing problem with video LLMs, which is, how they often ignore the audio information present in the video and only attend to the visual information while understanding videos. The authors claim that the models do not learn any alignment between the audio and visual information in the video, which is the reason for this behavior of video LLMs. Hence the authors design the Dolphin model, which aligns the audio and visual information both spatially and temporally before feeding them to the LLM. Specifically, they use multi-scale vision transformers to extract visual features at different scales and apply cross-attention with audio features at each scale. These\nfeatures are again merged with the global visual representation using another cross-attention. Then temporal cross-attention is applied between these features bi-directionally to obtain visual-contextualized audio tokens and audio-contextualized visual tokens. This is fed to the LLM for the downstream task.\n\nSince most existing video datasets focus mainly on visual content, the authors have introduced a new audio-visual dataset by using existing unimodal datasets and leveraging LLMs to generate modality-specific question-answer pairs. They generate different types of questions and answers based on metadata correspondence of the audio and visual inputs by prompting LLMs. The experiments are designed to test the new model architecture on existing video QA datasets and other unimodal tasks such as captioning and classification."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The entire pipeline in the dataset generation is LLM-based. There are no discussions about the efficiency of the pipeline, hallucination effects, or error propagation in the dataset creation process.\n\n2. The authors claim in a lot of places in the paper that there is a significant reduction in hallucinations using their model and dataset. They design an AVU-negatives subset to train the model to say no to some questions. However, the experiments are not designed to validate this claim in any manner. While Dolphin may outperform certain models, it is unclear whether the hallucination is reduced as there are no metrics or definitions to evaluate this. It is a tall claim without any experimental results to say that hallucinations are reduced. \n\n3. Minor comment: Clotho-V2 which was is used as a dataset for training is not referenced."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Considering the selected audio and visual encoders are far smaller than the LLM (ViT-L and AST), why not directly train these encoders to achieve better performance since the 7b/13b LLM is also involved in training in the instruction-tuning stage? \n\n2. Why select ViT-L, AST, and Vicuna as encoders and decoders when tons of more powerful alternatives are available (such as SigLIP, InternViT for image, Beats, Whisper encoder for audio, and Qwen, llama3, mistral for LLM)? Is there any ablation?\n\n3. Why not use some video encoders to perform visual encoding both for the Dolphin model and the data curation pipeline? Is there any ablation? \n\n4. For the temporal integration, how does the proposed bi-directional cross-attention block 'enhance the audio-visual information exploitation of AV-LLM' as the author claims? What I see is just an attention block to perform cross-modal interaction for global features, yet how to model the temporal relationships, is positional encoding or RoPE being used? How to inject the so-called 'temporal integration information' into the dual-path framework? The descriptions are too vague and need to be improved. \n\n5. What is the connector between the audio/visual encoder and LLM decoder? Q-former or linear projection? Is there any ablation? \n\n6. How does the model tackle uni-modal tasks since the fine-grained alignment seems to be mandatory? For videos that missing the auditory part, will a modality mask perform on the input of the LLM decoder and the cross-modality integration module (both spatial and temporal)? For videos with semantic-irrelevant auditory parts, how does the model resist the potential negative information brought by the auditory modality? \n\n7. For the experiments, the authors only compare the proposed method with audio-visual LLMs, how much is the performance gap between the proposed AV-LLM with some uni-modal models?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The curation process of the AVU dataset looks sound and reasonable. The authors integrate several open-source and commercial LLMs into the data pipeline to generate high-quality audio-visual captions and divide the dataset into several parts based on audio-visual consistency. The community now is facing a shortage of a large-scale audio-visual instruction-tuning dataset. The proposed dataset, along with the data curation procedure, will help the following research in the related field.\n\n2. The results show the proposed method outperforms several previous audio-visual LLM on audio, video, and audio-visual benchmarks. Apart from caption and question-answering, it also excels in some closed and open-ended audio tasks, which makes the framework more applicable.\n\n3. The ablations are comprehensive. Each component is well-ablated and clearly verified. The authors also conduct numerical analysis on the impact of the proposed dataset."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose an audio-visual LLM Dolphin, which consists of a multi-scale adapter for spatial alignment and an interleaved merging module for temporal alignment. A large-scale audio-visual caption&instruction-tuning dataset AVU is also proposed, including 5.2M video-audio-qa tuples. Training on the proposed dataset, the proposed method achieves state-of-the-art performance on several audio-visual, audio, and video benchmarks compared with existing audio-visual LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method is trivial and questionable. The entire framework consists of three parts: audio and visual encoders with injected multi-scale uni-modal and multi-modal adapters, a cross-modal attention block to perform temporal integration, and a Vicuna as the decoder. The audio-visual adapters and the cross-modal attention have been proposed and utilized in many previous works[1-3], and the pipeline of training an audio-visual LLM is also not novel. The data pipeline for generating audio-visual captions is also been utilized by several previous methods[4-5]. Besides, the description of the model architecture is vague, many details are missing and the rationale of some model designs is unclear. Please see the question part below in detail. \n\n2. Speech is neglected in the model architecture designs. Since the audio feature is semantic and high-level, while the speech feature is low-level and dense, it is a common way to model the audio and speech separately via different encoders, such as [4, 6]. Besides, how does the proposed model outperform baseline methods on the speech recognition task as shown in Table 3 when no speech encoder or dense feature is involved? What does the model perform when compared with some speech-centric models?\n\n3. The application scenarios are limited. It seems that the proposed method is only suitable for audio-visual correspondence videos since the training dataset is constructed by at least medium-level AV consistency videos, while the low-level AV consistency data is used for negative samples, yet 1). how to decide whether an in-the-wild video is suitable for the model to infer? and 2). what is the purpose of aligning audio and visual encoders using high AV consistency videos? I believe the alignment stage is more likely to align the audio and visual encoder with the text decoder rather than align the audio encoder with the visual encoder. What will happen if videos with low AV consistency are introduced for training?\n\n4. Audio-visual capabilities are not fully probed. Some audio-visual tasks are not tested, such as audio-visual caption, audio-visual speech recognition, and audio-visual sound source detection as the previous method [6] does. I suggest the authors conduct experiments on these benchmarks and compare the proposed method with [6] to show the model's capability more comprehensively.\n\nReference: \n\n[1] Lin, Yan-Bo, et al. \"Vision transformers are parameter-efficient audio-visual learners.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. \n[2] Tian, Yapeng, Dingzeyu Li, and Chenliang Xu. \"Unified multisensory perception: Weakly-supervised audio-visual video parsing.\" Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16. Springer International Publishing, 2020. \n[3] Li, Guangyao, et al. \"Learning to answer questions in dynamic audio-visual scenarios.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. \n[4] Chen, Sihan, et al. \"Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset.\" Advances in Neural Information Processing Systems 36 (2023): 72842-72866. \n[5] Wang, Yi, et al. \"Internvideo2: Scaling video foundation models for multimodal video understanding.\" arXiv preprint arXiv:2403.15377 (2024). \n[6] Sun, Guangzhi, et al. \"video-SALMONN: Speech-enhanced audio-visual large language models.\" arXiv preprint arXiv:2406.15704 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weakness. \nOverall, I think this article is quite comprehensive, but in this era of a large number of LLM works, I think this work needs to be supplemented with more comparisons to prove that this work is novel enough to be published in ICLR."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The method is soundness. The author put forward a fine grand alignment method, adding visual tokens audio, and special temporal Temporal tokens to achieve better alignment.\nThe.\n\nThis paper put forward a comprehensive dataset with a promising data processing pipeline and obtained large-scale data.\n\nThe paper gives a benchmark based on the task definition and its dataset and compares the baseline methods.\n\nExtensive experiments demonstrate that Dolphin significantly improves audio-visual comprehension and is effective in reducing errors related to audio neglect."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper discusses the importance of audio-visual large language models (AV-LLMs) in multimodal video understanding, with a particular emphasis on the use of audio information. The paper proposes a fine-grained AV-LLM model called Dolphin, which ensures comprehensive and accurate video understanding by aligning audio and video in both spatial and temporal dimensions. To better define the task, this work proposed a related dataset(AVU) and benchmark(AVU-Bench), that contains 5.2 million diverse data pairs (video, audio, questions, answers), and a novel data partitioning strategy is introduced. Experimental results show that Dolphin performs well in audio-visual understanding and effectively reduce hallucinations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experiment is comprehensive but the baseline is weak. The method mentioned VideoLLAMA2, but the experiment seems only to compare the result with VideoLlaMA1. Adding more comparisons against these baselines would be more persuasive.\n\n2. The author mentioned that AVU could reduce the hallucination; while the related analysis is not included in the experiments. \n\n3. The meaning of “fine-grained spatial modeling” lack of definition. Please provide a clear definition or explanation of \"fine-grained spatial modeling\" in the context of their work.\n\n4. Although the author compares video and audio captions separately, more experiments on other audio-visual datasets are expected.\nMany any-to-any models can have a visual-audio understanding ability. What is their performance on the given tasks?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- L32: Does the audio modality prove crucial for comprehensive understanding? Could you substantiate this claim?\n- Does reason (3) on L72 contradict the starting paragraph of the introduction, where the authors assert that audio is crucial for video understanding? Could the authors provide examples of when audio is crucial versus when it may be less informative than visual data?\n- In Table 2 and Table 3, did Dolphin use unimodal signal as an input, or use both of multimodal signal for unimodal task?\n- L67: It appears that the model trained with audio converted to text performs favorably. How would the model perform with video + audio (converted to text)? Could this combination outperform the Dolphin model? Could the authors conduct this experiment?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The motivation of this work, which identifies the weaknesses of AV-LLMs and aims to solve them from two different perspectives, is a sound approach to advancing research in this area.\n- The approach to enhancing spatial and temporal alignment in audio-visual LLMs is innovative.\n- Constructing an audio-visual caption and instructional dataset is beneficial for researchers, as there is a lack of such datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the capabilities of audio-visual large language models (AV-LLMs) to enhance their reasoning and understanding capabilities. As existing AV-LLMs tend to neglect audio information, this paper addresses the issue from two perspectives: model architecture and dataset. For model architecture, the authors enhance both spatial and temporal alignment of AV-LLMs by proposing an audio-visual multi-scale adapter for aggregating multi-scale information and proposing audio-visual interleaved merging, respectively. For the dataset, this paper proposes a large-scale caption & instructional dataset using existing audio-visual data sources. Experimental results show that the proposed model achieves favorable performance in both audio-visual understanding and audio understanding tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**The main text requires further refinement. It contains typos, broken sentences, and inconsistent tenses. The reviewer has identified only some of these issues:**\n - L67: VideoLlama is mentioned twice.\n - L158: \"clap\" should be checked for correctness.\n - L231: \"detailed\" is misspelled as \"detrailed.\"\n - L414 contains a broken sentence.\n - The right figure in Figure 1 is not explained in the main text.\n - There is a typo in the right figure of Figure 2, \"his audio.\"\n - The text should use \"/citep\" for citations.\n\n**The reviewer is concerned about the reliability of the dataset. Since the paper proposes a large-scale dataset, it should include a more detailed explanation, such as dataset statistics. The reviewer points out some missing or problematic aspects that lessen the dataset's reliability:**\n- The prompt templates for constructing the meta information are not provided. These prompts are crucial as they differentiate dataset types and help manage noise in this automatically generated dataset.\n- In Figure 6, AVU-specific, although the questions differ, the answers are identical.\n- In Figure 9, the question asks about the sound of a frog, yet the answer discusses an unrelated aspect of color, highlighting the dataset's noisiness.\n- To address concerns about the dataset's reliability and its claim as a benchmark, human verification of the dataset is necessary. If the dataset is noisy, researchers might hesitate to use it for evaluating models.\n\n**The comparison experiments are not thoroughly conducted. Since the paper focuses on improving the audio-visual understanding of AV-LLMs, it should include comparisons with existing high-performing AV-LLMs. Here are several models that the paper should have considered:**\n- FAVOR: https://arxiv.org/pdf/2310.05863\n- video-Salmon: https://arxiv.org/pdf/2406.15704\n- PandaGPT:https://arxiv.org/abs/2305.16355\n- OneLLM: https://arxiv.org/pdf/2312.03700\n\n**The reliability of the model's design and training is questionable. The inconsistencies and errors in the paper amplify these concerns:**\n- The notations in Figure 2 and the main text differ, making it hard to understand the model's mechanism.\n- What does the superscript “i” stand for in all notations? And what is the difference from the superscript “1” in L178?\n- In Figure 1, how does the Dolphin model recognize the words a man says using the ImageBind audio encoder? Doesn't the ImageBind audio encoder take environmental sound as an input, not speech?\n- In L430, the authors mention that AST was used, but do not explain how they trained or integrated this model.\n- Table 6 not explained in the main text."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce an audio-visual multi-scale adapter that can extract and merge spatial information from both modalities at multiple scales, thereby enhancing feature interaction and spatial alignment between modalities."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024aligned,\ntitle={Aligned Better, Listen Better For Audio-Visual Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1SYUKPeM12},\nnote={under review}\n}"
},
"abstract": {
"value": "Audio is essential for multimodal video understanding. On the one hand, video inherently contains audio and audio supplies complementary information to the visual modality. Besides, video large language models (Video-LLMs) can encounter many audio-centric settings. However, existing Video-LLMs and Audio-Visual Large Language Models (AV-LLMs) exhibit deficiencies in exploiting audio information, leading to weak understanding and hallucination. To solve the issues, we delve into the model architecture and data aspects. (1) From the architectural perspective, we propose a fine-grained AV-LLM, namely Dolphin. The concurrent alignment of audio and visual modalities in both temporal and spatial dimensions ensures a comprehensive and accurate understanding of videos. Specifically, we devise an audio-visual multi-scale adapter for multi-scale information aggregation, which achieves spatial alignment. For temporal alignment, we propose audio-visual interleaved merging. (2) From the data perspective, we curate an audio-visual caption \\& instruction-tuning dataset, called AVU. It comprises 5.2 million diverse, open-ended data tuples (video, audio, question, answer) and introduces a novel data partitioning strategy. Extensive experiments show our model not only achieves remarkable performance in audio-visual understanding, but also mitigates hallucinations. Our codes and dataset will be made publicly available."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Audio-Visual Learning",
"Multimodal Large Language Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/71eb3dede92aa60c8d4d212e54c89024d79d30f2.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Aligned Better, Listen Better For Audio-Visual Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
1T6HzuZMCz | Interpretable Surrogate Models: A Clustering Approach for Gaussian Process Posteriors Using Mixed-Integer Quadratic Programming | main | Active | Interpretability;Clustering;Gaussian Process Regression | interpretability and explainable AI | 3;3;3;5 | 2;5;4;3 | 2;2;3;3 | 2;2;2;2 | 2;1;2;2 | 3.5 | 3.5 | 2.5 | 2 | 1.75 | -0.258199 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In my opinion, the may contribution of the paper should be highlighted as the advantage of using mixed-integer optimization over the iterative clustering algorithm. Indeed, as the authors pointed out, on lines 227-229, the weakness of the iterative clustering algorithm is that it can be trapped in local optimizers. Therefore, the authors should demonstrate why using mixed-integer optimization can overcome such a draw back, either through convergence analysis or extensive simulation studies. But I do not see any of such analysis in the paper, which is a bit disappointing. \n2. The authors claim that by grouping the parameters, one can improve the interpretability of the Gaussian process regression coefficients. I fails to see why this is the case. For Gaussian process regression, it the the estimated functions or surfaces that matter most, not the regression coefficients. Please elaborate on the claimed \"interpretability\". In fact, grouping the coefficients, there is a chance of over-smooth the estimated functions or surfaces if the number of clusters are small. At least some simulation studies should be carried out to investigate these issues.\n3. How does the K-means algorithm work in Figure~5? It looks like it is just clustering the spatial locations? Please elaborate.\n4. For the decision tree algorithm, it is well-know that a single decision tree is not stable and sub-optimal in capturing the non-linear regression relationship. Ensemble methods such as random forest and boosting are much better. Could the proposed algorithm scale up to these method computationally?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Using mixed integer programing as an alternative approach to iterative clustering algorithm such as K-means is indeed an interesting idea and I find the formulation in (5)-(6) quite clever."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes using a mixed-integer quadratic programming algorithm to cluster regression coefficients in Gaussian process regression. The approach is further extended to applications in graph partitioning and decision tree growth. While the proposed algorithm is interesting and potentially valuable, I find the paper difficult to read, as it attempts to cover numerous loosely connected topics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The title of the paper is \"...for Gaussian process posteriors,\" yet the authors delve into topics like graph partitioning and decision tree growth, which don’t seem directly related to Gaussian process regression. This shift from the main theme feels distracting and makes the paper harder to follow. I would have preferred a more focused exploration of Gaussian process regression rather than these loosely connected topics. I also feel that the advantage of the proposed algorithm is not well justified."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1.\tI wonder why the authors chose to approximate the GP posterior distribution rather than the GP predictive distribution. In fact, the proposed methods are designed to approximate the GP posterior mean function values with the GP posterior covariance matrix, not the entire GP posterior distribution (i.e., a single (mode) function in the entire function space). In addition, the GP posterior mean and covariance are already approximated ones since the sparse approximation was used instead of the full GP. \n\n2. Please provide some details for the results reported in Table 1. \na. How was the number of inducing points, m, chosen for the data sets? \nb. How was the decision tree (CART) trained? \nc. It is unclear why model accuracy can be measured by evaluating the values of the loss function (from the MIQP formulation?). In addition, it is unclear whether it is fair to compare the loss function of the two methods, since the proposed methods would provide a solution that minimizes this loss function by design."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Although the proposed methods have clear disadvantages (weaknesses), the MIQP formulations for graph partitioning and decision tree learning look novel."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The manuscript proposes computational methods to enhance the interpretability of the Gaussian process (GP) posterior. The methods are based on clustering of the GP posterior mean values, where a single parameter is used to approximate the posterior mean values of the data points in the same cluster and this approximation is formulated as the minimization of the weighted (the weights are derived from the posterior covariance matrix) squared loss using mixed integer quadratic programming (MIQP). The manuscript shows that two surrogate models, graph partitioning and decision tree, can be implemented in the MIQP formulation with additional linear inequality constraints."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The manuscript seems to have failed to provide attractive applications of the proposed methods to real-world problems. The experiments include only small datasets (even they look like toy datasets). The designed experiments included in the manuscript do not show the significance of the proposed methods.\n\n2. The proposed methods seem to suffer from high computational requirements. It seems that the proposed methods could not handle these small data sets. The manuscript does not provide any computational analysis of the proposed methods. As a result, it is difficult to understand how much computational resources the proposed methods require to solve the given problems."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Can the authors explain what does \"parameter\" mean?\n\n2. Can the authors summarize and highlight the main goal and contribution of this manuscript?\n\n3. More comprehensive experiments are expected."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The idea of combining clustering and GPR seems interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a clustering approach to improve the interpretability of Gaussian Process (GP) regression. By assuming that parameters within each cluster are identical, it reduces the number of parameters in the GP posterior, making the predictions easier to interpret. The clustering is formulated as a mixed-integer quadratic programming problem, with a weighted squared error objective based on the posterior mean approximated by variational inference. The approach also incorporates graph partitioning and decision tree learning through linear constraints."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My major concern is about the presentation. Some sections are hard to follow. For example, the first paragraph in the introduction, the relationship between sentences are not clear to me, it's more like a stack of facts. \n\n2. The main goal & contribution is not very clear to me. According to the abstract, the goal is to improve the interpretability of GPR. However, in 5.2, only prediction accuracy was discussed, but the interpretability was completely overlooked. \n\n3. I've been confused by \"parameters\". What do the authors mean in terms of parameters in the GPR setup?\n\n4. The empirical study can be improved. For example, the clustering results are only compared with k-means, but there are quite a few existing spatial clustering methods that are sometimes better than k-means. Similarly, for 5.2, only CART was considered."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is well-constructed, with an almost solid contribution to the field. The writing quality is high, and the problem formulation and related work are well-explained, providing a clear foundation for understanding the approach. The MIQP formulation is particularly noteworthy and offers a promising avenue for further development."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores enhancing interpretability in Gaussian Process (GP) regression by developing a surrogate model that leverages clustering, graph partitioning, and decision trees. By employing this clustering approach, the authors aim to group predictions into more interpretable segments. The model formulation as a mixed-integer quadratic programming (MIQP) problem optimizes a weighted squared error between the predicted values and the mean of the posterior GP distribution, which is approximated using variational inference. This approach seeks to balance the model’s interpretability with prediction accuracy, providing a structured methodology for creating interpretable surrogates in complex GP regression tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* Initial Definition of the Problem: The problem definition begins with a discussion on the interpretability of Gaussian Processes (GPs), where the first concern arises. GPs are often more interpretable than many machine learning models, particularly due to their probabilistic structure and flexible kernel choices that accommodate domain-specific assumptions. However, as complexity increases (e.g., with higher-dimensional data, and complex kernels), interpretability tends to diminish. The paper could improve its problem definition by clearly specifying which aspect of GP interpretability it seeks to address. For instance, does it aim to handle interpretability in high-dimensional data, manage the interpretability of GPs with complex kernel structures, or focus on non-stationary models? By explicitly defining these directions, the study could clarify the scope and impact of its contributions, helping readers better understand the specific interpretability challenges it addresses.\n \n* Interpretation of GPs with a Large Number of Parameters: A large number of parameters poses challenges in the context of complex kernels and high-dimensional datasets. However, the estimation of these parameters occurs during the training phase, and this paper does not address that step. Consequently, the parameters cannot be altered or modified to enhance interpretability. While the paper identifies the parameters as a source of the interpretability problem, it does not offer any solutions to this issue.\n\n* Novelty and Consistency: If clustering is performed before training, the problem is transformed into a distributed Gaussian process. Predicting new test data points within one or multiple clusters has been explored previously in this field. However, conducting clustering after training and solely on test points is somewhat confusing. In practical scenarios, we do not receive all new points simultaneously; instead, data is entered gradually. How can we perform partitioning under these circumstances?\n\n* Complexity and Computational Cost: Interpretability becomes an issue in Gaussian processes (GPs) as complexity increases. GPs are generally expensive prediction models. However, the authors have integrated this complex method with mixed-integer quadratic programming (MIQP), which is computationally intensive due to its combinatorial nature and the challenges posed by non-linearities in the objective function. It does not appear that this approach can be practically applied, even if it potentially improves interpretability.\n\n* Inadequate Numerical Experiments: The numerical analysis presented in the paper fails to substantiate the main claims. Simply outperforming conventional K-means clustering does not support the key assertions made. Other baselines could have been employed for comparison in the experiments, but they were not utilized in this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024interpretable,\ntitle={Interpretable Surrogate Models: A Clustering Approach for Gaussian Process Posteriors Using Mixed-Integer Quadratic Programming},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=1T6HzuZMCz},\nnote={under review}\n}"
},
"abstract": {
"value": "Gaussian process regression is a flexible Bayesian method for capturing nonlinearity. \nAlthough recent advancements allow us to handle various types of tasks by specifying a covariance function and a likelihood function, the interpretation of its predictions is sometimes challenging due to the large number of parameters. \nIn this study, we propose a clustering approach to improve the interpretability of Gaussian process posteriors. \nAssuming that the parameters corresponding to data points within each cluster are identical, the number of parameters in the posterior distribution is reduced. \nThe assignment of data points to clusters is formulated as a mixed-integer quadratic programming problem, with the objective function being a weighted squared error from the mean of the posterior distribution approximated by variational inference. \nGraph partitioning and decision tree learning can be represented by incorporating linear inequality constraints into this formulation. \nExperimental results demonstrated that our approach provided significant advantages in enhancing the interpretability of spatial modeling. \nMoreover, our formulation has produced higher-scoring decision trees compared to Classification and Regression Trees algorithm."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Interpretability",
"Clustering",
"Gaussian Process Regression"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9ed69551a564bf12d32209346a63a97a576010e7.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/d2f247f7bcccbc52d39115ea784e11db32d428ee.zip"
},
"title": {
"value": "Interpretable Surrogate Models: A Clustering Approach for Gaussian Process Posteriors Using Mixed-Integer Quadratic Programming"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |