gprimosch's Collections: Papers
LMDX: Language Model-based Document Information Extraction and Localization
Paper • 2309.10952 • Published • 65
LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Paper • 2309.12307 • Published • 87
A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models
Paper • 2309.11674 • Published • 31
Boolformer: Symbolic Regression of Logic Functions with Transformers
Paper • 2309.12207 • Published • 11
GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction
Paper • 2305.18752 • Published • 3
SCREWS: A Modular Framework for Reasoning with Revisions
Paper • 2309.13075 • Published • 15
Calibrating LLM-Based Evaluator
Paper • 2309.13308 • Published • 11
Exploring Large Language Models' Cognitive Moral Development through Defining Issues Test
Paper • 2309.13356 • Published • 36
Aligning Large Multimodal Models with Factually Augmented RLHF
Paper • 2309.14525 • Published • 29
Efficient Post-training Quantization with FP8 Formats
Paper • 2309.14592 • Published • 10
Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models
Paper • 2309.15098 • Published • 7
Finite Scalar Quantization: VQ-VAE Made Simple
Paper • 2309.15505 • Published • 21
Evaluating Cognitive Maps and Planning in Large Language Models with CogEval
Paper • 2309.15129 • Published • 6
LD-ZNet: A Latent Diffusion Approach for Text-Based Image Segmentation
Paper • 2303.12343 • Published • 1
Enable Language Models to Implicitly Learn Self-Improvement From Data
Paper • 2310.00898 • Published • 23
Efficient Streaming Language Models with Attention Sinks
Paper • 2309.17453 • Published • 13
Multimodal Analogical Reasoning over Knowledge Graphs
Paper • 2210.00312 • Published • 1
Large Language Models Cannot Self-Correct Reasoning Yet
Paper • 2310.01798 • Published • 33
Large Language Models as Analogical Reasoners
Paper • 2310.01714 • Published • 15
CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules
Paper • 2310.08992 • Published • 10
In-Context Pretraining: Language Modeling Beyond Document Boundaries
Paper • 2310.10638 • Published • 28