karmiq's Collections

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model (arXiv:2211.05100)
CsFEVER and CTKFacts: Acquiring Czech data for fact verification (arXiv:2201.11115)
Training language models to follow instructions with human feedback (arXiv:2203.02155)
FinGPT: Large Generative Models for a Small Language (arXiv:2311.05640)
Orca 2: Teaching Small Language Models How to Reason (arXiv:2311.11045)
GAIA: a benchmark for General AI Assistants (arXiv:2311.12983)
Jailbroken: How Does LLM Safety Training Fail? (arXiv:2307.02483)
Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 (arXiv:2311.10702)
How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources (arXiv:2306.04751)
The Pile: An 800GB Dataset of Diverse Text for Language Modeling (arXiv:2101.00027)
LIMA: Less Is More for Alignment (arXiv:2305.11206)
An In-depth Look at Gemini's Language Abilities (arXiv:2312.11444)
Recursively Summarizing Books with Human Feedback (arXiv:2109.10862)
LLM360: Towards Fully Transparent Open-Source LLMs (arXiv:2312.06550)
Language Resources for Dutch Large Language Modelling (arXiv:2312.12852)
Adapting Large Language Models via Reading Comprehension (arXiv:2309.09530)
Shai: A large language model for asset management (arXiv:2312.14203)
arXiv:2401.04088
RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture (arXiv:2401.08406)
FinTral: A Family of GPT-4 Level Multimodal Financial Large Language Models (arXiv:2402.10986)
Improving Text Embeddings with Large Language Models (arXiv:2401.00368)
SaulLM-7B: A pioneering Large Language Model for Law (arXiv:2403.03883)
MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark (arXiv:2406.01574)