---
license: apache-2.0
---

# FRAMES: Factuality, Retrieval, And reasoning MEasurement Set

FRAMES is a comprehensive evaluation dataset designed to test the capabilities of Retrieval-Augmented Generation (RAG) systems across factuality, retrieval accuracy, and reasoning.

## Dataset Overview

- 824 challenging multi-hop questions requiring information from 2-15 Wikipedia articles
- Questions span diverse topics, including history, sports, science, animals, and health
- Each question is labeled with reasoning types: numerical, tabular, multiple constraints, temporal, and post-processing
- Gold answers and relevant Wikipedia articles are provided for each question

## Key Features

- Tests end-to-end RAG capabilities in a unified framework
- Requires integration of information from multiple sources
- Incorporates complex reasoning and temporal disambiguation
- Designed to be challenging for state-of-the-art language models

## Usage

This dataset can be used to:

- Evaluate RAG system performance
- Benchmark language model factuality and reasoning
- Develop and test multi-hop retrieval strategies

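Evaluation on multi-hop QA benchmarks like this one is commonly scored as exact-match accuracy against the gold answers. Below is a minimal scoring sketch; the `normalize` rule and the toy prediction/gold pairs are illustrative assumptions, not the official FRAMES metric.

```python
# Minimal exact-match accuracy scorer for (prediction, gold answer) pairs.
# The normalization (lowercase, strip punctuation, collapse whitespace) is
# an illustrative assumption, not the official FRAMES evaluation protocol.
import string


def normalize(text: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def exact_match_accuracy(predictions: list[str], golds: list[str]) -> float:
    """Fraction of predictions matching the gold answer after normalization."""
    correct = sum(
        normalize(p) == normalize(g) for p, g in zip(predictions, golds)
    )
    return correct / len(golds) if golds else 0.0


# Toy examples, not taken from the dataset.
preds = ["Paris.", "1969", "blue whale"]
golds = ["paris", "1968", "Blue Whale"]
print(exact_match_accuracy(preds, golds))  # 2 of 3 match
```

In practice you would compute this over model outputs for all 824 questions; published FRAMES baselines may use a different (e.g. model-graded) matching rule.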
## Baseline Results

We provide baseline results using state-of-the-art models like Gemini-Pro-1.5-0514:

- Naive prompting: 40.8% accuracy
- BM25 retrieval (4 docs): 47.4% accuracy
- Oracle retrieval: 72.9% accuracy
- Multi-step retrieval & reasoning: 66% accuracy

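The BM25 baseline above retrieves the top few articles per question by lexical score. As a reference point, here is a self-contained sketch of Okapi BM25 top-k ranking; the toy corpus is made up, and the `k1`/`b` values are common defaults used here as assumptions (the actual baseline retrieves from Wikipedia).

```python
# Self-contained Okapi BM25 ranking sketch for a top-k retrieval baseline.
# Toy corpus and k1/b defaults are illustrative assumptions; the FRAMES
# baseline retrieves from Wikipedia articles, not this list.
import math
from collections import Counter


def bm25_rank(query: str, docs: list[str], k: int = 4,
              k1: float = 1.5, b: float = 0.75) -> list[int]:
    """Return indices of the top-k documents by Okapi BM25 score."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(toks) for toks in tokenized) / n
    # Document frequency: number of docs containing each term.
    df = Counter(term for toks in tokenized for term in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            num = tf[term] * (k1 + 1)
            den = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * num / den
        scores.append(score)
    return sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]


docs = [
    "the eiffel tower is in paris",
    "bm25 is a ranking function for search",
    "paris is the capital of france",
]
print(bm25_rank("capital of paris", docs, k=2))  # → [2, 0]
```

The retrieved documents are then placed in the model's context, which is how a "BM25 retrieval (4 docs)" configuration would be assembled.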
## Citation

If you use this dataset in your research, please cite our paper:

We hope FRAMES will be useful for advancing RAG systems and language model capabilities. For more details, please refer to our full paper.