Request for Assistance with Wikipedia Text Content for FRAMES Benchmark Evaluation

#13 by Siheng99

Upon downloading the dataset from Hugging Face, we ran into a challenge: the dataset seems to be missing the Wikipedia text content. Despite our efforts to extract the relevant text ourselves, for example by following the methods you mentioned, including the title string match, we have been unable to retrieve all of the necessary content.
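For reference, this is roughly the title-match retrieval we attempted, sketched against the public MediaWiki extracts API (the example title and the way titles would be iterated over from the dataset are assumptions on our side):

```python
# Minimal sketch: fetch the plain-text extract of a Wikipedia article by title.
# In practice we loop over the titles/links from the dataset; the title below
# is only an illustrative example.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def fetch_wikipedia_text(title):
    """Return the plain-text extract for a Wikipedia title, or None if not found."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,   # strip HTML, return plain text
        "redirects": 1,     # follow redirects so renamed titles still resolve
        "titles": title,
        "format": "json",
    }
    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    page = next(iter(pages.values()))
    return page.get("extract")  # None when the title string match fails

text = fetch_wikipedia_text("Alan Turing")  # example title, not from the dataset
```

This works for many pages, but renamed or disambiguated titles still slip through, which is why we are asking about the original text content.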

Given the importance of this text content for our research, I would like to kindly ask if it would be possible for you to share the Wikipedia text content directly with us. Having access to this resource would be invaluable for our project, and it would significantly support our ongoing work.

I have implemented the evaluation script for this benchmark in my open-source optimizing inference proxy: https://github.com/codelion/optillm/blob/main/scripts/eval_frames_benchmark.py. You can use that. The readurls plugin in the proxy will also let you fetch the content of the Wikipedia pages. We recently hit a new SOTA using the optillm library with gpt-4o-mini, reaching accuracy almost equal to Gemini Flash even though gpt-4o-mini has 1/10 of the context length.
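Roughly, wiring it up looks like this (a minimal sketch; the local port and the model-name prefix used to enable the readurls plugin are assumptions here, so check the README for the exact configuration):

```python
# Sketch: point the standard OpenAI client at a locally running optillm proxy
# and select the readurls plugin so linked pages are fetched into the context.
from openai import OpenAI

client = OpenAI(
    api_key="sk-anything",                # optillm forwards requests to the real backend
    base_url="http://localhost:8000/v1",  # assumed default optillm address/port
)

response = client.chat.completions.create(
    model="readurls-gpt-4o-mini",         # assumed plugin-prefix naming convention
    messages=[{
        "role": "user",
        "content": "Using https://en.wikipedia.org/wiki/Alan_Turing, "
                   "answer: where was Alan Turing born?",
    }],
)
print(response.choices[0].message.content)
```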


Thanks! This will be very helpful for our project.

Best,
Siheng
