---
license: apache-2.0
model-index:
- name: Delexa-V0.1-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.38
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.98
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.97
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 61.69
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.06
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
      name: Open LLM Leaderboard
---

## Delexa-V0.1-7b: Our Newest and Best Model Yet!

We are excited to announce the release of Delexa-V0.1-7b, our newest and best model yet! Delexa-V0.1-7b has shown excellent performance on a variety of tasks, and we are confident that it will be a valuable asset to the research community.

### Eval Results

Delexa-V0.1-7b was evaluated on a dataset of question-answer pairs. The model was given a single question and three different answer choices and was tasked with selecting the best answer. Delexa-V0.1-7b achieved an average score of 8.19 on this task, outperforming gpt-3.5-turbo (7.94) and claude-v1 (7.90) and trailing only gpt-4 (8.99).

Here is a table showing the detailed eval results:

| Model | Turn 1 | Turn 2 | Average |
|---|---|---|---|
| gpt-4 | 8.95625 | 9.0250 | 8.990625 |
| Delexa-V0.1-7b | 8.57500 | 7.8125 | 8.193750 |
| gpt-3.5-turbo | 8.07500 | 7.8125 | 7.943750 |
| claude-v1 | 8.15000 | 7.6500 | 7.900000 |
| palm-2-chat-bison-001 | 6.71250 | 6.0875 | 6.400000 |
| vicuna-13b-v1.3 | 6.81250 | 5.9625 | 6.387500 |

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a87f9532c9473fed9caab0/8frcbCX0Wi0WEJadwULFU.png)

### Technique

One of the key factors behind Delexa-V0.1-7b's performance is the technique of training the model with one question paired with three different answers. This technique allows the model to take into account different perspectives and viewpoints, which leads to more robust and accurate results.

### Future Work

We are excited to continue working on Delexa and to see how it can be further improved. We are currently working on an Instruct model, a variant that can be fine-tuned on specific tasks.
We believe that Instruct models have the potential to be even more powerful than Delexa-V0.1-7b, and we are eager to see the results of our ongoing research.

We would like to thank the entire team for their hard work on Delexa-V0.1-7b. We are confident that this model will be a valuable asset to the research community.

### Guardrails

This model allows 18+ and lewd content, but it won't let any illegal content through (unless you jailbreak it).

### Support Our Work and Join Our Community!

[Our Patreon](https://patreon.com/Lex_Hue?utm_medium=unknown&utm_source=join_link&utm_campaign=creatorshare_creator&utm_content=copyLink)

[Our Twitter](https://twitter.com/lex_hue)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lex-hue__Delexa-V0.1-7b).

| Metric | Value |
|---------------------------------|----:|
| Avg. | 69.94 |
| AI2 Reasoning Challenge (25-Shot) | 66.38 |
| HellaSwag (10-Shot) | 85.98 |
| MMLU (5-Shot) | 63.97 |
| TruthfulQA (0-shot) | 61.69 |
| Winogrande (5-shot) | 78.06 |
| GSM8k (5-shot) | 63.53 |
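As a rough illustration of the "one question, three answers" training technique described in the Technique section, a single training record could be structured as below. This is only a sketch: the field names and the flattening function are hypothetical, not the actual training schema used for Delexa-V0.1-7b.

```python
# Hypothetical sketch of a "one question, three answers" training record.
# Field names are illustrative only, not the real Delexa training schema.
record = {
    "question": "What causes the seasons on Earth?",
    "answers": [
        "The tilt of the Earth's axis relative to its orbital plane.",
        "The Earth's changing distance from the Sun over the year.",
        "Variations in the Sun's output over the year.",
    ],
}

def to_training_text(rec):
    """Flatten a record into one training string so the model sees all
    three candidate answers for the same question at once."""
    answers = "\n".join(f"- {a}" for a in rec["answers"])
    return f"Question: {rec['question']}\nAnswers:\n{answers}"

print(to_training_text(record))
```

Presenting several candidate answers per question in one example is what lets the model weigh different perspectives rather than memorize a single reference answer.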