---
license: other
library_name: transformers
tags:
  - chatml
  - finetune
  - gpt4
  - synthetic data
  - custom_code
  - qwen2
datasets:
  - Locutusque/Hercules-v3.0
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat/raw/main/LICENSE
model-index:
  - name: Reyna-Mini-1.8B-v0.2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 36.6
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 60.19
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 44.75
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 41.24
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 61.56
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 31.31
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2
          name: Open LLM Leaderboard
---

Reyna aloobun qwen1.8B

  • Finetuned from Qwen/Qwen1.5-1.8B-Chat with SFT on the Hercules-v3.0 dataset.
  • This marks the third model in this series.
  • Format: ChatML (a prompt-building sketch follows this list) -
      <|im_start|>system
      {system}<|im_end|>
      <|im_start|>user
      {prompt}<|im_end|>
      <|im_start|>assistant
    
  • The next step is a DPO training run on top.
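
Rather than assembling the ChatML string by hand, here is a minimal sketch using tokenizer.apply_chat_template, assuming the tokenizer ships the ChatML chat template (Qwen1.5-Chat derivatives do):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aloobun/Reyna-Mini-1.8B-v0.2", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Is there inherent order in nature or is it all chaos and chance?"},
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n".
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)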

Benchmarks:

Avg.   ARC    HellaSwag  MMLU   TruthfulQA  Winogrande  GSM8K
45.94  36.60  60.19      44.75  41.24       61.56       31.31
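
These are Open LLM Leaderboard numbers. To sanity-check one of them locally, here is a hedged sketch using EleutherAI's lm-evaluation-harness (v0.4 Python API; the leaderboard's exact harness commit and settings may differ, so expect small deviations):

import lm_eval  # pip install lm-eval

# Re-run ARC-Challenge (25-shot) locally; the configuration here is an
# assumption, not the leaderboard's exact setup.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aloobun/Reyna-Mini-1.8B-v0.2,dtype=bfloat16,trust_remote_code=True",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # acc and acc_norm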

Example:

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    TextStreamer,
    StoppingCriteria,
    StoppingCriteriaList,
)
import torch

class MyStoppingCriteria(StoppingCriteria):
    """Stop generation once `target_sequence` appears in the model's output."""

    def __init__(self, target_sequence, prompt, tokenizer):
        self.target_sequence = target_sequence
        self.prompt = prompt
        self.tokenizer = tokenizer

    def __call__(self, input_ids, scores, **kwargs):
        # Decode everything generated so far, then strip the prompt so we
        # only match against text the model itself produced.
        generated_text = self.tokenizer.decode(input_ids[0])
        generated_text = generated_text.replace(self.prompt, "")
        return self.target_sequence in generated_text

modelpath = "aloobun/Reyna-Mini-1.8B-v0.2"

# Load the model in bfloat16 on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    modelpath,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(
    modelpath,
    trust_remote_code=True,
    use_fast=False,
)

prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nIs there inherent order in nature or is it all chaos and chance?<|im_end|>\n<|im_start|>assistant\n"

encoded_input = tokenizer(prompt, return_tensors="pt")
input_ids = encoded_input["input_ids"].cuda()

# Stream tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True)
op = model.generate(
    input_ids,
    streamer=streamer,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.6,
    top_p=0.8,
    max_new_tokens=512,
    stopping_criteria=StoppingCriteriaList(
        [MyStoppingCriteria("<|im_end|>", prompt, tokenizer)]
    ),
)
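
The streamer already prints the reply token by token. If you also want it as a string, here is a minimal follow-up (assuming op is the tensor returned by generate, which still includes the prompt tokens):

# Slice off the prompt tokens and decode only the newly generated part.
response = tokenizer.decode(op[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)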

Output:

Nature appears to be inherently organized, with patterns and structures that can be observed across different levels of organization. However, the exact mechanisms by which these patterns emerge and evolve remain largely unknown. The universe seems to be governed by a series of laws and principles known as "laws of physics," such as Newton's laws of motion, electromagnetism, and thermodynamics. These laws govern how matter and energy interact with each other and how they behave over time. Despite our understanding of these laws, we still struggle to comprehend the underlying mechanisms that allow for the emergence of complex patterns and structures. This is because the universe operates on a scale that is too small for us to observe directly, and therefore we cannot fully understand its internal workings. In summary, while there may be some level of order and structure within the universe, the precise mechanisms governing this order remain largely unknown.<|im_end|>

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/Reyna-Mini-1.8B-v0.2

Metric                             Value
Avg.                               45.94
AI2 Reasoning Challenge (25-Shot)  36.60
HellaSwag (10-Shot)                60.19
MMLU (5-Shot)                      44.75
TruthfulQA (0-shot)                41.24
Winogrande (5-shot)                61.56
GSM8k (5-shot)                     31.31