---
language:
  - en
  - hi
license: llama2
library_name: transformers
tags:
  - hindi
  - english
  - Bilingual
datasets:
  - sarvamai/samvaad-hi-v1
pipeline_tag: text-generation
model-index:
  - name: Gaja-v1.00
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 52.82
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Gaja-v1.00
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 76.31
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Gaja-v1.00
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 40.83
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Gaja-v1.00
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 44.64
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Gaja-v1.00
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 70.64
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Gaja-v1.00
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0.91
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Gaja-v1.00
          name: Open LLM Leaderboard
---

# 🐘 Gaja

Gaja is a Hindi/Hinglish chat model, built on SarvamAI's OpenHathi base model and fine-tuned for conversational interactions.

## Inference

Thanks to Bhabha AI, you can now try the model.

## Additional Information

- It outperforms Airavata, AI4Bharat's chat model, on the Hugging Face Open LLM Leaderboard benchmark suite.
- It was fine-tuned on only 1k samples.
- The training code is available on GitHub.

## 💬 Prompt template

```
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
{}<|im_end|>
```
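
For reference, here is a minimal sketch of applying this template by hand (the `format_prompt` helper is hypothetical, not part of the model's API; the tokenizer's built-in chat template used in the inference section below should produce the same layout):

```python
def format_prompt(user_message: str) -> str:
    # ChatML-style template from above; the assistant turn is left open
    # so the model generates the reply after "<|im_start|>assistant".
    return (
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(format_prompt("What is a large language model?"))
```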

## 😎 Features

- **Language Support:** Gaja is designed to understand and generate responses in both Hindi and Hinglish, catering to a diverse range of users.
- **Base Model:** Built upon SarvamAI's OpenHathi model, Gaja inherits its foundational capabilities while being optimized for conversational tasks.
- **Fine-tuning:** Gaja has undergone fine-tuning specifically for chat-based interactions, enhancing its ability to engage in meaningful conversations with users.
- **Experimental Platform:** With its flexibility and adaptability, Gaja serves as a valuable platform for conducting experiments and exploring innovative approaches to chatbot development.

## 😤 Example Output

| User | Assistant |
|---|---|
| what is a large language model | एक बड़ा भाषा मॉडल, जिसे ML या LLM के रूप में भी जाना जाता है, एक प्रकार का एआई एल्गोरिथ्म है जो मानव भाषा को समझने और समझने में निपुण है। यह एल्गोरिथ्म एक बड़ी फिल्टरिंग योजना का उपयोग करता है, जो इसे बड़ी मात्रा में डेटा को संसाधित करने की अनुमति देता है। यह एल्गोरिथ्म एक बड़े डेटासेट से जानकारी निकालता है, जो आमतौर पर इंटरनेट पर पाया जाता है। यह एल्गोरिथ्म मानव भाषा के बारे में जानकारी को समझने और समझने के लिए कई एल्गोरिथमिक तरीकों का उपयोग करता है। बड़े भाषा मॉडल का उपयोग आमतौर पर संभावित संपर्क, अनुप्रयोगों की एक व्यापक श्रृंखला में किया जाता है। |
| How could AI help humans? | AI can help humans in several ways. Firstly, it can assist in decision-making by providing insights and data analysis. For example, AI-powered tools can help individuals and businesses make informed decisions based on real-time data. Secondly, AI can enhance productivity by automating repetitive and mundane tasks. This frees up time and resources for more creative and strategic work. Thirdly, AI can improve healthcare by providing personalized medical diagnoses and treatment plans. AI-powered tools can analyze vast amounts of medical data and provide insights that can lead to more effective and efficient healthcare. Fourthly, AI can enhance security by detecting and preventing cyber threats. AI-powered systems can monitor networks and detect suspicious activity, preventing cyber-attacks before they occur. Lastly, AI can contribute to environmental conservation by optimizing resource utilization and reducing waste. AI-powered systems can monitor and optimize resource usage, leading to more sustainable practices. In summary, AI can help humans by enhancing decision-making, increasing productivity, improving healthcare, enhancing security, and contributing to environmental conservation. |
| क्या आप मुझे बता सकते हैं कि भारत में लोग बड़ों के पैर छूते क्यों हैं? | भारत में, बड़ों के पैर छूना एक आदमी की बड़ों के प्रति श्रद्धा और सम्मान दिखाने की एक पारंपरिक प्रथा है। यह आदमी बड़ों को आदर और आत्मसात करता है। यह एक आदमी के बड़ों के साथ अपने संबंध को स्थापित करने और बड़ों के प्रति अपनी श्रद्धा और आत्मसात को दर्शाित करने का एक तरीका है। यह एक आदमी के बड़ों के साथ अपने घनिष्ठ संबंध को स्थापित करने और बड़ों के प्रति अपनी आत्मात्मकता और आत्मसात को दर्शाित करने का एक तरीका है। |

## Eval

### Hindi eval

| Task | #Samples | Accuracy | Precision | F1 | Recall | Metrics |
|---|---|---|---|---|---|---|
| Indic-Sentiment Analysis | 100 | 0.71 | - | 0.76 | - | Accuracy, F1 score |
| Indic-QA Evaluation | 50 | - | 0.62 | 0.68 | 0.75 | Bert Score |
| Indic-NLI | 50 | 0.24 | - | 0.17 | - | Accuracy, F1 score |
| Indic-Paraphrase | 500 | 0.52 | 0.49 | 0.48 | - | Accuracy, F1 score, Precision |

### English eval

| Model name | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|---|
| damerajee/Gaja-v1.00 | 47.69 | 52.82 | 76.31 | 40.83 | 44.64 | 70.64 | 0.91 |
| manishiitg/open-aditi-hi-v2 | 59.31 | 59.39 | 82.01 | 61.41 | 45.84 | 77.19 | 30.02 |
| ai4bharat/Airavata | 45.52 | 46.5 | 69.26 | 43.9 | 40.62 | 68.82 | 4.02 |


## 🚀 Inference (Colab or Kaggle notebooks)

### Installing dependencies

```
!pip install -q peft bitsandbytes datasets accelerate
```
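
Colab and Kaggle images already ship with `transformers` and `torch`; on a fresh environment you would also need to install those (unpinned here, as the card does not specify versions):

```
!pip install -q transformers torch
```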

### Load the model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("damerajee/Gaja-v1.00")
# 4-bit loading requires bitsandbytes and a CUDA GPU
model = AutoModelForCausalLM.from_pretrained("damerajee/Gaja-v1.00", load_in_4bit=True)
```
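
Newer transformers releases prefer an explicit quantization config over passing `load_in_4bit` directly to `from_pretrained`; a minimal equivalent sketch (same assumptions: `bitsandbytes` installed, CUDA GPU available):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "damerajee/Gaja-v1.00",
    quantization_config=quant_config,  # 4-bit weights via bitsandbytes
    device_map="auto",                 # place layers on the available GPU
)
```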

### Try it out


```python
from transformers import TextStreamer

messages = [
    {"role": "user", "content": "Why do people in India touch the feet of elders when they greet them?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,  # must be set so the model generates the assistant turn
    return_tensors="pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids=inputs, streamer=text_streamer, max_new_tokens=300, use_cache=True)
```
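
To capture the reply as a string instead of streaming it to stdout, a minimal variant (reusing `inputs` from above; the slice drops the prompt tokens so only the newly generated text is decoded):

```python
output_ids = model.generate(input_ids=inputs, max_new_tokens=300, use_cache=True)
reply = tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```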

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=damerajee/Gaja-v1.00).

| Metric | Value |
|---|---|
| Avg. | 47.69 |
| AI2 Reasoning Challenge (25-Shot) | 52.82 |
| HellaSwag (10-Shot) | 76.31 |
| MMLU (5-Shot) | 40.83 |
| TruthfulQA (0-shot) | 44.64 |
| Winogrande (5-shot) | 70.64 |
| GSM8k (5-shot) | 0.91 |