
Ghost 7B v0.9.1

[Logo: Ghost 7B v0.9.1, flying]

An early release version of the Ghost 7B Alpha model.

A next-generation large language model, optimized for strong reasoning and multi-task knowledge.

▢️ Experience it on Colab

In addition, the model is also available in GGUF and AWQ versions.
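
For example, a GGUF build can be run locally with llama-cpp-python. This is only a minimal sketch: the file name below is a placeholder, so check the GGUF repository for the actual file to download.

    from llama_cpp import Llama

    # Placeholder path: download the actual GGUF file from the GGUF repository first.
    llm = Llama(
        model_path="ghost-7b-v0.9.1.Q4_K_M.gguf",
        n_ctx=2048,        # context window
        n_gpu_layers=-1,   # offload all layers to the GPU if one is available
    )
    output = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello, who are you?"},
        ],
        max_tokens=256,
    )
    print(output["choices"][0]["message"]["content"])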

Go ahead and create an AI assistant of your own, tailored to your wishes, in your language of choice: Vietnamese, or English. Let the assistant become an expert, and more.

Challenge the model's language understanding and its reasoning ability in Vietnamese, even when the Vietnamese lacks accents, uses abbreviations, or uses slang.

πŸ“š Model Details

Model Description

This version explores comprehension when generating in a language other than the one the base model was originally trained on, in this case Vietnamese. In short, fine-tuning the Mistral 7B model on a new language turned out to be both effective and low cost.

I retrained the Ghost 7B v0.9.0 model with a smaller amount of data, estimated at only about 150 MB, of which roughly 70% is Vietnamese and the rest is mostly English. The approach uses QLoRA for training and then merges the adapters back into the base model; a rough sketch of this flow follows. I am also very thankful to Unsloth for their tooling.
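
For reference, the general QLoRA-then-merge flow looks roughly like the sketch below, using transformers and peft. The hyperparameters, model names, and adapter paths are illustrative only; the actual training was done with Unsloth's tooling.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, PeftModel, get_peft_model

    base = "mistralai/Mistral-7B-v0.1"  # illustrative base model name

    # Load the base model in 4-bit (QLoRA) to keep memory usage low during training.
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(base)

    # Attach LoRA adapters to the attention and MLP projections (illustrative ranks).
    lora = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    )
    model = get_peft_model(model, lora)

    # ... train on the bilingual dataset (e.g. with a supervised fine-tuning loop), then save the adapter ...
    # model.save_pretrained("ghost-adapter")

    # After training, merge the adapter back into a full-precision copy of the base model.
    full = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
    merged = PeftModel.from_pretrained(full, "ghost-adapter").merge_and_unload()
    merged.save_pretrained("ghost-7b-merged")
    tokenizer.save_pretrained("ghost-7b-merged")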

⛹️‍♂️ Uses

Online using Google Colab

To make it easier to play around with the model, I created a notebook in Google Colab so you can start experimenting.

Directly

For direct use, you can easily get started with the following steps.

  • First, install transformers with pip using the command below (accelerate is also needed for device_map="auto", and bitsandbytes for the 4-bit quantization example).

    pip install -U transformers accelerate bitsandbytes
    
  • Now you can start using the model directly.

    import torch
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
    )
    
    base_model = "lamhieu/ghost-7b-v0.9.1"
    model = AutoModelForCausalLM.from_pretrained(
        base_model,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    
    messages = [
        {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
        {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
    ]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    # Move the tokenized inputs to the same device as the model before generating.
    tokenized = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
    outputs = model.generate(**tokenized, max_new_tokens=512)
    results = tokenizer.batch_decode(outputs)[0]
    print(results)
    
  • Additionally, you can load the model with 4-bit quantization to minimize the required resources; this needs the bitsandbytes package. You can start with the code below.

    import torch
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
    )
    
    base_model = "lamhieu/ghost-7b-v0.9.1"
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=False,
    )
    model = AutoModelForCausalLM.from_pretrained(
        base_model,
        quantization_config=bnb_config,
        trust_remote_code=True,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    
    messages = [
        {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
        {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
    ]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    # Move the tokenized inputs to the same device as the model before generating.
    tokenized = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
    outputs = model.generate(**tokenized, max_new_tokens=512)
    results = tokenizer.batch_decode(outputs)[0]
    print(results)
    

Summary

Although the amount of training data is small, the results are surprisingly good. Don't worry too much that the model won't meet some of your requirements; instead, try experimenting with whatever you want. Also, use it like you would ChatGPT: I have deliberately tuned it so it can replace that app for some of my tasks, and it does a good job. It handles both Vietnamese and English. Feedback about your experience is very welcome; feel free to leave a note in the discussion section.

The system prompt has a large impact on the performance and quality of the content the model generates. Keep this in mind so the model stays steered toward your intended purpose and produces good results. It's best to always set a system prompt, although you can leave it empty if you prefer; a short example follows.
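
For example, reusing the model and tokenizer loaded above, a system prompt can turn the model into a domain expert. The wording here is only an illustration:

    # Reuse `model` and `tokenizer` from the snippets above.
    messages = [
        {
            "role": "system",
            # Vietnamese: "You are an AI assistant specialized in programming; always answer concisely in Vietnamese."
            "content": "BαΊ‘n lΓ  mα»™t trợ lΓ½ AI chuyΓͺn về lαΊ­p trΓ¬nh, luΓ΄n trαΊ£ lời ngαΊ―n gọn bαΊ±ng tiαΊΏng Việt.",
        },
        # Vietnamese: "Explain recursion with a short example."
        {"role": "user", "content": "GiαΊ£i thΓ­ch đệ quy bαΊ±ng mα»™t vΓ­ dα»₯ ngαΊ―n."},
    ]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    tokenized = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
    outputs = model.generate(**tokenized, max_new_tokens=512)
    print(tokenizer.batch_decode(outputs)[0])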

πŸ₯‡ Evaluation

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                               Value
Avg.                                 55.10
AI2 Reasoning Challenge (25-Shot)    55.38
HellaSwag (10-Shot)                  77.03
MMLU (5-Shot)                        54.78
TruthfulQA (0-shot)                  43.96
Winogrande (5-shot)                  72.53
GSM8k (5-shot)                       26.91

VMLU

A Vietnamese Multitask Language Understanding Benchmark Suite for Large Language Models.

With this score, the model ranks 3rd on VMLU's "Leaderboard of fine-tuned models", as of the date of evaluation.


Details
{
  "humanity": {
    "administrative_law": 52.22,
    "business_law": 40.22,
    "civil_law": 46.11,
    "criminal_law": 49.08,
    "economic_law": 39.75,
    "education_law": 42.17,
    "elementary_history": 55.37,
    "high_school_history": 36.67,
    "high_school_literature": 37.78,
    "history_of_world_civilization": 46.67,
    "idealogical_and_moral_cultivation": 50,
    "introduction_to_laws": 45.24,
    "vietnamese_language_and_literature": 34.48,
    "total": 43.3,
    "revolutionary_policy_of_the_vietnamese_commununist_part": 51.11,
    "introduction_to_vietnam_culture": 30.56,
    "logic": 27.01,
    "middle_school_history": 44.44,
    "middle_school_literature": 50.57
  },
  "stem": {
    "total": 34.73,
    "applied_informatics": 50.56,
    "computer_architecture": 33.89,
    "computer_network": 43.02,
    "discrete_mathematics": 31.52,
    "electrical_engineering": 30.68,
    "elementary_mathematics": 30,
    "elementary_science": 58.89,
    "high_school_biology": 38.33,
    "high_school_chemistry": 28.89,
    "high_school_mathematics": 26.35,
    "high_school_physics": 29.44,
    "introduction_to_chemistry": 27.37,
    "introduction_to_physics": 31.79,
    "introduction_to_programming": 36.31,
    "metrology_engineer": 31.21,
    "middle_school_biology": 46.47,
    "middle_school_chemistry": 30.56,
    "middle_school_mathematics": 30.56,
    "middle_school_physics": 30,
    "operating_system": 40.56,
    "statistics_and_probability": 22.99
  },
  "total": 39.58,
  "other": {
    "accountant": 31.55,
    "civil_servant": 42.11,
    "clinical_pharmacology": 33.89,
    "driving_license_certificate": 59.06,
    "environmental_engineering": 28.07,
    "internal_basic_medicine": 39.77,
    "preschool_pedagogy": 46.08,
    "tax_accountant": 22.41,
    "tax_civil_servant": 47.95,
    "total": 38.99
  },
  "social_science": {
    "business_administration": 41.38,
    "high_school_civil_education": 45,
    "high_school_geography": 34.57,
    "ho_chi_minh_ideology": 48.04,
    "macroeconomics": 31.11,
    "microeconomics": 37.22,
    "middle_school_civil_education": 66.29,
    "middle_school_geography": 48.3,
    "principles_of_marxism_and_leninism": 30,
    "sociology": 53.93,
    "total": 43.58
  }
}

πŸ“œ More Information

Note: this is a personal research project with a limited budget, so the model only goes as far as an evaluation of the developed approach. That said, I believe a model with better language quality and overall performance could certainly be built using this approach.

Thanks for the support

Model trained with Unsloth, many thanks.

πŸ“¨ Model Card Contact

Lam Hieu ([email protected])
