Stablelm_Telugu Model

Model Details:

  • Model Name: Stablelm_Telugu (Telugu Romanized)
  • Foundational Model: Stable LM 2 1.6B
  • Parameters: 1.6 Billion
  • Pre-training Data: 2 Trillion Tokens from Multilingual and Code Datasets
  • Pre-training Epochs: 2

Fine-Tuning

The Stablelm_Telugu model was fine-tuned on eswardivi/telugu_instruction_dataset, an Alpaca-format dataset from Telugu-LLM-Labs that comprises translated and transliterated versions of yahma_alpaca and teknium_GPTeacher_general.
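
For reference, each Alpaca-format record pairs an instruction (and an optional input field) with a target output. The record below is an invented illustration of that layout, not an actual row from the dataset:

# Hypothetical Alpaca-format record (illustrative only, not taken from the dataset)
example_record = {
    "instruction": "Naku oka Python program ivvu, 1 nundi 10 varaku count cheyadaniki.",
    "input": "",  # optional extra context; frequently empty
    "output": "for n in range(1, 11):\n    print(n)",
}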

axolotl was used for fine-tuning; the full YAML configuration file is below.

base_model: stabilityai/stablelm-2-1_6b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true

load_in_8bit: true
load_in_4bit: false
strict: false

push_dataset_to_hub:
datasets:
  - path: eswardivi/telugu_instruction_dataset
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./lora-out

adapter: lora
lora_model_dir:

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

lora_r: 16
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: telugu_llm
wandb_entity:
wandb_watch:
wandb_name: stablelm_1_6
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 4
optimizer: adamw_bnb_8bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false

bf16: false
fp16: true
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:

local_rank:
logging_steps: 1

xformers_attention:
flash_attention: true
gptq_groupsize:
s2_attention:
gptq_model_v1:
warmup_steps: 100
evals_per_epoch: 2
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|endoftext|>"
  eos_token: "<|endoftext|>"
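
Training with this file is typically launched through axolotl's CLI (for example, accelerate launch -m axolotl.cli.train <config>.yml), and the LoRA adapter weights are written to ./lora-out per output_dir above. The snippet below is a minimal sketch, assuming the peft library is installed, of attaching that adapter to the base model for local testing; the published eswardivi/stablelm_telugu checkpoint can instead be loaded directly, as shown in the Usage section.

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter produced by the run above.
base = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-1_6b", trust_remote_code=True, torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "./lora-out")  # path from output_dir in the config
model = model.merge_and_unload()  # optionally fold the adapter into the base weights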

Fine-Tuning Data:

  • Dataset: telugu_instruction_dataset
  • Format: Alpaca
  • Source: Telugu-LLM-Labs (hosted on the Hugging Face Hub as eswardivi/telugu_instruction_dataset)
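
As a quick check, the dataset can be loaded and inspected with the datasets library; a minimal sketch, assuming a single train split:

from datasets import load_dataset

# Pull the instruction-tuning data used for this model (split name assumed to be "train").
ds = load_dataset("eswardivi/telugu_instruction_dataset", split="train")
print(ds[0])  # each record carries the Alpaca-style instruction/input/output fields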

For more details on the base model, see stabilityai/stablelm-2-1_6b.

Usage

Get started generating text with Stablelm_Telugu using the following code snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned tokenizer and model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("eswardivi/stablelm_telugu", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
  "eswardivi/stablelm_telugu",
  trust_remote_code=True,
  torch_dtype="auto",
)
model.cuda()

def create_prompt(instruction: str) -> str:
    # Wrap the instruction in the Instruction/Response prompt used at inference time.
    prompt_template = f"""
    Instruction:
    {instruction}

    Response:
    """
    return prompt_template

# Tokenize the prompt and move it to the model's device.
inputs = tokenizer(create_prompt("Naku python Program 1 to 10 count cheyadaniki ivvu"), return_tensors="pt").to(model.device)

# Sample a completion; temperature and top_p control output diversity.
tokens = model.generate(
  **inputs,
  max_new_tokens=1024,
  temperature=0.65,
  top_p=0.85,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))

Output

Instruction: Naku python Program 1 to 10 count cheyadaniki ivvu

Response: python program 1 to 10 count cheyadaniki ivvabadina code ikkada vundi:

count = 0
for n in range(1, 11):
    count += 1
print("count: ", count)

idi python program 1 to 10 count cheyadaniki ivvabadina code, idi 10 nundi 11 varaku 10 sankhyalanu 1 nundi 10 varaku tisukoni 10 sankhyala sankhyalanu leckinchadam dwara prarambhamavuthundi. 1 nundi 10 varaku 10 sankhyalanu tisukoni, 1 nundi 10 varaku 10 sankhyalanu 1 nundi 10 varaku 10 sankhyala sankhyalanu leckinchadam dwara prarambhamavuthundi.

Run with Flash Attention 2 ⚡️

from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("eswardivi/stablelm_telugu", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
  "eswardivi/stablelm_telugu",
  trust_remote_code=True,
  torch_dtype="auto",
  # Requires the flash-attn package and a supported CUDA GPU.
  attn_implementation="flash_attention_2",
)
model.cuda()
def create_prompt(instruction: str) -> str:
    prompt_template = f""" 
    Instruction:
    {instruction}

    Response:
    """
    return prompt_template
inputs = tokenizer(create_prompt("Naku python Program 1 to 10 count cheyadaniki ivvu"), return_tensors="pt").to(model.device)
tokens = model.generate(
  **inputs,
  max_new_tokens=1024,
  temperature=0.65,
  top_p=0.85,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))

Use and Limitations

Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.

Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.

How to Cite

@misc{StableLM-2-1.6B,
      url={[https://huggingface.co/stabilityai/stablelm-2-1.6b](https://huggingface.co/stabilityai/stablelm-2-1.6b)},
      title={Stable LM 2 1.6B},
      author={Stability AI Language Team}
}