---
library_name: transformers
tags:
  - unsloth
license: apache-2.0
datasets:
  - turkish-nlp-suite/InstrucTurca
language:
  - tr
pipeline_tag: text-generation
base_model:
  - unsloth/Meta-Llama-3.1-8B
---

This is a Llama-3.1-8B model fine-tuned on the InstrucTurca dataset to improve its Turkish-language capabilities.

Note: This repository contains only the LoRA adapters. You also need to load the base model and apply the adapters on top of it.

Example usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
model_name = "unsloth/Meta-Llama-3.1-8B"
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.float16
)
model.gradient_checkpointing_enable()  # optional: only useful if you fine-tune further

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Apply the LoRA adapters from this repository
adapter_path = "suayptalha/Llama-3.1-8b-Turkish-Finetuned"
model = PeftModel.from_pretrained(model, adapter_path)

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "",  # your instruction here
            "",  # optional input/context here
            "",  # leave empty so the model generates the response
        )
    ],
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```
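
If you prefer to deploy a standalone checkpoint rather than loading the adapters on top of the base model at every startup, you can merge the LoRA weights into the base model with PEFT's `merge_and_unload`. A minimal sketch, continuing from the example above (the output directory name is just an example):

```python
# Fold the LoRA weights into the base model and drop the PEFT wrappers,
# leaving a plain transformers model with no runtime PEFT dependency.
merged_model = model.merge_and_unload()

# Save the merged weights and tokenizer (example path)
merged_model.save_pretrained("llama-3.1-8b-turkish-merged")
tokenizer.save_pretrained("llama-3.1-8b-turkish-merged")
```

The merged directory can then be loaded directly with `AutoModelForCausalLM.from_pretrained`, with no separate adapter step.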