---
language:
- en
- tr
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** umarigan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## Usage Examples

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("umarigan/llama-3-openhermes-tr")
model = AutoModelForCausalLM.from_pretrained("umarigan/llama-3-openhermes-tr")

# Alpaca-style prompt template in Turkish (Görev = task, Girdi = input, Cevap = answer)
alpaca_prompt = """
Görev: {}
Girdi: {}
Cevap: {}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "fibonacci dizisinin devamını getir.",  # instruction: "continue the Fibonacci sequence."
            "1, 1, 2, 3, 5, 8",                     # input
            "",                                     # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
)

outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.batch_decode(outputs))
```
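Since the base checkpoint is a bitsandbytes 4-bit model, you may prefer to load the fine-tuned weights quantized to cut GPU memory use. Below is a minimal sketch using `BitsAndBytesConfig` from `transformers`; the NF4 quantization type, bfloat16 compute dtype, and `device_map="auto"` placement are assumptions on our part, not settings documented in this card:

```python
# Optional: 4-bit quantized loading (assumed configuration, not from this card)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize linear-layer weights to 4-bit
    bnb_4bit_quant_type="nf4",              # assumed: NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed: bf16 compute dtype
)

tokenizer = AutoTokenizer.from_pretrained("umarigan/llama-3-openhermes-tr")
model = AutoModelForCausalLM.from_pretrained(
    "umarigan/llama-3-openhermes-tr",
    quantization_config=bnb_config,
    device_map="auto",  # assumed: place layers on available GPUs automatically
)
```

With either loading path, you can decode only the newly generated tokens by slicing off the prompt, e.g. `tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:])`.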