umarigan committed on
Commit
44a1a7e
1 Parent(s): ff6c725

Update README.md

Files changed (1)
  1. README.md +32 -0
README.md CHANGED
@@ -1,6 +1,7 @@
---
language:
- en
+ - tr
license: apache-2.0
tags:
- text-generation-inference
@@ -21,3 +22,34 @@ base_model: unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ ## Usage Examples
+
+ ```python
+ # Load model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("umarigan/llama-3-openhermes-tr")
+ model = AutoModelForCausalLM.from_pretrained("umarigan/llama-3-openhermes-tr")
+ # Alpaca-style prompt template (Turkish): Görev = Task, Girdi = Input, Cevap = Answer
+ alpaca_prompt = """
+ Görev:
+ {}
+
+ Girdi:
+ {}
+
+ Cevap:
+ {}"""
+
+ inputs = tokenizer(
+     [
+         alpaca_prompt.format(
+             "fibonacci dizisinin devamını getir.",  # instruction: "continue the Fibonacci sequence."
+             "1, 1, 2, 3, 5, 8",  # input
+             "",  # output - leave this blank for generation!
+         )
+     ], return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
+ print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
+ ```
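
For GPU inference, a minimal sketch building on the usage example above, assuming the repository ID `umarigan/llama-3-openhermes-tr`, a CUDA device, and the `bitsandbytes` and `accelerate` packages: the model is loaded in 4-bit and only the newly generated tokens are decoded.

```python
# Minimal 4-bit GPU inference sketch (assumes CUDA, bitsandbytes, and accelerate are installed;
# the repository ID below is an assumption, not confirmed by this commit).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "umarigan/llama-3-openhermes-tr"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# Same Alpaca-style prompt as the README example (Görev = Task, Girdi = Input, Cevap = Answer)
alpaca_prompt = """
Görev:
{}

Girdi:
{}

Cevap:
{}"""

prompt = alpaca_prompt.format("fibonacci dizisinin devamını getir.", "1, 1, 2, 3, 5, 8", "")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
# Slice off the prompt tokens so only the model's continuation is decoded
new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```

Slicing at `input_ids.shape[-1]` prints only the generated answer rather than the prompt plus answer, which is usually more convenient when the model is wrapped in an application.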