---
language:
- en
pipeline_tag: text-generation
tags:
- chat
- llama
- facebook
- llama3
- finetune
- chatml
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
model_name: calme-2.2-llama3.1-70b
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
---

<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="display:block; margin-left:auto; margin-right:auto"/>

# MaziyarPanahi/calme-2.2-llama3.1-70b

This model is a fine-tuned version of `meta-llama/Meta-Llama-3.1-70B-Instruct`. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.

## Use Cases

This model is suitable for a wide range of applications, including but not limited to:

- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support

+ # ⚡ Quantized GGUF
40
+
41
+ All GGUF models are available here: [MaziyarPanahi/calme-2.2-llama3.1-70b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.2-llama3.1-70b-GGUF)
42
+
43
+ # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
44
+
45
+ coming soon!
46
+
47
+
48
+ This model uses `ChatML` prompt template:
49
+
50
+ ```
51
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
52
+
53
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
54
+
55
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
56
+
57
+ ```
58
+
59
+ # How to use
60
+
61
+
62
+ ```python
63
+
64
+ # Use a pipeline as a high-level helper
65
+
66
+ from transformers import pipeline
67
+
68
+ messages = [
69
+ {"role": "user", "content": "Who are you?"},
70
+ ]
71
+ pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-llama3.1-70b")
72
+ pipe(messages)
73
+
74
+
75
+ # Load model directly
76
+
77
+ from transformers import AutoTokenizer, AutoModelForCausalLM
78
+
79
+ tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b")
80
+ model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b")
81
+ ```

# Ethical Considerations

As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.