VinayHajare committed • Commit a5578de • Parent(s): e34c95a
Added usage example
README.md CHANGED
@@ -18,7 +18,71 @@ library_name: transformers

- This is a GGUF quantized (6-bit) version of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct), created using llama.cpp.
- Created using the latest release of llama.cpp, dated 5.5.2024.
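
The usage examples below rely on the llama-cpp-python bindings; `Llama.from_pretrained` fetches the GGUF file from this repo via `huggingface_hub`. If you prefer to download the file yourself and load it from a local path, a minimal sketch looks like this (the GGUF filename shown is an assumption, so check the repository's file list for the actual 6-bit file name):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from this repo into the local Hugging Face cache.
# NOTE: the filename below is hypothetical; use the actual 6-bit (Q6_K) file
# listed in this repository.
model_path = hf_hub_download(
    repo_id="VinayHajare/Meta-Llama3-70B-Instruct-v2-GGUF",
    filename="Meta-Llama3-70B-Instruct-v2.Q6_K.gguf",  # hypothetical name
)

# Load the local file directly instead of using Llama.from_pretrained.
llm = Llama(model_path=model_path, chat_format="llama-3", verbose=True)
```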

## Usage Details

**Simple Use**
```python
from llama_cpp import Llama

# Load the quantized model directly from this repo on the Hugging Face Hub.
llm = Llama.from_pretrained(
    repo_id="VinayHajare/Meta-Llama3-70B-Instruct-v2-GGUF",
    filename="*.gguf",
    verbose=True,
    chat_format="llama-3"
)

# Run a simple chat completion.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant who perfectly answers the user's questions."},
        {"role": "user", "content": "Describe the Uncertainty Principle."}
    ]
)
```
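
`create_chat_completion` returns an OpenAI-style completion dict, so the assistant's reply can be read out of `choices`. A minimal sketch, assuming the response layout used by llama-cpp-python:

```python
# Print just the assistant's reply from the OpenAI-style response dict.
print(response["choices"][0]["message"]["content"])
```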

**Tools Calling**
```python
# Force the model to call the UserDetail function and return structured output.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "A chat between a curious user and an artificial intelligence assistant. "
                       "The assistant gives helpful, detailed, and polite answers to the user's questions. "
                       "The assistant calls functions with appropriate input when necessary."
        },
        {
            "role": "user",
            "content": "Extract the data for Jason, who is 25 years old."
        }
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "UserDetail",
            "parameters": {
                "type": "object",
                "title": "UserDetail",
                "properties": {
                    "name": {"title": "Name", "type": "string"},
                    "age": {"title": "Age", "type": "integer"}
                },
                "required": ["name", "age"]
            }
        }
    }],
    tool_choice={
        "type": "function",
        "function": {"name": "UserDetail"}
    }
)
```
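
The forced tool call comes back as a JSON string of arguments inside the returned message. A minimal sketch for decoding it from the `response` captured above, assuming the OpenAI-style `tool_calls` schema (older llama-cpp-python releases may expose a legacy `function_call` field instead):

```python
import json

# Decode the structured arguments produced by the forced UserDetail call.
# Assumes the message carries an OpenAI-style `tool_calls` list.
tool_call = response["choices"][0]["message"]["tool_calls"][0]
user_detail = json.loads(tool_call["function"]["arguments"])
print(user_detail["name"], user_detail["age"])
```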

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.