---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- sahil2801/CodeAlpaca-20k
- argilla/magpie-ultra-v0.1
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama-3.2-3B-Agent007-Coder-GGUF
This is a quantized version of [EpistemeAI/Llama-3.2-3B-Agent007-Coder](https://huggingface.co/EpistemeAI/Llama-3.2-3B-Agent007-Coder), created using llama.cpp.

# Original Model Card

# Llama Agent 3B coder
Fine-tuned on an agent dataset, as well as the Code Alpaca 20K and magpie-ultra-v0.1 datasets.

## Model Information

The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.

**Model Developer:** Meta

**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |

**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** Sept 25, 2024

**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).

**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, see [llama-recipes](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction-tuned, text-only models are intended for assistant-like chat and agentic applications such as knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural-language-generation tasks.

**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

## How to use

This repository contains two versions of Llama-3.2-3B-Instruct: one for use with `transformers` and one for the original `llama` codebase.

### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "EpistemeAI/Llama-3.2-3B-Agent007-Coder"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
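
When the pipeline receives chat-style `messages`, each returned `"generated_text"` holds the whole conversation: the input messages followed by the newly generated assistant turn. That is why the final line indexes `[-1]`. Here is a minimal illustration with a mocked return value (no model call is made; the reply text is invented):

```python
# Mocked shape of what pipe(messages, ...) returns for chat input:
# a list (one entry per prompt) of dicts whose "generated_text" holds the
# original messages followed by the model's new assistant message.
mock_outputs = [
    {
        "generated_text": [
            {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
            {"role": "user", "content": "Who are you?"},
            {"role": "assistant", "content": "Arrr, I be yer trusty chatbot, matey!"},
        ]
    }
]

# outputs[0]["generated_text"][-1] therefore selects the assistant's reply.
reply = mock_outputs[0]["generated_text"][-1]
print(reply["role"])  # assistant
```

`reply` is a `{"role": ..., "content": ...}` dict, so `reply["content"]` gives just the generated text.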

Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).

# Uploaded model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)