suhara committed
Commit 70d5b6c
1 Parent(s): aca79ee

Update README.md

Files changed (1): README.md (+105 -5)

---
license: other
license_name: nvidia-community-model-license
license_link: LICENSE
library_name: transformers
---

# Mistral-NeMo-Minitron-8B-Instruct

## Model Overview

Mistral-NeMo-Minitron-8B-Instruct is a model for generating responses across a range of text-generation tasks, including roleplaying, retrieval-augmented generation, and function calling. It is a fine-tuned version of [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base), which was pruned and distilled from [Mistral-NeMo 12B](https://huggingface.co/nvidia/Mistral-NeMo-12B-Base) using [our LLM compression technique](https://arxiv.org/abs/2407.14679). The model supports a context length of 8,192 tokens.

Try this model on [build.nvidia.com](https://build.nvidia.com/nvidia/mistral-nemo-minitron-8b-8k-instruct).

**Model Developer:** NVIDIA

**Model Dates:** Mistral-NeMo-Minitron-8B-Instruct was trained between August 2024 and September 2024.

## License

[NVIDIA Community Model License](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Instruct/blob/main/nvidia-community-model-license-aug2024.pdf)

## Model Architecture

Mistral-NeMo-Minitron-8B-Instruct uses an embedding size of 4096, 32 attention heads, an MLP intermediate dimension of 11520, and 40 layers in total. Additionally, it uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).

**Architecture Type:** Transformer Decoder (Auto-regressive Language Model)

**Network Architecture:** Mistral-NeMo
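
These hyperparameters can be read back from the published configuration. The sketch below is illustrative and assumes the standard Mistral-style config attribute names used by `transformers` (`hidden_size`, `num_attention_heads`, `intermediate_size`, `num_hidden_layers`, `num_key_value_heads`):

```python
from transformers import AutoConfig

# A minimal sketch: inspect the architecture hyperparameters from the config.
# Attribute names assume the standard Mistral-style configuration.
config = AutoConfig.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")
print(config.hidden_size)          # embedding size, expected 4096
print(config.num_attention_heads)  # expected 32
print(config.intermediate_size)    # MLP intermediate dimension, expected 11520
print(config.num_hidden_layers)    # expected 40
print(config.num_key_value_heads)  # fewer KV heads than attention heads -> GQA
```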

## Prompt Format

We recommend using the following prompt template, which was used to fine-tune the model. The model may not perform optimally without it.

```
<extra_id_0>System
{system prompt}

<extra_id_1>User
{prompt}
<extra_id_1>Assistant\n
```

Please note that a newline character `\n` should be added at the end of the prompt.
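
For illustration, the template can be assembled by hand and compared against the tokenizer's built-in chat template. This is a minimal sketch; the system and user strings are placeholder assumptions, and the rendered template may additionally include special tokens (such as BOS), so compare the two outputs locally:

```python
from transformers import AutoTokenizer

# Placeholder strings, for illustration only.
system_prompt = "You are a helpful assistant."
user_prompt = "Write one sentence about GPUs."

# Assemble the template by hand; note the trailing newline after "Assistant".
prompt = (
    "<extra_id_0>System\n"
    f"{system_prompt}\n\n"
    "<extra_id_1>User\n"
    f"{user_prompt}\n"
    "<extra_id_1>Assistant\n"
)

# Cross-check against the tokenizer's chat template (returns a string when
# tokenize=False). The two may differ by special tokens; compare by eye.
tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")
rendered = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
print(repr(prompt))
print(repr(rendered))
```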

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

# Use the prompt template
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```
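
If a GPU is available, loading the weights in bfloat16 typically keeps the 8B model within a single modern accelerator's memory. The snippet below is a minimal sketch under that assumption; `device_map="auto"` additionally requires the `accelerate` package:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# A minimal sketch: load in bfloat16 and let the weights be placed
# automatically. Assumes a CUDA-capable GPU and the `accelerate` package
# (needed for device_map="auto"); adjust to your hardware.
tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "nvidia/Mistral-NeMo-Minitron-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain Grouped-Query Attention in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```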

You can also use `pipeline`, but you need to create a tokenizer object and assign it to the pipeline manually.

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="nvidia/Mistral-NeMo-Minitron-8B-Instruct")
pipe.tokenizer = tokenizer  # You need to assign the tokenizer manually
pipe(messages)
```
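
Generation arguments such as `max_new_tokens` and `do_sample` can also be passed directly in the `pipe(...)` call; the pipeline forwards them to `generate`.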

## AI Safety Efforts

The Mistral-NeMo-Minitron-8B-Instruct model underwent AI safety evaluation, including adversarial testing via three distinct methods:
- [Garak](https://github.com/leondz/garak), an automated LLM vulnerability scanner that probes for common weaknesses, including prompt injection and data leakage.
- [AEGIS](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-1.0), a content safety evaluation dataset and LLM-based content safety classifier model that adheres to a broad taxonomy of 13 categories of critical risks in human-LLM interactions.
- Human Content Red Teaming, leveraging human interaction and evaluation of the model's responses.

## Limitations

The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even when the prompt itself contains nothing explicitly offensive. This issue could be exacerbated without the use of the recommended prompt template. If you are going to use this model in an agentic workflow, validate that the imported packages are from a trusted source to ensure end-to-end security.

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the [Model Card++](https://build.nvidia.com/nvidia/mistral-nemo-minitron-8b-8k-instruct/modelcard). Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).