Text Generation
Transformers
PyTorch
English
gpt_neox
text-generation-inference
Inference Endpoints
juewang committed on
Commit 47b94a7
1 Parent(s): f46ebbc

Update README.md

Files changed (1)
  1. README.md +15 -15
README.md CHANGED
@@ -19,15 +19,15 @@ inference:
   max_new_tokens: 128
 ---
 
-# RedPajama-INCITE-Chat-7B-v0.1
+# RedPajama-INCITE-7B-Chat
 
-RedPajama-INCITE-Chat-7B-v0.1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
+RedPajama-INCITE-7B-Chat was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
 
 It is fine-tuned on OASST1 and Dolly2 to enhance chatting ability.
 
-- Base Model: [RedPajama-INCITE-Base-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1)
-- Instruction-tuned Version: [RedPajama-INCITE-Instruct-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1)
-- Chat Version: [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1)
+- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
+- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
+- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)
 
 
 ## Model Details
@@ -62,8 +62,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
 
 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.float16)
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.float16)
 model = model.to('cuda:0')
 # infer
 prompt = "<human>: Who is Alan Turing?\n<bot>:"
@@ -104,8 +104,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
 
 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)
 
 # infer
 prompt = "<human>: Who is Alan Turing?\n<bot>:"
@@ -135,8 +135,8 @@ MIN_TRANSFORMERS_VERSION = '4.25.1'
 assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
 
 # init
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1", torch_dtype=torch.bfloat16)
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.bfloat16)
 # infer
 prompt = "<human>: Who is Alan Turing?\n<bot>:"
 inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
@@ -167,13 +167,13 @@ It is the responsibility of the end user to ensure that the model is used in a r
 
 #### Out-of-Scope Use
 
-`RedPajama-INCITE-Chat-7B-v0.1` is a language model and may not perform well for other use cases outside of its intended scope.
+`RedPajama-INCITE-7B-Chat` is a language model and may not perform well for other use cases outside of its intended scope.
 For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
 It is important to consider the limitations of the model and to only use it for its intended purpose.
 
 #### Misuse and Malicious Use
 
-`RedPajama-INCITE-Chat-7B-v0.1` is designed for language modeling.
+`RedPajama-INCITE-7B-Chat` is designed for language modeling.
 Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.
 
 Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
@@ -190,7 +190,7 @@ Using the model to generate content that is cruel to individuals is a misuse of
 
 ## Limitations
 
-`RedPajama-INCITE-Chat-7B-v0.1`, like other language models, has limitations that should be taken into consideration.
+`RedPajama-INCITE-7B-Chat`, like other language models, has limitations that should be taken into consideration.
 For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
 We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
 
@@ -205,7 +205,7 @@ Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/data
 - **Hardware:** 8 A100
 - **Optimizer:** Adam
 - **Gradient Accumulations**: 1
-- **Num of Tokens:** 131M tokens
+- **Num of Tokens:** 79M tokens
 - **Learning rate:** 1e-5
 
 ## Community
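
The unchanged context lines above gate on transformers >= 4.25.1 by comparing version strings lexicographically, which can pass for versions that are actually too old (as strings, `'4.9.0' >= '4.25.1'` is true). A more robust gate would parse the versions, as in this sketch; it assumes the `packaging` package is available, which typically ships alongside transformers:

```python
# Version gate using packaging.version instead of raw string comparison.
import transformers
from packaging import version

MIN_TRANSFORMERS_VERSION = '4.25.1'
assert version.parse(transformers.__version__) >= version.parse(MIN_TRANSFORMERS_VERSION), \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'
```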
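For context beyond the changed load lines, a minimal end-to-end GPU (fp16) run against the renamed checkpoint might look like the sketch below; the sampling parameters are illustrative assumptions, not necessarily the card's exact values.

```python
# Minimal fp16 GPU inference sketch for the renamed checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/RedPajama-INCITE-7B-Chat", torch_dtype=torch.float16
)
model = model.to('cuda:0')

# The chat tuning expects the <human>/<bot> turn format.
prompt = "<human>: Who is Alan Turing?\n<bot>:"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,                   # matches the card's inference widget setting
    do_sample=True,
    temperature=0.7,                      # assumption
    top_p=0.7,                            # assumption
    pad_token_id=tokenizer.eos_token_id,  # GPT-NeoX has no pad token by default
)
# Decode only the newly generated tokens, not the echoed prompt.
new_tokens = outputs[0][inputs['input_ids'].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```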
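The 8-bit variant in the second snippet (`device_map='auto'`, `load_in_8bit=True`) additionally assumes the `accelerate` and `bitsandbytes` packages are installed. Newer transformers releases route the same flag through `BitsAndBytesConfig`; a sketch of that variant, assuming such a release:

```python
# 8-bit loading sketch; requires the accelerate and bitsandbytes packages.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Chat")
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/RedPajama-INCITE-7B-Chat",
    device_map='auto',          # shard across available devices
    torch_dtype=torch.float16,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
```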
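The README's example prompts are single-turn. For multi-turn chat, turns are conventionally concatenated in the same `<human>:`/`<bot>:` format the model was tuned on; the helper below is hypothetical and not part of the README.

```python
# Hypothetical multi-turn prompt builder (not in the README).
def build_prompt(turns):
    """turns: list of (role, text) pairs, role in {'human', 'bot'}."""
    lines = [f"<{role}>: {text}" for role, text in turns]
    lines.append("<bot>:")  # trailing cue so the model answers as the bot
    return "\n".join(lines)

print(build_prompt([
    ("human", "Who is Alan Turing?"),
    ("bot", "Alan Turing was a British mathematician and computer scientist."),
    ("human", "What is he best known for?"),
]))
```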