---
license: apache-2.0
language:
- en
---

# RedPajama-Chat-INCITE-2.8B

RedPajama-Chat-INCITE-2.8B-v1 is a large transformer-based language model developed by Together Computer and trained on the RedPajama-Data-1T dataset.
It is further fine-tuned on GPT-JT's datasets to enhance zero/few-shot in-context learning.

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B parameter pretrained language model.

# Quick Start

## GPU Inference

This requires a GPU with 8GB memory.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# load the tokenizer and the model in fp16
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Chat-INCITE-2.8B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Chat-INCITE-2.8B-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# run inference
inputs = tokenizer("Hello", return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)
output_str = tokenizer.decode(outputs[0])
print(output_str)
```
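
Note that `generate` returns the prompt tokens followed by the continuation. If you only want the newly generated text, you can slice off the prompt before decoding; a minimal sketch, continuing from the variables in the snippet above:

```python
# decode only the tokens generated after the prompt
prompt_len = inputs["input_ids"].shape[1]
new_tokens = outputs[0][prompt_len:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```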

## GPU Inference in Int8

This requires a GPU with 6GB memory. 8-bit loading also requires the `bitsandbytes` and `accelerate` packages to be installed.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# load the tokenizer and the model in 8-bit
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Chat-INCITE-2.8B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Chat-INCITE-2.8B-v1", device_map="auto", load_in_8bit=True)

# run inference
inputs = tokenizer("Hello", return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)
output_str = tokenizer.decode(outputs[0])
print(output_str)
```

## CPU Inference

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# load the tokenizer and the model in bf16 on CPU
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-Chat-INCITE-2.8B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-Chat-INCITE-2.8B-v1", torch_dtype=torch.bfloat16)

# run inference with the model's chat prompt format
inputs = tokenizer("<human>: Hello!\n<bot>:", return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)
output_str = tokenizer.decode(outputs[0])
print(output_str)
```
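
The `<human>:`/`<bot>:` markers in the prompt above follow the conversational format used in the snippet. For multi-turn conversations, one way to assemble a prompt is sketched below; the `build_prompt` helper is illustrative only, not part of the model's API:

```python
# hypothetical helper: join (speaker, text) turns into the <human>/<bot> prompt format
def build_prompt(turns):
    lines = [f"<{speaker}>: {text}" for speaker, text in turns]
    lines.append("<bot>:")  # leave the bot's next turn open for generation
    return "\n".join(lines)

prompt = build_prompt([
    ("human", "Hello!"),
    ("bot", "Hi! How can I help?"),
    ("human", "Tell me a joke."),
])
```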


# Uses

## Direct Use

The model is intended for research purposes. Possible research areas and tasks include

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of dialogue models or language models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on dialogue models or language models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use

It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.

#### Out-of-Scope Use

RedPajama-Chat-INCITE-2.8B is a language model and may not perform well for use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.

#### Misuse and Malicious Use

RedPajama-Chat-INCITE-2.8B is designed for language modeling.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the OpenChatKit community project.

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Creating defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming

## Limitations

RedPajama-Chat-INCITE-2.8B, like other language models, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.

## Training

**Training Data**

Please refer to [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).

**Training Procedure**

- **Hardware:** 8 A100 GPUs
- **Optimizer:** Adam
- **Gradient accumulation steps:** 1
- **Number of tokens:** 1B
- **Learning rate:** 1e-5
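
For illustration, a minimal sketch of how the listed hyperparameters might map onto a Hugging Face `TrainingArguments` configuration; this is not the actual training pipeline used for this model, and anything not in the list above (batch size, schedule, precision) is left at defaults:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="redpajama-chat-finetune",  # illustrative path
    learning_rate=1e-5,                    # from the card
    gradient_accumulation_steps=1,         # from the card
    optim="adamw_torch",                   # the card lists "Adam"
)
```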

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).