DarwinAnim8or committed "Update README.md" (commit 4f70222, parent 74695ac)

README.md CHANGED

# GPT-NoSleep-355m
A finetuned version of [GPT2-Medium](https://huggingface.co/gpt2-medium) on the 'reddit-nosleep-posts' dataset. (Linked above)

**TIP:** You can find a larger, more capable version of the model here: [GPT-NoSleep-1.5b](https://huggingface.co/DarwinAnim8or/GPT-NoSleep-1.5b)

# Training Procedure
This was trained on the 'reddit-nosleep-posts' dataset, using the "HappyTransformers" library on Google Colab.
This model was trained for X epochs with learning rate 1e-2.
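
The exact training script is not part of this card; the sketch below is only an assumption of what the HappyTransformer fine-tuning setup could look like. The file name `train.txt`, the save path, and the epoch count are placeholders, and the learning rate simply mirrors the 1e-2 value quoted above.

```python
# Hypothetical sketch of the fine-tuning setup described above (not the exact script).
from happytransformer import HappyGeneration, GENTrainArgs

# Start from the base GPT2-Medium checkpoint
happy_gen = HappyGeneration("GPT2", "gpt2-medium")

# Plain-text file containing the NoSleep posts; "train.txt" is a placeholder name
train_args = GENTrainArgs(
    num_train_epochs=1,   # placeholder: the card only says "X epochs"
    learning_rate=1e-2,   # value quoted in the card
)
happy_gen.train("train.txt", args=train_args)

# Save the fine-tuned checkpoint for upload
happy_gen.save("GPT-NoSleep-355m/")
```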

This likely contains the same biases and limitations as the original GPT2 that it is based on.
It likely will generate offensive output.

# Intended Use
This model is meant for fun, nothing else.

# Sample code
```python
# Import the model:
from happytransformer import HappyGeneration
happy_gen = HappyGeneration("GPT2", "DarwinAnim8or/GPT-NoSleep-355m")

# Set generation settings:
from happytransformer import GENSettings
args_top_k = GENSettings(no_repeat_ngram_size=3, do_sample=True, top_k=80, temperature=0.8, max_length=150, early_stopping=False)

# Generate a response:
result = happy_gen.generate_text("[WP] We don't go to the forest at night [RESPONSE] ", args=args_top_k)

# Print the full result object and just the generated text:
print(result)
print(result.text)
```
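
A setup note not in the original card: the `happytransformer` package is published on PyPI, so `pip install happytransformer` should be enough to run the sample above. `generate_text` returns a result object whose `.text` field holds only the generated continuation, which is why the example prints both `result` and `result.text`. The `[WP] ... [RESPONSE]` prompt format presumably mirrors how the fine-tuning data was formatted.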