Migrate model card from transformers-repo
Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/google/reformer-crime-and-punishment/README.md
README.md
ADDED
@@ -0,0 +1,20 @@
## Reformer Model trained on "Crime and Punishment"

Crime and Punishment is a novel written by Fyodor Dostoevsky; the training text is an English translation.

The Crime and Punishment training data was taken from `gs://trax-ml/reformer/crime-and-punishment-2554.txt` and contains roughly 0.5M tokens.

The ReformerLM model was trained in Flax using the Colab notebook provided by the authors (https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb), and the weights were converted to Hugging Face's PyTorch Reformer model `ReformerModelWithLMHead`.

The model is a language model that operates on small sub-word units. Text can be generated as follows:

```python
from transformers import ReformerModelWithLMHead, ReformerTokenizer

model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")
tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")

# Sample a continuation of the prompt and decode it back to text
tok.decode(model.generate(tok.encode("A few months later", return_tensors="pt"), do_sample=True, temperature=0.7, max_length=100)[0])

# gives: 'A few months later on was more than anything in the flat.
# “I have already.” “That’s not my notion that he had forgotten him.
# What does that matter? And why do you mean? It’s only another fellow,” he said as he went out, as though he want'
```
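
Since the model operates on small sub-word units, it can be helpful to look at the tokenizer's segmentation directly. The snippet below is only an illustrative check (the exact pieces and ids depend on the repository's vocabulary, which the card does not spell out):

```python
from transformers import ReformerTokenizer

tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")

# Inspect how a prompt is split into sub-word pieces and mapped to ids
print(tok.tokenize("A few months later"))
print(tok.encode("A few months later"))
```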