DanielHesslow committed on
Commit
10da292
1 Parent(s): 257e676

Upload README.md

Files changed (1): README.md +34 -0
---
language: protein
tags:
- protein
datasets:
- uniref-100
---

# RITA-S

RITA is a family of autoregressive protein models, developed in collaboration between LightOn, Harvard and Oxford.

Model | #Params | d_model | layers | lm loss uniref-100
--- | --- | --- | --- | ---
[**Small**](https://huggingface.co/lightonai/RITA_s) | 85M | 768 | 12 | 2.31
[Medium](https://huggingface.co/lightonai/RITA_m) | 300M | 1024 | 24 | 2.01
[Large](https://huggingface.co/lightonai/RITA_l) | 680M | 1536 | 24 | 1.82
[XLarge](https://huggingface.co/lightonai/RITA_xl) | 1.2B | 2048 | 24 | 1.70
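For intuition, the LM loss values above can be converted to per-token perplexity via `exp(loss)` (assuming, as is standard for transformers cross-entropy, that the loss is reported in nats per token). A quick sketch, with the values copied from the table:

```python
import math

# LM loss on uniref-100 (nats/token), copied from the table above
lm_loss = {"Small": 2.31, "Medium": 2.01, "Large": 1.82, "XLarge": 1.70}

# Per-token perplexity is exp(loss) when the loss is measured in nats
perplexity = {name: math.exp(loss) for name, loss in lm_loss.items()}

for name, ppl in perplexity.items():
    print(f"{name}: perplexity {ppl:.2f}")
```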
# Usage

Instantiate a model like so:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("lightonai/RITA_s", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lightonai/RITA_s")
```
For generation we support pipelines:

```python
from transformers import pipeline

rita_gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
sequences = rita_gen("MAB", max_length=20, do_sample=True, top_k=950, repetition_penalty=1.2, num_return_sequences=2, eos_token_id=2)
for seq in sequences:
    print(f"seq: {seq['generated_text'].replace(' ', '')}")
```
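Since sampling can emit ambiguity codes alongside the canonical residues, a small post-processing step can strip tokenizer spaces and keep only canonical sequences. A sketch (the `clean_generations` helper and the example outputs are illustrative, not part of the RITA or transformers API):

```python
# The 20 canonical amino acids; sampled sequences may also contain
# ambiguity codes such as B, Z, or X
CANONICAL = set("ACDEFGHIKLMNPQRSTVWY")

def clean_generations(outputs):
    """Strip tokenizer spaces and keep only canonical-residue sequences.

    `outputs` mirrors the pipeline's return format: a list of dicts
    with a 'generated_text' key (illustrative helper, not RITA API).
    """
    cleaned = []
    for out in outputs:
        seq = out["generated_text"].replace(" ", "")
        if seq and set(seq) <= CANONICAL:
            cleaned.append(seq)
    return cleaned

# Hypothetical pipeline-style outputs
fake_outputs = [
    {"generated_text": "M A V L K"},  # canonical -> kept
    {"generated_text": "M A B L K"},  # contains ambiguity code B -> dropped
]
print(clean_generations(fake_outputs))  # ['MAVLK']
```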