---
tags:
  - DNA
  - biology
  - genomics
---
# nucleotide-transformer-500m-1000g model

The Nucleotide Transformers are a collection of foundational language models that were pre-trained on DNA sequences from whole genomes. Compared to other approaches, our models not only integrate information from single reference genomes, but also leverage DNA sequences from over 3,200 diverse human genomes, as well as 850 genomes from a wide range of species, including model and non-model organisms. Through robust and extensive evaluation, we show that these large models provide extremely accurate molecular phenotype predictions compared to existing methods.

Part of this collection is the **nucleotide-transformer-500m-1000g**, a 500M-parameter transformer pre-trained on a collection of 3,202 genetically diverse human genomes.

**Developed by:** InstaDeep, NVIDIA and TUM

### Model Sources

- **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer)
- **Paper:** [The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics](https://www.biorxiv.org/content/10.1101/2023.01.11.523679v1)

### How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-1000g")
model = AutoModelForMaskedLM.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-1000g")

# Create a dummy DNA sequence and tokenize it
sequences = ['ATTCTG' * 9]
tokens_ids = tokenizer.batch_encode_plus(sequences, return_tensors="pt")["input_ids"]

# Compute the embeddings
attention_mask = tokens_ids != tokenizer.pad_token_id
torch_outs = model(
    tokens_ids,
    attention_mask=attention_mask,
    encoder_attention_mask=attention_mask,
    output_hidden_states=True
)

# Extract the per-token embeddings from the last hidden state
embeddings = torch_outs['hidden_states'][-1].detach().numpy()
print(f"Embeddings shape: {embeddings.shape}")
print(f"Embeddings per token: {embeddings}")

# Compute mean embeddings per sequence, masking out padding tokens
attention_mask = torch.unsqueeze(attention_mask, dim=-1)
mean_sequence_embeddings = torch.sum(attention_mask * embeddings, axis=-2) / torch.sum(attention_mask, axis=1)
print(f"Mean sequence embeddings: {mean_sequence_embeddings}")
```
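
Since the model is loaded with a masked-language-modelling head, the same objects can also be used to predict tokens at masked positions. The snippet below is a minimal, illustrative sketch that reuses `tokenizer`, `model` and `tokens_ids` from the code above; it assumes the tokenizer exposes a standard `mask_token_id`.

```python
# Illustrative only: predict the token at a masked position.
# Assumes the tokenizer defines a mask token (mask_token_id); adapt if it does not.
masked_ids = tokens_ids.clone()
masked_ids[0, 3] = tokenizer.mask_token_id  # mask the 3rd sequence token (position 0 is <CLS>)
masked_attention_mask = masked_ids != tokenizer.pad_token_id

with torch.no_grad():
    logits = model(
        masked_ids,
        attention_mask=masked_attention_mask,
        encoder_attention_mask=masked_attention_mask,
    ).logits

# Most likely token at the masked position
predicted_id = int(logits[0, 3].argmax(dim=-1))
print(tokenizer.convert_ids_to_tokens([predicted_id]))
```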

## Training data

The **nucleotide-transformer-500m-1000g** model was pretrained on 3,202 genetically diverse human genomes originating from 27 geographically structured populations of African, American, East Asian, and European ancestry, taken from the [1000G project](http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/1000G_2504_high_coverage/working/20201028_3202_phased). Such diversity allowed the dataset to encode a better representation of human genetic variation. To allow haplotype reconstruction in the sequences fed to the model, we considered the phased version of the 1000 Genomes project, which corresponds to a total of 125M mutations, 111M and 14M of which are single nucleotide polymorphisms (SNPs) and indels, respectively. The dataset contains a total of 19,212B nucleotides, resulting in roughly 3,202B tokens.

## Training procedure

### Preprocessing

The DNA sequences are tokenized using the Nucleotide Transformer tokenizer, which tokenizes sequences as 6-mers when possible, otherwise tokenizing each nucleotide separately, as described in the [Tokenization](https://github.com/instadeepai/nucleotide-transformer#tokenization-abc) section of the associated repository. This tokenizer has a vocabulary size of 4,105. The inputs of the model are of the form:

```
<CLS> <ACGTGT> <ACGTGC> <ACGGAC> <GACTAG> <TCAGCA>
```

The tokenized sequences have a maximum length of 1,000 tokens.
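
To see this tokenization in action, the tokenizer loaded in the usage snippet above can be inspected directly (an illustrative check; the example sequence is arbitrary):

```python
# Illustrative check of the 6-mer tokenization, reusing `tokenizer` from the usage snippet above
seq = "ATGCGTACGTTAGCCA"  # 16 nucleotides, deliberately not a multiple of 6
ids = tokenizer.encode(seq)
# Print the tokens to see how the sequence is split into 6-mers plus single-nucleotide tokens
print(tokenizer.convert_ids_to_tokens(ids))
```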

The masking procedure used is the standard one for BERT-style training (sketched below):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token different from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
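
For intuition, the corruption step can be written roughly as follows. This is a minimal sketch of standard BERT-style masking, not the exact pretraining code; `mask_token_id` and `vocab_size` are placeholders to be taken from the tokenizer.

```python
import torch

def bert_style_mask(tokens_ids: torch.Tensor, mask_token_id: int, vocab_size: int,
                    mask_prob: float = 0.15):
    """Minimal sketch of BERT-style masking: select 15% of tokens, then replace
    80% of them with [MASK], 10% with a random token, and leave 10% unchanged."""
    labels = tokens_ids.clone()
    selected = torch.rand(tokens_ids.shape) < mask_prob       # choose 15% of positions
    labels[~selected] = -100                                   # ignore unselected positions in the loss

    corrupted = tokens_ids.clone()
    to_mask = selected & (torch.rand(tokens_ids.shape) < 0.8)  # 80% of selected -> [MASK]
    # Half of the remaining 20% of selected positions -> random token (10% of selected overall)
    to_random = selected & ~to_mask & (torch.rand(tokens_ids.shape) < 0.5)
    corrupted[to_mask] = mask_token_id
    corrupted[to_random] = torch.randint(0, vocab_size, (int(to_random.sum()),))
    return corrupted, labels
```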

### Pretraining

The model was trained on 8 A100 80GB GPUs on 300B tokens, with an effective batch size of 1M tokens. The sequence length used was 1,000 tokens. The Adam optimizer was used with a learning rate schedule and standard values for the exponential decay rates and epsilon constant: β1 = 0.9, β2 = 0.999 and ε = 1e-8. During a first warmup period, the learning rate was increased linearly from 5e-5 to 1e-4 over 16k steps, before decreasing following a square-root decay until the end of training.
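
Read literally, the schedule looks roughly like the following illustrative sketch; the exact square-root decay constant is not stated here, so anchoring the decay at the end of warmup is an assumption.

```python
def learning_rate(step: int,
                  warmup_steps: int = 16_000,
                  lr_start: float = 5e-5,
                  lr_max: float = 1e-4) -> float:
    """Illustrative schedule: linear warmup from lr_start to lr_max over warmup_steps,
    then square-root decay (decay anchored at the end of warmup is an assumption)."""
    if step < warmup_steps:
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    # Square-root decay, continuous with the warmup at step == warmup_steps
    return lr_max * (warmup_steps / step) ** 0.5
```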

### BibTeX entry and citation info

```bibtex
@article{dalla2023nucleotide,
  title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics},
  author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others},
  journal={bioRxiv},
  pages={2023--01},
  year={2023},
  publisher={Cold Spring Harbor Laboratory}
}
```