---
license: apache-2.0
tags:
- pretrained
- mistral
- DNA
- biology
- genomics
---

# Model Card for Mistral-DNA-v1-138M-bacteria (Mistral for DNA)

The Mistral-DNA-v1-138M-bacteria Large Language Model (LLM) is a pretrained generative DNA language model with 17.31M parameters x 8 experts = 138.5M parameters. It is derived from the Mistral-7B-v0.1 model, simplified for DNA: the number of layers and the hidden size were reduced. The model was pretrained on around 700 bacterial genomes, split into 10kb DNA sequences.

For full details of this model, please read our [github repo](https://github.com/raphaelmourad/Mistral-DNA).

## Model Architecture

Like Mistral-7B-v0.1, it is a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

## Load the model from Hugging Face

```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-DNA-v1-138M-bacteria", trust_remote_code=True)  # same tokenizer setup as DNABERT2
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-DNA-v1-138M-bacteria", trust_remote_code=True)
```

## Calculate the embedding of a DNA sequence

```
dna = "TGATGATTGGCGCGGCTAGGATCGGCT"
inputs = tokenizer(dna, return_tensors="pt")["input_ids"]
hidden_states = model(inputs)[0]  # [1, sequence_length, 256]

# Sequence-level embedding via max pooling over the token dimension
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expected: torch.Size([256])
```

A mean-pooling alternative is sketched at the end of this card.

## Troubleshooting

Ensure you are using a stable version of Transformers, 4.34.0 or newer.

## Notice

Mistral-DNA-v1-138M-bacteria is a pretrained base model for DNA.

## Contact

Raphaël Mourad. raphael.mourad@univ-tlse3.fr
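
## Alternative: mean-pooled embedding

Max pooling is one way to summarize token embeddings into a single vector; mean pooling over the tokens is a common alternative. The snippet below is an illustrative sketch, assuming the `tokenizer` and `model` loaded above and the 256-dimensional hidden states shown in the embedding example.

```
import torch

dna = "TGATGATTGGCGCGGCTAGGATCGGCT"
encoded = tokenizer(dna, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(encoded["input_ids"])[0]  # [1, sequence_length, 256]

# Average the token embeddings, counting only real (non-padding) tokens.
mask = encoded["attention_mask"].unsqueeze(-1).float()  # [1, sequence_length, 1]
embedding_mean = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding_mean.shape)  # torch.Size([1, 256])
```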