
Model Card for Mistral-Codon-v1-204M (Mistral for coding DNA)

The Mistral-Codon-v1-204M Large Language Model (LLM) is a pretrained generative DNA sequence model with 204M parameters. It is derived from the Mixtral-8x7B-v0.1 model, simplified for DNA: the number of layers and the hidden size were reduced. The model was pretrained on 204M coding DNA sequences (300 bp) from many different species (vertebrates, plants, bacteria, viruses, etc.). Compared to the v1 models, the v2 models have a very large number of experts (128), making them faster to run.

Model Architecture

Like Mixtral-8x7B-v0.1, it is a transformer model with the following architecture choices (see the configuration sketch after the list):

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
  • Mixture of Experts
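
These choices can be checked from the model configuration. A minimal sketch, assuming the configuration exposes the standard Mixtral attribute names (the custom remote code may name them differently):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("RaphaelMourad/Mistral-Codon-v1-204M", trust_remote_code=True)
print(config.num_hidden_layers)    # reduced number of layers
print(config.hidden_size)          # reduced hidden size
print(config.num_key_value_heads)  # grouped-query attention
print(config.sliding_window)       # sliding-window attention size
print(config.num_local_experts)    # number of experts in the mixture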

Load the model from Hugging Face:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Codon-v1-204M", trust_remote_code=True) 
model = AutoModel.from_pretrained("RaphaelMourad/Mistral-Codon-v1-204M", trust_remote_code=True)
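
For inference, it is common to switch the model to evaluation mode and, if available, move it to a GPU (a minimal sketch, not required by the card):

model.eval()  # disable dropout for inference
# Optionally move the model (and later the inputs) to a GPU:
# device = "cuda" if torch.cuda.is_available() else "cpu"
# model = model.to(device)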

Calculate the embedding of a coding sequence

insulin = "TGA TGA TTG GCG CGG CTA GGA TCG GCT"
inputs = tokenizer(insulin, return_tensors="pt")["input_ids"]
hidden_states = model(inputs)[0]  # shape [1, sequence_length, 256]

# embedding with max pooling over the sequence dimension
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expected: torch.Size([256])
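
Mean pooling over the sequence dimension is a common alternative to max pooling (a sketch, not part of the original example):

# embedding with mean pooling
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape)  # expected: torch.Size([256])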

Troubleshooting

Ensure you are using a stable version of Transformers, 4.34.0 or newer.
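
If needed, upgrade with pip:

pip install -U "transformers>=4.34.0"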

Notice

Mistral-Codon-v1-204M is a pretrained base model for coding DNA.
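
As a base model it has no task-specific head and is meant to be fine-tuned on downstream tasks. A minimal sketch, assuming the checkpoint can be loaded with AutoModelForSequenceClassification (the task, labels, and classification head below are hypothetical, and the head is newly initialized):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("RaphaelMourad/Mistral-Codon-v1-204M", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "RaphaelMourad/Mistral-Codon-v1-204M", num_labels=2, trust_remote_code=True
)  # assumption: a sequence-classification head can be attached to this checkpoint

inputs = tokenizer("TGA TGA TTG GCG CGG CTA GGA TCG GCT", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical label for illustration
outputs = model(**inputs, labels=labels)
loss = outputs.loss  # backpropagate this loss in a training loop
loss.backward()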

Contact

Raphaël Mourad. [email protected]
