Fix loading in Hugging Face transformers

#2 by awni (Apple org) - opened

Currently the tokenizer does not load when you try to load the model with Hugging Face transformers. I realize the tokenizer files are not in the repo, but it would be good to package them in the model repos so that the model can be used with a single repo ID rather than pointing to a compatible tokenizer elsewhere.

from transformers import AutoTokenizer

# Currently fails: the repo does not include tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("apple/OpenELM-270M")

You can use another tokenizer, such as NousResearch/Llama-2-7b-hf; see the sketch below.
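
A minimal sketch of that workaround, assuming the Llama-2 tokenizer from NousResearch/Llama-2-7b-hf matches OpenELM's vocabulary and that the repo's custom modeling code is loaded with trust_remote_code=True:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the OpenELM weights from the Apple repo; the model uses custom code,
# so trust_remote_code=True is required.
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)

# Borrow a compatible tokenizer from a separate Llama-2 repo (assumption:
# its vocabulary matches what OpenELM was trained with).
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))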

Apple org

Right, one can always do that as a workaround, but for ease of use the tokenizer should be packaged with the HF repo; otherwise people will need to figure out which tokenizer to use and point to another repo.
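
For illustration, a hypothetical sketch of what packaging could look like, assuming write access to apple/OpenELM-270M and that the Llama-2 tokenizer is the intended one (neither is confirmed here):

from transformers import AutoTokenizer

# Hypothetical: upload the compatible tokenizer files to the OpenELM repo so that
# AutoTokenizer.from_pretrained("apple/OpenELM-270M") works with a single repo ID.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
tokenizer.push_to_hub("apple/OpenELM-270M")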
