I could not install xformers on my M3.
#10
by keerthi1306logesh - opened
Hi @keerthi1306logesh, you can use eager attention on M3 (Apple Silicon) machines; xformers is not required by default.
Load the model in this way:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('YOUR_MODEL')
model = AutoModel.from_pretrained('YOUR_MODEL', attn_implementation='eager', trust_remote_code=True)
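Once the model is loaded with eager attention, inference works the same as with any other attention backend. A minimal sketch of a forward pass, assuming 'YOUR_MODEL' is replaced with an actual checkpoint name and that the checkpoint exposes a standard encoder interface:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# 'YOUR_MODEL' is a placeholder; substitute the checkpoint you are using.
tokenizer = AutoTokenizer.from_pretrained('YOUR_MODEL')
model = AutoModel.from_pretrained(
    'YOUR_MODEL',
    attn_implementation='eager',  # avoids the xformers/SDPA paths on Apple Silicon
    trust_remote_code=True,
)
model.eval()

inputs = tokenizer('Hello world', return_tensors='pt')
with torch.no_grad():  # no gradients needed for plain inference
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
```

This requires downloading the checkpoint, so it is a sketch rather than a self-contained script; the key point is only the `attn_implementation='eager'` argument.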
keerthi1306logesh changed discussion status to closed