
How to get hands-on experience as a newbie

#38
by kimsia - opened

My first option is to run quantized versions.

Quantized

I read this: https://github.com/databricks/dbrx#mlx

and then went to https://huggingface.co/mlx-community/dbrx-instruct-4bit

where I read this:

On my Macbook Pro M2 with 96GB of Unified Memory, DBRX Instruct in 4-bit for the above prompt it eats 70.2GB of RAM.
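For reference, the MLX route from that README boils down to something like the sketch below, assuming `mlx-lm` is installed (`pip install mlx-lm`) on an Apple Silicon Mac with enough unified memory; the prompt is just a placeholder:

```python
# Sketch: load and run the 4-bit MLX build of DBRX Instruct.
# Assumes `pip install mlx-lm` on Apple Silicon with enough unified memory.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/dbrx-instruct-4bit")

prompt = "What is Apache Spark?"  # placeholder prompt
# verbose=True streams tokens and prints a tokens/sec summary at the end
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```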

I am on a MacBook Pro M1 Max with 64GB of memory.

I guess that's not enough?
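A back-of-the-envelope check says no: DBRX has 132B total parameters, so the 4-bit weights alone take about 66GB before the KV cache and runtime overhead, which lines up with the 70.2GB figure quoted above:

```python
# Rough memory estimate for the 4-bit weights (DBRX: 132B total parameters).
params = 132e9         # total parameter count (MoE; ~36B active per token)
bytes_per_param = 0.5  # 4-bit quantization = 0.5 bytes per parameter
print(f"~{params * bytes_per_param / 1e9:.0f} GB for weights alone")  # ~66 GB
```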

Computing

My next option is to figure out a cheap way to run the model, but the details confuse me.

Can anyone help?

Reply from the Databricks org:

If you are just looking to experiment, try it out in this space: https://huggingface.co/spaces/databricks/dbrx-instruct
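If you would rather script against that Space than use the browser UI, something like this `gradio_client` sketch may work; the endpoint name and argument order below are assumptions, so check the Space's "Use via API" panel for the real signature:

```python
# Sketch: query the hosted DBRX Instruct Space programmatically.
# Requires `pip install gradio_client`. The api_name and argument order
# are guesses -- verify them in the Space's "Use via API" panel.
from gradio_client import Client

client = Client("databricks/dbrx-instruct")
result = client.predict(
    "What is DBRX?",   # user message (assumed to be the first argument)
    api_name="/chat",  # hypothetical endpoint name
)
print(result)
```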
