
Quantized from => https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0

Quantization Details: Quantized using turboderp's ExLlamaV2 v0.2.3.

I use the default calibration dataset and arguments. The repo also includes a "measurement.json" file, which was generated and reused during the quantization process.

For models with bits per weight (BPW) over 6.0, I default to quantizing the lm_head layer at 8 bits instead of the standard 6 bits.
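
For reference, here is a minimal sketch of how a quant like this is produced with ExLlamaV2's convert.py, including the lm_head rule above. Flag spellings follow ExLlamaV2 v0.2.x as I understand them, and all paths are placeholders, so check `python convert.py -h` against your own setup before relying on it:

```python
# Sketch of an EXL2 quantization run (ExLlamaV2 v0.2.x convert.py).
# Paths are placeholders; verify flag names with `python convert.py -h`.
import subprocess

TARGET_BPW = 4.86                          # bits per weight for the main layers
HEAD_BITS = 8 if TARGET_BPW > 6.0 else 6   # lm_head gets 8 bits above 6.0 bpw

subprocess.run(
    [
        "python", "convert.py",
        "-i", "/models/EVA-Qwen2.5-72B-v0.0",                     # full-precision source
        "-o", "/tmp/exl2-work",                                   # working directory
        "-cf", f"/models/EVA-Qwen2.5-72B-v0.0-{TARGET_BPW}bpw-exl2",  # compiled output
        "-b", str(TARGET_BPW),
        "-hb", str(HEAD_BITS),
        "-m", "measurement.json",                                 # reuse the included measurement pass
    ],
    check=True,
)
```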


Who are you? What's with these weird BPWs on [insert model here]?

I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, tailored specifically for 48 GB VRAM setups. My rig is two RTX 3090s plus a Ryzen APU (the APU handles desktop output only, so no VRAM on the 3090s is wasted). I run inference through TabbyAPI, targeting context sizes between 32K and 64K.

Every model I upload includes a config.yml file with my ideal TabbyAPI settings. If you're using my config, don’t forget to set PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync to save some VRAM.
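
Most people just export that variable in the shell or service file that launches TabbyAPI. If you start things from a Python script instead, the sketch below shows the equivalent; the key point is that the variable must be set before torch initializes the CUDA allocator:

```python
# Set the allocator backend before torch (and therefore TabbyAPI / ExLlamaV2)
# touches CUDA. Setting it after the first allocation has no effect.
import os

os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "backend:cudaMallocAsync")

import torch  # imported only after the env var is in place

print(torch.cuda.is_available())  # allocator now uses cudaMallocAsync
```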


Model tree for DBMe/EVA-Qwen2.5-72B-v0.0-4.86bpw-h6-exl2

Base model: Qwen/Qwen2.5-72B (this repo is one of its quantized versions)
