Source
Upload of the model released here: https://twitter.com/MistralAI/status/1733150512395038967
Introduction
The model is split into 11 files of roughly 8 GB each because of file size limits and to make handling easier. After downloading, concatenate them back together; the hash of the combined file should match the one in the RELEASE file.
How to Combine
To recombine:
cat consolidated.00.pth-split0 consolidated.00.pth-split1 consolidated.00.pth-split2 consolidated.00.pth-split3 consolidated.00.pth-split4 consolidated.00.pth-split5 consolidated.00.pth-split6 consolidated.00.pth-split7 consolidated.00.pth-split8 consolidated.00.pth-split9 consolidated.00.pth-split10 > consolidated.00.pth
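After concatenating, check the result against the RELEASE file. Below is a minimal Python sketch that computes the digest of the combined checkpoint in chunks; it assumes the RELEASE file lists a SHA-256 hash, so swap in the matching hashlib algorithm if it uses a different one.

```python
import hashlib

CHECKPOINT = "consolidated.00.pth"

def file_digest(path: str, chunk_size: int = 16 * 1024 * 1024) -> str:
    """Hash a large file in chunks so it never has to fit in memory."""
    h = hashlib.sha256()  # assumption: RELEASE lists a SHA-256 digest; change if needed
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

print(file_digest(CHECKPOINT))  # compare this value with the entry in the RELEASE file
```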
Inference and Evaluation
Reference implementations
- https://github.com/dzhulgakov/llama-mistral: llama-style inference code.
- https://github.com/open-compass/MixtralKit: inference and evaluation toolkit from the OpenCompass team.
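The reference implementations above cover full inference. As a quick sanity check that the combined file deserializes correctly, here is a minimal Python sketch, assuming PyTorch is installed and the machine has roughly as much free RAM as the combined file (11 splits of about 8 GB):

```python
import torch

# Assumption: consolidated.00.pth is a standard PyTorch state dict saved with torch.save.
# Loading it onto the CPU requires roughly as much RAM as the file itself.
state_dict = torch.load("consolidated.00.pth", map_location="cpu")

# Print a handful of parameter names and shapes to confirm the checkpoint is readable.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```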