---
language: en
license: apache-2.0
library_name: transformers
---

# SQFT Base Model: sqft-mistral-7b-v0.3-60-base

- Source Model: [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3)
- Sparse Method: [Wanda](https://github.com/locuslab/wanda)
- Sparsity: 60%
- Quantization: No

## Model Sources

- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT)
- **Paper:** [SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models](https://arxiv.org/abs/2410.03750)

## How to get this model

Refer to the command in [SQFT/run_command/mistral-7b-v0.3/sparse_quantization.sh#11](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT/run_command/mistral-7b-v0.3/sparse_quantization.sh#11), which prunes the source model with Wanda to 60% sparsity. A sketch of loading the resulting model is given under "How to use" at the end of this card.

## Citation

```bibtex
@inproceedings{munoz2024sqft,
  title={SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models},
  author={J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  booktitle={The 2024 Conference on Empirical Methods in Natural Language Processing (Findings)},
  year={2024}
}
```

## Acknowledgement

Thanks to Wanda ([paper](https://arxiv.org/abs/2306.11695), [code](https://github.com/locuslab/wanda)), which provides a simple but effective pruning approach.

## License

Apache-2.0
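## How to use

Since the card declares `library_name: transformers`, the model should load with the standard `transformers` API. Below is a minimal sketch, assuming this card is published under the hypothetical repo id `IntelLabs/sqft-mistral-7b-v0.3-60-base` (replace it with this card's actual repo id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for illustration; replace with this card's actual id.
model_id = "IntelLabs/sqft-mistral-7b-v0.3-60-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # pruned weights are stored as dense tensors containing zeros
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```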
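Because Wanda zeroes weights in place rather than storing them in a compressed format, the 60% sparsity can be checked directly by counting zero entries in the linear layers. A small sanity check, reusing the `model` object loaded above (the overall figure may land slightly below 60%, since layers such as `lm_head` are typically left unpruned):

```python
import torch

# Fraction of exactly-zero weights across all Linear layers.
zeros, total = 0, 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        weight = module.weight.detach()
        zeros += (weight == 0).sum().item()
        total += weight.numel()

print(f"Overall linear-layer sparsity: {zeros / total:.2%}")  # expect roughly 60%
```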