Installation
🤗 Optimum can be installed using pip as follows:
python -m pip install optimum
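To confirm that the installation succeeded, you can print the installed version. The check below is a minimal sketch that relies only on the Python standard library:
from importlib.metadata import version

# Prints the installed Optimum version, e.g. "1.22.0"
print(version("optimum"))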
If you’d like to use the accelerator-specific features of 🤗 Optimum, you can install the required dependencies according to the table below:
Accelerator | Installation |
---|---|
ONNX Runtime | pip install --upgrade --upgrade-strategy eager optimum[onnxruntime] |
Intel Neural Compressor | pip install --upgrade --upgrade-strategy eager optimum[neural-compressor] |
OpenVINO | pip install --upgrade --upgrade-strategy eager optimum[openvino] |
NVIDIA TensorRT-LLM | docker run -it --gpus all --ipc host huggingface/optimum-nvidia |
AMD Instinct GPUs and Ryzen AI NPU | pip install --upgrade --upgrade-strategy eager optimum[amd] |
AWS Trainium & Inferentia | pip install --upgrade --upgrade-strategy eager optimum[neuronx] |
Habana Gaudi Processor (HPU) | pip install --upgrade --upgrade-strategy eager optimum[habana] |
FuriosaAI | pip install --upgrade --upgrade-strategy eager optimum[furiosa] |
The --upgrade --upgrade-strategy eager option is needed to ensure the different packages are upgraded to the latest possible version.
If you’d like to play with the examples or need the bleeding edge of the code and can’t wait for a new release, you can install the base library from source as follows:
python -m pip install git+https://github.com/huggingface/optimum.git
The accelerator-specific features can also be installed from source by appending optimum[accelerator_type] to the pip command, e.g.
python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git
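After installing an accelerator-specific extra, a quick smoke test confirms it is usable. The sketch below assumes the ONNX Runtime extra (optimum[onnxruntime]) is installed; the checkpoint name is only an example, and export=True converts it to ONNX on the fly before loading it with ONNX Runtime:
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

# Example checkpoint; any sequence-classification model from the Hub works similarly.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True exports the PyTorch checkpoint to ONNX and loads it with ONNX Runtime.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Optimum is installed correctly!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)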