RyzenAIModel

optimum.amd.ryzenai.pipeline

( task: str, model: Optional[Any] = None, vaip_config: Optional[str] = None, model_type: Optional[str] = None, feature_extractor: Union[str, "PreTrainedFeatureExtractor"] = None, image_processor: Union[str, BaseImageProcessor] = None, use_fast: bool = True, token: Union[str, bool] = None, revision: Optional[str] = None, **kwargs ) → Pipeline

Parameters

  • task (str) — The task defining which pipeline will be returned. Available tasks include:
    • “image-classification”
    • “object-detection”
  • model (Optional[Any], defaults to None) — The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model. If not provided, the default model for the specified task will be loaded.
  • vaip_config (Optional[str], defaults to None) — Runtime configuration file for inference with Ryzen IPU. A default config file can be found in the Ryzen AI VOE package, extracted during installation under the name vaip_config.json.
  • model_type (Optional[str], defaults to None) — The type of the model (for example, "yolox" for the object-detection pipeline), used to select the appropriate pipeline implementation.
  • feature_extractor (Union[str, "PreTrainedFeatureExtractor"], defaults to None) — The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor.
  • image_processor (Union[str, BaseImageProcessor], defaults to None) — The image processor that will be used by the pipeline for image-related tasks.
  • use_fast (bool, defaults to True) — Whether or not to use a Fast tokenizer if possible.
  • token (Union[str, bool], defaults to None) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • revision (Optional[str], defaults to None) — The specific model version to use, specified as a branch name, tag name, or commit id.
  • **kwargs — Additional keyword arguments passed to the underlying pipeline class.

Returns

Pipeline

An instance of the specified pipeline for the given task and model.

Utility method to build a pipeline for various RyzenAI tasks.

This function creates a pipeline for the specified task, using the given model or loading the default model for that task. The pipeline bundles the components needed for inference, such as an image processor and the model itself.
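
As a quick sketch of the two loading paths described above (the explicit model id is borrowed from the image-classification example below; the existence of a default checkpoint for the task is assumed from the model parameter description):

from optimum.amd.ryzenai import pipeline

# Explicit model: pass a Hugging Face Hub model id (or a loaded model instance)
pipe = pipeline(
    "image-classification",
    model="mohitsha/timm-resnet18-onnx-quantized-ryzen",
    vaip_config="vaip_config.json",  # runtime config from the Ryzen AI VOE package
)

# Default model: omitting `model` loads the default checkpoint for the task
# (assumed behavior, per the `model` parameter description above)
pipe_default = pipeline("image-classification", vaip_config="vaip_config.json")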

Computer vision

class optimum.amd.ryzenai.pipelines.TimmImageClassificationPipeline

( model: Union, tokenizer: Optional = None, feature_extractor: Optional = None, image_processor: Optional = None, modelcard: Optional = None, framework: Optional = None, task: str = '', args_parser: ArgumentHandler = None, device: Union = None, torch_dtype: Union = None, binary_output: bool = False, **kwargs )

Example usage:

import requests
from PIL import Image

from optimum.amd.ryzenai import pipeline

# Download a sample image from the COCO validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model_id = "mohitsha/timm-resnet18-onnx-quantized-ryzen"

# Build the image-classification pipeline and run inference on the Ryzen IPU
pipe = pipeline("image-classification", model=model_id, vaip_config="vaip_config.json")
print(pipe(image))

class optimum.amd.ryzenai.pipelines.YoloObjectDetectionPipeline

( model: Union, tokenizer: Optional = None, feature_extractor: Optional = None, image_processor: Optional = None, modelcard: Optional = None, framework: Optional = None, task: str = '', args_parser: ArgumentHandler = None, device: Union = None, torch_dtype: Union = None, binary_output: bool = False, **kwargs )

Supported model types

  • yolox
  • yolov3
  • yolov5
  • yolov8

Example usage:

import requests
from PIL import Image

from optimum.amd.ryzenai import pipeline

# Download a sample image from the COCO validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model_id = "amd/yolox-s"

# model_type selects the YOLO-specific pre- and post-processing
detector = pipeline("object-detection", model=model_id, vaip_config="vaip_config.json", model_type="yolox")
outputs = detector(image)
print(outputs)
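
The output format of the detector is not documented on this page; assuming it follows the standard Hugging Face object-detection pipeline convention (a list of dicts with score, label, and box keys), the detections could be read out as follows:

# Assumed output shape: [{"score": float, "label": str, "box": {"xmin": ..., ...}}, ...]
for detection in outputs:
    print(f"{detection['label']}: {detection['score']:.2f} at {detection['box']}")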