
CLIPSeg model

CLIPSeg model with a reduced dimension (rd) of 16. It was introduced in the paper Image Segmentation Using Text and Image Prompts by Lüddecke et al. and first released in this repository.

Intended use cases

This model is intended for zero-shot and one-shot image segmentation.

Usage

Refer to the documentation.
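As a minimal sketch of zero-shot segmentation with this checkpoint, the `transformers` library provides `CLIPSegProcessor` and `CLIPSegForImageSegmentation`; the sample image URL below is only an illustrative choice:

```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd16")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd16")

# Example image (a COCO validation image, used here only for illustration)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# One text prompt per desired segmentation mask; the image is repeated to match
prompts = ["a cat", "a remote", "a blanket"]
inputs = processor(
    text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt"
)

with torch.no_grad():
    outputs = model(**inputs)

# logits: one low-resolution mask per prompt; apply sigmoid for probabilities
logits = outputs.logits
masks = torch.sigmoid(logits)
```

Each row of `masks` can then be thresholded or upsampled to the original image size for visualization.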

Model size

150M parameters (Safetensors; tensor types I64 and F32)