---
tags:
  - image-to-text
  - image-captioning
  - endpoints-template
license: bsd-3-clause
library_name: generic
---

# Blip Caption 🤗 Inference Endpoints

This repository implements a custom image-captioning task for 🤗 Inference Endpoints. The code for the customized pipeline is in `pipeline.py`. To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that `pipeline.py` is used. Double-check that `Custom` is selected before creating the endpoint.

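The `pipeline.py` in this repository is the source of truth for how requests are handled. As a rough, hypothetical sketch of what a `generic` custom pipeline for image captioning can look like, assuming the BLIP classes from `transformers` and the `PreTrainedPipeline` entry point (the actual file may load the model differently):

```python
# Hypothetical sketch only; see pipeline.py in this repository for the real implementation.
import base64
from io import BytesIO

from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor


class PreTrainedPipeline:
    def __init__(self, path=""):
        # load processor and model weights from the repository directory
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path)

    def __call__(self, data):
        # "image" is the base64-encoded image described in the payload section below
        image = Image.open(BytesIO(base64.b64decode(data["image"]))).convert("RGB")
        inputs = self.processor(images=image, return_tensors="pt")
        generated = self.model.generate(**inputs, max_new_tokens=30)
        return [self.processor.decode(generated[0], skip_special_tokens=True)]
```
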
### Expected Request payload

```json
{
  "image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC...." // base64 image as bytes
}
```

Below is an example of how to run a request using Python and `requests`.

### Run Request

1. Prepare an image:

```bash
wget https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
```

2. Run the request:
```python
import base64
import requests as r

ENDPOINT_URL = ""  # url of your Inference Endpoint
HF_TOKEN = ""  # your Hugging Face access token


def predict(path_to_image: str = None):
    # read the image and base64-encode it
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read())
    payload = {"inputs": {"image": b64.decode("utf-8")}}
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(path_to_image="palace.jpg")
```

### Expected output

```
['buckingham palace with flower beds and red flowers']
```