Huggingface.js documentation


Class: HfInference

Hierarchy

  • TaskWithNoAccessToken

    ↳ HfInference

Constructors

constructor

new HfInference(accessToken?, defaultOptions?): HfInference

Parameters

Name Type Default value
accessToken string ""
defaultOptions Options {}

Returns

HfInference

Defined in

inference/src/HfInference.ts:28
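
For orientation, a minimal construction sketch (not part of the generated reference). The token value is a placeholder, and the second argument is an optional Options object applied to every call:

```ts
import { HfInference } from "@huggingface/inference";

// "hf_xxx" is a placeholder for a real User Access Token; calls work
// without a token but are subject to much stricter rate limits.
const hf = new HfInference("hf_xxx");
```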

Properties

accessToken

Private Readonly accessToken: string

Defined in

inference/src/HfInference.ts:25


audioClassification

audioClassification: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string }, options?: Options) => Promise\<AudioClassificationReturn>

Type declaration

▸ (args, options?): Promise\<AudioClassificationReturn>

Parameters
Name Type Description
args Object -
args.data Blob | ArrayBuffer Binary audio data
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<AudioClassificationReturn>

Defined in

inference/src/tasks/audio/audioClassification.ts:30
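
A brief usage sketch, assuming `hf` is the HfInference instance constructed above and an ESM context with top-level await; the model id and audio URL are illustrative:

```ts
// Fetch some audio as a Blob (any Blob | ArrayBuffer source works).
const audio = await (await fetch("https://example.com/sample.flac")).blob();

const labels = await hf.audioClassification({
  model: "superb/hubert-large-superb-er", // illustrative model id
  data: audio,
});
// labels is an array of { label, score } entries
```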


audioToAudio

audioToAudio: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string }, options?: Options) => Promise\<AudioToAudioReturn>

Type declaration

▸ (args, options?): Promise\<AudioToAudioReturn>

Parameters
Name Type Description
args Object -
args.data Blob | ArrayBuffer Binary audio data
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<AudioToAudioReturn>

Defined in

inference/src/tasks/audio/audioToAudio.ts:35


automaticSpeechRecognition

automaticSpeechRecognition: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string }, options?: Options) => Promise\<AutomaticSpeechRecognitionOutput>

Type declaration

▸ (args, options?): Promise\<AutomaticSpeechRecognitionOutput>

Parameters
Name Type Description
args Object -
args.data Blob | ArrayBuffer Binary audio data
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<AutomaticSpeechRecognitionOutput>

Defined in

inference/src/tasks/audio/automaticSpeechRecognition.ts:23
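
A brief usage sketch, assuming `hf` as constructed above; the model id and audio URL are illustrative:

```ts
const audio = await (await fetch("https://example.com/sample.flac")).blob();

const { text } = await hf.automaticSpeechRecognition({
  model: "openai/whisper-large-v3", // illustrative model id
  data: audio,
});
console.log(text); // the transcription
```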


chatCompletion

chatCompletion: (args: {}, options?: Options) => Promise\<ChatCompletionOutput>

Type declaration

▸ (args, options?): Promise\<ChatCompletionOutput>

Parameters
Name Type
args Object
options? Options
Returns

Promise\<ChatCompletionOutput>

Defined in

inference/src/tasks/nlp/chatCompletion.ts:10
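
The args object is flattened by the doc generator; in practice it follows the familiar chat-completion shape (model, messages, max_tokens, ...). A hedged sketch, with an illustrative model id and `hf` as constructed above:

```ts
const res = await hf.chatCompletion({
  model: "mistralai/Mistral-7B-Instruct-v0.2", // illustrative model id
  messages: [{ role: "user", content: "What is the capital of France?" }],
  max_tokens: 128,
});
console.log(res.choices[0].message.content);
```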


chatCompletionStream

chatCompletionStream: (args: {}, options?: Options) => AsyncGenerator\<ChatCompletionStreamOutput, any, unknown>

Type declaration

▸ (args, options?): AsyncGenerator\<ChatCompletionStreamOutput, any, unknown>

Parameters
Name Type
args Object
options? Options
Returns

AsyncGenerator\<ChatCompletionStreamOutput, any, unknown>

Defined in

inference/src/tasks/nlp/chatCompletionStream.ts:8
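
A streaming counterpart sketch; the model id is illustrative and `hf` is as constructed above:

```ts
let out = "";
for await (const chunk of hf.chatCompletionStream({
  model: "mistralai/Mistral-7B-Instruct-v0.2", // illustrative model id
  messages: [{ role: "user", content: "Write a haiku about the sea." }],
  max_tokens: 120,
})) {
  // Each chunk carries an incremental delta, OpenAI-style.
  out += chunk.choices[0]?.delta?.content ?? "";
}
console.log(out);
```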


defaultOptions

Private Readonly defaultOptions: Options

Defined in

inference/src/HfInference.ts:26


documentQuestionAnswering

documentQuestionAnswering: (args: { endpointUrl?: string ; inputs: { image: Blob | ArrayBuffer ; question: string } ; model?: string }, options?: Options) => Promise\<DocumentQuestionAnsweringOutput>

Type declaration

▸ (args, options?): Promise\<DocumentQuestionAnsweringOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Object -
args.inputs.image Blob | ArrayBuffer Raw image. You can use a native File in browsers, new Blob([buffer]) in Node, new Blob([btoa(base64String)]) for a base64 image, or even await (await fetch('...')).blob()
args.inputs.question string -
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<DocumentQuestionAnsweringOutput>

Defined in

inference/src/tasks/multimodal/documentQuestionAnswering.ts:42


featureExtraction

featureExtraction: (args: { endpointUrl?: string ; inputs: string | string[] ; model?: string }, options?: Options) => Promise\<FeatureExtractionOutput>

Type declaration

▸ (args, options?): Promise\<FeatureExtractionOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string | string[] A string or a list of strings to extract features from, e.g. inputs: "That is a happy person"
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<FeatureExtractionOutput>

Defined in

inference/src/tasks/nlp/featureExtraction.ts:24
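
A brief sketch; the model id is illustrative and `hf` is as constructed above:

```ts
const embedding = await hf.featureExtraction({
  model: "sentence-transformers/all-MiniLM-L6-v2", // illustrative model id
  inputs: "That is a happy person",
});
// embedding is a (possibly nested) array of numbers
```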


fillMask

fillMask: (args: { endpointUrl?: string ; inputs: string ; model?: string }, options?: Options) => Promise\<FillMaskOutput>

Type declaration

▸ (args, options?): Promise\<FillMaskOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string -
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<FillMaskOutput>

Defined in

inference/src/tasks/nlp/fillMask.ts:31


imageClassification

imageClassification: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string }, options?: Options) => Promise\<ImageClassificationOutput>

Type declaration

▸ (args, options?): Promise\<ImageClassificationOutput>

Parameters
Name Type Description
args Object -
args.data Blob | ArrayBuffer Binary image data
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<ImageClassificationOutput>

Defined in

inference/src/tasks/cv/imageClassification.ts:29
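
A brief sketch, with an illustrative model id and image URL, `hf` as constructed above:

```ts
const image = await (await fetch("https://example.com/cat.png")).blob();

const predictions = await hf.imageClassification({
  model: "google/vit-base-patch16-224", // illustrative model id
  data: image,
});
// predictions is an array of { label, score } entries
```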


imageSegmentation

imageSegmentation: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string }, options?: Options) => Promise\<ImageSegmentationOutput>

Type declaration

▸ (args, options?): Promise\<ImageSegmentationOutput>

Parameters
Name Type Description
args Object -
args.data Blob | ArrayBuffer Binary image data
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<ImageSegmentationOutput>

Defined in

inference/src/tasks/cv/imageSegmentation.ts:33


imageToImage

imageToImage: (args: { endpointUrl?: string ; inputs: Blob | ArrayBuffer ; model?: string ; parameters?: { guess_mode?: boolean ; guidance_scale?: number ; height?: number ; negative_prompt?: string ; num_inference_steps?: number ; prompt?: string ; strength?: number ; width?: number } }, options?: Options) => Promise\<Blob>

Type declaration

▸ (args, options?): Promise\<Blob>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Blob | ArrayBuffer The initial image condition
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
args.parameters? Object -
args.parameters.guess_mode? boolean guess_mode only works for ControlNet models; defaults to False. In this mode, the ControlNet encoder will try its best to recognize the content of the input image even if you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended.
args.parameters.guidance_scale? number Guidance scale: a higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
args.parameters.height? number The height in pixels of the generated image
args.parameters.negative_prompt? string An optional negative prompt for the image generation
args.parameters.num_inference_steps? number The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
args.parameters.prompt? string The text prompt to guide the image generation.
args.parameters.strength? number The strength parameter only works for SD img2img and alt diffusion img2img models. Conceptually, it indicates how much to transform the reference image. Must be between 0 and 1. The image is used as a starting point, with more noise added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is maximal and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 therefore essentially ignores the image.
args.parameters.width? number The width in pixels of the generated image
options? Options -
Returns

Promise\<Blob>

Defined in

inference/src/tasks/cv/imageToImage.ts:61
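
A brief sketch using the parameters documented above; the model id and image URL are illustrative, `hf` as constructed above:

```ts
const source = await (await fetch("https://example.com/room.png")).blob();

const result = await hf.imageToImage({
  model: "lllyasviel/sd-controlnet-depth", // illustrative model id
  inputs: source,
  parameters: {
    prompt: "a bright, sunlit version of the same room",
    strength: 0.7, // how strongly to transform the reference image
  },
});
// result is a Blob containing the generated image
```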


imageToText

imageToText: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string }, options?: Options) => Promise\<ImageToTextOutput>

Type declaration

▸ (args, options?): Promise\<ImageToTextOutput>

Parameters
Name Type Description
args Object -
args.data Blob | ArrayBuffer Binary image data
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<ImageToTextOutput>

Defined in

inference/src/tasks/cv/imageToText.ts:22


objectDetection

objectDetection: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string }, options?: Options) => Promise\<ObjectDetectionOutput>

Type declaration

▸ (args, options?): Promise\<ObjectDetectionOutput>

Parameters
Name Type Description
args Object -
args.data Blob | ArrayBuffer Binary image data
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<ObjectDetectionOutput>

Defined in

inference/src/tasks/cv/objectDetection.ts:39


questionAnswering

questionAnswering: (args: { endpointUrl?: string ; inputs: { context: string ; question: string } ; model?: string }, options?: Options) => Promise\<QuestionAnsweringOutput>

Type declaration

▸ (args, options?): Promise\<QuestionAnsweringOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Object -
args.inputs.context string -
args.inputs.question string -
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<QuestionAnsweringOutput>

Defined in

inference/src/tasks/nlp/questionAnswering.ts:34
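
A brief sketch; the model id is illustrative and `hf` is as constructed above:

```ts
const answer = await hf.questionAnswering({
  model: "deepset/roberta-base-squad2", // illustrative model id
  inputs: {
    question: "What is the capital of France?",
    context: "The capital of France is Paris.",
  },
});
// answer.answer, answer.score, answer.start, answer.end
```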


request

request: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string ; parameters?: Record\<string, unknown> } | { endpointUrl?: string ; inputs: unknown ; model?: string ; parameters?: Record\<string, unknown> } | {}, options?: Options & { chatCompletion?: boolean ; task?: string ; taskHint?: InferenceTask }) => Promise\<unknown>

Type declaration

▸ (args, options?): Promise\<unknown>

Parameters
Name Type
args { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string ; parameters?: Record\<string, unknown> } | { endpointUrl?: string ; inputs: unknown ; model?: string ; parameters?: Record\<string, unknown> } | {}
options? Options & { chatCompletion?: boolean ; task?: string ; taskHint?: InferenceTask }
Returns

Promise\<unknown>

Defined in

inference/src/tasks/custom/request.ts:7


sentenceSimilarity

sentenceSimilarity: (args: { endpointUrl?: string ; inputs: Record\<string, unknown> | Record\<string, unknown>[] ; model?: string }, options?: Options) => Promise\<SentenceSimilarityOutput>

Type declaration

▸ (args, options?): Promise\<SentenceSimilarityOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Record\<string, unknown> | Record\<string, unknown>[] The inputs vary based on the model. For example when using sentence-transformers/paraphrase-xlm-r-multilingual-v1 the inputs will have a source_sentence string and a sentences array of strings
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<SentenceSimilarityOutput>

Defined in

inference/src/tasks/nlp/sentenceSimilarity.ts:24
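
A brief sketch using the sentence-transformers input shape mentioned above; `hf` is as constructed above:

```ts
const scores = await hf.sentenceSimilarity({
  model: "sentence-transformers/paraphrase-xlm-r-multilingual-v1",
  inputs: {
    source_sentence: "That is a happy person",
    sentences: ["That is a happy dog", "Today is a sunny day"],
  },
});
// scores is an array with one similarity value per candidate sentence
```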


streamingRequest

streamingRequest: (args: { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string ; parameters?: Record\<string, unknown> } | { endpointUrl?: string ; inputs: unknown ; model?: string ; parameters?: Record\<string, unknown> } | {}, options?: Options & { chatCompletion?: boolean ; task?: string ; taskHint?: InferenceTask }) => AsyncGenerator\<unknown, any, unknown>

Type declaration

▸ (args, options?): AsyncGenerator\<unknown, any, unknown>

Parameters
Name Type
args { data: Blob | ArrayBuffer ; endpointUrl?: string ; model?: string ; parameters?: Record\<string, unknown> } | { endpointUrl?: string ; inputs: unknown ; model?: string ; parameters?: Record\<string, unknown> } | {}
options? Options & { chatCompletion?: boolean ; task?: string ; taskHint?: InferenceTask }
Returns

AsyncGenerator\<unknown, any, unknown>

Defined in

inference/src/tasks/custom/streamingRequest.ts:9


summarization

summarization: (args: { endpointUrl?: string ; inputs: string ; model?: string ; parameters?: { max_length?: number ; max_time?: number ; min_length?: number ; repetition_penalty?: number ; temperature?: number ; top_k?: number ; top_p?: number } }, options?: Options) => Promise\<SummarizationOutput>

Type declaration

▸ (args, options?): Promise\<SummarizationOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string A string to be summarized
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
args.parameters? Object -
args.parameters.max_length? number (Default: None). Integer to define the maximum length in tokens of the output summary.
args.parameters.max_time? number (Default: None). Float (0-120.0). The maximum amount of time in seconds that the query should take. The network can add some overhead, so this is a soft limit.
args.parameters.min_length? number (Default: None). Integer to define the minimum length in tokens of the output summary.
args.parameters.repetition_penalty? number (Default: None). Float (0.0-100.0). The more a token is used within generation, the more it is penalized and the less likely it is to be picked in successive generation passes.
args.parameters.temperature? number (Default: 1.0). Float (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, and 100.0 gets closer to uniform probability.
args.parameters.top_k? number (Default: None). Integer to define the top tokens considered within the sample operation to create new text.
args.parameters.top_p? number (Default: None). Float to define the tokens considered in the sampling operation of text generation. Tokens are added to the sample from most probable to least probable until the sum of their probabilities is greater than top_p.
options? Options -
Returns

Promise\<SummarizationOutput>

Defined in

inference/src/tasks/nlp/summarization.ts:52
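
A brief sketch; the model id and text are illustrative, `hf` as constructed above:

```ts
const { summary_text } = await hf.summarization({
  model: "facebook/bart-large-cnn", // illustrative model id
  inputs:
    "The tower is 324 metres tall, about the same height as an 81-storey building, " +
    "and the tallest structure in Paris.",
  parameters: { max_length: 60 },
});
console.log(summary_text);
```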


tableQuestionAnswering

tableQuestionAnswering: (args: { endpointUrl?: string ; inputs: { query: string ; table: Record\<string, string[]> } ; model?: string }, options?: Options) => Promise\<TableQuestionAnsweringOutput>

Type declaration

▸ (args, options?): Promise\<TableQuestionAnsweringOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Object -
args.inputs.query string The query in plain text that you want to ask the table
args.inputs.table Record\<string, string[]> A table of data represented as a dict of lists, where the keys are column headers and the lists contain the column values; all lists must have the same length.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<TableQuestionAnsweringOutput>

Defined in

inference/src/tasks/nlp/tableQuestionAnswering.ts:40


tabularClassification

tabularClassification: (args: { endpointUrl?: string ; inputs: { data: Record\<string, string[]> } ; model?: string }, options?: Options) => Promise\<TabularClassificationOutput>

Type declaration

▸ (args, options?): Promise\<TabularClassificationOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Object -
args.inputs.data Record\<string, string[]> A table of data represented as a dict of lists, where the keys are column headers and the lists contain the column values; all lists must have the same length.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<TabularClassificationOutput>

Defined in

inference/src/tasks/tabular/tabularClassification.ts:24


tabularRegression

tabularRegression: (args: { endpointUrl?: string ; inputs: { data: Record\<string, string[]> } ; model?: string }, options?: Options) => Promise\<TabularRegressionOutput>

Type declaration

▸ (args, options?): Promise\<TabularRegressionOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Object -
args.inputs.data Record\<string, string[]> A table of data represented as a dict of lists, where the keys are column headers and the lists contain the column values; all lists must have the same length.
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<TabularRegressionOutput>

Defined in

inference/src/tasks/tabular/tabularRegression.ts:24


textClassification

textClassification: (args: { endpointUrl?: string ; inputs: string ; model?: string }, options?: Options) => Promise\<TextClassificationOutput>

Type declaration

▸ (args, options?): Promise\<TextClassificationOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string A string to be classified
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<TextClassificationOutput>

Defined in

inference/src/tasks/nlp/textClassification.ts:26


textGeneration

textGeneration: (args: {}, options?: Options) => Promise\<TextGenerationOutput>

Type declaration

▸ (args, options?): Promise\<TextGenerationOutput>

Parameters
Name Type
args Object
options? Options
Returns

Promise\<TextGenerationOutput>

Defined in

inference/src/tasks/nlp/textGeneration.ts:12
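
A brief sketch; the model id is illustrative and `hf` is as constructed above (the args object is flattened by the doc generator, but in practice it takes model, inputs and optional generation parameters):

```ts
const { generated_text } = await hf.textGeneration({
  model: "gpt2", // illustrative model id
  inputs: "The answer to the universe is",
});
console.log(generated_text);
```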


textGenerationStream

textGenerationStream: (args: {}, options?: Options) => AsyncGenerator\<TextGenerationStreamOutput, any, unknown>

Type declaration

▸ (args, options?): AsyncGenerator\<TextGenerationStreamOutput, any, unknown>

Parameters
Name Type
args Object
options? Options
Returns

AsyncGenerator\<TextGenerationStreamOutput, any, unknown>

Defined in

inference/src/tasks/nlp/textGenerationStream.ts:88
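
A streaming sketch; the model id is illustrative and `hf` is as constructed above:

```ts
let story = "";
for await (const chunk of hf.textGenerationStream({
  model: "mistralai/Mistral-7B-v0.1", // illustrative model id
  inputs: "Once upon a time,",
  parameters: { max_new_tokens: 100 },
})) {
  // Each chunk carries the newly generated token.
  story += chunk.token.text;
}
console.log(story);
```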


textToImage

textToImage: (args: { endpointUrl?: string ; inputs: string ; model?: string ; parameters?: { guidance_scale?: number ; height?: number ; negative_prompt?: string ; num_inference_steps?: number ; width?: number } }, options?: Options) => Promise\<Blob>

Type declaration

▸ (args, options?): Promise\<Blob>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string The text to generate an image from
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
args.parameters? Object -
args.parameters.guidance_scale? number Guidance scale: a higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
args.parameters.height? number The height in pixels of the generated image
args.parameters.negative_prompt? string An optional negative prompt for the image generation
args.parameters.num_inference_steps? number The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
args.parameters.width? number The width in pixels of the generated image
options? Options -
Returns

Promise\<Blob>

Defined in

inference/src/tasks/cv/textToImage.ts:41
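
A brief sketch using the parameters documented above; the model id is illustrative, `hf` as constructed above:

```ts
const image = await hf.textToImage({
  model: "stabilityai/stable-diffusion-2", // illustrative model id
  inputs: "an astronaut riding a horse on mars",
  parameters: {
    negative_prompt: "blurry",
    num_inference_steps: 25,
  },
});
// image is a Blob; persist it to disk or hand it to an <img> element
```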


textToSpeech

textToSpeech: (args: { endpointUrl?: string ; inputs: string ; model?: string }, options?: Options) => Promise\<Blob>

Type declaration

▸ (args, options?): Promise\<Blob>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string The text to generate audio from
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<Blob>

Defined in

inference/src/tasks/audio/textToSpeech.ts:18


tokenClassification

tokenClassification: (args: { endpointUrl?: string ; inputs: string ; model?: string ; parameters?: { aggregation_strategy?: "none" | "simple" | "first" | "average" | "max" } }, options?: Options) => Promise\<TokenClassificationOutput>

Type declaration

▸ (args, options?): Promise\<TokenClassificationOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string A string to be classified
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
args.parameters? Object -
args.parameters.aggregation_strategy? "none" | "simple" | "first" | "average" | "max" (Default: simple). There are several aggregation strategies: none: every token gets classified without further aggregation. simple: entities are grouped according to the default schema (B-, I- tags get merged when the tag is similar). first: same as the simple strategy, except words cannot end up with different tags; words use the tag of the first token when there is ambiguity. average: same as the simple strategy, except words cannot end up with different tags; scores are averaged across tokens and then the maximum label is applied. max: same as the simple strategy, except words cannot end up with different tags; the word entity is the token with the maximum score.
options? Options -
Returns

Promise\<TokenClassificationOutput>

Defined in

inference/src/tasks/nlp/tokenClassification.ts:57


translation

translation: (args: { endpointUrl?: string ; inputs: string | string[] ; model?: string }, options?: Options) => Promise\<TranslationOutput>

Type declaration

▸ (args, options?): Promise\<TranslationOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string | string[] A string or a list of strings to be translated
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<TranslationOutput>

Defined in

inference/src/tasks/nlp/translation.ts:24
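
A brief sketch; the model id is illustrative and `hf` is as constructed above:

```ts
const result = await hf.translation({
  model: "t5-base", // illustrative model id
  inputs: "My name is Wolfgang and I live in Berlin",
});
// result.translation_text holds the translated string
```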


visualQuestionAnswering

visualQuestionAnswering: (args: { endpointUrl?: string ; inputs: { image: Blob | ArrayBuffer ; question: string } ; model?: string }, options?: Options) => Promise\<VisualQuestionAnsweringOutput>

Type declaration

▸ (args, options?): Promise\<VisualQuestionAnsweringOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Object -
args.inputs.image Blob | ArrayBuffer Raw image. You can use a native File in browsers, new Blob([buffer]) in Node, new Blob([btoa(base64String)]) for a base64 image, or even await (await fetch('...')).blob()
args.inputs.question string -
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
options? Options -
Returns

Promise\<VisualQuestionAnsweringOutput>

Defined in

inference/src/tasks/multimodal/visualQuestionAnswering.ts:32


zeroShotClassification

zeroShotClassification: (args: { endpointUrl?: string ; inputs: string | string[] ; model?: string ; parameters: { candidate_labels: string[] ; multi_label?: boolean } }, options?: Options) => Promise\<ZeroShotClassificationOutput>

Type declaration

▸ (args, options?): Promise\<ZeroShotClassificationOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs string | string[] A string or a list of strings
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
args.parameters Object -
args.parameters.candidate_labels string[] A list of strings that are potential classes for inputs (max 10 candidate_labels; for more, simply run multiple requests, as results are going to be misleading with too many candidate_labels anyway). If you want to keep the exact same scoring, you can simply run with multi_label=True and do the scaling on your end.
args.parameters.multi_label? boolean (Default: false) Boolean that is set to true if classes can overlap
options? Options -
Returns

Promise\<ZeroShotClassificationOutput>

Defined in

inference/src/tasks/nlp/zeroShotClassification.ts:34
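
A brief sketch; the model id is illustrative and `hf` is as constructed above:

```ts
const result = await hf.zeroShotClassification({
  model: "facebook/bart-large-mnli", // illustrative model id
  inputs: "I need help with a refund for my last order",
  parameters: { candidate_labels: ["refund", "legal", "faq"] },
});
// result pairs each candidate label with a score
```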


zeroShotImageClassification

zeroShotImageClassification: (args: { endpointUrl?: string ; inputs: { image: Blob | ArrayBuffer } ; model?: string ; parameters: { candidate_labels: string[] } }, options?: Options) => Promise\<ZeroShotImageClassificationOutput>

Type declaration

▸ (args, options?): Promise\<ZeroShotImageClassificationOutput>

Parameters
Name Type Description
args Object -
args.endpointUrl? string The URL of the endpoint to use. If not specified, will call huggingface.co/api/tasks to get the default endpoint for the task. If specified, will use this URL instead of the default one.
args.inputs Object -
args.inputs.image Blob | ArrayBuffer Binary image data
args.model? string The model to use. If not specified, will call huggingface.co/api/tasks to get the default model for the task. /!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future. Use the endpointUrl parameter instead.
args.parameters Object -
args.parameters.candidate_labels string[] A list of strings that are potential classes for inputs (max 10)
options? Options -
Returns

Promise\<ZeroShotImageClassificationOutput>

Defined in

inference/src/tasks/cv/zeroShotImageClassification.ts:33

Methods

endpoint

endpoint(endpointUrl): HfInferenceEndpoint

Returns a copy of HfInference tied to the specified endpoint.

Parameters

Name Type
endpointUrl string

Returns

HfInferenceEndpoint

Defined in

inference/src/HfInference.ts:45
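
A brief sketch; the endpoint URL is a placeholder for a dedicated Inference Endpoint, `hf` as constructed above:

```ts
const ep = hf.endpoint(
  "https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2" // placeholder URL
);

// The returned HfInferenceEndpoint exposes the same task methods,
// minus the model / endpointUrl arguments.
const { generated_text } = await ep.textGeneration({
  inputs: "The answer to the universe is",
});
```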
