♻️ Rebrand: Inference API -> Inference Endpoints (serverless) #458

Merged: 4 commits, Jan 26, 2024
Changes from 3 commits
6 changes: 3 additions & 3 deletions README.md
@@ -37,12 +37,12 @@ await inference.textToImage({

This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.

-- [@huggingface/inference](packages/inference/README.md): Use the Inference API to make calls to 100,000+ Machine Learning models, or your own [inference endpoints](https://hf.co/docs/inference-endpoints/)!
+- [@huggingface/inference](packages/inference/README.md): Use Inference Endpoints (serverless) to make calls to 100,000+ Machine Learning models
- [@huggingface/hub](packages/hub/README.md): Interact with huggingface.co to create or delete repos and commit / download files
- [@huggingface/agents](packages/agents/README.md): Interact with HF models through a natural language interface


-With more to come, like `@huggingface/endpoints` to manage your HF Endpoints!
+With more to come, like `@huggingface/endpoints` to manage your dedicated Inference Endpoints!

We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno.

@@ -128,7 +128,7 @@ await inference.imageToText({
model: 'nlpconnect/vit-gpt2-image-captioning',
})

-// Using your own inference endpoint: https://hf.co/docs/inference-endpoints/
+// Using your own dedicated inference endpoint: https://hf.co/docs/inference-endpoints/
const gpt2 = inference.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the universe is'});
```
4 changes: 2 additions & 2 deletions docs/_toctree.yml
@@ -4,9 +4,9 @@
  isExpanded: true
  sections:
    - local: inference/README
-      title: Use the Inference API
+      title: Use Inference Endpoints
    - local: inference/modules
-      title: API Reference
+      title: API reference
- title: "@huggingface/hub"
  isExpanded: true
  sections:
4 changes: 2 additions & 2 deletions packages/agents/README.md
@@ -1,6 +1,6 @@
# 🤗 Hugging Face Agents.js

-A way to call Hugging Face models and inference APIs from natural language, using an LLM.
+A way to call Hugging Face models and Inference Endpoints from natural language, using an LLM.

## Install

@@ -25,7 +25,7 @@ Check out the [full documentation](https://huggingface.co/docs/huggingface.js/ag

## Usage

-Agents.js leverages LLMs hosted as Inference APIs on HF, so you need to create an account and generate an [access token](https://huggingface.co/settings/tokens).
+Agents.js leverages LLMs hosted as Inference Endpoints on HF, so you need to create an account and generate an [access token](https://huggingface.co/settings/tokens).

```ts
import { HfAgent } from "@huggingface/agents";
// …
```
7 changes: 4 additions & 3 deletions packages/inference/README.md
@@ -1,10 +1,11 @@
-# 🤗 Hugging Face Inference API
+# 🤗 Hugging Face Inference Endpoints

-A Typescript powered wrapper for the Hugging Face Inference API. Learn more about the Inference API at [Hugging Face](https://huggingface.co/docs/api-inference/index). It also works with [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index).
+A Typescript powered wrapper for the Hugging Face Inference Endpoints API. Learn more about Inference Endpoints at [Hugging Face](https://huggingface.co/inference-endpoints).
+It works with both [serverless](https://huggingface.co/docs/api-inference/index) and [dedicated](https://huggingface.co/docs/inference-endpoints/index) Endpoints.

Check out the [full documentation](https://huggingface.co/docs/huggingface.js/inference/README).

-You can also try out a live [interactive notebook](https://observablehq.com/@huggingface/hello-huggingface-js-inference), see some demos on [hf.co/huggingfacejs](https://huggingface.co/huggingfacejs), or watch a [Scrimba tutorial that explains how the Inference API works](https://scrimba.com/scrim/cod8248f5adfd6e129582c523).
+You can also try out a live [interactive notebook](https://observablehq.com/@huggingface/hello-huggingface-js-inference), see some demos on [hf.co/huggingfacejs](https://huggingface.co/huggingfacejs), or watch a [Scrimba tutorial that explains how Inference Endpoints works](https://scrimba.com/scrim/cod8248f5adfd6e129582c523).
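
As a quick illustration, here is a minimal sketch of both modes, mirroring the README examples in this diff (the access token and dedicated endpoint URL are placeholders):

```ts
import { HfInference } from "@huggingface/inference";

const inference = new HfInference("hf_...");

// Serverless: route the call by model name
const { generated_text } = await inference.textGeneration({
  model: "gpt2",
  inputs: "The answer to the universe is",
});

// Dedicated: point the same client at your own endpoint URL
const gpt2 = inference.endpoint("https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2");
const result = await gpt2.textGeneration({ inputs: "The answer to the universe is" });
```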

## Getting Started

2 changes: 1 addition & 1 deletion packages/inference/package.json
@@ -4,7 +4,7 @@
"packageManager": "[email protected]",
"license": "MIT",
"author": "Tim Mikeladze <[email protected]>",
"description": "Typescript wrapper for the Hugging Face Inference API",
"description": "Typescript wrapper for the Hugging Face Inference Endpoints API",
"repository": {
"type": "git",
"url": "https://github.com/huggingface/huggingface.js.git"
2 changes: 1 addition & 1 deletion packages/inference/src/lib/getDefaultTask.ts
@@ -2,7 +2,7 @@ import { isUrl } from "./isUrl";

/**
* We want to make calls to the huggingface hub the least possible, eg if
- * someone is calling the inference API 1000 times per second, we don't want
+ * someone is calling Inference Endpoints 1000 times per second, we don't want
* to make 1000 calls to the hub to get the task name.
*/
const taskCache = new Map<string, { task: string; date: Date }>();
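
For context, a minimal sketch of how such a cache is typically consulted before hitting the hub (the helper name and cache duration are hypothetical, not part of the file):

```ts
// Hypothetical helper illustrating the cache-then-fetch pattern around taskCache.
const CACHE_DURATION_MS = 10 * 60 * 1000; // assumed TTL of 10 minutes

async function getTaskCached(
  model: string,
  fetchTask: (model: string) => Promise<string>
): Promise<string> {
  const entry = taskCache.get(model);
  if (entry && Date.now() - entry.date.getTime() < CACHE_DURATION_MS) {
    return entry.task; // served from cache: no extra call to the hub
  }
  const task = await fetchTask(model); // single call to huggingface.co
  taskCache.set(model, { task, date: new Date() });
  return task;
}
```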
2 changes: 1 addition & 1 deletion packages/inference/src/tasks/custom/request.ts
@@ -2,7 +2,7 @@ import type { InferenceTask, Options, RequestArgs } from "../../types";
import { makeRequestOptions } from "../../lib/makeRequestOptions";

/**
- * Primitive to make custom calls to the inference API
+ * Primitive to make custom calls to Inference Endpoints
*/
export async function request<T>(
args: RequestArgs,
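
For illustration, a sketch of calling this primitive directly, assuming `request` is exported from the package as shown above (the response shape depends on the model and task, so the caller supplies the type):

```ts
import { request } from "@huggingface/inference";

// Custom call to a serverless endpoint; the generic parameter types the raw response.
const output = await request<Array<{ generated_text: string }>>({
  model: "gpt2",
  inputs: "The answer to the universe is",
});
```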
4 changes: 2 additions & 2 deletions packages/inference/src/types.ts
@@ -6,7 +6,7 @@ export interface Options {
*/
retry_on_error?: boolean;
/**
- * (Default: true). Boolean. There is a cache layer on the inference API to speedup requests we have already seen. Most models can use those results as is as models are deterministic (meaning the results will be the same anyway). However if you use a non deterministic model, you can set this parameter to prevent the caching mechanism from being used resulting in a real new query.
+ * (Default: true). Boolean. There is a cache layer on Serverless Inference Endpoints to speedup requests we have already seen. Most models can use those results as is as models are deterministic (meaning the results will be the same anyway). However if you use a non deterministic model, you can set this parameter to prevent the caching mechanism from being used resulting in a real new query.
*/
use_cache?: boolean;
/**
@@ -47,7 +47,7 @@ export interface BaseArgs {
*/
accessToken?: string;
/**
- * The model to use. Can be a full URL for HF inference endpoints.
+ * The model to use. Can be a full URL for a dedicated inference endpoint.
*
* If not specified, will call huggingface.co/api/tasks to get the default model for the task.
*/
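
A short sketch of how these options ride along with the task arguments (model name illustrative):

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_...");

// Bypass the serverless cache layer for a non-deterministic model,
// and retry if the request errors out.
const result = await hf.textGeneration(
  { model: "bigscience/bloom", inputs: "Once upon a time" },
  { use_cache: false, retry_on_error: true }
);
```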
2 changes: 1 addition & 1 deletion packages/tasks/src/library-to-tasks.ts
@@ -3,7 +3,7 @@ import type { PipelineType } from "./pipelines";

/**
* Mapping from library name (excluding Transformers) to its supported tasks.
- * Inference API should be disabled for all other (library, task) pairs beyond this mapping.
+ * Serverless Inference Endpoints should be disabled for all other (library, task) pairs beyond this mapping.
* As an exception, we assume Transformers supports all inference tasks.
* This mapping is generated automatically by "python-api-export-tasks" action in huggingface/api-inference-community repo upon merge.
* Ref: https://github.com/huggingface/api-inference-community/pull/158
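
A sketch of how a mapping like this gates serverless inference; the export name and record shape here are illustrative stand-ins, not the file's actual API:

```ts
import type { PipelineType } from "./pipelines";

// Illustrative stand-in for the generated mapping described above.
declare const LIBRARY_TASK_MAPPING: Partial<Record<string, PipelineType[]>>;

function isServerlessInferenceEnabled(library: string, task: PipelineType): boolean {
  // Exception noted above: Transformers is assumed to support all inference tasks.
  if (library === "transformers") return true;
  return LIBRARY_TASK_MAPPING[library]?.includes(task) ?? false;
}
```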
2 changes: 1 addition & 1 deletion packages/tasks/src/model-data.ts
@@ -78,7 +78,7 @@ export interface ModelData {
*/
widgetData?: WidgetExample[] | undefined;
/**
- * Parameters that will be used by the widget when calling Inference API
+ * Parameters that will be used by the widget when calling Inference Endpoints (serverless)
* https://huggingface.co/docs/api-inference/detailed_parameters
*
* can be set in the model card metadata (under `inference/parameters`)
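
For illustration, widget parameters set in the model card metadata would surface on `ModelData` roughly like this (field values are made up):

```ts
// Hypothetical example of card metadata carried on ModelData.
const example: Partial<ModelData> = {
  id: "gpt2",
  cardData: {
    inference: {
      parameters: { temperature: 0.7, max_new_tokens: 50 },
    },
  },
};
```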
2 changes: 1 addition & 1 deletion packages/tasks/src/pipelines.ts
@@ -62,7 +62,7 @@ export interface PipelineData {
/// This type is used in multiple places in the Hugging Face
/// ecosystem:
/// - To determine which widget to show.
-/// - To determine which endpoint of Inference API to use.
+/// - To determine which endpoint of Inference Endpoints to use.
/// - As filters at the left of models and datasets page.
///
/// Note that this is sensitive to order.
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/audio-classification/about.md
@@ -26,7 +26,7 @@ Datasets such as VoxLingua107 allow anyone to train language identification mode

### Emotion recognition

-Emotion recognition is self explanatory. In addition to trying the widgets, you can use the Inference API to perform audio classification. Here is a simple example that uses a [HuBERT](https://huggingface.co/superb/hubert-large-superb-er) model fine-tuned for this task.
+Emotion recognition is self explanatory. In addition to trying the widgets, you can use Inference Endpoints to perform audio classification. Here is a simple example that uses a [HuBERT](https://huggingface.co/superb/hubert-large-superb-er) model fine-tuned for this task.

```python
import json
# …
```
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/audio-to-audio/about.md
@@ -12,7 +12,7 @@ model = SpectralMaskEnhancement.from_hparams(
model.enhance_file("file.wav")
```

-Alternatively, you can use the [Inference API](https://huggingface.co/inference-api) to solve this task
+Alternatively, you can use [Inference Endpoints](https://huggingface.co/inference-endpoints) to solve this task

```python
import json
# …
```
@@ -18,7 +18,7 @@ The use of Multilingual ASR has become popular, the idea of maintaining just a s

## Inference

-The Hub contains over [~9,000 ASR models](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using the Inference API. Here is a simple code snippet to do exactly this:
+The Hub contains over [~9,000 ASR models](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using Inference Endpoints. Here is a simple code snippet to do exactly this:

```python
import json
# …
```
4 changes: 2 additions & 2 deletions packages/tasks/src/tasks/sentence-similarity/about.md
@@ -8,15 +8,15 @@ You can extract information from documents using Sentence Similarity models. The

The [Sentence Transformers](https://www.sbert.net/) library is very powerful for calculating embeddings of sentences, paragraphs, and entire documents. An embedding is just a vector representation of a text and is useful for finding how similar two texts are.

-You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using the Inference API.
+You can find and use [hundreds of Sentence Transformers](https://huggingface.co/models?library=sentence-transformers&sort=downloads) models from the Hub by directly using the library, playing with the widgets in the browser or using Inference Endpoints.

## Task Variants

### Passage Ranking

Passage Ranking is the task of ranking documents based on their relevance to a given query. The task is evaluated on Mean Reciprocal Rank. These models take one query and multiple documents and return ranked documents according to the relevancy to the query. 📄

-You can infer with Passage Ranking models using the [Inference API](https://huggingface.co/inference-api). The Passage Ranking model inputs are a query for which we look for relevancy in the documents and the documents we want to search. The model will return scores according to the relevancy of these documents for the query.
+You can infer with Passage Ranking models using [Inference Endpoints](https://huggingface.co/inference-endpoints). The Passage Ranking model inputs are a query for which we look for relevancy in the documents and the documents we want to search. The model will return scores according to the relevancy of these documents for the query.

```python
import json
# …
```
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/tabular-classification/about.md
@@ -19,7 +19,7 @@ Tabular classification models can be used in predicting customer churn in teleco

You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:

-- Easily use inference API,
+- Easily use Inference Endpoints,
- Build neat UIs with one line of code,
- Programmatically create model cards,
- Securely serialize your scikit-learn model. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/tabular-regression/about.md
@@ -30,7 +30,7 @@ model.fit(X, y)

You can use [skops](https://skops.readthedocs.io/) for model hosting and inference on the Hugging Face Hub. This library is built to improve production workflows of various libraries that are used to train tabular models, including [sklearn](https://scikit-learn.org/stable/) and [xgboost](https://xgboost.readthedocs.io/en/stable/). Using `skops` you can:

-- Easily use inference API,
+- Easily use Inference Endpoints,
- Build neat UIs with one line of code,
- Programmatically create model cards,
- Securely serialize your models. (See limitations of using pickle [here](https://huggingface.co/docs/hub/security-pickle).)
4 changes: 2 additions & 2 deletions packages/tasks/src/tasks/text-to-speech/about.md
@@ -10,9 +10,9 @@ TTS models are used to create voice assistants on smart devices. These models ar

TTS models are widely used in airport and public transportation announcement systems to convert the announcement of a given text into speech.

-## Inference API
+## Inference Endpoints

-The Hub contains over [1500 TTS models](https://huggingface.co/models?pipeline_tag=text-to-speech&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using the Inference API. Here is a simple code snippet to get you started:
+The Hub contains over [1500 TTS models](https://huggingface.co/models?pipeline_tag=text-to-speech&sort=downloads) that you can use right away by trying out the widgets directly in the browser or calling the models as a service using Inference Endpoints. Here is a simple code snippet to get you started:

```python
import json
# …
```
2 changes: 1 addition & 1 deletion packages/widgets/src/hooks.server.ts
@@ -41,7 +41,7 @@ const handleSSO =
],
secret: env.OAUTH_CLIENT_SECRET,
/**
- * Get the access_token without an account in DB, to make calls to the inference API
+ * Get the access_token without an account in DB, to make calls to Inference Endpoints
*/
callbacks: {
jwt({ token, account, profile }) {
@@ -53,7 +53,7 @@
<div class="flex items-center text-lg">
{#if !isDisabled}
<IconLightning classNames="-ml-1 mr-1 text-yellow-500" />
-Inference API
+Inference Endpoints (serverless)
{:else}
Inference Examples
{/if}
@@ -17,18 +17,18 @@
$: modelTooBig = $modelLoadStates[model.id]?.state === "TooBig";

const state = {
-[LoadState.Loadable]: "This model can be loaded on the Inference API on-demand.",
-[LoadState.Loaded]: "This model is currently loaded and running on the Inference API.",
+[LoadState.Loadable]: "This model can be loaded on Inference Endpoints (serverless).",
+[LoadState.Loaded]: "This model is currently loaded and running on Inference Endpoints (serverless).",
[LoadState.TooBig]:
-"Model is too large to load onto the free Inference API. To try the model, launch it on Inference Endpoints instead.",
-[LoadState.Error]: "⚠️ This model could not be loaded by the inference API. ⚠️",
+"Model is too large to load onto Inference Endpoints (serverless). To try the model, launch it on Inference Endpoints (dedicated) instead.",
+[LoadState.Error]: "⚠️ This model could not be loaded on Inference Endpoints (serverless). ⚠️",
} as const;

const azureState = {
[LoadState.Loadable]: "This model can be loaded loaded on AzureML Managed Endpoint",
[LoadState.Loaded]: "This model is loaded and running on AzureML Managed Endpoint",
[LoadState.TooBig]:
"Model is too large to load onto the free Inference API. To try the model, launch it on Inference Endpoints instead.",
"Model is too large to load onto on Inference Endpoints (serverless). To try the model, launch it on Inference Endpoints (dedicated) instead.",
[LoadState.Error]: "⚠️ This model could not be loaded.",
} as const;

@@ -62,9 +62,10 @@
{:else if (model.inference === InferenceDisplayability.Yes || model.pipeline_tag === "reinforcement-learning") && !modelTooBig}
{@html getStatusReport($modelLoadStates[model.id], state)}
{:else if model.inference === InferenceDisplayability.ExplicitOptOut}
<span class="text-sm text-gray-500">Inference API has been turned off for this model.</span>
<span class="text-sm text-gray-500">Inference Endpoints (serverless) has been turned off for this model.</span>
{:else if model.inference === InferenceDisplayability.CustomCode}
<span class="text-sm text-gray-500">Inference API does not yet support model repos that contain custom code.</span
<span class="text-sm text-gray-500"
>Inference Endpoints (serverless) does not yet support model repos that contain custom code.</span
>
{:else if model.inference === InferenceDisplayability.LibraryNotDetected}
<span class="text-sm text-gray-500">
@@ -82,21 +83,21 @@
</span>
{:else if model.inference === InferenceDisplayability.PipelineLibraryPairNotSupported}
<span class="text-sm text-gray-500">
-Inference API does not yet support {model.library_name} models for this pipeline type.
+Inference Endpoints (serverless) does not yet support {model.library_name} models for this pipeline type.
</span>
{:else if modelTooBig}
<span class="text-sm text-gray-500">
-Model is too large to load onto the free Inference API. To try the model, launch it on <a
+Model is too large to load in Inference Endpoints (serverless). To try the model, launch it on <a
class="underline"
href="https://ui.endpoints.huggingface.co/new?repository={encodeURIComponent(model.id)}"
>Inference Endpoints</a
>Inference Endpoints (dedicated)</a
>
instead.
</span>
{:else}
<!-- added as a failsafe but this case cannot currently happen -->
<span class="text-sm text-gray-500">
-Inference API is disabled for an unknown reason. Please open a
+Inference Endpoints (serverless) is disabled for an unknown reason. Please open a
<a class="color-inherit underline" href="/{model.id}/discussions/new">Discussion in the Community tab</a>.
</span>
{/if}
@@ -5,13 +5,13 @@
<div class="blankslate">
<div class="subtitle text-xs text-gray-500">
<div class="loaded mt-2 {currentState !== 'loaded' ? 'hidden' : ''}">
-This model is currently loaded and running on the Inference API.
+This model is currently loaded and running on Inference Endpoints (serverless).
</div>
<div class="error mt-2 {currentState !== 'error' ? 'hidden' : ''}">
-⚠️ This model could not be loaded by the inference API. ⚠️
+⚠️ This model could not be loaded in Inference Endpoints (serverless). ⚠️
</div>
<div class="unknown mt-2 {currentState !== 'unknown' ? 'hidden' : ''}">
-This model can be loaded on the Inference API on-demand.
+This model can be loaded in Inference Endpoints (serverless).
</div>
</div>
</div>
@@ -84,7 +84,7 @@ export async function callInferenceApi<T>(
requestBody: Record<string, unknown>,
apiToken = "",
outputParsingFn: (x: unknown) => T,
-waitForModel = false, // If true, the server will only respond once the model has been loaded on the inference API,
+waitForModel = false, // If true, the server will only respond once the model has been loaded on Inference Endpoints (serverless)
includeCredentials = false,
isOnLoadCall = false, // If true, the server will try to answer from cache and not do anything if not
useCache = true
@@ -184,7 +184,7 @@ export async function getModelLoadInfo(
}
}

-// Extend Inference API requestBody with user supplied Inference API parameters
+// Extend requestBody with user supplied parameters for Inference Endpoints (serverless)
export function addInferenceParameters(requestBody: Record<string, unknown>, model: ModelData): void {
const inference = model?.cardData?.inference;
if (typeof inference === "object") {
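
A sketch of how this helper might be used; the model object shape follows `ModelData`, and the exact merge behavior is up to the implementation:

```ts
const requestBody: Record<string, unknown> = { inputs: "Hello world" };
const model = {
  id: "gpt2",
  cardData: { inference: { parameters: { temperature: 0.7 } } },
} as ModelData; // shape illustrative

// Carries the card's `inference/parameters` over onto the request body.
addInferenceParameters(requestBody, model);
```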