From 4f0d3473485e8effe58fdf9f12260d3627d47091 Mon Sep 17 00:00:00 2001
From: Yi Xu
Date: Mon, 17 Jul 2023 11:45:00 -0700
Subject: [PATCH] Deployed d29f1bf with MkDocs version: 1.4.3

---
 api/error_handling/index.html |  35 +++---
 api/python_client/index.html  | 219 +++++++++++++---
 model_zoo/index.html          |   4 +-
 search/search_index.json      |   2 +-
 sitemap.xml.gz                | Bin 127 -> 127 bytes
 5 files changed, 106 insertions(+), 154 deletions(-)

diff --git a/api/error_handling/index.html b/api/error_handling/index.html
index 510138f2d..f513a5040 100644
--- a/api/error_handling/index.html
+++ b/api/error_handling/index.html
@@ -616,7 +616,7 @@

Error handling - BadRequestError + BadRequestError

@@ -624,14 +624,15 @@

-

- Bases: Exception

+

+ Bases: Exception

Corresponds to HTTP 400. Indicates that the request had inputs that were invalid. The user should not attempt to retry the request without changing the inputs.

+
@@ -657,7 +658,7 @@

- UnauthorizedError + UnauthorizedError

@@ -665,13 +666,14 @@

-

- Bases: Exception

+

+ Bases: Exception

Corresponds to HTTP 401. This means that no valid API key was provided.

+
@@ -697,7 +699,7 @@

- NotFoundError + NotFoundError

@@ -705,8 +707,8 @@

-

- Bases: Exception

+

+ Bases: Exception

Corresponds to HTTP 404. This means that the resource (e.g. a Model, FineTune, etc.) could not be found. @@ -715,6 +717,7 @@

the user does not have access to.

+
@@ -740,7 +743,7 @@

- RateLimitExceededError + RateLimitExceededError

@@ -748,13 +751,14 @@

-

- Bases: Exception

+

+ Bases: Exception

Corresponds to HTTP 429. Too many requests hit the API too quickly. We recommend an exponential backoff for retries.

+
@@ -780,7 +784,7 @@

- ServerError + ServerError

@@ -788,13 +792,14 @@

-

- Bases: Exception

+

+ Bases: Exception

Corresponds to HTTP 5xx errors on the server.

+
diff --git a/api/python_client/index.html b/api/python_client/index.html
index 6025265cd..4d75b173a 100644
--- a/api/python_client/index.html
+++ b/api/python_client/index.html
@@ -1136,15 +1136,15 @@

🐍 Python Client API Reference - Completion + Completion

-

- Bases: APIEngine

+

+ Bases: APIEngine

Completion API. This API is used to generate text completions.

@@ -1157,6 +1157,7 @@

stream token responses or not.

+
@@ -1169,13 +1170,12 @@

-

- create + create @@ -1183,25 +1183,13 @@

-
create(
-    model_name: str,
-    prompt: str,
-    max_new_tokens: int = 20,
-    temperature: float = 0.2,
-    timeout: int = 10,
-    stream: bool = False,
-) -> Union[
-    CompletionSyncV1Response,
-    Iterator[CompletionStreamV1Response],
-]
+
create(model_name: str, prompt: str, max_new_tokens: int = 20, temperature: float = 0.2, timeout: int = 10, stream: bool = False) -> Union[CompletionSyncV1Response, Iterator[CompletionStreamV1Response]]
 

Creates a completion for the provided prompt and parameters synchronously.

- -

Parameters:

@@ -1305,8 +1293,6 @@

- -

Returns:

@@ -1388,13 +1374,12 @@

-

- acreate + acreate @@ -1403,25 +1388,13 @@

-
acreate(
-    model_name: str,
-    prompt: str,
-    max_new_tokens: int = 20,
-    temperature: float = 0.2,
-    timeout: int = 10,
-    stream: bool = False,
-) -> Union[
-    CompletionSyncV1Response,
-    AsyncIterable[CompletionStreamV1Response],
-]
+
acreate(model_name: str, prompt: str, max_new_tokens: int = 20, temperature: float = 0.2, timeout: int = 10, stream: bool = False) -> Union[CompletionSyncV1Response, AsyncIterable[CompletionStreamV1Response]]
 

Creates a completion for the provided prompt and parameters asynchronously (with asyncio).

- -

Parameters:

@@ -1525,8 +1498,6 @@

- -

Returns:

@@ -1629,20 +1600,21 @@

- CompletionOutput + CompletionOutput

-

- Bases: BaseModel

+

+ Bases: BaseModel

Represents the output of a completion request to a model.

+
@@ -1658,7 +1630,7 @@

- text + text @@ -1681,7 +1653,7 @@

- num_prompt_tokens + num_prompt_tokens @@ -1704,7 +1676,7 @@

- num_completion_tokens + num_completion_tokens @@ -1737,20 +1709,21 @@

- CompletionSyncV1Response + CompletionSyncV1Response

-

- Bases: BaseModel

+

+ Bases: BaseModel

Response object for a synchronous prompt completion.

+
@@ -1766,7 +1739,7 @@

- outputs + outputs @@ -1789,7 +1762,7 @@

- status + status @@ -1812,12 +1785,12 @@

- traceback + traceback - class-attribute instance-attribute + class-attribute

@@ -1846,15 +1819,16 @@

- CompletionStreamOutput + CompletionStreamOutput

-

- Bases: BaseModel

+

+ Bases: BaseModel

+ @@ -1873,7 +1847,7 @@

- text + text @@ -1896,7 +1870,7 @@

- finished + finished @@ -1919,12 +1893,12 @@

- num_prompt_tokens + num_prompt_tokens - class-attribute instance-attribute + class-attribute

@@ -1943,12 +1917,12 @@

- num_completion_tokens + num_completion_tokens - class-attribute instance-attribute + class-attribute

@@ -1977,20 +1951,21 @@

- CompletionStreamV1Response + CompletionStreamV1Response

-

- Bases: BaseModel

+

+ Bases: BaseModel

Response object for a stream prompt completion task.

+
@@ -2006,12 +1981,12 @@

- output + output - class-attribute instance-attribute + class-attribute

@@ -2030,7 +2005,7 @@

- status + status @@ -2053,12 +2028,12 @@

- traceback + traceback - class-attribute instance-attribute + class-attribute

@@ -2087,15 +2062,15 @@

- FineTune + FineTune

-

- Bases: APIEngine

+

+ Bases: APIEngine

FineTune API. This API is used to fine-tune models.

@@ -2103,6 +2078,7 @@

Scale llm-engine provides APIs to create fine-tunes on a base-model with training & validation data-sets. APIs are also provided to list, cancel and retrieve fine-tuning jobs.

+
@@ -2115,13 +2091,12 @@

-

- create + create @@ -2129,21 +2104,13 @@

-
create(
-    model: str,
-    training_file: str,
-    validation_file: Optional[str] = None,
-    hyperparameters: Optional[Dict[str, str]] = None,
-    suffix: Optional[str] = None,
-) -> CreateFineTuneResponse
+
create(model: str, training_file: str, validation_file: Optional[str] = None, hyperparameters: Optional[Dict[str, str]] = None, suffix: Optional[str] = None) -> CreateFineTuneResponse
 

Creates a job that fine-tunes a specified model from a given dataset.

- -

Parameters:

@@ -2228,8 +2195,6 @@

- -

Returns:

@@ -2299,13 +2264,12 @@

-

- list + list @@ -2320,8 +2284,6 @@

List fine-tuning jobs

- -

Returns:

@@ -2370,13 +2332,12 @@

-

- retrieve + retrieve @@ -2391,8 +2352,6 @@

Get status of a fine-tuning job

- -

Parameters:

@@ -2421,8 +2380,6 @@

- -

Returns:

@@ -2468,13 +2425,12 @@

-

- cancel + cancel @@ -2489,8 +2445,6 @@

Cancel a fine-tuning job

- -

Parameters:

@@ -2519,8 +2473,6 @@

- -

Returns:

@@ -2575,20 +2527,21 @@

- CreateFineTuneResponse + CreateFineTuneResponse

-

- Bases: BaseModel

+

+ Bases: BaseModel

Response object for creating a FineTune.

+
@@ -2604,18 +2557,16 @@

- fine_tune_id + fine_tune_id - class-attribute instance-attribute + class-attribute

-
fine_tune_id: str = Field(
-    ..., description="ID of the created fine-tuning job."
-)
+
fine_tune_id: str = Field(Ellipsis, description='ID of the created fine-tuning job.')
 
@@ -2640,20 +2591,21 @@

- GetFineTuneResponse + GetFineTuneResponse

-

- Bases: BaseModel

+

+ Bases: BaseModel

Response object for retrieving a FineTune.

+
@@ -2669,18 +2621,16 @@

- fine_tune_id + fine_tune_id - class-attribute instance-attribute + class-attribute

-
fine_tune_id: str = Field(
-    ..., description="ID of the requested job."
-)
+
fine_tune_id: str = Field(Ellipsis, description='ID of the requested job.')
 
@@ -2695,18 +2645,16 @@

- status + status - class-attribute instance-attribute + class-attribute

-
status: BatchJobStatus = Field(
-    ..., description="Status of the requested job."
-)
+
status: BatchJobStatus = Field(Ellipsis, description='Status of the requested job.')
 
@@ -2731,20 +2679,21 @@

- ListFineTunesResponse + ListFineTunesResponse

-

- Bases: BaseModel

+

+ Bases: BaseModel

Response object for listing FineTunes.

+
@@ -2760,19 +2709,16 @@

- jobs + jobs - class-attribute instance-attribute + class-attribute

-
jobs: List[GetFineTuneResponse] = Field(
-    ...,
-    description="List of fine-tuning jobs and their statuses.",
-)
+
jobs: List[GetFineTuneResponse] = Field(Ellipsis, description='List of fine-tuning jobs and their statuses.')
 
@@ -2797,20 +2743,21 @@

- CancelFineTuneResponse + CancelFineTuneResponse

-

- Bases: BaseModel

+

+ Bases: BaseModel

Response object for cancelling a FineTune.

+
@@ -2826,18 +2773,16 @@

- success + success - class-attribute instance-attribute + class-attribute

-
success: bool = Field(
-    ..., description="Whether cancellation was successful."
-)
+
success: bool = Field(Ellipsis, description='Whether cancellation was successful.')
 
diff --git a/model_zoo/index.html b/model_zoo/index.html
index 32abb8ddc..77084ead2 100644
--- a/model_zoo/index.html
+++ b/model_zoo/index.html
@@ -554,7 +554,9 @@

🦙 Public Model ZooCompletion API.

+Completion API.

+

The specified models can be fine-tuned with the +FineTune API.

diff --git a/search/search_index.json b/search/search_index.json
index 7c6113ab2..6201ad3ea 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"\u26a1 LLM Engine \u26a1","text":"

The open source engine for fine-tuning large language models. LLM Engine is the easiest way to customize and serve LLMs. Use Scale's hosted version or run it in your own cloud.

"},{"location":"#quick-install","title":"\ud83d\udcbb Quick Install","text":"Install using pip
pip install scale-llm-engine\n
"},{"location":"#about","title":"\ud83e\udd14 About","text":"

Foundation models are emerging as the building blocks of AI. However, deploying these models to the cloud and fine-tuning them still requires infrastructure and ML expertise, and can be expensive.

LLM Engine is a Python library, CLI, and Helm chart that provides everything you need to fine-tune and serve foundation models in the cloud using Kubernetes. Key features include:

\ud83d\ude80 Ready-to-use Fine-Tuning and Inference APIs for your favorite models: LLM Engine comes with ready-to-use APIs for your favorite open-source models, including MPT, Falcon, and LLaMA. Use Scale-hosted endpoints or deploy to your own infrastructure.

\ud83d\udc33 Deploying from any docker image: Turn any Docker image into an auto-scaling deployment with simple APIs.

\ud83c\udf99\ufe0fOptimized Inference: LLM Engine provides inference APIs for streaming responses and dynamically batching inputs for higher throughput and lower latency.

\ud83e\udd17 Open-Source Integrations: Deploy any Huggingface model with a single command.

"},{"location":"#features-coming-soon","title":"\ud83d\udd25 Features Coming Soon","text":"

\u2744 Fast Cold-Start Times: To prevent GPUs from idling, LLM Engine automatically scales your model to zero when it's not in use and scales up within seconds, even for large foundation models.

\ud83d\udcb8 Cost-Optimized: Deploy AI models cheaper than commercial ones, including cold-start and warm-down times.

"},{"location":"faq/","title":"Frequently Asked Questions","text":""},{"location":"getting_started/","title":"\ud83d\ude80 Getting Started","text":"

To start using LLM Engine's public inference and fine-tuning APIs:

Install using pip / Install using conda
pip install scale-llm-engine\n
conda install scale-llm-engine -c conda-forge\n

Navigate to https://spellbook.scale.com where you will get a Scale API key on the settings page. Set this API key as the SCALE_API_KEY environment variable by adding the following line to your .zshrc or .bash_profile:

Set API key
export SCALE_API_KEY=\"[Your API key]\"\n

With your API key set, you can now send LLM Engine requests using the Python client:

Using the Python Client
from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.outputs[0].text)\n
"},{"location":"model_zoo/","title":"\ud83e\udd99 Public Model Zoo","text":"

Scale hosts the following models in a model zoo:

Model Name          | Inference APIs Available | Fine-tuning APIs Available
llama-7b            | \u2705                   | \u2705
falcon-7b           | \u2705                   |
falcon-7b-instruct  | \u2705                   |
falcon-40b          | \u2705                   |
falcon-40b-instruct | \u2705                   |
mpt-7b              | \u2705                   |
mpt-7b-instruct     | \u2705                   | \u2705
flan-t5-xxl         | \u2705                   |

Each of these models can be used with the Completion API.

"},{"location":"api/error_handling/","title":"Error handling","text":"

LLM Engine uses conventional HTTP response codes to indicate the success or failure of an API request. In general: codes in the 2xx range indicate success. Codes in the 4xx range indicate an error caused by the information provided (e.g. a given Model was not found, or an invalid temperature was specified). Codes in the 5xx range indicate an error with the LLM Engine servers.

In the Python client, errors are presented via a set of corresponding Exception classes, which should be caught and handled by the user accordingly.
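
For illustration, here is a minimal sketch of catching these exceptions around a completion call; the llmengine.errors import path is inferred from the class locations documented below, so treat it as an assumption:

from llmengine import Completion
from llmengine.errors import BadRequestError, RateLimitExceededError, ServerError

try:
    response = Completion.create(
        model_name="llama-7b",
        prompt="Hello, my name is",
        max_new_tokens=10,
    )
except BadRequestError:
    # HTTP 400: the inputs were invalid; fix the request before retrying.
    raise
except RateLimitExceededError:
    # HTTP 429: too many requests; back off exponentially, then retry.
    ...
except ServerError:
    # HTTP 5xx: a server-side problem; retrying later may succeed.
    ...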

"},{"location":"api/error_handling/#llmengine.errors.BadRequestError","title":"BadRequestError","text":"
BadRequestError(message: str)\n

Bases: Exception

Corresponds to HTTP 400. Indicates that the request had inputs that were invalid. The user should not attempt to retry the request without changing the inputs.

"},{"location":"api/error_handling/#llmengine.errors.UnauthorizedError","title":"UnauthorizedError","text":"
UnauthorizedError(message: str)\n

Bases: Exception

Corresponds to HTTP 401. This means that no valid API key was provided.

"},{"location":"api/error_handling/#llmengine.errors.NotFoundError","title":"NotFoundError","text":"
NotFoundError(message: str)\n

Bases: Exception

Corresponds to HTTP 404. This means that the resource (e.g. a Model, FineTune, etc.) could not be found. Note that this can also be returned in some cases where the object might exist, but the user does not have access to the object. This is done to avoid leaking information about the existence or nonexistence of said object that the user does not have access to.

"},{"location":"api/error_handling/#llmengine.errors.RateLimitExceededError","title":"RateLimitExceededError","text":"
RateLimitExceededError(message: str)\n

Bases: Exception

Corresponds to HTTP 429. Too many requests hit the API too quickly. We recommend an exponential backoff for retries.

"},{"location":"api/error_handling/#llmengine.errors.ServerError","title":"ServerError","text":"
ServerError(status_code: int, message: str)\n

Bases: Exception

Corresponds to HTTP 5xx errors on the server.

"},{"location":"api/langchain/","title":"\ud83e\udd9c Langchain","text":"

Coming soon!

"},{"location":"api/python_client/","title":"\ud83d\udc0d Python Client API Reference","text":""},{"location":"api/python_client/#llmengine.Completion","title":"Completion","text":"

Bases: APIEngine

Completion API. This API is used to generate text completions.

Language Models are trained to understand natural language and provide text outputs as a response to their inputs. The inputs are called prompts and the outputs are referred to as completions. LLMs take the input prompts and chunk them into smaller units called tokens to process and generate language. Tokens may include trailing spaces and even sub-words; this process is language dependent.

The Completions API can be run either synchronously or asynchronously (via Python asyncio); for each of these modes, you can also choose whether to stream token responses or not.

"},{"location":"api/python_client/#llmengine.completion.Completion.create","title":"create classmethod","text":"
create(\nmodel_name: str,\nprompt: str,\nmax_new_tokens: int = 20,\ntemperature: float = 0.2,\ntimeout: int = 10,\nstream: bool = False,\n) -> Union[\nCompletionSyncV1Response,\nIterator[CompletionStreamV1Response],\n]\n

Creates a completion for the provided prompt and parameters synchronously.

Parameters:

Name Type Description Default model_name str

Name of the model to use. See Model Zoo for a list of Models that are supported.

required prompt str

The prompt to generate completions for, encoded as a string.

required max_new_tokens int

The maximum number of tokens to generate in the completion.

The token count of your prompt plus max_new_tokens cannot exceed the model's context length. See Model Zoo for information on each supported model's context length.

20 temperature float

What sampling temperature to use, in the range (0, 1]. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

0.2 timeout int

Timeout in seconds. This is the maximum amount of time you are willing to wait for a response.

10 stream bool

Whether to stream the response. If true, the return type is an Iterator[CompletionStreamV1Response]. Otherwise, the return type is a CompletionSyncV1Response. When streaming, tokens will be sent as data-only server-sent events.

False

Returns:

Name Type Description response Union[CompletionSyncV1Response, Iterator[CompletionStreamV1Response]]

The generated response (if stream=False) or an iterator of response chunks (if stream=True)

Example request without token streaming
from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.json())\n
JSON Response
{\n\"status\": \"SUCCESS\",\n\"outputs\":\n[\n{\n\"text\": \"_______ and I am a _______\",\n\"num_prompt_tokens\": null,\n\"num_completion_tokens\": 10\n}\n],\n\"traceback\": null\n}\n
Example request with token streaming
from llmengine import Completion\nstream = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"why is the sky blue?\",\nmax_new_tokens=5,\ntemperature=0.2,\nstream=True,\n)\nfor response in stream:\nif response.output:\nprint(response.json())\n
JSON responses
{\"status\": \"SUCCESS\", \"output\": {\"text\": \"\\n\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 1 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"I\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 2 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" don\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 3 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"\u2019\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 4 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"t\", \"finished\": true, \"num_prompt_tokens\": null, \"num_completion_tokens\": 5 }, \"traceback\": null }\n
"},{"location":"api/python_client/#llmengine.completion.Completion.acreate","title":"acreate async classmethod","text":"
acreate(\nmodel_name: str,\nprompt: str,\nmax_new_tokens: int = 20,\ntemperature: float = 0.2,\ntimeout: int = 10,\nstream: bool = False,\n) -> Union[\nCompletionSyncV1Response,\nAsyncIterable[CompletionStreamV1Response],\n]\n

Creates a completion for the provided prompt and parameters asynchronously (with asyncio).

Parameters:

Name Type Description Default model_name str

Name of the model to use. See Model Zoo for a list of Models that are supported.

required prompt str

The prompt to generate completions for, encoded as a string.

required max_new_tokens int

The maximum number of tokens to generate in the completion.

The token count of your prompt plus max_new_tokens cannot exceed the model's context length. See Model Zoo for information on each supported model's context length.

20 temperature float

What sampling temperature to use, in the range (0, 1]. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

0.2 timeout int

Timeout in seconds. This is the maximum amount of time you are willing to wait for a response.

10 stream bool

Whether to stream the response. If true, the return type is an AsyncIterable[CompletionStreamV1Response]. Otherwise, the return type is a CompletionSyncV1Response. When streaming, tokens will be sent as data-only server-sent events.

False

Returns:

Name Type Description response Union[CompletionSyncV1Response, AsyncIterable[CompletionStreamV1Response]]

The generated response (if stream=False) or an async iterator of response chunks (if stream=True)

Example without token streaming
import asyncio\nfrom llmengine import Completion\nasync def main():\nresponse = await Completion.acreate(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.json())\nasyncio.run(main())\n
JSON response
{\n\"status\": \"SUCCESS\",\n\"outputs\":\n[\n{\n\"text\": \"_______, and I am a _____\",\n\"num_prompt_tokens\": null,\n\"num_completion_tokens\": 10\n}\n],\n\"traceback\": null\n}\n
Example with token streaming
import asyncio\nfrom llmengine import Completion\nasync def main():\nstream = await Completion.acreate(\nmodel_name=\"llama-7b\",\nprompt=\"why is the sky blue?\",\nmax_new_tokens=5,\ntemperature=0.2,\nstream=True,\n)\nasync for response in stream:\nif response.output:\nprint(response.json())\nasyncio.run(main())\n
JSON responses
{\"status\": \"SUCCESS\", \"output\": {\"text\": \"\\n\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 1}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"I\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 2}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" think\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 3}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" the\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 4}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" sky\", \"finished\": true, \"num_prompt_tokens\": null, \"num_completion_tokens\": 5}, \"traceback\": null}\n
"},{"location":"api/python_client/#llmengine.CompletionOutput","title":"CompletionOutput","text":"

Bases: BaseModel

Represents the output of a completion request to a model.

"},{"location":"api/python_client/#llmengine.data_types.CompletionOutput.text","title":"text instance-attribute","text":"
text: str\n

The text of the completion.

"},{"location":"api/python_client/#llmengine.data_types.CompletionOutput.num_prompt_tokens","title":"num_prompt_tokens instance-attribute","text":"
num_prompt_tokens: Optional[int]\n

Number of tokens in the prompt.

"},{"location":"api/python_client/#llmengine.data_types.CompletionOutput.num_completion_tokens","title":"num_completion_tokens instance-attribute","text":"
num_completion_tokens: int\n

Number of tokens in the completion.

"},{"location":"api/python_client/#llmengine.CompletionSyncV1Response","title":"CompletionSyncV1Response","text":"

Bases: BaseModel

Response object for a synchronous prompt completion.

"},{"location":"api/python_client/#llmengine.data_types.CompletionSyncV1Response.outputs","title":"outputs instance-attribute","text":"
outputs: List[CompletionOutput]\n

List of completion outputs.

"},{"location":"api/python_client/#llmengine.data_types.CompletionSyncV1Response.status","title":"status instance-attribute","text":"
status: TaskStatus\n

Task status.

"},{"location":"api/python_client/#llmengine.data_types.CompletionSyncV1Response.traceback","title":"traceback class-attribute instance-attribute","text":"
traceback: Optional[str] = None\n

Traceback if the task failed.
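
Putting these fields together, a sketch of reading a synchronous response defensively; comparing status against the "SUCCESS" string follows the JSON examples above and is an assumption about TaskStatus values:

from llmengine import Completion

response = Completion.create(
    model_name="llama-7b",
    prompt="Hello, my name is",
    max_new_tokens=10,
)
if response.status == "SUCCESS":
    for output in response.outputs:
        # Each CompletionOutput carries the text and its token counts.
        print(output.text, output.num_completion_tokens)
else:
    # traceback is only populated when the task failed.
    print(response.traceback)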

"},{"location":"api/python_client/#llmengine.CompletionStreamOutput","title":"CompletionStreamOutput","text":"

Bases: BaseModel

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.text","title":"text instance-attribute","text":"
text: str\n

The text of the completion.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.finished","title":"finished instance-attribute","text":"
finished: bool\n

Whether the completion is finished.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.num_prompt_tokens","title":"num_prompt_tokens class-attribute instance-attribute","text":"
num_prompt_tokens: Optional[int] = None\n

Number of tokens in the prompt.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.num_completion_tokens","title":"num_completion_tokens class-attribute instance-attribute","text":"
num_completion_tokens: Optional[int] = None\n

Number of tokens in the completion.

"},{"location":"api/python_client/#llmengine.CompletionStreamV1Response","title":"CompletionStreamV1Response","text":"

Bases: BaseModel

Response object for a stream prompt completion task.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamV1Response.output","title":"output class-attribute instance-attribute","text":"
output: Optional[CompletionStreamOutput] = None\n

Completion output.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamV1Response.status","title":"status instance-attribute","text":"
status: TaskStatus\n

Task status.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamV1Response.traceback","title":"traceback class-attribute instance-attribute","text":"
traceback: Optional[str] = None\n

Traceback if the task failed.

"},{"location":"api/python_client/#llmengine.FineTune","title":"FineTune","text":"

Bases: APIEngine

FineTune API. This API is used to fine-tune models.

Fine-tuning is a process where the LLM is further trained on a task-specific dataset, allowing the model to adjust its parameters to better align with the task at hand. Fine-tuning involves the supervised training phase, where prompt/response pairs are provided to optimize the performance of the LLM.

Scale llm-engine provides APIs to create fine-tunes on a base-model with training & validation data-sets. APIs are also provided to list, cancel and retrieve fine-tuning jobs.

"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.create","title":"create classmethod","text":"
create(\nmodel: str,\ntraining_file: str,\nvalidation_file: Optional[str] = None,\nhyperparameters: Optional[Dict[str, str]] = None,\nsuffix: Optional[str] = None,\n) -> CreateFineTuneResponse\n

Creates a job that fine-tunes a specified model from a given dataset.

Parameters:

Name Type Description Default model `str`

The name of the base model to fine-tune. See Model Zoo for the list of available models to fine-tune.

required training_file `str`

Path to file of training dataset

required validation_file `Optional[str]`

Path to file of validation dataset

None hyperparameters `str`

Hyperparameters

None suffix `Optional[str]`

A string that will be added to your fine-tuned model name.

None

Returns:

Name Type Description CreateFineTuneResponse CreateFineTuneResponse

an object that contains the ID of the created fine-tuning job

The model is the name of the base model to fine-tune (see Model Zoo for available models). The training file should consist of prompt and response pairs. Your data must be formatted as a CSV file that includes two columns: prompt and response. A maximum of 100,000 rows of data is currently supported. At least 200 rows of data are recommended to start to see benefits from fine-tuning.

Here is an example script to create a 5-row CSV of properly formatted data for fine-tuning an airline question answering bot:

import csv\n# Define data\ndata = [\n(\"What is your policy on carry-on luggage?\", \"Our policy allows each passenger to bring one piece of carry-on luggage and one personal item such as a purse or briefcase. The maximum size for carry-on luggage is 22 x 14 x 9 inches.\"),\n(\"How can I change my flight?\", \"You can change your flight through our website or mobile app. Go to 'Manage my booking' section, enter your booking reference and last name, then follow the prompts to change your flight.\"),\n(\"What meals are available on my flight?\", \"We offer a variety of meals depending on the flight's duration and route. These can range from snacks and light refreshments to full-course meals on long-haul flights. Specific meal options can be viewed during the booking process.\"),\n(\"How early should I arrive at the airport before my flight?\", \"We recommend arriving at least two hours before domestic flights and three hours before international flights.\"),\n(\"Can I select my seat in advance?\", \"Yes, you can select your seat during the booking process or afterwards via the 'Manage my booking' section on our website or mobile app.\"),\n]\n# Write data to a CSV file\nwith open('customer_service_data.csv', 'w', newline='') as file:\nwriter = csv.writer(file)\nwriter.writerow([\"prompt\", \"response\"])\nwriter.writerows(data)\n
Example code for fine-tuning
from llmengine import FineTune\nresponse = FineTune.create(\nmodel=\"llama-7b\",\ntraining_file=\"s3://my-bucket/path/to/training-file.csv\",\n)\nprint(response.json())\n
JSON Response
{\n\"fine_tune_id\": \"ft_abc123\"\n}\n
"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.list","title":"list classmethod","text":"
list() -> ListFineTunesResponse\n

List fine-tuning jobs

Returns:

Name Type Description ListFineTunesResponse ListFineTunesResponse

an object that contains a list of all fine-tuning jobs and their statuses

Example
from llmengine import FineTune\nresponse = FineTune.list()\nprint(response.json())\n
JSON Response
[\n{\n\"fine_tune_id\": \"ft_abc123\",\n\"status\": \"RUNNING\"\n},\n{\n\"fine_tune_id\": \"ft_def456\",\n\"status\": \"SUCCESS\"\n}\n]\n
"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.retrieve","title":"retrieve classmethod","text":"
retrieve(fine_tune_id: str) -> GetFineTuneResponse\n

Get status of a fine-tuning job

Parameters:

Name Type Description Default fine_tune_id `str`

ID of the fine-tuning job

required

Returns:

Name Type Description GetFineTuneResponse GetFineTuneResponse

an object that contains the ID and status of the requested job

Example
from llmengine import FineTune\nresponse = FineTune.retrieve(\nfine_tune_id=\"ft_abc123\",\n)\nprint(response.json())\n
JSON Response
{\n\"fine_tune_id\": \"ft_abc123\",\n\"status\": \"RUNNING\"\n}\n
"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.cancel","title":"cancel classmethod","text":"
cancel(fine_tune_id: str) -> CancelFineTuneResponse\n

Cancel a fine-tuning job

Parameters:

Name Type Description Default fine_tune_id `str`

ID of the fine-tuning job

required

Returns:

Name Type Description CancelFineTuneResponse CancelFineTuneResponse

an object that contains whether the cancellation was successful

Example
from llmengine import FineTune\nresponse = FineTune.cancel(fine_tune_id=\"ft_abc123\")\nprint(response.json())\n
JSON Response
{\n\"success\": \"true\"\n}\n
"},{"location":"api/python_client/#llmengine.CreateFineTuneResponse","title":"CreateFineTuneResponse","text":"

Bases: BaseModel

Response object for creating a FineTune.

"},{"location":"api/python_client/#llmengine.data_types.CreateFineTuneResponse.fine_tune_id","title":"fine_tune_id class-attribute instance-attribute","text":"
fine_tune_id: str = Field(\n..., description=\"ID of the created fine-tuning job.\"\n)\n

The ID of the FineTune.

"},{"location":"api/python_client/#llmengine.GetFineTuneResponse","title":"GetFineTuneResponse","text":"

Bases: BaseModel

Response object for retrieving a FineTune.

"},{"location":"api/python_client/#llmengine.data_types.GetFineTuneResponse.fine_tune_id","title":"fine_tune_id class-attribute instance-attribute","text":"
fine_tune_id: str = Field(\n..., description=\"ID of the requested job.\"\n)\n

The ID of the FineTune.

"},{"location":"api/python_client/#llmengine.data_types.GetFineTuneResponse.status","title":"status class-attribute instance-attribute","text":"
status: BatchJobStatus = Field(\n..., description=\"Status of the requested job.\"\n)\n

The status of the FineTune job.

"},{"location":"api/python_client/#llmengine.ListFineTunesResponse","title":"ListFineTunesResponse","text":"

Bases: BaseModel

Response object for listing FineTunes.

"},{"location":"api/python_client/#llmengine.data_types.ListFineTunesResponse.jobs","title":"jobs class-attribute instance-attribute","text":"
jobs: List[GetFineTuneResponse] = Field(\n...,\ndescription=\"List of fine-tuning jobs and their statuses.\",\n)\n

A list of FineTunes, represented as GetFineTuneResponses.

"},{"location":"api/python_client/#llmengine.CancelFineTuneResponse","title":"CancelFineTuneResponse","text":"

Bases: BaseModel

Response object for cancelling a FineTune.

"},{"location":"api/python_client/#llmengine.data_types.CancelFineTuneResponse.success","title":"success class-attribute instance-attribute","text":"
success: bool = Field(\n..., description=\"Whether cancellation was successful.\"\n)\n

Whether the cancellation succeeded.

"},{"location":"guides/completions/","title":"Completions","text":"

Language Models are trained to understand natural language and provide text outputs as a response to their inputs. The inputs are called prompts and the outputs are referred to as completions. LLMs take the input prompts and chunk them into smaller units called tokens to process and generate language. Tokens may include trailing spaces and even sub-words; this process is language dependent.

Scale llm-engine provides access to open source language models (see Model Zoo) that can be used for producing completions to prompts.

"},{"location":"guides/completions/#completion-api-call","title":"Completion API call","text":"

An example API call looks as follows:

from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\n

The model_name is the LLM to be used (see Model Zoo). The prompt is the main input for the LLM to respond to. The max_new_tokens parameter is the maximum number of tokens to generate in the completion. The temperature is the sampling temperature to use. Higher values make the output more random, while lower values will make it more focused and deterministic.

See the full API reference documentation to learn more.

"},{"location":"guides/completions/#completion-api-response","title":"Completion API response","text":"

An example Completion API response looks as follows:

{\n\"outputs\": [\n{\n\"text\": \"_______ and I am a _______\",\n\"num_completion_tokens\": 10\n}\n]\n}\n

In Python, the response is of type CompletionSyncV1Response, which maps to the above JSON structure.

print( response.outputs[0].text )\n
"},{"location":"guides/completions/#token-streaming","title":"Token streaming","text":"

The Completions API supports token streaming to reduce perceived latency for certain applications. When streaming, tokens will be sent as data-only server-sent events.

To enable token streaming, pass stream=True to either Completion.create or Completion.acreate.

An example of token streaming using the synchronous Completions API looks as follows:

from llmengine import Completion\nstream = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"why is the sky blue?\",\nmax_new_tokens=5,\ntemperature=0.2,\nstream=True,\n)\nfor response in stream:\nif response.output:\nprint(response.json())\n
"},{"location":"guides/completions/#async-requests","title":"Async requests","text":"

The Python client supports asyncio for creating Completions. Use Completion.acreate instead of Completion.create to utilize async processing. The function signatures are otherwise identical.

An example of async Completions looks as follows:

import asyncio\nfrom llmengine import Completion\nasync def main():\nresponse = await Completion.acreate(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.json())\nasyncio.run(main())\n
"},{"location":"guides/completions/#which-model-should-i-use","title":"Which model should I use?","text":"

See the Model Zoo for more information on best practices for which model to use for Completions.

"},{"location":"guides/fine_tuning/","title":"Fine-tuning","text":"

Learn how to customize your models on your data with fine-tuning.

"},{"location":"guides/fine_tuning/#introduction","title":"Introduction","text":"

Fine-tuning helps improve model performance by training on specific examples of prompts and desired responses. LLMs are initially trained on data collected from the entire internet. With fine-tuning, LLMs can be optimized to perform better in a specific domain by learning from examples for that domain. Smaller LLMs that have been fine-tuned on a specific use case often outperform larger ones that were trained more generally.

Fine-tuning allows for:

  1. Higher quality results than prompt engineering alone
  2. Cost savings through shorter prompts
  3. The ability to reach equivalent accuracy with a smaller model
  4. Lower latency at inference time
  5. The chance to show an LLM more examples than can fit in a single context window

LLM Engine's fine-tuning API lets you fine-tune various open source LLMs on your own data and then make inference calls to the resulting LLM. For more specific details, see the fine-tuning API Python client reference.

"},{"location":"guides/fine_tuning/#producing-high-quality-data-for-fine-tuning","title":"Producing high quality data for fine-tuning","text":"

The training data for fine-tuning should consist of prompt and response pairs.

As a rule of thumb, you should expect to see linear improvements in your fine-tuned model's quality with each doubling of the dataset size. Having high-quality data is also essential to improving performance. For every linear increase in the error rate in your training data, you may encounter a roughly quadratic increase in your fine-tuned model's error rate.

High-quality data is critical to achieving improved model performance, and in several cases requires experts to generate and prepare it; the breadth and diversity of the data are highly important. Scale's Data Engine can help prepare such high-quality, diverse data sets - more information here.

"},{"location":"guides/fine_tuning/#preparing-data","title":"Preparing data","text":"

Your data must be formatted as a CSV file that includes two columns: prompt and response. A maximum of 100,000 rows of data is currently supported. At least 200 rows of data are recommended to start to see benefits from fine-tuning.

Here is an example script to create a 50-row CSV of properly formatted data for fine-tuning an airline question answering bot:

Creating a sample dataset
import csv\n# Define data\ndata = [\n(\"What is your policy on carry-on luggage?\", \"Our policy allows each passenger to bring one piece of carry-on luggage and one personal item such as a purse or briefcase. The maximum size for carry-on luggage is 22 x 14 x 9 inches.\"),\n(\"How can I change my flight?\", \"You can change your flight through our website or mobile app. Go to 'Manage my booking' section, enter your booking reference and last name, then follow the prompts to change your flight.\"),\n(\"What meals are available on my flight?\", \"We offer a variety of meals depending on the flight's duration and route. These can range from snacks and light refreshments to full-course meals on long-haul flights. Specific meal options can be viewed during the booking process.\"),\n(\"How early should I arrive at the airport before my flight?\", \"We recommend arriving at least two hours before domestic flights and three hours before international flights.\"),\n(\"Can I select my seat in advance?\", \"Yes, you can select your seat during the booking process or afterwards via the 'Manage my booking' section on our website or mobile app.\"),\n(\"What should I do if my luggage is lost?\", \"If your luggage is lost, please report this immediately at our 'Lost and Found' counter at the airport. We will assist you in tracking your luggage.\"),\n(\"Do you offer special assistance for passengers with disabilities?\", \"Yes, we offer special assistance for passengers with disabilities. Please notify us of your needs at least 48 hours prior to your flight.\"),\n(\"Can I bring my pet on the flight?\", \"Yes, we allow small pets in the cabin, and larger pets in the cargo hold. Please check our pet policy for more details.\"),\n(\"What is your policy on flight cancellations?\", \"In case of flight cancellations, we aim to notify passengers as early as possible and offer either a refund or a rebooking on the next available flight.\"),\n(\"Can I get a refund if I cancel my flight?\", \"Refunds depend on the type of ticket purchased. Please check our cancellation policy for details. Non-refundable tickets, however, are typically not eligible for refunds unless due to extraordinary circumstances.\"),\n(\"How can I check-in for my flight?\", \"You can check-in for your flight either online, through our mobile app, or at the airport. Online and mobile app check-in opens 24 hours before departure and closes 90 minutes before.\"),\n(\"Do you offer free meals on your flights?\", \"Yes, we serve free meals on all long-haul flights. For short-haul flights, we offer a complimentary drink and snack. Special meal requests should be made at least 48 hours before departure.\"),\n(\"Can I use my electronic devices during the flight?\", \"Small electronic devices can be used throughout the flight in flight mode. Larger devices like laptops may be used above 10,000 feet.\"),\n(\"How much baggage can I check-in?\", \"The checked baggage allowance depends on the class of travel and route. The details would be mentioned on your ticket, or you can check on our website.\"),\n(\"How can I request for a wheelchair?\", \"To request a wheelchair or any other special assistance, please call our customer service at least 48 hours before your flight.\"),\n(\"Do I get a discount for group bookings?\", \"Yes, we offer discounts on group bookings of 10 or more passengers. Please contact our group bookings team for more information.\"),\n(\"Do you offer Wi-fi on your flights?\", \"Yes, we offer complimentary Wi-fi on select flights. 
You can check the availability during the booking process.\"),\n(\"What is the minimum connecting time between flights?\", \"The minimum connecting time varies depending on the airport and whether your flight is international or domestic. Generally, it's recommended to allow at least 45-60 minutes for domestic connections and 60-120 minutes for international.\"),\n(\"Do you offer duty-free shopping on international flights?\", \"Yes, we have a selection of duty-free items that you can pre-order on our website or purchase onboard on international flights.\"),\n(\"Can I upgrade my ticket to business class?\", \"Yes, you can upgrade your ticket through the 'Manage my booking' section on our website or by contacting our customer service. The availability and costs depend on the specific flight.\"),\n(\"Can unaccompanied minors travel on your flights?\", \"Yes, we do accommodate unaccompanied minors on our flights, with special services to ensure their safety and comfort. Please contact our customer service for more details.\"),\n(\"What amenities do you provide in business class?\", \"In business class, you will enjoy additional legroom, reclining seats, premium meals, priority boarding and disembarkation, access to our business lounge, extra baggage allowance, and personalized service.\"),\n(\"How much does extra baggage cost?\", \"Extra baggage costs vary based on flight route and the weight of the baggage. Please refer to our 'Extra Baggage' section on the website for specific rates.\"),\n(\"Are there any specific rules for carrying liquids in carry-on?\", \"Yes, liquids carried in your hand luggage must be in containers of 100 ml or less and they should all fit into a single, transparent, resealable plastic bag of 20 cm x 20 cm.\"),\n(\"What if I have a medical condition that requires special assistance during the flight?\", \"We aim to make the flight comfortable for all passengers. If you have a medical condition that may require special assistance, please contact our \u2018special services\u2019 team 48 hours before your flight.\"),\n(\"What in-flight entertainment options are available?\", \"We offer a range of in-flight entertainment options including a selection of movies, TV shows, music, and games, available on your personal seat-back screen.\"),\n(\"What types of payment methods do you accept?\", \"We accept credit/debit cards, PayPal, bank transfers, and various other forms of payment. The available options may vary depending on the country of departure.\"),\n(\"How can I earn and redeem frequent flyer miles?\", \"You can earn miles for every journey you take with us or our partner airlines. These miles can be redeemed for flight tickets, upgrades, or various other benefits. To earn and redeem miles, you need to join our frequent flyer program.\"),\n(\"Can I bring a stroller for my baby?\", \"Yes, you can bring a stroller for your baby. It can be checked in for free and will normally be given back to you at the aircraft door upon arrival.\"),\n(\"What age does my child have to be to qualify as an unaccompanied minor?\", \"Children aged between 5 and 12 years who are traveling alone are considered unaccompanied minors. Our team provides special care for these children from departure to arrival.\"),\n(\"What documents do I need to travel internationally?\", \"For international travel, you need a valid passport and may also require visas, depending on your destination and your country of residence. 
It's important to check the specific requirements before you travel.\"),\n(\"What happens if I miss my flight?\", \"If you miss your flight, please contact our customer service immediately. Depending on the circumstances, you may be able to rebook on a later flight, but additional fees may apply.\"),\n(\"Can I travel with my musical instrument?\", \"Yes, small musical instruments can be brought on board as your one carry-on item. Larger instruments must be transported in the cargo, or if small enough, a seat may be purchased for them.\"),\n(\"Do you offer discounts for children or infants?\", \"Yes, children aged 2-11 traveling with an adult usually receive a discount on the fare. Infants under the age of 2 who do not occupy a seat can travel for a reduced fare or sometimes for free.\"),\n(\"Is smoking allowed on your flights?\", \"No, all our flights are non-smoking for the comfort and safety of all passengers.\"),\n(\"Do you have family seating?\", \"Yes, we offer the option to seat families together. You can select seats during booking or afterwards through the 'Manage my booking' section on the website.\"),\n(\"Is there any discount for senior citizens?\", \"Some flights may offer a discount for senior citizens. Please check our website or contact customer service for accurate information.\"),\n(\"What items are prohibited on your flights?\", \"Prohibited items include, but are not limited to, sharp objects, firearms, explosive materials, and certain chemicals. You can find a comprehensive list on our website under the 'Security Regulations' section.\"),\n(\"Can I purchase a ticket for someone else?\", \"Yes, you can purchase a ticket for someone else. You'll need their correct name as it appears on their government-issued ID, and their correct travel dates.\"),\n(\"What is the process for lost and found items on the plane?\", \"If you realize you forgot an item on the plane, report it as soon as possible to our lost and found counter. We will make every effort to locate and return your item.\"),\n(\"Can I request a special meal?\", \"Yes, we offer a variety of special meals to accommodate dietary restrictions. Please request your preferred meal at least 48 hours prior to your flight.\"),\n(\"Is there a weight limit for checked baggage?\", \"Yes, luggage weight limits depend on your ticket class and route. You can find the details on your ticket or by visiting our website.\"),\n(\"Can I bring my sports equipment?\", \"Yes, certain types of sports equipment can be carried either as or in addition to your permitted baggage. Some equipment may require additional fees. It's best to check our policy on our website or contact us directly.\"),\n(\"Do I need a visa to travel to certain countries?\", \"Yes, visa requirements depend on the country you are visiting and your nationality. We advise checking with the relevant embassy or consulate prior to travel.\"),\n(\"How can I add extra baggage to my booking?\", \"You can add extra baggage to your booking through the 'Manage my booking' section on our website or by contacting our customer services.\"),\n(\"Can I check-in at the airport?\", \"Yes, you can choose to check-in at the airport. However, we also offer online and mobile check-in, which may save you time.\"),\n(\"How do I know if my flight is delayed or cancelled?\", \"In case of any changes to your flight, we will attempt to notify all passengers using the contact information given at the time of booking. 
You can also check your flight status on our website.\"),\n(\"What is your policy on pregnant passengers?\", \"Pregnant passengers can travel up to the end of the 36th week for single pregnancies, and the end of the 32nd week for multiple pregnancies. We recommend consulting your doctor before any air travel.\"),\n(\"Can children travel alone?\", \"Yes, children age 5 to 12 can travel alone as unaccompanied minors. We provide special care for these seats. Please contact our customer service for more information.\"),\n(\"How can I pay for my booking?\", \"You can pay for your booking using a variety of methods including credit and debit cards, PayPal, or bank transfers. The options may vary depending on the country of departure.\"),\n]\n# Write data to a CSV file\nwith open('customer_service_data.csv', 'w', newline='') as file:\nwriter = csv.writer(file)\nwriter.writerow([\"prompt\", \"response\"])\nwriter.writerows(data)\n
"},{"location":"guides/fine_tuning/#making-your-data-accessible-to-llm-engine","title":"Making your data accessible to LLM Engine","text":"

Currently, data needs to be uploaded to a publicly accessible web URL so that it can be read for fine-tuning. Publicly accessible HTTP, HTTPS, and S3 URLs are currently supported. Support for privately sharing data with the LLM Engine API is coming shortly.

"},{"location":"guides/fine_tuning/#launching-the-fine-tune","title":"Launching the fine-tune","text":"

Once you have uploaded your data, you can use the LLM Engine API to launch a fine-tune. You will need to specify which base model to fine-tune, the locations of the training file and optional validation data file, an optional set of hyperparameters to customize the fine-tuning behavior, and an optional suffix to append to the name of the fine-tune.

Create a fine-tune
from llmengine import FineTune\nresponse = FineTune.create(\nmodel=\"llama-7b\",\ntraining_file=\"s3://my-bucket/path/to/training-file.csv\",\n)\nprint(response.json())\n
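
For reference, here is a sketch of the same call exercising the optional parameters from the create signature above; the validation file path, hyperparameter key, and suffix are illustrative values, not confirmed names:

from llmengine import FineTune

response = FineTune.create(
    model="llama-7b",
    training_file="s3://my-bucket/path/to/training-file.csv",
    validation_file="s3://my-bucket/path/to/validation-file.csv",  # optional
    hyperparameters={"epochs": "1"},  # optional; key and value are illustrative
    suffix="airline-qa",  # optional; appended to the fine-tuned model's name
)
print(response.fine_tune_id)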

See the Model Zoo to see which models have fine-tuning support.

Once the fine-tune is launched, you can also get the status of your fine-tune.
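
For example, using the FineTune.retrieve call from the client reference:

from llmengine import FineTune

response = FineTune.retrieve(fine_tune_id="ft_abc123")
print(response.status)  # e.g. "RUNNING" or "SUCCESS"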

"},{"location":"guides/fine_tuning/#making-inference-calls-to-your-fine-tune","title":"Making inference calls to your fine-tune","text":"

Once your fine-tune is finished, you will be able to start making inference requests to the model. You can use the fine_tune_id returned from your FineTune.create API call to reference your fine-tuned model in the Completions API. Alternatively, you can list available LLMs with Model.list in order to find the name of your fine-tuned model. See the Completion API for more details. You can then use that name to direct your completion requests.
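
As a sketch, assuming Model.list follows the same response.json() convention as the other client calls shown in this documentation:

from llmengine import Model

# Inspect the returned list for the name of your fine-tuned model.
response = Model.list()
print(response.json())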

Inference with a fine-tuned model
from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"ft_abc123\",\nprompt=\"Do you offer in-flight Wi-fi?\",\nmax_new_tokens=100,\ntemperature=0.2,\n)\nprint(response.json())\n
"},{"location":"guides/rate_limits/","title":"Overview","text":""},{"location":"guides/rate_limits/#what-are-rate-limits","title":"What are rate limits?","text":"

A rate limit is a restriction that an API imposes on the number of times a user or client can access the server within a specified period of time.

"},{"location":"guides/rate_limits/#why-do-we-have-rate-limits","title":"Why do we have rate limits?","text":"

Rate limits are a common practice for APIs, and they're put in place for a few different reasons:

  • They help protect against abuse or misuse of the API. For example, a malicious actor could flood the API with requests in an attempt to overload it or cause disruptions in the service. By setting rate limits, the LLM Engine server can prevent this kind of activity.
  • Rate limits help ensure that everyone has fair access to the API. If one person or organization makes an excessive number of requests, it could bog down the API for everyone else. By throttling the number of requests that a single user can make, LLM Engine ensures that the greatest number of people have an opportunity to use the API without experiencing slowdowns. This also applies when self-hosting LLM Engine, as all internal users within an organization would have fair access.
  • Rate limits can help manage the aggregate load on the server infrastructure. If requests to the API increase dramatically, it could tax the servers and cause performance issues. By setting rate limits, LLM Engine can help maintain a smooth and consistent experience for all users. This is especially important when self-hosting LLM Engine.
"},{"location":"guides/rate_limits/#how-do-i-know-if-i-am-rate-limited","title":"How do I know if I am rate limited?","text":"

Per standard HTTP practices, your request will receive a response with an HTTP status code of 429, Too Many Requests.

"},{"location":"guides/rate_limits/#what-are-the-rate-limits-for-our-api","title":"What are the rate limits for our API?","text":"

The LLM Engine API is currently in a preview mode, and therefore we currently do not have any advertised rate limits. As the API moves towards a production release, we will update this section with specific rate limits. For now, the API will return HTTP 429 on an as-needed basis.

"},{"location":"guides/rate_limits/#error-mitigation","title":"Error mitigation","text":""},{"location":"guides/rate_limits/#retrying-with-exponential-backoff","title":"Retrying with exponential backoff","text":"

One easy way to avoid rate limit errors is to automatically retry requests with a random exponential backoff. Retrying with exponential backoff means performing a short sleep when a rate limit error is hit, then retrying the unsuccessful request. If the request is still unsuccessful, the sleep length is increased and the process is repeated. This continues until the request is successful or until a maximum number of retries is reached. This approach has many benefits:

  • Automatic retries means you can recover from rate limit errors without crashes or missing data
  • Exponential backoff means that your first retries can be tried quickly, while still benefiting from longer delays if your first few retries fail
  • Adding random jitter to the delay helps prevent retries from all hitting at the same time.

Below are a few example solutions for Python that use exponential backoff.
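
As a baseline, here is a minimal hand-rolled sketch of the same idea with no extra libraries; the helper name and retry limits are illustrative:

import random
import time

def retry_with_backoff(fn, max_retries=6, base_delay=1.0, max_delay=60.0):
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # ideally catch only the client's rate limit error
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with random jitter.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.5))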

"},{"location":"guides/rate_limits/#example-1-using-the-tenacity-library","title":"Example #1: Using the tenacity library","text":"

Tenacity is an Apache 2.0 licensed general-purpose retrying library, written in Python, to simplify the task of adding retry behavior to just about anything. To add exponential backoff to your requests, you can use the tenacity.retry decorator. The below example uses the tenacity.wait_random_exponential function to add random exponential backoff to a request.

import llmengine\nfrom tenacity import (\nretry,\nstop_after_attempt,\nwait_random_exponential,\n)  # for exponential backoff\n@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))\ndef completion_with_backoff(**kwargs):\nreturn llmengine.Completion.create(**kwargs)\ncompletion_with_backoff(model_name=\"llama-7b\", prompt=\"Why is the sky blue?\")\n
"},{"location":"guides/rate_limits/#example-2-using-the-backoff-library","title":"Example #2: Using the backoff library","text":"

Another Python library that provides function decorators for backoff and retry is backoff:

import llmengine\nimport backoff\n@backoff.on_exception(backoff.expo, llmengine.errors.RateLimitExceededError)\ndef completions_with_backoff(**kwargs):\nreturn llmengine.Completion.create(**kwargs)\ncompletions_with_backoff(model_name=\"llama-7b\", prompt=\"Why is the sky blue?\")\n
"},{"location":"guides/token_streaming/","title":"Token streaming","text":"

The Completions APIs support a stream boolean parameter that, when True, will return a streamed response of token-by-token server-sent events (SSEs) rather than waiting to receive the full response when model generation has finished. This reduces the latency before you start receiving a response.

The response will consist of SSEs of the form {\"token\": dict, \"generated_text\": str | null, \"details\": dict | null}, where the dictionary for each token will contain log probability information in addition to the generated string; the generated_text field will be null for all but the last SSE, for which it will contain the full generated response.

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"\u26a1 LLM Engine \u26a1","text":"

The open source engine for fine-tuning large language models. LLM Engine is the easiest way to customize and serve LLMs. Use Scale's hosted version or run it in your own cloud.

"},{"location":"#quick-install","title":"\ud83d\udcbb Quick Install","text":"Install using pip
pip install scale-llm-engine\n
"},{"location":"#about","title":"\ud83e\udd14 About","text":"

Foundation models are emerging as the building blocks of AI. However, deploying these models to the cloud and fine-tuning them still requires infrastructure and ML expertise, and can be expensive.

LLM Engine is a Python library, CLI, and Helm chart that provides everything you need to fine-tune and serve foundation models in the cloud using Kubernetes. Key features include:

\ud83d\ude80 Ready-to-use Fine-Tuning and Inference APIs for your favorite models: LLM Engine comes with ready-to-use APIs for your favorite open-source models, including MPT, Falcon, and LLaMA. Use Scale-hosted endpoints or deploy to your own infrastructure.

\ud83d\udc33 Deploying from any docker image: Turn any Docker image into an auto-scaling deployment with simple APIs.

\ud83c\udf99\ufe0f Optimized Inference: LLM Engine provides inference APIs for streaming responses and dynamically batching inputs for higher throughput and lower latency.

\ud83e\udd17 Open-Source Integrations: Deploy any Huggingface model with a single command.

"},{"location":"#features-coming-soon","title":"\ud83d\udd25 Features Coming Soon","text":"

\u2744 Fast Cold-Start Times: To prevent GPUs from idling, LLM Engine automatically scales your model to zero when it's not in use and scales up within seconds, even for large foundation models.

\ud83d\udcb8 Cost-Optimized: Deploy AI models more cheaply than commercial alternatives, including cold-start and warm-down times.

"},{"location":"faq/","title":"Frequently Asked Questions","text":""},{"location":"getting_started/","title":"\ud83d\ude80 Getting Started","text":"

To start using LLM Engine's public inference and fine-tuning APIs:

Install using pip or conda
pip install scale-llm-engine\n
conda install scale-llm-engine -c conda-forge\n

Navigate to https://spellbook.scale.com where you will get a Scale API key on the settings page. Set this API key as the SCALE_API_KEY environment variable by adding the following line to your .zshrc or .bash_profile:

Set API key
export SCALE_API_KEY=\"[Your API key]\"\n
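
If you prefer to set the key from within Python, you can set the same environment variable before making requests (a minimal sketch; it assumes the client reads SCALE_API_KEY from the environment at request time):

Set API key in Python
import os\n# Assumes the client looks up SCALE_API_KEY when a request is made\nos.environ[\"SCALE_API_KEY\"] = \"[Your API key]\"\n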

With your API key set, you can now send LLM Engine requests using the Python client:

Using the Python Client
from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.outputs[0].text)\n
"},{"location":"model_zoo/","title":"\ud83e\udd99 Public Model Zoo","text":"

Scale hosts the following models in a model zoo:

Model Name | Inference APIs Available | Fine-tuning APIs Available
llama-7b | \u2705 | \u2705
falcon-7b | \u2705 |
falcon-7b-instruct | \u2705 |
falcon-40b | \u2705 |
falcon-40b-instruct | \u2705 |
mpt-7b | \u2705 |
mpt-7b-instruct | \u2705 | \u2705
flan-t5-xxl | \u2705 |

Each of these models can be used with the Completion API.

The specified models can be fine-tuned with the FineTune API.

"},{"location":"api/error_handling/","title":"Error handling","text":"

LLM Engine uses conventional HTTP response codes to indicate the success or failure of an API request. In general: codes in the 2xx range indicate success. Codes in the 4xx range indicate an error with the request given the information provided (e.g. a given Model was not found, or an invalid temperature was specified). Codes in the 5xx range indicate an error with the LLM Engine servers.

In the Python client, errors are presented via a set of corresponding Exception classes, which should be caught and handled by the user accordingly.
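
A minimal sketch of handling these exceptions is below; it assumes the classes are importable from llmengine.errors, as the class paths in this section suggest:

Catching API errors
from llmengine import Completion\nfrom llmengine.errors import BadRequestError, RateLimitExceededError, ServerError\ntry:\nresponse = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\n)\nexcept BadRequestError as e:\n# HTTP 400: do not retry without changing the inputs\nprint(f\"Bad request: {e}\")\nexcept RateLimitExceededError:\n# HTTP 429: retry with exponential backoff\nprint(\"Rate limited; backing off\")\nexcept ServerError as e:\n# HTTP 5xx: server-side failure\nprint(f\"Server error: {e}\")\n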

"},{"location":"api/error_handling/#llmengine.errors.BadRequestError","title":"BadRequestError","text":"
BadRequestError(message: str)\n

Bases: Exception

Corresponds to HTTP 400. Indicates that the request had inputs that were invalid. The user should not attempt to retry the request without changing the inputs.

"},{"location":"api/error_handling/#llmengine.errors.UnauthorizedError","title":"UnauthorizedError","text":"
UnauthorizedError(message: str)\n

Bases: Exception

Corresponds to HTTP 401. This means that no valid API key was provided.

"},{"location":"api/error_handling/#llmengine.errors.NotFoundError","title":"NotFoundError","text":"
NotFoundError(message: str)\n

Bases: Exception

Corresponds to HTTP 404. This means that the resource (e.g. a Model, FineTune, etc.) could not be found. Note that this can also be returned in some cases where the object might exist, but the user does not have access to the object. This is done to avoid leaking information about the existence or nonexistence of said object that the user does not have access to.

"},{"location":"api/error_handling/#llmengine.errors.RateLimitExceededError","title":"RateLimitExceededError","text":"
RateLimitExceededError(message: str)\n

Bases: Exception

Corresponds to HTTP 429. Too many requests hit the API too quickly. We recommend an exponential backoff for retries.

"},{"location":"api/error_handling/#llmengine.errors.ServerError","title":"ServerError","text":"
ServerError(status_code: int, message: str)\n

Bases: Exception

Corresponds to HTTP 5xx errors on the server.

"},{"location":"api/langchain/","title":"\ud83e\udd9c Langchain","text":"

Coming soon!

"},{"location":"api/python_client/","title":"\ud83d\udc0d Python Client API Reference","text":""},{"location":"api/python_client/#llmengine.Completion","title":"Completion","text":"

Bases: APIEngine

Completion API. This API is used to generate text completions.

Language Models are trained to understand natural language and provide text outputs as a response to their inputs. The inputs are called prompts and outputs are referred to as completions. LLMs take the input prompts and chunk them into smaller units called tokens to process and generate language. Tokens may include trailing spaces and even sub-words; this process is language dependent.

The Completions API can be run either synchronously or asynchronously (via Python asyncio); for each of these modes, you can also choose whether or not to stream token responses.

"},{"location":"api/python_client/#llmengine.completion.Completion.create","title":"create classmethod","text":"
create(model_name: str, prompt: str, max_new_tokens: int = 20, temperature: float = 0.2, timeout: int = 10, stream: bool = False) -> Union[CompletionSyncV1Response, Iterator[CompletionStreamV1Response]]\n

Creates a completion for the provided prompt and parameters synchronously.

Parameters:

Name Type Description Default model_name str

Name of the model to use. See Model Zoo for a list of Models that are supported.

required prompt str

The prompt to generate completions for, encoded as a string.

required max_new_tokens int

The maximum number of tokens to generate in the completion.

The token count of your prompt plus max_new_tokens cannot exceed the model's context length. See Model Zoo for information on each supported model's context length.

20 temperature float

What sampling temperature to use, in the range (0, 1]. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

0.2 timeout int

Timeout in seconds. This is the maximum amount of time you are willing to wait for a response.

10 stream bool

Whether to stream the response. If true, the return type is an Iterator[CompletionStreamV1Response]. Otherwise, the return type is a CompletionSyncV1Response. When streaming, tokens will be sent as data-only server-sent events.

False

Returns:

Name Type Description response Union[CompletionSyncV1Response, Iterator[CompletionStreamV1Response]]

The generated response (if streaming=False) or iterator of response chunks (if streaming=True)

Example request without token streaming
from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.json())\n
JSON Response
{\n\"status\": \"SUCCESS\",\n\"outputs\":\n[\n{\n\"text\": \"_______ and I am a _______\",\n\"num_prompt_tokens\": null,\n\"num_completion_tokens\": 10\n}\n],\n\"traceback\": null\n}\n
Example request with token streaming
from llmengine import Completion\nstream = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"why is the sky blue?\",\nmax_new_tokens=5,\ntemperature=0.2,\nstream=True,\n)\nfor response in stream:\nif response.output:\nprint(response.json())\n
JSON responses
{\"status\": \"SUCCESS\", \"output\": {\"text\": \"\\n\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 1 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"I\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 2 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" don\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 3 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"\u2019\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 4 }, \"traceback\": null }\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"t\", \"finished\": true, \"num_prompt_tokens\": null, \"num_completion_tokens\": 5 }, \"traceback\": null }\n
"},{"location":"api/python_client/#llmengine.completion.Completion.acreate","title":"acreate async classmethod","text":"
acreate(model_name: str, prompt: str, max_new_tokens: int = 20, temperature: float = 0.2, timeout: int = 10, stream: bool = False) -> Union[CompletionSyncV1Response, AsyncIterable[CompletionStreamV1Response]]\n

Creates a completion for the provided prompt and parameters asynchronously (with asyncio).

Parameters:

Name Type Description Default model_name str

Name of the model to use. See Model Zoo for a list of Models that are supported.

required prompt str

The prompt to generate completions for, encoded as a string.

required max_new_tokens int

The maximum number of tokens to generate in the completion.

The token count of your prompt plus max_new_tokens cannot exceed the model's context length. See Model Zoo for information on each supported model's context length.

20 temperature float

What sampling temperature to use, in the range (0, 1]. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

0.2 timeout int

Timeout in seconds. This is the maximum amount of time you are willing to wait for a response.

10 stream bool

Whether to stream the response. If true, the return type is an AsyncIterable[CompletionStreamV1Response]. Otherwise, the return type is a CompletionSyncV1Response. When streaming, tokens will be sent as data-only server-sent events.

False

Returns:

Name Type Description response Union[CompletionSyncV1Response, AsyncIterable[CompletionStreamV1Response]]

The generated response (if streaming=False) or async iterable of response chunks (if streaming=True)

Example without token streaming
import asyncio\nfrom llmengine import Completion\nasync def main():\nresponse = await Completion.acreate(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.json())\nasyncio.run(main())\n
JSON response
{\n\"status\": \"SUCCESS\",\n\"outputs\":\n[\n{\n\"text\": \"_______, and I am a _____\",\n\"num_prompt_tokens\": null,\n\"num_completion_tokens\": 10\n}\n],\n\"traceback\": null\n}\n
Example with token streaming
import asyncio\nfrom llmengine import Completion\nasync def main():\nstream = await Completion.acreate(\nmodel_name=\"llama-7b\",\nprompt=\"why is the sky blue?\",\nmax_new_tokens=5,\ntemperature=0.2,\nstream=True,\n)\nasync for response in stream:\nif response.output:\nprint(response.json())\nasyncio.run(main())\n
JSON responses
{\"status\": \"SUCCESS\", \"output\": {\"text\": \"\\n\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 1}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \"I\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 2}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" think\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 3}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" the\", \"finished\": false, \"num_prompt_tokens\": null, \"num_completion_tokens\": 4}, \"traceback\": null}\n{\"status\": \"SUCCESS\", \"output\": {\"text\": \" sky\", \"finished\": true, \"num_prompt_tokens\": null, \"num_completion_tokens\": 5}, \"traceback\": null}\n
"},{"location":"api/python_client/#llmengine.CompletionOutput","title":"CompletionOutput","text":"

Bases: BaseModel

Represents the output of a completion request to a model.

"},{"location":"api/python_client/#llmengine.data_types.CompletionOutput.text","title":"text instance-attribute","text":"
text: str\n

The text of the completion.

"},{"location":"api/python_client/#llmengine.data_types.CompletionOutput.num_prompt_tokens","title":"num_prompt_tokens instance-attribute","text":"
num_prompt_tokens: Optional[int]\n

Number of tokens in the prompt.

"},{"location":"api/python_client/#llmengine.data_types.CompletionOutput.num_completion_tokens","title":"num_completion_tokens instance-attribute","text":"
num_completion_tokens: int\n

Number of tokens in the completion.

"},{"location":"api/python_client/#llmengine.CompletionSyncV1Response","title":"CompletionSyncV1Response","text":"

Bases: BaseModel

Response object for a synchronous prompt completion.

"},{"location":"api/python_client/#llmengine.data_types.CompletionSyncV1Response.outputs","title":"outputs instance-attribute","text":"
outputs: List[CompletionOutput]\n

List of completion outputs.

"},{"location":"api/python_client/#llmengine.data_types.CompletionSyncV1Response.status","title":"status instance-attribute","text":"
status: TaskStatus\n

Task status.

"},{"location":"api/python_client/#llmengine.data_types.CompletionSyncV1Response.traceback","title":"traceback instance-attribute class-attribute","text":"
traceback: Optional[str] = None\n

Traceback if the task failed.

"},{"location":"api/python_client/#llmengine.CompletionStreamOutput","title":"CompletionStreamOutput","text":"

Bases: BaseModel

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.text","title":"text instance-attribute","text":"
text: str\n

The text of the completion.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.finished","title":"finished instance-attribute","text":"
finished: bool\n

Whether the completion is finished.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.num_prompt_tokens","title":"num_prompt_tokens instance-attribute class-attribute","text":"
num_prompt_tokens: Optional[int] = None\n

Number of tokens in the prompt.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamOutput.num_completion_tokens","title":"num_completion_tokens instance-attribute class-attribute","text":"
num_completion_tokens: Optional[int] = None\n

Number of tokens in the completion.

"},{"location":"api/python_client/#llmengine.CompletionStreamV1Response","title":"CompletionStreamV1Response","text":"

Bases: BaseModel

Response object for a streamed prompt completion task.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamV1Response.output","title":"output instance-attribute class-attribute","text":"
output: Optional[CompletionStreamOutput] = None\n

Completion output.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamV1Response.status","title":"status instance-attribute","text":"
status: TaskStatus\n

Task status.

"},{"location":"api/python_client/#llmengine.data_types.CompletionStreamV1Response.traceback","title":"traceback instance-attribute class-attribute","text":"
traceback: Optional[str] = None\n

Traceback if the task failed.

"},{"location":"api/python_client/#llmengine.FineTune","title":"FineTune","text":"

Bases: APIEngine

FineTune API. This API is used to fine-tune models.

Fine-tuning is a process where the LLM is further trained on a task-specific dataset, allowing the model to adjust its parameters to better align with the task at hand. Fine-tuning involves the supervised training phase, where prompt/response pairs are provided to optimize the performance of the LLM.

Scale llm-engine provides APIs to create fine-tunes on a base model with training and validation datasets. APIs are also provided to list, cancel, and retrieve fine-tuning jobs.

"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.create","title":"create classmethod","text":"
create(model: str, training_file: str, validation_file: Optional[str] = None, hyperparameters: Optional[Dict[str, str]] = None, suffix: Optional[str] = None) -> CreateFineTuneResponse\n

Creates a job that fine-tunes a specified model from a given dataset.

Parameters:

Name Type Description Default model `str`

The name of the base model to fine-tune. See Model Zoo for the list of available models to fine-tune.

required training_file `str`

Path to file of training dataset

required validation_file `Optional[str]`

Path to file of validation dataset

None hyperparameters `Optional[Dict[str, str]]`

An optional set of hyperparameters to customize the fine-tuning behavior

None suffix `Optional[str]`

A string that will be added to your fine-tuned model name.

None

Returns:

Name Type Description CreateFineTuneResponse CreateFineTuneResponse

an object that contains the ID of the created fine-tuning job

The model is the name of the base model to fine-tune (see Model Zoo for available models). The training file should consist of prompt and response pairs. Your data must be formatted as a CSV file that includes two columns: prompt and response. A maximum of 100,000 rows of data is currently supported. At least 200 rows of data is recommended to start to see benefits from fine-tuning.

Here is an example script to create a 5-row CSV of properly formatted data for fine-tuning an airline question answering bot:

import csv\n# Define data\ndata = [\n(\"What is your policy on carry-on luggage?\", \"Our policy allows each passenger to bring one piece of carry-on luggage and one personal item such as a purse or briefcase. The maximum size for carry-on luggage is 22 x 14 x 9 inches.\"),\n(\"How can I change my flight?\", \"You can change your flight through our website or mobile app. Go to 'Manage my booking' section, enter your booking reference and last name, then follow the prompts to change your flight.\"),\n(\"What meals are available on my flight?\", \"We offer a variety of meals depending on the flight's duration and route. These can range from snacks and light refreshments to full-course meals on long-haul flights. Specific meal options can be viewed during the booking process.\"),\n(\"How early should I arrive at the airport before my flight?\", \"We recommend arriving at least two hours before domestic flights and three hours before international flights.\"),\n(\"Can I select my seat in advance?\", \"Yes, you can select your seat during the booking process or afterwards via the 'Manage my booking' section on our website or mobile app.\"),\n]\n# Write data to a CSV file\nwith open('customer_service_data.csv', 'w', newline='') as file:\nwriter = csv.writer(file)\nwriter.writerow([\"prompt\", \"response\"])\nwriter.writerows(data)\n
Example code for fine-tuning
from llmengine import FineTune\nresponse = FineTune.create(\nmodel=\"llama-7b\",\ntraining_file=\"s3://my-bucket/path/to/training-file.csv\",\n)\nprint(response.json())\n
JSON Response
{\n\"fine_tune_id\": \"ft_abc123\"\n}\n
"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.list","title":"list classmethod","text":"
list() -> ListFineTunesResponse\n

List fine-tuning jobs

Returns:

Name Type Description ListFineTunesResponse ListFineTunesResponse

an object that contains a list of all fine-tuning jobs and their statuses

Example
from llmengine import FineTune\nresponse = FineTune.list()\nprint(response.json())\n
JSON Response
[\n{\n\"fine_tune_id\": \"ft_abc123\",\n\"status\": \"RUNNING\"\n},\n{\n\"fine_tune_id\": \"ft_def456\",\n\"status\": \"SUCCESS\"\n}\n]\n
"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.retrieve","title":"retrieve classmethod","text":"
retrieve(fine_tune_id: str) -> GetFineTuneResponse\n

Get status of a fine-tuning job

Parameters:

Name Type Description Default fine_tune_id `str`

ID of the fine-tuning job

required

Returns:

Name Type Description GetFineTuneResponse GetFineTuneResponse

an object that contains the ID and status of the requested job

Example
from llmengine import FineTune\nresponse = FineTune.retrieve(\nfine_tune_id=\"ft_abc123\",\n)\nprint(response.json())\n
JSON Response
{\n\"fine_tune_id\": \"ft_abc123\",\n\"status\": \"RUNNING\"\n}\n
"},{"location":"api/python_client/#llmengine.fine_tuning.FineTune.cancel","title":"cancel classmethod","text":"
cancel(fine_tune_id: str) -> CancelFineTuneResponse\n

Cancel a fine-tuning job

Parameters:

Name Type Description Default fine_tune_id `str`

ID of the fine-tuning job

required

Returns:

Name Type Description CancelFineTuneResponse CancelFineTuneResponse

an object that contains whether the cancellation was successful

Example
from llmengine import FineTune\nresponse = FineTune.cancel(fine_tune_id=\"ft_abc123\")\nprint(response.json())\n
JSON Response
{\n\"success\": \"true\"\n}\n
"},{"location":"api/python_client/#llmengine.CreateFineTuneResponse","title":"CreateFineTuneResponse","text":"

Bases: BaseModel

Response object for creating a FineTune.

"},{"location":"api/python_client/#llmengine.data_types.CreateFineTuneResponse.fine_tune_id","title":"fine_tune_id instance-attribute class-attribute","text":"
fine_tune_id: str = Field(Ellipsis, description='ID of the created fine-tuning job.')\n

The ID of the FineTune.

"},{"location":"api/python_client/#llmengine.GetFineTuneResponse","title":"GetFineTuneResponse","text":"

Bases: BaseModel

Response object for retrieving a FineTune.

"},{"location":"api/python_client/#llmengine.data_types.GetFineTuneResponse.fine_tune_id","title":"fine_tune_id instance-attribute class-attribute","text":"
fine_tune_id: str = Field(Ellipsis, description='ID of the requested job.')\n

The ID of the FineTune.

"},{"location":"api/python_client/#llmengine.data_types.GetFineTuneResponse.status","title":"status instance-attribute class-attribute","text":"
status: BatchJobStatus = Field(Ellipsis, description='Status of the requested job.')\n

The status of the FineTune job.

"},{"location":"api/python_client/#llmengine.ListFineTunesResponse","title":"ListFineTunesResponse","text":"

Bases: BaseModel

Response object for listing FineTunes.

"},{"location":"api/python_client/#llmengine.data_types.ListFineTunesResponse.jobs","title":"jobs instance-attribute class-attribute","text":"
jobs: List[GetFineTuneResponse] = Field(Ellipsis, description='List of fine-tuning jobs and their statuses.')\n

A list of FineTunes, represented as GetFineTuneResponses.

"},{"location":"api/python_client/#llmengine.CancelFineTuneResponse","title":"CancelFineTuneResponse","text":"

Bases: BaseModel

Response object for cancelling a FineTune.

"},{"location":"api/python_client/#llmengine.data_types.CancelFineTuneResponse.success","title":"success instance-attribute class-attribute","text":"
success: bool = Field(Ellipsis, description='Whether cancellation was successful.')\n

Whether the cancellation succeeded.

"},{"location":"guides/completions/","title":"Completions","text":"

Language Models are trained to understand natural language and provide text outputs as a response to their inputs. The inputs are called prompts and outputs are referred to as completions. LLMs take the input prompts and chunk them into smaller units called tokens to process and generate language. Tokens may include trailing spaces and even sub-words; this process is language dependent.
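
To build intuition for tokenization, here is an illustrative sketch using a Hugging Face transformers tokenizer (an assumption made purely for illustration; LLM Engine itself does not expose a tokenizer, and the gpt2 tokenizer will not exactly match any hosted model's):

Inspecting tokens
from transformers import AutoTokenizer\n# gpt2 is a hypothetical stand-in tokenizer, chosen only for illustration\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\ntokens = tokenizer.tokenize(\"Hello, my name is\")\nprint(tokens)  # sub-word units; leading spaces are encoded as part of tokens\nprint(len(tokens))\n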

Scale llm-engine provides access to open source language models (see Model Zoo) that can be used for producing completions to prompts.

"},{"location":"guides/completions/#completion-api-call","title":"Completion API call","text":"

An example API call looks as follows:

from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\n

The model_name is the LLM to be used (see Model Zoo). The prompt is the main input for the LLM to respond to. The max_new_tokens parameter is the maximum number of tokens to generate in the completion. The temperature is the sampling temperature to use. Higher values make the output more random, while lower values will make it more focused and deterministic.

See the full API reference documentation to learn more.

"},{"location":"guides/completions/#completion-api-response","title":"Completion API response","text":"

An example Completion API response looks as follows:

{\n\"outputs\": [\n{\n\"text\": \"_______ and I am a _______\",\n\"num_completion_tokens\": 10\n}\n]\n}\n

In Python, the response is of type CompletionSyncV1Response, which maps to the above JSON structure.

print( response.outputs[0].text )\n
"},{"location":"guides/completions/#token-streaming","title":"Token streaming","text":"

The Completions API supports token streaming to reduce perceived latency for certain applications. When streaming, tokens will be sent as data-only server-sent events.

To enable token streaming, pass stream=True to either Completion.create or Completion.acreate.

An example of token streaming using the synchronous Completions API looks as follows:

from llmengine import Completion\nstream = Completion.create(\nmodel_name=\"llama-7b\",\nprompt=\"why is the sky blue?\",\nmax_new_tokens=5,\ntemperature=0.2,\nstream=True,\n)\nfor response in stream:\nif response.output:\nprint(response.json())\n
"},{"location":"guides/completions/#async-requests","title":"Async requests","text":"

The Python client supports asyncio for creating Completions. Use Completion.acreate instead of Completion.create to utilize async processing. The function signatures are otherwise identical.

An example of async Completions looks as follows:

import asyncio\nfrom llmengine import Completion\nasync def main():\nresponse = await Completion.acreate(\nmodel_name=\"llama-7b\",\nprompt=\"Hello, my name is\",\nmax_new_tokens=10,\ntemperature=0.2,\n)\nprint(response.json())\nasyncio.run(main())\n
"},{"location":"guides/completions/#which-model-should-i-use","title":"Which model should I use?","text":"

See the Model Zoo for more information on best practices for which model to use for Completions.

"},{"location":"guides/fine_tuning/","title":"Fine-tuning","text":"

Learn how to customize your models on your data with fine-tuning.

"},{"location":"guides/fine_tuning/#introduction","title":"Introduction","text":"

Fine-tuning helps improve model performance by training on specific examples of prompts and desired responses. LLMs are initially trained on data collected from the entire internet. With fine-tuning, LLMs can be optimized to perform better in a specific domain by learning from examples for that domain. Smaller LLMs that have been fine-tuned on a specific use case often outperform larger ones that were trained more generally.

Fine-tuning allows for:

  1. Higher quality results than prompt engineering alone
  2. Cost savings through shorter prompts
  3. The ability to reach equivalent accuracy with a smaller model
  4. Lower latency at inference time
  5. The chance to show an LLM more examples than can fit in a single context window

LLM Engine's fine-tuning API lets you fine-tune various open source LLMs on your own data and then make inference calls to the resulting LLM. For more specific details, see the fine-tuning API Python client reference.

"},{"location":"guides/fine_tuning/#producing-high-quality-data-for-fine-tuning","title":"Producing high quality data for fine-tuning","text":"

The training data for fine-tuning should consist of prompt and response pairs.

As a rule of thumb, you should expect to see linear improvements in your fine-tuned model's quality with each doubling of the dataset size. Having high-quality data is also essential to improving performance. For every linear increase in the error rate in your training data, you may encounter a roughly quadratic increase in your fine-tuned model's error rate.

High-quality data is critical to achieving improved model performance, and in several cases requires experts to generate and prepare the data; the breadth and diversity of the data are especially important. Scale's Data Engine can help prepare such high-quality, diverse data sets - more information here.

"},{"location":"guides/fine_tuning/#preparing-data","title":"Preparing data","text":"

Your data must be formatted as a CSV file that includes two columns: prompt and response. A maximum of 100,000 rows of data is currently supported. At least 200 rows of data is recommended to start to see benefits from fine-tuning.

Here is an example script to create a 50-row CSV of properly formatted data for fine-tuning an airline question answering bot:

Creating a sample dataset
import csv\n# Define data\ndata = [\n(\"What is your policy on carry-on luggage?\", \"Our policy allows each passenger to bring one piece of carry-on luggage and one personal item such as a purse or briefcase. The maximum size for carry-on luggage is 22 x 14 x 9 inches.\"),\n(\"How can I change my flight?\", \"You can change your flight through our website or mobile app. Go to 'Manage my booking' section, enter your booking reference and last name, then follow the prompts to change your flight.\"),\n(\"What meals are available on my flight?\", \"We offer a variety of meals depending on the flight's duration and route. These can range from snacks and light refreshments to full-course meals on long-haul flights. Specific meal options can be viewed during the booking process.\"),\n(\"How early should I arrive at the airport before my flight?\", \"We recommend arriving at least two hours before domestic flights and three hours before international flights.\"),\n(\"Can I select my seat in advance?\", \"Yes, you can select your seat during the booking process or afterwards via the 'Manage my booking' section on our website or mobile app.\"),\n(\"What should I do if my luggage is lost?\", \"If your luggage is lost, please report this immediately at our 'Lost and Found' counter at the airport. We will assist you in tracking your luggage.\"),\n(\"Do you offer special assistance for passengers with disabilities?\", \"Yes, we offer special assistance for passengers with disabilities. Please notify us of your needs at least 48 hours prior to your flight.\"),\n(\"Can I bring my pet on the flight?\", \"Yes, we allow small pets in the cabin, and larger pets in the cargo hold. Please check our pet policy for more details.\"),\n(\"What is your policy on flight cancellations?\", \"In case of flight cancellations, we aim to notify passengers as early as possible and offer either a refund or a rebooking on the next available flight.\"),\n(\"Can I get a refund if I cancel my flight?\", \"Refunds depend on the type of ticket purchased. Please check our cancellation policy for details. Non-refundable tickets, however, are typically not eligible for refunds unless due to extraordinary circumstances.\"),\n(\"How can I check-in for my flight?\", \"You can check-in for your flight either online, through our mobile app, or at the airport. Online and mobile app check-in opens 24 hours before departure and closes 90 minutes before.\"),\n(\"Do you offer free meals on your flights?\", \"Yes, we serve free meals on all long-haul flights. For short-haul flights, we offer a complimentary drink and snack. Special meal requests should be made at least 48 hours before departure.\"),\n(\"Can I use my electronic devices during the flight?\", \"Small electronic devices can be used throughout the flight in flight mode. Larger devices like laptops may be used above 10,000 feet.\"),\n(\"How much baggage can I check-in?\", \"The checked baggage allowance depends on the class of travel and route. The details would be mentioned on your ticket, or you can check on our website.\"),\n(\"How can I request for a wheelchair?\", \"To request a wheelchair or any other special assistance, please call our customer service at least 48 hours before your flight.\"),\n(\"Do I get a discount for group bookings?\", \"Yes, we offer discounts on group bookings of 10 or more passengers. Please contact our group bookings team for more information.\"),\n(\"Do you offer Wi-fi on your flights?\", \"Yes, we offer complimentary Wi-fi on select flights. 
You can check the availability during the booking process.\"),\n(\"What is the minimum connecting time between flights?\", \"The minimum connecting time varies depending on the airport and whether your flight is international or domestic. Generally, it's recommended to allow at least 45-60 minutes for domestic connections and 60-120 minutes for international.\"),\n(\"Do you offer duty-free shopping on international flights?\", \"Yes, we have a selection of duty-free items that you can pre-order on our website or purchase onboard on international flights.\"),\n(\"Can I upgrade my ticket to business class?\", \"Yes, you can upgrade your ticket through the 'Manage my booking' section on our website or by contacting our customer service. The availability and costs depend on the specific flight.\"),\n(\"Can unaccompanied minors travel on your flights?\", \"Yes, we do accommodate unaccompanied minors on our flights, with special services to ensure their safety and comfort. Please contact our customer service for more details.\"),\n(\"What amenities do you provide in business class?\", \"In business class, you will enjoy additional legroom, reclining seats, premium meals, priority boarding and disembarkation, access to our business lounge, extra baggage allowance, and personalized service.\"),\n(\"How much does extra baggage cost?\", \"Extra baggage costs vary based on flight route and the weight of the baggage. Please refer to our 'Extra Baggage' section on the website for specific rates.\"),\n(\"Are there any specific rules for carrying liquids in carry-on?\", \"Yes, liquids carried in your hand luggage must be in containers of 100 ml or less and they should all fit into a single, transparent, resealable plastic bag of 20 cm x 20 cm.\"),\n(\"What if I have a medical condition that requires special assistance during the flight?\", \"We aim to make the flight comfortable for all passengers. If you have a medical condition that may require special assistance, please contact our \u2018special services\u2019 team 48 hours before your flight.\"),\n(\"What in-flight entertainment options are available?\", \"We offer a range of in-flight entertainment options including a selection of movies, TV shows, music, and games, available on your personal seat-back screen.\"),\n(\"What types of payment methods do you accept?\", \"We accept credit/debit cards, PayPal, bank transfers, and various other forms of payment. The available options may vary depending on the country of departure.\"),\n(\"How can I earn and redeem frequent flyer miles?\", \"You can earn miles for every journey you take with us or our partner airlines. These miles can be redeemed for flight tickets, upgrades, or various other benefits. To earn and redeem miles, you need to join our frequent flyer program.\"),\n(\"Can I bring a stroller for my baby?\", \"Yes, you can bring a stroller for your baby. It can be checked in for free and will normally be given back to you at the aircraft door upon arrival.\"),\n(\"What age does my child have to be to qualify as an unaccompanied minor?\", \"Children aged between 5 and 12 years who are traveling alone are considered unaccompanied minors. Our team provides special care for these children from departure to arrival.\"),\n(\"What documents do I need to travel internationally?\", \"For international travel, you need a valid passport and may also require visas, depending on your destination and your country of residence. 
It's important to check the specific requirements before you travel.\"),\n(\"What happens if I miss my flight?\", \"If you miss your flight, please contact our customer service immediately. Depending on the circumstances, you may be able to rebook on a later flight, but additional fees may apply.\"),\n(\"Can I travel with my musical instrument?\", \"Yes, small musical instruments can be brought on board as your one carry-on item. Larger instruments must be transported in the cargo, or if small enough, a seat may be purchased for them.\"),\n(\"Do you offer discounts for children or infants?\", \"Yes, children aged 2-11 traveling with an adult usually receive a discount on the fare. Infants under the age of 2 who do not occupy a seat can travel for a reduced fare or sometimes for free.\"),\n(\"Is smoking allowed on your flights?\", \"No, all our flights are non-smoking for the comfort and safety of all passengers.\"),\n(\"Do you have family seating?\", \"Yes, we offer the option to seat families together. You can select seats during booking or afterwards through the 'Manage my booking' section on the website.\"),\n(\"Is there any discount for senior citizens?\", \"Some flights may offer a discount for senior citizens. Please check our website or contact customer service for accurate information.\"),\n(\"What items are prohibited on your flights?\", \"Prohibited items include, but are not limited to, sharp objects, firearms, explosive materials, and certain chemicals. You can find a comprehensive list on our website under the 'Security Regulations' section.\"),\n(\"Can I purchase a ticket for someone else?\", \"Yes, you can purchase a ticket for someone else. You'll need their correct name as it appears on their government-issued ID, and their correct travel dates.\"),\n(\"What is the process for lost and found items on the plane?\", \"If you realize you forgot an item on the plane, report it as soon as possible to our lost and found counter. We will make every effort to locate and return your item.\"),\n(\"Can I request a special meal?\", \"Yes, we offer a variety of special meals to accommodate dietary restrictions. Please request your preferred meal at least 48 hours prior to your flight.\"),\n(\"Is there a weight limit for checked baggage?\", \"Yes, luggage weight limits depend on your ticket class and route. You can find the details on your ticket or by visiting our website.\"),\n(\"Can I bring my sports equipment?\", \"Yes, certain types of sports equipment can be carried either as or in addition to your permitted baggage. Some equipment may require additional fees. It's best to check our policy on our website or contact us directly.\"),\n(\"Do I need a visa to travel to certain countries?\", \"Yes, visa requirements depend on the country you are visiting and your nationality. We advise checking with the relevant embassy or consulate prior to travel.\"),\n(\"How can I add extra baggage to my booking?\", \"You can add extra baggage to your booking through the 'Manage my booking' section on our website or by contacting our customer services.\"),\n(\"Can I check-in at the airport?\", \"Yes, you can choose to check-in at the airport. However, we also offer online and mobile check-in, which may save you time.\"),\n(\"How do I know if my flight is delayed or cancelled?\", \"In case of any changes to your flight, we will attempt to notify all passengers using the contact information given at the time of booking. 
You can also check your flight status on our website.\"),\n(\"What is your policy on pregnant passengers?\", \"Pregnant passengers can travel up to the end of the 36th week for single pregnancies, and the end of the 32nd week for multiple pregnancies. We recommend consulting your doctor before any air travel.\"),\n(\"Can children travel alone?\", \"Yes, children aged 5 to 12 can travel alone as unaccompanied minors. We provide special care for these children. Please contact our customer service for more information.\"),\n(\"How can I pay for my booking?\", \"You can pay for your booking using a variety of methods including credit and debit cards, PayPal, or bank transfers. The options may vary depending on the country of departure.\"),\n]\n# Write data to a CSV file\nwith open('customer_service_data.csv', 'w', newline='') as file:\nwriter = csv.writer(file)\nwriter.writerow([\"prompt\", \"response\"])\nwriter.writerows(data)\n
"},{"location":"guides/fine_tuning/#making-your-data-accessible-to-llm-engine","title":"Making your data accessible to LLM Engine","text":"

Currently, data needs to be uploaded to a publicly accessible web URL so that it can be read for fine-tuning. Publicly accessible HTTP, HTTPS, and S3 URLs are currently supported. Support for privately sharing data with the LLM Engine API is coming shortly.
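
As an example, one way to stage a training CSV at an accessible S3 URL is to upload it with boto3 and generate a presigned HTTPS URL (a sketch under assumptions: my-bucket is a hypothetical bucket name, and a presigned URL satisfies the accessibility requirement for your setup):

Staging data on S3
import boto3\ns3 = boto3.client(\"s3\")\n# Upload the local CSV to a hypothetical bucket and key\ns3.upload_file(\"customer_service_data.csv\", \"my-bucket\", \"path/to/training-file.csv\")\n# Create a time-limited HTTPS URL (24 hours) for the object\nurl = s3.generate_presigned_url(\n\"get_object\",\nParams={\"Bucket\": \"my-bucket\", \"Key\": \"path/to/training-file.csv\"},\nExpiresIn=86400,\n)\nprint(url)\n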

"},{"location":"guides/fine_tuning/#launching-the-fine-tune","title":"Launching the fine-tune","text":"

Once you have uploaded your data, you can use the LLM Engine API to launch a fine-tune. You will need to specify which base model to fine-tune, the locations of the training file and optional validation data file, an optional set of hyperparameters to customize the fine-tuning behavior, and an optional suffix to append to the name of the fine-tune.

Create a fine-tune
from llmengine import FineTune\nresponse = FineTune.create(\nmodel=\"llama-7b\",\ntraining_file=\"s3://my-bucket/path/to/training-file.csv\",\n)\nprint(response.json())\n

See the Model Zoo to see which models have fine-tuning support.

Once the fine-tune is launched, you can also get the status of your fine-tune.
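
For example, using the retrieve API documented in the Python client reference above:

Check the status of a fine-tune
from llmengine import FineTune\nresponse = FineTune.retrieve(\nfine_tune_id=\"ft_abc123\",\n)\n# status is a BatchJobStatus, e.g. RUNNING or SUCCESS\nprint(response.status)\n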

"},{"location":"guides/fine_tuning/#making-inference-calls-to-your-fine-tune","title":"Making inference calls to your fine-tune","text":"

Once your fine-tune is finished, you will be able to start making inference requests to the model. You can use the fine_tune_id returned from your FineTune.create API call to reference your fine-tuned model in the Completions API. Alternatively, you can list available LLMs with Model.list in order to find the name of your fine-tuned model. See the Completion API for more details. You can then use that name to direct your completion requests.

Inference with a fine-tuned model
from llmengine import Completion\nresponse = Completion.create(\nmodel_name=\"ft_abc123\",\nprompt=\"Do you offer in-flight Wi-fi?\",\nmax_new_tokens=100,\ntemperature=0.2,\n)\nprint(response.json())\n
"},{"location":"guides/rate_limits/","title":"Overview","text":""},{"location":"guides/rate_limits/#what-are-rate-limits","title":"What are rate limits?","text":"

A rate limit is a restriction that an API imposes on the number of times a user or client can access the server within a specified period of time.

"},{"location":"guides/rate_limits/#why-do-we-have-rate-limits","title":"Why do we have rate limits?","text":"

Rate limits are a common practice for APIs, and they're put in place for a few different reasons:

  • They help protect against abuse or misuse of the API. For example, a malicious actor could flood the API with requests in an attempt to overload it or cause disruptions in the service. By setting rate limits, the LLM Engine server can prevent this kind of activity.
  • Rate limits help ensure that everyone has fair access to the API. If one person or organization makes an excessive number of requests, it could bog down the API for everyone else. By throttling the number of requests that a single user can make, LLM Engine ensures that the greatest number of people have an opportunity to use the API without experiencing slowdowns. This also applies when self-hosting LLM Engine, as all internal users within an organization would have fair access.
  • Rate limits can help manage the aggregate load on the server infrastructure. If requests to the API increase dramatically, it could tax the servers and cause performance issues. By setting rate limits, LLM Engine can help maintain a smooth and consistent experience for all users. This is especially important when self-hosting LLM Engine.
"},{"location":"guides/rate_limits/#how-do-i-know-if-i-am-rate-limited","title":"How do I know if I am rate limited?","text":"

Per standard HTTP practices, your request will receive a response with HTTP status code of 429, Too Many Requests.

"},{"location":"guides/rate_limits/#what-are-the-rate-limits-for-our-api","title":"What are the rate limits for our API?","text":"

The LLM Engine API is currently in a preview mode, so we do not yet have any advertised rate limits. As the API moves towards a production release, we will update this section with specific rate limits. For now, the API will return HTTP 429 on an as-needed basis.

"},{"location":"guides/rate_limits/#error-mitigation","title":"Error mitigation","text":""},{"location":"guides/rate_limits/#retrying-with-exponential-backoff","title":"Retrying with exponential backoff","text":"

One easy way to avoid rate limit errors is to automatically retry requests with a random exponential backoff. Retrying with exponential backoff means performing a short sleep when a rate limit error is hit, then retrying the unsuccessful request. If the request is still unsuccessful, the sleep length is increased and the process is repeated. This continues until the request is successful or until a maximum number of retries is reached. This approach has many benefits:

  • Automatic retries mean you can recover from rate limit errors without crashes or missing data
  • Exponential backoff means that your first retries are attempted quickly, while still benefiting from longer delays if your first few retries fail
  • Adding random jitter to the delay helps prevent retries from all hitting at the same time.

Below are a few example solutions for Python that use exponential backoff.

"},{"location":"guides/rate_limits/#example-1-using-the-tenacity-library","title":"Example #1: Using the tenacity library","text":"

Tenacity is an Apache 2.0 licensed general-purpose retrying library, written in Python, to simplify the task of adding retry behavior to just about anything. To add exponential backoff to your requests, you can use the tenacity.retry decorator. The below example uses the tenacity.wait_random_exponential function to add random exponential backoff to a request.

import llmengine\nfrom tenacity import (\nretry,\nstop_after_attempt,\nwait_random_exponential,\n)  # for exponential backoff\n@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))\ndef completion_with_backoff(**kwargs):\nreturn llmengine.Completion.create(**kwargs)\ncompletion_with_backoff(model_name=\"llama-7b\", prompt=\"Why is the sky blue?\")\n
"},{"location":"guides/rate_limits/#example-2-using-the-backoff-library","title":"Example #2: Using the backoff library","text":"

Another Python library that provides function decorators for backoff and retry is backoff:

import llmengine\nimport backoff\n@backoff.on_exception(backoff.expo, llmengine.errors.RateLimitExceededError)\ndef completions_with_backoff(**kwargs):\nreturn llmengine.Completion.create(**kwargs)\ncompletions_with_backoff(model_name=\"llama-7b\", prompt=\"Why is the sky blue?\")\n
"},{"location":"guides/token_streaming/","title":"Token streaming","text":"

The Completions APIs support a stream boolean parameter that, when True, will return a streamed response of token-by-token server-sent events (SSEs) rather than waiting to receive the full response when model generation has finished. This reduces the latency before you start receiving a response.

The response will consist of SSEs of the form {\"token\": dict, \"generated_text\": str | null, \"details\": dict | null}, where the dictionary for each token will contain log probability information in addition to the generated string; the generated_text field will be null for all but the last SSE, for which it will contain the full generated response.
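
For illustration, a sketch of consuming these SSEs once each event payload has been parsed into a Python dict of the form above (the \"text\" key on the token dict is an assumption; the exact token fields are not specified here):

Accumulating streamed tokens
def accumulate_sse(events):\n# events: iterable of parsed SSE payloads of the form above\ntokens = []\nfor event in events:\n# \"text\" is an assumed key on the token dict\ntokens.append(event[\"token\"].get(\"text\", \"\"))\nif event[\"generated_text\"] is not None:\n# The final SSE carries the full generated response\nreturn event[\"generated_text\"]\nreturn \"\".join(tokens)\n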

"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 86efc7c30a44bcf55897ea87d2efb72dbf0687e4..de99ef96a7ba038bfe271c320ecd1f489cbb7c85 100644 GIT binary patch delta 12 Tcmb=gXOr*d;Lz@w$W{pe7RLjI delta 12 Tcmb=gXOr*d;E3&-$W{pe7r6tr