Add more details about prompt format in the docs (#126)
Trying to make it easier for users to self-serve adding custom models to
use with ray-llm.

---------

Signed-off-by: Alan Guo <[email protected]>
alanwguo authored Jan 25, 2024
1 parent 5255abe commit f6926b7
26 changes: 23 additions & 3 deletions models/README.md
@@ -38,7 +38,7 @@ RayLLM supports continuous batching, meaning incoming requests are processed as
* `model_id` is the ID that refers to the model in the RayLLM or OpenAI API.
* `type` is the type of inference engine. `VLLMEngine`, `TRTLLMEngine`, and `EmbeddingEngine` are currently supported.
* `engine_kwargs` and `max_total_tokens` are configuration options for the inference engine (e.g. `gpu_memory_utilization`, `quantization`, `max_num_seqs`, and so on; see [more options](https://github.com/vllm-project/vllm/blob/main/vllm/engine/arg_utils.py#L11)). These options may vary depending on the hardware accelerator type and model size. We have tuned the parameters in the configuration files included in RayLLM for you to use as a reference.
* `generation` contains configurations related to default generation parameters such as `prompt_format` and `stopping_sequences`. More information about the prompt format can be found in the [Prompt Format](#prompt-format) section.
* `hf_model_id` is the Hugging Face model ID. This can also be a path to a local directory. If not specified, defaults to `model_id`.
* `runtime_env` is a dictionary that contains Ray runtime environment configuration. It allows you to set per-model pip packages and environment variables. See [Ray documentation on Runtime Environments](https://docs.ray.io/en/latest/ray-core/handling-dependencies.html#runtime-environments) for more information.
* `s3_mirror_config` is a dictionary that contains configuration for loading the model from S3 instead of Hugging Face Hub. You can use this to speed up downloads.
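
For illustration, a minimal vLLM-backed `engine_config` using the fields above might look like the sketch below. The model ID and values are placeholders rather than tuned recommendations; the configuration files bundled with RayLLM remain the reference.

```yaml
# Illustrative sketch only -- the model ID and values are placeholders, not tuned settings.
engine_config:
  model_id: my-org/my-model          # ID exposed through the RayLLM / OpenAI API
  hf_model_id: my-org/my-model       # Hugging Face ID or local path; defaults to model_id
  type: VLLMEngine                   # inference engine type
  engine_kwargs:                     # engine options, e.g. vLLM arguments
    gpu_memory_utilization: 0.9
    max_num_seqs: 64
  max_total_tokens: 4096
  generation:
    stopping_sequences: []
    # prompt_format: ...             # see the Prompt Format section below
```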
@@ -49,7 +49,7 @@ RayLLM supports continuous batching, meaning incoming requests are processed as
* `type` is the type of inference engine. `VLLMEngine`, `TRTLLMEngine`, and `EmbeddingEngine` are currently supported.
* `model_local_path` is the path to the TensorRT-LLM model directory.
* `s3_mirror_config` is a dictionary that contains configurations for loading the model from S3 instead of Hugging Face Hub. You can use this to speed up downloads.
* `generation` contains configurations related to default generation parameters such as `prompt_format` and `stopping_sequences`. More information about the prompt format can be found in the [Prompt Format](#prompt-format) section.
* `scheduler_policy` sets the scheduler policy to either `MAX_UTILIZATION` or `GUARANTEED_NO_EVICT`.
(`MAX_UTILIZATION` packs as many requests as the underlying TRT engine can support in any iteration of the InflightBatching generation loop. While this is expected to maximize GPU throughput, it might require that some requests be paused and restarted depending on peak KV cache memory availability.
`GUARANTEED_NO_EVICT` uses KV cache more conservatively and guarantees that a request, once started, runs to completion without eviction.)
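
Putting the TRTLLMEngine options above together, a hedged sketch might look like the following; the model path and policy choice are placeholders, and the linked llama example is the authoritative reference.

```yaml
# Illustrative sketch only -- the path and values are placeholders.
engine_config:
  model_id: meta-llama/Llama-2-7b-chat-hf
  type: TRTLLMEngine
  model_local_path: /models/trtllm-llama-2-7b-chat-hf   # or use s3_mirror_config (see below)
  scheduler_policy: GUARANTEED_NO_EVICT                  # or MAX_UTILIZATION
  generation:
    stopping_sequences: []
    # prompt_format: ...                                 # see the Prompt Format section below
```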
@@ -69,6 +69,26 @@ RayLLM supports continuous batching, meaning incoming requests are processed as
You can follow the [TensorRT-LLM llama example](https://github.com/NVIDIA/TensorRT-LLM/tree/v0.6.1/examples/llama) to generate the model. After generating the model, you can upload the model artifacts to S3 and use `s3_mirror_config` to load the model from S3, or place the model artifacts in a local directory and use `model_local_path` to load the model from that directory. See the [llama example](continuous_batching/trtllm-meta-llama--Llama-2-7b-chat-hf.yaml) for more details.
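
For example, the two loading options might be configured roughly as follows; the bucket URI and directory are placeholders for wherever you put the generated artifacts.

```yaml
engine_config:
  # Option 1: download the generated TensorRT-LLM artifacts from S3 (placeholder bucket URI)
  s3_mirror_config:
    bucket_uri: s3://my-bucket/trtllm-llama-2-7b-chat-hf/
  # Option 2: read the artifacts from a local directory instead (placeholder path)
  # model_local_path: /models/trtllm-llama-2-7b-chat-hf
```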


#### Prompt Format
A prompt format is used to convert a chat completions API input into a prompt that is fed into the LLM engine. The format is a dictionary where each key refers to one of the chat actors and the value is a string template used to convert the content of that actor's messages into a string. Each message in the API input is formatted into a string, and these strings are assembled together to form the final prompt.

The string template should include the `{instruction}` keyword, which will be replaced with message content from the ChatCompletions API.

The following keys are supported:
* `system` - The system message. This is a message inserted at the beginning of the prompt to provide instructions for the LLM.
* `assistant` - The assistant message. These are the messages from past assistant turns, as defined in the list of messages provided in the ChatCompletions API.
* `trailing_assistant` - The new assistant message. This goes at the end of the prompt and corresponds to the message that the assistant will send in the current turn, as generated by the LLM.
* `user` - The user message. These are the user's messages as defined in the list of messages provided in the ChatCompletions API.

In addition, there are some configurations that control the prompt formatting behavior:
* `default_system_message` - The default system message. This system message is used by default if one is not provided in the ChatCompletions API.
* `system_in_user` - Whether the system prompt should be included in the user prompt. If true, the `user` template should include `{system}`.
* `add_system_tags_even_if_message_is_empty` - Whether to include the system tags even if the user message is empty.
* `strip_whitespace` - Whether to automatically strip whitespace from the left and right of the content of the messages provided in the ChatCompletions API.


You can see an example in the [Adding a new model](#adding-a-new-model) section below.
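
To make the mechanics concrete, here is a hedged sketch of a `prompt_format`, modeled loosely on the instruction-style example in the next section, together with the prompt it would roughly produce. The template strings are illustrative, not a recommendation for any particular model.

```yaml
# Illustrative prompt_format -- the template strings are placeholders.
prompt_format:
  system: "{instruction}\n"                    # system message
  user: "### Instruction:\n{instruction}\n"    # each user message
  assistant: "### Response:\n{instruction}\n"  # each past assistant message
  trailing_assistant: "### Response:\n"        # appended to cue the new assistant turn
  default_system_message: "You are a helpful assistant."

# Given a ChatCompletions request with messages:
#   [{"role": "system", "content": "You answer questions about Ray."},
#    {"role": "user", "content": "What is RayLLM?"}]
# each message is formatted with its template and the strings are assembled,
# so the engine would receive roughly:
#   "You answer questions about Ray.\n### Instruction:\nWhat is RayLLM?\n### Response:\n"
```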

### Scaling config

Finally, the `scaling_config` section specifies what resources should be used to serve the model - this corresponds to Ray AIR [ScalingConfig](https://docs.ray.io/en/latest/train/api/doc/ray.train.ScalingConfig.html). Note that the `scaling_config` applies to each model replica, and not the entire model deployment (in other words, each replica will have `num_workers` workers).
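
A rough sketch of a per-replica `scaling_config` is shown below. The per-worker resource keys and counts are assumptions for illustration; check the tuned configuration files included in RayLLM for the exact fields used by each model.

```yaml
# Illustrative sketch only -- resource counts are placeholders and the exact
# key names should be checked against the bundled configuration files.
scaling_config:
  num_workers: 1              # workers per model replica
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
```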
@@ -126,7 +146,7 @@ engine_config:
  s3_mirror_config:
    bucket_uri: s3://large-dl-models-mirror/models--mosaicml--mpt-7b-instruct/main-safetensors/
  generation:
    # Format to convert user API input into prompts to feed into the LLM engine. {instruction} refers to user-supplied input.
    prompt_format:
      system: "{instruction}\n" # System message. Will default to default_system_message
      assistant: "### Response:\n{instruction}\n" # Past assistant message. Used in chat completions API.
