
Merge pull request #95 from ray-project/quantization
Add AWQ and SqueezeLLM quantization configs
uvikas authored Nov 20, 2023
2 parents 335e688 + 2aa47f3 commit fa3a766
Showing 13 changed files with 225 additions and 5 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -18,7 +18,7 @@ a variety of open source LLMs, built on [Ray Serve](https://docs.ray.io/en/lates

In addition to LLM serving, it also includes a CLI and a web frontend (Aviary Explorer) that you can use to compare the outputs of different models directly, rank them by quality, get a cost and latency estimate, and more.

-RayLLM supports continuous batching by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching.
+RayLLM supports continuous batching and quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference costs. See the [quantization guide](models/continuous_batching/quantization/README.md) for more details on running quantized models on RayLLM.

RayLLM leverages [Ray Serve](https://docs.ray.io/en/latest/serve/index.html), which has native support for autoscaling
and multi-node deployments. RayLLM can scale to zero and create
4 changes: 2 additions & 2 deletions models/README.md
@@ -32,11 +32,11 @@ Engine is the abstraction for interacting with a model. It is responsible for sc

The `engine_config` section specifies the Hugging Face model ID (`model_id`), how to initialize it and what parameters to use when generating tokens with an LLM.

-RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others.
+RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others. RayLLM also supports quantization, meaning compressed models can be deployed with cheaper hardware requirements. For more details on using quantized models in RayLLM, see the [quantization guide](continuous_batching/quantization/README.md).

* `model_id` is the ID that refers to the model in the RayLLM or OpenAI API.
* `type` is the type of inference engine. Only `VLLMEngine` is currently supported.
-* `engine_kwargs` and `max_total_tokens` are configuration options for the inference engine. These options may vary depending on the hardware accelerator type and model size. We have tuned the parameters in the configuration files included in RayLLM for you to use as reference.
+* `engine_kwargs` and `max_total_tokens` are configuration options for the inference engine (e.g., GPU memory utilization, quantization, max number of concurrent sequences). These options may vary depending on the hardware accelerator type and model size. We have tuned the parameters in the configuration files included in RayLLM for you to use as reference; a combined example is sketched after this list.
* `generation` contains configurations related to default generation parameters such as `prompt_format` and `stopping_sequences`.
* `hf_model_id` is the Hugging Face model ID. This can also be a path to a local directory. If not specified, defaults to `model_id`.
* `runtime_env` is a dictionary that contains Ray runtime environment configuration. It allows you to set per-model pip packages and environment variables. See [Ray documentation on Runtime Environments](https://docs.ray.io/en/latest/ray-core/handling-dependencies.html#runtime-environments) for more information.
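
For illustration, here is a sketch of how these `engine_config` fields fit together, using values from the Llama-2-7B AWQ config added in this commit (abridged; `runtime_env` and most `prompt_format` fields omitted):

```yaml
engine_config:
  model_id: TheBloke/Llama-2-7B-chat-AWQ       # name exposed through the RayLLM / OpenAI API
  hf_model_id: TheBloke/Llama-2-7B-chat-AWQ    # Hugging Face model ID, or a local path
  type: VLLMEngine                             # only VLLMEngine is currently supported
  engine_kwargs:                               # passed through to the vLLM engine
    quantization: awq
    gpu_memory_utilization: 0.9
    max_num_seqs: 64
  max_total_tokens: 4096
  generation:
    prompt_format:
      user: "[INST] {system}{instruction} [/INST]"
    stopping_sequences: ["<unk>"]
```
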
28 changes: 28 additions & 0 deletions models/continuous_batching/quantization/README.md
@@ -0,0 +1,28 @@
# Quantization in RayLLM

Quantization is a technique to reduce the computational and memory costs of running inference by representing the weights and/or activations with low-precision data types like 4-bit integer (int4) instead of the usual 16-bit floating point (float16).
Quantization allows users to deploy models on cheaper hardware, potentially lowering inference costs.

RayLLM supports AWQ and SqueezeLLM weight-only quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Quantization can be enabled in RayLLM by specifying the `quantization` method in `engine_kwargs` and using a quantized model for `model_id` and `hf_model_id`. See the configs in this directory for quantization examples. Note that the AWQ and SqueezeLLM quantization methods in vLLM have not been fully optimized and can be slower than FP16 models for larger batch sizes.
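
For example, the quantization-specific parts of an AWQ config are just the quantized model IDs and the `quantization` engine kwarg; the snippet below is taken from the Llama-2-13B AWQ config in this directory (the remaining fields are unchanged from a non-quantized config):

```yaml
engine_config:
  model_id: TheBloke/Llama-2-13B-chat-AWQ
  hf_model_id: TheBloke/Llama-2-13B-chat-AWQ
  type: VLLMEngine
  engine_kwargs:
    quantization: awq   # set to "squeezellm" for SqueezeLLM checkpoints
    max_num_batched_tokens: 12288
    max_num_seqs: 48
  max_total_tokens: 4096
```
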

See the following tables for benchmarks conducted on Llama2 models using the [llmperf](https://github.com/ray-project/llmperf/) evaluation framework with vLLM 0.2.2. The quantized models were benchmarked for end-to-end (E2E) latency, time to first token (TTFT), inter-token latency (ITL), and generation throughput using default llmperf parameters.

Llama2 7B on 1 A100 80G
| Quantization Method | Mean E2E (ms) | Mean TTFT (ms) | Mean ITL (ms/token) | Mean Throughput (tok/s) |
| ------------------- | ------------- | -------------- | ------------------- | ----------------------- |
| Baseline (W16A16) | 3212 | 362 | 18.81 | 53.44 |
| AWQ (W4A16) | 4148 | 994 | 21.76 | 47.09 |
| SqueezeLLM (W4A16) | 42372 | 13857 | 109.77 | 9.13 |

Llama2 13B on 1 A100 80G
| Quantization Method | Mean E2E (ms) | Mean TTFT (ms) | Mean ITL (ms/token) | Mean Throughput (tok/s) |
| ------------------- | ------------- | -------------- | ------------------- | ----------------------- |
| Baseline (W16A16) | 4371 | 644 | 31.06 | 32.25 |
| AWQ (W4A16) | 5626 | 1695 | 41.35 | 24.48 |
| SqueezeLLM (W4A16) | 64293 | 21676 | 628.71 | 5.6 |

Llama2 70B on 4 A100 80G (SqueezeLLM Llama2 70B not available on Hugging Face)
| Quantization Method | Mean E2E (ms) | Mean TTFT (ms) | Mean ITL (ms/token) | Mean Throughput (tok/s) |
| ------------------- | ------------- | -------------- | ------------------- | ----------------------- |
| Baseline (W16A16) | 8048 | 1073 | 58.1 | 17.26 |
| AWQ (W4A16) | 9902 | 2174 | 69.64 | 14.4 |
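
To deploy one of these configs, reference it from a Ray Serve application config, as in the `serve_configs` files added alongside these model configs, for example:

```yaml
# serve_configs/TheBloke--Llama-2-7B-chat-AWQ.yaml
applications:
- name: ray-llm
  route_prefix: /
  import_path: rayllm.backend:router_application
  args:
    models:
      - "./models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml"
```
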
40 changes: 40 additions & 0 deletions models/continuous_batching/quantization/TheBloke--Llama-2-13B-chat-AWQ.yaml
@@ -0,0 +1,40 @@
deployment_config:
  autoscaling_config:
    min_replicas: 1
    initial_replicas: 1
    max_replicas: 8
    target_num_ongoing_requests_per_replica: 16
    metrics_interval_s: 10.0
    look_back_period_s: 30.0
    smoothing_factor: 0.5
    downscale_delay_s: 300.0
    upscale_delay_s: 15.0
  max_concurrent_queries: 48
  ray_actor_options:
    resources:
      accelerator_type_a100_40g: 0.01
engine_config:
  model_id: TheBloke/Llama-2-13B-chat-AWQ
  hf_model_id: TheBloke/Llama-2-13B-chat-AWQ
  type: VLLMEngine
  engine_kwargs:
    quantization: awq
    max_num_batched_tokens: 12288
    max_num_seqs: 48
  max_total_tokens: 4096
  generation:
    prompt_format:
      system: "<<SYS>>\n{instruction}\n<</SYS>>\n\n"
      assistant: " {instruction} </s><s>"
      trailing_assistant: ""
      user: "[INST] {system}{instruction} [/INST]"
      system_in_user: true
      default_system_message: ""
    stopping_sequences: ["<unk>"]
scaling_config:
  num_workers: 1
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
  resources_per_worker:
    accelerator_type_a100_40g: 0.01
2 changes: 1 addition & 1 deletion models/continuous_batching/quantization/TheBloke--Llama-2-70B-chat-AWQ.yaml
@@ -37,4 +37,4 @@ scaling_config:
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
  resources_per_worker:
-   accelerator_type_a100_80g: 0.01
+   accelerator_type_a100_80g: 0.01
42 changes: 42 additions & 0 deletions models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml
@@ -0,0 +1,42 @@
deployment_config:
  autoscaling_config:
    min_replicas: 1
    initial_replicas: 1
    max_replicas: 8
    target_num_ongoing_requests_per_replica: 24
    metrics_interval_s: 10.0
    look_back_period_s: 30.0
    smoothing_factor: 0.5
    downscale_delay_s: 300.0
    upscale_delay_s: 15.0
  max_concurrent_queries: 64
  ray_actor_options:
    resources:
      accelerator_type_a10: 0.01
engine_config:
  model_id: TheBloke/Llama-2-7B-chat-AWQ
  hf_model_id: TheBloke/Llama-2-7B-chat-AWQ
  type: VLLMEngine
  engine_kwargs:
    quantization: awq
    trust_remote_code: true
    max_num_batched_tokens: 4096
    max_num_seqs: 64
    gpu_memory_utilization: 0.9
  max_total_tokens: 4096
  generation:
    prompt_format:
      system: "<<SYS>>\n{instruction}\n<</SYS>>\n\n"
      assistant: " {instruction} </s><s>"
      trailing_assistant: ""
      user: "[INST] {system}{instruction} [/INST]"
      system_in_user: true
      default_system_message: ""
    stopping_sequences: ["<unk>"]
scaling_config:
  num_workers: 1
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
  resources_per_worker:
    accelerator_type_a10: 0.01
40 changes: 40 additions & 0 deletions models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml
@@ -0,0 +1,40 @@
deployment_config:
  autoscaling_config:
    min_replicas: 1
    initial_replicas: 1
    max_replicas: 8
    target_num_ongoing_requests_per_replica: 16
    metrics_interval_s: 10.0
    look_back_period_s: 30.0
    smoothing_factor: 0.5
    downscale_delay_s: 300.0
    upscale_delay_s: 15.0
  max_concurrent_queries: 48
  ray_actor_options:
    resources:
      accelerator_type_a100_40g: 0.01
engine_config:
  model_id: squeeze-ai-lab/sq-llama-2-13b-w4-s0
  hf_model_id: squeeze-ai-lab/sq-llama-2-13b-w4-s0
  type: VLLMEngine
  engine_kwargs:
    quantization: squeezellm
    max_num_batched_tokens: 12288
    max_num_seqs: 48
  max_total_tokens: 4096
  generation:
    prompt_format:
      system: "<<SYS>>\n{instruction}\n<</SYS>>\n\n"
      assistant: " {instruction} </s><s>"
      trailing_assistant: ""
      user: "[INST] {system}{instruction} [/INST]"
      system_in_user: true
      default_system_message: ""
    stopping_sequences: ["<unk>"]
scaling_config:
  num_workers: 1
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
  resources_per_worker:
    accelerator_type_a100_40g: 0.01
42 changes: 42 additions & 0 deletions models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml
@@ -0,0 +1,42 @@
deployment_config:
  autoscaling_config:
    min_replicas: 1
    initial_replicas: 1
    max_replicas: 8
    target_num_ongoing_requests_per_replica: 24
    metrics_interval_s: 10.0
    look_back_period_s: 30.0
    smoothing_factor: 0.5
    downscale_delay_s: 300.0
    upscale_delay_s: 15.0
  max_concurrent_queries: 64
  ray_actor_options:
    resources:
      accelerator_type_a10: 0.01
engine_config:
  model_id: squeeze-ai-lab/sq-llama-2-7b-w4-s0
  hf_model_id: squeeze-ai-lab/sq-llama-2-7b-w4-s0
  type: VLLMEngine
  engine_kwargs:
    quantization: squeezellm
    trust_remote_code: true
    max_num_batched_tokens: 4096
    max_num_seqs: 64
    gpu_memory_utilization: 0.9
  max_total_tokens: 4096
  generation:
    prompt_format:
      system: "<<SYS>>\n{instruction}\n<</SYS>>\n\n"
      assistant: " {instruction} </s><s>"
      trailing_assistant: ""
      user: "[INST] {system}{instruction} [/INST]"
      system_in_user: true
      default_system_message: ""
    stopping_sequences: ["<unk>"]
scaling_config:
  num_workers: 1
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
  resources_per_worker:
    accelerator_type_a10: 0.01
7 changes: 7 additions & 0 deletions serve_configs/TheBloke--Llama-2-13B-chat-AWQ.yaml
@@ -0,0 +1,7 @@
applications:
- name: ray-llm
  route_prefix: /
  import_path: rayllm.backend:router_application
  args:
    models:
      - "./models/continuous_batching/quantization/TheBloke--Llama-2-13B-chat-AWQ.yaml"
2 changes: 1 addition & 1 deletion serve_configs/TheBloke--Llama-2-70B-chat-AWQ.yaml
@@ -4,4 +4,4 @@ applications:
  import_path: rayllm.backend:router_application
  args:
    models:
-     - "./models/continuous_batching/TheBloke--Llama-2-70B-chat-AWQ.yaml"
+     - "./models/continuous_batching/quantization/TheBloke--Llama-2-70B-chat-AWQ.yaml"
7 changes: 7 additions & 0 deletions serve_configs/TheBloke--Llama-2-7B-chat-AWQ.yaml
@@ -0,0 +1,7 @@
applications:
- name: ray-llm
  route_prefix: /
  import_path: rayllm.backend:router_application
  args:
    models:
      - "./models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml"
7 changes: 7 additions & 0 deletions serve_configs/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml
@@ -0,0 +1,7 @@
applications:
- name: ray-llm
  route_prefix: /
  import_path: rayllm.backend:router_application
  args:
    models:
      - "./models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml"
7 changes: 7 additions & 0 deletions serve_configs/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml
@@ -0,0 +1,7 @@
applications:
- name: ray-llm
  route_prefix: /
  import_path: rayllm.backend:router_application
  args:
    models:
      - "./models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml"
