From 793ebce1843fbff19180c74fd2562249075d6971 Mon Sep 17 00:00:00 2001 From: Vikas Ummadisetty Date: Wed, 15 Nov 2023 16:51:42 -0800 Subject: [PATCH 1/4] add awq and squeezellm configs --- README.md | 2 +- models/README.md | 4 +- .../TheBloke--Llama-2-13B-chat-AWQ.yaml | 40 ++++++++++++++++++ .../TheBloke--Llama-2-70B-chat-AWQ.yaml | 2 +- .../TheBloke--Llama-2-7B-chat-AWQ.yaml | 42 +++++++++++++++++++ .../squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml | 40 ++++++++++++++++++ .../squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml | 42 +++++++++++++++++++ .../TheBloke--Llama-2-13B-chat-AWQ.yaml | 7 ++++ .../TheBloke--Llama-2-70B-chat-AWQ.yaml | 2 +- .../TheBloke--Llama-2-7B-chat-AWQ.yaml | 7 ++++ .../squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml | 7 ++++ .../squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml | 7 ++++ 12 files changed, 197 insertions(+), 5 deletions(-) create mode 100644 models/continuous_batching/quantization/TheBloke--Llama-2-13B-chat-AWQ.yaml rename models/continuous_batching/{ => quantization}/TheBloke--Llama-2-70B-chat-AWQ.yaml (96%) create mode 100644 models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml create mode 100644 models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml create mode 100644 models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml create mode 100644 serve_configs/TheBloke--Llama-2-13B-chat-AWQ.yaml create mode 100644 serve_configs/TheBloke--Llama-2-7B-chat-AWQ.yaml create mode 100644 serve_configs/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml create mode 100644 serve_configs/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml diff --git a/README.md b/README.md index b2671dfc..43e0aaaa 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ a variety of open source LLMs, built on [Ray Serve](https://docs.ray.io/en/lates In addition to LLM serving, it also includes a CLI and a web frontend (Aviary Explorer) that you can use to compare the outputs of different models directly, rank them by quality, get a cost and latency estimate, and more. -RayLLM supports continuous batching by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. +RayLLM supports continuous batching and quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference latency. RayLLM leverages [Ray Serve](https://docs.ray.io/en/latest/serve/index.html), which has native support for autoscaling and multi-node deployments. RayLLM can scale to zero and create diff --git a/models/README.md b/models/README.md index e4787d25..6f6126a8 100644 --- a/models/README.md +++ b/models/README.md @@ -32,11 +32,11 @@ Engine is the abstraction for interacting with a model. It is responsible for sc The `engine_config` section specifies the Hugging Face model ID (`model_id`), how to initialize it and what parameters to use when generating tokens with an LLM. -RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others. 
+RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others. RayLLM also supports quantization, meaning compressed models can be deployed with cheaper hardware requirements and lower inference latency. * `model_id` is the ID that refers to the model in the RayLLM or OpenAI API. * `type` is the type of inference engine. Only `VLLMEngine` is currently supported. -* `engine_kwargs` and `max_total_tokens` are configuration options for the inference engine. These options may vary depending on the hardware accelerator type and model size. We have tuned the parameters in the configuration files included in RayLLM for you to use as reference. +* `engine_kwargs` and `max_total_tokens` are configuration options for the inference engine (e.g. gpu memory utilization, quantization, max number of concurrent sequences). These options may vary depending on the hardware accelerator type and model size. We have tuned the parameters in the configuration files included in RayLLM for you to use as reference. * `generation` contains configurations related to default generation parameters such as `prompt_format` and `stopping_sequences`. * `hf_model_id` is the Hugging Face model ID. This can also be a path to a local directory. If not specified, defaults to `model_id`. * `runtime_env` is a dictionary that contains Ray runtime environment configuration. It allows you to set per-model pip packages and environment variables. See [Ray documentation on Runtime Environments](https://docs.ray.io/en/latest/ray-core/handling-dependencies.html#runtime-environments) for more information. 
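For reference, the quantization-related pieces of an `engine_config` reduce to a short sketch like the one below, abridged from the `TheBloke/Llama-2-7B-chat-AWQ` config under `models/continuous_batching/quantization/`; the full YAML files also carry the `deployment_config`, `generation`, and `scaling_config` sections.

```yaml
# Abridged engine_config sketch -- see the full config files under
# models/continuous_batching/quantization/ for the complete
# deployment_config, generation, and scaling_config sections.
engine_config:
  model_id: TheBloke/Llama-2-7B-chat-AWQ      # ID exposed through the RayLLM/OpenAI API
  hf_model_id: TheBloke/Llama-2-7B-chat-AWQ   # Hugging Face model ID (or local path) to load
  type: VLLMEngine                            # only VLLMEngine is currently supported
  engine_kwargs:
    quantization: awq                         # passed through to vLLM
    max_num_seqs: 64                          # max number of concurrent sequences
    gpu_memory_utilization: 0.9
  max_total_tokens: 4096
```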
diff --git a/models/continuous_batching/quantization/TheBloke--Llama-2-13B-chat-AWQ.yaml b/models/continuous_batching/quantization/TheBloke--Llama-2-13B-chat-AWQ.yaml new file mode 100644 index 00000000..54ced343 --- /dev/null +++ b/models/continuous_batching/quantization/TheBloke--Llama-2-13B-chat-AWQ.yaml @@ -0,0 +1,40 @@ +deployment_config: + autoscaling_config: + min_replicas: 1 + initial_replicas: 1 + max_replicas: 8 + target_num_ongoing_requests_per_replica: 16 + metrics_interval_s: 10.0 + look_back_period_s: 30.0 + smoothing_factor: 0.5 + downscale_delay_s: 300.0 + upscale_delay_s: 15.0 + max_concurrent_queries: 48 + ray_actor_options: + resources: + accelerator_type_a100_40g: 0.01 +engine_config: + model_id: TheBloke/Llama-2-13B-chat-AWQ + hf_model_id: TheBloke/Llama-2-13B-chat-AWQ + type: VLLMEngine + engine_kwargs: + quantization: awq + max_num_batched_tokens: 12288 + max_num_seqs: 48 + max_total_tokens: 4096 + generation: + prompt_format: + system: "<>\n{instruction}\n<>\n\n" + assistant: " {instruction} " + trailing_assistant: "" + user: "[INST] {system}{instruction} [/INST]" + system_in_user: true + default_system_message: "" + stopping_sequences: [""] +scaling_config: + num_workers: 1 + num_gpus_per_worker: 1 + num_cpus_per_worker: 8 + placement_strategy: "STRICT_PACK" + resources_per_worker: + accelerator_type_a100_40g: 0.01 diff --git a/models/continuous_batching/TheBloke--Llama-2-70B-chat-AWQ.yaml b/models/continuous_batching/quantization/TheBloke--Llama-2-70B-chat-AWQ.yaml similarity index 96% rename from models/continuous_batching/TheBloke--Llama-2-70B-chat-AWQ.yaml rename to models/continuous_batching/quantization/TheBloke--Llama-2-70B-chat-AWQ.yaml index b49c201f..b1c01f09 100644 --- a/models/continuous_batching/TheBloke--Llama-2-70B-chat-AWQ.yaml +++ b/models/continuous_batching/quantization/TheBloke--Llama-2-70B-chat-AWQ.yaml @@ -37,4 +37,4 @@ scaling_config: num_cpus_per_worker: 8 placement_strategy: "STRICT_PACK" resources_per_worker: - accelerator_type_a100_80g: 0.01 + accelerator_type_a100_80g: 0.01 \ No newline at end of file diff --git a/models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml b/models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml new file mode 100644 index 00000000..feca08e8 --- /dev/null +++ b/models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml @@ -0,0 +1,42 @@ +deployment_config: + autoscaling_config: + min_replicas: 1 + initial_replicas: 1 + max_replicas: 8 + target_num_ongoing_requests_per_replica: 24 + metrics_interval_s: 10.0 + look_back_period_s: 30.0 + smoothing_factor: 0.5 + downscale_delay_s: 300.0 + upscale_delay_s: 15.0 + max_concurrent_queries: 64 + ray_actor_options: + resources: + accelerator_type_a10: 0.01 +engine_config: + model_id: TheBloke/Llama-2-7B-chat-AWQ + hf_model_id: TheBloke/Llama-2-7B-chat-AWQ + type: VLLMEngine + engine_kwargs: + quantization: awq + trust_remote_code: true + max_num_batched_tokens: 4096 + max_num_seqs: 64 + gpu_memory_utilization: 0.9 + max_total_tokens: 4096 + generation: + prompt_format: + system: "<>\n{instruction}\n<>\n\n" + assistant: " {instruction} " + trailing_assistant: "" + user: "[INST] {system}{instruction} [/INST]" + system_in_user: true + default_system_message: "" + stopping_sequences: [""] +scaling_config: + num_workers: 1 + num_gpus_per_worker: 1 + num_cpus_per_worker: 8 + placement_strategy: "STRICT_PACK" + resources_per_worker: + accelerator_type_a10: 0.01 diff --git 
a/models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml b/models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml new file mode 100644 index 00000000..905c0e19 --- /dev/null +++ b/models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml @@ -0,0 +1,40 @@ +deployment_config: + autoscaling_config: + min_replicas: 1 + initial_replicas: 1 + max_replicas: 8 + target_num_ongoing_requests_per_replica: 16 + metrics_interval_s: 10.0 + look_back_period_s: 30.0 + smoothing_factor: 0.5 + downscale_delay_s: 300.0 + upscale_delay_s: 15.0 + max_concurrent_queries: 48 + ray_actor_options: + resources: + accelerator_type_a100_40g: 0.01 +engine_config: + model_id: squeeze-ai-lab/sq-llama-2-13b-w4-s0 + hf_model_id: squeeze-ai-lab/sq-llama-2-13b-w4-s0 + type: VLLMEngine + engine_kwargs: + quantization: squeezellm + max_num_batched_tokens: 12288 + max_num_seqs: 48 + max_total_tokens: 4096 + generation: + prompt_format: + system: "<>\n{instruction}\n<>\n\n" + assistant: " {instruction} " + trailing_assistant: "" + user: "[INST] {system}{instruction} [/INST]" + system_in_user: true + default_system_message: "" + stopping_sequences: [""] +scaling_config: + num_workers: 1 + num_gpus_per_worker: 1 + num_cpus_per_worker: 8 + placement_strategy: "STRICT_PACK" + resources_per_worker: + accelerator_type_a100_40g: 0.01 \ No newline at end of file diff --git a/models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml b/models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml new file mode 100644 index 00000000..52180f92 --- /dev/null +++ b/models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml @@ -0,0 +1,42 @@ +deployment_config: + autoscaling_config: + min_replicas: 1 + initial_replicas: 1 + max_replicas: 8 + target_num_ongoing_requests_per_replica: 24 + metrics_interval_s: 10.0 + look_back_period_s: 30.0 + smoothing_factor: 0.5 + downscale_delay_s: 300.0 + upscale_delay_s: 15.0 + max_concurrent_queries: 64 + ray_actor_options: + resources: + accelerator_type_a10: 0.01 +engine_config: + model_id: squeeze-ai-lab/sq-llama-2-7b-w4-s0 + hf_model_id: squeeze-ai-lab/sq-llama-2-7b-w4-s0 + type: VLLMEngine + engine_kwargs: + quantization: squeezellm + trust_remote_code: true + max_num_batched_tokens: 4096 + max_num_seqs: 64 + gpu_memory_utilization: 0.9 + max_total_tokens: 4096 + generation: + prompt_format: + system: "<>\n{instruction}\n<>\n\n" + assistant: " {instruction} " + trailing_assistant: "" + user: "[INST] {system}{instruction} [/INST]" + system_in_user: true + default_system_message: "" + stopping_sequences: [""] +scaling_config: + num_workers: 1 + num_gpus_per_worker: 1 + num_cpus_per_worker: 8 + placement_strategy: "STRICT_PACK" + resources_per_worker: + accelerator_type_a10: 0.01 \ No newline at end of file diff --git a/serve_configs/TheBloke--Llama-2-13B-chat-AWQ.yaml b/serve_configs/TheBloke--Llama-2-13B-chat-AWQ.yaml new file mode 100644 index 00000000..29e25bc6 --- /dev/null +++ b/serve_configs/TheBloke--Llama-2-13B-chat-AWQ.yaml @@ -0,0 +1,7 @@ +applications: +- name: ray-llm + route_prefix: / + import_path: rayllm.backend:router_application + args: + models: + - "./models/continuous_batching/quantization/TheBloke--Llama-2-13B-chat-AWQ.yaml" \ No newline at end of file diff --git a/serve_configs/TheBloke--Llama-2-70B-chat-AWQ.yaml b/serve_configs/TheBloke--Llama-2-70B-chat-AWQ.yaml index 523ed0d0..c9862465 100644 --- 
a/serve_configs/TheBloke--Llama-2-70B-chat-AWQ.yaml +++ b/serve_configs/TheBloke--Llama-2-70B-chat-AWQ.yaml @@ -4,4 +4,4 @@ applications: import_path: rayllm.backend:router_application args: models: - - "./models/continuous_batching/TheBloke--Llama-2-70B-chat-AWQ.yaml" \ No newline at end of file + - "./models/continuous_batching/quantization/TheBloke--Llama-2-70B-chat-AWQ.yaml" \ No newline at end of file diff --git a/serve_configs/TheBloke--Llama-2-7B-chat-AWQ.yaml b/serve_configs/TheBloke--Llama-2-7B-chat-AWQ.yaml new file mode 100644 index 00000000..0a6020aa --- /dev/null +++ b/serve_configs/TheBloke--Llama-2-7B-chat-AWQ.yaml @@ -0,0 +1,7 @@ +applications: +- name: ray-llm + route_prefix: / + import_path: rayllm.backend:router_application + args: + models: + - "./models/continuous_batching/quantization/TheBloke--Llama-2-7B-chat-AWQ.yaml" \ No newline at end of file diff --git a/serve_configs/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml b/serve_configs/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml new file mode 100644 index 00000000..9a938c8a --- /dev/null +++ b/serve_configs/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml @@ -0,0 +1,7 @@ +applications: +- name: ray-llm + route_prefix: / + import_path: rayllm.backend:router_application + args: + models: + - "./models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-13b-w4-s0.yaml" \ No newline at end of file diff --git a/serve_configs/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml b/serve_configs/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml new file mode 100644 index 00000000..1272a482 --- /dev/null +++ b/serve_configs/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml @@ -0,0 +1,7 @@ +applications: +- name: ray-llm + route_prefix: / + import_path: rayllm.backend:router_application + args: + models: + - "./models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml" \ No newline at end of file From 0557452266e942e22346705dcee02eafe8ce9edc Mon Sep 17 00:00:00 2001 From: Vikas Ummadisetty Date: Mon, 20 Nov 2023 14:53:27 -0800 Subject: [PATCH 2/4] add llmperf benchmarks --- README.md | 2 +- models/README.md | 2 +- .../quantization/quantization.md | 28 +++++++++++++++++++ 3 files changed, 30 insertions(+), 2 deletions(-) create mode 100644 models/continuous_batching/quantization/quantization.md diff --git a/README.md b/README.md index 43e0aaaa..7d556ce0 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ a variety of open source LLMs, built on [Ray Serve](https://docs.ray.io/en/lates In addition to LLM serving, it also includes a CLI and a web frontend (Aviary Explorer) that you can use to compare the outputs of different models directly, rank them by quality, get a cost and latency estimate, and more. -RayLLM supports continuous batching and quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference latency. +RayLLM supports continuous batching and quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference costs. RayLLM leverages [Ray Serve](https://docs.ray.io/en/latest/serve/index.html), which has native support for autoscaling and multi-node deployments. 
RayLLM can scale to zero and create diff --git a/models/README.md index 6f6126a8..fc835100 100644 --- a/models/README.md +++ b/models/README.md @@ -32,7 +32,7 @@ Engine is the abstraction for interacting with a model. It is responsible for sc The `engine_config` section specifies the Hugging Face model ID (`model_id`), how to initialize it and what parameters to use when generating tokens with an LLM. -RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others. RayLLM also supports quantization, meaning compressed models can be deployed with cheaper hardware requirements and lower inference latency. +RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others. RayLLM also supports quantization, meaning compressed models can be deployed with cheaper hardware requirements. For more details on using quantized models in RayLLM, see the [quantization guide](continuous_batching/quantization/quantization.md). * `model_id` is the ID that refers to the model in the RayLLM or OpenAI API. * `type` is the type of inference engine. Only `VLLMEngine` is currently supported. diff --git a/models/continuous_batching/quantization/quantization.md b/models/continuous_batching/quantization/quantization.md new file mode 100644 index 00000000..1d48fe71 --- /dev/null +++ b/models/continuous_batching/quantization/quantization.md @@ -0,0 +1,28 @@ +# Quantization in RayLLM + +Quantization is a technique to reduce the computational and memory costs of running inference by representing the weights and/or activations with low-precision data types like 4-bit integer (int4) instead of the usual 16-bit floating point (float16). +Quantization allows users to deploy models with cheaper hardware requirements with potentially lower inference costs. + +RayLLM supports AWQ and SqueezeLLM weight-only quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Note that the AWQ and SqueezeLLM quantization methods in vLLM have not been fully optimized and can be slower than FP16 models for larger batch sizes. + +See the following tables for benchmarks conducted on Llama2 models using the [llmperf](https://github.com/ray-project/llmperf/) evaluation framework with vLLM 0.2.2. The quantized models were benchmarked for end-to-end (E2E) latency, time to first token (TTFT), inter-token latency (ITL), and generation throughput using default llmperf parameters.
+ +Llama2 7B on 1 A100 80G +| Quantization Method | Mean E2E (ms) | Mean TTFT (ms) | Mean ITL (ms/token) | Mean Throughput (tok/s) | +| ------------------- | ------------- | -------------- | ------------------- | ----------------------- | +| Baseline (W16A16) | 3212 | 362 | 18.81 | 53.44 | +| AWQ (W4A16) | 4148 | 994 | 21.76 | 47.09 | +| SqueezeLLM (W4A16) | 42372 | 13857 | 109.77 | 9.13 | + +Llama2 13B on 1 A100 80G +| Quantization Method | Mean E2E (ms) | Mean TTFT (ms) | Mean ITL (ms/token) | Mean Throughput (tok/s) | +| ------------------- | ------------- | -------------- | ------------------- | ----------------------- | +| Baseline (W16A16) | 4371 | 644 | 31.06 | 32.25 | +| AWQ (W4A16) | 5626 | 1695 | 41.35 | 24.48 | +| SqueezeLLM (W4A16) | 64293 | 21676 | 628.71 | 5.6 | + +Llama2 70B on 4 A100 80G (SqueezeLLM Llama2 70B not available on huggingface) +| Quantization Method | Mean E2E (ms) | Mean TTFT (ms) | Mean ITL (ms/token) | Mean Throughput (tok/s) | +| ------------------- | ------------- | -------------- | ------------------- | ----------------------- | +| Baseline (W16A16) | 8048 | 1073 | 58.1 | 17.26 | +| AWQ (W4A16) | 9902 | 2174 | 69.64 | 14.4 | \ No newline at end of file From 436f478dd09e511989cc1ef42c628e7841674c6b Mon Sep 17 00:00:00 2001 From: Vikas Ummadisetty Date: Mon, 20 Nov 2023 14:56:02 -0800 Subject: [PATCH 3/4] rename --- models/README.md | 2 +- .../quantization/{quantization.md => README.md} | 0 2 files changed, 1 insertion(+), 1 deletion(-) rename models/continuous_batching/quantization/{quantization.md => README.md} (100%) diff --git a/models/README.md b/models/README.md index fc835100..0fdc8dab 100644 --- a/models/README.md +++ b/models/README.md @@ -32,7 +32,7 @@ Engine is the abstraction for interacting with a model. It is responsible for sc The `engine_config` section specifies the Hugging Face model ID (`model_id`), how to initialize it and what parameters to use when generating tokens with an LLM. -RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others. RayLLM also supports quantization, meaning compressed models can be deployed with cheaper hardware requirements. For more details on using quantized models in RayLLM, see the [quantization guide](continuous_batching/quantization/quantization.md). +RayLLM supports continuous batching, meaning incoming requests are processed as soon as they arrive, and can be added to batches that are already being processed. This means that the model is not slowed down by certain sentences taking longer to generate than others. RayLLM also supports quantization, meaning compressed models can be deployed with cheaper hardware requirements. For more details on using quantized models in RayLLM, see the [quantization guide](continuous_batching/quantization/README.md). * `model_id` is the ID that refers to the model in the RayLLM or OpenAI API. * `type` is the type of inference engine. Only `VLLMEngine` is currently supported. 
diff --git a/models/continuous_batching/quantization/quantization.md b/models/continuous_batching/quantization/README.md similarity index 100% rename from models/continuous_batching/quantization/quantization.md rename to models/continuous_batching/quantization/README.md From 2aa47f32f7360b44b564b438dc102ed0751e3d71 Mon Sep 17 00:00:00 2001 From: Vikas Ummadisetty Date: Mon, 20 Nov 2023 15:28:13 -0800 Subject: [PATCH 4/4] link quantization guide --- README.md | 2 +- models/continuous_batching/quantization/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 7d556ce0..f0898071 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ a variety of open source LLMs, built on [Ray Serve](https://docs.ray.io/en/lates In addition to LLM serving, it also includes a CLI and a web frontend (Aviary Explorer) that you can use to compare the outputs of different models directly, rank them by quality, get a cost and latency estimate, and more. -RayLLM supports continuous batching and quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference costs. +RayLLM supports continuous batching and quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Continuous batching allows you to get much better throughput and latency than static batching. Quantization allows you to deploy compressed models with cheaper hardware requirements and lower inference costs. See [quantization guide](models/continuous_batching/quantization/README.md) for more details on running quantized models on RayLLM. RayLLM leverages [Ray Serve](https://docs.ray.io/en/latest/serve/index.html), which has native support for autoscaling and multi-node deployments. RayLLM can scale to zero and create diff --git a/models/continuous_batching/quantization/README.md b/models/continuous_batching/quantization/README.md index 1d48fe71..bdf34045 100644 --- a/models/continuous_batching/quantization/README.md +++ b/models/continuous_batching/quantization/README.md @@ -3,7 +3,7 @@ Quantization is a technique to reduce the computational and memory costs of running inference by representing the weights and/or activations with low-precision data types like 4-bit integer (int4) instead of the usual 16-bit floating point (float16). Quantization allows users to deploy models with cheaper hardware requirements with potentially lower inference costs. -RayLLM supports AWQ and SqueezeLLM weight-only quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Note that the AWQ and SqueezeLLM quantization methods in vLLM have not been fully optimized and can be slower than FP16 models for larger batch sizes. +RayLLM supports AWQ and SqueezeLLM weight-only quantization by integrating with [vLLM](https://github.com/vllm-project/vllm). Quantization can be enabled in RayLLM by specifying the `quantization` method in `engine_kwargs` and using a quantized model for `model_id` and `hf_model_id`. See the configs in this directory for quantization examples. Note that the AWQ and SqueezeLLM quantization methods in vLLM have not been fully optimized and can be slower than FP16 models for larger batch sizes. 
See the following tables for benchmarks conducted on Llama2 models using the [llmperf](https://github.com/ray-project/llmperf/) evaluation framework with vLLM 0.2.2. The quantized models were benchmarked for end-to-end (E2E) latency, time to first token (TTFT), inter-token latency (ITL), and generation throughput using default llmperf parameters.
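Concretely, the recipe described in the guide above — set the `quantization` method in `engine_kwargs` and point `model_id`/`hf_model_id` at a quantized checkpoint — amounts to a sketch like the following, condensed from `models/continuous_batching/quantization/squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml`.

```yaml
# Condensed from squeeze-ai-lab--sq-llama-2-7b-w4-s0.yaml; only the
# quantization-relevant keys are shown.
engine_config:
  model_id: squeeze-ai-lab/sq-llama-2-7b-w4-s0     # pre-quantized checkpoint
  hf_model_id: squeeze-ai-lab/sq-llama-2-7b-w4-s0
  type: VLLMEngine
  engine_kwargs:
    quantization: squeezellm                       # use `awq` for the TheBloke AWQ checkpoints
```

The matching file in `serve_configs/` then simply lists this model config under `args.models` so the router application serves it.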