diff --git a/models/README.md b/models/README.md
index a8fe030a..85edf396 100644
--- a/models/README.md
+++ b/models/README.md
@@ -50,7 +50,7 @@ RayLLM supports continuous batching, meaning incoming requests are processed as
 * `model_local_path` is the path to the TensorRT-LLM model directory.
 * `s3_mirror_config` is a dictionary that contains configurations for loading the model from S3 instead of Hugging Face Hub. You can use this to speed up downloads.
 * `generation` contains configurations related to default generation parameters such as `prompt_format` and `stopping_sequences`.
-* `scheduler_policy` is to choose scheduler policy between max_utilization/guaranteed_no_evict.
+* `scheduler_policy` sets the scheduler policy to either `MAX_UTILIZATION` or `GUARANTEED_NO_EVICT`. (`MAX_UTILIZATION` packs as many requests as the underlying TRT engine can support in any iteration of the InflightBatching generation loop. While this is expected to maximize GPU throughput, it might require that some requests be paused and restarted depending on peak KV cache memory availability. `GUARANTEED_NO_EVICT` uses KV cache more conservatively and guarantees that a request, once started, runs to completion without eviction.)
 * `logger_level` is to configure log level for TensorRT-LLM engine. ("VERBOSE", "INFO", "WARNING", "ERROR")
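
For context, the keys described in the hunk above might sit together in a model config roughly like this. This is only a sketch: the nesting, the `bucket_uri` sub-key, and all values are illustrative assumptions, not taken from this diff or the RayLLM schema.

```yaml
# Hypothetical TensorRT-LLM model config fragment.
# Field names follow the README bullets; nesting and values are assumed.
model_local_path: /models/my-trt-llm-model      # path to the TensorRT-LLM model directory
s3_mirror_config:
  bucket_uri: s3://my-bucket/my-trt-llm-model/  # assumed key name for the S3 mirror
generation:
  stopping_sequences: ["</s>"]                  # example default generation parameter
scheduler_policy: MAX_UTILIZATION               # or GUARANTEED_NO_EVICT
logger_level: INFO                              # VERBOSE | INFO | WARNING | ERROR
```

Choosing between the two policies is a throughput/latency-stability trade-off: `MAX_UTILIZATION` favors aggregate GPU throughput at the risk of pausing requests under KV cache pressure, while `GUARANTEED_NO_EVICT` trades some throughput for the guarantee that a started request is never evicted.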