When TiDB receives a query request, it forwards the request to the underlying TiKV layer for data retrieval and processing. Currently, TiDB has no built-in request caching mechanism: every request is sent directly to TiKV, with no caching or filtering, even when TiKV is under heavy load.
In high-concurrency scenarios, sending all requests straight to TiKV can cause problems. As request concurrency grows, TiKV may hit a processing-capacity bottleneck, so request latency climbs and overall query performance degrades.
To address this, the suggestion is to introduce a request caching mechanism at the TiDB layer. When TiDB receives a query request, it would first check whether the result of the same query is already in the cache. If it is, TiDB returns the cached result directly without contacting TiKV. Only when the result is absent from the cache, or the cached entry has expired, does TiDB send the request to TiKV for processing.
Introducing a request caching mechanism brings the following benefits:
Improved query performance: for repeated identical queries whose results are already cached, TiDB can return the result immediately, eliminating the round trip to TiKV and the data-processing cost, and thus reducing query latency.
Protecting the stability of TiKV: caching requests on the TiDB side also makes it possible to control request concurrency, preventing an excessive number of requests from reaching TiKV simultaneously and thereby safeguarding TiKV's stability and availability.