🌱 Add request latency, rate limiter latency and request retry metrics #2481
Conversation
Signed-off-by: Stefan Büringer [email protected]
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sbueringer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/hold

@alvaroaleman I thought about this a bit more. The high-cardinality problem we had before was caused only by the "host" label, and I think these metrics are valuable even without it. WDYT about adding the metrics, but without the host label? It's good to have at least some idea about request latency, rate limiting, and retries even if you don't know for which host. And if a controller only communicates with a single apiserver, the host doesn't matter that much anyway.

/hold cancel

cc @fabriziopandini (fyi)
This mostly seems OK. What I am starting to heavily dislike is the fact that it's impossible to configure this. Can you think of any way we could make this configurable?
rateLimiterLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "rest_client_rate_limiter_duration_seconds",
SIG API Machinery's guidance, the last time I asked, was not to use client-side rate limiting but to rely on APF (API Priority and Fairness) instead. Maybe skip this one?
requestRetry = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "rest_client_request_retries_total",
For my understanding: this will only ever be populated for watch requests made by the informer, right? In all other cases the error bubbles up to the application, which then has to decide whether it wants to retry, so the underlying machinery doesn't know it's a retry. Or am I wrong?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/close
@sbueringer: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.