Bearer Token Authentication not responding #8794
Unfortunately, that sounds like an issue with token size. Most web servers support total request header sizes of up to 4-8 kB. We do not have any logic to detect token length. We could add that, but it would still not solve your issue.
Dear @floreks, I'm a colleague of Kevin. I'm able to use the token with kubectl --token, so the token itself does not seem to be the problem. Could maybe the 4 kB limit be the issue or something? Thanks for checking. Are there test commands we can run in the pod to see if the header is added correctly in the response? Thanks
I think that Kong by default supports header sizes of up to 8 kB. They are using nginx underneath. Our UI -> API hop most probably has a 4 kB limit currently. I'd have to debug it on our side to be sure where it gets truncated. If you can configure the token content and get rid of unused information, that should make it work for now. I know that some providers include lots of unnecessary information that is not required by the Kubernetes API server.
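If the Kong/nginx 8 kB buffer turns out to be the ceiling, it can in principle be raised through Kong's nginx directive injection mechanism (`KONG_NGINX_HTTP_*` environment variables). A sketch, assuming the Kong Helm chart's `env` mapping; the buffer values here are illustrative, not taken from this thread:

```yaml
# Hypothetical Helm values for the kong chart: inject nginx's
# large_client_header_buffers directive via Kong's env-based
# directive injection. "4 16k" = 4 buffers of 16 kB each.
env:
  # Rendered as "large_client_header_buffers 4 16k;" in the http block.
  nginx_http_large_client_header_buffers: "4 16k"
```

This only addresses the proxy hop; any limit inside the Dashboard's own API/Auth containers would still have to be raised separately.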
Hi @floreks, we are using Azure kubelogin, which does not allow configuring the token content as far as I know. I have taken a quick glance at the code with my limited Go knowledge. Line 99 in 1d4897c
AFAIR Azure allows configuring the JWT token content: groups, audience, etc. With Azure it is usually an issue of too many groups being configured, with all of them embedded into the token, not only the ones actually used.
Regarding code changes, the max header size would need to be checked and increased for both the API and Auth modules, if that's the only issue.
I can indeed see that there are many groups included in the token, but unfortunately I don't find a way to configure the response. We are using kubelogin, which does not have the option to do so, but if you know of another way that leverages Azure authentication to generate the token, it might help us to (temporarily) work around this issue.
Given the behavior it does look like that would be the issue, but the only way to be sure is to test it, of course. What would be the best course of action to get this tested?
@floreks Thanks for finding the potential issue, and thanks for checking.
@floreks Did you get the chance to look at this? Or what can we do to move this forward?
It's a bit problematic to test locally, unfortunately. From what I have checked, the header is not trimmed on our side (auth container); it was able to receive headers bigger than 4 kB. Configuring the API server with a custom OIDC exchange to allow testing custom tokens is time-consuming, so I didn't get a chance to do a full end-to-end test to figure out the root cause yet. On our side, header size does not seem to be the problem.
I was facing this issue and managed to log in through kong-proxy with a regular admin-user token after I recreated it.
That would reduce the length of the token and thereby avoid the issue. Unfortunately, when you have no control over the token length (as with Azure-generated tokens), this does not help.
My use case is:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
This is still a showstopper for many install targets.
Agreed, we are still not able to update because of this issue.
What happened?
When trying to login using a Bearer Token the page is not responding.
We can find this in the logs of the auth-pod:
In the kong-proxy logs we find this:
And in the devtools I can see that the 500 response from /api/v1/me is this:
The token is correct, because it works for authenticating directly. Also, when I just type some random characters, the UI returns a clear error, and in devtools I can see it is returned from api/v1/login instead.
What did you expect to happen?
The page responds and you are logged in (or you get an error message about invalid credentials).
How can we reproduce it (as minimally and precisely as possible)?
It is unclear; we have 2 environments where it works and 2 others where it doesn't. The environments are programmatically deployed, and we can see no difference in configuration between the clusters.
The only difference we can find is that the bearer token is much longer in the environments where it doesn't work, so our best guess is that this is related.
Anything else we need to know?
We are now running behind an Istio VirtualService that redirects to Kong Proxy, but that should not be related, as we tried running Istio directly without Kong. We also get the same result when using a port-forward. (On the kong proxy; port-forwarding does not seem to work since the pods have been split up.)
What browsers are you seeing the problem on?
Chrome, Microsoft Edge, Firefox
Kubernetes Dashboard version
7.1.1 (Helm)
Kubernetes version
1.28.3
Dev environment
No response