[YUNIKORN-1977] E2E test for verifying user info with non-kube-admin user #915
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##           master     #915      +/-   ##
==========================================
- Coverage   68.18%   68.03%   -0.15%
==========================================
  Files          70       70
  Lines        7621     9195    +1574
==========================================
+ Hits         5196     6256    +1060
- Misses       2216     2732     +516
+ Partials      209      207       -2

☔ View full report in Codecov by Sentry.
Hi, it looks great! I have a simple question.
-1 I'm not a fan of the kubectl approach; it's completely unnecessary and makes the test more complicated. It's possible to use the API.
_, err = clientset.CoreV1().Secrets(namespace).Create(context.TODO(), &v1.Secret{
	ObjectMeta: metav1.ObjectMeta{
		Name: secretName,
		Annotations: map[string]string{
			"kubernetes.io/service-account.name": serviceAccountName,
		},
	},
	Type: v1.SecretTypeServiceAccountToken,
}, metav1.CreateOptions{})
Please use KubeCtl.CreateSecret() if possible. If the annotation is necessary, create a new method with the name CreateSecretWithAnnotation(secret *v1.Secret, namespace string, annotations map[string]string) to avoid calling this directly.
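For illustration, a minimal sketch of what such a helper could look like; this is hypothetical code built from the suggested signature, assuming KubeCtl wraps a clientSet as in the other snippets in this thread:

// Hypothetical sketch of the suggested helper, not the actual repo code.
func (k *KubeCtl) CreateSecretWithAnnotation(secret *v1.Secret, namespace string, annotations map[string]string) (*v1.Secret, error) {
	if secret.ObjectMeta.Annotations == nil {
		secret.ObjectMeta.Annotations = map[string]string{}
	}
	for key, value := range annotations {
		secret.ObjectMeta.Annotations[key] = value
	}
	return k.clientSet.CoreV1().Secrets(namespace).Create(context.TODO(), secret, metav1.CreateOptions{})
}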
Updated the code with requested changes.
func createKubeconfig(path, currentContext, clusterCA, clusterServer, userTokenValue string) error {
	kubeconfigTemplate := `
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: test-user
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: test-user
  user:
    token: ${USER_TOKEN_VALUE}
`
	// Replace placeholders in the template
	kubeconfigContent := strings.ReplaceAll(kubeconfigTemplate, "${CURRENT_CONTEXT}", currentContext)
	kubeconfigContent = strings.ReplaceAll(kubeconfigContent, "${CLUSTER_CA}", clusterCA)
	kubeconfigContent = strings.ReplaceAll(kubeconfigContent, "${CLUSTER_SERVER}", clusterServer)
	kubeconfigContent = strings.ReplaceAll(kubeconfigContent, "${USER_TOKEN_VALUE}", userTokenValue)

	// Write the kubeconfig YAML to the file
	err := os.WriteFile(path, []byte(kubeconfigContent), 0600)
	gomega.Ω(err).NotTo(gomega.HaveOccurred())
	return nil
}
All of this should NOT be necessary. It's way too complicated to mess around with separate kubectl calls. You can do this:
config, _ := kClient.GetKubeConfig() // handle error in real code
newConf := config.DeepCopy() // copy existing config
newConf.TLSClientConfig.CertFile = "" // remove cert file
newConf.TLSClientConfig.KeyFile = "" // remove key file
newConf.BearerToken = "<base64Token>" // set token that is retrieved in the test
_ = kClient.SetClientFromConfig(newConf)
After this point, kClient will use the token for authentication; there's no need to delete/restore anything.
New method in KubeCtl:
func (k *KubeCtl) SetClientFromConfig(conf *rest.Config) error {
	var err error
	k.kubeConfig = conf.DeepCopy()
	k.clientSet, err = kubernetes.NewForConfig(k.kubeConfig) // creates a new clientset
	return err
}
Also, try to retrieve the secret token using KubeCtl. We might need to create a new method for it, but again, it shouldn't involve running the kubectl command:
func (k *KubeCtl) GetSecret(namespace, secretName string) (*v1.Secret, error) {
	return k.clientSet.CoreV1().Secrets(namespace).Get(context.TODO(), secretName, metav1.GetOptions{})
}
Sure @pbacsko, will accommodate the requested changes.
@rrajesh-cloudera please fix the merge conflicts.
Hi @pbacsko, resolved the merge conflict.
Thank you for your patch; some questions and nits left.
time.Sleep(10 * time.Second)
_, err = kClient.CreateSecret(secret, namespace)
gomega.Ω(err).NotTo(HaveOccurred())
time.Sleep(10 * time.Second)
Could you use Eventually in this case? time.Sleep can lead to flaky tests.
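For illustration, a rough sketch of how Eventually could replace the fixed sleep, assuming a kClient.GetSecret helper like the one proposed later in this thread (timeout, interval, and variable names are assumptions):

// Sketch only: poll until the secret shows up instead of sleeping a fixed time.
gomega.Eventually(func() error {
	_, err := kClient.GetSecret(namespace, secretName)
	return err
}, 30*time.Second, 2*time.Second).Should(gomega.Succeed())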
Or could we further optimize it by using the TokenRequest API instead of manually creating the secret and waiting until it populates?
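For reference, a rough sketch of the TokenRequest approach; serviceAccountName and the surrounding variables are assumptions, not the PR's code:

// Request a short-lived token for the service account directly,
// avoiding the token secret and the wait for it to populate.
expiration := int64(3600) // 1 hour
tokenRequest := &authenticationv1.TokenRequest{
	Spec: authenticationv1.TokenRequestSpec{
		ExpirationSeconds: &expiration,
	},
}
resp, err := clientset.CoreV1().ServiceAccounts(namespace).
	CreateToken(context.TODO(), serviceAccountName, tokenRequest, metav1.CreateOptions{})
gomega.Ω(err).NotTo(gomega.HaveOccurred())
token := resp.Status.Token // use as the bearer token for the test user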
+1, fixed sleeps are dangerous and can be time-consuming. Avoid them if possible. Add a helper poll method which repeatedly checks whether the secret exists.
Updated the code with requested changes.
kubeconfigPath := filepath.Join(os.TempDir(), "test-user-config")
err = k8s.WriteConfigToFile(newConf, kubeconfigPath)
gomega.Ω(err).NotTo(gomega.HaveOccurred())
config, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath)
gomega.Ω(err).NotTo(HaveOccurred())
clientset, err = kubernetes.NewForConfig(config)
Could we simplify these six lines to:
clientset, err = kubernetes.NewForConfig(newConf)
Tried the suggested solution but facing an issue; it's not working with this test case.
Some more comments from me as well.
func WriteConfigToFile(config *rest.Config, kubeconfigPath string) error {
	// Build the kubeconfig API object from the rest.Config
	kubeConfig := &clientcmdapi.Config{
		Clusters: map[string]*clientcmdapi.Cluster{
			"default-cluster": {
				Server:                   config.Host,
				CertificateAuthorityData: config.CAData,
				InsecureSkipTLSVerify:    config.Insecure,
			},
		},
		AuthInfos: map[string]*clientcmdapi.AuthInfo{
			"default-auth": {
				Token: config.BearerToken,
			},
		},
		Contexts: map[string]*clientcmdapi.Context{
			"default-context": {
				Cluster:  "default-cluster",
				AuthInfo: "default-auth",
			},
		},
		CurrentContext: "default-context",
	}

	// Ensure the directory where the file is being written exists
	err := os.MkdirAll(filepath.Dir(kubeconfigPath), os.ModePerm)
	if err != nil {
		return fmt.Errorf("failed to create directory for kubeconfig file: %v", err)
	}

	// Write the kubeconfig to the specified file
	err = clientcmd.WriteToFile(*kubeConfig, kubeconfigPath)
	if err != nil {
		return fmt.Errorf("failed to write kubeconfig to file: %v", err)
	}

	return nil
}
Do we really need this? I think we can get away without writing stuff to files.
We need a temp file to write the kubeconfig to, and then we reload the kubeconfig from that temp file to apply the changes to the Kubernetes clientset.
Tried it without writing the file, copying the existing kubeconfig into newConf with the bearer token value set, but that approach didn't work well. That's why we needed to write the kubeconfig to a temp file and reload it for the k8s clientset.
}

// Wait before retrying
time.Sleep(2 * time.Second)
This retry loop & sleep should definitely not be here. A getter function should only be about retrieving something, whenever possible. If we have to wait for a value to show up, let's create a separate helper method for it.
Updated the code with requested changes.
You don't need to code the retry. We can take advantage of existing k8s functions.
To be consistent with existing code, let's use the following:
func (k *KubeCtl) GetSecretValue(namespace, secretName, key string) (string, error) {
secret, err := k.GetSecret(namespace, secretName)
if err != nil {
return "", err
}
// Check if the key exists in the secret
value, ok := secret.Data[key]
if !ok {
return "", fmt.Errorf("key %s not found in secret %s", key, secretName)
}
return string(value), nil
}
func (k *KubeCtl) GetSecret(namespace, secretName string) (*v1.Secret, error) {
secret, err := k.clientSet.CoreV1().Secrets(namespace).Get(context.TODO(), secretName, metav1.GetOptions{})
if err != nil {
return nil, err
}
return secret, nil
}
func (k *KubeCtl) WaitForSecret(namespace, secretName string, timeout time.Duration) error {
	var cond wait.ConditionFunc = func() (done bool, err error) {
		secret, err := k.GetSecret(namespace, secretName)
		if err != nil {
			if errors.IsNotFound(err) {
				return false, nil // secret not there yet, keep polling
			}
			return false, err
		}
		return secret != nil, nil
	}
	return wait.PollUntilContextTimeout(context.TODO(), time.Second, timeout, false, cond.WithContext())
}
First, you call kClient.WaitForSecret() to indicate that you're waiting for a secret to appear over the API. Then call kClient.GetSecretValue(). By having these 3 methods, all of them are usable independently of one another.
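A possible usage sketch in the test, combining the two calls (the timeout, secret key, and variable names are assumptions):

// Wait for the token secret to appear, then read its token value.
err = kClient.WaitForSecret(namespace, secretName, 60*time.Second)
gomega.Ω(err).NotTo(gomega.HaveOccurred())
token, err := kClient.GetSecretValue(namespace, secretName, "token")
gomega.Ω(err).NotTo(gomega.HaveOccurred())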
if !ok {
	return nil, fmt.Errorf("user info not found in pod annotation")
}
var userInfoObj interface{}
We know the type of this object. It's si.UserGroupInformation; let's use that.
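For illustration, a sketch of decoding the annotation into the concrete type; the userInfoAnnotation variable name is an assumption:

// Unmarshal the pod annotation directly into the known type
// instead of a generic interface{}.
var userInfo si.UserGroupInformation
if err := json.Unmarshal([]byte(userInfoAnnotation), &userInfo); err != nil {
	return nil, fmt.Errorf("failed to parse user info from pod annotation: %v", err)
}
return &userInfo, nil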
Updated the code with requested changes.
Some more comments
LGTM
This PR adds an end-to-end (E2E) test that verifies pod deployment and user info using a non-kube-admin user. The test ensures that RBAC roles and permissions for non-admin users are correctly configured. Specifically, it verifies that user information can be retrieved through pod annotations.

What type of PR is it?
Todos
What is the Jira issue?
https://issues.apache.org/jira/browse/YUNIKORN-1977
How should this be tested?
Screenshots (if appropriate)
Questions: