Fix context issue during cleanup of kind clusters #6771
base: main
Conversation
Signed-off-by: Pulkit Jain <[email protected]>
creationTimestamp=$(kubectl get nodes --context kind-$kind_cluster_name -o json -l node-role.kubernetes.io/control-plane | \
for context in $(kubectl config get-contexts -o name | grep 'kind-'); do
  cluster_name=$(echo $context | sed 's/^kind-//')
  if docker ps --format '{{.Names}}' | grep -q "$cluster_name"; then
Does this list all containers for the given cluster name: docker ps --format '{{.Names}}' | grep -q "$cluster_name"?
docker ps --format '{{.Names}}' | grep '$cluster_name'
if docker ps --format '{{.Names}}' | grep -q "$cluster_name"; then
if docker ps --format '{{.Names}}' | grep '$cluster_name'; then
We don't need to list the cluster names; we just want to check whether they are present or not, so -q is required here.
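A minimal sketch of the difference, using a hypothetical cluster name my-cluster:

    # -q suppresses grep's output and only sets the exit status,
    # which is all the if statement needs for a presence check
    if docker ps --format '{{.Names}}' | grep -q "my-cluster"; then
      echo "containers for my-cluster are running"
    fi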
creationTimestamp=$(kubectl get nodes --context kind-$kind_cluster_name -o json -l node-role.kubernetes.io/control-plane | \
for context in $(kubectl config get-contexts -o name | grep 'kind-'); do
  cluster_name=$(echo $context | sed 's/^kind-//')
  if docker ps --format '{{.Names}}' | grep -q "$cluster_name"; then
Why are we relying on this docker command instead of kind get clusters?
And kind get nodes can also be used to list all node names.
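For example, assuming a cluster named my-cluster exists (the name is hypothetical):

    # list the node container names of one kind cluster
    kind get nodes --name my-cluster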
We are not relying on kind get clusters because it also lists clusters whose creation is still in progress or did not complete successfully. That creates a problem: when we try to get the contexts of such half-created clusters, the command returns an error.
We can hit this error with aborted jobs and with multiple jobs running at the same time.
In the aborted-job case, suppose we abort the job right after cluster creation starts. kind get clusters will still show the cluster from the aborted job, but since its context is not available, the cleanup will panic and the job will fail.
In the multiple-jobs case, suppose two jobs are running in parallel: one job has just triggered the cluster creation step while the other job triggers the cleanup function. The cleanup will list the cluster that is not yet created, try to fetch its context, and the job will fail because of the panic.
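A minimal sketch of the failure mode described above, with a hypothetical half-created cluster named half-created (the exact error text may vary):

    # the cluster already shows up in kind's listing...
    kind get clusters
    # half-created

    # ...but its kubeconfig context does not exist yet, so this fails:
    kubectl get nodes --context kind-half-created
    # error: context "kind-half-created" does not exist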
Sometimes the testbed also becomes unhealthy because of these stale clusters left in the environment.
@jainpulkit22 Thanks for the explanation. My concern is whether docker ps --format '{{.Names}}' | grep -q "$cluster_name" is a sufficient basis to determine if a cluster is ready. Is there something like a status.conditions field in the context that would allow us to accurately determine if the cluster is ready?
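For reference, one possible readiness check is to read the Ready condition on the nodes themselves rather than relying on container names; a sketch, assuming a hypothetical context kind-my-cluster exists:

    # print each node name followed by the status of its Ready condition
    kubectl get nodes --context kind-my-cluster \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'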
I was trying to point out that at this stage we are already iterating over known contexts (for context in $(kubectl config get-contexts -o name | grep 'kind-')), so we know that the context exists. It is not the same as the previous solution, which was first calling kind get clusters, and then calling kubectl config get-contexts.
In your comment and in the original issue, you talk about a "panic", but it's not clear to me what failure you are referring to exactly.
Note that even with your proposed solution, races seem possible:
- job 1 calls kubectl config get-contexts and gets context1
- job 2 calls kubectl config get-contexts and gets context1
- job 1 calls docker ps and finds the container it is looking for
- job 2 calls docker ps and finds the container it is looking for
- job 1 calls kubectl get nodes --context context1 ... successfully
- job 1 deletes the kind cluster successfully
- job 2 calls kubectl get nodes --context context1 ... but context1 does not exist anymore!
Either the code should be best effort and tolerate failures, or maybe you should use a lock (see flock) for correctness, and then you don't need to worry about concurrent executions and you can use the most appropriate commands for the task.
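A minimal sketch of the flock approach, assuming a hypothetical lock-file path /tmp/kind-cleanup.lock:

    # serialize cleanup runs across jobs on the same host
    exec 9>/tmp/kind-cleanup.lock   # fd 9 is an arbitrary file descriptor
    flock -x 9                      # block until the exclusive lock is acquired
    for cluster in $(kind get clusters); do
      kind delete cluster --name "$cluster"
    done
    # the lock is released when fd 9 is closed, e.g. when the script exits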
Fix context issue during cleanup of kind clusters.
Fixes #6768.