Does the ebs-csi-driver have an impact on the AZ chosen when EKS replaces a worker node? #2133
Comments
Hi @ensean - EBS volumes are a zonal resource, so any instance using a volume must be in the same zone as the volume (for more info, see the AWS docs for EBS volumes). The EBS CSI Driver automatically populates a CSI topology label on each node with the key `topology.ebs.csi.aws.com/zone`, which the Kubernetes scheduler uses to place pods only on nodes in the same zone as their volumes. Most node scalers, such as Cluster Autoscaler and Karpenter, are topology-aware and will launch a replacement node in the zone the pending pod requires.
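To illustrate how this topology constraint works in practice, here is a sketch of the relevant Kubernetes objects (names and the zone/volume values below are example placeholders, not output from a real cluster):

```yaml
# StorageClass using the EBS CSI driver. WaitForFirstConsumer delays
# volume creation until a pod is scheduled, so the volume is created
# in the zone of the node the pod actually lands on.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
# The resulting PersistentVolume pins itself to the volume's zone via
# nodeAffinity, so the scheduler will only place consuming pods on
# nodes carrying the matching topology label.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv            # example name
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # example EBS volume ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values: ["us-east-1a"]      # example zone
```

Because of the `nodeAffinity` above, a pod bound to this PV stays Pending until a node exists in `us-east-1a`, rather than being scheduled into the wrong zone and failing to mount.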
Hi @ConnorJC3, thanks a lot for your explanation. Suppose the AZ that the PV resides in fails: will the ebs-csi-driver try to copy the PV to another AZ through a snapshot?
No, the ebs-csi-driver will not try to copy a failed PV to another AZ through a snapshot. You may be able to write your own Kubernetes Operator which would do this.
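As a rough sketch of what such a custom operator could do, the core recovery step is: snapshot the volume (snapshots are regional, so they survive the loss of one AZ) and recreate it from the snapshot in a healthy AZ. All names below are illustrative, not part of the EBS CSI driver; the `ec2` client is injected and mirrors the real boto3 EC2 calls (`create_snapshot`, `create_volume`), so the logic can be exercised with a stub:

```python
# Hypothetical snapshot-and-recreate logic for a custom operator.
# In real use, `ec2` would be a boto3 EC2 client; here it is any
# object exposing EC2-style create_snapshot/create_volume methods.

def restore_volume_to_other_az(ec2, volume_id, failed_az, healthy_azs):
    """Snapshot `volume_id` and recreate it in a healthy AZ.

    Returns the new volume ID and the AZ it was created in.
    """
    # Pick the first healthy AZ that is not the failed one.
    target_az = next(az for az in healthy_azs if az != failed_az)

    # 1. Snapshot the existing volume; EBS snapshots are stored
    #    regionally, so they remain usable if the source AZ is down.
    snap = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"rescue of {volume_id}",
    )

    # 2. Create a replacement volume from the snapshot in the target AZ.
    new_vol = ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone=target_az,
    )

    # A real operator would then update the PersistentVolume spec
    # (volumeHandle and the topology.ebs.csi.aws.com/zone nodeAffinity)
    # to point at the new volume before pods can reschedule.
    return new_vol["VolumeId"], target_az
```

Note this is only the happy path: a production operator would also need to wait for the snapshot to complete, handle pending/failed snapshot states, and reconcile the PV and workload objects.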
Hello, we are planning to deploy a stateful application (ClickHouse) to EKS backed by the ebs-csi-driver.
But we are concerned that during worker node replacement (e.g. due to hardware failure), there is a chance that an instance is launched in a different AZ, causing the EBS volume mount to fail.
/triage support