Add explanation on how to get the server's short name and FQDN into the SANs of the server.crt for ETCD #844
@jenting please help with this. Thanks. Ref: kubeadm provides
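For reference, kubeadm's ClusterConfiguration does let you add extra SANs to the local etcd server certificate via `serverCertSANs` (and `peerCertSANs`). A sketch, with all hostnames hypothetical:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
      - "etcd-server1"                # hypothetical short name
      - "etcd-server1.example.com"    # hypothetical FQDN
    peerCertSANs:
      - "etcd-server1.example.com"
```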
Thanks @innobead for pointing this out 🙇 @Martin-Weiss I'm wondering whether the user would want to change the ETCD server SAN to an FQDN as a day-2 operation.
Hopefully not - servers should get static names (short, FQDN) and also static IP addresses. IMO any server rename should require a reinstall - similar to master remove, reinstall, re-add. It would be nice to support server rename and IP changes, but with all the dependencies we have at the moment I am not sure the effort is worth the value.
@Martin-Weiss Question: would it be possible for different etcd servers to have different FQDNs?
Current evaluation is that it's doable by
FQDN = the FQDN of the server (hostname -f). A single ETCD server would not have multiple FQDNs. IMO the etcd SAN needs to have the FQDN of the server ETCD is running on - so a different SAN on each ETCD host.
@Martin-Weiss etcd-server-1: FQDN=etcd-server1. Correct me if I'm wrong.
Yes. So the best would be to have the short and fully qualified domain name as SANs, and also the IP addresses of the host: etcd-server-1: SANs: DNS.1: etcd-server1.; DNS.2: etcd-server1; IP.1: IPv4; IP.2: IPv6
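To make that SAN layout concrete, here is a minimal sketch of generating a self-signed certificate with per-host SANs via openssl. All names and addresses (etcd-server1, example.com, 192.0.2.10, 2001:db8::10) are hypothetical placeholders, not values from this thread.

```shell
# Write an openssl config carrying the short name, FQDN, and both IPs as SANs
cat > etcd-san.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = v3_req
prompt = no
[dn]
CN = etcd-server1
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = etcd-server1
DNS.2 = etcd-server1.example.com
IP.1 = 192.0.2.10
IP.2 = 2001:db8::10
EOF

# Generate key + self-signed cert using those SANs
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -config etcd-san.cnf

# Verify the SANs made it into the certificate
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
```

In a real deployment the cert would be signed by the cluster's etcd CA rather than self-signed, but the SAN section is the same.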
I can't find a way for each ETCD server to have its own short name/FQDN as a SAN. All I could find so far is that all ETCD servers use the same short name/FQDN as SAN. So adding a blocked label.
Another option would be to use a wildcard certificate.
Ref to coreos/etcd-operator#901
@Martin-Weiss Could you elaborate more on your use cases? What commands did you execute? That would help us understand the field/customer use case/feedback. Thanks in advance.
Not sure what commands you are asking for - basically I just checked the certificates with openssl: openssl s_client -connect ... | openssl x509 -noout -text
I think the command would begin with what you mentioned in the description. For now, the scenario I can think of that needs the short name/FQDN in the SANs is accessing the ETCD server externally (not from inside the Kubernetes cluster), where the users would use the FQDN.
Ah - now I get the question! The problem is the Prometheus configuration for the scraping!
--> we should NEVER use IP addresses - we should use FQDNs instead. BUT using FQDNs with the certificates used for ETCD does not work, as the SANs are not correct / do not contain the FQDNs.
I ran into a problem: my cluster was bootstrapped by IP address and I'd like to access the ETCD servers by hostname, so my Prometheus scrape config would look like this:
However, because the etcd Pod does not have a Service, CoreDNS can't resolve the etcd server's record. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods
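The scrape config itself was not preserved in the thread; a sketch of what such a hostname-based etcd scrape config typically looks like is below. All hostnames, ports, and file paths are hypothetical.

```yaml
# Hypothetical Prometheus scrape config targeting etcd by hostname instead of IP
scrape_configs:
  - job_name: etcd
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/etcd/ca.crt      # etcd CA (assumed path)
      cert_file: /etc/prometheus/etcd/client.crt
      key_file: /etc/prometheus/etcd/client.key
    static_configs:
      - targets:
          - etcd-server1.example.com:2379
          - etcd-server2.example.com:2379
```

This setup fails on two fronts unless fixed: the hostnames must resolve from inside the cluster, and the serving cert's SANs must contain those FQDNs.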
Does anybody know how to solve this problem?
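One possible workaround for the DNS side (not confirmed in this thread) is a selector-less Service plus manually maintained Endpoints, so cluster DNS has a record to resolve even though the etcd Pods are static. A sketch, with all names and IPs hypothetical:

```yaml
# Selector-less Service: gives etcd a resolvable in-cluster DNS name
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: kube-system
spec:
  clusterIP: None        # headless: DNS returns the endpoint IPs directly
  ports:
    - port: 2379
---
# Manually managed Endpoints pointing at the static etcd hosts
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd
  namespace: kube-system
subsets:
  - addresses:
      - ip: 192.0.2.10   # hypothetical etcd host IP
    ports:
      - port: 2379
```

Note this yields a name like etcd.kube-system.svc.cluster.local, which would itself need to be in the cert SANs; it does not make the host's own FQDN resolvable.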
At the moment ETCD is using a server.crt which by default only contains the IP address of the master node it is running on.
In case we want to scrape or back up etc., it would be better if we could target etcd with the FQDN and/or short name of the master instead of using the IP address.
Could we add information to the documentation on how to do this?