
DNS Resolve issue in Google Container Engine #8

Open
nickveenhof opened this issue Mar 12, 2017 · 8 comments
nickveenhof commented Mar 12, 2017
See kubernetes/kubernetes#40180 for details. It looks like the example in this GitHub repository isn't working as expected, but it is related to a bug somewhere in Kubernetes and/or Google Cloud.


simt2 commented Mar 13, 2017

I just want to note that mounting NFS volumes via a server that is referenced as an ExternalName Service also fails for me in a Kubernetes cluster installed on premises.

rimusz (Owner) commented Mar 13, 2017

Thanks, guys, for reporting, but this is a Kubernetes bug, not one in this project :)

Let's keep this issue open, though.

nickveenhof (Author) commented:

True. Curious whether we could adapt the example in this repository so that it works on Google Container Engine?

rimusz (Owner) commented Mar 13, 2017

It used to work. I'm not using Gluster myself; this one was just an example.

Locus99 commented Jun 30, 2017

I have gotten glusterfs to work on GCE in a Kubernetes cluster. To do this, I had to do two things:

  1. Add a storageClassName field to the spec section of both the pv and pvc yaml files. If you don't do this (as far as I can tell), you will actually bind to a different PV provisioned by Kubernetes instead. In other words, everything will appear to work just fine, but you are not actually using the gluster mount. The actual value of storageClassName doesn't matter, but it must match between the two yaml files.
  2. Switch from hostnames to IP addresses in the volume create command. This is a fairly trivial change in the create_volume.sh file:
    CLUSTER=$CLUSTER${SPACE}${STATIC_IP[$i-1]}:/data/brick1/${VOLUME}
    It is more complicated in the create_cluster.sh script, because "gluster peer probe ${SERVER}-${i}" is not sufficient: you also need to peer probe the IP address. I did this manually, since peer probing with an IP address on the same host fails (gluster doesn't allow localhost), so I had to log into a different host to probe that IP. If I update the script at a later date to address this, I will post it here.

Note that none of these changes are required for glusterfs to work outside a Kubernetes cluster, but they are necessary at this time to work with Kubernetes (at least on GCE).
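Step 1 above can be sketched roughly as below. This is a minimal illustration, not the actual yaml from this repository: the names (gluster-pv, gluster-claim, glusterfs-cluster, the "manual" class, the gv0 path, and the sizes) are all placeholder assumptions. The only point being demonstrated is the matching storageClassName in both specs.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  storageClassName: manual        # arbitrary value, but must match the PVC below
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster  # assumed name of the gluster endpoints object
    path: gv0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  storageClassName: manual        # same value as in the PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

With matching (even arbitrary) class names, the claim can only bind to this PV, rather than silently being satisfied by a dynamically provisioned volume.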
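Step 2 can be sketched as the loop below, which builds the brick list from IP addresses instead of hostnames. This is a standalone illustration under assumptions: the STATIC_IP values and the VOLUME name are made-up examples, and how the repo's create_volume.sh actually populates its variables may differ.

```shell
#!/usr/bin/env bash
# Sketch: build the brick list for "gluster volume create" from static IPs
# instead of hostnames. STATIC_IP and VOLUME are illustrative placeholders.
set -euo pipefail

STATIC_IP=(10.132.0.2 10.132.0.3 10.132.0.4)   # example addresses
VOLUME=gv0
CLUSTER=""

for i in "${!STATIC_IP[@]}"; do
  # Each brick is <ip>:/data/brick1/<volume>, matching the repo's path layout
  CLUSTER="${CLUSTER} ${STATIC_IP[$i]}:/data/brick1/${VOLUME}"
done

# The resulting string is what would be passed to something like:
#   gluster volume create ${VOLUME} replica 3 ${CLUSTER}
echo "${CLUSTER}"
```

Note that each node must also be peer-probed by IP ("gluster peer probe <ip>") from a different host before the volume create will accept these bricks, as described above.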

rimusz (Owner) commented Jun 30, 2017

thanks @Locus99 for the info 👍

poor-bob commented:
Would be very exciting to see this :) 👍

rimusz (Owner) commented Aug 15, 2017
