Directory permission issue when using DaemonSet and PMEM-CSI on OpenShift 4.6.9 #912
This smells like an issue in the container runtime, potentially related to SELinux. Can you reproduce it with SELinux disabled? Can you reproduce it when replacing PMEM-CSI with some other CSI driver, for example https://github.com/kubernetes-csi/csi-driver-host-path?
I tried to reproduce this on our QEMU cluster, but without success: it worked.
Here are the objects that I used. Local volume (same as in description):
Daemonset:
Pod:
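The YAML for these objects did not survive the copy. As a rough sketch of what a local PV and matching PVC of this shape could look like (names, capacity, and the node-affinity key are assumptions; the path matches the one from the issue description):

```yaml
# Hypothetical reconstruction -- names and sizes are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  storageClassName: local-storage
  local:
    path: /tmp/memverge
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: Exists
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
```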
Does it perhaps matter where volumes are mounted inside the containers? I would avoid mounting volumes on top of each other, if it can be avoided. I don't have a particular reason, it just seems unnecessarily complicated.
For hostpath, distributed provisioning from v1.6.0 would be needed to get all pods of the DaemonSet running. But it looks like I broke CSI ephemeral volume support in that driver when adding capacity simulation in that release. Somehow that didn't show up in tests... because CSI ephemeral volume support is not tested with that driver. Will fix both. To use hostpath:
If you don't want to build yourself, you can also use the image that I pushed and directly deploy. Then use this DaemonSet:
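The DaemonSet YAML itself was not captured here. A minimal sketch of the shape being discussed, a DaemonSet consuming a CSI ephemeral volume from the hostpath driver (image, mount path, and the `size` attribute are assumptions):

```yaml
# Hypothetical sketch -- image, paths, and attributes are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-ephemeral-test
spec:
  selector:
    matchLabels:
      app: csi-ephemeral-test
  template:
    metadata:
      labels:
        app: csi-ephemeral-test
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: scratch
          mountPath: /data          # hypothetical mount path
      volumes:
      - name: scratch
        csi:
          driver: hostpath.csi.k8s.io
          volumeAttributes:
            size: 1Gi               # attribute name is an assumption
```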
Thanks for the information. I forgot to mention that the issue was found in the OpenShift environment. I'm not sure if this is caused by OpenShift. I will try not mounting on the same path.
@Tianyang-Zhang Did using different paths help?
Sorry about the late update. I tried using a different path (
Can you reproduce it with the CSI hostpath driver instead of PMEM-CSI? v1.6.2 should work out of the box, i.e. no image building needed. If yes, then this is something that can be reported to Red Hat.
When I tried to create your daemonSet example, I got this error:
Should I build an image from source?
How did you install the CSI hostpath driver? When you install via https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/deploy/kubernetes-distributed/deploy.sh, then it should install a CSIDriver object from https://github.com/kubernetes-csi/csi-driver-host-path/blob/master/deploy/kubernetes-distributed/hostpath/csi-hostpath-driverinfo.yaml
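For context, the CSIDriver object from that file looks roughly like the following; the key point is that `Ephemeral` must be listed in `volumeLifecycleModes` for CSI ephemeral volumes to be admitted (see the linked file for the authoritative version):

```yaml
# Approximate shape of the CSIDriver object -- check the linked
# csi-hostpath-driverinfo.yaml for the authoritative contents.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.k8s.io
spec:
  volumeLifecycleModes:
  - Persistent
  - Ephemeral
```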
I rechecked the whole cluster system and found SELinux had been re-enabled. The issue is gone after disabling SELinux. Sorry about the confusion and the extra time you spent!
But the solution can't be "disable SELinux", right? It might require some extra work, but ideally it should also work with SELinux enabled - whatever "it" is that was failing. |
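One SELinux-friendly direction (not verified in this thread, offered only as a sketch) is to run every pod that shares the volume with the same SELinux MCS level, so the runtime's relabeling of the shared volume for one pod does not lock out the others. The level value and claim name below are assumptions:

```yaml
# Hypothetical sketch -- the MCS level and claim name are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"   # use the same level in every pod sharing the volume
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: shared
      mountPath: /tmp/memverge
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: local-pvc    # hypothetical claim name
```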
You are right. It might be related to how
FYI, we also reproduced this issue without using any CSI driver on a Diamanti cluster (k8s). Disabling SELinux also fixed it there.
I created a local PV and PVC with a local storage class (no provisioner) and `ReadWriteMany` access mode for storage sharing between pods:

Then I created a daemonSet mounting this volume (path `/tmp/memverge`). This daemonSet uses PMEM-CSI to provision PMEM via a CSI ephemeral volume (I'm using OpenShift 4.6 and generic ephemeral volumes are somehow not supported). Everything works fine and I can attach to my pods (say `pod A`) and access the mounted directory. But if I create another pod (say `pod B`, running on the same node as `pod A`) mounting the same local PV, I am no longer able to access `/tmp/memverge` in `pod A` and get an error:

The permissions in the container are correct:
If I create more pods mounting the same local PV, all of these pods work fine and I am able to access the mounted dir. But not `pod A`.
If I remove the CSI ephemeral volume part from the daemonSet and re-do everything, this issue is gone. The volume spec for PMEM-CSI is as follows:
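The exact spec was lost in the copy. A CSI ephemeral volume for PMEM-CSI generally looks something like this (the volume name, `fsType`, and `size` value here are assumptions):

```yaml
# Hypothetical sketch -- name, fsType, and size are assumptions.
volumes:
- name: pmem-volume
  csi:
    driver: pmem-csi.intel.com
    fsType: xfs
    volumeAttributes:
      size: 2Gi
```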
This issue seems to only happen when a `daemonSet` is involved. I haven't do