How to create a PersistentVolume (ReadWriteOnce) on the node that runs the workflow? (Kubernetes backend) #3068
-
Hi. I am running Woodpecker on a small k3s cluster. The workflows run correctly; however, they always run on the same node (I have two nodes).
Pending pod description:
From the PersistentVolume:
Good node description:
My steps:

```yaml
coucou:
  image: busybox:musl
  commands:
    - echo coucou
  backend_options:
    kubernetes:
      resources:
        requests:
          memory: 100Mi
          cpu: 100m
      nodeSelector:
        merde: merde
```
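For a `nodeSelector` like this to match anything, one of the nodes has to carry the same label. A minimal sketch of what that looks like (the `merde: merde` label is the placeholder from the step above; the node name `renegade-one` is taken from a later reply in this thread):

```yaml
# Add the label to the target node first, e.g.:
#   kubectl label node renegade-one merde=merde
# The node object then carries it alongside the standard labels:
apiVersion: v1
kind: Node
metadata:
  name: renegade-one
  labels:
    kubernetes.io/hostname: renegade-one
    merde: merde
```

If no node has the label, the step pod stays Pending with a "didn't match Pod's node affinity/selector" scheduling event.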
Replies: 6 comments
-
That's on me. There is a nodeSelector on the agent pods (I was testing things); that's why they are all on the same node. I'll change that and retry running a workflow.
-
Here are the files, with one agent on each node. I didn't edit the node names this time: the "good" node is "renegade-one" and the other one is "nanopi.lan". The secret has been redacted.
-
That also seems logical. The local volume was created on one node, and you created the pod on another; a pod cannot mount a local volume from the first node.
And you have two agents, so it looks like an issue with server-to-agent scheduling.
Maybe related: Kubernetes: smart(er) agent affinity. You are supposed to be able to disable agents, but the agent option "stop agent from taking new tasks" is not working. I just delete the unnecessary agent, run the pipeline via the second one, and then restart the first. In your case it is probably simpler to use one agent and hand scheduling over to Kubernetes.
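The single-agent setup suggested above could look roughly like this. This is a sketch, not the official Helm chart; `WOODPECKER_MAX_WORKFLOWS` is the agent setting that controls how many workflows run in parallel, and the version tag matches the one mentioned in this thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: woodpecker-agent
spec:
  replicas: 1          # one agent; the Kubernetes scheduler places the step pods
  selector:
    matchLabels:
      app: woodpecker-agent
  template:
    metadata:
      labels:
        app: woodpecker-agent
    spec:
      # Note: no nodeSelector here, so the agent itself can land on any node.
      containers:
        - name: agent
          image: woodpeckerci/woodpecker-agent:v2.2.2
          env:
            - name: WOODPECKER_BACKEND
              value: kubernetes
            - name: WOODPECKER_MAX_WORKFLOWS
              value: "4"
```

With a single agent, which node a workflow lands on is decided purely by the Kubernetes scheduler (and any volume affinity), not by server-to-agent dispatch.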
K3s v1.29.0+k3s1, 3 nodes
Woodpecker 2.2.2, 1 server, 1 agent with max workflows 4
I ran the pipeline four times and the workflows were spawned across three nodes.
At the start of a pipeline, the workspace volume is created, and the repository is cloned into it. The volume is then mounted into each step as well.
The volume is created on the node where the first pod runs (binding). The remaining pods are scheduled onto the same node because of the node affinity that LocalPathProvisioner sets.
So it seems to work as intended.
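The node pinning described above is visible on the PersistentVolume that the local-path provisioner creates: it carries a `nodeAffinity` on the node's hostname, so every pod that mounts the volume is forced onto that node. Roughly (the volume name and path below are illustrative, not taken from this cluster):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example                                  # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  hostPath:
    path: /var/lib/rancher/k3s/storage/pvc-example   # illustrative path
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - renegade-one                       # node where the first pod ran
```

This is why all steps of one workflow stay on one node, while separate workflow runs can still land on different nodes.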