Split edpm into nodeset and deployment stages #197
Conversation
Force-pushed from 7194363 to 23641e6.
automation/vars/uni01alpha.yaml (outdated)
I think we'll also have to modify this file to use the split data plane.
Apparently, it is fine, but how can I test this?
I guess I'm confused, because I thought we wanted to break the dataplane stages into two stages, such that `OpenStackDataPlaneNodeSet` can reach `Ready` state in the first stage and then `OpenStackDataPlaneDeployment` reaches `Ready` in the second stage (which we check for via the separate stages' `wait_conditions`, i.e. `oc wait osdpns openstack-edpm -n openstack --for condition=Ready --timeout=1200s` and then `oc wait osdpd edpm-deployment -n openstack --for condition=Ready --timeout=1200s`). If we don't separate these into two different stages, won't we still be applying `OpenStackDataPlaneNodeSet` and `OpenStackDataPlaneDeployment` at the same time?
Yes, you are right! We'll also need to update the various example data plane
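To make the split concrete, here is a rough sketch of what the two stages could look like in an automation vars file. The stage layout (`path` plus `wait_conditions` per stage) is an assumption based on the general shape of the files under `automation/vars/`, not the final structure from this PR:

```yaml
# Hypothetical two-stage split: the NodeSet stage must reach Ready before
# the Deployment stage is applied. Names and paths are illustrative only.
stages:
  - name: dataplane-nodeset
    path: examples/va/hci/edpm-pre-ceph/nodeset      # assumed path
    wait_conditions:
      - "oc wait osdpns openstack-edpm -n openstack --for condition=Ready --timeout=1200s"
  - name: dataplane-deployment
    path: examples/va/hci/edpm-pre-ceph/deployment   # assumed path
    wait_conditions:
      - "oc wait osdpd edpm-deployment -n openstack --for condition=Ready --timeout=1200s"
```

This way each stage's wait condition gates the next apply, instead of both CRs being applied at once.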
Force-pushed from 7e77fad to ffb6e43.
```
oc wait osdpns openstack-edpm --for condition=SetupReady --timeout=600s
```
Create the nodeset CR
s/nodeset/deployment
I think it would be better to rewrite this using the `Component` pattern like `va/hci/edpm-pre-ceph/kustomization.yaml`, so that we're not duplicating `replacements` logic here and in `va/hci/edpm-post-ceph/deployment/kustomization.yaml` and `va/hci/edpm-post-ceph/nodeset/kustomization.yaml`. I know there's also a `components` include here of `- ../../../lib/control-plane`, but perhaps it would be okay to place it in `va/hci/edpm-post-ceph/nodeset/kustomization.yaml`? Or maybe we need a separate stage altogether, something like `va/hci/control-plane-post-ceph`, to house the control plane update (and thus the EDPM post-Ceph stage would really become three stages: `control-plane`, `nodeset` and `deployment`)?
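As an illustration of the suggestion, a minimal sketch of a shared kustomize Component holding the `replacements`; the component path, source object and field paths here are hypothetical, not copied from this PR:

```yaml
# lib/dataplane-replacements/kustomization.yaml (hypothetical shared component)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
replacements:
  - source:
      kind: ConfigMap
      name: network-values            # assumed source of the shared values
      fieldPath: data.dns_server
    targets:
      - select:
          kind: OpenStackDataPlaneNodeSet
        fieldPaths:
          - spec.nodeTemplate.ansible.ansibleVars.dns_server   # assumed target field
```

Both `va/hci/edpm-post-ceph/nodeset/kustomization.yaml` and `va/hci/edpm-post-ceph/deployment/kustomization.yaml` could then pull it in instead of repeating the logic:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
components:
  - ../../../lib/dataplane-replacements   # hypothetical path
```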
The duplicated `replacements` was a leftover from my previous attempt.
Force-pushed from de4eb31 to 73aa700.
From my side, I'm ready. I want to get some reviews from the team, but openstack-k8s-operators/ci-framework#1547 works. Since we'll break current downstream jobs if they inject any hook or kustomization (usually at stage_2, which would now be stage_3), I stacked a second patch in that PR to allow using a "better" name, built on the
+2
Overall I'm fine with this change.
Before we merge, though, someone should ensure that a full ci-framework reproducer run with the updated `automation/vars/default.yaml` completes successfully (we don't yet have fully automated gating). Testing va1 and uni01alpha seems prudent. Let's make sure someone from the ci-team agrees before merging.
With VA1 we usually get:
```
├── dataplane-post-ceph.yaml
├── dataplane-pre-ceph.yaml
```
After this PR merges we'll get:
```
├── dataplane-deployment-pre-ceph.yaml
├── dataplane-nodeset-pre-ceph.yaml
├── deployment-post-ceph.yaml
└── nodeset-post-ceph.yaml
```
I built both and then compared the new pre-ceph output from this branch against the old:
```
cd new
(cat dataplane-deployment-pre-ceph.yaml ; echo "---"; cat dataplane-nodeset-pre-ceph.yaml) > pre.yaml
cd ..
diff -u old/dataplane-pre-ceph.yaml new/pre.yaml
```
It looked fine (i.e. no meaningful difference).
I also compared the new post-ceph output from this branch against the main branch (old): the `diff -u old/dataplane-post-ceph.yaml new/nodeset-post-ceph.yaml` output is the same as `deployment-post-ceph.yaml`, which is how it should be.
Nit: I didn't like the name `dataplane-post-ceph.yaml` in VA1, since it also included an update to the control plane. Similarly, `nodeset-post-ceph.yaml` has a nodeset but also an update to the control plane CR, so I think calling it `post-ceph.yaml` would be better. We don't have to block this PR on that, however.
uni01alpha testing results
Force-pushed from 6330fef to 1c87769.
Signed-off-by: Fabricio Aguiar <[email protected]>
Is this PR (197) being done in response to this change to the dataplane operator: openstack-k8s-operators/dataplane-operator#839? I think @abays added his most recent change to this patch (197) because he is still getting stuck on the check introduced by PR 839 above.
That is fixed by openstack-k8s-operators/dataplane-operator#859.
/lgtm