Parallel SR-IOV configuration #427
Conversation
if err := dn.applyDrainRequired(); err != nil {
    return err
}
return nil
Here we return and do not continue to the OpenShift part. It seems that this can change the logic, no?
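A minimal sketch of what this comment seems to be asking for: handle the drain request and then fall through to the OpenShift-specific handling instead of returning early. The fall-through comment is an assumption about the surrounding function, not code from this PR.

if err := dn.applyDrainRequired(); err != nil {
    return err
}
// Assumption: fall through here rather than returning, so the
// OpenShift-specific (MCP) handling later in the function still runs.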
@e0ne will you be updating/adding unit tests for this PR?
controllers/drain_controller.go (outdated)
if err := dr.Update(ctx, &node); err != nil {
    return reconcile.Result{}, err
}
drainingNodes++
Should we wait for the annotation to be reflected back in the cache? If, for some reason, the list returns items in a different order, we may end up with more nodes draining than needed. That said, I'm not sure this can happen (that is, whether listing from the cache can yield the node list in a different order). If the order is the same, the same nodes will be selected.
Since we're listening for the nodes, we shouldn't face a caching issue.
But we can: you might patch one node while another node triggers the reconcile, and the new state of the first node might not have propagated to the cache yet.
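One hedged way to sidestep a stale informer cache would be to count draining nodes through a non-cached reader. This is a sketch under that assumption, not what the PR does; dr.apiReader is a hypothetical field holding the client.Reader returned by mgr.GetAPIReader() at controller setup.

// Sketch (assumption): dr.apiReader is a non-cached client.Reader obtained
// from mgr.GetAPIReader(); listing through it reflects the API server
// rather than a possibly stale informer cache.
// import corev1 "k8s.io/api/core/v1"
nodeList := &corev1.NodeList{}
if err := dr.apiReader.List(ctx, nodeList); err != nil {
    return reconcile.Result{}, err
}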
controllers/drain_controller.go (outdated)
if utils.NodeHasAnnotation(node, "sriovnetwork.openshift.io/state", "Drain_Required") {
    if drainingNodes < config.Spec.MaxParallelNodeConfiguration {
        node.Annotations["sriovnetwork.openshift.io/state"] = "Draining"
        if err := dr.Update(ctx, &node); err != nil {
Add an info log that this node is now set to Draining?
done
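For reference, the requested log line could look roughly like this; logger and variable names follow the surrounding diff, and the exact message is an assumption.

reqLogger.Info("Annotating node with state Draining", "node", node.Name)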
Thanks for the PR. I left some comments; we are getting close.
controllers/drain_controller.go (outdated)
// reconciliation count
qHandler := func(q workqueue.RateLimitingInterface) {
    q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
        Namespace: "drain-reconcile-namespace",
Should the namespace be the sriov namespace?
I used a hard-coded namespace here because it's not related to objects in the operator namespace.
But do we need the namespace to exist?
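A sketch of the suggested alternative: enqueue the synthetic request in the operator's namespace so later lookups using the request's namespace resolve against a namespace that exists. The namespace variable and the request name are assumptions.

// Sketch: use the operator namespace instead of a hard-coded one.
qHandler := func(q workqueue.RateLimitingInterface) {
    q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
        Namespace: namespace,                 // operator namespace (assumption)
        Name:      "drain-reconcile-request", // hypothetical request name
    }})
}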
/test-e2e-nvidia-all
controllers/drain_controller.go (outdated)
}

// Watch for spec and annotation changes
nodePredicates := builder.WithPredicates(predicate.AnnotationChangedPredicate{})
Can we limit the watch here to nodes that have a certain annotation? For nodes that don't have the node drain annotation we don't need to do anything; perhaps it's possible to filter them out at an earlier stage and not trigger a reconcile when an annotation changes on those nodes.
+1
Sounds reasonable. Let me implement my own predicate for this (see the sketch below).
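A minimal sketch of such a predicate, assuming the "sriovnetwork.openshift.io/state" annotation key used elsewhere in this PR; events for nodes without the annotation never enqueue a reconcile. builder.WithPredicates requires all listed predicates to pass.

// Sketch: filter watch events so only nodes carrying the drain state
// annotation trigger a reconcile.
// import (
//     "sigs.k8s.io/controller-runtime/pkg/builder"
//     "sigs.k8s.io/controller-runtime/pkg/client"
//     "sigs.k8s.io/controller-runtime/pkg/predicate"
// )
hasDrainState := predicate.NewPredicateFuncs(func(o client.Object) bool {
    _, ok := o.GetAnnotations()["sriovnetwork.openshift.io/state"]
    return ok
})
nodePredicates := builder.WithPredicates(predicate.AnnotationChangedPredicate{}, hasDrainState)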
config := &sriovnetworkv1.SriovOperatorConfig{}
err := dr.Get(ctx, types.NamespacedName{
    Name: constants.DefaultConfigName, Namespace: namespace}, config)
This will fail when you enqueue a reconcile request with the "drain-reconcile-namespace" namespace. Perhaps you need to identify this type of request first, or enqueue with the sriov operator namespace so the namespace field in the request is correct.
done
controllers/drain_controller.go (outdated)
}

reqLogger.Info("Count of draining", "drainingNodes", drainingNodes)
if config.Spec.MaxParallelNodeConfiguration != 0 && drainingNodes == config.Spec.MaxParallelNodeConfiguration {
Nit: drainingNodes >= config.Spec.MaxParallelNodeConfiguration. Think of a case where we allow 5 nodes to drain and then reduce the maximum to 3 mid-upgrade; in that case we want to return as well.
done
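The corrected guard, per the nit above; the body of the branch is an assumption about the surrounding reconcile code.

// >= also covers a max that was lowered while more nodes were already draining.
if config.Spec.MaxParallelNodeConfiguration != 0 && drainingNodes >= config.Spec.MaxParallelNodeConfiguration {
    reqLogger.Info("MaxParallelNodeConfiguration reached", "drainingNodes", drainingNodes)
    return reconcile.Result{}, nil // assumption: stop annotating further nodes this pass
}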
@@ -756,55 +734,14 @@ func (dn *Daemon) getNodeMachinePool() error {
    return fmt.Errorf("getNodeMachinePool(): Failed to find the MCP of the node")
}

func (dn *Daemon) getDrainLock(ctx context.Context, done chan bool) {
good riddance !! :)
        },
    },
})
return nil
}

func (dn *Daemon) pauseMCP() error {
The next step after this is to move the pauseMCP logic to the controller.
PR going in the right direction; I think we are close :) Added a few comments.
/test-e2e-nvidia-all
/test-all
/test-all
/test-all
/test-e2e-nvidia-all
/test-e2e-nvidia-all