Diagnostic tool to look for potential evaluation errors #1268
Conversation
@michaelbynum Just to put down in writing the comments I made at the working group meeting, suggest renaming
Thanks for putting this together, @michaelbynum. A few questions on the detection of exponent evaluation errors. Also, do you think that the _EvalErrorWalker should be part of, say, pyomo.util? I guess this PR does not preclude the possibility of "promoting" it eventually if there is a need or use case.
Done.
Working on tests now.
Would it be possible or make sense to include an option where bounds are treated as strict inequalities? I.e. I think some modelers would think that
Good point. I agree. Should it be an option, or should that just be the behavior?
I guess it is solver dependent, so it should probably be an option.
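For concreteness, a minimal sketch of the case under discussion (the model below is illustrative, not taken from this PR): with a bound of x >= 0, log(x) only fails exactly at the bound, so whether a warning is raised depends on whether the bound is treated as strict or non-strict.

```python
import pyomo.environ as pe

# Illustrative model: log(x) is undefined only at x == 0, i.e. exactly at the bound.
m = pe.ConcreteModel()
m.x = pe.Var(bounds=(0, None), initialize=1.0)
m.y = pe.Var(initialize=0.0)

# Treating the bound as strict (x > 0) would suppress a warning here; treating it
# as non-strict (x >= 0) flags a potential evaluation error, since solvers may
# still evaluate the expression at (or slightly past) the bound.
m.c = pe.Constraint(expr=m.y == pe.log(m.x))
```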
Codecov Report
Attention:
Additional details and impacted files

@@           Coverage Diff            @@
##            main    #1268     +/-  ##
=========================================
+ Coverage   77.30%   77.33%   +0.02%
=========================================
  Files         390      390
  Lines       63249    63367    +118
  Branches    11625    11648     +23
=========================================
+ Hits        48893    49003    +110
- Misses      11837    11843      +6
- Partials     2519     2521      +2

☔ View full report in Codecov by Sentry.
A couple of (hopefully) small requests from me. There are a few ways we could address the first comment, and we might want to discuss different options to work out the best way to do this.
idaes/core/util/model_diagnostics.py (Outdated)
@@ -1229,6 +1244,10 @@ def report_structural_issues(self, stream=None):
        warnings, next_steps = self._collect_structural_warnings()
        cautions = self._collect_structural_cautions()

        eval_error_warnings, eval_error_cautions = self._collect_potential_eval_errors()
- Is there an easy way this could be moved inside the _collect_structural_warnings and _collect_structural_cautions methods? It looks like you did this for efficiency so you only need to walk the model once. However, there is an assert_no_structural_issues method as well (that only checks for warnings) which could not include this as written (although the solution might be to just add this code there as well). I am also working on adding diagnostics to the convergence tester (parameter sweep) tool, and I am looking at using these methods there too (which would mean they need to become public).
- Could you add an entry to next_steps as well pointing users to the associated display method.
I think I have addressed both of these now. I realized I was being silly with the cautions. They were completely unnecessary/wrong.
@Robbybp, I added an option as you suggested.
If tests pass, then this will hopefully be ready to go.
idaes/core/util/model_diagnostics.py (Outdated)
CONFIG.declare(
    "strict_evaluation_error_detection",
    ConfigValue(
        default=True,
        domain=bool,
        description="If False, warnings will not be generated for things like log(x) with x >= 0",
    ),
)
I'm not sure what the default should be here. For Ipopt, with default options (i.e. bound_relax_factor=1e-8), we stay on a strictly feasible path with respect to bounds of zero, but not other bounds (at least according to this option's documentation). So something like log(1 - x); x <= 1 could still hit the evaluation error. I think I lean towards treating this as an advanced option that people can turn on if they know what they're doing, although I think we will have to test this in the wild to really know what the "right" default is.
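A minimal sketch of the situation described above (the model is illustrative, not from this PR): the bound x <= 1 excludes the evaluation-error region of log(1 - x) only if it is honored exactly, and Ipopt's default bound_relax_factor of 1e-8 allows x to end up marginally above 1.

```python
import pyomo.environ as pe

# Illustrative model: the objective pushes x toward its upper bound, where
# log(1 - x) becomes undefined. With Ipopt's default bound_relax_factor=1e-8,
# x may drift slightly above 1 and trigger an evaluation error; setting the
# option to 0 forces bounds to be honored exactly.
m = pe.ConcreteModel()
m.x = pe.Var(bounds=(None, 1), initialize=0.5)
m.y = pe.Var(initialize=0.0)
m.c = pe.Constraint(expr=m.y == pe.log(1 - m.x))
m.obj = pe.Objective(expr=(m.x - 2) ** 2)

solver = pe.SolverFactory("ipopt")
solver.options["bound_relax_factor"] = 0
# results = solver.solve(m, tee=True)
```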
I tested the example you mentioned with x initialized to 2, and I did not get an evaluation error (if I set the nlp_scaling_method to none).
Which is consistent with the way we do fraction to the boundary in parapint.
That said, I agree with your overall point.
I tested the example you mentioned with x initialized to 2, and I did not get an evaluation error (if I set the nlp_scaling_method to none).
What if you set bound_push to zero (or anything less than 1*bound_relax_factor, I think)? The point is just that the evaluation error could happen.
Yes, I'm suggesting renaming the option, although I'd prefer to get another opinion, as naming preferences can be fairly idiosyncratic.
I would be fine with a name that describes the "region of evaluation error", but would prefer to call it something other than "strict" as this makes me think of how the bounds are handled (unless others think the name makes sense).
Personally I read "strict_evaluation_error_detection" in the sense that @michaelbynum used it, but equally when I read the doc string I asked "what exactly happens when it is set to False, and why?" This does have me leaning towards @Robbybp's suggestion, as it would focus the documentation on "why you might want to set this to relax the definition of bounds".
Since this is a potentially niche use case, maybe we should defer including this option for now (and keep the default behavior of treating bounds as if they are non-strict), then add this later if we find that the output is often dominated by bounds of zero that we don't actually expect to hit.
@andrewlee94, are you okay with removing the option for now and adding later?
That is fine by me.
@Robbybp & @michaelbynum: ping. We're looking at cutting the RC just after Thanksgiving; will this make it?
Should make it.
@michaelbynum Note that you have a few lingering pylint issues too:
idaes/core/util/model_diagnostics.py:59:0: E0611: No name 'generate_standard_repn' in module 'pyomo.repn.standard_repn' (no-name-in-module)
@@ -251,6 +267,14 @@ def svd_sparse(jacobian, number_singular_values):
        description="Tolerance for raising a warning for small Jacobian values.",
    ),
)
CONFIG.declare(
    "warn_for_evaluation_error_at_bounds",
@Robbybp, @andrewlee94 - what do you think of this name?
Sounds good to me.
Looks good to me
Awesome. Thank you both.
I'm not sure why tests are getting skipped...
@michaelbynum I think it is OK - it is just GitHub getting confused.
Summary/Motivation:
This PR adds a method to the DiagnosticToolBox that looks for potential evaluation errors such as the log of a negative number. I'm marking it as draft while I work on tests/docs, but I wanted to create the PR to get some initial feedback.

Example:
gives
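As a rough illustration of the kind of model and report involved (the model below and the exact toolbox and method names are assumptions for this sketch, not taken verbatim from this PR):

```python
import pyomo.environ as pe
from idaes.core.util.model_diagnostics import DiagnosticsToolbox

# Hypothetical model: log(x) can see a negative argument because x is unbounded below.
m = pe.ConcreteModel()
m.x = pe.Var(initialize=1.0)
m.y = pe.Var(initialize=0.0)
m.c1 = pe.Constraint(expr=m.y == pe.log(m.x))
m.obj = pe.Objective(expr=(m.x - 2) ** 2 + m.y ** 2)

dt = DiagnosticsToolbox(m)
# The structural report is expected to flag the potential evaluation error and
# point to the associated display method for details.
dt.report_structural_issues()
```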
Note that adding the constraint
m.c3 = pe.Constraint(expr=m.x - 1 >= 0)
does not change this report because there could still be evaluation errors in infeasible path solvers (like Ipopt). However, if this constraint is added and FBBT is used before checking for evaluation errors, then no warnings will be issued.
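A minimal sketch of that workflow (the model and toolbox usage are assumed, as in the illustration above; fbbt is Pyomo's feasibility-based bound tightening utility): FBBT turns the constraint into an explicit variable bound, so the check no longer flags log(x).

```python
import pyomo.environ as pe
from pyomo.contrib.fbbt.fbbt import fbbt
from idaes.core.util.model_diagnostics import DiagnosticsToolbox

m = pe.ConcreteModel()
m.x = pe.Var(initialize=2.0)
m.y = pe.Var(initialize=0.0)
m.c1 = pe.Constraint(expr=m.y == pe.log(m.x))
m.c3 = pe.Constraint(expr=m.x - 1 >= 0)  # implies x >= 1, but only via a constraint
m.obj = pe.Objective(expr=(m.x - 2) ** 2 + m.y ** 2)

# FBBT propagates m.c3 into an explicit bound (m.x.lb becomes 1), so a subsequent
# evaluation-error check sees the argument of log(x) as safely positive.
fbbt(m)

dt = DiagnosticsToolbox(m)
dt.report_structural_issues()
```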
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: