Adding diagnostics tools for ill conditioning #1299
Conversation
I left a few comments while my experience with the ill conditioning tool is still fresh.
Co-authored-by: bknueven <[email protected]>
Codecov Report

@@            Coverage Diff             @@
##             main    #1299      +/-   ##
==========================================
+ Coverage   77.41%   77.43%    +0.01%
==========================================
  Files         390      390
  Lines       63649    63758      +109
  Branches    11705    11737       +32
==========================================
+ Hits        49276    49370       +94
- Misses      11835    11845       +10
- Partials     2538     2543        +5

☔ View full report in Codecov by Sentry.
The set of changes below avoids cubic time complexity in the check_ill_conditioning model construction.
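For illustration, here is a minimal sketch (not the PR's actual code; the function and component names are made up) of building the linear-dependence constraints column-by-column from the sparse Jacobian, so that each constraint only touches the non-zeros of its column rather than every (row, column) pair:

```python
# Illustrative sketch only -- names and structure are assumptions, not the PR's code.
import scipy.sparse as sp
from pyomo.environ import ConcreteModel, Var, Constraint, Reals

def build_certificate_constraints(jac):
    """Build y^T J == res column-by-column, touching only the non-zero entries."""
    jac = sp.csc_matrix(jac)
    n_eq, n_var = jac.shape

    m = ConcreteModel()
    m.y = Var(range(n_eq), domain=Reals)     # one multiplier per constraint (row)
    m.res = Var(range(n_var), domain=Reals)  # residual for each variable (column)

    def _col_rule(m, j):
        # Slice the CSC structure to get only the rows with a non-zero in column j.
        start, stop = jac.indptr[j], jac.indptr[j + 1]
        rows, vals = jac.indices[start:stop], jac.data[start:stop]
        return sum(float(v) * m.y[int(i)] for i, v in zip(rows, vals)) == m.res[j]

    m.lin_dep = Constraint(range(n_var), rule=_col_rule)
    return m
```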
A few small comments, then I've opened a PR into this branch where I do the following:
- Clean up variable names in some of the code I wrote for the parallel vector checker
- Implement a relative tolerance check for parallel vectors (a sketch of this idea follows below).
The PR is here: andrewlee94#19. Let me know what you think about the changes.
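A minimal sketch of the relative-tolerance idea (illustrative only, not the code in andrewlee94#19): two vectors are treated as parallel when the magnitude of their inner product is within a relative tolerance of the product of their norms, so the test scales with the vectors instead of relying on a fixed cutoff.

```python
# Sketch only: one way to express "parallel within a relative tolerance".
import numpy as np

def are_parallel(u, v, rel_tol=1e-8):
    """True when |u.v| is within rel_tol of ||u||*||v||, i.e. the vectors are (anti)parallel."""
    norm_prod = np.linalg.norm(u) * np.linalg.norm(v)
    if norm_prod == 0.0:
        return False  # zero vectors are handled separately (e.g. reported as empty rows/columns)
    return abs(abs(np.dot(u, v)) - norm_prod) <= rel_tol * norm_prod
```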
Use a relative tolerance when checking parallel rows/cols
build ill conditioning problem iterating over non-zeros
A few minor comments. These do not necessarily need to happen now, as this is beta functionality, but I would like them to be considered before this is made more permanent. In addition, I think a more permanent implementation should be more configurable, for example to incorporate a DegeneracyHunter-like option.
idaes/core/util/model_diagnostics.py (Outdated)

    return parallel


def check_ill_conditioning(
I think this name is too vague, as there are many different ways of checking for ill-conditioning. Maybe compute_ill_conditioning_certificate or compute_worst_conditioned_subset?
compute_ill_conditioning_certificate is better. compute_worst_conditioned_subset would be a bit inaccurate since it might involve solving a MIP version of this problem.
Since we're solving the minimum-residual problem globally, we have the "worst conditioned" combination of rows/columns, right? Then you're saying that there is no reason to believe that the result will be a subset?
I don't think so, but it will tend to be a subset because we minimize the 1-norm of the multiplier alongside requiring an optimal basic feasible solution to the LP, i.e., it's just LASSO.
Put another way, we could try to minimize the number of non-zeros in the multiplier, which would give a minimal subset demonstrating the ill conditioning.
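To make the contrast concrete (schematic only; the actual constraints and normalization come from the first-stage problem and may differ from the PR):

$$
\min_{y}\ \lVert y\rVert_{1}
\qquad\text{vs.}\qquad
\min_{y}\ \lVert y\rVert_{0} \;=\; \bigl|\{\, i : y_i \neq 0 \,\}\bigr|,
$$

both subject to the same linear-dependence constraints. The first is an LP whose basic optimal solutions tend to be sparse (the LASSO-like behaviour mentioned above); the second is a MIP and would certify a truly minimal subset.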
    jac = jac.tocsr()

    # Build test problem
    inf_prob = ConcreteModel()
Why is this called inf_prob?
Probably should just go with m?
I like m. I'm fine with another name too, I just don't know what "inf" is.
"Infeasibility" of the linear dependence constraint?
    solver.options["primalT"] = target_feasibility_tol * 1e-1
    solver.options["dualT"] = target_feasibility_tol * 1e-1

    results = solver.solve(inf_prob, tee=False)
This does not have to happen now, but I generally like splitting these types of functions into two: one that simply builds the model, and one that builds the model (by calling the first function) and then solves it. That way an advanced user can build the model and inspect it, apply a transformation, or use a custom solver. It also makes it easier to see what the default solver and options are.
I'm open to arguments that this is a bad idea, and again, this does not need to happen now.
We actually solve two problems at the moment -- we minimize the norm of the RHS, then we fix the norm of the RHS and minimize the 1-norm of the vector. That makes a cleaner workflow more difficult.
I also imagine eventually trying to re-solve with tighter tolerances if the answer is nonsensical, e.g., the norm of the RHS is negative. These things can be done automatically much more straightforwardly if we pick a single solver.
I did not realize that we solve two problems. I see the difficulty in splitting this up in that case.
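For readers following the thread, here is a rough sketch of that two-stage workflow. It is not the PR's code: inf_prob.res_norm and inf_prob.min_res match the diff below, but y_plus/y_minus and the function itself are assumed names for the split multiplier used to express its 1-norm linearly.

```python
# Rough sketch of the two-stage workflow described above, not the PR's actual code.
from pyomo.environ import Constraint, Objective, SolverFactory, minimize, value

def solve_certificate(inf_prob, solver_name="cbc"):
    solver = SolverFactory(solver_name)

    # Stage 1: minimize the norm of the RHS (the residual of the dependence constraints).
    solver.solve(inf_prob, tee=False)
    best_res = value(inf_prob.res_norm)

    # Stage 2: fix that residual norm and minimize the 1-norm of the multiplier.
    inf_prob.min_res.deactivate()
    inf_prob.fix_res = Constraint(expr=inf_prob.res_norm <= best_res)
    inf_prob.min_one_norm = Objective(
        expr=sum(inf_prob.y_plus[i] + inf_prob.y_minus[i] for i in inf_prob.y_plus),
        sense=minimize,
    )
    return solver.solve(inf_prob, tee=False)
```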
    # Objective -- minimize residual
    inf_prob.min_res = Objective(expr=inf_prob.res_norm)

    solver = SolverFactory("cbc")  # TODO: Consider making this an option
It would be nice to default to HiGHS or SCIP, although I understand if that can't happen for the time being.
The best solver to pick would probably be SoPlex, as we can set arbitrary precision and utilize its iterative refinement capability. But I went with cbc because it ships with idaes get-extensions.
FYI, for testing at least we do have the AMPL version of SCIP available now. For DegeneracyHunter, I made the solver a user argument with SCIP as the default.
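Something along these lines would mirror that pattern here (illustrative sketch only; the argument names are not the PR's API):

```python
# Illustrative only: a user-supplied solver with a default, mirroring DegeneracyHunter.
from pyomo.environ import SolverFactory

def _get_certificate_solver(solver=None, solver_options=None):
    # Default to CBC, which ships with `idaes get-extensions`; accept any Pyomo solver name.
    solver = SolverFactory(solver if solver is not None else "cbc")
    for key, val in (solver_options or {}).items():
        solver.options[key] = val
    return solver
```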
Other than the name of check_ill_conditioning, I think this is ready.
I'd still like to see this merged with DegeneracyHunter eventually, because I think having two different optimization-based degeneracy/ill-conditioning detectors will be confusing for users. This does not have to hold up this PR.
idaes/core/util/model_diagnostics.py (Outdated)

    return parallel


def check_ill_conditioning(
I think check_ill_conditioning is too vague of a name here. I suggest compute_ill_conditioning_certificate.
I like this name better.
If we were honest it might be try_to_compute_ill_conditioning_certificate, as this LP itself will also be ill-conditioned. SoPlex has the capability to solve LPs to arbitrary precision, but it's not currently available as a (direct) Pyomo solver. And you cannot access the same options through SCIP, unfortunately.
I've updated the name to compute_ill_conditioning_certificate.
Looks good
@bknueven Would you have time to review this? I think it is just waiting on a second reviewer.
Fixes None
Summary/Motivation:
This PR adds new diagnostics capabilities for detecting near-parallel variables and constraints and for finding sources of ill conditioning in models (similar to DegeneracyHunter).
Changes proposed in this PR: