

Are absolute and relative tolerances equivalent in multi-language benchmarks? #1047

Closed
ljuszkie opened this issue Sep 13, 2024 · 3 comments

@ljuszkie

I was discussing the performance of Julia ODE solvers compared with the solvers available in MATLAB and other languages, and someone raised the following question: are the absolute and relative tolerances defined in an equivalent way for the benchmarked solvers? Will the same abstol and reltol give the same precision in Julia, MATLAB, and Sundials?

@ChrisRackauckas
Member

We are using work-precision diagrams. When you look at the diagram, it shows the true error, i.e. the error of the computed solution against a reference or analytical solution, versus time. This allows even methods which implement adaptivity differently (and thus interpret the tolerances slightly differently) to be compared just by taking a vertical line, because that line is directly "how accurate vs. how fast".

[work-precision diagram]
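For reference, here is a minimal hand-rolled sketch of how such a work-precision point can be produced (the benchmarks use tooling like DiffEqDevTools.jl's WorkPrecisionSet for this): sweep a set of tolerance values, solve, and record the true error against a known solution together with the runtime. The test problem, tolerance values, and solver choice below are illustrative assumptions, not taken from the benchmarks themselves.

```julia
# Minimal hand-rolled work-precision sweep (illustrative; the test problem
# u' = 1.01u and the tolerance values are assumptions for demonstration).
using OrdinaryDiffEq

f(u, p, t) = 1.01 * u                      # ODE with a known analytical solution
prob = ODEProblem(f, 0.5, (0.0, 1.0))
u_exact(t) = 0.5 * exp(1.01 * t)           # exact solution used for the true error

for tol in (1e-3, 1e-5, 1e-7, 1e-9)
    sol = solve(prob, Tsit5(); abstol = tol, reltol = tol)    # first call also compiles
    t = @elapsed solve(prob, Tsit5(); abstol = tol, reltol = tol)
    err = abs(sol[end] - u_exact(1.0))     # true (global) error at the endpoint
    println("tol = $tol   error = $err   time = $t s")
end
```

Plotting the (error, time) pairs gives one curve of a work-precision diagram; repeating the sweep for each solver gives the curves that are compared by a vertical line.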

@DagBruck commented Oct 6, 2024

Well, that compensates for different interpretations of the tolerance set by the user, but it does not explain what you should set the tolerance to in order to get approximately the same answer, i.e. a generalization such as "1e-4 in DASSL corresponds to 1e-5 in CVODE".

@ChrisRackauckas
Member

That's a completely different question, and one unrelated to the benchmarks. It's essentially impossible to know a true mapping from tolerance to global error in general, since the translation can be different for every problem. Some RK methods use PI adaptivity that completely changes the error equation, so they may track global error more accurately than, say, a P-adaptive method, and then there are predictive controllers, etc. But that does not affect the benchmarks, because the benchmarks simply compute the global error you get over a wide range of tolerances: if you look at the benchmarks, you will see they are run with many (abstol, reltol) pairings, and the true error is what is actually calculated and plotted. Digging through the benchmarks you will see, for example, that CVODE is not very good at tracking the global error; it tends to give a bit more error for the same tolerances than other methods, but the work-precision plot compensates for that because you just take a vertical line.
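To make that concrete, here is a hedged sketch of how one can check, on a single problem, how two solvers map the same (abstol, reltol) pair to actual global error. The test problem and the solver pairing (Tsit5 from OrdinaryDiffEq vs. CVODE_BDF from Sundials.jl) are illustrative assumptions; the benchmarks sweep many tolerance pairs over much harder problems.

```julia
# Sketch: achieved global error for the same tolerances across two solvers
# (problem and solver choices are illustrative assumptions, not the
# benchmark setups themselves).
using OrdinaryDiffEq, Sundials

f(u, p, t) = 1.01 .* u
prob = ODEProblem(f, [0.5], (0.0, 1.0))
u_exact = 0.5 * exp(1.01 * 1.0)            # exact endpoint value of u' = 1.01u

for (name, alg) in (("Tsit5", Tsit5()), ("CVODE_BDF", CVODE_BDF()))
    sol = solve(prob, alg; abstol = 1e-6, reltol = 1e-6)
    println(name, ": achieved error = ", abs(sol[end][1] - u_exact))
end
```

The point of the work-precision plots is that you never need such a per-solver tolerance translation: the plotted quantity is the achieved error itself.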
