
Linear optimization bottlenecks #38

Open

marcelinomalmeidan opened this issue Jul 19, 2017 · 0 comments
Hi all!
I have been running several tests with this code, and it looks like there is room for optimization.
My motivation is that I want to compute optimal segment times as fast as possible. So far, I have implemented my own backtracking gradient descent (which relies only on the linear solver) and compared its results against those obtained from nlopt (without inequality constraints).
In my tests, I reach the solution about 25-50% faster than nlopt (using the same relative tolerance). See the figure below (the number of points means the number of waypoints):

[Figure: solution time vs. number of waypoints for the optimal segment-time problem]
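For reference, the backtracking scheme I use is roughly the sketch below. This is a minimal sketch, not my exact code: the cost callback, the lower bound on segment times, and the constants are placeholders, and in practice the cost call wraps the linear solver.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <functional>

// Backtracking gradient descent over segment times. `cost` is a stand-in for
// the objective value returned by the linear solver for given segment times.
Eigen::VectorXd optimizeSegmentTimes(
    const std::function<double(const Eigen::VectorXd&)>& cost,
    Eigen::VectorXd times, double rel_tol = 1e-5, int max_iters = 100) {
  const double h = 1e-6;     // finite-difference step
  const double alpha = 0.3;  // Armijo sufficient-decrease parameter
  const double beta = 0.5;   // backtracking shrink factor

  double f = cost(times);
  for (int iter = 0; iter < max_iters; ++iter) {
    // Numerical gradient: one extra linear solve per segment.
    Eigen::VectorXd grad(times.size());
    for (Eigen::Index i = 0; i < times.size(); ++i) {
      Eigen::VectorXd t_pert = times;
      t_pert(i) += h;
      grad(i) = (cost(t_pert) - f) / h;
    }

    // Backtracking line search along the negative gradient,
    // keeping segment times bounded away from zero.
    double step = 1.0;
    Eigen::VectorXd t_new = (times - step * grad).cwiseMax(0.01);
    double f_new = cost(t_new);
    while (f_new > f - alpha * step * grad.squaredNorm() && step > 1e-10) {
      step *= beta;
      t_new = (times - step * grad).cwiseMax(0.01);
      f_new = cost(t_new);
    }
    if (f_new > f) break;  // no descent direction found

    const bool converged = std::abs(f - f_new) < rel_tol * std::abs(f);
    times = t_new;
    f = f_new;
    if (converged) break;  // same relative-tolerance criterion as for nlopt
  }
  return times;
}
```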

However, I believe there is room for improvement in the linear solver itself, which would in turn speed up the backtracking gradient descent.

To find the bottlenecks in the linear solver, I started putting timers everywhere.
My initial assumption was that the bottleneck would be in the matrix inversions. I was wrong.
To make the bottlenecks evident, I solved the linear problem with 1500 waypoints on a quad-core machine (3.50 GHz).
Total calculation time: 5.65s
Time spent on the function setupConstraintReorderingMatrix(): 2.51s
Time spent on the function updateSegmentsFromCompactConstraints(): 2.83s
Time spent on all the rest: 0.31s

At this point, it seems to me that I could only improve the solver by changing these functions. However, I got lost in nested code trying to understand everything.

If I understand it correctly, the setupConstraintReorderingMatrix() function exists to further exploit the structure of the problem, so that only a smaller matrix needs to be inverted in dp = -Rpp^-1 * Rpf * df.
However, I don't think the trade-off is paying off. Sparse linear solves can be quite fast, especially using the SuiteSparse methods. My bet is that solving the full problem, with Lagrange multipliers and everything, would still be more efficient.
For instance, I just ran the problem above with 10,000 points. The time spent in setupConstraintReorderingMatrix() was 220 s, while the time to solve dp = -Rpp^-1 * Rpf * df (with Rpp being 10,000x10,000) was 0.24 s.
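To illustrate what I mean by the sparse solve being cheap: the substitution step can be done with a sparse factorization instead of an explicit inverse, roughly as in the sketch below. The matrix names follow the notation above; building Rpp and Rpf from the cost and constraint matrices is omitted, and the function name is hypothetical.

```cpp
#include <Eigen/Sparse>

// Solve dp = -Rpp^-1 * Rpf * df without forming the inverse explicitly:
// factorize Rpp once and back-substitute.
Eigen::VectorXd solveFreeConstraints(const Eigen::SparseMatrix<double>& R_pp,
                                     const Eigen::SparseMatrix<double>& R_pf,
                                     const Eigen::VectorXd& d_f) {
  Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
  solver.compute(R_pp);                    // sparse LU factorization of Rpp
  const Eigen::VectorXd rhs = -(R_pf * d_f);  // RHS of Rpp * dp = -Rpf * df
  return solver.solve(rhs);
}
```

If Rpp is symmetric positive definite, a Cholesky-based solver such as Eigen's SimplicialLLT, or the CHOLMOD wrapper from SuiteSparse, could presumably be dropped in instead of SparseLU.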

As for the function updateSegmentsFromCompactConstraints(), it seems to be responsible for populating the segments that go into the output. I don't really understand why it is inefficient. Any ideas?

I am not sure exactly what to ask in this issue, but are the developers still active on this repository? It seems I would have to dig deep to improve these two functions, but maybe you have better ideas on how to do it, given your experience with the code.
