
Add expm (exponential matrix) implementation #156

Open
elsuizo opened this issue Jun 13, 2019 · 11 comments

Comments

@elsuizo commented Jun 13, 2019

https://en.wikipedia.org/wiki/Matrix_exponential

@termoshtt (Member)

SciPy has three functions (expm, expm2, and expm3), and expm2 and expm3 are deprecated. We should use the Padé approximation.
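
For reference, the diagonal Padé approach writes exp(A) ≈ Q(A)^{-1} P(A), where P and Q are low-degree polynomials in A. Below is a minimal [3/3] sketch on top of ndarray/ndarray-linalg; it is an illustration only (a real implementation adds scaling and squaring, discussed later in this thread), and the crate versions in the comment are assumptions:

```rust
// Sketch only. Assumed crates: ndarray = "0.15", ndarray-linalg = "0.16".
use ndarray::Array2;
use ndarray_linalg::Solve;

/// Degree-3 diagonal Pade approximant of exp(a). Accurate only when the
/// norm of `a` is small; the full algorithm first scales `a` by 2^-s and
/// squares the result s times afterwards.
fn expm_pade3(a: &Array2<f64>) -> Array2<f64> {
    let n = a.nrows();
    let id: Array2<f64> = Array2::eye(n);
    let a2 = a.dot(a);
    // b_k coefficients of the [3/3] approximant.
    let b = [120.0, 60.0, 12.0, 1.0];
    let u = a.dot(&(b[3] * &a2 + b[1] * &id)); // odd-power part
    let v = b[2] * &a2 + b[0] * &id;           // even-power part
    let p = &v + &u; // numerator evaluated at `a`
    let q = &v - &u; // denominator evaluated at `a`
    // exp(a) ~ q^{-1} p: solve q * x = p column by column instead of
    // inverting (a real implementation would LU-factorize q once).
    let mut x = Array2::<f64>::zeros((n, n));
    for j in 0..n {
        let col = q.solve(&p.column(j).to_owned()).expect("singular denominator");
        x.column_mut(j).assign(&col);
    }
    x
}
```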


@elsuizo (Author) commented Jun 19, 2019

The Julia language implementation is the following:
https://github.com/JuliaLang/julia/blob/master/stdlib/LinearAlgebra/src/dense.jl#L530-L611

@SuperFluffy

You can try https://github.com/SuperFluffy/rust-expm.

I have to be honest: I implemented it more for fun and never used it in production. So YMMV.

@bluss (Member) commented Nov 18, 2021

@ethanhs see also this issue

@matthagan15

Assuming this issue is still open, I'd love to take a stab at it.

@naren-nallapareddy

@matthagan15 If you are working on the code, I will write tests, debug, or help in any way I can (FYI, I'm new to Rust).

@matthagan15

@naren-nallapareddy I have yet to get around to this; I'm still struggling to understand how the degree of the Padé approximation is chosen and how the denominator is implemented (I believe you just compute the inverse of the polynomial of the matrix, but a full inverse seems like a lot of work, so I'm not sure).
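
For what it's worth, the denominator never needs an explicit inverse: with the numerator P(A) and denominator Q(A) already evaluated, exp(A) ≈ Q(A)^{-1} P(A) is obtained by LU-factorizing Q once and solving Q * X = P, which is also what SciPy's expm does. A rough sketch with ndarray-linalg (solve_pade is a hypothetical helper name, not anything from this repo):

```rust
use ndarray::Array2;
use ndarray_linalg::{Factorize, Solve};

/// Hypothetical helper: return q^{-1} * p without forming the explicit
/// inverse. One LU factorization of `q` is reused for every column of `p`.
fn solve_pade(q: &Array2<f64>, p: &Array2<f64>) -> Array2<f64> {
    let lu = q.factorize().expect("Pade denominator is singular");
    let n = p.nrows();
    let mut x = Array2::<f64>::zeros((n, n));
    for j in 0..n {
        let col = lu.solve(&p.column(j).to_owned()).expect("solve failed");
        x.column_mut(j).assign(&col);
    }
    x
}
```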

@matthagan15

status update: I have a bare-bones implementation of the norm estimation needed, which seems to be working in preliminary testing (around 90% agreement with exact values), and I have some expm approximations that seem to be working on a single 3x3 test matrix. Some things still need to be done:

  • Implement the scaling and squaring steps. This basically involves checking norms, choosing approximation orders, and scaling the matrix appropriately (a sketch of the order selection follows this list).
  • Verify the norm estimation a little better. It seems to be working now, but better testing and one more pass through the algorithm couldn't hurt.
  • Currently everything is done for Array2 matrices. I want to convert these to Scalar-generic operations.
  • Optimizations afterwards. My main goal is to get a minimal working product and then fine-tune it later.
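
On the first item: the usual strategy (Higham 2005, which is what SciPy follows) picks the smallest Padé order whose accuracy bound covers the matrix's 1-norm, and only scales when even order 13 won't do. A rough sketch of that selection logic; the theta thresholds below are quoted from memory of the paper/SciPy and should be double-checked:

```rust
/// Pick a Pade order and a power-of-two scaling for a matrix with the
/// given 1-norm, following the scaling-and-squaring strategy of
/// Higham (2005). Returns (pade_order, number_of_squarings).
fn choose_order(norm_1: f64) -> (usize, u32) {
    // theta_m: the largest ||A||_1 for which the order-m approximant
    // meets double-precision accuracy (values assumed, verify!).
    const THETA: [(usize, f64); 4] = [
        (3, 1.495_585_217_958_292e-2),
        (5, 2.539_398_330_063_23e-1),
        (7, 9.504_178_996_162_932e-1),
        (9, 2.097_847_961_257_068),
    ];
    for &(m, theta) in THETA.iter() {
        if norm_1 <= theta {
            return (m, 0); // small norm: no scaling needed
        }
    }
    // Otherwise scale by 2^-s so the norm drops below theta_13, apply
    // the order-13 approximant, then square the result s times.
    const THETA_13: f64 = 5.371_920_351_148_152;
    let s = (norm_1 / THETA_13).log2().ceil().max(0.0) as u32;
    (13, s)
}
```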

@matthagan15

After a busy few days, it seems that expm is working for 2x2 complex rotation matrices! That is, picking a random angle theta, we know that exp(i * theta * pauli_x) = cos(theta) * I + i * sin(theta) * pauli_x, so I tested this over random values of theta (a self-contained version of this check is sketched after the list below). The higher-order approximations give results within a factor of 2 or so of machine precision (f64::EPSILON), but the lower orders can have much higher errors. This seems to indicate something is wonky with the norm checks, but even then the worst error I've seen was on the order of 10^-13 or so, which is good, just not as accurate as it should be. Next steps:

  • Currently using exact operator norms in all necessary calculations; convert some of these to estimates to improve speed.
  • Double-check allocations and matrix multiplications. I don't want unnecessary matrix allocations floating around or unnecessary matrix multiplications.
  • Clean up helpers. There's a lot of random helper functions and one helper struct.
  • Better testing at higher dimensions. Thinking of writing up a test suite: compute random matrices and their expm using scipy's implementation, store the results in JSON, then check these against the implemented version.
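
Here is the rotation check written out as a runnable sketch. Since the fork's expm isn't shown in this thread, a naive truncated Taylor series (expm_taylor, a hypothetical stand-in) plays the role of the implementation under test; swapping in the real expm is a one-line change. Assumed crates: ndarray and num-complex.

```rust
use ndarray::{array, Array2};
use num_complex::Complex64;

/// Naive truncated Taylor series for exp(a); a stand-in for the real
/// expm under test (which would use Pade plus scaling and squaring).
fn expm_taylor(a: &Array2<Complex64>, terms: usize) -> Array2<Complex64> {
    let mut result: Array2<Complex64> = Array2::eye(a.nrows());
    let mut term: Array2<Complex64> = Array2::eye(a.nrows());
    for k in 1..=terms {
        term = term.dot(a) / Complex64::new(k as f64, 0.0);
        result = result + &term;
    }
    result
}

fn main() {
    let i = Complex64::new(0.0, 1.0);
    let one = Complex64::new(1.0, 0.0);
    let zero = Complex64::new(0.0, 0.0);
    let pauli_x: Array2<Complex64> = array![[zero, one], [one, zero]];
    let theta = 0.7123_f64; // stand-in for a randomly sampled angle
    // Closed form: exp(i*theta*pauli_x) = cos(theta)*I + i*sin(theta)*pauli_x
    let expected = Array2::<Complex64>::eye(2).mapv(|x| x * theta.cos())
        + pauli_x.mapv(|x| i * theta.sin() * x);
    let actual = expm_taylor(&pauli_x.mapv(|x| i * theta * x), 30);
    let max_err = (&actual - &expected)
        .iter()
        .map(|z| z.norm())
        .fold(0.0_f64, f64::max);
    assert!(max_err < 1e-13, "max entry error was {}", max_err);
}
```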

After I have it in a presentable state, I'll try to break it down into a few chunks (total changes are currently sitting at around 1100 lines of code) and send some pull requests!

@matthagan15 commented Oct 27, 2022

So I've simplified the code a fair bit, but unfortunately I've found a bug which is preventing me from sending it out. I'm not sure what the bug is, but there seems to be an error that scales with the dimension of the input matrix: at around 200x200, each entry has an error on the order of 966 * f64::EPSILON, which is about 10^-12, leading to a 1-norm error of at least 2 * 10^-10. This may not seem significant, but the per-entry error is around 20 times worse than scipy's implementation, and furthermore scipy's per-entry error does not increase with the dimension. I used this in a Monte Carlo numerical test and found that below a critical dimension of about 50 or so (where the error is comparable to scipy's) the numerics worked beautifully, but above a dimension of around 150 my output became incredibly noisy, indicating the errors had accumulated up to my working precision.

I'm currently trying to determine what could be causing this. It is not a bug in my Padé approximations, as it occurs at all approximation levels (meaning it doesn't seem to be a dumb code mistake). Furthermore, because the lower-order approximants do not use any scaling and squaring, it can't be an issue with that. I've tested the inversions used, and the inverses work as expected. It may be an issue with one of the subroutines, but to unit test those I have to check them against the scipy implementations. Hopefully I will find it soon!

@matthagan15

Did some tweaks. I believe the error mentioned previously is not an algorithmic implementation issue, as it does not appear with sparse matrices. There are still some TODOs on my end: smartly switch from the exact 1-norm calculation to norm estimation for large matrices (which I also have implemented on my fork), which is what scipy does; a sketch of the exact version is below for contrast. The pull request is at #352 for anyone curious.
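
For context, the exact 1-norm is just the maximum absolute column sum, which costs O(n^2) on its own; the expensive part in expm is that the order selection wants norms of matrix powers like ||A^8||_1, and forming each power is O(n^3). That is where a Hager/Higham-style estimator (what SciPy switches to for large matrices) pays off. A trivial exact version with ndarray (0.15-era API) for comparison:

```rust
use ndarray::Array2;

/// Exact matrix 1-norm: the maximum absolute column sum.
fn one_norm(a: &Array2<f64>) -> f64 {
    a.columns()
        .into_iter()
        .map(|col| col.iter().map(|x| x.abs()).sum::<f64>())
        .fold(0.0, f64::max)
}
```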
