
is the pytorch backend differentiable? #45

Open
Ni-Chen opened this issue Apr 17, 2022 · 3 comments

Comments


Ni-Chen commented Apr 17, 2022

Hi,

Thanks for making this amazing package; it could be very useful.
May I ask if the PyTorch backend is differentiable?

Thanks.

flaport (Owner) commented Apr 18, 2022

Hey @Ni-Chen,

This has been on my todo list for far too long. Currently there are a few in-place operations preventing it. I don't think it should be too difficult to change, though I do expect the simulator to run more slowly afterwards.

Maybe I'll finally give it a shot when I find the time. It would obviously be an awesome feature to have.
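
For illustration, here is a minimal sketch of the kind of in-place update that blocks autograd, next to an out-of-place rewrite. This is a toy field array, not the actual fdtd update equations:

```python
import torch

# Toy stand-in for a field array; not the real fdtd E/H updates.
E = torch.zeros(8, requires_grad=True)       # leaf tensor
src = torch.tensor(1.0, requires_grad=True)  # e.g. a source amplitude

# The in-place pattern that blocks autograd on leaf tensors:
# E[3] += src
# -> RuntimeError: a leaf Variable that requires grad is being used
#    in an in-place operation.

# Out-of-place equivalent: build the update in a fresh buffer and add
# it, so autograd records a normal graph node instead of a mutation.
update = torch.zeros_like(E)
update[3] = src
E = E + update

E.sum().backward()
print(src.grad)  # tensor(1.)
```
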

Ni-Chen (Author) commented Apr 19, 2022

Thanks for your reply.
I believe it would be of great interest to many applications. @flaport


simenhu commented Sep 26, 2023

Would you be able to point to the places where the changes would have to be made? I could see if I can put together a PR for it, if it's not a very big change.

Would preallocating the field tensors and writing each step to a new index, instead of updating an already populated tensor in place, be the general solution you are thinking of?
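
A note on the preallocation idea (a general PyTorch observation, not necessarily the maintainer's plan): `E_hist[t + 1] = ...` is still an in-place write on the preallocated tensor, and autograd's version check can still fail if an earlier slice of the same tensor was saved for backward, since views share a version counter with their base. A pattern that is always safe is to keep each step fully out-of-place and stack the per-step results, e.g.:

```python
import torch

# Hypothetical toy time-stepping loop, not the real fdtd update equations.
src_amp = torch.tensor(1.0, requires_grad=True)  # parameter to optimize
mask = torch.zeros(8)
mask[3] = 1.0                                    # source location

E = torch.zeros(8)
history = []
for t in range(5):
    E = 0.9 * E + mask * src_amp  # purely out-of-place update
    history.append(E)

fields = torch.stack(history)  # (time, cells); autograd graph intact
loss = fields[-1].pow(2).sum()
loss.backward()
print(src_amp.grad)            # gradient w.r.t. the source amplitude
```

Stacking the full history keeps every intermediate field alive for the backward pass, so memory grows with the number of time steps; `torch.utils.checkpoint` is the usual remedy if that becomes a problem.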
