Feature: Transforms #43
Conversation
@samuelburbulla The current state of this pull request also includes backward/undo implementations for the transformations. This can only be done for bijective mappings; there are many non-bijective mappings, such as cropping, random noise, and limiting. In cases where this is not possible, the …
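A minimal sketch of the distinction the comment draws: bijective transforms can implement an undo operation, while non-bijective ones (cropping, noise, limiting) cannot. All class and method names below are illustrative assumptions, not the PR's actual API, and plain floats stand in for tensors to keep the sketch dependency-free.

```python
class Transform:
    """Illustrative base class: forward is mandatory, backward optional."""

    def forward(self, x):
        raise NotImplementedError

    def backward(self, x):
        # Non-bijective transforms (cropping, random noise, limiting)
        # cannot undo their forward pass, so the default refuses to invert.
        raise NotImplementedError("This transform is not invertible.")


class Scale(Transform):
    """Bijective example: multiplication by a non-zero factor can be undone."""

    def __init__(self, factor):
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    def backward(self, x):
        return x / self.factor


class Clamp(Transform):
    """Non-bijective example: limiting values destroys information,
    so only forward is implemented."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def forward(self, x):
        return max(self.lo, min(self.hi, x))
```

Under this design, calling `backward` on a `Clamp` instance raises, which is one way the "not possible" case mentioned above could surface to the user.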
Adapted to the requested changes. I need some further clarification on whether it is okay to remove the UserWarning filter.
Force-pushed from b6c3703 to 32e4e8f
Description
Transforming tensors in the setting of neural operators can serve many purposes. Transformations can improve model performance, enhance generalization, support inputs of varying size, facilitate feature extraction, reduce overfitting, or improve computational efficiency.
Currently, these transformations must be applied manually and depend on the end user's implementation. This makes it difficult to implement transformations correctly, and chained transformations in particular are hard to apply. This issue aims to introduce a base transformation class for all tensors.
Which issue does this PR tackle?
How does it solve the problem?
How are the changes tested?
- `Transform` with dummy classes.
- `Compose` with dummy classes. The order of the applied transformations proves that the order in which they are called in backward and forward is correct.

Checklist for Contributors
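The order check with dummy classes might look roughly like this: each dummy transform records its calls into a shared log, and an assertion verifies that forward runs the chain in order while backward runs it in reverse. The `Recorder` name and the manual loops are illustrative stand-ins for the PR's actual test fixtures.

```python
class Recorder:
    """Dummy transform that logs every call into a shared list."""

    def __init__(self, name, log):
        self.name, self.log = name, log

    def forward(self, x):
        self.log.append(("forward", self.name))
        return x

    def backward(self, x):
        self.log.append(("backward", self.name))
        return x


log = []
chain = [Recorder("a", log), Recorder("b", log)]

# Forward pass: "a" must run before "b".
x = 0.0
for t in chain:
    x = t.forward(x)

# Backward pass: "b" must run before "a" (reverse order).
for t in reversed(chain):
    x = t.backward(x)

assert log == [("forward", "a"), ("forward", "b"),
               ("backward", "b"), ("backward", "a")]
```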
- The branch name follows the `feature/title-slug` convention.
- The PR title follows the `Bugfix: Title` convention.

Checklist for Reviewers: