FullGrad for Vision Transformers #14

Open
imanuelroz opened this issue Mar 8, 2022 · 1 comment

Comments

@imanuelroz
Hi, I wanted to know whether there is a version of FullGrad that can be applied to Vision Transformers such as ViT or the Swin Transformer, or whether a few small changes to the code would make that possible. Thank you in advance.

@suraj-srinivas
Collaborator

suraj-srinivas commented Mar 16, 2022

Hi! Sorry for the late reply. Technically, FullGrad is proposed for convolutional or fully connected neural networks, so the completeness condition may not be satisfied for transformers.

However, you are free to use Simple / Smooth FullGrad in this case, which do not have a completeness requirement. I haven't tested them on transformers myself, but you'd need to change this line:

if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear) or isinstance(m, nn.BatchNorm2d):
to include self-attention layers and perhaps exclude the fully connected ones.
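
For illustration, here is a minimal, untested sketch of how that check might be extended for a ViT-style model. The helper name and the choice of nn.LayerNorm / nn.MultiheadAttention are assumptions, not part of the library, and timm-style ViTs define their own attention classes that you would match instead:

import torch.nn as nn

def is_bias_module(m):
    # Original FullGrad check: conv, fully connected, and batch-norm layers.
    conv_fc_types = (nn.Conv2d, nn.Linear, nn.BatchNorm2d)
    # Assumed extension for transformers: also hook LayerNorm and
    # self-attention blocks; whether to keep nn.Linear (the MLP sub-layers)
    # is an open design choice, as noted above.
    transformer_types = (nn.LayerNorm, nn.MultiheadAttention)
    return isinstance(m, conv_fc_types + transformer_types)

# Usage sketch: pick which modules get bias-gradient hooks registered.
# for m in model.modules():
#     if is_bias_module(m):
#         ...  # register hooks as the extractor already does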

If you do happen to use it, I'd be happy to learn about your experience or the issues you faced!
