
chore(optim): wrap torch.autograd.grad() with torch.enable_grad() context #220

Open

wants to merge 2 commits into base: main
Conversation

XuehaiPan
Member

Description

Describe your changes in detail.

Explicitly wrap torch.autograd.grad() with the torch.enable_grad() context. Users can then run the hyper-gradient step in inference-only mode with:

import torch
import torchopt

# `model`, `compute_loss`, and `batch` are assumed to be defined elsewhere
meta_optim = torchopt.MetaAdam(model, lr=0.1)

loss = compute_loss(model, batch)
with torch.no_grad():  # inference-only: no differentiable graph is kept through the update
    meta_optim.step(loss)
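
For reference, a minimal sketch of the wrapping pattern itself, using a hypothetical helper name (the actual call sites inside TorchOpt may differ):

import torch

def _compute_grads(loss, params):
    # Hypothetical helper illustrating the change: re-enable grad mode around the
    # explicit torch.autograd.grad() call so the hyper-gradient is still computed
    # even when the caller runs the optimizer step under torch.no_grad().
    with torch.enable_grad():
        return torch.autograd.grad(loss, tuple(params))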

Motivation and Context

Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here.
You can use the syntax close #15213 if this solves the issue #15213

  • I have raised an issue to propose this change (required for new features and bug fixes)

#218 (comment)

Types of changes

What types of changes does your code introduce? Put an x in all the boxes that apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update to the documentation)
  • Example (update in the examples folder)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide. (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly. (required for a bug fix or a new feature)
  • I have updated the documentation accordingly.
  • I have reformatted the code using make format. (required)
  • I have checked the code using make lint. (required)
  • I have ensured make test pass. (required)

XuehaiPan added the labels enhancement (New feature or request) and functorch (Something functorch related) on May 21, 2024
XuehaiPan requested a review from JieRen98 on May 21, 2024, 14:39
XuehaiPan self-assigned this on May 21, 2024
XuehaiPan linked an issue on May 21, 2024 that may be closed by this pull request

codecov bot commented May 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 93.69%. Comparing base (b3f570c) to head (5f9b918).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #220   +/-   ##
=======================================
  Coverage   93.69%   93.69%           
=======================================
  Files          83       83           
  Lines        2963     2965    +2     
=======================================
+ Hits         2776     2778    +2     
  Misses        187      187           
Flag Coverage Δ
unittests 93.69% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


Labels
enhancement (New feature or request), functorch (Something functorch related)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Question] Memory keep increase in MetaAdam due to gradient link
1 participant