Investigate torch export/AOT inductor #102

Open
ElliottKasoar opened this issue Mar 28, 2024 · 2 comments
Labels
enhancement New feature or request Hacktoberfest Issues open for Hacktoberfest contributions

Comments

@ElliottKasoar
Contributor

As of roughly PyTorch 2, several discussions suggest that TorchScript is no longer in active development (see here and here), although it is unlikely to be deprecated soon either.

Newer approaches to saving models appear to use torch.export (tutorial) to save a ".pt2" file - the "PyTorch 2.X way to export PyTorch models into standardized model representations, intended to be run on different (i.e. Python-less) environments".

Development seems to be focused around TorchInductor for model compilation, including a new (PyTorch 2.2) ahead-of-time extension of TorchInductor - AOTInductor, which is designed for creating artefacts for non-Python environments.

These features are currently prototypes, so the interfaces are likely to change significantly, but it seems likely that support would be useful in the not-too-distant future.

See also: recent portability discussion

@ElliottKasoar ElliottKasoar added the enhancement New feature or request label Mar 28, 2024
@tztsai

tztsai commented Apr 10, 2024

torch.export saves the PyTorch model as a single computation graph in a standardized representation in the .pt2 file. This intermediate representation, captured by AOTInductor, is compatible with many backends, including Triton and C++/OpenMP. However, this new feature is said to be subject to backwards-incompatible changes. Although TorchScript is no longer in active development, its latest stable version will be supported for the foreseeable future. Therefore, we could perhaps wait until TorchInductor has a stable API and becomes more commonly adopted in place of TorchScript.

@ElliottKasoar
Contributor Author

Quick update: PyTorch 2.4 supports (in beta) freezing for CPU with AOTInductor, via a TORCHINDUCTOR_FREEZING environment variable - relevant if AOTInductor is eventually supported in some form.
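The flag is just an environment variable set before compilation; a sketch (the flag name is from the PyTorch 2.4 notes, the script name is hypothetical):

```shell
# Enable the beta CPU freezing path for TorchInductor/AOTInductor.
export TORCHINDUCTOR_FREEZING=1
# python compile_model.py   # hypothetical script that invokes AOTInductor
```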

Also, less an FTorch problem, but another note: torch.compile (same PT2 stack as torch.export) seems to require 2.4 for Python 3.12 support, if it ever comes up.

This isn't necessarily the best place to share this, but I also recently came across the PyTorch 2 paper and other tutorial material, which looks like a useful reference for previous and new implementations of PyTorch Graph Capture (torch.jit, TorchDynamo etc.), portability (AOTInductor etc.) and other newer features.

@jatkinson1000 jatkinson1000 added the Hacktoberfest Issues open for Hacktoberfest contributions label Sep 30, 2024