Flexibility for supporting package groups tied to specific sources (PyTorch issue) #2968
Unanswered · polarathene asked this question in Q&A
Replies: 1 comment
During my investigation of how viable PDM would be for a project using PyTorch, it may also be worth noting that PDM's cache feature presently does not work well with even the single PyTorch source (5GB is cached locally).

For caching, it seems some official advice is to cache the … That reference also suggests that those using GitHub Actions use the official action.
### Basic single source for specific packages
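A minimal sketch of such a setup (the source name, URL, and pinned version are illustrative; `respect-source-order` lives under `[tool.pdm.resolution]`):

```toml
[project]
name = "example-app"
requires-python = ">=3.10"
dependencies = ["torch==2.1.2+cu121"]

# Resolve against sources in the order they are defined below,
# rather than treating them all as equal-priority indexes.
[tool.pdm.resolution]
respect-source-order = true

# The PyTorch index, restricted to the packages it should serve.
[[tool.pdm.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
include_packages = ["torch", "torchvision", "torchaudio"]
```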
This will install `torch` and other deps by prioritizing the packages from the configured source in `pyproject.toml`. `include_packages` is required due to `respect-source-order = true` to avoid undesired behaviour.

### Caveats when introducing multiple sources (cannot restrict packages to sources via groups)
Now you'd like the `pyproject.toml` for your app to support installing PyTorch with CUDA or CPU (there's also ROCm for AMD, but let's keep it simple). You'd then choose to install one group or the other.
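A sketch of what such a `pyproject.toml` might look like, using one optional dependency group and one source per variant (names, URLs, and pinned versions are illustrative):

```toml
[project.optional-dependencies]
cpu = [
  "torch==2.1.2+cpu",
  "torchvision==0.16.2+cpu",
]
cuda = [
  "torch==2.1.2+cu121",
  "torchvision==0.16.2+cu121",
]

[tool.pdm.resolution]
respect-source-order = true

# The CUDA source is defined first, so it wins for any package both sources list.
[[tool.pdm.source]]
name = "torch_cuda"
url = "https://download.pytorch.org/whl/cu121"
include_packages = ["torch"]

[[tool.pdm.source]]
name = "torch_cpu"
url = "https://download.pytorch.org/whl/cpu"
include_packages = ["torch"]
```

You'd then install one variant via the group flag:

```shell
pdm install -G cpu    # or: pdm install -G cuda
```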
But unlike the prior example, the `torchvision` and `torchaudio` deps will install via PyPI. `include_packages` for the referenced group is likely doing nothing and incorrect here. If you adjust `include_packages` to list the explicit `torch` / `torchvision` / `torchaudio` deps, you'd find the CUDA source is resolved as it is defined first, thus `torch_cpu` is not able to use the correct source.

With the groups I was more explicit about the local identifier being present (`+cpu` / `+cu121`), which only has support for `==` / `!=` with a version (which is unfortunately mandatory when paired with a local identifier). This applies to the `torchvision` and `torchaudio` packages too, keeping track of their explicit version to pin, and to the `nvidia-*` pattern if you want to support the other local identifiers for different versions of CUDA (additional sources).

There is one other caveat I've not mentioned thus far. The PyTorch CPU source only has the `+cpu` local identifier for the x86_64 package; the aarch64 / ARM64 package does not have that assigned, which complicates support there (more details covered in a related `uv` discussion). Meanwhile the CUDA group has no aarch64 packages (despite NVIDIA having CUDA-compatible products that run on aarch64, NVIDIA publishes its own alternative to these PyTorch wheel sources for that).

### Official advice for this scenario
Official advice appears to be to: … `torch` and configure `pypi.torch.url` (a bit vague, not enough context, but presumably a CLI option?): Variable expansion at lock time for package indexes #2063 (comment)
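A sketch of that last suggestion, assuming it refers to PDM's index configuration via the CLI (the index name `torch` and the URL here are illustrative):

```shell
# Define an extra index named "torch" pointing at the desired PyTorch wheel URL.
pdm config pypi.torch.url "https://download.pytorch.org/whl/cpu"
```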