AlgoPerf submitter team reports that they are no longer able to reproduce the NAdam baseline results with the current repo in PyTorch on the ImageNet workloads (both ResNet and ViT).
The plot below shows the differences in training/validation loss and accuracy between the given NAdam JAX results and the current run's results on ImageNet ViT. They did not see a change on OGBG or FastMRI.
The commits we merged range from 389fe3f823a5016289b55b48aa8061a37b18b401 to 79ccc5e860d7928cf896ffe12ec686c72fd840d4.
Steps to Reproduce
Running the submission runner with eval_num_workers=4 (the default was recently changed to speed up evals).
Source or Possible Fix
Setting eval_num_workers to 0 resolves the discrepancy in evals. We are still investigating why.
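As a quick way to narrow down where the worker-count dependence comes from, a standalone comparison of DataLoader output under different num_workers settings can help. The sketch below is not the AlgoPerf eval pipeline; it only uses a toy TensorDataset and standard torch.utils.data to check that batch contents and order are identical for num_workers=0 and num_workers=4. If a similar check on the real ImageNet eval input pipeline diverges, the issue is likely worker-dependent state in the dataset or sampler rather than in the model.

```python
# Minimal, self-contained sanity check (a sketch, not the repo's eval code):
# with a map-style dataset and shuffle=False, DataLoader output should be
# identical regardless of num_workers.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(1024, dtype=torch.float32).unsqueeze(1))

loader_serial = DataLoader(dataset, batch_size=64, shuffle=False, num_workers=0)
loader_parallel = DataLoader(dataset, batch_size=64, shuffle=False, num_workers=4)

for (x0,), (x4,) in zip(loader_serial, loader_parallel):
    assert torch.equal(x0, x4), "batches differ between num_workers=0 and 4"
print("identical batches for num_workers=0 and num_workers=4")
```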
priyakasimbeg changed the title from "Incorrect Imagenet evals for PyTorch data loader num workers > 0" to "Incorrect Imagenet evals with pytorch_eval_num_workers > 0" on Mar 27, 2024.
Changed default number of workers for PyTorch data loaders to 0.
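For reference, a minimal sketch of what the changed default could look like, assuming the runner defines its flags with absl (the flag name is the one mentioned in this thread; the exact definition in submission_runner.py may differ):

```python
# Hypothetical flag definition (assumes absl.flags; not the repo's exact code).
from absl import flags

flags.DEFINE_integer(
    'pytorch_eval_num_workers',
    0,  # default 0: evaluate in the main process, avoiding the eval discrepancy
    'Number of PyTorch DataLoader workers used during evals. Speech workloads '
    'still need a value > 0 (see the note below).')
```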
Important update: for the speech workloads, the pytorch_eval_num_workers flag to submission_runner.py has to be set to a value > 0 to prevent a data loader crash in the JAX code.
I tried reproducing the issue by running the target setting run on the current dev branch with pytorch_eval_num_workers=4, but I don't see the drop in eval metrics compared to an older reference run (this one).
If someone can share the exact command and commit they used to produce the run in the plot, I will try to run that instead.