
[Release/2.5.0] Timm model jx_nest_base amp_fp16 inference got fail_accuracy #900

Open
mengfei25 opened this issue Sep 12, 2024 · 0 comments
🐛 Describe the bug

XPU inference (eval) of the timm model jx_nest_base under amp_fp16 fails the accuracy check:
[WARNING] Failed to create Level Zero tracer: 2013265921
(I): Detected 1024 spills, recompiling the kernel using large GRF mode
(I): Kernel has now 0 spills
(I): Detected 8192 spills, recompiling the kernel using large GRF mode
(I): Kernel has now 0 spills
(I): Detected 8192 spills, recompiling the kernel using large GRF mode
(I): Kernel has now 0 spills
(I): Detected 4096 spills, recompiling the kernel using large GRF mode
(I): Kernel has now 0 spills
(I): Detected 4096 spills, recompiling the kernel using large GRF mode
(I): Kernel has now 0 spills
E0912 00:16:10.029000 3264502 site-packages/torch/_dynamo/utils.py:1802] RMSE (res-fp64): 0.00087, (ref-fp64): 0.00036 and shape=torch.Size([8, 1000]). res.dtype: torch.float16, multiplier: 2.000000, tol: 0.001000, use_larger_multiplier_for_smaller_tensor: 0
fail_accuracy
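The logged values explain the failure. A minimal sketch of the RMSE-based check implied by the error line, assuming (as the formula is not shown in the log) that torch._dynamo.utils passes the result when its RMSE against the fp64 baseline is within `multiplier` times the eager reference's RMSE plus a small slack derived from `tol`:

```python
# Sketch of the accuracy comparison implied by the log line; the exact
# formula is an assumption, not taken verbatim from torch._dynamo.utils.

def passes_accuracy(res_rmse: float, ref_rmse: float,
                    tol: float = 1e-3, multiplier: float = 2.0) -> bool:
    """Pass when RMSE(compiled result, fp64 baseline) is no worse than
    `multiplier` * RMSE(eager reference, fp64 baseline) plus tol/10 slack
    (assumed form of the tolerance check)."""
    return res_rmse <= multiplier * ref_rmse + tol / 10.0

# Values from the failing log line above:
# threshold = 2.0 * 0.00036 + 0.001 / 10 = 0.00082, and 0.00087 > 0.00082
print(passes_accuracy(0.00087, 0.00036))  # → False, i.e. fail_accuracy
```

With these numbers the result RMSE (0.00087) exceeds the derived threshold (0.00082) by only a small margin, so the failure may be a borderline precision issue of amp_fp16 on XPU rather than a functional bug.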

Versions

pytorch: 2.5.0-rc1 (https://download.pytorch.org/whl/test/xpu)
torch-xpu-ops: 1206590 (main)
