chore(deps): bump ruff from 0.1.14 to 0.2.1 in /requirements/lintrunner #1273
1 errors, 13 fail, 2 958 skipped, 8 455 pass in 1h 42m 35s
Annotations
github-actions / Test Results
3 out of 15 runs failed: test_output_match_opinfo__ops_aten__scaled_dot_product_flash_attention_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
onnxscript/tests/function_libs/torch_lib/ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/testing/_internal/opinfo/core.py:1114: in __call__
return self.op(*args, **kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
E BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
onnxscript/tests/function_libs/torch_lib/ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/testing/_internal/opinfo/core.py:1114: in __call__
return self.op(*args, **kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
E BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
onnxscript/tests/function_libs/torch_lib/ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/testing/_internal/opinfo/core.py:1114: in __call__
return self.op(*args, **kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
E BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
github-actions / Test Results
3 out of 15 runs failed: test_output_match_opinfo__addmm_decomposed_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([[-1.8750734 , 7.464527 , -5.334317 , -5.367582 , -5.367906 ,
8.094986 , 2.99926 , 8.660255 , -7.4274864 , -8.926886 ],
[-7.041274 , -6.0542016 , 3.645361 , 3.222683 , 7.478319 ,
-4.647828 , -6.135406 , 4.7752037 , -3.6378403 , 5.4623146 ],
[-2.135706 , 5.1484137 , -6.992712 , -4.5418477 , 2.743888 ,
1.902668 , -2.2946286 , 5.364625 , 6.1182833 , -6.5265603 ],
[-4.804814 , 8.240957 , -3.0368924 , -3.1906476 , -8.708351 ,
-5.1540318 , 2.2482328 , -1.1879387 , -6.532974 , 0.21111012],
[-6.1477337 , -7.63557 , -4.9559636 , -7.876909 , -5.7306423 ,
8.996479 , 1.6998749 , 2.7734375 , -8.394158 , -5.910964 ]],
dtype=float32),
'input_1': array([], shape=(5, 0), dtype=float32),
'input_2': array([], shape=(0, 10), dtype=float32)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float[5,10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
<float[5,10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([-2.9957023 , 1.40734 , -7.919292 , -3.8778572 , -5.388017 ,
0.02494144, -3.348929 , -0.623662 , -6.098667 , -6.1775565 ],
dtype=float32),
'input_1': array([], shape=(5, 0), dtype=float32),
'input_2': array([], shape=(0, 10), dtype=float32)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float[10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
<float[10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([[-1.8750734 , 7.464527 , -5.334317 , -5.367582 , -5.367906 ,
E 8.094986 , 2.99926 , 8.660255 , -7.4274864 , -8.926886 ],
E [-7.041274 , -6.0542016 , 3.645361 , 3.222683 , 7.478319 ,
E -4.647828 , -6.135406 , 4.7752037 , -3.6378403 , 5.4623146 ],
E [-2.135706 , 5.1484137 , -6.992712 , -4.5418477 , 2.743888 ,
E 1.902668 , -2.2946286 , 5.364625 , 6.1182833 , -6.5265603 ],
E [-4.804814 , 8.240957 , -3.0368924 , -3.1906476 , -8.708351 ,
E -5.1540318 , 2.2482328 , -1.1879387 , -6.532974 , 0.21111012],
E [-6.1477337 , -7.63557 , -4.9559636 , -7.876909 , -5.7306423 ,
E 8.996479 , 1.6998749 , 2.7734375 , -8.394158 , -5.910964 ]],
E dtype=float32),
E 'input_1': array([], shape=(5, 0), dtype=float32),
E 'input_2': array([], shape=(0, 10), dtype=float32)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float[5,10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
E <float[5,10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([-2.9957023 , 1.40734 , -7.919292 , -3.8778572 , -5.388017 ,
E 0.02494144, -3.348929 , -0.623662 , -6.098667 , -6.1775565 ],
E dtype=float32),
E 'input_1': array([], shape=(5, 0), dtype=float32),
E 'input_2': array([], shape=(0, 10), dtype=float32)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float[10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
E <float[10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
github-actions / Test Results
1 out of 15 runs with error: test_output_match_opinfo__clamp_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-onnx-weekly-windows-latest)/pytest.xml [took 0s]
Raw output
failed on setup with "worker 'gw1' crashed while running 'onnxscript/tests/function_libs/torch_lib/ops_test.py::TestOutputConsistencyEagerCPU::test_output_match_opinfo__clamp_cpu_float16'"
worker 'gw1' crashed while running 'onnxscript/tests/function_libs/torch_lib/ops_test.py::TestOutputConsistencyEagerCPU::test_output_match_opinfo__clamp_cpu_float16'
github-actions / Test Results
3 out of 15 runs failed: test_output_match_opinfo__addmm_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([[-1.8750734 , 7.464527 , -5.334317 , -5.367582 , -5.367906 ,
8.094986 , 2.99926 , 8.660255 , -7.4274864 , -8.926886 ],
[-7.041274 , -6.0542016 , 3.645361 , 3.222683 , 7.478319 ,
-4.647828 , -6.135406 , 4.7752037 , -3.6378403 , 5.4623146 ],
[-2.135706 , 5.1484137 , -6.992712 , -4.5418477 , 2.743888 ,
1.902668 , -2.2946286 , 5.364625 , 6.1182833 , -6.5265603 ],
[-4.804814 , 8.240957 , -3.0368924 , -3.1906476 , -8.708351 ,
-5.1540318 , 2.2482328 , -1.1879387 , -6.532974 , 0.21111012],
[-6.1477337 , -7.63557 , -4.9559636 , -7.876909 , -5.7306423 ,
8.996479 , 1.6998749 , 2.7734375 , -8.394158 , -5.910964 ]],
dtype=float32),
'input_1': array([], shape=(5, 0), dtype=float32),
'input_2': array([], shape=(0, 10), dtype=float32)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float[5,10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
<float[5,10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([-2.9957023 , 1.40734 , -7.919292 , -3.8778572 , -5.388017 ,
0.02494144, -3.348929 , -0.623662 , -6.098667 , -6.1775565 ],
dtype=float32),
'input_1': array([], shape=(5, 0), dtype=float32),
'input_2': array([], shape=(0, 10), dtype=float32)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float[10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
<float[10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([[-1.8750734 , 7.464527 , -5.334317 , -5.367582 , -5.367906 ,
E 8.094986 , 2.99926 , 8.660255 , -7.4274864 , -8.926886 ],
E [-7.041274 , -6.0542016 , 3.645361 , 3.222683 , 7.478319 ,
E -4.647828 , -6.135406 , 4.7752037 , -3.6378403 , 5.4623146 ],
E [-2.135706 , 5.1484137 , -6.992712 , -4.5418477 , 2.743888 ,
E 1.902668 , -2.2946286 , 5.364625 , 6.1182833 , -6.5265603 ],
E [-4.804814 , 8.240957 , -3.0368924 , -3.1906476 , -8.708351 ,
E -5.1540318 , 2.2482328 , -1.1879387 , -6.532974 , 0.21111012],
E [-6.1477337 , -7.63557 , -4.9559636 , -7.876909 , -5.7306423 ,
E 8.996479 , 1.6998749 , 2.7734375 , -8.394158 , -5.910964 ]],
E dtype=float32),
E 'input_1': array([], shape=(5, 0), dtype=float32),
E 'input_2': array([], shape=(0, 10), dtype=float32)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float[5,10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
E <float[5,10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([-2.9957023 , 1.40734 , -7.919292 , -3.8778572 , -5.388017 ,
E 0.02494144, -3.348929 , -0.623662 , -6.098667 , -6.1775565 ],
E dtype=float32),
E 'input_1': array([], shape=(5, 0), dtype=float32),
E 'input_2': array([], shape=(0, 10), dtype=float32)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float[10] input_0, float[5,0] input_1, float[0,10] input_2) => (float[5,10] _val_3)
E <float[10] input_0, float[5,0] input_1, float[0,10] input_2, float[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
github-actions / Test Results
3 out of 15 runs failed: test_output_match_opinfo__ops_aten__scaled_dot_product_flash_attention_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 1s]
Raw output
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
onnxscript/tests/function_libs/torch_lib/ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/testing/_internal/opinfo/core.py:1114: in __call__
return self.op(*args, **kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
E BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
onnxscript/tests/function_libs/torch_lib/ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/testing/_internal/opinfo/core.py:1114: in __call__
return self.op(*args, **kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
E BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
onnxscript/tests/function_libs/torch_lib/ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/testing/_internal/opinfo/core.py:1114: in __call__
return self.op(*args, **kwargs)
.nox/test_torch_nightly/lib/python3.10/site-packages/torch/_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [MPS, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:75 [backend fallback]
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:732 [backend fallback]
E BatchedNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:759 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:158 [backend fallback]
github-actions / Test Results
1 out of 15 runs failed: test_output_match_opinfo__mm_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
Raw output
AssertionError: Tensor-likes are not close!
Mismatched elements: 14 / 50 (28.0%)
Greatest absolute difference: nan at index (2, 6) (up to 1e-05 allowed)
Greatest relative difference: nan at index (2, 6) (up to 1.3e-06 allowed)
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: Tensor-likes are not close!
E
E Mismatched elements: 14 / 50 (28.0%)
E Greatest absolute difference: nan at index (2, 6) (up to 1e-05 allowed)
E Greatest relative difference: nan at index (2, 6) (up to 1.3e-06 allowed)
github-actions / Test Results
3 out of 15 runs failed: test_output_match_opinfo__addmm_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:590: in executor
return function(*args, **kwargs)
onnxscript/values.py:529: in __call__
return evaluator.default().eval_function(self, args, kwargs)
onnxscript/evaluator.py:309: in eval_function
result = function.function(*adapted_args, **adapted_kwargs)
onnxscript/function_libs/torch_lib/ops/core.py:244: in aten_addmm
return op.Gemm(mat1, mat2, self, alpha=alpha, beta=beta)
onnxscript/onnx_opset/_impl/opset13.py:1230: in Gemm
return op(
onnxscript/values.py:304: in __call__
return evaluator.default().eval(schema, args, kwargs)
onnxscript/evaluator.py:196: in eval
outputs = self._eval(schema, inputs, attributes, closure)
onnxscript/evaluator.py:514: in _eval
return _call_ort(schema, inputs, attributes, closure)
onnxscript/evaluator.py:491: in _call_ort
result = session.run(None, session_run_input)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:590: in executor
return function(*args, **kwargs)
onnxscript/values.py:529: in __call__
return evaluator.default().eval_function(self, args, kwargs)
onnxscript/evaluator.py:309: in eval_function
result = function.function(*adapted_args, **adapted_kwargs)
onnxscript/function_libs/torch_lib/ops/core.py:244: in aten_addmm
return op.Gemm(mat1, mat2, self, alpha=alpha, beta=beta)
onnxscript/onnx_opset/_impl/opset13.py:1230: in Gemm
return op(
onnxscript/values.py:304: in __call__
return evaluator.default().eval(schema, args, kwargs)
onnxscript/evaluator.py:196: in eval
outputs = self._eval(schema, inputs, attributes, closure)
onnxscript/evaluator.py:514: in _eval
return _call_ort(schema, inputs, attributes, closure)
onnxscript/evaluator.py:491: in _call_ort
result = session.run(None, session_run_input)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__addmm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:590: in executor
return function(*args, **kwargs)
onnxscript/values.py:529: in __call__
return evaluator.default().eval_function(self, args, kwargs)
onnxscript/evaluator.py:309: in eval_function
result = function.function(*adapted_args, **adapted_kwargs)
onnxscript/function_libs/torch_lib/ops/core.py:244: in aten_addmm
return op.Gemm(mat1, mat2, self, alpha=alpha, beta=beta)
onnxscript/onnx_opset/_impl/opset13.py:1230: in Gemm
return op(
onnxscript/values.py:304: in __call__
return evaluator.default().eval(schema, args, kwargs)
onnxscript/evaluator.py:196: in eval
outputs = self._eval(schema, inputs, attributes, closure)
onnxscript/evaluator.py:514: in _eval
return _call_ort(schema, inputs, attributes, closure)
onnxscript/evaluator.py:491: in _call_ort
result = session.run(None, session_run_input)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:590: in executor
return function(*args, **kwargs)
onnxscript/values.py:529: in __call__
return evaluator.default().eval_function(self, args, kwargs)
onnxscript/evaluator.py:309: in eval_function
result = function.function(*adapted_args, **adapted_kwargs)
onnxscript/function_libs/torch_lib/ops/core.py:244: in aten_addmm
return op.Gemm(mat1, mat2, self, alpha=alpha, beta=beta)
onnxscript/onnx_opset/_impl/opset13.py:1230: in Gemm
return op(
onnxscript/values.py:304: in __call__
return evaluator.default().eval(schema, args, kwargs)
onnxscript/evaluator.py:196: in eval
outputs = self._eval(schema, inputs, attributes, closure)
onnxscript/evaluator.py:514: in _eval
return _call_ort(schema, inputs, attributes, closure)
onnxscript/evaluator.py:491: in _call_ort
result = session.run(None, session_run_input)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__native_batch_norm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
AssertionError: Output 0 mismatch
AssertionError: Output 0 mismatch
AssertionError: Output 0 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: Tensor-likes are not close!
E
E Mismatched elements: 10 / 125 (8.0%)
E Greatest absolute difference: 0.002197265625 at index (2, 1, 2) (up to 1e-05 allowed)
E Greatest relative difference: 0.01470947265625 at index (1, 0, 0) (up to 0.001 allowed)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 0 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: Tensor-likes are not close!
E
E Mismatched elements: 1 / 3 (33.3%)
E Greatest absolute difference: 0.000732421875 at index (1, 0) (up to 1e-05 allowed)
E Greatest relative difference: 0.0014848709106445312 at index (1, 0) (up to 0.001 allowed)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 0 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: Tensor-likes are not close!
E
E Mismatched elements: 2 / 72 (2.8%)
E Greatest absolute difference: 0.000732421875 at index (0, 0, 0, 2) (up to 1e-05 allowed)
E Greatest relative difference: 0.0090484619140625 at index (1, 0, 0, 0) (up to 0.001 allowed)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 0 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__native_layer_norm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 1s]
Raw output
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
onnxscript/tests/function_libs/torch_lib/ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__addmm_decomposed_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([[ 2.398e+00, -5.598e+00, -3.727e+00, -6.230e+00, -8.883e+00,
-6.883e+00, 8.938e+00, -2.180e+00, 3.031e+00, -4.043e+00],
[-1.116e+00, 2.980e+00, -8.203e+00, 3.217e+00, -6.064e-01,
3.990e+00, 3.754e+00, -4.535e+00, -2.188e+00, 8.281e+00],
[-8.703e+00, -7.199e+00, 7.031e-01, 8.180e+00, 4.930e+00,
7.656e+00, 3.402e+00, 8.789e-03, -2.637e-02, 3.418e+00],
[-4.035e+00, 9.229e-01, 6.777e+00, 7.215e+00, 4.184e+00,
-2.830e+00, -5.477e+00, -2.594e+00, 4.879e+00, -7.586e+00],
[-7.234e+00, 8.414e+00, -2.549e-01, -6.637e+00, 7.578e+00,
-1.837e+00, 2.373e+00, -5.000e+00, 4.051e+00, 6.383e+00]],
dtype=float16),
'input_1': array([], shape=(5, 0), dtype=float16),
'input_2': array([], shape=(0, 10), dtype=float16)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
<float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([ 2.207 , -0.0703, 6.16 , -6.406 , 6.363 , -2.68 , 6.574 ,
-6.04 , -1.283 , 0.457 ], dtype=float16),
'input_1': array([], shape=(5, 0), dtype=float16),
'input_2': array([], shape=(0, 10), dtype=float16)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
<float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([[ 2.398e+00, -5.598e+00, -3.727e+00, -6.230e+00, -8.883e+00,
E -6.883e+00, 8.938e+00, -2.180e+00, 3.031e+00, -4.043e+00],
E [-1.116e+00, 2.980e+00, -8.203e+00, 3.217e+00, -6.064e-01,
E 3.990e+00, 3.754e+00, -4.535e+00, -2.188e+00, 8.281e+00],
E [-8.703e+00, -7.199e+00, 7.031e-01, 8.180e+00, 4.930e+00,
E 7.656e+00, 3.402e+00, 8.789e-03, -2.637e-02, 3.418e+00],
E [-4.035e+00, 9.229e-01, 6.777e+00, 7.215e+00, 4.184e+00,
E -2.830e+00, -5.477e+00, -2.594e+00, 4.879e+00, -7.586e+00],
E [-7.234e+00, 8.414e+00, -2.549e-01, -6.637e+00, 7.578e+00,
E -1.837e+00, 2.373e+00, -5.000e+00, 4.051e+00, 6.383e+00]],
E dtype=float16),
E 'input_1': array([], shape=(5, 0), dtype=float16),
E 'input_2': array([], shape=(0, 10), dtype=float16)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
E <float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([ 2.207 , -0.0703, 6.16 , -6.406 , 6.363 , -2.68 , 6.574 ,
E -6.04 , -1.283 , 0.457 ], dtype=float16),
E 'input_1': array([], shape=(5, 0), dtype=float16),
E 'input_2': array([], shape=(0, 10), dtype=float16)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
E <float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 1, beta: float = 1> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__native_layer_norm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 1s]
Raw output
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] input_3) => (float16[1,2,3] _val_4, float16[1,1,1] _val_5, float16[1,1,1] _val_6)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] input_3, float16[1,2,3] _val_4, float16[1,1,1] _val_5, float16[1,1,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, input_2, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_3) => (float16[1,2,3] _val_7, float16[1,1,1] _val_8, float16[1,1,1] _val_9)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_3, float16[1,2,3] _val_7, float16[1,1,1] _val_8, float16[1,1,1] _val_9, float[1] _val_3, int64[3] _val_4, float[1,2,3] _val_5, float16[1,2,3] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -3> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, _val_6, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2) => (float16[1,2,3] _val_3, float16[1,1,1] _val_4, float16[1,1,1] _val_5)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] _val_3, float16[1,1,1] _val_4, float16[1,1,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, input_2)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1) => (float16[1,2,3] _val_6, float16[1,1,1] _val_7, float16[1,1,1] _val_8)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] _val_6, float16[1,1,1] _val_7, float16[1,1,1] _val_8, float[1] _val_2, int64[3] _val_3, float[1,2,3] _val_4, float16[1,2,3] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -3> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,3] input_3) => (float16[2,2,3] _val_4, float16[2,1,1] _val_5, float16[2,1,1] _val_6)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,3] input_3, float16[2,2,3] _val_4, float16[2,1,1] _val_5, float16[2,1,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, input_2, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_3) => (float16[2,2,3] _val_7, float16[2,1,1] _val_8, float16[2,1,1] _val_9)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_3, float16[2,2,3] _val_7, float16[2,1,1] _val_8, float16[2,1,1] _val_9, float[1] _val_3, int64[2] _val_4, float[2,3] _val_5, float16[2,3] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -2> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, _val_6, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2) => (float16[2,2,3] _val_3, float16[2,1,1] _val_4, float16[2,1,1] _val_5)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,2,3] _val_3, float16[2,1,1] _val_4, float16[2,1,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, input_2)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1) => (float16[2,2,3] _val_6, float16[2,1,1] _val_7, float16[2,1,1] _val_8)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,2,3] _val_6, float16[2,1,1] _val_7, float16[2,1,1] _val_8, float[1] _val_2, int64[2] _val_3, float[2,3] _val_4, float16[2,3] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -2> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3) => (float16[1] _val_4, float16[1] _val_5, float16[1] _val_6)
<float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] _val_4, float16[1] _val_5, float16[1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_3) => (float16[1] _val_7, float16[1] _val_8, float16[1] _val_9)
<float16[1] input_0, int64[1] input_1, float16[1] input_3, float16[1] _val_7, float16[1] _val_8, float16[1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -1> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_2) => (float16[1] _val_3, float16[1] _val_4, float16[1] _val_5)
<float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] _val_3, float16[1] _val_4, float16[1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1) => (float16[1] _val_6, float16[1] _val_7, float16[1] _val_8)
<float16[1] input_0, int64[1] input_1, float16[1] _val_6, float16[1] _val_7, float16[1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -1> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3) => (float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6)
<float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3, float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_3) => (float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9)
<float16[1,2] input_0, int64[1] input_1, float16[2] input_3, float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9, float[1] _val_3, int64[1] _val_4, float[2] _val_5, float16[2] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -1> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2) => (float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5)
<float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1) => (float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8)
<float16[1,2] input_0, int64[1] input_1, float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8, float[1] _val_2, int64[1] _val_3, float[2] _val_4, float16[2] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -1> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3) => (float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6)
<float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3, float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_3) => (float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9)
<float16[0,1] input_0, int64[1] input_1, float16[1] input_3, float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -1> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2) => (float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5)
<float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1) => (float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8)
<float16[0,1] input_0, int64[1] input_1, float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -1> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] input_3) => (float16[1,2,3] _val_4, float16[1,1,1] _val_5, float16[1,1,1] _val_6)
E <float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] input_3, float16[1,2,3] _val_4, float16[1,1,1] _val_5, float16[1,1,1] _val_6>
E {
E _val_4, _val_5, _val_6 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, input_2, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_3) => (float16[1,2,3] _val_7, float16[1,1,1] _val_8, float16[1,1,1] _val_9)
E <float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_3, float16[1,2,3] _val_7, float16[1,1,1] _val_8, float16[1,1,1] _val_9, float[1] _val_3, int64[3] _val_4, float[1,2,3] _val_5, float16[1,2,3] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -3> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2) => (float16[1,2,3] _val_3, float16[1,1,1] _val_4, float16[1,1,1] _val_5)
E <float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] _val_3, float16[1,1,1] _val_4, float16[1,1,1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2,3] input_0, int64[3] input_1) => (float16[1,2,3] _val_6, float16[1,1,1] _val_7, float16[1,1,1] _val_8)
E <float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] _val_6, float16[1,1,1] _val_7, float16[1,1,1] _val_8, float[1] _val_2, int64[3] _val_3, float[1,2,3] _val_4, float16[1,2,3] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -3> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,3] input_3) => (float16[2,2,3] _val_4, float16[2,1,1] _val_5, float16[2,1,1] _val_6)
E <float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,3] input_3, float16[2,2,3] _val_4, float16[2,1,1] _val_5, float16[2,1,1] _val_6>
E {
E _val_4, _val_5, _val_6 = LayerNormalization …, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_3) => (float16[2,2,3] _val_7, float16[2,1,1] _val_8, float16[2,1,1] _val_9)
E <float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_3, float16[2,2,3] _val_7, float16[2,1,1] _val_8, float16[2,1,1] _val_9, float[1] _val_3, int64[2] _val_4, float[2,3] _val_5, float16[2,3] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -2> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2) => (float16[2,2,3] _val_3, float16[2,1,1] _val_4, float16[2,1,1] _val_5)
E <float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,2,3] _val_3, float16[2,1,1] _val_4, float16[2,1,1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[2,2,3] input_0, int64[2] input_1) => (float16[2,2,3] _val_6, float16[2,1,1] _val_7, float16[2,1,1] _val_8)
E <float16[2,2,3] input_0, int64[2] input_1, float16[2,2,3] _val_6, float16[2,1,1] _val_7, float16[2,1,1] _val_8, float[1] _val_2, int64[2] _val_3, float[2,3] _val_4, float16[2,3] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -2> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3) => (float16[1] _val_4, float16[1] _val_5, float16[1] _val_6)
E <float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] _val_4, float16[1] _val_5, float16[1] _val_6>
E {
E _val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_3) => (float16[1] _val_7, float16[1] _val_8, float16[1] _val_9)
E <float16[1] input_0, int64[1] input_1, float16[1] input_3, float16[1] _val_7, float16[1] _val_8, float16[1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -1> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_2) => (float16[1] _val_3, float16[1] _val_4, float16[1] _val_5)
E <float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] _val_3, float16[1] _val_4, float16[1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1] input_0, int64[1] input_1) => (float16[1] _val_6, float16[1] _val_7, float16[1] _val_8)
E <float16[1] input_0, int64[1] input_1, float16[1] _val_6, float16[1] _val_7, float16[1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -1> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3) => (float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6)
E <float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3, float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6>
E {
E _val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_3) => (float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9)
E <float16[1,2] input_0, int64[1] input_1, float16[2] input_3, float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9, float[1] _val_3, int64[1] _val_4, float[2] _val_5, float16[2] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -1> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2) => (float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5)
E <float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1) => (float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8)
E <float16[1,2] input_0, int64[1] input_1, float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8, float[1] _val_2, int64[1] _val_3, float[2] _val_4, float16[2] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -1> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3) => (float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6)
E <float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3, float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6>
E {
E _val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_3) => (float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9)
E <float16[0,1] input_0, int64[1] input_1, float16[1] input_3, float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -1> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2) => (float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5)
E <float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1) => (float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8)
E <float16[0,1] input_0, int64[1] input_1, float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -1> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__native_batch_norm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[5,5,5] input_0, float16[5] input_1, float16[5] input_2, float16[5] input_3, float16[5] input_4) => (float16[5,5,5] _val_6, float16[5] _val_7, float16[5] _val_8)
<float16[5,5,5] input_0, float16[5] input_1, float16[5] input_2, float16[5] input_3, float16[5] input_4, float16[5,5,5] _val_6, float16[5] _val_7, float16[5] _val_8, int64[2] _val_5>
{
_val_5 = Constant <value_ints: ints = [0, 2]> ()
_val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 0.6, momentum: float = 0.5, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
_aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
{
norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
upcast_input = Cast <to: int = 1> (input)
mean = ReduceMean (upcast_input, axes)
input_sub_mean = Sub (upcast_input, mean)
sqr = Mul (input_sub_mean, input_sub_mean)
var = ReduceMean <keepdims: int = 0> (sqr, axes)
const = Constant <value: tensor = float const {1}> ()
eps = Constant <value_float: float = @eps> ()
eps_cast = CastLike (eps, var)
tmp = Add (var, eps_cast)
tmp_2 = Sqrt (tmp)
const_cast = CastLike (const, tmp_2)
rstd = Div (const_cast, tmp_2)
mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[3,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4) => (float16[3,1] _val_6, float16[1] _val_7, float16[1] _val_8)
<float16[3,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4, float16[3,1] _val_6, float16[1] _val_7, float16[1] _val_8, int64[1] _val_5>
{
_val_5 = Constant <value_ints: ints = [0]> ()
_val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
_aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
{
norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
upcast_input = Cast <to: int = 1> (input)
mean = ReduceMean (upcast_input, axes)
input_sub_mean = Sub (upcast_input, mean)
sqr = Mul (input_sub_mean, input_sub_mean)
var = ReduceMean <keepdims: int = 0> (sqr, axes)
const = Constant <value: tensor = float const {1}> ()
eps = Constant <value_float: float = @eps> ()
eps_cast = CastLike (eps, var)
tmp = Add (var, eps_cast)
tmp_2 = Sqrt (tmp)
const_cast = CastLike (const, tmp_2)
rstd = Div (const_cast, tmp_2)
mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[3,2,3,4] input_0, float16[2] input_1, float16[2] input_2, float16[2] input_3, float16[2] input_4) => (float16[3,2,3,4] _val_6, float16[2] _val_7, float16[2] _val_8)
<float16[3,2,3,4] input_0, float16[2] input_1, float16[2] input_2, float16[2] input_3, float16[2] input_4, float16[3,2,3,4] _val_6, float16[2] _val_7, float16[2] _val_8, int64[3] _val_5>
{
_val_5 = Constant <value_ints: ints = [0, 2, 3]> ()
_val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 0.5, momentum: float = -1, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
_aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
{
norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
upcast_input = Cast <to: int = 1> (input)
mean = ReduceMean (upcast_input, axes)
input_sub_mean = Sub (upcast_input, mean)
sqr = Mul (input_sub_mean, input_sub_mean)
var = ReduceMean <keepdims: int = 0> (sqr, axes)
const = Constant <value: tensor = float const {1}> ()
eps = Constant <value_float: float = @eps> ()
eps_cast = CastLike (eps, var)
tmp = Add (var, eps_cast)
tmp_2 = Sqrt (tmp)
const_cast = CastLike (const, tmp_2)
rstd = Div (const_cast, tmp_2)
mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4) => (float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8)
<float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4, float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8, int64[1] _val_5>
{
_val_5 = Constant <value_ints: ints = [0]> ()
_val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0.5, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
_aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
{
norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
upcast_input = Cast <to: int = 1> (input)
mean = ReduceMean (upcast_input, axes)
input_sub_mean = Sub (upcast_input, mean)
sqr = Mul (input_sub_mean, input_sub_mean)
var = ReduceMean <keepdims: int = 0> (sqr, axes)
const = Constant <value: tensor = float const {1}> ()
eps = Constant <value_float: float = @eps> ()
eps_cast = CastLike (eps, var)
tmp = Add (var, eps_cast)
tmp_2 = Sqrt (tmp)
const_cast = CastLike (const, tmp_2)
rstd = Div (const_cast, tmp_2)
mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4) => (float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8)
<float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4, float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8, int64[1] _val_5>
{
_val_5 = Constant <value_ints: ints = [0]> ()
_val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0.5, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
_aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
{
norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
upcast_input = Cast <to: int = 1> (input)
mean = ReduceMean (upcast_input, axes)
input_sub_mean = Sub (upcast_input, mean)
sqr = Mul (input_sub_mean, input_sub_mean)
var = ReduceMean <keepdims: int = 0> (sqr, axes)
const = Constant <value: tensor = float const {1}> ()
eps = Constant <value_float: float = @eps> ()
eps_cast = CastLike (eps, var)
tmp = Add (var, eps_cast)
tmp_2 = Sqrt (tmp)
const_cast = CastLike (const, tmp_2)
rstd = Div (const_cast, tmp_2)
mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0) => (float16[1,2,3] _val_17, float16[2] _val_18, float16[2] _val_19)
<float16[1,2,3] input_0, float16[1,2,3] _val_17, float16[2] _val_18, float16[2] _val_19, float[1] _val_1, float16[1] _val_2, int64[1] _val_3, float16[2] _val_4, float[1] _val_5, float16[1] _val_6, int64[1] _val_7, float16[2] _val_8, int64[2] _val_9, float16[1,2,1] _val_10, float16[2] _val_11, float16[1,2,1] _val_12, float16[1,2,3] _val_13, float16[1,2,3] _val_14, float16[1,2,1] _val_15, float16[2] _val_16>
{
_val_1 = Constant <value_floats: floats = [1]> ()
_val_2 = CastLike (_val_1, input_0)
_val_3 = Shape <end: int = 2, start: int = 1> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = Constant <value_floats: floats = [0]> ()
_val_6 = CastLike (_val_5, input_0)
_val_7 = Shape <end: int = 2, start: int = 1> (input_0)
_val_8 = Expand (_val_6, _val_7)
_val_9 = Constant <value_ints: ints = [0, 2]> ()
_val_10 = ReduceMean <keepdims: int = 1, noop_with_empty_axes: int = 0> (input_0, _val_9)
_val_11 = Squeeze (_val_10)
_val_12 = ReduceMean <keepdims: int = 1, noop_with_empty_axes: int = 0> (input_0, _val_9)
_val_13 = Sub (input_0, _val_12)
_val_14 = Mul (_val_13, _val_13)
_val_15 = ReduceMean <keepdims: int = 1, noop_with_empty_axes: int = 0> (_val_14, _val_9)
_val_16 = Squeeze (_val_15)
_val_17, _val_18, _val_19 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0.5, training: int = 1> (input_0, _val_4, _val_8, _val_11, _val_16, _val_9)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
_aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
{
norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
upcast_input = Cast <to: int = 1> (input)
mean = ReduceMean (upcast_input, axes)
input_sub_mean = Sub (upcast_input, mean)
sqr = Mul (input_sub_mean, input_sub_mean)
var = ReduceMean <keepdims: int = 0> (sqr, axes)
const = Constant <value: tensor = float const {1}> ()
eps = Constant <value_float: float = @eps> ()
eps_cast = CastLike (eps, var)
tmp = Add (var, eps_cast)
tmp_2 = Sqrt (tmp)
const_cast = CastLike (const, tmp_2)
rstd = Div (const_cast, tmp_2)
mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_training_onnx, node name: _aten_native_batch_norm_training_onnx_1): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[5,5,5] input_0, float16[5] input_1, float16[5] input_2, float16[5] input_3, float16[5] input_4) => (float16[5,5,5] _val_6, float16[5] _val_7, float16[5] _val_8)
E <float16[5,5,5] input_0, float16[5] input_1, float16[5] input_2, float16[5] input_3, float16[5] input_4, float16[5,5,5] _val_6, float16[5] _val_7, float16[5] _val_8, int64[2] _val_5>
E {
E _val_5 = Constant <value_ints: ints = [0, 2]> ()
E _val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 0.6, momentum: float = 0.5, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E _aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
E {
E norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
E upcast_input = Cast <to: int = 1> (input)
E mean = ReduceMean (upcast_input, axes)
E input_sub_mean = Sub (upcast_input, mean)
E sqr = Mul (input_sub_mean, input_sub_mean)
E var = ReduceMean <keepdims: int = 0> (sqr, axes)
E const = Constant <value: tensor = float const {1}> ()
E eps = Constant <value_float: float = @eps> ()
E eps_cast = CastLike (eps, var)
E tmp = Add (var, eps_cast)
E tmp_2 = Sqrt (tmp)
E const_cast = CastLike (const, tmp_2)
E rstd = Div (const_cast, tmp_2)
E mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_training_onnx, node name: _aten_native_batch_norm_training_onnx_1): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[3,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4) => (float16[3,1] _val_6, float16[1] _val_7, float16[1] _val_8)
E <float16[3,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4, float16[3,1] _val_6, float16[1] _val_7, float16[1] _val_8, int64[1] _val_5>
E {
E _val_5 = Constant <value_ints: ints = [0]> ()
E _val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E _aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
E {
E norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
E upcast_input = Cast <to: int = 1> (input)
E mean = ReduceMean (upcast_input, axes)
E input_sub_mean = Sub (upcast_input, mean)
E sqr = Mul (input_sub_mean, input_sub_mean)
E var = ReduceMean <keepdims: int = 0> (sqr, axes)
E const = Constant <value: tensor = float const {1}> ()
E eps = Constant <value_float: float = @eps> ()
E eps_cast = CastLike (eps, var)
E tmp = Add (var, eps_cast)
E tmp_2 = Sqrt (tmp)
E const_cast = CastLike (const, tmp_2)
E rstd = Div (const_cast, tmp_2)
E mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_training_onnx, node name: _aten_native_batch_norm_training_onnx_1): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[3,2,3,4] input_0, float16[2] input_1, float16[2] input_2, float16[2] input_3, float16[2] input_4) => (float16[3,2,3,4] _val_6, float16[2] _val_7, float16[2] _val_8)
E <float16[3,2,3,4] input_0, float16[2] input_1, float16[2] input_2, float16[2] input_3, float16[2] input_4, float16[3,2,3,4] _val_6, float16[2] _val_7, float16[2] _val_8, int64[3] _val_5>
E {
E _val_5 = Constant <value_ints: ints = [0, 2, 3]> ()
E _val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 0.5, momentum: float = -1, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E _aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
E {
E norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
E upcast_input = Cast <to: int = 1> (input)
E mean = ReduceMean (upcast_input, axes)
E input_sub_mean = Sub (upcast_input, mean)
E sqr = Mul (input_sub_mean, input_sub_mean)
E var = ReduceMean <keepdims: int = 0> (sqr, axes)
E const = Constant <value: tensor = float const {1}> ()
E eps = Constant <value_float: float = @eps> ()
E eps_cast = CastLike (eps, var)
E tmp = Add (var, eps_cast)
E tmp_2 = Sqrt (tmp)
E const_cast = CastLike (const, tmp_2)
E rstd = Div (const_cast, tmp_2)
E mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_training_onnx, node name: _aten_native_batch_norm_training_onnx_1): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4) => (float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8)
E <float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4, float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8, int64[1] _val_5>
E {
E _val_5 = Constant <value_ints: ints = [0]> ()
E _val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0.5, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E _aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
E {
E norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
E upcast_input = Cast <to: int = 1> (input)
E mean = ReduceMean (upcast_input, axes)
E input_sub_mean = Sub (upcast_input, mean)
E sqr = Mul (input_sub_mean, input_sub_mean)
E var = ReduceMean <keepdims: int = 0> (sqr, axes)
E const = Constant <value: tensor = float const {1}> ()
E eps = Constant <value_float: float = @eps> ()
E eps_cast = CastLike (eps, var)
E tmp = Add (var, eps_cast)
E tmp_2 = Sqrt (tmp)
E const_cast = CastLike (const, tmp_2)
E rstd = Div (const_cast, tmp_2)
E mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_training_onnx, node name: _aten_native_batch_norm_training_onnx_1): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4) => (float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8)
E <float16[2,1] input_0, float16[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] input_4, float16[2,1] _val_6, float16[1] _val_7, float16[1] _val_8, int64[1] _val_5>
E {
E _val_5 = Constant <value_ints: ints = [0]> ()
E _val_6, _val_7, _val_8 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0.5, training: int = 1> (input_0, input_1, input_2, input_3, input_4, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E _aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
E {
E norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
E upcast_input = Cast <to: int = 1> (input)
E mean = ReduceMean (upcast_input, axes)
E input_sub_mean = Sub (upcast_input, mean)
E sqr = Mul (input_sub_mean, input_sub_mean)
E var = ReduceMean <keepdims: int = 0> (sqr, axes)
E const = Constant <value: tensor = float const {1}> ()
E eps = Constant <value_float: float = @eps> ()
E eps_cast = CastLike (eps, var)
E tmp = Add (var, eps_cast)
E tmp_2 = Sqrt (tmp)
E const_cast = CastLike (const, tmp_2)
E rstd = Div (const_cast, tmp_2)
E mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnx/checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_training_onnx, node name: _aten_native_batch_norm_training_onnx_16): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2,3] input_0) => (float16[1,2,3] _val_17, float16[2] _val_18, float16[2] _val_19)
E <float16[1,2,3] input_0, float16[1,2,3] _val_17, float16[2] _val_18, float16[2] _val_19, float[1] _val_1, float16[1] _val_2, int64[1] _val_3, float16[2] _val_4, float[1] _val_5, float16[1] _val_6, int64[1] _val_7, float16[2] _val_8, int64[2] _val_9, float16[1,2,1] _val_10, float16[2] _val_11, float16[1,2,1] _val_12, float16[1,2,3] _val_13, float16[1,2,3] _val_14, float16[1,2,1] _val_15, float16[2] _val_16>
E {
E _val_1 = Constant <value_floats: floats = [1]> ()
E _val_2 = CastLike (_val_1, input_0)
E _val_3 = Shape <end: int = 2, start: int = 1> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = Constant <value_floats: floats = [0]> ()
E _val_6 = CastLike (_val_5, input_0)
E _val_7 = Shape <end: int = 2, start: int = 1> (input_0)
E _val_8 = Expand (_val_6, _val_7)
E _val_9 = Constant <value_ints: ints = [0, 2]> ()
E _val_10 = ReduceMean <keepdims: int = 1, noop_with_empty_axes: int = 0> (input_0, _val_9)
E _val_11 = Squeeze (_val_10)
E _val_12 = ReduceMean <keepdims: int = 1, noop_with_empty_axes: int = 0> (input_0, _val_9)
E _val_13 = Sub (input_0, _val_12)
E _val_14 = Mul (_val_13, _val_13)
E _val_15 = ReduceMean <keepdims: int = 1, noop_with_empty_axes: int = 0> (_val_14, _val_9)
E _val_16 = Squeeze (_val_15)
E _val_17, _val_18, _val_19 = pkg.onnxscript.torch_lib._aten_native_batch_norm_training_onnx <eps: float = 1e-05, momentum: float = 0.5, training: int = 1> (input_0, _val_4, _val_8, _val_11, _val_16, _val_9)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E _aten_native_batch_norm_training_onnx <training,momentum,eps>(input, weight, bias, running_mean, running_var, axes) => (norm, mean_3, rstd)
E {
E norm, running_mean_0, running_var_1 = BatchNormalization <epsilon: float = @eps, momentum: float = @momentum, training_mode: int = @training> (input, weight, bias, running_mean, running_var)
E upcast_input = Cast <to: int = 1> (input)
E mean = ReduceMean (upcast_input, axes)
E input_sub_mean = Sub (upcast_input, mean)
E sqr = Mul (input_sub_mean, input_sub_mean)
E var = ReduceMean <keepdims: int = 0> (sqr, axes)
E const = Constant <value: tensor = float const {1}> ()
E eps = Constant <value_float: float = @eps> ()
E eps_cast = CastLike (eps, var)
E tmp = Add (var, eps_cast)
E tmp_2 = Sqrt (tmp)
E const_cast = CastLike (const, tmp_2)
E rstd = Div (const_cast, tmp_2)
E mean_3 = ReduceMean <keepdims: int = 0> (upcast_input, axes)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__addmm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([[ 2.398e+00, -5.598e+00, -3.727e+00, -6.230e+00, -8.883e+00,
-6.883e+00, 8.938e+00, -2.180e+00, 3.031e+00, -4.043e+00],
[-1.116e+00, 2.980e+00, -8.203e+00, 3.217e+00, -6.064e-01,
3.990e+00, 3.754e+00, -4.535e+00, -2.188e+00, 8.281e+00],
[-8.703e+00, -7.199e+00, 7.031e-01, 8.180e+00, 4.930e+00,
7.656e+00, 3.402e+00, 8.789e-03, -2.637e-02, 3.418e+00],
[-4.035e+00, 9.229e-01, 6.777e+00, 7.215e+00, 4.184e+00,
-2.830e+00, -5.477e+00, -2.594e+00, 4.879e+00, -7.586e+00],
[-7.234e+00, 8.414e+00, -2.549e-01, -6.637e+00, 7.578e+00,
-1.837e+00, 2.373e+00, -5.000e+00, 4.051e+00, 6.383e+00]],
dtype=float16),
'input_1': array([], shape=(5, 0), dtype=float16),
'input_2': array([], shape=(0, 10), dtype=float16)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
<float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
RuntimeError: ONNX Runtime failed to evaluate:
Inputs:
{'input_0': array([ 2.207 , -0.0703, 6.16 , -6.406 , 6.363 , -2.68 , 6.574 ,
-6.04 , -1.283 , 0.457 ], dtype=float16),
'input_1': array([], shape=(5, 0), dtype=float16),
'input_2': array([], shape=(0, 10), dtype=float16)}
Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
<float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
{
_val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
}
<
domain: "pkg.onnxscript.torch_lib",
opset_import: ["" : 18]
>
aten_addmm (self, mat1, mat2) => (return_val)
{
return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([[ 2.398e+00, -5.598e+00, -3.727e+00, -6.230e+00, -8.883e+00,
E -6.883e+00, 8.938e+00, -2.180e+00, 3.031e+00, -4.043e+00],
E [-1.116e+00, 2.980e+00, -8.203e+00, 3.217e+00, -6.064e-01,
E 3.990e+00, 3.754e+00, -4.535e+00, -2.188e+00, 8.281e+00],
E [-8.703e+00, -7.199e+00, 7.031e-01, 8.180e+00, 4.930e+00,
E 7.656e+00, 3.402e+00, 8.789e-03, -2.637e-02, 3.418e+00],
E [-4.035e+00, 9.229e-01, 6.777e+00, 7.215e+00, 4.184e+00,
E -2.830e+00, -5.477e+00, -2.594e+00, 4.879e+00, -7.586e+00],
E [-7.234e+00, 8.414e+00, -2.549e-01, -6.637e+00, 7.578e+00,
E -1.837e+00, 2.373e+00, -5.000e+00, 4.051e+00, 6.383e+00]],
E dtype=float16),
E 'input_1': array([], shape=(5, 0), dtype=float16),
E 'input_2': array([], shape=(0, 10), dtype=float16)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
E <float16[5,10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:542: in _capture_graph_and_evaluate_torch_script_evaluator
return _ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:315: in _ort_session_run
return session.run(None, ort_inputs)
.nox/test_torch_nightly/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:220: in run
return self._sess.run(output_names, input_feed, run_options)
E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Gemm node. Name:'_inline_aten_addmmn0' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/gemm_helper.h:59 onnxruntime::GemmHelper::GemmHelper(const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &, bool, const onnxruntime::TensorShape &) M_ >= 0 && K_ > 0 && N_ >= 0 was false.
The above exception was the direct cause of the following exception:
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:556: in _capture_graph_and_evaluate_torch_script_evaluator
raise RuntimeError(
E RuntimeError: ONNX Runtime failed to evaluate:
E Inputs:
E {'input_0': array([ 2.207 , -0.0703, 6.16 , -6.406 , 6.363 , -2.68 , 6.574 ,
E -6.04 , -1.283 , 0.457 ], dtype=float16),
E 'input_1': array([], shape=(5, 0), dtype=float16),
E 'input_2': array([], shape=(0, 10), dtype=float16)}
E Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2) => (float16[5,10] _val_3)
E <float16[10] input_0, float16[5,0] input_1, float16[0,10] input_2, float16[5,10] _val_3>
E {
E _val_3 = pkg.onnxscript.torch_lib.aten_addmm <alpha: float = 0.6, beta: float = 0.2> (input_0, input_1, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib",
E opset_import: ["" : 18]
E >
E aten_addmm (self, mat1, mat2) => (return_val)
E {
E return_val = Gemm <alpha: float = @alpha, beta: float = @beta> (mat1, mat2, self)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }