
Is it necessary to verify the stash_type keyword for LayerNormalization op? #3725

Open
SihangZhu opened this issue Sep 24, 2024 · 0 comments

Comments

@SihangZhu

When converting from the torch ONNX dialect to the torch dialect, a float16 LayerNormalization op fails to convert. The operands are f16 and stash_type is not present in the attribute list, so the conversion falls back to the default stash_type = 1 (i.e. float32), and the resulting type mismatch triggers a verification error.

%3 = torch.operator "onnx.LayerNormalization"(%0, %1, %2) {torch.onnx.axis = -3 : si64, torch.onnx.epsilon = 9.99999997E-7 : f32} : (!torch.vtensor<[144,32,32,16],f16>, !torch.vtensor<[32,32,16],f16>, !torch.vtensor<[32,32,16],f16>) -> !torch.vtensor<[144,32,32,16],f16>
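For context, the ONNX operator schema is where this default comes from; a quick sketch (assuming a standard onnx Python installation) confirms that an absent stash_type resolves to 1, i.e. FLOAT:

```python
# Sketch: inspect the ONNX spec default for LayerNormalization's stash_type.
# When the attribute is omitted (as in the MLIR above), importers that follow
# the spec fall back to this value.
import onnx
from onnx import defs

schema = defs.get_schema("LayerNormalization")
default = schema.attributes["stash_type"].default_value
print(default.i)                                  # 1
print(onnx.TensorProto.DataType.Name(default.i))  # FLOAT
```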

The torch-onnx MLIR file above was produced from the ONNX model with the torch_mlir.tools.export_onnx module.
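For a standalone reproducer, here is a minimal sketch (my assumption of how to build an equivalent model with the onnx helper API; tensor names, opset version, and file name are illustrative) that creates a float16 LayerNormalization with no stash_type attribute:

```python
# Builds a float16 LayerNormalization model that omits stash_type, matching
# the shapes and attributes of the MLIR op above. Tensor names and the output
# path are illustrative, not taken from the original model.
import onnx
from onnx import TensorProto, helper

x = helper.make_tensor_value_info("X", TensorProto.FLOAT16, [144, 32, 32, 16])
scale = helper.make_tensor_value_info("Scale", TensorProto.FLOAT16, [32, 32, 16])
bias = helper.make_tensor_value_info("B", TensorProto.FLOAT16, [32, 32, 16])
y = helper.make_tensor_value_info("Y", TensorProto.FLOAT16, [144, 32, 32, 16])

# Only axis and epsilon are set; stash_type is deliberately left out so the
# importer has to apply the spec default.
node = helper.make_node(
    "LayerNormalization",
    inputs=["X", "Scale", "B"],
    outputs=["Y"],
    axis=-3,
    epsilon=1e-6,
)

graph = helper.make_graph([node], "ln_f16", [x, scale, bias], [y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)
onnx.save(model, "ln_f16.onnx")
```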
