
[Squeeze] Introduce Squeeze and Unsqueeze hardware operators #1153

Draft
wants to merge 2 commits into dev

Conversation

iksnagreb
Contributor

This PR adds HWCustomOp and HLSBackend specializations of the Squeeze and Unsqueeze operators, aiming for full ONNX compliance. It also adds infrastructure for converting the standard ONNX version of the operators to the FINN dialect, which mostly means transplanting the node into the FINN domain and setting a few type and shape attributes. Unit tests covering Python, C++ and RTL simulation are included, as well as a simple integration test starting from a PyTorch model export.
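To give a rough idea of what such a conversion step involves, here is a minimal sketch in the style of a qonnx `Transformation`. The class name `InferSqueeze` and the attribute names are illustrative assumptions, not the PR's actual implementation:

```python
# Illustrative sketch, not the exact code of this PR: converts a standard
# ONNX Squeeze node into the FINN dialect by moving it to the fpgadataflow
# domain and annotating datatype attributes read from the model.
from onnx import helper
from qonnx.transformation.base import Transformation


class InferSqueeze(Transformation):
    """Transplants plain ONNX Squeeze nodes into the FINN custom-op domain."""

    def apply(self, model):
        graph_modified = False
        for node in model.graph.node:
            # Only touch standard-domain Squeeze nodes not yet converted
            if node.op_type == "Squeeze" and node.domain == "":
                inp, out = node.input[0], node.output[0]
                # Transplant the node into the FINN dialect
                node.domain = "finn.custom_op.fpgadataflow"
                # Record type information as node attributes
                # (attribute names here are assumptions for illustration)
                node.attribute.extend([
                    helper.make_attribute(
                        "inputDataType", model.get_tensor_datatype(inp).name),
                    helper.make_attribute(
                        "outputDataType", model.get_tensor_datatype(out).name),
                ])
                graph_modified = True
        return model, graph_modified
```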

Proposes a new scheme for registering and importing custom operators into their corresponding module namespace, i.e., the 'custom_op' dictionary used to look up operators by ONNX domain. This is the same scheme already proposed in #1040.
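As a hedged sketch of the registration idea (the exact mechanism is the one proposed in #1040 and may differ from this), each domain module keeps a `custom_op` dictionary mapping op_type strings to operator classes, and new operators register themselves into it on import:

```python
# Sketch only: per-module registry keyed by ONNX op_type, filled by a
# registration decorator instead of a hand-maintained dictionary literal.
custom_op = {}


def register_custom_op(cls):
    """Add the decorated operator class to this module's registry."""
    custom_op[cls.__name__] = cls
    return cls


# Hypothetical usage, assuming the HWCustomOp base class from FINN:
# from finn.custom_op.fpgadataflow.hwcustomop import HWCustomOp
#
# @register_custom_op
# class Squeeze(HWCustomOp):
#     ...
```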

Support for these operators might seem unnecessary, as they have no real effect on the stream/dataflow. However, they can be useful as a workaround for adapting between data layouts, for example when combining convolutions (which assume 4-dimensional layouts) with attention operations (which work on 3-dimensional, or rather 2-dimensional, layouts). I will link some example presenting this later...
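As a toy illustration of the layout mismatch (not the example the author intends to link): a signal processed by Conv2d keeps a singleton spatial dimension, which a squeeze removes before the 3D attention block and an unsqueeze restores afterwards. After ONNX export these show up as plain Squeeze/Unsqueeze nodes between the convolution and attention parts of the graph.

```python
# Toy model mixing a 4D (NCHW) convolution with a 3D attention block,
# bridged by squeeze/unsqueeze on the singleton height dimension.
import torch
import torch.nn as nn


class ConvAttentionBridge(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Convolution operates on 4D NCHW tensors (here with H == 1)
        self.conv = nn.Conv2d(3, channels, kernel_size=(1, 3), padding=(0, 1))
        # Attention operates on 3D (batch, sequence, embedding) tensors
        self.attn = nn.MultiheadAttention(channels, num_heads=4,
                                          batch_first=True)

    def forward(self, x):              # x: (N, 3, 1, W)
        x = self.conv(x)               # (N, C, 1, W)
        x = x.squeeze(2)               # (N, C, W)   drop singleton height
        x = x.transpose(1, 2)          # (N, W, C)   sequence-major layout
        x, _ = self.attn(x, x, x)      # self-attention over W positions
        x = x.transpose(1, 2)          # (N, C, W)
        return x.unsqueeze(2)          # (N, C, 1, W) back to 4D


model = ConvAttentionBridge()
dummy = torch.randn(1, 3, 1, 32)
torch.onnx.export(model, dummy, "conv_attention_bridge.onnx")
```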
