Feature Request: Implement LogSoftmax, Softmax, ReduceMax #680
Comments
Thanks for the suggestion. We don't currently have plans to implement these ops. You can find a softmax implementation in the LLM use case. There are already a ReduceSum operator and a MaxPool operator; you could try to adapt them to obtain ReduceMax.
Thanks for the pointers. Do you have additional guides on how to implement a custom op in this context? I really need to convert my existing (torch) model; I can't rewrite and retrain a quantized version. It does seem like the ReduceSum implementation could be adapted easily (np.sum -> np.amax), although I don't know if that's FHE-compliant. I also notice that there's already an implementation for Softmax; the docstring says it's not FHE-compliant but doesn't elaborate as to why.
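For reference, a minimal sketch of the swap mentioned above, assuming the existing ReduceSum op is backed by a plain numpy function that takes the ONNX `axes`/`keepdims` attributes. The function names and signatures here are illustrative only, not the actual Concrete ML code:

```python
import numpy as np

# Illustrative only: if ReduceSum is implemented as a numpy function taking
# the ONNX attributes (axes, keepdims), a ReduceMax variant would differ
# only in the reduction primitive (np.sum -> np.amax).
def numpy_reducesum_like(x, axes=None, keepdims=1):
    axis = tuple(axes) if axes is not None else None
    return (np.sum(x, axis=axis, keepdims=bool(keepdims)),)

def numpy_reducemax_like(x, axes=None, keepdims=1):
    axis = tuple(axes) if axes is not None else None
    return (np.amax(x, axis=axis, keepdims=bool(keepdims)),)
```

Whether the max reduction itself is FHE-compliant is a separate question from the numpy-level implementation.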
We have documentation / a guide on how to implement a new ONNX node. Let us know if anything is unclear! https://docs.zama.ai/concrete-ml/developers/support_new_onnx_node
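As a rough illustration of the pattern that guide describes (provide a numpy implementation of the op, then register it under its ONNX op name so the ONNX-to-numpy conversion can dispatch to it), here is a hypothetical, self-contained sketch. The dictionary and function names below are placeholders; the linked guide gives the actual modules and mappings:

```python
import numpy as np

# Hypothetical sketch of the registration pattern; names are placeholders.
def numpy_reducemax(x, axes=None, keepdims=1):
    axis = tuple(axes) if axes is not None else None
    return (np.amax(x, axis=axis, keepdims=bool(keepdims)),)

ONNX_OPS_TO_NUMPY_IMPL = {
    # ... existing operators would already be listed here ...
    "ReduceMax": numpy_reducemax,
}

# Quick sanity check on a toy tensor
x = np.arange(12, dtype=np.float32).reshape(3, 4)
(out,) = ONNX_OPS_TO_NUMPY_IMPL["ReduceMax"](x, axes=[1], keepdims=1)
assert out.shape == (3, 1)
```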
Feature request
Request the implementation of the following ONNX operators:
- LogSoftmax
- Softmax
- ReduceMax
Motivation
These operators are common in neural networks of many types: Softmax is common in classification, and ReduceMax is common in CNNs.