Feature request: Support Unfold torch operator #799
Hi @summer-xrx, could you share some code showing how you got this error, e.g. the torch model in question? The operator seems to be a reshaping operation, so you may be able to rewrite it differently (e.g. using basic indexing and reshape operations). Otherwise, once we know what operation you are trying to do, we can create a corresponding issue to support it in concrete-ml.
Hi @jfrery, the experiment code is as follows:
The output of this code is as follows. It can be seen from the output that the ONNX export does not support the "Unfold" operation, but does support the "Range" operation. However, concrete-ml does not support the "Range" operation.
Yes, you are right. For now concrete-ml does not support the unfold operator. Let's convert your issue into a feature request for unfold support. A workaround for you could be to replace unfold with a manual shape transformation:

```python
import torch
import torch.nn as nn

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.kernel_size = 3
        self.stride = 2

    def unfold(self, x):
        batch_size, channels, height, width = x.shape
        kernel_size = self.kernel_size
        stride = self.stride
        # Calculate output dimensions
        out_height = (height - kernel_size) // stride + 1
        out_width = (width - kernel_size) // stride + 1
        # Create a list to store patches
        patches = []
        # Use loops to extract patches
        for i in range(out_height):
            for j in range(out_width):
                h_start = i * stride
                w_start = j * stride
                patch = x[:, :, h_start:h_start + kernel_size, w_start:w_start + kernel_size]
                patches.append(patch.reshape(batch_size, -1))
        # Stack patches along the last dimension
        output = torch.stack(patches, dim=-1)
        return output

    def forward(self, x):
        return self.unfold(x)

model = Test()
torch.onnx.export(model, torch.randn(32, 128, 32, 32), "model.onnx")

from concrete.ml.torch.compile import compile_torch_model

torch_inputset = torch.randn(32, 128, 32, 32)
q_module = compile_torch_model(model, torch_inputset=torch_inputset, n_bits=6, rounding_threshold_bits=6)
```
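As a sanity check (my addition, not from the thread), the manual patch extraction above should produce exactly the same tensor as torch's built-in `nn.Unfold` for the same kernel size and stride, since both flatten each patch in (C, kh, kw) order and enumerate patch locations row-major:

```python
import torch
import torch.nn as nn

def manual_unfold(x, kernel_size=3, stride=2):
    # Same logic as Test.unfold above, written as a free function
    batch_size, _, height, width = x.shape
    out_h = (height - kernel_size) // stride + 1
    out_w = (width - kernel_size) // stride + 1
    patches = []
    for i in range(out_h):
        for j in range(out_w):
            hs, ws = i * stride, j * stride
            patches.append(
                x[:, :, hs:hs + kernel_size, ws:ws + kernel_size].reshape(batch_size, -1)
            )
    return torch.stack(patches, dim=-1)

x = torch.randn(2, 4, 9, 9)
# Both produce shape (N, C * k * k, L) with identical values
same = torch.allclose(manual_unfold(x), nn.Unfold(kernel_size=3, stride=2)(x))
```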
Hello, @jfrery,
Unfortunately, that's a problem we also face sometimes when a model contains too many loops. We don't have a solution to this yet. That being said, compilation should be a one-time computation, so it's less of a problem than the actual FHE execution being slow.
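One way to reduce the number of traced operations (a sketch of my own, not from the thread, and it only applies when the stride equals the kernel size, i.e. non-overlapping patches — not the stride=2, kernel=3 case above) is to replace the double loop with pure reshape/permute calls, which trace to a handful of ONNX nodes regardless of the spatial size:

```python
import torch
import torch.nn as nn

def unfold_nonoverlap(x, k):
    # Assumes stride == kernel_size and k divides both H and W
    n, c, h, w = x.shape
    x = x.reshape(n, c, h // k, k, w // k, k)   # (N, C, i, kh, j, kw)
    x = x.permute(0, 1, 3, 5, 2, 4)             # (N, C, kh, kw, i, j)
    # Flatten (C, kh, kw) into channels and (i, j) into locations,
    # matching nn.Unfold's output layout
    return x.reshape(n, c * k * k, (h // k) * (w // k))

x = torch.randn(2, 3, 8, 8)
same = torch.allclose(unfold_nonoverlap(x, 4), nn.Unfold(4, stride=4)(x))
```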
What do you call a PBC? Do you mean PBS? If so, approximating the ReLU with polynomials won't help: computing x^N requires a PBS, so you actually end up doing more PBS than with nn.ReLU(), which should cost a single PBS per value. As for the accuracy drop, I am not too sure, but one problem I see is that you add x^7 to x. I doubt x retains any precision when quantized, since x^7 must be pretty large, so representing both x and x^7 on 2^6 values is probably not possible. If you mean something else by PBC, let me know!
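A quick numeric illustration of that last point (my own sketch, not from the thread): if x + x^7 is quantized to 6 bits over its full output range, the quantization step is dominated by the x^7 term, and the near-linear region around zero collapses to essentially one level:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 1001)
y = x + x**7                         # |y| peaks near 3**7 + 3 = 2190
# Symmetric 6-bit quantization (2**6 levels) over the full range of y
step = 2 * np.abs(y).max() / (2**6 - 1)
y_q = np.round(y / step) * step
# Around x = 0, y is approximately x, but |x| <= 1 is far smaller than
# the step size (~70), so the whole near-linear region quantizes to zero
levels_near_zero = np.unique(y_q[np.abs(x) <= 1.0]).size
```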
Thanks for your reply. It should be PBS, yes, just a typo.
Hello!
When I was running the concrete-ml library, I encountered a problem called "torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::col2im' to ONNX opset version 14 is not supported. Support for this operator was added in version 18, try exporting with this version".
How can I solve this problem? Thank you for your help!
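Since `aten::col2im` is what `nn.Fold` lowers to, one possible workaround (my own sketch, mirroring the manual-unfold idea earlier in this thread, and not verified against concrete-ml) is to implement fold by explicitly accumulating patches, avoiding the unsupported operator in the ONNX export:

```python
import torch
import torch.nn as nn

def manual_fold(cols, output_size, k, stride):
    # cols: (N, C*k*k, L) as produced by nn.Unfold; overlapping patches are summed,
    # matching nn.Fold semantics
    n, ckk, _ = cols.shape
    c = ckk // (k * k)
    H, W = output_size
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    out = torch.zeros(n, c, H, W, dtype=cols.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            patch = cols[:, :, idx].reshape(n, c, k, k)
            out[:, :, i * stride:i * stride + k, j * stride:j * stride + k] += patch
            idx += 1
    return out

x = torch.randn(2, 3, 8, 8)
cols = nn.Unfold(3, stride=2)(x)
same = torch.allclose(manual_fold(cols, (8, 8), 3, 2), nn.Fold((8, 8), 3, stride=2)(cols))
```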