I think it's a general problem when the input to a functional layer is dynamic.
I had a situation where the kernel size of a functional avg_pool3d depended on the shape of the previous layer's outputs. One has to either make the kernel constant or switch to PyTorch's non-functional (module) API.
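For illustration, the pattern described above looks something like the sketch below (the tensor shape and variable names are made up for the example); the last two lines show the two workarounds mentioned, a constant kernel and the module API:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 8, 4, 16, 16)  # hypothetical N, C, D, H, W feature map

# Dynamic kernel: the size is read from the tensor at runtime, so tracing
# or export bakes in shape-dependent behaviour.
y_dynamic = F.avg_pool3d(x, kernel_size=x.shape[2:])

# Workaround 1: make the kernel constant.
y_constant = F.avg_pool3d(x, kernel_size=(4, 16, 16))

# Workaround 2: switch to the non-functional (module) API; adaptive pooling
# expresses "pool everything down to 1x1x1" without reading x.shape.
y_module = nn.AdaptiveAvgPool3d(1)(x)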
Does anybody know how I can make the kernel size static here?
import torch
import torch.nn as nn

class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        # p is learnable; eps clamps activations away from zero before pow()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps
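The forward pass is not shown above, but GeM implementations typically compute the average-pooling kernel from the input's spatial size, which is exactly the dynamic-kernel pattern described in the question. Below is a minimal sketch of one way to keep the kernel static, assuming a 2D layer pooled down to 1x1; the class name and the forward shown in the comment are assumptions, while nn.AdaptiveAvgPool2d is standard PyTorch:

import torch
import torch.nn as nn

class GeMStatic(nn.Module):
    # Sketch: GeM pooling without a shape-dependent kernel size.
    def __init__(self, p=3, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps
        # "Pool the whole feature map to 1x1" without reading x.size()
        # in forward, so the traced/exported graph has no dynamic kernel.
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        # Replaces the usual dynamic-kernel call of the form
        #   F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1)))
        return self.pool(x.clamp(min=self.eps).pow(self.p)).pow(1.0 / self.p)

If the input resolution is known and fixed, hard-coding kernel_size in nn.AvgPool2d (or the 3D counterpart) is the "make the kernel constant" route; adaptive pooling simply avoids having to know that resolution up front.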