Adding checker of hardware constraints for Conv #639
base: master
Conversation
dlk/python/dlk/core/operators.py
Outdated
super()._check_consistency()
if self.kernel_shape[0] != self.kernel_shape[1] or self.kernel_shape[0] not in (1, 3):
    warnings.warn(warning_sign +
Since this violation is fatal, it looks better to exit here when this constraint violation happens.
Yes, I will use assert upon detecting this violation.
Fixed in commit a35e4a8, but there are dlk tests using 2x2 conv, which will need to be fixed as well.
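The assert-based check agreed on above could look roughly like the following sketch. The class and attribute names are simplified stand-ins for dlk's actual `Conv` operator, not the real implementation:

```python
class Conv:
    """Minimal sketch of a Conv operator with a hardware-constraint check.

    Hypothetical stand-in for dlk's Conv; attribute names are assumptions.
    """

    def __init__(self, kernel_shape):
        self.kernel_shape = kernel_shape
        self._check_consistency()

    def _check_consistency(self):
        # The accelerator only supports square 1x1 or 3x3 kernels, so this
        # violation is fatal: fail fast instead of emitting a warning.
        assert (self.kernel_shape[0] == self.kernel_shape[1]
                and self.kernel_shape[0] in (1, 3)), (
            f"Conv kernel must be 1x1 or 3x3, but got {self.kernel_shape}")
```

With this, importing a model containing a 2x2 conv stops immediately with an AssertionError instead of continuing with a warning.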
dlk/python/dlk/core/operators.py
Outdated
stacklevel=2)
if self.input_ops['X'].channel % 32 != 0:
    warnings.warn(warning_sign +
                  f" Input channel size need be multiple of 32, but got "
If I remember correctly, if the channel size is not a multiple of 32, the remaining channels are zero-padded, so it should still work, though it may not be efficient. How about changing this message to "Input channel size should be a multiple of 32"?
Padding happens only if the conv is quantized, but whether the conv is quantized is not known at this point...
I will change the message, but indeed at the importing stage it is hard to tell whether the conv is quantized. Maybe checking the quantizer operators around the conv could help?
How about changing the fp32 operator to pad automatically as well? (You don't need to implement it in this pull request; it's just a proposal for a future direction.)
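The automatic padding proposed here could be sketched as follows. This is a rough pure-Python illustration of zero-padding the channel axis up to the next multiple of 32, not dlk's actual implementation:

```python
def pad_channels(channel_data, multiple=32):
    """Zero-pad a per-channel value list up to the next multiple of
    `multiple`. Illustration only; a real tensor implementation would
    pad along the channel axis of an N-d array instead of a flat list.
    """
    remainder = len(channel_data) % multiple
    if remainder == 0:
        return list(channel_data)
    # Append zeros so downstream hardware sees a full group of channels.
    return list(channel_data) + [0.0] * (multiple - remainder)
```

With padding like this applied during import, the channel-size constraint would become a performance note rather than a hard error.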
How about adding a padding feature to the fp32 conv later?
Sorry, I overlooked this and didn't quite get it. Do you mean we automatically pad the fp32 conv during importing?
This PR adds a checker for the hardware constraints when importing the
Conv
operator. The checking rules follow the discussion in issue #124.
Motivation and Context
Description
How has this been tested?
Screenshots (if appropriate):
Types of changes
Checklist: