This repository has been archived by the owner on Dec 1, 2021. It is now read-only.

Adding checker of hardware constrain for Conv #639

Open · wants to merge 7 commits into master

Conversation

@nlpng (Contributor) commented Nov 26, 2019

This PR adds a checker for the hardware constraints when importing the Conv operator. The checking rules follow the discussion in issue #124:

  • Kernel size must be 1x1 or 3x3.
  • The maximum input and output channel size allowed is 1024.
  • Input channel size must be a multiple of 32.
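Taken together, the three rules above could be sketched as a standalone check like the following (a hypothetical sketch, not the PR's actual implementation; the attribute names `kernel_shape` and channel sizes mirror the diff excerpts in this PR):

```python
import warnings

# Hypothetical sketch of the three Conv hardware constraints listed above.
def check_conv_constraints(kernel_shape, in_channels, out_channels):
    """Warn about hardware-constraint violations for a Conv operator."""
    if kernel_shape[0] != kernel_shape[1] or kernel_shape[0] not in (1, 3):
        warnings.warn(f"Kernel size must be 1x1 or 3x3, but got {kernel_shape}")
    if in_channels > 1024 or out_channels > 1024:
        warnings.warn("Max input/output channel size allowed is 1024, "
                      f"but got {in_channels}/{out_channels}")
    if in_channels % 32 != 0:
        warnings.warn("Input channel size should be a multiple of 32, "
                      f"but got {in_channels}")

check_conv_constraints((3, 3), 64, 128)   # satisfies all three rules
check_conv_constraints((2, 2), 30, 2048)  # violates all three rules
```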


@blueoil-butler bot added the "CI: auto-run (Run CI automatically)" label on Nov 26, 2019
@nlpng changed the title from "[WIP] Adding checker of hardware constrain for Conv" to "Adding checker of hardware constrain for Conv" on Nov 26, 2019
@tkng tkng self-requested a review December 11, 2019 01:25
super()._check_consistency()
if self.kernel_shape[0] != self.kernel_shape[1] or self.kernel_shape[0] not in (1, 3):
    warnings.warn(warning_sign +
Contributor:

Since this violation is fatal, it would be better to exit here when this constraint violation happens.

Contributor (Author):

Yes, I will use assert upon detecting this violation.
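As a sketch, making the violation fatal with an assertion could look like this (hypothetical helper name; the message text is illustrative):

```python
# Hypothetical sketch: fail fast instead of warning when the kernel
# shape violates the hardware constraint.
def assert_kernel_shape(kernel_shape):
    assert kernel_shape[0] == kernel_shape[1] and kernel_shape[0] in (1, 3), \
        f"Kernel size must be 1x1 or 3x3, but got {kernel_shape}"

assert_kernel_shape((1, 1))  # passes silently
try:
    assert_kernel_shape((2, 2))
except AssertionError as e:
    print(e)  # Kernel size must be 1x1 or 3x3, but got (2, 2)
```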

Contributor (Author):

Fixed in commit a35e4a8, but there are dlk tests using 2x2 conv that will also need to be fixed.

              stacklevel=2)
if self.input_ops['X'].channel % 32 != 0:
    warnings.warn(warning_sign +
                  f" Input channel size need be multiple of 32, but got "
Contributor:

If I remember correctly, if the channel size is not a multiple of 32, the remaining channels are zero-padded, so it should work, though it may not be efficient. How about changing this message to "Input channel size should be multiple of 32"?
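For reference, the zero padding described here rounds the channel count up to the next multiple of 32; the arithmetic can be sketched as (hypothetical helper, not from the PR):

```python
# Round a channel count up to the next multiple of 32, as the
# zero padding described above would do.
def padded_channels(c, multiple=32):
    return ((c + multiple - 1) // multiple) * multiple

print(padded_channels(30))   # 32
print(padded_channels(32))   # 32
print(padded_channels(100))  # 128
```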

Contributor:

Padding happens only if the conv is quantized, but whether the conv is quantized or not is not known at this point...

Contributor (Author):

I will change the message, but indeed at the importing stage it is hard to tell whether the conv is quantized. Maybe checking for quantizer operators around the conv could help?

Contributor:

How about also changing the fp32 operator to pad automatically? (You don't need to implement it in this pull request; it's just a proposal for a future direction.)
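One possible shape of that future direction, as a sketch only (assuming an NCHW layout and a plain NumPy representation; nothing here is in the PR):

```python
import numpy as np

# Sketch: zero-pad the channel axis of an fp32 input (NCHW layout) up to
# the next multiple of 32 so the conv satisfies the hardware constraint.
def pad_input_channels(x, multiple=32):
    n, c, h, w = x.shape
    padded_c = ((c + multiple - 1) // multiple) * multiple
    if padded_c == c:
        return x
    return np.pad(x, ((0, 0), (0, padded_c - c), (0, 0), (0, 0)))

x = np.ones((1, 30, 8, 8), dtype=np.float32)
print(pad_input_channels(x).shape)  # (1, 32, 8, 8)
```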

Contributor:

How about adding a padding feature to fp32 conv later?

Contributor (Author):

Sorry, I overlooked this and didn't quite get it. Do you mean we automatically pad the fp32 conv during importing?

@bo-code-review-bot

This PR needs Approvals as follows.

  • Ownership Approval for / from iizukak, tkng, ruimashita
  • Readability Approval for Python from tkng, tsawada

Please choose reviewers and request reviews!

Click to see how to approve each reviews

You can approve this PR with trigger comments as follows.

  • Approve all reviews requested to you (readability and ownership) and LGTM review
    Approval, LGTM

  • Approve all ownership reviews
    Ownership Approval or OA

  • Approve all readability reviews
    Readability Approval or RA

  • Approve specified review targets

    • Example of Ownership Reviewer of /: Ownership Approval for / or OA for /
    • Example of Readability Reviewer of Python: Readability Approval for Python or RA for Python
  • Approve LGTM review
    LGTM

See all trigger comments

Please replace [Target] with the review target.

  • Ownership Approval
    • Ownership Approval for [Target]
    • OA for [Target]
    • Ownership Approval
    • OA
    • Approval
  • Readability Approval
    • Readability Approval for [Target]
    • RA for [Target]
    • [Target] Readability Approval
    • [Target] RA
    • Readability Approval
    • RA
    • Approval
  • LGTM
    • LGTM
    • lgtm

@iizukak (Member) commented Mar 11, 2020

@tkng @nlpng
What is the progress of this PR?
Some people want this feature.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

1 similar comment

Labels: CI: auto-run (Run CI automatically)
5 participants