packed_weight optimization #16

Open · tpoisonooo opened this issue May 9, 2019 · 0 comments

tpoisonooo (Contributor) commented:
I have read bnn::bconv_3x3 and the old version of pack_128.

BNN was invented to deploy deep learning models on edge devices such as the Raspberry Pi and RK3308. The RK3308 has only about 32 MB of memory, so we should think carefully about whether packed_weight is really needed; after all, an app usually ships more than one model.

On the other hand, the code reorders the input and then unpacks the result, and I am worried that the overall efficiency is no better than using the data directly in its normal order.
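For context on the memory side of the trade-off, here is a minimal sketch of what binary weight packing generally looks like (an illustration of the idea only, not dabnn's actual pack_128 or bconv_3x3 code; the function name pack_signs is hypothetical): each float weight contributes a single sign bit, so the packed buffer is roughly 1/32 the size of the original float weights.

```cpp
#include <cstdint>
#include <vector>

// Pack the sign bits of float weights into 64-bit words.
// 64 weights collapse into one uint64_t, giving ~32x smaller storage
// than keeping the weights as 32-bit floats.
std::vector<uint64_t> pack_signs(const std::vector<float> &weights) {
    std::vector<uint64_t> packed((weights.size() + 63) / 64, 0);
    for (size_t i = 0; i < weights.size(); ++i) {
        if (weights[i] < 0) {
            packed[i / 64] |= (uint64_t{1} << (i % 64));  // record negative sign as a set bit
        }
    }
    return packed;
}
```

Whether that memory saving justifies the reorder/unpack overhead at inference time is exactly the question raised above.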

daquexian self-assigned this Aug 22, 2019