Add more quantization support for burn-jit #2275
Conversation
Codecov Report — Attention: Patch coverage is

@@ Coverage Diff @@
##             main    #2275      +/-   ##
==========================================
- Coverage   85.92%   85.79%   -0.13%
==========================================
  Files         750      754       +4
  Lines       94328    95189     +861
==========================================
+ Hits        81047    81671     +624
- Misses      13281    13518     +237

View full report in Codecov by Sentry.
Had to change the equality assertion for quantized values: some tests produced very close floating-point values on macOS, but the assertion still failed on the quantization parameter values.
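The comment above can be sketched as follows. This is a minimal, hypothetical illustration (the helper names and the symmetric i8 scheme are assumptions, not the PR's actual test helpers): two platforms that compute a tensor's range with a tiny floating-point difference derive slightly different scales, so exact equality on quantization parameters fails even though the values agree within tolerance.

```rust
// Hedged sketch: why exact equality on quantization parameters is fragile.
// `scale_for_range` and `approx_eq` are illustrative names, not burn APIs.

/// Symmetric i8 scheme: scale derived from the largest absolute value.
fn scale_for_range(min: f32, max: f32) -> f32 {
    min.abs().max(max.abs()) / 127.0
}

/// Tolerance-based comparison instead of exact equality.
fn approx_eq(a: f32, b: f32, tol: f32) -> bool {
    (a - b).abs() <= tol
}

fn main() {
    // Two platforms computing a max that differs by a tiny float amount.
    let s1 = scale_for_range(-1.0, 1.79999);
    let s2 = scale_for_range(-1.0, 1.80001);
    assert_ne!(s1, s2);               // exact equality on qparams fails
    assert!(approx_eq(s1, s2, 1e-6)); // tolerance-based assertion passes
    println!("s1 = {s1}, s2 = {s2}");
}
```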
LGTM
Checklist
- The `run-checks all` script has been executed.

Changes
Quantization support for burn-jit:
- `QJitTensor` implementation with contained qparams
- `quantize_per_tensor` and `dequantize_per_tensor` cube kernels
- `from_data` and `into_data` conversions

Testing
Unit tests.
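For readers unfamiliar with the operations named above, here is a hedged sketch of the math behind per-tensor quantization in plain Rust. It is not the PR's cube-kernel implementation; the function names mirror the kernels listed above, but the affine i8 scheme (scale plus zero-point offset) and all signatures here are illustrative assumptions.

```rust
// Illustrative per-tensor affine quantization to i8.
// q = clamp(round(v / scale) + offset); v ≈ (q - offset) * scale.

fn quantize_per_tensor(values: &[f32], scale: f32, offset: i8) -> Vec<i8> {
    values
        .iter()
        .map(|&v| {
            // Scale, round to nearest integer, shift by zero-point, then
            // clamp into the i8 range before casting.
            let q = (v / scale).round() + offset as f32;
            q.clamp(i8::MIN as f32, i8::MAX as f32) as i8
        })
        .collect()
}

fn dequantize_per_tensor(values: &[i8], scale: f32, offset: i8) -> Vec<f32> {
    values
        .iter()
        // Undo the zero-point shift in integer space, then rescale.
        .map(|&q| (q as i32 - offset as i32) as f32 * scale)
        .collect()
}

fn main() {
    let x = vec![-1.8f32, -1.0, 0.0, 0.5];
    // Symmetric case: scale maps the range [-1.8, 1.8] onto i8, offset 0.
    let scale = 1.8 / 127.0;
    let q = quantize_per_tensor(&x, scale, 0);
    let y = dequantize_per_tensor(&q, scale, 0);
    println!("quantized:   {q:?}");
    println!("dequantized: {y:?}");
}
```

Note that dequantization only approximately recovers the input: rounding to the nearest quantized step loses up to half a step of precision, which is why the tests compare dequantized values within a tolerance.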