
[Dev] Support Tile Lang INT8xINT8 TensorCore Macro #231

Merged 6 commits on Nov 1, 2024

Conversation

LeiWang1999 (Contributor)

This pull request spans multiple files, focusing on code refactoring, bug fixes, and submodule updates. The most important changes are an updated submodule branch reference, new decorator functions, refactored class methods, and fixed layout functions.

Submodule updates:

  • .gitmodules: Changed the branch for the tvm submodule from tilelang to upstream.
  • 3rdparty/tvm: Updated the submodule commit reference.

Code refactoring and improvements (detailed in the commit summaries below):

Bug fixes:

  • bitblas/tl/mma_layout.py: Corrected the function names for layout transformations from ldmatrix_32x16_to_shared_16x32_layout_* to ldmatrix_16x32_to_shared_16x32_layout_*.
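To make the renamed layout functions concrete, here is a minimal, hypothetical sketch of what an ldmatrix-style 16x32 index map looks like. This is an illustration of the general pattern, not the actual BitBLAS implementation: it assumes each of the 32 warp lanes owns 16 contiguous int8 elements of a 16x32 shared-memory tile.

```python
# Hypothetical sketch (not the real bitblas/tl/mma_layout.py code):
# map a (thread_id, local_id) pair within a 32-thread warp to (row, col)
# coordinates in a 16x32 int8 shared-memory tile.

def ldmatrix_16x32_to_shared_16x32_layout(thread_id: int, local_id: int):
    """Map warp lane + per-lane element index to shared-memory (row, col)."""
    row = thread_id % 16                     # 16 rows, two lanes per row
    col = (thread_id // 16) * 16 + local_id  # each lane covers 16 columns
    return row, col

# Sanity check: 32 lanes x 16 locals cover every tile element exactly once.
covered = {ldmatrix_16x32_to_shared_16x32_layout(t, l)
           for t in range(32) for l in range(16)}
assert len(covered) == 16 * 32
```

The key property any such layout must satisfy is the bijection checked at the end: every (thread, local) pair maps to a distinct tile coordinate.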
  • bitblas/tl/utils.py: Updated layout transformation function references and fixed the get_ldmatrix_offset function to handle different data types correctly.

Linting script update:

  • format.sh: Modified the lint function to use ruff check instead of ruff.

Refactor tensor core memory allocation in MatmulFineGrainScheduler

- Adjusted the local fragment sizes for tensor core memory allocation in the MatmulFineGrainScheduler class.
- Updated the allocation sizes for the A_local, B_local, and C_local variables based on the new fragment sizes.
- The changes ensure efficient memory utilization and improve performance.
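The per-thread fragment sizes above follow from distributing one warp-level MMA tile evenly over the 32 lanes of a warp. A minimal sketch of that arithmetic, assuming an mma.sync-style even distribution (the function name and signature are illustrative, not the scheduler's actual API):

```python
WARP_SIZE = 32  # CUDA warp size

def mma_fragment_sizes(micro_m: int, micro_n: int, micro_k: int):
    """Per-thread register-fragment element counts for one warp-level MMA tile.

    Assumes the m x k (A), n x k (B), and m x n (C) tile elements are
    spread evenly across the 32 lanes of a warp.
    """
    a_local = micro_m * micro_k // WARP_SIZE
    b_local = micro_n * micro_k // WARP_SIZE
    c_local = micro_m * micro_n // WARP_SIZE
    return a_local, b_local, c_local

# For the INT8 m16n16k32 TensorCore shape this PR targets:
print(mma_fragment_sizes(16, 16, 32))  # (16, 16, 8)
```

Note how the deeper k dimension of the INT8 shape (k=32 instead of k=16 for FP16) doubles the A_local and B_local sizes while leaving C_local unchanged, which is why the allocation sizes had to be adjusted.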

Refactor tensor core memory allocation in MatmulDequantizeFineGrainedScheduler

- Modified the fragment sizes for tensor core memory allocation in the MatmulDequantizeFineGrainedScheduler class.
- Updated the allocation sizes for A_frag, B_frag, and C_frag variables based on the new fragment sizes.
- The changes optimize memory usage and enhance the efficiency of the dequantization process.

Refactor tensor core memory allocation in MatmulDequantizeWeightPropagationScheduler

- Adjusted the fragment sizes for tensor core memory allocation in the MatmulDequantizeWeightPropagationScheduler class.
- Updated the allocation sizes for A_frag, B_frag, B_dequantize_frag, and C_frag variables based on the new fragment sizes.
- The changes improve memory utilization and optimize the weight propagation process.
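In the dequantize schedulers, the quantized B fragment and its dequantized copy need different storage sizes, because several low-bit weights pack into one storage element. A hedged sketch of that sizing logic, with an illustrative function name and the assumption that weights pack evenly into 8-bit storage:

```python
def dequant_fragment_sizes(b_frag_elems: int, num_bits: int, storage_bits: int = 8):
    """Element counts for a packed quantized B fragment vs. its dequantized copy.

    Illustrative helper (not the actual scheduler API): assumes num_bits
    divides storage_bits so weights pack evenly.
    """
    assert storage_bits % num_bits == 0
    elems_per_storage = storage_bits // num_bits
    b_frag_storage = b_frag_elems // elems_per_storage  # packed low-bit weights
    b_dequantize_frag = b_frag_elems                    # unpacked int8 values
    return b_frag_storage, b_dequantize_frag

print(dequant_fragment_sizes(16, 4))  # (8, 16): 4-bit weights pack 2 per int8
```

This is why B_frag and B_dequantize_frag are sized independently: the packed fragment shrinks with the quantization bit width while the dequantized fragment matches the full MMA fragment size.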
@LeiWang1999 LeiWang1999 merged commit 33d6170 into microsoft:main Nov 1, 2024
5 of 6 checks passed