[Transform] mlir: named op layout propagation #101
base: main
Conversation
I've put some comments; I haven't gone through all of it yet. A general question I have is: how expensive is the analysis?
@@ -5,6 +5,7 @@ gc_set_mlir_link_components(MLIR_LINK_COMPONENTS
gc_add_mlir_library(GcAnalysis
  TargetDescriptionAnalysis.cpp
  MatmulConfigAnalysis.cpp
  GlobalAnalysis.cpp
Could you please give it a more descriptive name?
} else if (isa<tensor::ExpandShapeOp, tensor::CollapseShapeOp, tensor::PadOp>(
               op))
  return true;
return false;
Suggested change:
-  } else if (isa<tensor::ExpandShapeOp, tensor::CollapseShapeOp, tensor::PadOp>(
-                 op))
-    return true;
-  return false;
-  }
+  return isa<tensor::ExpandShapeOp, tensor::CollapseShapeOp, tensor::PadOp>(op);
  return false;
}

bool hasAllTensorSemantics(linalg::LinalgOp linalgOp) {
I think this should be available in some utils already?
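For context, a minimal sketch of what such a utility might look like, assuming a recent MLIR where the LinalgOp / destination-style interface already provides this query (named `hasPureTensorSemantics()` on current main, `hasTensorSemantics()` on older releases); the exact method name should be checked against the MLIR revision the project builds against:

```cpp
// Sketch only: rely on the existing interface method instead of a
// hand-rolled loop over operands. Verify hasPureTensorSemantics() vs.
// hasTensorSemantics() for the MLIR version in use.
#include "mlir/Dialect/Linalg/IR/Linalg.h"

static bool hasAllTensorSemantics(mlir::linalg::LinalgOp linalgOp) {
  return linalgOp.hasPureTensorSemantics();
}
```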
];
}

def PostProcessPackUnpack : Pass<"post-process-pack-unpack"> {
Does it need to be a separate pass if it is a post-processing step?
I'd add tests with the smallest patterns possible, just to capture the functionality of all passes and their aspects. They also help readers understand what the passes do.
// copied from mlir
static SmallVector<int64_t>
projectToInnerMostNonUnitDimsPos(ArrayRef<int64_t> dimsPos,
                                 ArrayRef<ReassociationIndices> reassocIndices,
                                 ArrayRef<int64_t> targetShape) {
Could we move it into utils upstream?
// if forceBlocking is set to true, we will unconditionally convert
// input/weight/output to blocking layout; otherwise we follow the default
// heuristic logic
Suggested change:
 // if forceBlocking is set to true, we will unconditionally convert
 // input/weight/output to blocking layout; otherwise we follow the default
-// heuristic logic
+// heuristic logic, which is ?...
if (constantA || curInputLayouts[0].isBlocking() || (M % iim) || (K % iik) ||
    (elementType.isBF16() &&
     curInputLayouts[0] == TensorLayout({1, 0}, {}, {}))) {
  ALayouts.emplace_back(
      APackInfo.first, APackInfo.second,
      SmallVector<OpFoldResult>{rewriter.getIndexAttr(iim),
                                rewriter.getIndexAttr(iik)});
} else {
  ALayouts.emplace_back(APackInfo.first, SmallVector<int64_t>{},
                        SmallVector<OpFoldResult>{});
}
if (constantB || curInputLayouts[1].isBlocking() || K % iik || N % iin ||
    elementType.isBF16()) {
  BLayouts.emplace_back(
      BPackInfo.first, BPackInfo.second,
      SmallVector<OpFoldResult>{rewriter.getIndexAttr(iik),
                                rewriter.getIndexAttr(iin)});
Can there be some abstraction here to avoid platform specificity?
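As a rough illustration of one possible abstraction (all names here, `MatmulPackHeuristic` and `needsBlockedLayout`, are hypothetical and not part of this PR): the platform-dependent conditions (bf16/VNNI repacking, divisibility by iim/iik/iin) could sit behind a small query object populated from the target description analysis, so the propagation code only asks whether an operand should be packed and with which tile sizes:

```cpp
// Sketch only; every name below is a hypothetical placeholder, not an API
// from this PR. The idea is to keep all platform-specific knowledge
// (preferred tile sizes, whether bf16 needs VNNI repacking) in one place.
#include <cstdint>

struct MatmulPackHeuristic {
  int64_t iim, iik, iin; // preferred tiles, derived from the target description
  bool repackBF16;       // e.g. the target wants bf16 in a VNNI-friendly layout

  // True when an operand of shape (dim0 x dim1) should get a blocked layout.
  bool needsBlockedLayout(bool isConstant, bool alreadyBlocked, int64_t dim0,
                          int64_t dim1, int64_t tile0, int64_t tile1,
                          bool isBF16) const {
    return isConstant || alreadyBlocked || (dim0 % tile0) || (dim1 % tile1) ||
           (isBF16 && repackBF16);
  }
};

// The call site above would then stay target-agnostic, e.g.:
//   if (heuristic.needsBlockedLayout(constantA, curInputLayouts[0].isBlocking(),
//                                    M, K, heuristic.iim, heuristic.iik,
//                                    elementType.isBF16())) { ... }
```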
} else if (isa<linalg::SoftmaxOp>(linalgOp) || isa<linalg::MapOp>(linalgOp) ||
           isa<linalg::YieldOp>(linalgOp) || isa<linalg::IndexOp>(linalgOp)) {
Should it fail on generics too?
if (failed(packVNNIMMT4D(rewriter, linalgOp)))
  return failure();
return success();
Suggested change:
-  if (failed(packVNNIMMT4D(rewriter, linalgOp)))
-    return failure();
-  return success();
+  return packVNNIMMT4D(rewriter, linalgOp);
Add layout propagation related passes.
Tracking: #42