
Load backend dialects in IRSource to make sure parse_mlir_module works for third_party backends #5146

Merged (5 commits) · Nov 15, 2024

Conversation

anmyachev (Contributor):
The changes from #4924 do not take into account the situation where ttgir-level IR contains dialects defined in third_party plugins (at least, that's my understanding).

I'd also like to point out that the second use of the parse_mlir_module function (via a parse function call) likewise happens after the dialects for the backend are loaded, which is why I thought my changes make sense.

I hope this implementation suits Triton; if not, maybe someone can suggest other options.

…works for third_party backends

Signed-off-by: Anatoly Myachev <[email protected]>
self.path = path
path = Path(path)
self.ext = path.suffix[1:]
self.src = path.read_text()
ir.load_dialects(context)
if target is None:
target = driver.active.get_current_target()
assert isinstance(target, GPUTarget), "target must be of GPUTarget type"
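The constructor above loads the core dialects and, with this PR, also the active backend's, before any parse happens. A minimal self-contained sketch of that loading order, using stand-in classes rather than Triton's actual API (the dialect names and the toy parser are illustrative only):

```python
class Context:
    def __init__(self):
        self.dialects = set()

def load_core_dialects(ctx):
    # Stand-in for ir.load_dialects(context): registers the in-tree dialects.
    ctx.dialects.update({"triton", "triton_gpu", "triton_nvidia_gpu", "amdgpu"})

class IntelBackend:
    # Stand-in for a third_party backend's load_dialects().
    def load_dialects(self, ctx):
        ctx.dialects.add("triton_intel_gpu")  # hypothetical name; where DPAS would live

def parse_mlir_module(text, ctx):
    # Toy parser: treat every '#name.attr' token as a dialect attribute and
    # fail if the owning dialect was never registered with the context.
    for tok in text.split():
        if tok.startswith("#"):
            dialect = tok[1:].split(".")[0]
            if dialect not in ctx.dialects:
                raise ValueError(f"unregistered dialect: {dialect}")
    return "module"

ctx = Context()
load_core_dialects(ctx)            # what IRSource already did
IntelBackend().load_dialects(ctx)  # the step this PR adds
print(parse_mlir_module("#triton_intel_gpu.dpas", ctx))  # → module
```

Without the backend step, the same parse call would raise, which is the failure mode the PR description is about.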
Contributor:

Why must the target be a GPU target?

anmyachev (Contributor, Author):

I took this check from the compile function, just in case. However, looking at the code, it seems I can make target non-optional (or pass the backend directly, as suggested by @peterbell10); then there would be no need for the assert here. Does that make sense?
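The trade-off being discussed can be sketched as two alternative signatures. This is a toy model, not Triton's code: GPUTarget here only mirrors the shape of Triton's dataclass, and `current_target` stands in for `driver.active.get_current_target()`:

```python
from dataclasses import dataclass

@dataclass
class GPUTarget:
    # Mirrors the shape of Triton's GPUTarget; the field values below are illustrative.
    backend: str
    arch: int
    warp_size: int

def current_target():
    # Stand-in for driver.active.get_current_target().
    return GPUTarget("cuda", 80, 32)

# Option A (as in the PR): target is optional, so a defensive assert is kept.
def irsource_init_optional(target=None):
    if target is None:
        target = current_target()
    assert isinstance(target, GPUTarget), "target must be of GPUTarget type"
    return target

# Option B (the suggestion): require the target (or the backend) up front;
# the assert becomes unnecessary because the caller must always supply one.
def irsource_init_required(target: GPUTarget):
    return target
```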

@ThomasRaoux (Collaborator) left a comment:

I would expect all the dialects to be loaded at this time. I don't understand why it is different for third_party dialects: we have some third_party dialects in-tree, and they work as far as I know.

@anmyachev (Contributor, Author):

I would expect all the dialects to be loaded at this time. I don't understand why it is different for third_party dialects. We have some third_party dialect in tree and they work as far as I know

Let me start with the fact that the Nvidia and AMD layouts are used in the test file.

For testing, we add our own layout (DPAS), which is defined in an Intel-specific dialect and is loaded together with the other backend dialects. The Nvidia and AMD layouts, by contrast, are defined in the main GPU dialects and are loaded when the ir.load_dialects(context) function is called.

def NvidiaMmaEncodingAttr : DistributedEncoding<"NvidiaMmaEncoding", "nvidia_mma_encoding", [MmaEncodingTrait]> {

So at the moment everything works for Triton, but as soon as one wants to use backend-specific layouts in tests (namely, those defined in the third_party folder), be it NVIDIA or AMD, I believe the same error will start to appear there too.

@@ -270,10 +271,7 @@ def compile(src, target=None, options=None):
context = ir.context()
ir.load_dialects(context)
backend.load_dialects(context)
else:
# For IRSource, we have already grabbed the context + called ir.load_dialects
Contributor:

Please keep the comment, and just update it.

anmyachev (Contributor, Author):

Done.

Signed-off-by: Anatoly Myachev <[email protected]>
@peterbell10 peterbell10 enabled auto-merge (squash) November 15, 2024 15:06
@peterbell10 peterbell10 merged commit 1c51a7d into triton-lang:main Nov 15, 2024
7 checks passed
@anmyachev (Contributor, Author):

Thanks everyone for the review!
