
[Flang][OpenMP][Lower] Add lowering support of OpenMP distribute to MLIR #67798

Merged. 1 commit merged into llvm:main on Jun 12, 2024.

Conversation

@skatrak (Contributor) commented on Sep 29, 2023:

This patch adds support for lowering the OpenMP DISTRIBUTE directive from PFT to MLIR. It only supports standalone DISTRIBUTE; support for composite constructs will come in follow-up PRs.
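
For illustration, this is the kind of Fortran input the patch enables lowering for (a minimal sketch; the subroutine is hypothetical, and the actual test coverage lives in flang/test/Lower/OpenMP/distribute.f90):

subroutine simple_distribute(n)
  ! Illustrative example, not taken from the patch's tests.
  integer :: i, n
  !$omp teams
  !$omp distribute
  do i = 1, n
  end do
  !$omp end distribute
  !$omp end teams
end subroutine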

@llvmbot (Collaborator) commented on Sep 29, 2023:

@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-openmp
@llvm/pr-subscribers-flang-openmp

@llvm/pr-subscribers-flang-fir-hlfir

Changes

This patch adds support for lowering the OpenMP distribute directive from PFT to MLIR. This in turn unlocks support for several related combined loop constructs.

Depends on #67720.


Patch is 43.11 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/67798.diff

7 Files Affected:

  • (modified) flang/lib/Lower/OpenMP.cpp (+40-1)
  • (modified) flang/test/Lower/OpenMP/FIR/if-clause.f90 (+863-39)
  • (modified) flang/test/Lower/OpenMP/FIR/loop-combined.f90 (+149-13)
  • (added) flang/test/Lower/OpenMP/distribute.f90 (+117)
  • (modified) mlir/include/mlir/Dialect/OpenMP/OpenMPOps.td (+51)
  • (modified) mlir/lib/Dialect/OpenMP/IR/OpenMPDialect.cpp (+16)
  • (modified) mlir/test/Dialect/OpenMP/ops.mlir (+30)
diff --git a/flang/lib/Lower/OpenMP.cpp b/flang/lib/Lower/OpenMP.cpp
index 5f5e968eaaa6414..37ad86d6ac599aa 100644
--- a/flang/lib/Lower/OpenMP.cpp
+++ b/flang/lib/Lower/OpenMP.cpp
@@ -497,6 +497,9 @@ class ClauseProcessor {
   bool processDevice(Fortran::lower::StatementContext &stmtCtx,
                      mlir::Value &result) const;
   bool processDeviceType(mlir::omp::DeclareTargetDeviceType &result) const;
+  bool processDistSchedule(Fortran::lower::StatementContext &stmtCtx,
+                           mlir::UnitAttr &scheduleStatic,
+                           mlir::Value &chunkSize) const;
   bool processFinal(Fortran::lower::StatementContext &stmtCtx,
                     mlir::Value &result) const;
   bool processHint(mlir::IntegerAttr &result) const;
@@ -1335,6 +1338,19 @@ bool ClauseProcessor::processDeviceType(
   return false;
 }
 
+bool ClauseProcessor::processDistSchedule(
+    Fortran::lower::StatementContext &stmtCtx, mlir::UnitAttr &scheduleStatic,
+    mlir::Value &chunkSize) const {
+  if (auto *distScheduleClause = findUniqueClause<ClauseTy::DistSchedule>()) {
+    scheduleStatic = converter.getFirOpBuilder().getUnitAttr();
+    if (const auto *expr = Fortran::semantics::GetExpr(distScheduleClause->v)) {
+      chunkSize = fir::getBase(converter.genExprValue(*expr, stmtCtx));
+    }
+    return true;
+  }
+  return false;
+}
+
 bool ClauseProcessor::processFinal(Fortran::lower::StatementContext &stmtCtx,
                                    mlir::Value &result) const {
   const Fortran::parser::CharBlock *source = nullptr;
@@ -2473,6 +2489,27 @@ genTeamsOp(Fortran::lower::AbstractConverter &converter,
                                  reductionDeclSymbols));
 }
 
+static mlir::omp::DistributeOp
+genDistributeOp(Fortran::lower::AbstractConverter &converter,
+                Fortran::lower::pft::Evaluation &eval,
+                mlir::Location currentLocation,
+                const Fortran::parser::OmpClauseList &clauseList,
+                bool outerCombined = false) {
+  Fortran::lower::StatementContext stmtCtx;
+  mlir::UnitAttr scheduleStatic;
+  mlir::Value chunkSize;
+  llvm::SmallVector<mlir::Value> allocateOperands, allocatorOperands;
+
+  ClauseProcessor cp(converter, clauseList);
+  cp.processDistSchedule(stmtCtx, scheduleStatic, chunkSize);
+  cp.processAllocate(allocatorOperands, allocateOperands);
+
+  return genOpWithBody<mlir::omp::DistributeOp>(
+      converter, eval, currentLocation, outerCombined, &clauseList,
+      scheduleStatic, chunkSize, allocateOperands, allocatorOperands,
+      /*order_val=*/nullptr);
+}
+
 /// Extract the list of function and variable symbols affected by the given
 /// 'declare target' directive and return the intended device type for them.
 static mlir::omp::DeclareTargetDeviceType getDeclareTargetInfo(
@@ -2681,7 +2718,9 @@ static void genOMP(Fortran::lower::AbstractConverter &converter,
     }
     if (llvm::omp::allDistributeSet.test(ompDirective)) {
       validDirective = true;
-      TODO(currentLocation, "Distribute construct");
+      bool outerCombined = llvm::omp::topDistributeSet.test(ompDirective);
+      genDistributeOp(converter, eval, currentLocation, loopOpClauseList,
+                      outerCombined);
     }
     if ((llvm::omp::allParallelSet & llvm::omp::loopConstructSet)
             .test(ompDirective)) {
diff --git a/flang/test/Lower/OpenMP/FIR/if-clause.f90 b/flang/test/Lower/OpenMP/FIR/if-clause.f90
index ef98a00f10dbd21..bf77c3edaefed10 100644
--- a/flang/test/Lower/OpenMP/FIR/if-clause.f90
+++ b/flang/test/Lower/OpenMP/FIR/if-clause.f90
@@ -7,23 +7,147 @@ program main
   integer :: i
 
   ! TODO When they are supported, add tests for:
-  ! - DISTRIBUTE PARALLEL DO
-  ! - DISTRIBUTE PARALLEL DO SIMD
-  ! - DISTRIBUTE SIMD
   ! - PARALLEL SECTIONS
   ! - PARALLEL WORKSHARE
-  ! - TARGET PARALLEL
-  ! - TARGET TEAMS DISTRIBUTE
-  ! - TARGET TEAMS DISTRIBUTE PARALLEL DO
-  ! - TARGET TEAMS DISTRIBUTE PARALLEL DO SIMD
-  ! - TARGET TEAMS DISTRIBUTE SIMD
   ! - TARGET UPDATE
   ! - TASKLOOP
   ! - TASKLOOP SIMD
-  ! - TEAMS DISTRIBUTE
-  ! - TEAMS DISTRIBUTE PARALLEL DO
-  ! - TEAMS DISTRIBUTE PARALLEL DO SIMD
-  ! - TEAMS DISTRIBUTE SIMD
+
+  ! ----------------------------------------------------------------------------
+  ! DISTRIBUTE PARALLEL DO SIMD
+  ! ----------------------------------------------------------------------------
+  !$omp teams
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp distribute parallel do simd
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do simd
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp distribute parallel do simd if(.true.)
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do simd
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp distribute parallel do simd if(parallel: .true.) if(simd: .false.)
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do simd
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp distribute parallel do simd if(parallel: .true.)
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do simd
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp distribute parallel do simd if(simd: .true.)
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do simd
+
+  !$omp end teams
+
+  ! ----------------------------------------------------------------------------
+  ! DISTRIBUTE PARALLEL DO
+  ! ----------------------------------------------------------------------------
+  !$omp teams
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp distribute parallel do
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp distribute parallel do if(.true.)
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp distribute parallel do if(parallel: .true.)
+  do i = 1, 10
+  end do
+  !$omp end distribute parallel do
+
+  !$omp end teams
+
+  ! ----------------------------------------------------------------------------
+  ! DISTRIBUTE SIMD
+  ! ----------------------------------------------------------------------------
+  !$omp teams
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp distribute simd
+  do i = 1, 10
+  end do
+  !$omp end distribute simd
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp distribute simd if(.true.)
+  do i = 1, 10
+  end do
+  !$omp end distribute simd
+
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp distribute simd if(simd: .true.)
+  do i = 1, 10
+  end do
+  !$omp end distribute simd
+
+  !$omp end teams
 
   ! ----------------------------------------------------------------------------
   ! DO SIMD
@@ -362,6 +486,53 @@ program main
   end do
   !$omp end target parallel do simd
 
+  ! ----------------------------------------------------------------------------
+  ! TARGET PARALLEL
+  ! ----------------------------------------------------------------------------
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target parallel
+  i = 1
+  !$omp end target parallel
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp target parallel if(.true.)
+  i = 1
+  !$omp end target parallel
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp target parallel if(target: .true.) if(parallel: .false.)
+  i = 1
+  !$omp end target parallel
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target parallel if(target: .true.)
+  i = 1
+  !$omp end target parallel
+
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp target parallel if(parallel: .true.)
+  i = 1
+  !$omp end target parallel
+
   ! ----------------------------------------------------------------------------
   ! TARGET SIMD
   ! ----------------------------------------------------------------------------
@@ -415,71 +586,724 @@ program main
   !$omp end target simd
 
   ! ----------------------------------------------------------------------------
-  ! TARGET TEAMS
+  ! TARGET TEAMS DISTRIBUTE
   ! ----------------------------------------------------------------------------
-
   ! CHECK:      omp.target
   ! CHECK-NOT:  if({{.*}})
   ! CHECK-SAME: {
   ! CHECK:      omp.teams
   ! CHECK-NOT:  if({{.*}})
   ! CHECK-SAME: {
-  !$omp target teams
-  i = 1
-  !$omp end target teams
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute
 
   ! CHECK:      omp.target
   ! CHECK-SAME: if({{.*}})
   ! CHECK:      omp.teams
   ! CHECK-SAME: if({{.*}})
-  !$omp target teams if(.true.)
-  i = 1
-  !$omp end target teams
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute if(.true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute
 
   ! CHECK:      omp.target
   ! CHECK-SAME: if({{.*}})
   ! CHECK:      omp.teams
   ! CHECK-SAME: if({{.*}})
-  !$omp target teams if(target: .true.) if(teams: .false.)
-  i = 1
-  !$omp end target teams
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute if(target: .true.) if(teams: .false.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute
 
   ! CHECK:      omp.target
   ! CHECK-SAME: if({{.*}})
   ! CHECK:      omp.teams
   ! CHECK-NOT:  if({{.*}})
   ! CHECK-SAME: {
-  !$omp target teams if(target: .true.)
-  i = 1
-  !$omp end target teams
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute if(target: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute
 
   ! CHECK:      omp.target
   ! CHECK-NOT:  if({{.*}})
   ! CHECK-SAME: {
   ! CHECK:      omp.teams
   ! CHECK-SAME: if({{.*}})
-  !$omp target teams if(teams: .true.)
-  i = 1
-  !$omp end target teams
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute if(teams: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute
 
   ! ----------------------------------------------------------------------------
-  ! TASK
+  ! TARGET TEAMS DISTRIBUTE PARALLEL DO
   ! ----------------------------------------------------------------------------
-  ! CHECK:      omp.task
+  ! CHECK:      omp.target
   ! CHECK-NOT:  if({{.*}})
   ! CHECK-SAME: {
-  !$omp task
-  !$omp end task
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute parallel do
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do
 
-  ! CHECK:      omp.task
+  ! CHECK:      omp.target
   ! CHECK-SAME: if({{.*}})
-  !$omp task if(.true.)
-  !$omp end task
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute parallel do if(.true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do
 
-  ! CHECK:      omp.task
+  ! CHECK:      omp.target
   ! CHECK-SAME: if({{.*}})
-  !$omp task if(task: .true.)
-  !$omp end task
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute parallel do if(target: .true.) if(teams: .false.) if(parallel: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute parallel do if(target: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do
+
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute parallel do if(teams: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do
+
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute parallel do if(parallel: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do
+
+  ! ----------------------------------------------------------------------------
+  ! TARGET TEAMS DISTRIBUTE PARALLEL DO SIMD
+  ! ----------------------------------------------------------------------------
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute parallel do simd
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do simd
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute parallel do simd if(.true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do simd
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute parallel do simd if(target: .true.) if(teams: .false.) if(parallel: .true.) if(simd: .false.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do simd
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute parallel do simd if(target: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do simd
+
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute parallel do simd if(teams: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do simd
+
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute parallel do simd if(parallel: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do simd
+
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.parallel
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute parallel do simd if(simd: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute parallel do simd
+
+  ! ----------------------------------------------------------------------------
+  ! TARGET TEAMS DISTRIBUTE SIMD
+  ! ----------------------------------------------------------------------------
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute simd
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute simd
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute simd if(.true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute simd
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-SAME: if({{.*}})
+  !$omp target teams distribute simd if(target: .true.) if(teams: .false.) if(simd: .false.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute simd
+
+  ! CHECK:      omp.target
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.teams
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute simd if(target: .true.)
+  do i = 1, 10
+  end do
+  !$omp end target teams distribute simd
+
+  ! CHECK:      omp.target
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.teams
+  ! CHECK-SAME: if({{.*}})
+  ! CHECK:      omp.distribute
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  ! CHECK:      omp.simdloop
+  ! CHECK-NOT:  if({{.*}})
+  ! CHECK-SAME: {
+  !$omp target teams distribute simd if(t...
[truncated]
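
As context for the clause handling added above: processDistSchedule maps a DIST_SCHEDULE clause to a unit attribute (static schedule kind) plus an optional chunk-size operand on the new omp.distribute operation. A hedged sketch of the intended mapping follows; the printed attribute and operand spellings are assumptions inferred from the clause-processing code, not copied from the OpenMPOps.td changes:

! Fortran input (illustrative)
!$omp distribute dist_schedule(static, 4)
do i = 1, n
end do
!$omp end distribute

// Roughly the resulting wrapper (operand names assumed):
omp.distribute dist_schedule_static dist_schedule_chunk_size(%c4 : i32) {
  ...
}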

@skatrak changed the title from "[Flang][Lower] Add lowering support of OpenMP distribute to MLIR" to "[Flang][OpenMP][Lower] Add lowering support of OpenMP distribute to MLIR" on Sep 29, 2023
@skatrak (Contributor, Author) commented on Jan 25, 2024:

This PR can now be reviewed, since #67720, which added support for the omp.distribute MLIR operation, has been merged.

@TIFitis (Member) left a comment:


LGTM :)

Review thread on flang/lib/Lower/OpenMP.cpp (outdated, resolved):
bool ClauseProcessor::processDistSchedule(
Fortran::lower::StatementContext &stmtCtx, mlir::UnitAttr &scheduleStatic,
mlir::Value &chunkSize) const {
if (auto *distScheduleClause = findUniqueClause<ClauseTy::DistSchedule>()) {
Reviewer (Member) commented:

Nit: Expand auto here and below.

@skatrak (Contributor, Author) replied:

The returned type is spelled out in the initialization expression here and all variables obtained from calling Fortran::semantics::GetExpr in this file use auto like below, so I prefer following the same approach.

@DominikAdamski (Contributor) left a comment:

LGTM. Please update the tests before merging.

@kiranchandramohan (Contributor) left a comment:

This needs some more discussion or explanation. Could you construct a table of the lowerings for the various distribute combinations? Could you also mark if any of these are composite constructs?

@skatrak (Contributor, Author) commented on Jan 31, 2024:

> This needs some more discussion or explanation. Could you construct a table of the lowerings for the various distribute combinations? Could you also mark if any of these are composite constructs?

The different constructs where DISTRIBUTE appears at the top, using the OpenMP 5.0 spec (section 2.9.4) as reference, are listed in the table below. All of these can appear combined with TEAMS and TARGET, in which case they are represented the same way but nested inside those constructs, as follows:

!$omp teams distribute ...

omp.teams {
  omp.distribute {
    ...
  }
  omp.terminator
}
!$omp target teams distribute ...

omp.target {
  omp.teams {
    omp.distribute {
      ...
    }
    omp.terminator
  }
  omp.terminator
}

Construct: !$omp distribute (composite: no)

omp.distribute {
  omp.wsloop ... {
    ...
  }
  omp.terminator
}

Construct: !$omp distribute parallel do (composite: yes)

omp.distribute {
  omp.parallel {
    omp.wsloop ... {
      ...
    }
    omp.terminator
  }
  omp.terminator
}

Construct: !$omp distribute parallel do simd (composite: yes)

omp.distribute {
  omp.parallel {
    omp.wsloop ... {
      <omp.simd?>
      ...
    }
    omp.terminator
  }
  omp.terminator
}

Construct: !$omp distribute simd (composite: yes)

omp.distribute {
  omp.simdloop ... {
    ...
  }
  omp.terminator
}

@DominikAdamski (Contributor) commented:
PR #79843 relates to the lowering of omp distribute.

@skatrak (Contributor, Author) commented on May 9, 2024:

This patch has now been updated to follow the recently introduced loop-wrapper approach and is ready for review again. It only addresses standalone DISTRIBUTE; support for composite constructs is left for follow-up PRs.
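
As a rough sketch of what the loop-wrapper form means for standalone DISTRIBUTE: the omp.distribute region no longer holds arbitrary code, only a single omp.loop_nest. SSA names and the exact omp.loop_nest assembly syntax below are illustrative, not copied from the merged tests:

omp.distribute {
  omp.loop_nest (%i) : i32 = (%lb) to (%ub) inclusive step (%step) {
    // loop body
    omp.yield
  }
}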

@skatrak requested review from ergawy, kparzysz, and tblah on May 9, 2024.
@skatrak (Contributor, Author) commented on May 20, 2024:

Ping for review! I just updated the PR and addressed some conflicts with changes that landed recently.

@kiranchandramohan (Contributor) left a comment:

Some of the tests do not check for omp.loop_nest (in loop-combined.f90). Is that missed accidentally or by design?

@skatrak (Contributor, Author) commented on Jun 6, 2024:

> Some of the tests do not check for omp.loop_nest (in loop-combined.f90). Is that missed accidentally or by design?

Thank you Kiran for giving this another look. In all these cases an omp.loop_nest is expected, but that test mainly just checks loop wrappers, which is also the case for if-clause.f90. The thinking behind it is that the existence of a single loop operation closely nested inside all loop wrappers is tested for each of the wrappers separately (distribute.f90, wsloop*.f90, etc.), so I didn't think it necessary for combined/composite construct tests.

I agree it wouldn't hurt to add these checks, though perhaps that's something I could do in a follow-up PR updating both tests, rather than making unrelated test changes here. Let me know if that works for you.
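
For reference, a hedged sketch of the kind of check being discussed, as it might appear in loop-combined.f90 (the CHECK lines are illustrative, not taken from any actual follow-up patch):

! CHECK: omp.teams
! CHECK: omp.distribute
! CHECK: omp.loop_nest
!$omp teams distribute
do i = 1, 10
end do
!$omp end teams distribute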

@kiranchandramohan (Contributor) replied:

> Some of the tests do not check for omp.loop_nest (in loop-combined.f90). Is that missed accidentally or by design?
>
> Thank you Kiran for giving this another look. In all these cases an omp.loop_nest is expected, but that test mainly just checks loop wrappers, which is also the case for if-clause.f90. The thinking behind it is that the existence of a single loop operation closely nested inside all loop wrappers is tested for each of the wrappers separately (distribute.f90, wsloop*.f90, etc.), so I didn't think it necessary for combined/composite construct tests.
>
> I agree it wouldn't hurt to add these checks, though perhaps that's something I could do in a follow-up PR updating both tests, rather than making unrelated test changes here. Let me know if that works for you.

That is fine.

This patch adds support for lowering the OpenMP DISTRIBUTE directive from PFT to MLIR. It only supports standalone DISTRIBUTE; support for composite constructs will come in follow-up PRs.
@skatrak merged commit fc1c34b into llvm:main on Jun 12, 2024 (5 of 6 checks passed) and deleted the distribute-pft-mlir branch on Jun 12, 2024 at 11:34.
Labels: flang:fir-hlfir, flang:openmp, flang, mlir:openmp, mlir