PeleLMeX is a solver for high-fidelity reactive flow simulations, namely direct numerical simulation (DNS) and large eddy simulation (LES). The solver combines a low Mach number approach, adaptive mesh refinement (AMR), embedded boundary (EB) geometry treatment, and high performance computing (HPC) to provide a flexible tool for addressing research questions on platforms ranging from small workstations to the world's largest GPU-accelerated supercomputers. PeleLMeX has been used to study complex flame/turbulence interactions in RCCI engines and hydrogen combustion, as well as the effect of sustainable aviation fuels on gas turbine combustion.
PeleLMeX is part of the Pele Combustion Suite.
PeleLMeX is a non-subcycling version of PeleLM based on AMReX's AmrCore, borrowing from the incompressible solver incflo. It solves the multispecies reactive Navier-Stokes equations in the low Mach number limit, as described in the documentation. It inherits most of PeleLM's algorithmic features, but differs significantly in implementation, stemming from the non-subcycling approach. PeleLM is no longer under active development; PeleLMeX should be used for simulations of low Mach number reacting flows, and PeleC for simulations of flows with higher Mach numbers where compressibility effects are significant.
An overview of PeleLMeX controls is provided in the documentation.
The PeleLMeX governing equations and core algorithms are described in:
https://amrex-combustion.github.io/PeleLMeX/manual/html/Model.html#mathematical-background
https://amrex-combustion.github.io/PeleLMeX/manual/html/Model.html#pelelmex-algorithm
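As a rough sketch (see the Model page linked above for the exact formulation and notation), the low Mach number system evolves momentum, species mass fractions, and enthalpy, with the perturbational pressure π decoupled from the spatially uniform thermodynamic pressure:

```latex
% Condensed low Mach number system (schematic; exact terms in the Model docs)
\begin{aligned}
\frac{\partial (\rho \boldsymbol{u})}{\partial t}
  + \nabla \cdot \left(\rho \boldsymbol{u} \boldsymbol{u}\right)
  &= -\nabla \pi + \nabla \cdot \boldsymbol{\tau} + \rho \boldsymbol{g}, \\
\frac{\partial (\rho Y_m)}{\partial t}
  + \nabla \cdot \left(\rho \boldsymbol{u} Y_m\right)
  &= -\nabla \cdot \boldsymbol{\mathcal{F}}_m + \rho \dot{\omega}_m, \\
\frac{\partial (\rho h)}{\partial t}
  + \nabla \cdot \left(\rho \boldsymbol{u} h\right)
  &= \nabla \cdot \left( \lambda \nabla T \right)
   - \nabla \cdot \sum_m h_m(T)\, \boldsymbol{\mathcal{F}}_m,
\end{aligned}
```

closed by the equation of state evaluated at the ambient thermodynamic pressure, which yields a divergence constraint on the velocity field in place of a compressible continuity equation.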
A set of self-contained tutorials describing more complex problems is also provided:
https://amrex-combustion.github.io/PeleLMeX/manual/html/Tutorials.html
Compiling PeleLMeX requires a C++17 compatible compiler (GCC >= 8 or Clang >= 3.6) as well as CMake >= 3.23 for compiling the SUNDIALS third party library.
Most of the examples provided hereafter and in the tutorials use MPI to run in parallel. Although not mandatory, it is advised to build PeleLMeX with MPI support from the get-go if more than a single core is available to you. Either MPICH or Open MPI is a suitable option if MPI is not already available on your platform.
Finally, when building with GPU support, CUDA >= 11 is required with NVIDIA GPUs and ROCm >= 5.2 is required with AMD GPUs.
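A quick way to check that a toolchain meets these minimums is to compare version strings with `sort -V`. The helper below is purely illustrative and not part of PeleLMeX; it assumes GNU coreutils:

```shell
# version_ge A B: succeed when version A >= version B (relies on sort -V)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check the minimums quoted above, skipping tools that are absent.
if command -v g++ >/dev/null; then
  v=$(g++ -dumpfullversion 2>/dev/null || g++ -dumpversion)
  version_ge "$v" 8 && echo "g++ $v: OK" || echo "g++ $v: need >= 8"
fi
if command -v cmake >/dev/null; then
  v=$(cmake --version | head -n1 | awk '{print $3}')
  version_ge "$v" 3.23 && echo "cmake $v: OK" || echo "cmake $v: need >= 3.23"
fi
```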
The preferred method consists of cloning PeleLMeX and its submodules (PelePhysics, amrex, AMReX-Hydro, and SUNDIALS) using a recursive git clone:
git clone --recursive --shallow-submodules --single-branch https://github.com/AMReX-Combustion/PeleLMeX.git
The --shallow-submodules and --single-branch flags are recommended for most users, as they substantially reduce the size of the download by skipping extraneous parts of the git history. Developers may wish to omit these flags in order to download the complete git history of PeleLMeX and its submodules, though standard git commands can also be used after a shallow clone to obtain the skipped portions if needed.
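For example, a shallow clone can later be converted into a full one with git's --unshallow fetch. The snippet below demonstrates this on a throwaway local repository so it is self-contained (the real PeleLMeX download is large); the same fetch commands apply to a shallow PeleLMeX checkout:

```shell
# Build a tiny two-commit repository to stand in for the real remote.
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "first"
git -C "$tmp/src" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "second"

# A depth-1 clone only sees the most recent commit...
git clone -q --depth 1 "file://$tmp/src" "$tmp/shallow"
git -C "$tmp/shallow" rev-list --count HEAD    # -> 1

# ...and fetch --unshallow retrieves the history that was skipped.
git -C "$tmp/shallow" fetch -q --unshallow
git -C "$tmp/shallow" rev-list --count HEAD    # -> 2

# In a real PeleLMeX checkout, repeat for each submodule:
# git submodule foreach 'git fetch --unshallow'
```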
Alternatively, you can use a separate git clone of each of the submodules.
The default location for the PeleLMeX dependencies is the Submodules folder, but you can optionally set up the following environment variables (e.g., using bash) to point to any other location:
export PELE_HOME=<path_to_PeleLMeX>
export AMREX_HYDRO_HOME=${PELE_HOME}/Submodules/AMReX-Hydro
export PELE_PHYSICS_HOME=${PELE_HOME}/Submodules/PelePhysics
export AMREX_HOME=${PELE_PHYSICS_HOME}/Submodules/amrex
export SUNDIALS_HOME=${PELE_PHYSICS_HOME}/Submodules/sundials
Both GNUmake and CMake can be used to build PeleLMeX executables. GNUmake is the preferred choice for building a single executable to run production simulations, while CMake is the preferred method for automatically building and testing most of the available executables.
The code handling the initial condition and boundary conditions is unique to each case,
and subfolders in the Exec
directory provide a number of examples.
For instance, to compile the executable for the case of a rising hot bubble,
move into the HotBubble
folder:
cd PeleLMeX/Exec/RegTests/HotBubble
If this is a clean install, you will need to build the third party libraries first with: make TPL (note: on macOS, you might need to specify COMP=llvm in the make statements).
Finally, make with: make -j (or, on macOS: make -j COMP=llvm). To clean the installation, use either make clean or make realclean. If you run into compile errors after changing compile-time options in PeleLMeX (e.g., the chemical mechanism), the first thing to try is to clean your build by running make TPLrealclean && make realclean, then rebuild the third party libraries and PeleLMeX with make TPL && make -j. See the Tutorial for this case for instructions on how to compile with different options (for example, to compile without MPI support or to compile for GPUs) and how to run the code once compiled.
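Such compile-time options follow the usual AMReX GNUmake conventions; a few common variables are sketched below (names per AMReX's build system; check the case's GNUmakefile and input files for the options actually exposed):

```shell
make -j 8 USE_MPI=FALSE   # serial build, no MPI
make -j 8 USE_CUDA=TRUE   # build for NVIDIA GPUs
make -j 8 USE_HIP=TRUE    # build for AMD GPUs
make -j 8 DEBUG=TRUE      # debug build with assertions enabled
```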
To compile and test using CMake, refer to the example cmake.sh script in the Build directory, or to the GitHub Actions workflows in the .github/workflows directory.
Do you have a question? Found an issue? Please use GitHub Discussions to engage with the development team, or open a new GitHub issue to report a bug. The development team also encourages users to take an active role in respectfully answering each other's questions in these spaces. When reporting a bug, it is helpful to provide as much detail as possible, including a case description and the major compile-time and runtime options being used. Though not required, the most effective approach is to create a fork of this repository and share a branch of that fork with a case that minimally reproduces the error.
New contributions to PeleLMeX are welcome! Contributing Guidelines are provided in CONTRIBUTING.md.
This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations -- the Office of Science and the National Nuclear Security Administration -- responsible for the planning and preparation of a capable exascale ecosystem -- including software, applications, hardware, advanced system engineering, and early testbed platforms -- to support the nation's exascale computing imperative.
To cite PeleLMeX, please use the following references for PeleLMeX and the Pele software suite:
@article{PeleLMeX_JOSS,
doi = {10.21105/joss.05450},
url = {https://doi.org/10.21105/joss.05450},
year = {2023},
month = oct,
publisher = {The Open Journal},
volume = {8},
number = {90},
pages = {5450},
author = {Lucas Esclapez and Marc Day and John Bell and Anne Felden and Candace Gilet and Ray Grout and Marc {Henry de Frahan} and Emmanuel Motheau and Andrew Nonaka and Landon Owen and Bruce Perry and Jon Rood and Nicolas Wimer and Weiqun Zhang},
journal = {Journal of Open Source Software},
title = {{PeleLMeX: an AMR Low Mach Number Reactive Flow Simulation Code without level sub-cycling}}
}
@inproceedings{PeleSoftware,
author = {Marc T. {Henry de Frahan} and Lucas Esclapez and Jon Rood and Nicholas T. Wimer and Paul Mullowney and Bruce A. Perry and Landon Owen and Hariswaran Sitaraman and Shashank Yellapantula and Malik Hassanaly and Mohammad J. Rahimi and Michael J. Martin and Olga A. Doronina and Sreejith N. A. and Martin Rieth and Wenjun Ge and Ramanan Sankaran and Ann S. Almgren and Weiqun Zhang and John B. Bell and Ray Grout and Marc S. Day and Jacqueline H. Chen},
title = {The Pele Simulation Suite for Reacting Flows at Exascale},
booktitle = {Proceedings of the 2024 SIAM Conference on Parallel Processing for Scientific Computing},
pages = {13-25},
doi = {10.1137/1.9781611977967.2},
url = {https://epubs.siam.org/doi/abs/10.1137/1.9781611977967.2},
eprint = {https://epubs.siam.org/doi/pdf/10.1137/1.9781611977967.2},
year = {2024},
publisher = {Society for Industrial and Applied Mathematics}
}