diff --git a/docs/polaris/workflows/libensemble.md b/docs/polaris/workflows/libensemble.md
index c0624963c..58ad34bc9 100644
--- a/docs/polaris/workflows/libensemble.md
+++ b/docs/polaris/workflows/libensemble.md
@@ -1,6 +1,13 @@
 # libEnsemble
 
-libEnsemble is a Python toolkit for running dynamic ensembles of calculations. Users provide generator and simulator functions to express their ensembles, where the generator can steer the ensemble based on previous results. A library of example functions is available which can be modified as needed. These functions can submit external executables at any scale and in a portable way. System details are detected, and dynamic resource management is provided. libEnsemble can be used in a consistent manner on laptops, clusters, and supercomputers with minimal required dependencies.
+[libEnsemble](https://libensemble.readthedocs.io/en/main/) is a Python toolkit for running dynamic ensembles of calculations.
+
+Users provide generator and simulator functions to express their ensembles, where the generator can steer the ensemble based on previous results. These functions can portably submit external executables at any scale.
+
+System details are detected, and dynamic resource management is provided. This includes automatically detecting, assigning, and reassigning
+GPUs for ensemble members.
+
+libEnsemble can be used in a consistent manner on laptops, clusters, and supercomputers with minimal required dependencies.
 
 ## Getting libEnsemble on Polaris
 
@@ -22,7 +29,7 @@ See the docs for more details on using [python on Polaris](https://docs.alcf.anl
     . /path/to-venv/bin/activate
     ```
 
-    Where /path/to-venv can be anywhere you have write access.
+    Where ``/path/to-venv`` can be anywhere you have write access.
     For future uses just load the conda module and run the activate line.
 
 You can also ensure you are using the latest version of libEnsemble:
@@ -35,25 +42,23 @@ See the docs for more details on using [python on Polaris](https://docs.alcf.anl
 
 ## libEnsemble examples
 
-For a very simple example of using libEnsemble see the [Simple Sine tutorial](https://libensemble.readthedocs.io/en/main/tutorials/local_sine_tutorial.html)
+For a very simple example of using libEnsemble, see the [Simple Introduction tutorial](https://libensemble.readthedocs.io/en/main/tutorials/local_sine_tutorial.html).
 
 For an example that runs a small ensemble using a C application (offloading work to the GPU), see
-[the GPU app tutorial](https://libensemble.readthedocs.io/en/main/tutorials/forces_gpu_tutorial.html). The required files for the this tutorial can be found in [this directory](https://github.com/Libensemble/libensemble/tree/develop/libensemble/tests/scaling_tests/forces). Also, see the
-[video demo](https://youtu.be/Ff0dYYLQzoU).
-
-Note that when initializing the MPIExecutor on Polaris (**run_libe_forces.py** in the example), you currently need to use the following options to pick up the correct MPI runner:
-
-    exctr = MPIExecutor(custom_info={'mpi_runner':'mpich', 'runner_name':'mpiexec'})
+[the GPU app tutorial](https://libensemble.readthedocs.io/en/main/tutorials/forces_gpu_tutorial.html).
+The required files for this tutorial can be found
+in [this directory](https://github.com/Libensemble/libensemble/tree/main/libensemble/tests/scaling_tests/forces).
+A [video demo](https://youtu.be/Ff0dYYLQzoU) is also available.
 
 ## Job Submission
 
-libEnsemble runs on the compute nodes on Polaris using either
-``multi-processing`` or ``mpi4py``. The user can set the number of workers for
+libEnsemble runs on the compute nodes on Polaris using either Python's
+``multiprocessing`` or ``mpi4py``. The user can set the number of workers for
 maximum concurrency. libEnsemble will detect the nodes available from the PBS
 environment and use these for running simulations. Polaris supports
-running multiple concurrent simulations on each node if desired,
+running multiple concurrent simulations on each node if desired.
 
-A simple example batch script for a libEnsemble use case that runs four workers on one node:
+A simple example batch script for a libEnsemble use case that runs five workers on one node:
 
 ```shell
 #!/bin/bash -l
@@ -65,7 +70,7 @@ A simple example batch script for a libEnsemble use case that runs four workers
 
 export MPICH_GPU_SUPPORT_ENABLED=1
 cd $PBS_O_WORKDIR
-python run_libe_forces.py --comms local --nworkers 4
+python run_libe_forces.py --comms local --nworkers 5
 ```
 
 The script can be run with: