Home
HPC node architectures are trending toward large numbers of cores/CPUs, as well as accelerators such as GPUs. To make better use of shared resources within a node and to program accelerators, users have turned to hybrid programming that combines MPI with node-level and data-parallel programming models. The goal of this working group is to improve the programmability and performance of MPI+X usage models.
Investigate support for MPI communication involving accelerators
- Hybrid programming of MPI + [CUDA, HIP, DPC++, ...]
- Host-initiated communication with accelerator memory
- Host-setup with accelerator triggering
- Host-setup, enqueued on a stream or queue
- Accelerator-initiated communication
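As a concrete illustration of the host-initiated model above, here is a minimal sketch, assuming a CUDA-aware MPI implementation (one that accepts device pointers directly in communication calls) and the CUDA runtime; rank 0 sends a device buffer to rank 1:

```c
/* Host-initiated MPI communication with accelerator memory.
 * Assumes a CUDA-aware MPI library; without one, the device buffer
 * must first be staged to host memory with cudaMemcpy. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1024;
    float *d_buf;                               /* device pointer */
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(float));
        /* The device pointer is passed directly to MPI; the library
         * stages or pipelines the transfer (e.g., via GPUDirect RDMA). */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks, e.g. `mpirun -n 2 ./a.out`. The stream-enqueued and accelerator-initiated variants listed above are the subject of ongoing proposals and are not expressible with standard MPI calls today.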
Investigate improved compatibility/efficiency for multithreaded MPI communication
- MPI + [Pthreads, OpenMP, C/C++ threading, TBB, ...]
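For the multithreaded case, an application requests a thread-support level at initialization. A minimal sketch checking for `MPI_THREAD_MULTIPLE`, the level required before threads from Pthreads, OpenMP, C/C++ threading, TBB, etc. may call MPI concurrently:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Request full thread support; 'provided' reports what the
     * library actually grants and may be lower than requested. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n",
                provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* From here, multiple threads may make MPI calls concurrently,
     * subject to the thread ordering rules under clarification (#117). */

    MPI_Finalize();
    return 0;
}
```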
- James Dinan -- jdinan (at) nvidia (dot) com
mpiwg-hybridpm (at) lists (dot) mpi-forum (dot) org
The HACC WG is currently sharing a meeting time with the Persistence WG.
Wednesdays, 10:00 AM - 11:00 AM ET
Meeting details and recordings are available here.
- Continuations proposal #6 (Joseph)
- Memory Allocation Kinds Side Document v2
- OpenMP (Edgar, Maria)
- OpenCL (Maria)
- Coherent Memory, std::par (Rohit)
- Accelerator bindings for partitioned communication #4 (Jim)
- File IO from GPUs (Edgar)
- Accelerator Synchronous MPI Operations #11 (Need someone to drive)
- MPI Teams / Helper Threads (Joseph)
- Clarification of thread ordering rules #117 (MPI 4.1)
- Integration with accelerator programming models:
- Accelerator info keys follow-on
- Memory allocation kind in MPI allocators (e.g., MPI_Win_allocate, MPI_Alloc_mem)
- Partitioned communication buffer preparation (shared with Persistence WG) #264
- Asynchronous operations #585
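A minimal sketch of how an application might query supported allocation kinds, assuming the `mpi_memory_alloc_kinds` info key from the Memory Allocation Kinds side document and an MPI 4.x library that provides `MPI_Info_get_string`:

```c
/* Query which memory allocation kinds (e.g., "system", "cuda") the
 * MPI library supports, per the Memory Allocation Kinds side document.
 * The key name and value format follow that document. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Comm_get_info(MPI_COMM_WORLD, &info);

    char value[256];
    int buflen = sizeof(value);
    int flag;
    MPI_Info_get_string(info, "mpi_memory_alloc_kinds",
                        &buflen, value, &flag);
    if (flag)
        printf("Supported allocation kinds: %s\n", value);

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
```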
- 1/17 -- Planning Meeting
- 1/31
- 2/7 -- Memory Alloc Kinds (Rohit Zambre)
- 2/14
- 2/28
- 3/6 -- Continuations (Joseph Schuchart)
- 3/20 -- MPI Forum Meeting in Chicago
- 3/27