HPC node architectures are trending toward large numbers of cores/CPUs, as well as accelerators such as GPUs. To make better use of shared resources within a node and to program accelerators, users have turned to hybrid programming that combines MPI with node-level and data-parallel programming models. The goal of this working group is to improve the programmability and performance of MPI+X usage models.
Investigate support for MPI communication involving accelerators:
- Hybrid programming of MPI + [CUDA, HIP, DPC++, ...]
- Host-initiated communication with accelerator memory (see the sketch after this list)
- Host-setup with accelerator triggering
- Host-setup, enqueued on a stream or queue
- Accelerator-initiated communication
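
A minimal sketch of the host-initiated model above, assuming a CUDA-aware MPI implementation that accepts device pointers (an implementation capability, not something the MPI standard itself guarantees); the buffer size and message pattern are arbitrary choices for illustration. The stream/queue-enqueued and accelerator-initiated models are still proposals and have no standardized API to show here.

```c
/* Sketch: host-initiated MPI communication with accelerator memory.
 * Assumes a CUDA-aware MPI library that accepts device pointers.
 * Run with at least 2 ranks, e.g.: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *d_buf;                          /* accelerator (device) memory */
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    /* The host initiates the transfer, passing the device pointer directly
     * to MPI; the library moves the data to/from GPU memory. */
    if (rank == 0)
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```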
Investigate improved compatibility/efficiency for multithreaded MPI communication:
- MPI + [Pthreads, OpenMP, C/C++ threading, TBB, ...] (see the sketch below)
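
A minimal sketch of the MPI + OpenMP case, assuming the MPI library provides MPI_THREAD_MULTIPLE and that every rank runs the same number of OpenMP threads; each thread issues its own MPI calls and uses its thread ID as the message tag so that matching stays per-thread.

```c
/* Sketch: multithreaded MPI communication with OpenMP threads.
 * Assumes MPI_THREAD_MULTIPLE support and an equal thread count per rank. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE is not supported\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        int tid   = omp_get_thread_num();
        int right = (rank + 1) % size;
        int left  = (rank + size - 1) % size;
        int sendval = rank * 1000 + tid, recvval = -1;

        /* Each thread communicates concurrently in a ring; the per-thread
         * tag keeps messages from different thread pairs apart. */
        MPI_Sendrecv(&sendval, 1, MPI_INT, right, tid,
                     &recvval, 1, MPI_INT, left,  tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```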
- Contact: James Dinan -- jdinan (at) nvidia (dot) com
- Mailing list: mpiwg-hybridpm (at) lists (dot) mpi-forum (dot) org -- Subscribe
The HACC WG is currently sharing a meeting time with the Persistence WG.
Wednesdays, 10:00 AM - 11:00 AM ET
Meeting details and recordings are available here.
- Continuations proposal #6
- Clarification of thread ordering rules #117 (MPI 4.1)
- Integration with accelerator programming models:
  - Asynchronous operations #585
- 4/26 -- Joseph Schuchart - Continuations
- 5/3 -- MPI Forum (No meeting)
- 5/10 -- Open
- 5/17 -- Rohit Zambre - Memory allocation kinds side document
- 5/24 -- ISC (No meeting)
- 5/31 -- Quincey Koziol - Truly async MPI operations
- 6/7 -- Open
- 6/14 -- Open
- 6/21 -- Open
- 6/28 -- Open
- 7/5 -- Open
- 7/12 -- MPI Forum (No meeting)
- 7/19 -- Open
- 7/26 -- Memory Allocation Kinds Side Document (Rohit)
- 8/2 -- MPI Object Handles and GPUs (Edgar)
- 8/9 -- Open
- 8/16 -- Open (Jim OOO)
- 8/23 -- Open
- 8/30 -- Open