September 29, 2021
Agenda:
- Message ordering in the case of multiple threads (#117).
- The draft continuations proposal.
Action items:
- Jim to update Hybrid WG fork of MPI spec and follow up with Joseph
- Jim to follow up with Bill and Rolf on original intent of message ordering text
  - Update: Bill and Rolf indicated there was no consensus at the time the text was written
Completion continuations:
- Discuss the interface between MPI_Continueall and the function signature of the callback (see the sketch after this list)
- Requests are not passed to the callback
- The array of statuses may need to be allocated by the MPI runtime
- Tied to proposal to update MPI_Test etc from INOUT to IN
- Need to understand interaction with persistent operations (example would be helpful)
- What is the interaction with the threading level?
  - In MPI_THREAD_FUNNELED, do callbacks need to occur on the main thread?
  - In MPI_THREAD_SERIALIZED, MPI would need to ensure that asynchronous callbacks don't violate the threading model
  - In MPI_THREAD_SINGLE, the application would need to be inside an MPI call in order for the continuation to be processed
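To ground the callback-signature discussion, a minimal sketch in C is shown below. It assumes a callback shape of "error code plus user data," as in the draft proposal; the typedef name and the commented-out MPIX_Continueall registration are illustrative assumptions, not the final interface.

```c
#include <mpi.h>
#include <stdio.h>

/* Assumed callback shape for the draft continuations proposal: the
 * completed requests are NOT passed to the callback, only an error
 * code and the user-supplied state pointer. */
typedef int (MPIX_Continue_cb_function)(int rc, void *cb_data);

/* Example callback: the statuses were provided by the caller when the
 * continuation was registered; per the discussion, they might instead
 * need to be allocated by the MPI runtime. */
static int on_complete(int rc, void *cb_data)
{
    MPI_Status *statuses = (MPI_Status *)cb_data;
    if (rc == MPI_SUCCESS)
        printf("both requests completed; first source = %d\n",
               statuses[0].MPI_SOURCE);
    return MPI_SUCCESS;
}

/* Hypothetical registration against two active requests (the argument
 * list is a sketch only):
 *
 *   MPIX_Continueall(2, reqs, on_complete, statuses, 0, cont_req);
 *
 * After registration the requests are handed back to MPI, which is why
 * the callback itself cannot receive them. */
```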
Logically concurrent issue:
- Discussion: https://github.com/mpi-forum/mpi-issues/issues/117
- Summary: There has been an ambiguity since MPI-2 in MPI_THREAD_MULTIPLE message ordering. Two options were discussed (see the code example at the end of these notes):
  - Option 1: In an MPI_THREAD_MULTIPLE execution, the MPI library must perform the requested operations (e.g., send or recv) in the order in which it sees them, for example when the programmer enforces an order on the MPI calls made by the threads in an MPI process on the same communicator.
    - Pro: Possible to establish ordering across threads. Follows the principle of least surprise.
    - Con: May render some MPI libraries nonconformant for MPI_THREAD_MULTIPLE (none known).
  - Option 2: In an MPI_THREAD_MULTIPLE execution, MPI calls made by different threads are always unordered, even if the programmer uses thread synchronization to enforce an order on the calls made by the threads in a given MPI process.
    - Pro: Could allow for better scaling and lower overhead of multithreaded communication.
    - Con: May break some applications. Difficult for tools/libraries that have to work with whatever thread level the application chooses.
- Proposed text (Option 1): https://github.com/mpi-forum/mpi-standard/pull/627
- Open Question (Martin): Does this require MPI_THREAD_MULTIPLE to have a centralized implementation, e.g., in a highly multithreaded execution environment?
  - Collectives: Already require ordering for nonblocking collectives.
  - Point-to-point: A per-communicator info key mpi_assert_allow_thread_overtaking (see the sketch at the end of these notes), or a threading mode MPI_THREAD_CONCURRENT, or instruct users to use different communicators to create different ordering domains.
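The code example below (written for these notes, not taken from the proposal) illustrates the ambiguity the two options resolve differently. On rank 0, two threads use a mutex and condition variable so that thread 0's send has returned before thread 1's send is issued, on the same communicator and tag. Under Option 1 the receiver could rely on matching the messages in that order; under Option 2 the two sends remain logically concurrent and may match in either order.

```c
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int first_send_done = 0;

/* Thread 0: sends first, then signals thread 1. */
static void *send_first(void *arg)
{
    int payload = 1;
    MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    pthread_mutex_lock(&lock);
    first_send_done = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Thread 1: waits until thread 0's MPI_Send has returned, then sends. */
static void *send_second(void *arg)
{
    int payload = 2;
    pthread_mutex_lock(&lock);
    while (!first_send_done)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, send_first, NULL);
        pthread_create(&t1, NULL, send_second, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
    } else if (rank == 1) {
        int a, b;
        /* Option 1 would guarantee a == 1 and b == 2; Option 2 allows
         * the two sends to match in either order. */
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d then %d\n", a, b);
    }

    MPI_Finalize();
    return 0;
}
```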
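For the point-to-point direction, the sketch below shows how a per-communicator assertion could be attached using the existing MPI_Comm_set_info mechanism. MPI_Info_create, MPI_Info_set, and MPI_Comm_set_info are standard calls; the mpi_assert_allow_thread_overtaking key is only the idea discussed above, not a standardized key (MPI ignores info keys it does not recognize, so the call is harmless either way).

```c
#include <mpi.h>

/* Attach the discussed (hypothetical) assertion to a communicator,
 * indicating that MPI may relax cross-thread message ordering on it.
 * Only the key name is a proposal; the info-setting calls are standard. */
static void allow_thread_overtaking(MPI_Comm comm)
{
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "mpi_assert_allow_thread_overtaking", "true");
    MPI_Comm_set_info(comm, info);
    MPI_Info_free(&info);
}
```

The last alternative noted above, giving each thread its own communicator, achieves a similar effect without a new key, since each communicator forms its own ordering domain.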