Change init method from fork-server to spawn (#2427)
Summary:
Pull Request resolved: #2427

It seems the AMD GPU (HIP) runtime doesn't work with forkserver, for reasons we still need to debug: hipMalloc fails in the subprocesses. Switching the init method to spawn works around the failure and should otherwise be fine.
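The selection logic in this commit can be sketched without a torch dependency. Here `pick_start_method` is a hypothetical helper standing in for the inline `torch.version.hip` check: on CUDA/CPU builds `torch.version.hip` is `None`, while ROCm builds carry a version string.

```python
import multiprocessing


def pick_start_method(hip_version):
    # hip_version mirrors torch.version.hip: None on CUDA/CPU builds,
    # a version string (e.g. "6.0") on ROCm builds. forkserver is kept
    # as the default; spawn is used on HIP, where hipMalloc fails in
    # forkserver children.
    return "forkserver" if hip_version is None else "spawn"


# A spawn context starts each child from a fresh interpreter, so no GPU
# runtime state is inherited from the parent process.
ctx = multiprocessing.get_context(pick_start_method("6.0"))
print(type(ctx).__name__)
```

spawn is slower than forkserver (each worker re-imports the test module), but for a test harness that trade-off is usually acceptable in exchange for working on both runtimes.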

Reviewed By: joebos

Differential Revision: D63311340
xw285cornell authored and facebook-github-bot committed Sep 26, 2024
1 parent 68180e6 commit 1dee4ef
Showing 1 changed file with 7 additions and 2 deletions.
9 changes: 7 additions & 2 deletions torchrec/distributed/test_utils/multi_process.py
@@ -24,6 +24,11 @@
 )
 
 
+# AMD's HIP runtime doesn't seem to work with forkserver; hipMalloc will fail.
+# Therefore we use spawn for the HIP runtime until AMD fixes the issue.
+_MP_INIT_MODE = "forkserver" if torch.version.hip is None else "spawn"
+
+
 class MultiProcessContext:
     def __init__(
         self,
@@ -126,7 +131,7 @@ def _run_multi_process_test(
     # pyre-ignore
     **kwargs,
 ) -> None:
-    ctx = multiprocessing.get_context("forkserver")
+    ctx = multiprocessing.get_context(_MP_INIT_MODE)
     processes = []
     for rank in range(world_size):
         kwargs["rank"] = rank
@@ -152,7 +157,7 @@ def _run_multi_process_test_per_rank(
     world_size: int,
     kwargs_per_rank: List[Dict[str, Any]],
 ) -> None:
-    ctx = multiprocessing.get_context("forkserver")
+    ctx = multiprocessing.get_context(_MP_INIT_MODE)
     processes = []
     for rank in range(world_size):
         kwargs = {}
