Support parallel for operations, like data parallel training, model parallel training etc #3102
base: main
Conversation
Signed-off-by: typhoonzero <[email protected]>
force-pushed from b14c41c to ef1b41f

…_parallel_for_operations
Signed-off-by: typhoonzero <[email protected]>
force-pushed from 7d371cd to 5cb02ab

Signed-off-by: typhoonzero <[email protected]>
force-pushed from c6653bb to cb47ff4

Signed-off-by: typhoonzero <[email protected]>
force-pushed from cb47ff4 to ee6eb87
@akchinSTC Can you please check out this feature?
What is the current status of this PR? I still see the …
This PR is ready for review now. Work on the TODO list will continue after this feature is merged.
What changes were proposed in this pull request?
Support ParallelFor pipeline features for each operation. Set `parallel_count` > 2 to start parallel operations such as distributed training or distributed data processing. Features/limitations:

- `TF_CONFIG` is provided for TensorFlow, and `MASTER_ADDR`, `MASTER_PORT` for PyTorch.
- In some cases, workers with rank >= 1 should wait for rank 0 to start; the user can achieve this by waiting on rank 0's TCP server port (a minimal sketch follows this list).
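As an illustration only, and not code from this PR, the following Python sketch shows the kind of per-worker environment such a feature typically provides, plus the user-side wait on rank 0 described above. The helper names (`build_distributed_env`, `wait_for_rank0`), the `worker-{i}` host naming, the port, and the extra `RANK`/`WORLD_SIZE` variables (which PyTorch's `env://` initialization expects) are assumptions, not part of the PR.

```python
import json
import os
import socket
import time


def build_distributed_env(rank, parallel_count, host_pattern="worker-{}", port=2222):
    """Hypothetical helper: environment variables for one worker of a parallel operation."""
    workers = [f"{host_pattern.format(i)}:{port}" for i in range(parallel_count)]
    return {
        # TensorFlow's distribution strategies read cluster/task info from TF_CONFIG.
        "TF_CONFIG": json.dumps({
            "cluster": {"worker": workers},
            "task": {"type": "worker", "index": rank},
        }),
        # PyTorch's env:// initialization reads these variables.
        "MASTER_ADDR": host_pattern.format(0),
        "MASTER_PORT": str(port),
        "RANK": str(rank),
        "WORLD_SIZE": str(parallel_count),
    }


def wait_for_rank0(addr, port, timeout=600.0, interval=5.0):
    """User-side workaround: block until rank 0's TCP server port accepts connections."""
    deadline = time.time() + timeout
    while True:
        try:
            with socket.create_connection((addr, port), timeout=interval):
                return
        except OSError:
            if time.time() > deadline:
                raise TimeoutError(f"rank 0 at {addr}:{port} did not come up in time")
            time.sleep(interval)


if __name__ == "__main__":
    rank = int(os.environ.get("RANK", "1"))
    env = build_distributed_env(rank, parallel_count=2)
    os.environ.update(env)
    if rank >= 1:  # workers other than rank 0 wait before joining the job
        wait_for_rank0(env["MASTER_ADDR"], int(env["MASTER_PORT"]))
```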
How was this pull request tested?
Unit tests are included in `test_bootstrapper.py` to ensure that the `parallel_count` argument works (an illustrative, self-contained sketch of this kind of check follows).
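The actual unit tests live in `test_bootstrapper.py` and are not reproduced here. Purely as an illustration, here is a hypothetical, standard-library-only sketch of how the "wait for rank 0's port" behaviour could be checked; the `wait_for_port` helper and the test name are assumptions, not the PR's test code.

```python
import socket
import time


def wait_for_port(addr, port, timeout=10.0):
    """Hypothetical helper: return True once addr:port accepts TCP connections."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.1)
    return False


def test_worker_waits_for_rank0_port():
    # Simulate rank 0 by listening on an ephemeral local port.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]
    try:
        # A rank >= 1 worker polling this port should succeed quickly.
        assert wait_for_port("127.0.0.1", port, timeout=5.0)
    finally:
        server.close()
```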
TODO: