
Optimization of AMR with MPI in case of TreeMesh #1532

Closed · wants to merge 7 commits

Conversation

@ArseniyKholod (Contributor) commented Jun 16, 2023

Currently, if an AMRCallback wants to refine/coarsen a TreeMesh, every MPI rank has to do the full amount of work by itself.
I started the optimization with coarsening.
The idea is to distribute the cells to coarsen among the MPI ranks (each cell goes to the rank that owns it). Each rank then coarsens its local tree, the root collects the TreeMeshes from all ranks and merges them, and after completing this job the root sends the merged mesh back to all other processes.

```julia
function coarsen!(t::AbstractTree, cell_ids::AbstractArray{Int})
```

The hotspots of this algorithm are the MPI communications: they synchronize the processes, so some of them have to wait for the others, and transferring a whole TreeMesh costs both time and memory. I'm still looking for ways to make the MPI communication more efficient; a rough sketch of the pattern is below.
The algorithm for merging the TreeMeshes looks complicated because all cell ids have to be converted, but it seems to be quite fast (the id conversion is illustrated in the second sketch below).
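
For reference, here is a minimal sketch of the communication pattern described above, assuming MPI.jl's serializing `send`/`recv`/`bcast` (the exact keyword signatures depend on the MPI.jl version). `owner_rank` and `merge_trees` are hypothetical placeholders for the ownership lookup and the merging routine, not actual Trixi.jl functions:

```julia
using MPI

# Sketch only: each rank coarsens its share of the cells, the root gathers
# and merges the partial trees, then broadcasts the result back.
function coarsen_distributed!(tree, cell_ids::AbstractArray{Int})
    comm = MPI.COMM_WORLD
    rank = MPI.Comm_rank(comm)
    nranks = MPI.Comm_size(comm)

    # 1. Keep only the cells this rank owns (`owner_rank` is a placeholder).
    local_ids = [id for id in cell_ids if owner_rank(tree, id) == rank]
    coarsen!(tree, local_ids)

    # 2. Root receives the partially coarsened trees and merges them one by one.
    merged = tree
    if rank == 0
        for r in 1:(nranks - 1)
            other = MPI.recv(comm; source=r, tag=0)
            merged = merge_trees(merged, other)  # placeholder merge routine
        end
    else
        MPI.send(tree, comm; dest=0, tag=0)
    end

    # 3. Root broadcasts the merged tree to all ranks (serializing bcast).
    return MPI.bcast(merged, comm; root=0)
end
```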

I'm not sure that it will work for all elixirs, but I haven't found a counterexample yet.
Of course, if this algorithm turns out to make sense in terms of efficiency, I will rewrite it in a more readable way.
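
The second sketch illustrates the id conversion that makes the merge fiddly: when the cells of one tree are appended to another, every stored id has to be shifted into the new range. `parent_ids` here is a hypothetical flat array (id of each cell's parent, 0 = no parent), not the real TreeMesh layout:

```julia
# Illustration of the id conversion only; the real merge has to apply the
# same shift to parent, child, and neighbor ids alike.
function append_with_offset!(parent_ids::Vector{Int}, other::Vector{Int})
    offset = length(parent_ids)  # appended cells get ids after the existing ones
    for pid in other
        # keep the "no parent" marker, shift every real id into the new range
        push!(parent_ids, pid == 0 ? 0 : pid + offset)
    end
    return parent_ids
end
```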

@codecov (bot) commented Jun 16, 2023

Codecov Report

Merging #1532 (4d24895) into main (2bc1cc6) will increase coverage by 0.94%.
The diff coverage is 65.22%.

```diff
@@            Coverage Diff             @@
##             main    #1532      +/-   ##
==========================================
+ Coverage   94.84%   95.78%   +0.94%     
==========================================
  Files         363      364       +1     
  Lines       30622    31099     +477     
==========================================
+ Hits        29041    29786     +745     
+ Misses       1581     1313     -268     
```

| Flag | Coverage Δ |
|---|---|
| unittests | 95.78% <65.22%> (+0.94%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|---|---|
| src/meshes/abstract_tree.jl | 83.40% <65.22%> (-12.97%) ⬇️ |

... and 40 files with indirect coverage changes

@ArseniyKholod (Contributor, Author) commented Jun 17, 2023

In the last commit I verified that the algorithm passes all tests. However, the MPI communication costs a lot, so I will introduce a criterion that enables the parallel TreeMesh AMR only for a "large" number of cells to coarsen; a sketch of what I have in mind follows.
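
A hedged sketch of such a criterion: `mpi_isparallel` exists in Trixi.jl, but `PARALLEL_COARSEN_THRESHOLD`, `coarsen_parallel!`, and `coarsen_serial!` are hypothetical names, and the threshold value would have to be tuned by benchmarks:

```julia
# Fall back to the serial path for small workloads, where the MPI
# communication overhead dominates. The threshold value is an assumption.
const PARALLEL_COARSEN_THRESHOLD = 1000

function coarsen!(t::AbstractTree, cell_ids::AbstractArray{Int})
    if mpi_isparallel() && length(cell_ids) > PARALLEL_COARSEN_THRESHOLD
        coarsen_parallel!(t, cell_ids)  # distributed path, as sketched earlier
    else
        coarsen_serial!(t, cell_ids)    # existing single-rank algorithm
    end
end
```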

@ArseniyKholod (Contributor, Author) commented Jun 17, 2023

Hi @sloede!
The algorithm is not efficient yet because of the MPI communication, but it works correctly. I'll take a shot at improving the efficiency, and if sufficient efficiency turns out to be possible, I will rewrite the code and explain every part of it. Sorry that it is not written understandably yet; I added it part by part after each corresponding error and haven't tried to make it "beautiful" so far.

@ArseniyKholod changed the title from "Optimization of AMR with in case of TreeMesh" to "Optimization of AMR with MPI in case of TreeMesh" on Jun 17, 2023