Replies: 1 comment 1 reply
-
Yes, this is how MPI works -- each process gets its own allocation, regardless of where it is "located".
I've never heard of an "MPI shared memory interface". That seems to defeat the point of MPI (large-scale distributed computing). However, meep does support shared-memory parallelism using OpenMP (which is also compatible with multiple processes launched using MPI). In this case, you can launch a single process on your machine (thereby creating a single array) but set the number of threads. Note that the current thread-level parallelism does not scale as well as the process-level parallelism, as described here.
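For example, assuming meep was built with OpenMP support, the thread count can be set with the standard OpenMP environment variable, e.g. `OMP_NUM_THREADS=16 python example.py` to run a single process with 16 threads sharing one copy of the array.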
Is there an actual memory leak? Or are you just running out of memory? Can you trace the leak with a memory profiler?
-
Dear authors,
I'm attempting to use several MPI processes with meep to simulate a very large volume. After checking the code, it seems that when passing a NumPy array to meep, the array is copied once for every process, even when the processes run on the same machine.
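For context, here is a minimal sketch of the kind of setup I mean (hypothetical, not my actual script: the grid size, materials, and the use of `mp.MaterialGrid` are placeholders):

```python
# Hypothetical sketch: a large NumPy array of material weights passed
# to meep. Every MPI rank executes this same script, so each rank
# builds and holds its own private copy of `w`.
import meep as mp
import numpy as np

nx, ny, nz = 300, 300, 300              # placeholder grid size
w = np.random.rand(nx, ny, nz)          # ~216 MB of float64 *per rank*

mg = mp.MaterialGrid(mp.Vector3(nx, ny, nz),
                     mp.air, mp.Medium(index=3.5),
                     weights=w)

cell = mp.Vector3(10, 10, 10)
sim = mp.Simulation(cell_size=cell,
                    resolution=20,
                    geometry=[mp.Block(center=mp.Vector3(),
                                       size=cell,
                                       material=mg)])
```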
This looks similar to issue #1082, where a segmentation fault occurred in the susceptibility object for large parallel simulations.
A possible workaround would be, instead of copying the data into a separate memory region in each process, to allocate the material information in shared memory only once, using C++ and the MPI shared-memory interface (e.g. `MPI_Win_allocate_shared`).
I attempted to load the NumPy array into shared memory before passing it to meep; however, the data is still copied multiple times.
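Roughly, such an attempt would follow the standard MPI-3 shared-memory pattern in mpi4py (a minimal sketch; the array shape and the `materials.npy` file are placeholders):

```python
# Minimal sketch: allocate one shared copy of the array per node with
# MPI-3 shared memory via mpi4py. Shape and file name are placeholders.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
# communicator containing only the ranks that share memory (same node)
node = comm.Split_type(MPI.COMM_TYPE_SHARED)

shape = (300, 300, 300)
itemsize = np.dtype(np.float64).itemsize
nbytes = int(np.prod(shape)) * itemsize if node.rank == 0 else 0

# rank 0 on each node allocates the window; the other ranks attach
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=node)
buf, _ = win.Shared_query(0)
w = np.ndarray(shape, dtype=np.float64, buffer=buf)

if node.rank == 0:
    w[...] = np.load("materials.npy")   # fill the single shared copy
node.Barrier()

# every rank on the node now reads the same memory through `w`, but
# meep may still make internal copies once `w` is handed to it.
```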
An `example.py` file is: [script omitted]

When running this script with `mpiexec -n 16 python example.py`, the following memory consumption is observed on the cluster: [memory usage listing omitted]. After some time the process ends with a segmentation fault due to insufficient memory: [error output omitted]