Dump load docs (#1784)
* call self._evaluate_dft_objects() before loading saved fields

* remove extraneous log statements

* Update docs for the simulation state dump/load methods to cover the new methods added to support field dump/load.
kkg4theweb authored Oct 13, 2021
1 parent b9efcc6 commit 5de8230
Showing 2 changed files with 179 additions and 8 deletions.
129 changes: 125 additions & 4 deletions doc/docs/Python_User_Interface.md
@@ -2311,11 +2311,21 @@ is a 1d array of `nfreq` dimensions.
</div>


### Load and Dump Simulation State

The functions below add support for saving and restoring parts of the Meep
`Simulation` state.

For all functions below, when dumping/loading state to/from a distributed filesystem
(using, say, parallel HDF5) while running in an MPI environment, setting
`single_parallel_file=True` (the default) causes all processes to write/read
to/from a single shared file after computing their respective offsets into that file.
When set to `False`, each process writes/reads the data for the chunks it owns
to/from a separate, process-unique file.
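Conceptually, the `single_parallel_file=True` mode amounts to an exclusive prefix sum over chunk sizes: process *i* writes its data at the offset equal to the total size of all lower-ranked chunks. A schematic sketch of that offset computation (illustrative only, not Meep's actual implementation):

```python
def chunk_offsets(chunk_sizes):
    """Exclusive prefix sum: process i writes its data at offset
    sum(chunk_sizes[:i]) into the single shared file."""
    offsets, total = [], 0
    for size in chunk_sizes:
        offsets.append(total)
        total += size
    return offsets, total

# Three processes owning chunks of 3, 5, and 2 elements write at
# offsets 0, 3, and 8 of a 10-element file.
offsets, total = chunk_offsets([3, 5, 2])
```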

#### Load and Dump Structure

These functions dump the raw ε and μ data to disk and load it back for doing multiple simulations with the same materials but different sources etc. The only prerequisite is that the dump/load simulations have the same [chunks](Chunks_and_Symmetry.md) (i.e. the same grid, number of processors, symmetries, and PML). When using `split_chunks_evenly=False`, you must also dump the original chunk layout using `dump_chunk_layout` and load it into the new `Simulation` using the `chunk_layout` parameter. Currently only stores dispersive and non-dispersive $\varepsilon$ and $\mu$ but not nonlinearities. Note that loading data from a file in this way overwrites any `geometry` data passed to the `Simulation` constructor.


<a id="Simulation.dump_structure"></a>
@@ -2350,6 +2360,9 @@ Loads a structure from the file `fname`.

</div>
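A hedged sketch of the structure dump/load round trip (`make_sim` is a hypothetical user-supplied function that must build an identically chunked `Simulation` in both runs):

```python
# Sketch: reuse a dumped structure across runs. `make_sim` is a
# hypothetical helper that returns an identically configured
# mp.Simulation (same grid, processors, symmetries, and PML).

def first_run(make_sim, fname):
    sim = make_sim()
    sim.init_sim()             # create the structure
    sim.dump_structure(fname)  # save raw epsilon/mu to disk
    return sim

def second_run(make_sim, fname):
    sim = make_sim()           # must match the first run's chunks
    sim.load_structure(fname)  # overwrites any `geometry` passed above
    sim.run(until=200)
    return sim
```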

#### Load and Dump Chunk Layout


<a id="Simulation.dump_chunk_layout"></a>

<div class="class_members" markdown="1">
@@ -2366,7 +2379,6 @@ Dumps the chunk layout to file `fname`.

</div>


To load a chunk layout into a `Simulation`, use the `chunk_layout` argument to the constructor, passing either a file obtained from `dump_chunk_layout` or another `Simulation` instance. Note that when using `split_chunks_evenly=False` this parameter is required when saving and loading flux spectra, force spectra, or near-to-far spectra so that the two runs have the same chunk layout. Just pass the `Simulation` object from the first run to the second run:

```python
@@ -2383,6 +2395,115 @@ sim2.load_minus_flux(...)
sim2.run(...)
```

#### Load and Dump Fields

These functions can be used to dump (and later load) the raw fields, the auxiliary
fields, and the DFT fields at a given timestamp. The timestamp at which the
dump happens is also saved so that the simulation can continue from where it was saved.
The one prerequisite of this feature is that the `Simulation` object must have
been set up *exactly* the same as the one it was dumped from.


<a id="Simulation.dump_fields"></a>

<div class="class_members" markdown="1">

```python
def dump_fields(self, fname, single_parallel_file=True):
```

<div class="method_docstring" markdown="1">

Dumps the fields to the file `fname`.

</div>

</div>

<a id="Simulation.load_fields"></a>

<div class="class_members" markdown="1">

```python
def load_fields(self, fname, single_parallel_file=True):
```

<div class="method_docstring" markdown="1">

Loads the fields from the file `fname`.

</div>

</div>
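A sketch of the pause/resume flow using these methods (`make_sim` is a hypothetical helper that must reproduce the exact same `Simulation` setup in both runs):

```python
# Sketch: pause a simulation by dumping its fields, then resume later.
# `make_sim` is a hypothetical helper returning an identically
# configured mp.Simulation.

def pause(make_sim, fname):
    sim = make_sim()
    sim.run(until=100)      # time-step up to some point
    sim.dump_fields(fname)  # raw + auxiliary + DFT fields, plus the timestamp
    return sim

def resume(make_sim, fname):
    sim = make_sim()        # *exactly* the same setup as before
    sim.load_fields(fname)  # restores fields and the saved timestamp
    sim.run(until=100)      # continues from where the dump happened
    return sim
```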

#### Checkpoint / Restore

The dump/load functions above can be used to implement a checkpoint/restore-*like*
feature for Meep. The caveat of this feature is that it does *not* store all the
state required to re-create the `Simulation` object itself. Instead, it expects
the user to create and set up the new `Simulation` object *exactly* the same as
the one the state was dumped from.


<a id="Simulation.dump"></a>

<div class="class_members" markdown="1">

```python
def dump(self,
dirname,
dump_structure=True,
dump_fields=True,
single_parallel_file=True):
```

<div class="method_docstring" markdown="1">

Dumps simulation state.

</div>

</div>

<a id="Simulation.load"></a>

<div class="class_members" markdown="1">

```python
def load(self,
dirname,
load_structure=True,
load_fields=True,
single_parallel_file=True):
```

<div class="method_docstring" markdown="1">

Loads simulation state.

This should be called right after the `Simulation` object has been created,
but before `init_sim` is called.

</div>

</div>

Example usage:

```python
# Set up, run, and dump the simulation.
sim1 = mp.Simulation(...)
sim1.run(..., until=xx)
sim1.dump(dirname, dump_structure=True, dump_fields=True, ...)

...

# Later in the same or a new process/run
sim2 = mp.Simulation(<same setup as sim1>)
sim2.load(dirname, load_structure=True, load_fields=True, ...)
sim2.run(...) # continues from t=xx
```

### Frequency-Domain Solver

Meep contains a frequency-domain solver that computes the fields produced in a geometry in response to a [continuous-wave (CW) source](https://en.wikipedia.org/wiki/Continuous_wave). This is based on an [iterative linear solver](https://en.wikipedia.org/wiki/Iterative_method) instead of time-stepping. For details, see Section 5.3 ("Frequency-domain solver") of [Computer Physics Communications, Vol. 181, pp. 687-702, 2010](http://ab-initio.mit.edu/~oskooi/papers/Oskooi10.pdf). Benchmarking results have shown that in many instances, such as cavities (e.g., [ring resonators](Python_Tutorials/Frequency_Domain_Solver.md)) with long-lived resonant modes, this solver converges much faster than simply running an equivalent time-domain simulation with a CW source (using the default `width` of zero for no transient turn-on), time-stepping until all transient effects from the source turn-on have disappeared, especially if the fields are desired to a very high accuracy.
58 changes: 54 additions & 4 deletions doc/docs/Python_User_Interface.md.in
Original file line number Diff line number Diff line change
@@ -347,16 +347,28 @@ And this [`DftNear2Far`](#DftNear2Far) method:
@@ DftNear2Far.flux @@


### Load and Dump Simulation State

The functions below add support for saving and restoring parts of the Meep
`Simulation` state.

For all functions below, when dumping/loading state to/from a distributed filesystem
(using, say, parallel HDF5) while running in an MPI environment, setting
`single_parallel_file=True` (the default) causes all processes to write/read
to/from a single shared file after computing their respective offsets into that file.
When set to `False`, each process writes/reads the data for the chunks it owns
to/from a separate, process-unique file.

#### Load and Dump Structure

These functions dump the raw ε and μ data to disk and load it back for doing multiple simulations with the same materials but different sources etc. The only prerequisite is that the dump/load simulations have the same [chunks](Chunks_and_Symmetry.md) (i.e. the same grid, number of processors, symmetries, and PML). When using `split_chunks_evenly=False`, you must also dump the original chunk layout using `dump_chunk_layout` and load it into the new `Simulation` using the `chunk_layout` parameter. Currently only stores dispersive and non-dispersive $\varepsilon$ and $\mu$ but not nonlinearities. Note that loading data from a file in this way overwrites any `geometry` data passed to the `Simulation` constructor.

@@ Simulation.dump_structure @@
@@ Simulation.load_structure @@

#### Load and Dump Chunk Layout

@@ Simulation.dump_chunk_layout @@

To load a chunk layout into a `Simulation`, use the `chunk_layout` argument to the constructor, passing either a file obtained from `dump_chunk_layout` or another `Simulation` instance. Note that when using `split_chunks_evenly=False` this parameter is required when saving and loading flux spectra, force spectra, or near-to-far spectra so that the two runs have the same chunk layout. Just pass the `Simulation` object from the first run to the second run:

@@ -374,6 +386,44 @@ sim2.load_minus_flux(...)
sim2.run(...)
```

#### Load and Dump Fields

These functions can be used to dump (and later load) the raw fields, the auxiliary
fields, and the DFT fields at a given timestamp. The timestamp at which the
dump happens is also saved so that the simulation can continue from where it was saved.
The one prerequisite of this feature is that the `Simulation` object must have
been set up *exactly* the same as the one it was dumped from.

@@ Simulation.dump_fields @@
@@ Simulation.load_fields @@

#### Checkpoint / Restore

The dump/load functions above can be used to implement a checkpoint/restore-*like*
feature for Meep. The caveat of this feature is that it does *not* store all the
state required to re-create the `Simulation` object itself. Instead, it expects
the user to create and set up the new `Simulation` object *exactly* the same as
the one the state was dumped from.

@@ Simulation.dump @@
@@ Simulation.load @@

Example usage:

```python
# Set up, run, and dump the simulation.
sim1 = mp.Simulation(...)
sim1.run(..., until=xx)
sim1.dump(dirname, dump_structure=True, dump_fields=True, ...)

...

# Later in the same or a new process/run
sim2 = mp.Simulation(<same setup as sim1>)
sim2.load(dirname, load_structure=True, load_fields=True, ...)
sim2.run(...) # continues from t=xx
```

### Frequency-Domain Solver

Meep contains a frequency-domain solver that computes the fields produced in a geometry in response to a [continuous-wave (CW) source](https://en.wikipedia.org/wiki/Continuous_wave). This is based on an [iterative linear solver](https://en.wikipedia.org/wiki/Iterative_method) instead of time-stepping. For details, see Section 5.3 ("Frequency-domain solver") of [Computer Physics Communications, Vol. 181, pp. 687-702, 2010](http://ab-initio.mit.edu/~oskooi/papers/Oskooi10.pdf). Benchmarking results have shown that in many instances, such as cavities (e.g., [ring resonators](Python_Tutorials/Frequency_Domain_Solver.md)) with long-lived resonant modes, this solver converges much faster than simply running an equivalent time-domain simulation with a CW source (using the default `width` of zero for no transient turn-on), time-stepping until all transient effects from the source turn-on have disappeared, especially if the fields are desired to a very high accuracy.
