diff --git a/doc/docs/Python_User_Interface.md b/doc/docs/Python_User_Interface.md
index 0c16a5849..971e4f5d2 100644
--- a/doc/docs/Python_User_Interface.md
+++ b/doc/docs/Python_User_Interface.md
@@ -2311,11 +2311,21 @@ is a 1d array of `nfreq` dimensions.
-### Load and Dump Structure
+### Load and Dump Simulation State
-These functions dump the raw ε and μ data to disk and load it back for doing multiple simulations with the same materials but different sources etc. The only prerequisite is that the dump/load simulations have the same [chunks](Chunks_and_Symmetry.md) (i.e. the same grid, number of processors, symmetries, and PML). When using `split_chunks_evenly=False`, you must also dump the original chunk layout using `dump_chunk_layout` and load it into the new `Simulation` using the `chunk_layout` parameter. Currently only stores dispersive and non-dispersive $\varepsilon$ and $\mu$ but not nonlinearities. Note that loading data from a file in this way overwrites any `geometry` data passed to the `Simulation` constructor.
+The functions below add support for saving and restoring parts of the Meep
+`Simulation` state.
+
+For all functions below, when dumping/loading state to/from a distributed
+filesystem (using, say, parallel HDF5) while running in an MPI environment,
+setting `single_parallel_file=True` (the default) causes all processes to
+write/read to/from a single shared file after computing their respective
+offsets into it. When set to `False`, each process writes/reads the data for
+the chunks it owns to/from a separate, process-unique file.
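+
+As a minimal sketch of the two modes (shown here with `dump_structure`, which
+is documented below; the filename is arbitrary):
+
+```python
+import meep as mp
+
+sim = mp.Simulation(...)
+sim.init_sim()
+
+# Default: all MPI processes write into one shared file
+# (requires parallel HDF5 support).
+sim.dump_structure("structure.h5", single_parallel_file=True)
+
+# One file per process: each process saves only the chunks it owns.
+sim.dump_structure("structure.h5", single_parallel_file=False)
+```
+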
-For dump_structure and load_structure, when `single_parallel_file=True` (the default) - all chunks write to the same/single file after computing their respective offsets into this file. When set to 'False', each chunk writes writes its data to a separate file that is appropriately named by its chunk index.
+#### Load and Dump Structure
+
+These functions dump the raw ε and μ data to disk and load it back for doing multiple simulations with the same materials but different sources etc. The only prerequisite is that the dump/load simulations have the same [chunks](Chunks_and_Symmetry.md) (i.e. the same grid, number of processors, symmetries, and PML). When using `split_chunks_evenly=False`, you must also dump the original chunk layout using `dump_chunk_layout` and load it into the new `Simulation` using the `chunk_layout` parameter. Currently only stores dispersive and non-dispersive $\varepsilon$ and $\mu$ but not nonlinearities. Note that loading data from a file in this way overwrites any `geometry` data passed to the `Simulation` constructor.
@@ -2350,6 +2360,9 @@ Loads a structure from the file `fname`.
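+For example, a minimal sketch of a dump/load round trip (the geometry,
+source, resolution, and filename are all arbitrary, and the structure is
+assumed to be initialized via `init_sim` before dumping/loading; both runs
+must use the same grid and chunk layout):
+
+```python
+import meep as mp
+
+cell = mp.Vector3(4, 4)
+geometry = [mp.Block(size=mp.Vector3(1, 1), material=mp.Medium(epsilon=12))]
+
+# First run: build the structure and dump its raw ε/μ data to disk.
+sim1 = mp.Simulation(cell_size=cell, resolution=10, geometry=geometry)
+sim1.init_sim()
+sim1.dump_structure("structure.h5")
+
+# Second run: same cell and resolution (hence the same chunks), but with a
+# source instead of geometry. The loaded structure overwrites any `geometry`
+# passed to the constructor.
+sources = [mp.Source(mp.GaussianSource(1.0, fwidth=0.2), component=mp.Ez,
+                     center=mp.Vector3())]
+sim2 = mp.Simulation(cell_size=cell, resolution=10, sources=sources)
+sim2.init_sim()
+sim2.load_structure("structure.h5")
+sim2.run(until=10)
+```
+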
+#### Load and Dump Chunk Layout
+
@@ -2366,7 +2379,6 @@ Dumps the chunk layout to file `fname`.
-
To load a chunk layout into a `Simulation`, use the `chunk_layout` argument to the constructor, passing either a file obtained from `dump_chunk_layout` or another `Simulation` instance. Note that when using `split_chunks_evenly=False` this parameter is required when saving and loading flux spectra, force spectra, or near-to-far spectra so that the two runs have the same chunk layout. Just pass the `Simulation` object from the first run to the second run:
```python
@@ -2383,6 +2395,115 @@ sim2.load_minus_flux(...)
sim2.run(...)
```
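+
+Alternatively, a sketch of the file-based route ("chunk_layout.h5" is an
+arbitrary filename):
+
+```python
+sim1.dump_chunk_layout("chunk_layout.h5")
+...
+sim2 = mp.Simulation(..., chunk_layout="chunk_layout.h5")
+```
+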
+#### Load and Dump Fields
+
+These functions can be used to dump (and later load) the raw fields, the
+auxiliary fields, and the DFT fields at a given point in the simulation. The
+simulation time at which the dump happens is also saved, so that the
+simulation can continue from where it left off. The one prerequisite of this
+feature is that the new `Simulation` object must be set up *exactly* the same
+as the one the fields were dumped from.
+
+```python
+def dump_fields(self, fname, single_parallel_file=True):
+```
+
+Dumps the fields to the file `fname`.
+
+```python
+def load_fields(self, fname, single_parallel_file=True):
+```
+
+Loads the fields from the file `fname`.
+
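+A minimal sketch of dumping the fields and resuming later (the filename and
+run times are arbitrary; `sim2` must be set up exactly like `sim1`, and is
+assumed to be initialized via `init_sim` before loading):
+
+```python
+sim1 = mp.Simulation(...)
+sim1.run(until=50)
+sim1.dump_fields("fields.h5")
+
+# Later, possibly in a new process:
+sim2 = mp.Simulation(...)  # identical setup to sim1
+sim2.init_sim()
+sim2.load_fields("fields.h5")
+sim2.run(until=50)  # resumes at t=50 and runs for 50 more time units
+```
+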
+#### Checkpoint / Restore
+
+The above dump/load-related functions can be used to implement a
+checkpoint/restore-*like* feature for Meep. The caveat of this feature is that
+it does *not* store all the state required to re-create the `Simulation`
+object itself. Instead, it expects the user to create and set up the new
+`Simulation` object *exactly* the same as the one the state was dumped from.
+
+```python
+def dump(self,
+ dirname,
+ dump_structure=True,
+ dump_fields=True,
+ single_parallel_file=True):
+```
+
+Dumps the simulation state to the directory `dirname`.
+
+```python
+def load(self,
+ dirname,
+ load_structure=True,
+ load_fields=True,
+ single_parallel_file=True):
+```
+
+Loads the simulation state from the directory `dirname`.
+
+This should be called right after the `Simulation` object has been created,
+but before `init_sim` is called.
+
+Example usage:
+
+```python
+# Set up, run, and dump the simulation.
+sim1 = mp.Simulation(...)
+sim1.run(..., until=xx)
+sim1.dump(dirname, dump_structure=True, dump_fields=True)
+
+...
+
+# Later, in the same or in a new process/run:
+sim2 = mp.Simulation(...)  # must be set up exactly like sim1
+sim2.load(dirname, load_structure=True, load_fields=True)
+sim2.run(...)  # continues from t=xx
+```
+
### Frequency-Domain Solver
Meep contains a frequency-domain solver that computes the fields produced in a geometry in response to a [continuous-wave (CW) source](https://en.wikipedia.org/wiki/Continuous_wave). This is based on an [iterative linear solver](https://en.wikipedia.org/wiki/Iterative_method) instead of time-stepping. For details, see Section 5.3 ("Frequency-domain solver") of [Computer Physics Communications, Vol. 181, pp. 687-702, 2010](http://ab-initio.mit.edu/~oskooi/papers/Oskooi10.pdf). Benchmarking results have shown that in many instances, such as cavities (e.g., [ring resonators](Python_Tutorials/Frequency_Domain_Solver.md)) with long-lived resonant modes, this solver converges much faster than simply running an equivalent time-domain simulation with a CW source (using the default `width` of zero for no transient turn-on), time-stepping until all transient effects from the source turn-on have disappeared, especially if the fields are desired to a very high accuracy.
diff --git a/doc/docs/Python_User_Interface.md.in b/doc/docs/Python_User_Interface.md.in
index 260df72e0..b39d2c7ce 100644
--- a/doc/docs/Python_User_Interface.md.in
+++ b/doc/docs/Python_User_Interface.md.in
@@ -347,16 +347,28 @@ And this [`DftNear2Far`](#DftNear2Far) method:
@@ DftNear2Far.flux @@
-### Load and Dump Structure
+### Load and Dump Simulation State
-These functions dump the raw ε and μ data to disk and load it back for doing multiple simulations with the same materials but different sources etc. The only prerequisite is that the dump/load simulations have the same [chunks](Chunks_and_Symmetry.md) (i.e. the same grid, number of processors, symmetries, and PML). When using `split_chunks_evenly=False`, you must also dump the original chunk layout using `dump_chunk_layout` and load it into the new `Simulation` using the `chunk_layout` parameter. Currently only stores dispersive and non-dispersive $\varepsilon$ and $\mu$ but not nonlinearities. Note that loading data from a file in this way overwrites any `geometry` data passed to the `Simulation` constructor.
+The functions below add support for saving and restoring parts of the Meep
+`Simulation` state.
+
+For all functions below, when dumping/loading state to/from a distributed
+filesystem (using, say, parallel HDF5) while running in an MPI environment,
+setting `single_parallel_file=True` (the default) causes all processes to
+write/read to/from a single shared file after computing their respective
+offsets into it. When set to `False`, each process writes/reads the data for
+the chunks it owns to/from a separate, process-unique file.
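+
+As a minimal sketch of the two modes (shown here with `dump_structure`, which
+is documented below; the filename is arbitrary):
+
+```python
+import meep as mp
+
+sim = mp.Simulation(...)
+sim.init_sim()
+
+# Default: all MPI processes write into one shared file
+# (requires parallel HDF5 support).
+sim.dump_structure("structure.h5", single_parallel_file=True)
+
+# One file per process: each process saves only the chunks it owns.
+sim.dump_structure("structure.h5", single_parallel_file=False)
+```
+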
-For dump_structure and load_structure, when `single_parallel_file=True` (the default) - all chunks write to the same/single file after computing their respective offsets into this file. When set to 'False', each chunk writes writes its data to a separate file that is appropriately named by its chunk index.
+#### Load and Dump Structure
+
+These functions dump the raw ε and μ data to disk and load it back for doing multiple simulations with the same materials but different sources etc. The only prerequisite is that the dump/load simulations have the same [chunks](Chunks_and_Symmetry.md) (i.e. the same grid, number of processors, symmetries, and PML). When using `split_chunks_evenly=False`, you must also dump the original chunk layout using `dump_chunk_layout` and load it into the new `Simulation` using the `chunk_layout` parameter. Currently only stores dispersive and non-dispersive $\varepsilon$ and $\mu$ but not nonlinearities. Note that loading data from a file in this way overwrites any `geometry` data passed to the `Simulation` constructor.
@@ Simulation.dump_structure @@
@@ Simulation.load_structure @@
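+
+For example, a minimal sketch of a dump/load round trip (the geometry,
+source, resolution, and filename are all arbitrary, and the structure is
+assumed to be initialized via `init_sim` before dumping/loading; both runs
+must use the same grid and chunk layout):
+
+```python
+import meep as mp
+
+cell = mp.Vector3(4, 4)
+geometry = [mp.Block(size=mp.Vector3(1, 1), material=mp.Medium(epsilon=12))]
+
+# First run: build the structure and dump its raw ε/μ data to disk.
+sim1 = mp.Simulation(cell_size=cell, resolution=10, geometry=geometry)
+sim1.init_sim()
+sim1.dump_structure("structure.h5")
+
+# Second run: same cell and resolution (hence the same chunks), but with a
+# source instead of geometry. The loaded structure overwrites any `geometry`
+# passed to the constructor.
+sources = [mp.Source(mp.GaussianSource(1.0, fwidth=0.2), component=mp.Ez,
+                     center=mp.Vector3())]
+sim2 = mp.Simulation(cell_size=cell, resolution=10, sources=sources)
+sim2.init_sim()
+sim2.load_structure("structure.h5")
+sim2.run(until=10)
+```
+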
-@@ Simulation.dump_chunk_layout @@
+#### Load and Dump Chunk Layout
+
+@@ Simulation.dump_chunk_layout @@
To load a chunk layout into a `Simulation`, use the `chunk_layout` argument to the constructor, passing either a file obtained from `dump_chunk_layout` or another `Simulation` instance. Note that when using `split_chunks_evenly=False` this parameter is required when saving and loading flux spectra, force spectra, or near-to-far spectra so that the two runs have the same chunk layout. Just pass the `Simulation` object from the first run to the second run:
@@ -374,6 +386,44 @@ sim2.load_minus_flux(...)
sim2.run(...)
```
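+
+Alternatively, a sketch of the file-based route ("chunk_layout.h5" is an
+arbitrary filename):
+
+```python
+sim1.dump_chunk_layout("chunk_layout.h5")
+...
+sim2 = mp.Simulation(..., chunk_layout="chunk_layout.h5")
+```
+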
+#### Load and Dump Fields
+
+These functions can be used to dump (and later load) the raw fields, the
+auxiliary fields, and the DFT fields at a given point in the simulation. The
+simulation time at which the dump happens is also saved, so that the
+simulation can continue from where it left off. The one prerequisite of this
+feature is that the new `Simulation` object must be set up *exactly* the same
+as the one the fields were dumped from.
+
+@@ Simulation.dump_fields @@
+@@ Simulation.load_fields @@
+
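+A minimal sketch of dumping the fields and resuming later (the filename and
+run times are arbitrary; `sim2` must be set up exactly like `sim1`, and is
+assumed to be initialized via `init_sim` before loading):
+
+```python
+sim1 = mp.Simulation(...)
+sim1.run(until=50)
+sim1.dump_fields("fields.h5")
+
+# Later, possibly in a new process:
+sim2 = mp.Simulation(...)  # identical setup to sim1
+sim2.init_sim()
+sim2.load_fields("fields.h5")
+sim2.run(until=50)  # resumes at t=50 and runs for 50 more time units
+```
+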
+#### Checkpoint / Restore
+
+The above dump/load-related functions can be used to implement a
+checkpoint/restore-*like* feature for Meep. The caveat of this feature is that
+it does *not* store all the state required to re-create the `Simulation`
+object itself. Instead, it expects the user to create and set up the new
+`Simulation` object *exactly* the same as the one the state was dumped from.
+
+@@ Simulation.dump @@
+@@ Simulation.load @@
+
+Example usage:
+
+```python
+# Set up, run, and dump the simulation.
+sim1 = mp.Simulation(...)
+sim1.run(..., until=xx)
+sim1.dump(dirname, dump_structure=True, dump_fields=True)
+
+...
+
+# Later, in the same or in a new process/run:
+sim2 = mp.Simulation(...)  # must be set up exactly like sim1
+sim2.load(dirname, load_structure=True, load_fields=True)
+sim2.run(...)  # continues from t=xx
+```
+
### Frequency-Domain Solver
Meep contains a frequency-domain solver that computes the fields produced in a geometry in response to a [continuous-wave (CW) source](https://en.wikipedia.org/wiki/Continuous_wave). This is based on an [iterative linear solver](https://en.wikipedia.org/wiki/Iterative_method) instead of time-stepping. For details, see Section 5.3 ("Frequency-domain solver") of [Computer Physics Communications, Vol. 181, pp. 687-702, 2010](http://ab-initio.mit.edu/~oskooi/papers/Oskooi10.pdf). Benchmarking results have shown that in many instances, such as cavities (e.g., [ring resonators](Python_Tutorials/Frequency_Domain_Solver.md)) with long-lived resonant modes, this solver converges much faster than simply running an equivalent time-domain simulation with a CW source (using the default `width` of zero for no transient turn-on), time-stepping until all transient effects from the source turn-on have disappeared, especially if the fields are desired to a very high accuracy.