Merge branch 'main' into quasi_shallow_water_1d
jlchan authored Sep 8, 2023
2 parents edf45e0 + 953f88a commit ef6503c
Showing 4 changed files with 35 additions and 15 deletions.
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,7 +1,7 @@
name = "Trixi"
uuid = "a7f1ee26-1774-49b1-8366-f1abc58fbfcb"
authors = ["Michael Schlottke-Lakemper <[email protected]>", "Gregor Gassner <[email protected]>", "Hendrik Ranocha <[email protected]>", "Andrew R. Winters <[email protected]>", "Jesse Chan <[email protected]>"]
version = "0.5.41-pre"
version = "0.5.42-pre"

[deps]
CodeTracking = "da1fd8a2-8d9e-5ec2-8556-3022fb5608a2"
@@ -56,7 +56,7 @@ DiffEqCallbacks = "2.25"
EllipsisNotation = "1.0"
FillArrays = "0.13.2, 1"
ForwardDiff = "0.10.18"
HDF5 = "0.14, 0.15, 0.16"
HDF5 = "0.14, 0.15, 0.16, 0.17"
IfElse = "0.1"
LinearMaps = "2.7, 3.0"
LoopVectorization = "0.12.118"
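For context, a sketch of how such `[compat]` specifiers read (a hypothetical excerpt, not part of this diff): for 0.x packages, each comma-separated entry admits the corresponding 0.x.y series, so the updated bound additionally accepts HDF5.jl v0.17.x.

```toml
[compat]
# "0.14, 0.15, 0.16, 0.17" admits any of v0.14.x, v0.15.x, v0.16.x, or v0.17.x
HDF5 = "0.14, 0.15, 0.16, 0.17"
```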
41 changes: 30 additions & 11 deletions docs/src/parallelization.md
@@ -166,17 +166,36 @@ section, specifically at the descriptions of the performance index (PID).


### Using error-based step size control with MPI
-If you use error-based step size control (see also the section on [error-based adaptive step sizes](@ref adaptive_step_sizes))
-together with MPI you need to pass `internalnorm=ode_norm` and you should pass
-`unstable_check=ode_unstable_check` to OrdinaryDiffEq's [`solve`](https://docs.sciml.ai/DiffEqDocs/latest/basics/common_solver_opts/),
+If you use error-based step size control (see also the section on
+[error-based adaptive step sizes](@ref adaptive_step_sizes)) together with MPI, you need to pass
+`internalnorm=ode_norm` and you should pass `unstable_check=ode_unstable_check` to
+OrdinaryDiffEq's [`solve`](https://docs.sciml.ai/DiffEqDocs/latest/basics/common_solver_opts/),
which are both included in [`ode_default_options`](@ref).
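As an illustration, a minimal sketch of how these options are typically forwarded (assuming `ode` is an `ODEProblem` built from a Trixi.jl semidiscretization; the time integrator and tolerances are placeholders):

```julia
using OrdinaryDiffEq, Trixi

# `ode_default_options()` returns a NamedTuple that includes, among others,
# `internalnorm = ode_norm` and `unstable_check = ode_unstable_check`,
# so splatting it forwards both settings to `solve`.
sol = solve(ode, RDPK3SpFSAL49();
            abstol = 1.0e-8, reltol = 1.0e-8, # error-based step size control
            ode_default_options()...)
```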

### Using parallel input and output
-Trixi.jl allows parallel I/O using MPI by leveraging parallel HDF5.jl. To enable this, you first need
-to use a system-provided MPI library, see also [here](@ref parallel_system_MPI) and you need to tell
-[HDF5.jl](https://github.com/JuliaIO/HDF5.jl) to use this library.
-To do so, set the environment variable `JULIA_HDF5_PATH` to the local path
-that contains the `libhdf5.so` shared object file and build HDF5.jl by executing `using Pkg; Pkg.build("HDF5")`.
-For more information see also the [documentation of HDF5.jl](https://juliaio.github.io/HDF5.jl/stable/mpi/).

-If you do not perform these steps to use parallel HDF5 or if the HDF5 is not MPI-enabled, Trixi.jl will fall back on a less efficient I/O mechanism. In that case, all disk I/O is performed only on rank zero and data is distributed to/gathered from the other ranks using regular MPI communication.
+Trixi.jl allows parallel I/O using MPI by leveraging parallel HDF5.jl. On most systems, this is
+enabled by default. Alternatively, you can use a local installation of the HDF5 library
+(with MPI support). For this, you first need to use a system-provided MPI library, see also
+[here](@ref parallel_system_MPI), and tell [HDF5.jl](https://github.com/JuliaIO/HDF5.jl)
+to use this library. To do so with HDF5.jl v0.17 and newer, set the preferences `libhdf5` and
+`libhdf5_hl` to the local paths of the libraries `libhdf5` and `libhdf5_hl`, which can be done by
+```julia
+julia> using Preferences, UUIDs
+julia> set_preferences!(
+           UUID("f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"), # UUID of HDF5.jl
+           "libhdf5" => "/path/to/your/libhdf5.so",
+           "libhdf5_hl" => "/path/to/your/libhdf5_hl.so", force = true)
+```
+For more information see also the
+[documentation of HDF5.jl](https://juliaio.github.io/HDF5.jl/stable/mpi/). In total, you should
+have a file called `LocalPreferences.toml` in the project directory that contains a section
+`[MPIPreferences]`, a section `[HDF5]` with entries `libhdf5` and `libhdf5_hl`, a section `[P4est]`
+with the entry `libp4est`, as well as a section `[T8code]` with the entries `libt8`, `libp4est`,
+and `libsc`.
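A hedged sketch of what such a `LocalPreferences.toml` might look like (every path is a placeholder, and the `[MPIPreferences]` entries shown are just the typical ones written for a system MPI setup):

```toml
[MPIPreferences]
_format = "1.0"
abi = "OpenMPI"
binary = "system"
libmpi = "/path/to/your/libmpi.so"
mpiexec = "mpiexec"

[HDF5]
libhdf5 = "/path/to/your/libhdf5.so"
libhdf5_hl = "/path/to/your/libhdf5_hl.so"

[P4est]
libp4est = "/path/to/your/libp4est.so"

[T8code]
libt8 = "/path/to/your/libt8.so"
libp4est = "/path/to/your/libp4est.so"
libsc = "/path/to/your/libsc.so"
```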
+If you use HDF5.jl v0.16 or older, instead of setting the preferences for HDF5.jl, you need to set
+the environment variable `JULIA_HDF5_PATH` to the path where the HDF5 binaries are located and
+then call `]build HDF5` from Julia.
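A minimal sketch of that legacy workflow (the installation path is a placeholder):

```julia
# Only for HDF5.jl v0.16 or older: point HDF5.jl at a local,
# MPI-enabled HDF5 installation before rebuilding the package.
ENV["JULIA_HDF5_PATH"] = "/path/to/your/hdf5"

using Pkg
Pkg.build("HDF5") # same effect as `]build HDF5` in the Pkg REPL
```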

+If HDF5 is not MPI-enabled, Trixi.jl will fall back on a less efficient I/O mechanism. In that
+case, all disk I/O is performed only on rank zero and data is distributed to/gathered from the
+other ranks using regular MPI communication.
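Schematically, this fallback corresponds to the usual gather-to-root pattern, sketched here with generic MPI.jl calls (an illustration of the pattern only, not Trixi.jl's actual implementation):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD

local_data = rand(4) # each rank's share of the data (demo values)

# Collect all contributions on rank zero; non-root ranks receive `nothing`.
gathered = MPI.Gather(local_data, comm; root = 0)

if MPI.Comm_rank(comm) == 0
    write("output.bin", gathered) # only rank zero touches the disk
end
```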
3 changes: 2 additions & 1 deletion test/test_tree_1d_shallowwater.jl
@@ -102,7 +102,8 @@ EXAMPLES_DIR = pkgdir(Trixi, "examples", "tree_1d_dgsem")
@test_trixi_include(joinpath(EXAMPLES_DIR, "elixir_shallowwater_beach.jl"),
l2 = [0.17979210479598923, 1.2377495706611434, 6.289818963361573e-8],
linf = [0.845938394800688, 3.3740800777086575, 4.4541473087633676e-7],
-tspan = (0.0, 0.05))
+tspan = (0.0, 0.05),
+atol = 3e-10) # see https://github.com/trixi-framework/Trixi.jl/issues/1617
end

@trixi_testset "elixir_shallowwater_parabolic_bowl.jl" begin
2 changes: 1 addition & 1 deletion test/test_trixi.jl
@@ -5,7 +5,7 @@ import Trixi
# inside an elixir.
"""
@test_trixi_include(elixir; l2=nothing, linf=nothing,
-atol=10*eps(), rtol=0.001,
+atol=500*eps(), rtol=sqrt(eps()),
parameters...)
Test Trixi by calling `trixi_include(elixir; parameters...)`.
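For reference, a quick REPL check of what the new Float64 defaults evaluate to:

```julia
julia> 500 * eps(Float64) # new default absolute tolerance `atol`
1.1102230246251565e-13

julia> sqrt(eps(Float64)) # new default relative tolerance `rtol`
1.4901161193847656e-8
```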
