diff --git a/Project.toml b/Project.toml
index cac573a7d8..1d62cb9034 100644
--- a/Project.toml
+++ b/Project.toml
@@ -66,7 +66,7 @@ DiffEqCallbacks = "2.25"
 EllipsisNotation = "1.0"
 FillArrays = "0.13.2, 1"
 ForwardDiff = "0.10.18"
-HDF5 = "0.14, 0.15, 0.16"
+HDF5 = "0.14, 0.15, 0.16, 0.17"
 IfElse = "0.1"
 LinearMaps = "2.7, 3.0"
 LoopVectorization = "0.12.118"
diff --git a/docs/src/parallelization.md b/docs/src/parallelization.md
index 245fdc1185..d56777c9af 100644
--- a/docs/src/parallelization.md
+++ b/docs/src/parallelization.md
@@ -166,17 +166,36 @@ section, specifically at the descriptions of the performance index (PID).
 
 ### Using error-based step size control with MPI
 
-If you use error-based step size control (see also the section on [error-based adaptive step sizes](@ref adaptive_step_sizes))
-together with MPI you need to pass `internalnorm=ode_norm` and you should pass
-`unstable_check=ode_unstable_check` to OrdinaryDiffEq's [`solve`](https://docs.sciml.ai/DiffEqDocs/latest/basics/common_solver_opts/),
+If you use error-based step size control (see also the section on
+[error-based adaptive step sizes](@ref adaptive_step_sizes)) together with MPI, you need to pass
+`internalnorm=ode_norm` and you should pass `unstable_check=ode_unstable_check` to
+OrdinaryDiffEq's [`solve`](https://docs.sciml.ai/DiffEqDocs/latest/basics/common_solver_opts/),
 which are both included in [`ode_default_options`](@ref).
 
 ### Using parallel input and output
 
-Trixi.jl allows parallel I/O using MPI by leveraging parallel HDF5.jl. To enable this, you first need
-to use a system-provided MPI library, see also [here](@ref parallel_system_MPI) and you need to tell
-[HDF5.jl](https://github.com/JuliaIO/HDF5.jl) to use this library.
-To do so, set the environment variable `JULIA_HDF5_PATH` to the local path
-that contains the `libhdf5.so` shared object file and build HDF5.jl by executing `using Pkg; Pkg.build("HDF5")`.
-For more information see also the [documentation of HDF5.jl](https://juliaio.github.io/HDF5.jl/stable/mpi/).
-
-If you do not perform these steps to use parallel HDF5 or if the HDF5 is not MPI-enabled, Trixi.jl will fall back on a less efficient I/O mechanism. In that case, all disk I/O is performed only on rank zero and data is distributed to/gathered from the other ranks using regular MPI communication.
+Trixi.jl allows parallel I/O using MPI by leveraging parallel HDF5.jl. On most systems, this is
+enabled by default. Additionally, you can use a local installation of the HDF5 library
+(with MPI support). For this, you first need to use a system-provided MPI library
+(see also [here](@ref parallel_system_MPI)) and tell [HDF5.jl](https://github.com/JuliaIO/HDF5.jl)
+to use this library. To do so with HDF5.jl v0.17 and newer, set the preferences `libhdf5` and
+`libhdf5_hl` to the local paths of these libraries, which can be done by
+```julia
+julia> using Preferences, UUIDs
+julia> set_preferences!(
+           UUID("f67ccb44-e63f-5c2f-98bd-6dc0ccc4ba2f"), # UUID of HDF5.jl
+           "libhdf5" => "/path/to/your/libhdf5.so",
+           "libhdf5_hl" => "/path/to/your/libhdf5_hl.so", force = true)
+```
+For more information, see also the
+[documentation of HDF5.jl](https://juliaio.github.io/HDF5.jl/stable/mpi/).
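+If MPI.jl is not yet configured to use the system MPI library, this is typically done via
+MPIPreferences.jl (see [here](@ref parallel_system_MPI) for details); the following is only a
+minimal sketch and the exact options depend on your system:
+```julia
+julia> using MPIPreferences
+julia> MPIPreferences.use_system_binary()
+```
+Both steps store their settings as preferences and typically require restarting Julia to take
+effect.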
+In total, you should have a file called `LocalPreferences.toml` in the project directory that
+contains a section `[MPIPreferences]`, a section `[HDF5]` with entries `libhdf5` and `libhdf5_hl`,
+a section `[P4est]` with the entry `libp4est`, as well as a section `[T8code]` with the entries
+`libt8`, `libp4est`, and `libsc`.
+If you use HDF5.jl v0.16 or older, you need to set the environment variable `JULIA_HDF5_PATH` to
+the path where the HDF5 binaries are located instead of setting the preferences for HDF5.jl, and
+then call `]build HDF5` from Julia.
+
+If HDF5 is not MPI-enabled, Trixi.jl will fall back on a less efficient I/O mechanism. In that
+case, all disk I/O is performed only on rank zero and data is distributed to/gathered from the
+other ranks using regular MPI communication.
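+
+You can check whether the HDF5 library used by HDF5.jl is MPI-enabled, for example, via
+```julia
+julia> using HDF5
+julia> HDF5.has_parallel()  # returns true if libhdf5 was built with MPI support
+true
+```
+If this returns `false`, the fallback mechanism described above is used.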