Notes on parabolic terms #1147

Open · 23 of 35 tasks
ranocha opened this issue May 27, 2022 · 21 comments
Labels: enhancement (New feature or request)

Comments

@ranocha (Member) commented May 27, 2022

This is basically taken from #1136. Currently, our idea is to merge the DGMulti setup from that PR into dev, draft a corresponding TreeMesh & DGSEM version in another PR, and see how it looks for Navier-Stokes. When that's okay, we can go ahead and merge dev into main to finally get support for parabolic terms in Trixi.jl.

  • TreeMesh version with DGSEM
  • P4estMesh
  • StructuredMesh
  • Navier-Stokes
    • Decide on which non-dimensional scaling to use.
  • Add a parabolic solver field (or at least a way to specify parabolic numerical parameters)
    • Add parabolic penalty terms which depend on jump(u)
    • Where should penalty terms live: equations, solver, or the elixir itself? These terms will probably need to be specialized for every single equation.
  • Should we have "default" BC behavior (e.g., "boundary_condition_use_inner_state / boundary_condition_do_nothing")? Answer: no. The user should be required to specify this explicitly.
  • Should the viscous flux depend on orientation (i.e., compute x and y components separately)?
  • Decide how general we want the BCs to be
    • What data should be made available to parabolic BCs? Solution values? Gradient values? Viscous flux values?
    • For now, we keep it simple and just pass in solution values
  • Docstrings
  • Documentation, examples, tutorial
    • Explain Gradient/Divergence boundary conditions in docs (see the sketch after this list)
    • Add a Literate tutorial for compressible Navier-Stokes
  • Naming things
    • grad_u (current) vs. gradients or gradients_u
    • no_boundary_condition (current) vs. boundary_condition_do_nothing or something else (what?). Answer: boundary_condition_do_nothing
    • Types for viscous methods like BR1, LDG? Current naming is ViscousFormulation*** but this gets pretty long.
  • Gradient variables
  • Add an "enstrophy" callback as a proof of concept for "combined" equations (see Compressible Navier-Stokes on TreeMesh3D #1239)
  • Add experimental notes to all exported and/or docstring'd methods/types.
  • Rename elixir_navier_stokes_xxx.jl to elixir_navierstokes_xxx.jl for consistency (e.g., with elixir_shallowwater_xxx.jl).
  • StepsizeCallback with viscous CFL condition
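
Regarding the Gradient/Divergence boundary condition item above, here is a minimal sketch of the intended dispatch pattern: a parabolic BC is queried once when computing gradients and once when computing the divergence of the viscous fluxes. The signatures are illustrative only, and MyDirichletValue is hypothetical, not part of Trixi.jl:

using Trixi

# Hypothetical user-defined parabolic BC prescribing a fixed solution value.
struct MyDirichletValue{T}
    value::T
end

# Gradient step: prescribe the solution value at the boundary.
function (bc::MyDirichletValue)(flux_inner, u_inner, normal, x, t,
                                operator_type::Trixi.Gradient, equations)
    return bc.value
end

# Divergence step: keep the interior viscous flux ("do nothing").
function (bc::MyDirichletValue)(flux_inner, u_inner, normal, x, t,
                                operator_type::Trixi.Divergence, equations)
    return flux_inner
end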
@andrewwinters5000 (Member)
Regarding the analysis callback, when we call, e.g.,

function analyze(::typeof(entropy_timederivative), du, u, t,

is the du here only the rhs from the hyperbolic solver or would it already contain the sum of hyperbolic and parabolic?

@ranocha (Member, Author) commented Aug 10, 2022

du is set as

@notimeit timer() rhs!(du_ode, integrator.u, semi, t)

so it should be the combined RHS, I think.

@ranocha (Member, Author) commented Oct 4, 2022

The analysis callback reports the hyperbolic equation as the system being used, e.g.,

 Simulation running 'CompressibleEulerEquations2D' with DGSEM(polydeg=3)

This should be specialized.

@DanielDoehring (Contributor)
Hi, what is the status of AMR & mortars? Is someone currently working on this, or is there still some theoretical work needed to get, e.g., the mortars for parabolic terms right?
If it is just about the implementation, I would offer to work on this.

@sloede (Member) commented Jul 14, 2023

As far as I know, nobody is working on that (except maybe someone around you, @jlchan?). The L2 mortars should work fine for parabolic terms - I think the same approach was/is used in FLUXO.

Thus, it would be great if you'd consider working on this - it would greatly increase the applicability of AMR for realistic flow setups 👍

@ranocha (Member, Author) commented Jul 14, 2023

I can only second what @sloede has said - it would be a great contribution!

@jlchan (Contributor) commented Jul 14, 2023

@DanielDoehring @sloede parabolic mortars are on me and @apey236's list, but we were going to focus on MPI parallelization first. If you are interested in adding mortars for parabolic terms, I would be happy to help!

@DanielDoehring (Contributor) commented Jul 17, 2023

So I tried to come up with a copy-paste-based version originating from the mortars for purely hyperbolic equations (see #1571).

As spelled out in the conversation on that draft PR (#1571 (comment), #1571 (comment)), I could use some support in understanding the calc_gradient function - did you implement this originally, @jlchan?

@jlchan (Contributor) commented Jul 17, 2023

Hi @DanielDoehring - yes, I implemented this originally in DGMulti, with @sloede and @ranocha's help for TreeMesh. Let me take a look at that PR and see if I can respond.

@DanielDoehring (Contributor)
Regarding AMR: My very first steps suggest that the split form of the ODE might complicate things, so I would prefer to work with the standard form first.
Ideally, we could get OrdinaryDiffEq to do the work, which should somehow be possible - otherwise, we might need to combine rhs! and rhs_parabolic! into something like rhs_hyperbolic_parabolic! ourselves.

@jlchan (Contributor) commented Jul 19, 2023

Can you explain briefly what makes it hard to use SplitODEProblem with AMR?

@DanielDoehring (Contributor)
No, not really - for this I lack a proper understanding of what is going on, which would of course be easier to build up if I could change only one thing at a time.
What I observe, however, is that after the grid is changed, the boundscheck

Trixi.jl/src/solvers/dg.jl, lines 538 to 541 in 3753846:

@boundscheck begin
    @assert length(u_ode) ==
            nvariables(equations) * nnodes(dg)^ndims(mesh) * nelements(dg, cache)
end

fails when called from

function rhs_parabolic!(du_ode, u_ode, semi::SemidiscretizationHyperbolicParabolic, t)
    @unpack mesh, equations_parabolic, initial_condition, boundary_conditions_parabolic, source_terms, solver, solver_parabolic, cache, cache_parabolic = semi
    u = wrap_array(u_ode, mesh, equations_parabolic, solver, cache_parabolic)
    du = wrap_array(du_ode, mesh, equations_parabolic, solver, cache_parabolic)
    # TODO: Taal decide, do we need to pass the mesh?
    time_start = time_ns()
    @trixi_timeit timer() "parabolic rhs!" rhs_parabolic!(du, u, t, mesh,
                                                          equations_parabolic,
                                                          initial_condition,
                                                          boundary_conditions_parabolic,
                                                          source_terms,
                                                          solver, solver_parabolic,
                                                          cache, cache_parabolic)

which is in turn called from

function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t)
    f.f2(du, u, p, t)
    du .+= f.cache
end

which resides in SciMLBase.jl/src/scimlfunctions.jl.

The error will be in the AMR part; it is just a bit hard to track down where it originates, since one has to navigate pretty much through the entire SciMLBase, OrdinaryDiffEq, and DiffEqBase jungle.

@ranocha (Member, Author) commented Jul 19, 2023

Could you please create an MWE with a SplitODEProblem, set up an integrator, and resize! it? I guess some caches are not resized correctly or something like that.
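
For example, a minimal sketch along these lines (the placeholder right-hand sides f1!/f2! and the choice of KenCarp4 are illustrative, not taken from Trixi.jl):

using OrdinaryDiffEq

# Placeholder in-place right-hand sides standing in for rhs_parabolic! and rhs!
f1!(du, u, p, t) = (du .= -u)        # "parabolic" part, treated implicitly
f2!(du, u, p, t) = (du .= 0.5 .* u)  # "hyperbolic" part, treated explicitly

u0 = ones(4)
ode = SplitODEProblem{true}(f1!, f2!, u0, (0.0, 1.0))
integrator = init(ode, KenCarp4())

# Mimic AMR changing the number of unknowns:
resize!(integrator, 8)
@show length(integrator.u)        # 8 after resizing
@show length(integrator.f.cache)  # if this is still 4, the SplitFunction cache was not resized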

@DanielDoehring (Contributor) commented Jul 19, 2023

> Could you please create an MWE with a SplitODEProblem, set up an integrator, and resize! it? I guess some caches are not resized correctly or something like that.

Yes, that is probably what is going on - currently, only the cache of the hyperbolic part is passed on:

@inline function (amr_callback::AMRCallback)(u_ode::AbstractVector,
                                             semi::SemidiscretizationHyperbolic,
                                             t, iter;
                                             kwargs...)
    # Note that we don't `wrap_array` the vector `u_ode` to be able to `resize!`
    # it when doing AMR while still dispatching on the `mesh` etc.
    amr_callback(u_ode, mesh_equations_solver_cache(semi)..., semi, t, iter; kwargs...)
end

That needs to be adapted to incorporate (at least) cache_parabolic as well when we pass a semi of type SemidiscretizationHyperbolicParabolic.
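
A hedged sketch of that adaptation (illustrative only - the AMR methods further down the call chain would also need to accept and resize! cache_parabolic):

@inline function (amr_callback::AMRCallback)(u_ode::AbstractVector,
                                             semi::SemidiscretizationHyperbolicParabolic,
                                             t, iter;
                                             kwargs...)
    # Same pattern as the hyperbolic method above, but additionally hand over
    # `cache_parabolic` so it can be resized alongside the hyperbolic cache.
    amr_callback(u_ode, mesh_equations_solver_cache(semi)...,
                 semi.cache_parabolic, semi, t, iter; kwargs...)
end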

@ranocha (Member, Author) commented Jul 19, 2023

Yes, that's the fix we need to make 👍

@DanielDoehring (Contributor) commented Jul 20, 2023

> Could you please create an MWE with a SplitODEProblem, set up an integrator, and resize! it? I guess some caches are not resized correctly or something like that.

An MWE is not so easy to provide, but there is certainly something wrong with the resize! function from OrdinaryDiffEq for problems with a split RHS.

While debugging I changed this

if has_changed
    resize!(integrator, length(u_ode))
    u_modified!(integrator, true)
end

to

if has_changed
  println("Target length u_ode: ", length(u_ode))
  println("f.cache pre-change: ", length(integrator.f.cache))
  resize!(integrator, length(u_ode))
  println("f.cache post-change: ", length(integrator.f.cache))
          
  u_modified!(integrator, true)
end

which outputs

Target length u_ode: 1024
f.cache pre-change: 4096
f.cache post-change: 4096

which is the reason why the boundscheck

Trixi.jl/src/solvers/dg.jl, lines 538 to 541 in 3753846:

@boundscheck begin
    @assert length(u_ode) ==
            nvariables(equations) * nnodes(dg)^ndims(mesh) * nelements(dg, cache)
end

fails for rhs_parabolic!, which is passed as f1 in

return SplitODEProblem{iip}(rhs_parabolic!, rhs!, u0_ode, tspan, semi)

and is thus called with mismatching dimensions of du_ode and the contents of the cache:

function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t) # Non-resized cache is put into f1!
    f.f2(du, u, p, t)
    du .+= f.cache
end

For the moment, I avoid the SplitFunction entirely by basically replicating the erroneous code above in a bug-free manner via

function semidiscretize(semi::SemidiscretizationHyperbolicParabolic, tspan; split_form::Bool=true)
    u0_ode = compute_coefficients(first(tspan), semi)
    # TODO: MPI, do we want to synchronize loading and print debug statements, e.g. using
    #       mpi_isparallel() && MPI.Barrier(mpi_comm())
    #       See https://github.com/trixi-framework/Trixi.jl/issues/328
    iip = true # is-inplace, i.e., we modify a vector when calling rhs_parabolic!, rhs!
    # Note that the IMEX time integration methods of OrdinaryDiffEq.jl treat the
    # first function implicitly and the second one explicitly. Thus, we pass the
    # stiffer parabolic function first.
    if split_form
        return SplitODEProblem{iip}(rhs_parabolic!, rhs!, u0_ode, tspan, semi)
    else
        specialize = SciMLBase.FullSpecialize # specialize on rhs! and parameters (semi)
        return ODEProblem{iip, specialize}(rhs_hyperbolic_parabolic!, u0_ode, tspan, semi)
    end
end

function rhs_hyperbolic_parabolic!(du_ode, u_ode, semi::SemidiscretizationHyperbolicParabolic, t)
    @trixi_timeit timer() "hyperbolic-parabolic rhs!" begin 
        du_ode_hyp = similar(du_ode) # TODO: Avoid allocations, make member variable of something?
        rhs!(du_ode_hyp, u_ode, semi, t)
        rhs_parabolic!(du_ode, u_ode, semi, t)
        du_ode .+= du_ode_hyp
    end
end

which actually works (currently 2D only).
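
One way to address the allocation TODO above could be a callable that carries a preallocated scratch vector. This is a hedged sketch only - the name RhsHyperbolicParabolic is hypothetical, and the buffer would itself need to be resized after AMR:

# Hypothetical callable carrying a scratch vector so the hyperbolic
# contribution does not allocate on every RHS evaluation.
struct RhsHyperbolicParabolic{Vec}
    du_hyp::Vec  # preallocated buffer; must be resized whenever u_ode is
end

function (rhs::RhsHyperbolicParabolic)(du_ode, u_ode, semi, t)
    rhs!(rhs.du_hyp, u_ode, semi, t)        # hyperbolic part into the scratch buffer
    rhs_parabolic!(du_ode, u_ode, semi, t)  # parabolic part directly into du_ode
    du_ode .+= rhs.du_hyp                   # combine both contributions
    return nothing
end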

I will try to report this bug in the OrdinaryDiffEq package, but it is not so easy to reproduce since I am not quite sure how to get my hands on an ODEIntegrator object (i.e., not a solver/algorithm).

@ranocha (Member, Author) commented Jul 20, 2023

> how to get my hands on an ODEIntegrator object (i.e., not a solver/algorithm).

integrator = init(ode, alg; kwargs...) instead of sol = solve(ode, alg; kwargs...)

@DanielDoehring (Contributor)
With #1629 merged, we can tick off mortars & AMR for TreeMesh.

@DanielDoehring (Contributor)
With #1765 merged, AMR for 3D parabolic terms on P4estMesh is now also available.

@DanielDoehring (Contributor)
I would like to pick up the viscous CFL business. I would be grateful if someone could point me to

  • A reference for the DG CFL condition in Trixi.jl
  • The desired viscous CFL condition.

@andrewwinters5000 (Member)
The CFL conditions (both advective and viscous) are similar to the FLEXI implementation; have a look at Section 3.5 of the FLEXI code paper.
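
Schematically, the scaling there is roughly as follows (a hedged paraphrase, not the exact FLEXI formulas; $\lambda^c_{\max}$ and $\lambda^v_{\max}$ denote the largest convective and viscous eigenvalues, $N$ the polynomial degree):

$$
\Delta t_{\mathrm{adv}} \sim \mathrm{CFL} \cdot \frac{\Delta x}{(2N+1)\,\lambda^c_{\max}},
\qquad
\Delta t_{\mathrm{visc}} \sim \mathrm{DFL} \cdot \left(\frac{\Delta x}{2N+1}\right)^{2} \frac{1}{\lambda^v_{\max}},
$$

with the global step taken as the minimum of the two over all elements.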
