Add capability to handle node variables #67
base: main
Conversation
The current implementation supports both reinterpolation and no reinterpolation for the subcell limiting coefficient, matching how the solution itself is handled. I already prepared a PR to
IMHO, no. It's like an AMR marker (should this cell be refined or not), which is not a continuous field that happens to be represented by a piecewise-constant approximation. Thus reinterpolation does not give a meaningful representation of reality and can actually be harmful.

Having said that, I am not sure how easy/annoying it is to always create two files, once with reinterpolation (for the solution itself) and once without (for the node variables). It might thus be nice to have the ability for reinterpolation, just so you can have only one VTK file for a quick'n'dirty visualization. What we really need/want, though, is a ParaView plugin that can read our HDF5 files natively without us having to convert to VTK first. But that's for another day, I guess :-)
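To illustrate why reinterpolating a cell-wise marker can be misleading, here is a minimal standalone sketch (plain Julia, not Trixi2Vtk internals): averaging a binary limiting indicator between neighboring cells invents intermediate values that were never part of the data.

```julia
# Binary, cell-wise limiting indicator: 1 = limited cell, 0 = unlimited.
marker = [0.0, 0.0, 1.0, 1.0, 0.0]

# Naive "reinterpolation": average between neighboring cells.
reinterp = [(marker[i] + marker[i + 1]) / 2 for i in 1:length(marker) - 1]

# The reinterpolated field now contains 0.5, suggesting "half-limited"
# cells, even though the marker is inherently binary.
println(reinterp)  # → [0.0, 0.5, 1.0, 0.5]
```

The same effect occurs with any interpolation order: the smoothed field no longer answers the yes/no question the marker encodes.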
I agree with @sloede
I thought so too. But I implemented the reinterpolation of the coefficients before it was possible to not reinterpolate the solution and just kept the code after the update.
I'm not sure I understand your thoughts correctly. I don't think we need two different files here either way. It should be enough to just delete the implication of the parameter

EDIT: Oh, I now see the problem. I didn't realize that it's a problem to save reinterpolated and non-reinterpolated node-level data in one
😌
No, I don't think that's a good idea. In some situations, it might still be advantageous to reinterpolate the solution variables to get a nice and smooth representation of the (part of the field that is a pure) DG solution.
After understanding the problem correctly 😅, I will keep the current implementation for now and add a warning in that case, if that's okay with you.
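A possible shape for such a warning, as a hedged sketch: all names below (`maybe_warn_reinterpolation`, `reinterpolate`, `n_node_vars`) are hypothetical placeholders, not the actual Trixi2Vtk API.

```julia
# Hypothetical helper; the argument names are stand-ins, not actual
# Trixi2Vtk variables.
function maybe_warn_reinterpolation(reinterpolate::Bool, n_node_vars::Integer)
    if reinterpolate && n_node_vars > 0
        @warn "Reinterpolating node-level data (e.g. subcell limiting " *
              "coefficients) may not yield a meaningful representation. " *
              "Consider converting these variables with `reinterpolate=false`."
    end
    return nothing
end
```

Called once per converted file, this would keep the single-file workflow while flagging the caveat discussed above.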
I also think that it is not very "meaningful" to reinterpolate the node-wise
I think that @sloede is correct. It would be annoying, but you could create two separate files, where the solution re-interpolates but the cell data does not:

```julia
trixi2vtk(joinpath("out", "solution_*.h5"), output_directory="out")
trixi2vtk(joinpath("out", "solution_*_celldata.h5"), output_directory="out", reinterpolate=false)
```

The second call might need to use
So, to avoid the annoying part with two files, I just added a comment if

For the testing, I added two new tests. After running
After a long time, I remembered this PR and think it is important to merge it into main. Otherwise, no visualization of the subcell limiting coefficient is possible.
LGTM with minor comments
@sloede I adapted the hash number
Re 1: Here you should be able to get the artifact files for older runs: https://github.com/trixi-framework/Trixi2Vtk.jl/actions/runs/9328145281. I am not 100% sure how to get them if the initial run fails - maybe try to make the test pass artificially, download the reference files, and then update them? I know this is a finicky business, but nobody has yet put in the time for a proper testing setup for Trixi2Vtk.jl... if you're up for it, we could sit together and try to come up with some strategies on how to mitigate this 😊

Re 2: Yes, Julia v1.7 heavily downgrades Trixi.jl. We switched to a minimum of Julia v1.8 over a year ago (https://github.com/trixi-framework/Trixi.jl/blame/06407f44041a04352800b7add4b20a9ac235f4ae/Project.toml), so your tests probably require Julia v1.8 or later. I'd just put the tests in an if condition for the Julia version to check if that fixes it.
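Such a version guard could look like the following sketch; the testset name and body are placeholders, not the actual Trixi2Vtk.jl tests.

```julia
using Test

# Skip the new tests on Julia < v1.8, where Trixi.jl gets downgraded
# too far for the subcell limiting elixirs to run.
if VERSION >= v"1.8"
    @testset "node variables (subcell limiting coefficient)" begin
        @test true  # placeholder for the actual conversion tests
    end
end
```

Version literals like `v"1.8"` compare correctly component-wise, so this also covers Julia v1.10 and later.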
Okay, thanks. It seems like the artifacts are saved after all. So, I can directly download the files from https://github.com/trixi-framework/Trixi2Vtk.jl/actions/runs/9614040983.
I'm not sure if I want to spend time on that 😶
Okay, thanks. Will do it.
I totally understand. Just keep in mind that all this infrastructure around Trixi.jl, including pre-/postprocessing, was built by someone who probably should have been doing other things. Not saying it needs to be you, but things like Trixi2Vtk.jl don't build themselves (yet 😬).
It worked with downloading the reference file from the artifact page. Unfortunately, the test fails on macOS.
So, the abs tolerance has to be about

And for the test without reinterpolation it is almost the same. There the tests work with
I don't really know what to do with that 😕 @sloede What do you think?
Do the differences in the solution between macOS and Ubuntu already exist before converting the files to VTK? That is, is the 1e-4 difference already present in the Trixi.jl solution (files)? And did you experience something similar for the Trixi.jl tests on macOS?
That's a good question. No, within CI runs, I never really experienced differences between Linux and macOS in Trixi.jl before. What I experienced was that the subcell limiting itself is very sensitive to small changes (by construction). So, small differences can grow bigger and bigger over time.

Note: Since even the tests without any reinterpolation are failing for macOS, where all data is only copied and reshaped within Trixi2Vtk, it is not only a problem here but maybe in the compression in between. I'm no expert on the VTK files, but for me, it looks like the vtk file saves the data with type
Then, in
Is it possible that subcell limiting produces slightly different solutions on the different machines, and these differences grow bigger due to the compression in the vtu file? Although, the largest difference for rho is about
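If the writer really stores the data as Float32 (an assumption from the discussion above, not confirmed for Trixi2Vtk), a quick sketch shows the truncation error for values of order one is around 1e-9 to 1e-8, well below the observed 1e-4 difference:

```julia
# Round-trip a Float64 value through Float32, as a VTK writer storing
# single precision would (assumption, not confirmed for Trixi2Vtk).
x64 = 0.1234567891234
x32 = Float32(x64)
err = abs(x64 - Float64(x32))
println(err)  # on the order of 1e-9 to 1e-8, well below 1e-4
```

So single-precision storage alone would not explain a 1e-4 discrepancy between platforms.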
The compression is lossless, so I don't really think that this is the reason.
But this indicates that the difference is already there in the Trixi.jl solution, i.e., before Trixi2Vtk touches it, doesn't it?
Yes, that's true. I just took a look into the CI file in Trixi.jl. Do I get this correctly that we don't run the "normal" tests on macOS and Windows? Only the threaded and MPI tests. In the past, I always had this issue, mostly with the blast elixirs. Not really sure why that is. Do you think I should spend some time on investigating this? I mean, the method itself is very sensitive to small changes by construction.
Yes. Since fewer macOS runners are available to us on the free plan, we only run some basic tests on macOS and assume that everything else "just works" as well. Originally, we tested everything everywhere, but this just wasn't feasible anymore due to long CI runtimes.
This is something you need to discuss with your advisor, whether it is worth investigating right now (maybe bring it up on Friday?). IMHO it is at least somewhat unfortunate that floating point truncation errors seem to have such a significant impact already. It indicates to me that either there's something wrong with the method itself (unlikely), with the implementation (less unlikely, but still), or that the setups are not robust enough (my current favorite). By robust I mean that maybe you are not running a stable simulation but rather simulations that are just not crashing. I wonder if choosing slightly different parameters would allow you to alleviate the issue (thus confirming my suspicion) or whether it is really by design.
Yes. But please leave a short comment explaining the reason for this and x-ref this PR (#67).
Yes, I agree. I took a quick look into it and made a list of failing tests. I will add this here but will present this also on Friday:
After that (but of course this must be investigated further) I would assume you are right with:
This PR adds the functionality to handle node-level data besides the solution itself.
This is required to visualize subcell limiting coefficients, as used in #1476 and #1502.
It comes together with the PR to Trixi.jl, which saves the coefficients in the `.h5` files.

TODO:
Question: