Use real-time GPU isosurface rendering in Isodose module #80
Comments
I'm going to add support for GPU isosurface volume rendering for the 3D view as part of VTK practice. Judging by the VTK example, it shouldn't be very hard. A couple of questions beforehand:
As an experiment, you can just put the volume rendering actor and mapper directly in the VTK renderer (see helpful code snippets here). The most important thing is to check whether it works well with multi-volume rendering (if you use a separate volume raycast mapper for each volume then they are not composited correctly); it would be completely useless if we could only render a single volume at a time. In addition, you also need to test whether point picking works on these surfaces (can you place a markup fiducial on the rendered surface?). Even if everything works well, you would still need a solution for slice display, and you would also need to implement non-linear transform support for volume rendering. Volume rendered surfaces cannot benefit from advanced physically based rendering options, screen-space ambient occlusion, surface textures, and shaders. Overall, it seems that using the volume renderer for displaying isosurfaces would require lots of programming effort and you would end up with a solution that is much more limited in several aspects compared to what we have now. There are probably other things you could work on that have a much better cost/benefit ratio.
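For reference, a minimal sketch of that experiment in the Slicer Python console, using vtkGPUVolumeRayCastMapper in isosurface blend mode (the approach described in the Kitware blog post linked below). The dose node name "RTDOSE" and the isodose levels are placeholders, and VTK 8.2 or later is assumed for SetBlendModeToIsoSurface; this is only a quick test, not a proposed implementation.

```python
# Sketch: render a dose volume as GPU isosurfaces by adding a vtkVolume
# directly to the 3D view's renderer (run in the Slicer Python console).
import vtk
import slicer

doseNode = slicer.util.getNode("RTDOSE")  # placeholder dose volume node name

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputData(doseNode.GetImageData())
mapper.SetBlendModeToIsoSurface()  # requires VTK >= 8.2

volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.ShadeOn()
volumeProperty.SetInterpolationTypeToLinear()

# Opacity and color transfer functions (placeholder dose values in Gy)
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(10.0, 0.3)
opacity.AddPoint(50.0, 0.8)
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(10.0, 0.0, 1.0, 0.0)
color.AddRGBPoint(50.0, 1.0, 0.0, 0.0)
volumeProperty.SetScalarOpacity(opacity)
volumeProperty.SetColor(color)

# Isosurface values to extract on the GPU
volumeProperty.GetIsoSurfaceValues().SetValue(0, 10.0)
volumeProperty.GetIsoSurfaceValues().SetValue(1, 50.0)

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(volumeProperty)

# Place the volume in RAS space (GetImageData is in IJK coordinates)
ijkToRas = vtk.vtkMatrix4x4()
doseNode.GetIJKToRASMatrix(ijkToRas)
volume.SetUserMatrix(ijkToRas)

# Add the volume to the first 3D view's renderer
renderer = slicer.app.layoutManager().threeDWidget(0).threeDView().renderWindow().GetRenderers().GetFirstRenderer()
renderer.AddVolume(volume)
renderer.GetRenderWindow().Render()
```

With something like this in place, the multi-volume compositing and fiducial picking questions above can be checked by hand in the 3D view.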
To solve the original problem of having many nodes in the scene, a much better solution could be to show isodose surfaces and lines using a single segmentation node.
Maybe you are right, it isn't worth the effort.
I can add a checkbox on the module panel to select whether the isodose surfaces should be stored as a segmentation node.
I like the idea of putting the isodose surfaces into a single segmentation node. In that case the key thing will be to make closed surface the master representation, to avoid unnecessary and unexpected conversions.
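A minimal sketch of that idea, assuming the Slicer segmentation API: create one segmentation node, set closed surface as its master representation, and add each isodose surface as a segment. Sphere sources stand in for the real isodose poly data, and the segment names and colors are placeholders.

```python
# Sketch: store isodose surfaces as segments of a single segmentation node
# with closed surface as the master representation (Slicer Python console).
import vtk
import slicer

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode", "Isodose")
segmentationNode.CreateDefaultDisplayNodes()

# Make closed surface the master representation so adding surfaces triggers no conversion
closedSurfaceName = slicer.vtkSegmentationConverter.GetSegmentationClosedSurfaceRepresentationName()
segmentationNode.GetSegmentation().SetMasterRepresentationName(closedSurfaceName)

# Add one segment per isodose level; a sphere stands in for the real isosurface poly data
for radius, name, color in [(30.0, "10 Gy", (0.0, 1.0, 0.0)), (15.0, "50 Gy", (1.0, 0.0, 0.0))]:
    sphere = vtk.vtkSphereSource()
    sphere.SetRadius(radius)
    sphere.Update()
    segmentationNode.AddSegmentFromClosedSurfaceRepresentation(sphere.GetOutput(), name, color)
```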
This is a partial solution to issue SlicerRt#80. Two modes of isodose representation have been implemented: single border mode and double border mode. Single border mode (solid surface) shows the isosurface for doses higher than thresholdMin. Double border mode (hollow or ring-shaped surface) shows the isosurface for the dose range from thresholdMin up to thresholdMax. For example, with dose values of 10 Gy, 25 Gy, 30 Gy, and 50 Gy, single border mode will generate isosurfaces for doses higher than 10 Gy, higher than 25 Gy, higher than 30 Gy, and higher than 50 Gy, while double border mode will generate isosurfaces from 10 Gy to 25 Gy, from 25 Gy to 30 Gy, and from 30 Gy to 50 Gy.
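For illustration only (not necessarily the code in the referenced change), here is a sketch of how the two modes could be derived from the dose volume with standard VTK filters: single border mode contours the dose image directly at each level, while double border mode first thresholds the dose into the thresholdMin–thresholdMax band and contours the resulting binary mask. doseImage and the dose levels are placeholders.

```python
# Sketch of the two isodose surface modes using standard VTK filters.
# doseImage: vtkImageData of dose values (e.g. from doseVolumeNode.GetImageData()).
import vtk

def single_border_surface(doseImage, thresholdMin):
    """Solid surface: isosurface of all voxels with dose above thresholdMin."""
    contour = vtk.vtkMarchingCubes()
    contour.SetInputData(doseImage)
    contour.SetValue(0, thresholdMin)
    contour.Update()
    return contour.GetOutput()

def double_border_surface(doseImage, thresholdMin, thresholdMax):
    """Hollow (ring-shaped) surface: isosurface of the dose band between thresholdMin and thresholdMax."""
    threshold = vtk.vtkImageThreshold()
    threshold.SetInputData(doseImage)
    threshold.ThresholdBetween(thresholdMin, thresholdMax)
    threshold.SetInValue(1)
    threshold.SetOutValue(0)
    threshold.SetOutputScalarTypeToUnsignedChar()
    threshold.Update()

    contour = vtk.vtkMarchingCubes()
    contour.SetInputData(threshold.GetOutput())
    contour.SetValue(0, 0.5)  # surface of the binary band
    contour.Update()
    return contour.GetOutput()

# Example with the levels from the description: 10, 25, 30, 50 Gy
levels = [10.0, 25.0, 30.0, 50.0]
singleSurfaces = [single_border_surface(doseImage, lo) for lo in levels]
doubleSurfaces = [double_border_surface(doseImage, lo, hi) for lo, hi in zip(levels[:-1], levels[1:])]
```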
In addition to generating model nodes for the isosurfaces (which results in model hierarchies that are hard to keep in sync and hard to handle when multiple dose volumes are present), add support for GPU-based isosurface rendering. See Kitware blog post https://blog.kitware.com/gpu-rendering-of-isosurfaces/
Related tickets:
#18
#8