A script to generate a 3D model from 2D confocal image slices. This code is used to reconstruct two-photon polymerization (2PP) structures from 2D slices taken before and after compression tests.
The Confocal Rendering script is housed within the `generate.py` Python file. To generate the 3D model, run from the terminal:
$ python3 generate.py
All the 2D .tif images should be placed in the `images` directory. The program reads through every .tif file in the `images` directory alphabetically, so only place the images you wish to process there and ensure they are properly sorted. Processed images will be placed in the `processed_images` directory.
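For illustration, the sorted read order described above can be reproduced with a small helper (the function name `list_slices` is hypothetical, not from the script):

```python
import glob
import os

def list_slices(folder):
    """Return the .tif files in `folder` in alphabetical order.

    This mirrors the read order described above: files are processed
    alphabetically, so their names must sort into the correct z-order.
    """
    return sorted(glob.glob(os.path.join(folder, "*.tif")))
```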
The 3D model will be saved in the main directory as `structure.obj`.
The following command-line flags are supported:

- `-in_folder`: specifies the input directory from which to read images. Set to `images` by default.
- `-out_folder`: specifies the directory to write post-processed images to. Set to `processed_images` by default.
- `-threshold`: specifies the image threshold. If the maximum intensity value of an image is less than this value, that image is skipped. Set to `40` by default.
- `-filename`: specifies the file name of the output .obj file. Set to `structure` by default.
- `-skeletonize`: dictates whether to skeletonize and re-dilate the model. This can normalize line widths but does not always work. Set to `False` by default.
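A minimal sketch of how these flags could be declared with `argparse`; the exact definitions in `generate.py` may differ (in particular, `-skeletonize` is modeled here as a string parsed into a boolean):

```python
import argparse

def build_parser():
    # Hypothetical declaration of the flags documented above; generate.py may differ.
    parser = argparse.ArgumentParser(
        description="Reconstruct a 3D model from 2D confocal .tif slices")
    parser.add_argument("-in_folder", default="images",
                        help="input directory of .tif slices")
    parser.add_argument("-out_folder", default="processed_images",
                        help="directory for post-processed images")
    parser.add_argument("-threshold", type=int, default=40,
                        help="skip slices whose max intensity is below this value")
    parser.add_argument("-filename", default="structure",
                        help="name of the output .obj file (without extension)")
    parser.add_argument("-skeletonize", type=lambda s: s.lower() == "true",
                        default=False,
                        help="skeletonize and re-dilate the model")
    return parser

args = build_parser().parse_args([])  # no arguments: all defaults apply
```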
The code is broken into two parts: image processing and 3D model generation.
This part reads the input .tif images from the `in_folder` and runs pre-processing to determine where the structure is. It then outputs all of the post-processed images into the `out_folder`. Do note: this will remove all files in the `out_folder`, mainly to ensure we do not have collisions between the current iteration and any previous iterations.
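The `out_folder` cleanup described above might look like this sketch (the function name is illustrative, not the script's actual code):

```python
import os
import shutil

def reset_folder(path):
    # Delete the folder and everything in it, then recreate it empty,
    # so results from a previous run cannot collide with the current one.
    if os.path.isdir(path):
        shutil.rmtree(path)
    os.makedirs(path)
```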
The image processing uses the scikit-image machine vision library. The processing pipeline involves:

- `threshold_yen()` to differentiate the structure from the background of the image. Initially I tried using Otsu thresholding, but in the end, Yen seemed to work better.
- `closing()` to help join any small gaps left by the thresholding.
- `skeletonize()` to reduce the image to its skeleton. Since this software is primarily built for testing wireframe structures and we are mainly focused on seeing how the shape changes, this helps isolate just the features of the image. This is only done if `-skeletonize` is set to `True`.
- `dilation()` to thicken the skeleton. This helps give depth to the image and lets different layers mesh together better. We also assume the beams of the structure are of similar widths, so this helps ensure consistency between the sizing of each of the structure's rods. This is only done if `-skeletonize` is set to `True`.
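The steps above can be sketched with scikit-image as follows; `process_slice` and its parameters are illustrative, not the script's actual function:

```python
import numpy as np
from skimage.filters import threshold_yen
from skimage.morphology import closing, skeletonize, dilation, disk

def process_slice(image, threshold=40, use_skeleton=False):
    # Skip slices whose maximum intensity falls below the threshold
    if image.max() < threshold:
        return None
    # Yen thresholding separates the structure from the background
    binary = image > threshold_yen(image)
    # Morphological closing joins small gaps left by thresholding
    binary = closing(binary, disk(3))
    if use_skeleton:
        # Reduce beams to one-pixel skeletons, then re-dilate so all
        # rods end up with a consistent width
        binary = skeletonize(binary)
        binary = dilation(binary, disk(2))
    return binary
```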
Then, the images are stacked into a 3D numpy array.
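Stacking with NumPy, for illustration:

```python
import numpy as np

# Stack N processed 2D slices (each H x W) into one N x H x W volume;
# axis 0 becomes the z-axis of the reconstruction.
slices = [np.zeros((64, 64), dtype=bool) for _ in range(10)]
volume = np.stack(slices, axis=0)
print(volume.shape)  # (10, 64, 64)
```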
To generate the 3D model, we run the marching cubes algorithm on the 3D point array. The code uses a library implementation of the marching cubes algorithm.
Generating the model involves:
- Use a Gaussian blur to smooth the image. This helps get rid of any artifacts that can occur due to the layers not being fully aligned. Moreover, it also helps decrease the complexity of the model, helping to reduce the file size of the final 3D model.
- Run Marching Cubes to convert the 3D array to a surface mesh 3D model.
- Scale the structure vertically to make it a cube. Since the height of the model is calculated based on the number of z-slices available, the image can end up stretched depending on how fine the z-resolution is. Thus, we re-scale the height to be the same size as the length/width. For this step, we assume the initial model was a cube.
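A sketch of these three steps, using `scipy.ndimage.gaussian_filter` and scikit-image's `measure.marching_cubes` as stand-ins for the implementation the script actually uses (`build_mesh` and its parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def build_mesh(volume, sigma=1.0, level=0.5):
    # Gaussian blur smooths layer-misalignment artifacts and reduces
    # the complexity (and file size) of the resulting mesh
    smoothed = gaussian_filter(volume.astype(float), sigma=sigma)
    # Marching cubes converts the 3D array into a surface mesh
    verts, faces, normals, values = measure.marching_cubes(smoothed, level=level)
    # Re-scale the height (z, axis 0 of the stack) to match the
    # length/width, assuming the original structure was a cube
    z_extent = volume.shape[0]
    xy_extent = volume.shape[1]
    verts[:, 0] *= xy_extent / z_extent
    return verts, faces
```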
The software was run and tested using Python 3.11.5, though it should be backwards compatible with older versions. The required libraries are listed below; all of them can be installed through pip.