
Inference Model

Mohamed E. Masoud edited this page Jul 9, 2023 · 22 revisions

Default Models

For end-user convenience, the MeshNet segmentation models are trained and converted to TensorFlow.js (tfjs).

Although the MeshNet model has far fewer parameters than U-Net, the classical segmentation model, it still achieves a competitive Dice score.
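As a refresher on the metric mentioned above, the Dice score measures the overlap between a predicted and a ground-truth segmentation. A minimal numpy sketch (illustrative only, not Brainchop code):

```python
# Dice = 2 * |pred ∩ truth| / (|pred| + |truth|), for binary masks.
import numpy as np

def dice_score(pred, truth):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

For multi-label segmentation, the score is typically computed per label and averaged.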

New Models

If you need to import your own 3D segmentation model, please make sure your model layers are compatible with tfjs layers.

If you are using a layer not supported by tfjs, try to find a workaround. For example, a Keras batchnorm5d layer will cause an issue in the converted tfjs model because tfjs has no batchnorm5d layer. One possible workaround is a fusion technique in Keras: merge the batch normalization layer into the preceding convolution layer, as shown in this link.
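The fusion workaround relies on batch normalization being an affine transform at inference time, so its parameters can be folded into the convolution's kernel and bias. A hedged numpy sketch of the arithmetic, modeling a 1×1×1 convolution as a per-voxel matrix multiply for simplicity:

```python
# Fold BN (gamma, beta, mean, var) into the preceding conv (w, b):
#   scale   = gamma / sqrt(var + eps)
#   w_fused = w * scale
#   b_fused = beta + (b - mean) * scale
# After folding, the BN layer can be dropped from the exported model.
import numpy as np

rng = np.random.default_rng(0)
c_in, c_out, eps = 4, 3, 1e-3

w = rng.normal(size=(c_in, c_out))        # conv kernel (1x1x1)
b = rng.normal(size=c_out)                # conv bias
gamma = rng.normal(size=c_out)            # BN scale
beta = rng.normal(size=c_out)             # BN shift
mean = rng.normal(size=c_out)             # BN moving mean
var = rng.uniform(0.5, 2.0, size=c_out)   # BN moving variance

scale = gamma / np.sqrt(var + eps)
w_fused = w * scale                       # broadcasts over output channels
b_fused = beta + (b - mean) * scale

x = rng.normal(size=(10, c_in))           # 10 voxels of input features
y_ref = ((x @ w + b) - mean) * scale + beta   # conv followed by BN
y_fused = x @ w_fused + b_fused               # fused conv only
print(np.allclose(y_ref, y_fused))        # True
```

The same folding applies channel-wise to real 3D convolution kernels; only the broadcasting shapes change.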

In addition to the full-volume inference option, Brainchop is also designed to accept batch input shapes of the form [null, batch_D, batch_H, batch_W, 1] (e.g. [null, 38, 38, 38, 1]); the smaller the batch dimensions, the lighter the load on browser resources.
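To feed such a batch input, a full MRI volume has to be tiled into subvolumes. A hedged numpy sketch (not Brainchop's actual preprocessing), using the 38³ cube size from the example above and zero-padding so each axis divides evenly:

```python
# Tile a 3D volume into non-overlapping cubes shaped [N, 38, 38, 38, 1].
import numpy as np

def to_batches(volume, cube=38):
    # Pad each axis up to the next multiple of `cube`.
    pads = [(0, (-s) % cube) for s in volume.shape]
    padded = np.pad(volume, pads)
    d, h, w = (s // cube for s in padded.shape)
    # Carve the padded volume into cubes and stack them as a batch.
    return (padded
            .reshape(d, cube, h, cube, w, cube)
            .transpose(0, 2, 4, 1, 3, 5)
            .reshape(-1, cube, cube, cube, 1))

vol = np.zeros((256, 256, 256), dtype=np.float32)
print(to_batches(vol).shape)  # (343, 38, 38, 38, 1)
```

The predicted labels for each cube are then stitched back into the full volume at the matching offsets.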

After training your model on a 3D segmentation task, you can convert it to tfjs with one of several converters, either from the command line or with Python code such as:

                        # Python sample code
                        import keras
                        import tensorflowjs as tfjs
                        # Load the saved Keras model
                        keras_model = keras.models.load_model('path/to/model/location')
                        # Convert and save the Keras model to tfjs_target_dir
                        tfjs.converters.save_keras_model(keras_model, tfjs_target_dir)
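The command-line route uses the `tensorflowjs_converter` tool that ships with the `tensorflowjs` pip package. A sketch with placeholder paths (substitute your own model and output locations):

```shell
pip install tensorflowjs
# Convert a saved Keras HDF5 model to the tfjs Layers format.
tensorflowjs_converter --input_format=keras \
    path/to/model.h5 \
    path/to/tfjs_target_dir
```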

For more information about importing a model (e.g. Keras) into TensorFlow.js, please refer to this tutorial.

A successful conversion to tfjs produces two main files, the model.json file and the binary weights (.bin) file, as shown here:

  • The model.json file contains the model topology and the weights manifest.
  • The binary weights file (i.e. *.bin) contains the concatenated weight values.
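The relationship between the two files can be seen in the weights manifest itself. A hedged sketch that writes and inspects a hand-made, minimal model.json (real converted files carry the full layer topology; the layer and shard names here are made up for illustration):

```python
# The weightsManifest maps each .bin shard to the named tensors it holds.
import json, os, tempfile

model_json = {
    "format": "layers-model",
    "modelTopology": {"class_name": "Sequential", "config": {"layers": []}},
    "weightsManifest": [
        {"paths": ["group1-shard1of1.bin"],   # binary weights file(s)
         "weights": [{"name": "conv3d/kernel",
                      "shape": [3, 3, 3, 1, 5],
                      "dtype": "float32"}]}
    ],
}

d = tempfile.mkdtemp()
with open(os.path.join(d, "model.json"), "w") as f:
    json.dump(model_json, f)

with open(os.path.join(d, "model.json")) as f:
    manifest = json.load(f)["weightsManifest"]
for group in manifest:
    print(group["paths"], [w["name"] for w in group["weights"]])
```

When loading, tfjs reads model.json first and then fetches the shard files listed under `paths`.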

Importing the above files can easily be done using the model browse option from the model list.



[Screenshot: Brainchop browsing window to load custom models]

In addition to the model and weights files mentioned above, the model browsing form also has the following settings:

  • Labels: The labels.json file that has a schema such as:

                     {"0": "background", "1": "Grey Matter", "2": "White Matter"}
    
  • Colors: The colorLUT.json file that has a schema such as:

                    {"0": "rgb(0,0,0)", "1": "rgb(0,255,0)", "2": "rgb(0,0,255)"}
    
  • Transpose Input: Transpose the 3D MRI input data axes into the best orientation for inference.

  • Crop Input: Crop the brain from the background before feeding the volume to the inference model; this speeds up inference and lowers memory use in the memory-limited browser.

  • Crop Padding: Add padding to the cropped 3D brain image for better inference results.

  • Pre-Model Masking: Select a masking model for cropping the brain.

  • Filter Output By Mask: A voxel-wise multiplication of the resulting output with the mask produced by the pre-model inference. This option can filter out wrongly segmented regions (e.g., skull areas), but it can also remove some properly segmented regions.
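The mask-filtering option above reduces to a single elementwise operation. A hedged numpy sketch with made-up label values (2D for readability; the real data is 3D):

```python
# Voxel-wise multiplication of the segmentation output with a binary
# brain mask zeroes out labels that fall outside the mask.
import numpy as np

segmentation = np.array([[0, 1, 2],
                         [2, 1, 1],
                         [0, 2, 0]])    # labels from the main model
brain_mask = np.array([[0, 1, 1],
                       [1, 1, 0],
                       [0, 1, 0]])      # from the pre-model inference

filtered = segmentation * brain_mask    # voxel-wise multiplication
print(filtered)
```

Note the trade-off described above: the label at position (1, 2) is discarded because the mask is 0 there, which is correct if that voxel is skull but a loss if the mask was too tight.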
