
OHBM 2024 - Surface Deep Learning Tutorial

This repository contains the codebase for the surface deep learning tutorial at the OHBM 2024 Educational Symposium on Precision Surface Imaging.

Here, we introduce the tools to prepare surface data for surface deep learning. In particular, we detail the preprocessing steps that prepare cortical metrics and functional data for use with the Surface Vision Transformer (SiT) and the Multiscale Surface Vision Transformer (MS-SiT) on cortical prediction, classification, and segmentation tasks.


1. Installation & Set-up

A. Connectome Workbench

Connectome Workbench is a free software for visualising neuroimaging data and can be used for visualising cortical metrics on surfaces. Downloads and instructions here.

B. Conda usage

For PyTorch and dependencies installation with conda, please follow instructions in install.md.

C. Docker usage

For Docker support, please follow instructions in docker.md.

2. Data Preprocessing & Access to Preprocessed Data

To simplify reproducibility of our work, data already preprocessed as in S. Dahan et al. 2021 is available (see Section B). Otherwise, the following guidelines provide the preprocessing steps for custom datasets (Section A).

A. Data preprocessing for Surface Deep Learning

The following methodology is intended for processing CIFTI files into cortical metrics and functional data in the formats shape.gii and func.gii, for deep learning usage. We provide a bash script that recapitulates all the main preprocessing steps in ./tools/surface_preprocessing.sh. Below are the instructions for each step in the script.
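
To illustrate how such a driver script can chain the steps over subjects and hemispheres, here is a minimal Python sketch that builds the per-hemisphere -cifti-separate invocations as argument lists (it does not run them; paths, subject IDs, and the metric name are placeholders, not the script's actual values):

```python
from pathlib import Path

def build_separate_cmds(subjid, metric, data_dir, out_dir):
    """Build one wb_command -cifti-separate call per hemisphere for one metric.

    All names here are illustrative placeholders; adapt them to your dataset.
    """
    cifti = Path(data_dir) / f"{subjid}.{metric}.32k_fs_LR.dscalar.nii"
    cmds = []
    for hemi, struct in [("L", "CORTEX_LEFT"), ("R", "CORTEX_RIGHT")]:
        out = Path(out_dir) / f"{subjid}.{metric}.32k_fs_LR.{hemi}.shape.gii"
        cmds.append(["wb_command", "-cifti-separate", str(cifti),
                     "COLUMN", "-metric", struct, str(out)])
    return cmds

# Hypothetical subject and paths, for illustration only.
cmds = build_separate_cmds("sub-001", "corrThickness", "data", "separated")
```

Each argument list can then be passed to `subprocess.run` once Connectome Workbench is installed and `wb_command` is on the PATH.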

Step-by-Step Instructions

a. CIFTI separation

First, we separate the CIFTI files in the format dscalar.nii into individual cortical metrics for the left and right hemispheres. This is done using the workbench command -cifti-separate. Each metric (e.g., cortical thickness, curvature, MyelinMap_BC, sulcal depth) is saved as a .shape.gii file.

wb_command -cifti-separate ${path_to_data}/${subjid}.corrThickness.32k_fs_LR.dscalar.nii COLUMN -metric CORTEX_LEFT ${output_folder_separate}/${subjid}.corrThickness.32k_fs_LR.L.shape.gii
wb_command -cifti-separate ${path_to_data}/${subjid}.corrThickness.32k_fs_LR.dscalar.nii COLUMN -metric CORTEX_RIGHT ${output_folder_separate}/${subjid}.corrThickness.32k_fs_LR.R.shape.gii

b. Metric merging

Then, we merge the individual metric files into a single file for each hemisphere using the workbench command -metric-merge. This combines multiple cortical metrics into one .shape.gii file per hemisphere.

wb_command -metric-merge ${output_folder_separate}/${subjid}_R.shape.gii -metric ${output_folder_separate}/${subjid}.MyelinMap_BC.32k_fs_LR.R.shape.gii -metric ${output_folder_separate}/${subjid}.curvature.32k_fs_LR.R.shape.gii -metric ${output_folder_separate}/${subjid}.corrThickness.32k_fs_LR.R.shape.gii -metric ${output_folder_separate}/${subjid}.sulc.32k_fs_LR.R.shape.gii

wb_command -metric-merge ${output_folder_separate}/${subjid}_L.shape.gii -metric ${output_folder_separate}/${subjid}.MyelinMap_BC.32k_fs_LR.L.shape.gii -metric ${output_folder_separate}/${subjid}.curvature.32k_fs_LR.L.shape.gii -metric ${output_folder_separate}/${subjid}.corrThickness.32k_fs_LR.L.shape.gii -metric ${output_folder_separate}/${subjid}.sulc.32k_fs_LR.L.shape.gii

c. Metric resampling

Then, we resample the metrics to a standard icosahedral mesh (ico6) using the wb_command -metric-resample command. This ensures all metrics are aligned to a common spherical surface for consistent analysis. We provide ico6 meshes for both hemispheres in the folder ./surfaces. These icospheres work with our triangular mesh patching.

wb_command -metric-resample <metric-in> <current-sphere> <new-sphere> BARYCENTRIC <metric-out>

where <metric-in> is the input metric or functional file, <current-sphere> is the sphere the input metric is currently registered to, and <new-sphere> is the ico6 sphere provided.

For further details about the -metric-resample command, please follow this.


If the original input data is low resolution, it can be resampled to higher resolution sequentially. For this, we provide the icoN resolution surfaces. For instance:

wb_command -metric-resample <metric-in> ico-1.L.surf.gii ico-2.L.surf.gii BARYCENTRIC <metric-out>
wb_command -metric-resample <metric-in> ico-2.L.surf.gii ico-3.L.surf.gii BARYCENTRIC <metric-out>
etc.
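
The sequential chain above can be generated programmatically. The sketch below builds the list of -metric-resample calls from ico1 up to ico6; file names are illustrative placeholders, and it relies on the standard property that a regularly subdivided icosphere at level k has 10 * 4**k + 2 vertices (so ico6 has 40962):

```python
def ico_vertices(level):
    """Vertex count of a regularly subdivided icosphere at a given level."""
    return 10 * 4**level + 2

def upsample_chain(metric, start=1, stop=6, hemi="L"):
    """Build the sequence of wb_command -metric-resample calls from
    ico{start} to ico{stop}, feeding each step's output into the next.
    File naming is a placeholder convention, not the repo's own."""
    cmds, cur = [], f"{metric}.ico{start}.shape.gii"
    for k in range(start, stop):
        nxt = f"{metric}.ico{k + 1}.shape.gii"
        cmds.append(["wb_command", "-metric-resample", cur,
                     f"ico-{k}.{hemi}.surf.gii", f"ico-{k + 1}.{hemi}.surf.gii",
                     "BARYCENTRIC", nxt])
        cur = nxt
    return cmds

chain = upsample_chain("sub-001.sulc")
```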

d. Setting Cortex Left structure

For surface deep learning, by convention, right hemispheres are flipped so that they appear like left hemispheres on the sphere, and all hemispheres are processed together in the training pipelines.

Therefore, you can set the structure of the resampled metrics to CORTEX_LEFT for both hemispheres using the wb_command -set-structure command. This standardises the hemisphere structure for subsequent analysis.

for i in *; do wb_command -set-structure ${i} CORTEX_LEFT; done

Once symmetrised, both left and right hemispheres have the same orientation when visualised on a left hemisphere template.

Results: After following these steps, you should get a set of shape.gii files at ico6 resolution with CORTEX_LEFT structure.

e. (optional) Patching surface data

To run the Surface Vision Transformers, there are two possible approaches: (1) preprocess the metric files to create a numpy array with all the compiled data, or (2) use a custom-made dataset/dataloader, which offers more flexibility in terms of data processing and data augmentation techniques.
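
To make option (1) concrete, here is a hedged sketch of the patching idea, assuming the SiT configuration from Dahan et al. 2022, in which the ico6 sphere (40962 vertices) is divided into the 320 faces of an ico2 grid, each patch covering 153 ico6 vertices. The `patch_indices` argument stands in for the vertex ids that the provided patching files define, so it is a placeholder here:

```python
# Nominal SiT patching constants (Dahan et al. 2022); illustrative only.
NUM_PATCHES, PATCH_VERTICES = 320, 153

def patch_tokens(vertex_data, patch_indices):
    """Flatten per-vertex features into one token row per patch.

    vertex_data: mapping vertex id -> list of channel values.
    patch_indices: list of patches, each a list of vertex ids.
    Returns [num_patches][patch_vertices * channels] rows.
    """
    return [[feat for vid in patch for feat in vertex_data[vid]]
            for patch in patch_indices]

# Tiny synthetic example: 6 vertices, 2 channels, 2 patches of 3 vertices.
toy_data = {v: [float(v), float(v) * 10] for v in range(6)}
toy_patches = [[0, 1, 2], [3, 4, 5]]
tokens = patch_tokens(toy_data, toy_patches)
```

In the real pipeline the same flattening would run over all 320 patches and the merged 4-channel metric files, producing the numpy array that the training scripts consume.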

To prepare the data for option 1, you can use the YAML file config/preprocessing/hparams.yml: change the path to the data, set the parameters, and run the preprocessing.py script in ./tools:

cd tools
python preprocessing.py ../config/preprocessing/hparams.yml

B. (Optional) Accessing processed data

Cortical surface metrics already processed as in S. Dahan et al. 2021 and A. Fawaz et al. 2021 are available upon request.

How to access the processed data?

To access the data please:


G-Node GIN repository

Once the confirmation has been sent, you will have access to the G-Node GIN repository containing the data already processed. The data used for this project is in the zip files `regression_native_space_features.zip` and `regression_template_space_features.zip`. You also need to use the `ico-6.surf.gii` spherical mesh.

Training and validation sets are available for the tasks of birth-age and scan-age prediction, in both template and native configurations.

However, the test set is not currently publicly available, as it was used as the testing set in the SLCN challenge on surface learning, organised alongside the MLCN workshop at MICCAI 2022.

3. Training Surface Deep Learning Models

For training a SiT model, use the following command:

cd tools
python train.py ../config/SiT/hparams.yml

where all hyperparameters for training and model design are set in the YAML file config/SiT/hparams.yml, such as:

  • Transformer architecture
  • Data loading
  • Optimisation strategy
  • Patching configuration
  • Logging
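
As a rough illustration of the kind of structure such a configuration file can take, here is a hypothetical sketch; the key names and values below are invented for illustration, and the actual names live in config/SiT/hparams.yml:

```yaml
# Hypothetical illustration only -- check config/SiT/hparams.yml for the real keys.
transformer:
  dim: 192
  depth: 12
  heads: 3
data:
  task: scan_age
  configuration: template
optimisation:
  optimizer: Adam
  lr: 0.0003
  epochs: 100
patching:
  ico_grid: 2
logging:
  use_wandb: false
```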

A Jupyter notebook is also provided as a tutorial for training SiT and MS-SiT models. You can find it in ./script/surface_vision_transformers_tutorial.ipynb.

4. Model Zoo

Here is a list of available pre-trained models on various datasets.

| Dataset | Surface Vision Transformer (SiT) | Multiscale Surface Vision Transformer (MS-SiT) |
| --- | --- | --- |
| dHCP (cortical metrics) | Scan Age Prediction / Birth Age Prediction | Scan Age Prediction / Birth Age Prediction |
| UKB (cortical metrics) | Scan Age Prediction / Sex Classification | Scan Age Prediction / Sex Classification |
| HCP (3T - cortical metrics) | Scan Age Prediction / Sex Classification | Scan Age Prediction / Sex Classification |

Citation

Please cite these works if you find them useful:

Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis

@inproceedings{dahan2022surface,
  title={Surface vision transformers: Attention-based modelling applied to cortical analysis},
  author={Dahan, Simon and Fawaz, Abdulah and Williams, Logan ZJ and Yang, Chunhui and Coalson, Timothy S and Glasser, Matthew F and Edwards, A David and Rueckert, Daniel and Robinson, Emma C},
  booktitle={International Conference on Medical Imaging with Deep Learning},
  pages={282--303},
  year={2022},
  organization={PMLR}
}

The Multiscale Surface Vision Transformer

@misc{dahan2024multiscale,
      title={The Multiscale Surface Vision Transformer},
      author={Simon Dahan and Logan Z. J. Williams and Daniel Rueckert and Emma C. Robinson},
      year={2024},
      eprint={2303.11909},
      archivePrefix={arXiv}
}
