This repository contains the codebase for applying surface vision transformer models to surface data (e.g. cortical data). It contains the official PyTorch implementation of:
- SiT - Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis [MIDL 2022]
- MS-SiT - The Multiscale Surface Vision Transformer [MIDL 2024]
This repository provides instructions for accessing preprocessed cortical data for regression, classification and segmentation tasks, and for training SiT and MS-SiT models.
Here, the Surface Vision Transformer (SiT) is applied to cortical data for phenotype prediction.
V.3.1 - 24.09.24
Minor codebase update - 24.09.24
- Updating pre-training script for Masked Patch Pre-training (MPP)
🔥 V.3.0 - 19.09.24
Major codebase update - 18.09.24
- Adding MS-SiT segmentation codebase
- Adding metrics-file dataloader for SiT and MS-SiT models (numpy loader still available)
- Updating GIN repository for dHCP access
- Adding new GIN repository with MindBoggle dataset
V.2.0 - 22.07.24
Major codebase update - 22.07.24
- Adding MS-SiT model into the codebase
V.1.1 - 12.02.24
Major codebase update - 12.02.24
- Adding masked patch pretraining code to the codebase
- Pre-training can be run simply with: python pretrain.py ../config/SiT/pretraining/mpp.yml
V.1.0 - 18.07.22
Major codebase update - 18.07.22
- Adding birth age and scan age prediction tasks
- Simplifying training script
- Adding birth age prediction script
- Simplifying preprocessing script
- Single config file for tasks (scan age / birth age) and data configurations (template / native)
- Adding mesh indices to extract non-overlapping triangular patches from a cortical mesh ico6 sphere representation
V.0.2
Update - 25.05.22
- Testing file and config
- Installation guidelines
- Data access
V.0.1
Initial commits - 12.10.21
- Training script
- README
- Config file for training
Connectome Workbench is free software for visualising neuroimaging data and can be used to visualise cortical metrics on surfaces. Downloads and instructions are available here.
To install PyTorch and the dependencies with conda, please follow the instructions in install.md.
Coming soon
For Docker support, please follow the instructions in docker.md.
The data used in these projects for regression tasks are cortical metrics (cortical thickness, curvature, myelin maps and sulcal depth maps) from the dHCP dataset. Instructions for processing the MRI scans and extracting cortical metrics can be found in S. Dahan et al 2021 and references therein.
To simplify reproducibility, the data has already been pre-processed (compiled into numpy arrays or provided as raw GIFTI files) and is made available following the guidelines below.
Cortical surface metrics (cortical thickness, curvature, myelin maps and sulcal depth maps), already processed as in S. Dahan et al 2021 and A. Fawaz et al 2021, are available upon request.
Sign dHCP access agreement
To access the data, please:
- Sign in here
- Sign the dHCP open access agreement
- Forward the confirmation email to [email protected]
Create a G-Node GIN account
Please create an account on the GIN platform here
Get access to the G-Node GIN repository
- Please also share your G-Node username with [email protected]
- Then, you will be added to the SLCN 2023 repository
Training, validation and testing sets are available, as used in S. Dahan et al 2021 and A. Fawaz et al 2021, for the tasks of birth-age (gestational age - GA) and scan-age (postmenstrual age at scan - PMA) prediction, in template and native configurations.
dHCP data has been resampled to ico6 resolution (40k vertices). Left and right hemispheres are symmetrised; see the image below.
Important: the dHCP data is accessible in two formats, numpy and GIFTI.
In numpy format, the surface data is already patched (as explained in S. Dahan et al 2021) with an ico2 grid, and compiled into train, validation and test arrays. Each array has a shape of (B, N, C, V), with B the number of subjects, N the number of patches (320), C the number of input channels (4) and V the number of vertices per patch (153).
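As a quick sanity check, these arrays can be inspected directly with numpy. A minimal sketch, assuming an illustrative filename (replace it with the name of the array downloaded from the GIN repository):

```python
import numpy as np

# Illustrative filename: substitute the actual array downloaded from GIN.
train = np.load("train_data.npy")

# Expected layout: (B, N, C, V) = (subjects, 320 patches, 4 channels, 153 vertices per patch)
print(train.shape)
assert train.shape[1:] == (320, 4, 153)
```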
We also make available GIFTI files with the different cortical metrics merged per subject and per hemisphere. For instance, sub-CC00051XX02_ses-7702_L.shape.gii contains the 4 cortical metrics merged into a single file at the ico6 (40k vertices) resolution. This data format is more flexible for further post-processing (if needed), and for building more complex dataloading strategies (with data augmentations, for instance; see the example below).
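For example, a merged-metrics file can be loaded with nibabel and patched on the fly. This is a minimal sketch: the patch-index filename is hypothetical, but the layout follows the (N, C, V) convention described above.

```python
import numpy as np
import nibabel as nb

# Each GIFTI data array holds one of the 4 cortical metrics at ico6
# resolution (40,962 vertices); stack them into a channels-by-vertices array.
img = nb.load("sub-CC00051XX02_ses-7702_L.shape.gii")
metrics = np.stack([d.data for d in img.darrays])  # (4, 40962)

# Hypothetical precomputed index array assigning each of the 320
# non-overlapping triangular patches its 153 ico6 vertices, shape (320, 153).
patch_indices = np.load("ico2_patch_indices.npy")

patches = metrics[:, patch_indices].transpose(1, 0, 2)  # (N, C, V) = (320, 4, 153)
```

Loading per subject like this makes it straightforward to insert augmentations (for instance, metric-wise normalisation) before patching.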
The MindBoggle dataset with cortical metrics (sulcal depth and curvature) has been further pre-processed with MSMSulc alignment and resampled to ico6 resolution (40k vertices).
Pre-processed MindBoggle data is available in the following G-Node GIN repository: MindBoggle processed dataset.
Please create an account and forward your username to [email protected] to be added to the repository and access the data.
This repository is designed as a modular framework. Most of the model and training hyperparameters can be set within config files, used as input to the training scripts. Training scripts are located within the tools/ folder.
Once in the tools folder, one can start training an SiT or MS-SiT model with the following command:
python train.py ../config/SiT/training/hparams.yml
or
python train.py ../config/MS-SiT/training/hparams.yml
All hyperparameters for training and model design are set in the YAML files config/SiT/training/hparams.yml and config/MS-SiT/training/hparams.yml, such as:
- Transformer architecture
- Training strategy: from scratch, ImageNet or SSL weights
- Optimisation strategy
- Patching configuration
- Logging
One important point: as explained in the dHCP section above, the data is available in either numpy or gifti format. The data/loader parameter in the config files should be set accordingly (see the sketch below).
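For illustration, such a config might look like the sketch below. The key names are hypothetical (refer to the provided hparams.yml files for the exact schema); they simply mirror the option groups listed above.

```yaml
# Hypothetical keys, shown only to illustrate the option groups above;
# refer to the provided hparams.yml files for the exact schema.
data:
  loader: metrics          # 'metrics' (gifti files) or 'numpy' (pre-patched arrays)
  task: scan_age           # scan age or birth age prediction
  configuration: template  # template or native space
transformer:
  dim: 192
  depth: 12
  heads: 3
training:
  init_weights: scratch    # from scratch, ImageNet or SSL (MPP) weights
  epochs: 100
  lr: 0.0003
```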
The MS-SiT model can be used to train a segmentation model as follows:
python train_segmentation.py ../config/MS-SiT/segmentation/hparams.yml
Here, only the metrics dataloader is available.
Coming soon
This codebase uses the vision transformer implementation from lucidrains/vit-pytorch and the pre-trained ViT models from the timm library.
Please cite these works if you find them useful:
Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis
@InProceedings{pmlr-v172-dahan22a,
title = {Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis},
author = {Dahan, Simon and Fawaz, Abdulah and Williams, Logan Z. J. and Yang, Chunhui and Coalson, Timothy S. and Glasser, Matthew F. and Edwards, A. David and Rueckert, Daniel and Robinson, Emma C.},
booktitle = {Proceedings of The 5th International Conference on Medical Imaging with Deep Learning},
pages = {282--303},
year = {2022},
editor = {Konukoglu, Ender and Menze, Bjoern and Venkataraman, Archana and Baumgartner, Christian and Dou, Qi and Albarqouni, Shadi},
volume = {172},
series = {Proceedings of Machine Learning Research},
month = {06--08 Jul},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v172/dahan22a/dahan22a.pdf},
url = {https://proceedings.mlr.press/v172/dahan22a.html},
}
The Multiscale Surface Vision Transformer
@misc{dahan2024multiscalesurfacevisiontransformer,
title={The Multiscale Surface Vision Transformer},
author={Simon Dahan and Logan Z. J. Williams and Daniel Rueckert and Emma C. Robinson},
year={2024},
eprint={2303.11909},
archivePrefix={arXiv},
primaryClass={eess.IV},
url={https://arxiv.org/abs/2303.11909},
}