Update docs and yaml for baselines
surajpaib committed Mar 25, 2024
1 parent 069e4cf commit ee038a5
Showing 17 changed files with 248 additions and 644 deletions.
46 changes: 45 additions & 1 deletion docs/replication-guide/baselines.md
@@ -1,2 +1,46 @@
# Reproduce Baselines
:hourglass_flowing_sand: Coming soon! :hourglass_flowing_sand:

Reproducing the baselines used in this study is very similar to adapting the FM, since the adaptation essentially runs the same training pipeline with the FM weights as the starting point.

## Randomly initialized

We provide the YAML configuration to train the random init baseline at `experiments/baselines/supervised_training/supervised_random_init.yaml`.

By default, we configure this for Task 1. You can adapt this for Task 2 and Task 3 by searching for `Note: ` comments in the YAML that outline what must be changed.
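
For example, you can list every task-specific switch at once by searching the file (any equivalent search works):

```bash
grep -n "Note:" ./experiments/baselines/supervised_training/supervised_random_init.yaml
```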

You can start training by running this from the root code folder:
```bash
lighter fit --config_file ./experiments/baselines/supervised_training/supervised_random_init.yaml
```

## Transfer learning
We provide the YAML configuration to train the transfer learning baseline at `experiments/baselines/supervised_training/supervised_finetune.yaml`.

This baseline is used only for Task 2 and Task 3, since the random init baseline trained on Task 1 provides the starting weights for the transfer. Follow the `Note: ` comments to switch between Task 2 and Task 3 configurations.

You can start training by running this from the root code folder:
```bash
lighter fit --config_file ./experiments/baselines/supervised_training/supervised_finetune.yaml
```
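
For illustration, the kind of edits these `Note: ` comments point to look roughly like this (a sketch assembled from config fragments shown elsewhere in this commit, not a complete file):

```yaml
system:
  criterion:
    # Task 1 defaults to torch.nn.CrossEntropyLoss; the Note comments say to
    # switch to BCEWithLogitsLoss for the binary Task 2 and Task 3
    _target_: torch.nn.BCEWithLogitsLoss

  datasets:
    train:
      _target_: fmcib.datasets.SSLRadiomicsDataset
      path: null           # Note: change to your dataset path
      label: "malignancy"  # Note: "malignancy" for Task 2, "survival" for Task 3
```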

## Med3D / MedicalNet
Original repo: https://github.com/Tencent/MedicalNet

We have provided a re-implementation of Med3D that fits into our YAML workflows at `experiments/baselines/med3d/finetune.yaml`. Again, the `Note: ` comments help you adapt the configuration for the different tasks.


You can start training by running this from the root code folder:
```bash
lighter fit --config_file ./experiments/baselines/med3d/finetune.yaml
```

## Models Genesis
Original repo: https://github.com/MrGiovanni/ModelsGenesis

We have provided a re-implementation of Models Genesis that fits into our YAML workflows at `experiments/baselines/models_genesis/finetune.yaml`. Again, the `Note: ` comments help you adapt the configuration for the different tasks.


You can start training by running this from the root code folder:
```bash
lighter fit --config_file ./experiments/baselines/models_genesis/finetune.yaml
```
2 changes: 1 addition & 1 deletion docs/replication-guide/fm_adaptation.md
@@ -6,7 +6,7 @@ The FM was adapted by either fine-tuning all its weights or by freezing its weig

We provide the YAML configuration for this at `experiments/adaptation/fmcib_finetune.yaml`.

By default, we configure this for Task 1. You can adapt this for Task 2 and Task 3 by searching for 'Note: ' comments in the YAML that outline what must be changed. Make sure you download the weights for the pre-trained foundation model before attempting to reproduce this training.
By default, we configure this for Task 1. You can adapt this for Task 2 and Task 3 by searching for `Note: ` comments in the YAML that outline what must be changed. Make sure you download the weights for the pre-trained foundation model before attempting to reproduce this training.


You can start training by running this from the root code folder:
6 changes: 4 additions & 2 deletions docs/replication-guide/inference.md
@@ -12,10 +12,10 @@ lighter predict --config_file ./experiments/inference/extract_features.yaml
!!! note
    While the above pipeline will allow you to extract features, we provide a simpler, recommended API for this. Please refer to [Quick Start](../getting-started/quick-start.md) or [Cloud Quick Start](../getting-started/cloud-quick-start.md).
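
    For reference, the recommended route looks roughly like this (a minimal sketch assuming the package's quick-start API `fmcib.run.get_features` and a CSV of image paths with seed-point coordinates; the file names are placeholders, see the Quick Start pages for the authoritative version):

    ```python
    from fmcib.run import get_features

    # CSV with one row per lesion: image_path, coordX, coordY, coordZ
    feature_df = get_features("samples.csv")

    # Persist the extracted features for downstream analysis
    feature_df.to_csv("features.csv", index=False)
    ```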

However, this method might be preferred when features need to be extracted from different models (used as baselines in our study). Follow the `Note:` in the corresponding config file to change model paths.

## Running predictions from our supervised models (Finetuned FM/ Baselines)
However, this method might be preferred when features need to be extracted from different models (used as baselines in our study). Follow the `Note:` comments in the corresponding config file to change model paths and select among the different baselines tested.

## Running predictions from our supervised models (Finetuned FM/ Baselines)

To run predictions from our models (both supervised and self-supervised), we provide YAML files that can be run with the lighter interface. These are found in `experiments/inference`, namely `get_predictions.yaml`.

@@ -37,3 +37,5 @@ lighter predict --config_file ./experiments/inference/get_predictions.yaml
```
As with the previous YAMLs, please follow the `Note:` tags to set appropriate data paths and change relevant parameters. This YAML is to be used if you want to get target predictions from the models.

!!! note
    The predictions can be extracted for different tasks as well as different baselines by following the `Note:` comments.
60 changes: 0 additions & 60 deletions experiments/baselines/med3d/extract_features.yaml

This file was deleted.

@@ -57,34 +57,32 @@ system:
  criterion:
    _target_: torch.nn.CrossEntropyLoss # Note: Change to torch.nn.BCEWithLogitsLoss for Task 2 and Task 3

  optimizers:
  optimizer:
    _target_: torch.optim.Adam
    params: "$@system#model.parameters()"
    # lr: 0.001 # Compute LR dynamically for different batch sizes
    # weight_decay: 0.0
    # momentum: 0.9

  schedulers:
  scheduler:
    scheduler:
      _target_: torch.optim.lr_scheduler.StepLR
      optimizer: "@system#optimizers"
      step_size: 30
    strict: True

  train_metrics:
  metrics:
    train:
      - _target_: torchmetrics.AveragePrecision
        task: binary # Note: Change to `binary` for Task 2 and Task 3 and remove num_classes below
        # num_classes: 8

      - _target_: torchmetrics.AUROC
        task: binary # Note: Change to `binary` for Task 2 and Task 3 and remove num_classes below
        # num_classes: 8
    val: "%#train"
    test: "%#train"

  val_metrics: "%system#train_metrics"
  test_metrics: "%system#train_metrics"

  train_dataset:
  datasets:
    train:
      _target_: fmcib.datasets.SSLRadiomicsDataset
      path: null # Note: Change path
      label: "survival" # Note: Change to "malignancy" for Task 2 and "survival" for Task 3
@@ -118,7 +118,7 @@ system:
          # subtrahend: -1024
          # divisor: 3072

  val_dataset:
    val:
      _target_: fmcib.datasets.SSLRadiomicsDataset
      path: null # Note: Change path
      label: "@system#train_dataset#label"
@@ -141,6 +141,4 @@ system:
        - _target_: monai.transforms.SpatialPad
          spatial_size: [50, 50, 50]


  test_dataset: null
  predict_dataset: null
    test: null

This file was deleted.

This file was deleted.

This file was deleted.
