VIP processing through Shanoir
VIP is a web portal for medical imaging applications. It allows access to scientific applications as a service, as well as distributed computing resources available in the biomed virtual organization of the EGI e-infrastructure. VIP is developed and maintained by the CREATIS team of INSA Lyon.
Shanoir allows an authenticated user to process datasets with pipelines on the VIP platform. The resulting data is reintegrated into Shanoir as processed datasets.
Access the VIP user documentation here: https://vip.creatis.insa-lyon.fr/documentation/
Your pipeline (i.e. a script or executable) needs to be packaged into a Docker container. To do so, you need to:
- install Docker on your machine
- write a Dockerfile
- build the Docker image from the Dockerfile and test it
- export the image as an archive
You can follow this step-by-step official tutorial: https://docs.docker.com/guides/workshop/
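The packaging steps above can be sketched as follows; the Dockerfile content, the `my_pipeline.py` script, and all image names are placeholders for your own pipeline, not part of Shanoir or VIP:

```shell
# Write a minimal Dockerfile for a hypothetical Python pipeline script
# (my_pipeline.py and the image name are placeholders).
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
COPY my_pipeline.py /usr/local/bin/my_pipeline.py
ENTRYPOINT ["python", "/usr/local/bin/my_pipeline.py"]
EOF

# Build the image, test it, then export it as an archive:
#   docker build -t my_pipeline:1.0 .
#   docker run --rm my_pipeline:1.0 --help
#   docker save my_pipeline:1.0 -o my_pipeline_1.0.tar
```

The archive produced by `docker save` is what you can hand over to a VIP admin if the image is not published on a public registry.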
In order to be integrated into VIP, your pipeline needs to be described following the Boutiques specification: https://github.com/boutiques/boutiques/tree/master/boutiques/schema
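A minimal Boutiques descriptor might look like the sketch below; the pipeline name, command line, inputs, and container image are illustrative placeholders, not an actual Shanoir/VIP pipeline:

```shell
# Write a minimal Boutiques descriptor (all names are placeholders).
cat > my_pipeline.json <<'EOF'
{
  "name": "my_pipeline",
  "description": "Example pipeline descriptor",
  "tool-version": "1.0",
  "schema-version": "0.5",
  "command-line": "python /usr/local/bin/my_pipeline.py [INFILE] [OUTDIR]",
  "container-image": {"type": "docker", "image": "my_pipeline:1.0"},
  "inputs": [
    {"id": "infile", "name": "Input file", "type": "File", "value-key": "[INFILE]"},
    {"id": "outdir", "name": "Output directory", "type": "String", "value-key": "[OUTDIR]"}
  ],
  "output-files": [
    {"id": "results", "name": "Results", "path-template": "[OUTDIR]"}
  ]
}
EOF

# The descriptor can be checked against the schema with the Boutiques CLI
# (pip install boutiques):
#   bosh validate my_pipeline.json
```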
To access VIP pipelines and start executions from Shanoir, the authenticated user must have previously created an account on the VIP platform.
This account must use the same email address as the Shanoir account, so that the two accounts can be linked between the two applications.
To create an account on VIP, see https://vip.creatis.insa-lyon.fr/sign-up.html.
To deploy your pipeline, its Boutiques descriptor needs to be provided to a VIP admin, who will deploy it.
Contact the VIP team: https://vip.creatis.insa-lyon.fr/index.html#team
Your pipeline's Docker image needs to be available either:
- on the official public Docker registry: https://hub.docker.com/
- on the VIP private registry: your image needs to be provided to a VIP admin
Datasets to process through a VIP pipeline are selected through the Dataset search (Solr) view.
Once the selection is made, click the Run a process button. Users must have admin rights on all the selected datasets.
Users can then select one of the VIP pipelines they have access to, and click the Run this pipeline button to configure the pipeline execution.
- Execution name is filled in by default but can be modified by the user.
- Group by allows users to define the level of dataset file grouping (by dataset, acquisition, or examination) expected by the VIP pipeline.
- Dataset export format allows users to select the dataset export format (NIfTI or DICOM) expected by the VIP pipeline.
Other fields are execution parameters specific to the chosen pipeline.
Fields of type infile correspond to the selected datasets. Users can refine their selection by using regular expressions in these fields.
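As an illustration of such a regular expression filter (the dataset names below are invented, and the exact matching semantics of infile fields are defined by Shanoir):

```shell
# Illustrative only: a pattern like '.*T1.*' keeps datasets whose name
# contains "T1" (invented dataset names; Shanoir defines the actual
# matching behaviour of infile fields).
printf '%s\n' sub01_T1_mprage sub01_T2_flair sub02_T1_mprage | grep -E '.*T1.*'
# → sub01_T1_mprage
#   sub02_T1_mprage
```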
Once the execution has started (after a few seconds), users are redirected to the Details on dataset processing view.
Users can follow the state of the execution through the Jobs view.
In the Details on dataset processing view, they can check the status of the execution, and download some logs from VIP.
At the end of a successful execution, processings and processed datasets are created under each selected dataset.
A processing represents the VIP pipeline execution.
A processed dataset contains the raw results archive of the VIP execution, which can be downloaded through the Download process data button in the Details on dataset view.
Some pipelines (e.g. ofsep_sequence_identification) can trigger specifically developed post-processing.