First pass and correct T2w segmentations #15
Comments
Thanks, I will do that!
From: Julien Cohen-Adad
@Kaonashi22 While waiting for the vertebral labeling (#9), I suggest to:
* run the pipeline on all subjects
* visually QC the segmentations of the T2w images
* manually correct the segmentations when needed, using https://github.com/spinalcordtoolbox/manual-correction. Manual correction should only be done for the levels that will be used for morphometric calculations (C2-T5, if I remember correctly)
* save the manually corrected segmentations under the derivatives/labels folder inside the source database. Alternatively, we could upload the manual corrections as part of a release on this repository (to track the modifications, to make sure they are not lost, and to facilitate collaboration -- e.g., for me to try running the pipeline with your manual corrections).
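For reference, the expected location of a corrected file can be sketched as a small helper. Note that the `anat` subfolder and the `_seg-manual` suffix are assumptions based on the conventions used by spinalcordtoolbox/manual-correction; they are not spelled out in this thread.

```python
from pathlib import Path

def manual_seg_path(database_root, subject, contrast="T2w"):
    """Expected location of a manually corrected segmentation under
    derivatives/labels. The 'anat' subfolder and '_seg-manual' suffix
    are assumed from manual-correction conventions (not confirmed here)."""
    return (Path(database_root) / "derivatives" / "labels" / subject
            / "anat" / f"{subject}_{contrast}_seg-manual.nii.gz")

print(manual_seg_path("/data/spine-park", "sub-BB277"))
# → /data/spine-park/derivatives/labels/sub-BB277/anat/sub-BB277_T2w_seg-manual.nii.gz
```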
@Kaonashi22 let me know when you have corrected 2-3 segmentations. At that point, you can send me the derivatives folder, and I can modify the processing script so that it accounts for these manual corrections. That way we can work in parallel (i.e., you correcting the segmentations, and me updating the analysis script).
Thanks, I will send you the derivatives folder soon.
Overall, the segmentations of the T2w images are accurate. Sometimes two or three voxels are missing on some slices; how precise should the segmentation be? Is it worth correcting these masks manually?
I attached the derivatives folder with the manually corrected masks. After running the pipeline on all subjects, the script exited with errors at different steps after the T2w segmentation; here are some log files.
It depends how precise you want the results to be. For example, if a slice is missing 3 pixels and you average the CSA across, let's say, 20 slices, then assuming a CSA of 80 mm² at an in-plane resolution of 0.8×0.8 mm (0.64 mm² per pixel), 3 pixels correspond to 1.92 mm². That represents 2.4% of the CSA computed on a single slice, or 0.12% of the CSA averaged over 20 slices (so quite negligible, I would say).
It also depends how important the lowest slice is for your analysis (e.g., are you including it? If not, it doesn't matter).
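The back-of-the-envelope estimate above can be reproduced in a few lines (all values taken from the comment: 0.8 mm in-plane resolution, 80 mm² CSA, 3 missing pixels, 20 slices):

```python
# Reproduce the error estimate from the comment above.
pixel_area = 0.8 * 0.8                 # 0.8 mm in-plane resolution -> 0.64 mm^2 per pixel
missing_area = 3 * pixel_area          # 3 missing pixels -> 1.92 mm^2
csa = 80.0                             # assumed cord CSA on one slice, in mm^2
err_single = missing_area / csa        # relative error on a single slice
err_avg20 = missing_area / (csa * 20)  # relative error when averaging over 20 slices
print(f"{err_single:.1%} on a single slice, {err_avg20:.2%} over 20 slices")
# → 2.4% on a single slice, 0.12% over 20 slices
```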
About the issues:
I've summarized our QC-related discussion in a new issue on the SCT repo (spinalcordtoolbox/spinalcordtoolbox#4423), just so that it doesn't distract from the discussion in this thread.
Thanks @jcohenadad, I'll send you the images by email.
I'm running the script from a mounted drive (where SCT and the current script are saved), but the input and output folders are located on a server. Could that have any influence?
From: Julien Cohen-Adad
I believe it's errors such as this that led us to recommend not performing processing on mounted drives.
The problem is that, in many labs, data are located on a mounted drive and researchers are required to process their data that way. So I'm wondering if there is a workaround for this. I guess using -jobs 1 would be one, but it would increase processing time by a lot.
@Kaonashi22 I've run the script version 78129ca on the data you sent me (incl. those that produced the errors in #15 (comment)) and I did not observe any error.

Terminal output:

julien-macbook:~/temp/Lydia/results_20240405_125118/log $ ls -1
batch_processing_sub-BB277.log
batch_processing_sub-BJ170.log
batch_processing_sub-CG176.log
batch_processing_sub-DEV148Sujet01.log
batch_processing_sub-DEV203Sujet08.log
batch_processing_sub-DEV206Sujet10.log
batch_processing_sub-LC164.log
batch_processing_sub-LM166.log
batch_processing_sub-RC194.log

Therefore, I suspect the issues you've observed were caused by parallel writing on the locked QC file. One way to overcome this is to use the flag -jobs 1. Can you please try it and see if it solves the issue?
Sounds good, thank you.
I'm not sure I see the script version 78129ca. Is it the file posted here: https://github.com/sct-pipeline/spine-park/pull/8/files? Or this commit: https://github.com/sct-pipeline/spine-park/commit/78129ca6e3fe90f39b20af565a5e1918d2f6754e?
The number refers to a git SHA that points to a state of the repository. Here is the link to that version of the repository, which includes the file: https://github.com/sct-pipeline/spine-park/tree/78129ca6e3fe90f39b20af565a5e1918d2f6754e

But instead of manually downloading the file from this repository, you should go to your local repository and run:

git pull

Then you can make sure you are running the proper version by running:

git log --pretty=oneline

which indicates the version (top item in the list):

julien-macbook:~/code/spine-park $ git log --pretty=oneline
78129ca (HEAD -> main, origin/main, origin/HEAD) Added .gitignore <--- THIS ONE
9bfb140 Sort DWI chunks from top to bottom (#18)
8e541e9 (jca/17-manual-corr) Create analysis script (#8)
ca8104e Update README.md
2c078e6 Added doc to convert to BIDS
183e7b9 Refactored zip_and_move_file()
eb9998b Cleanup
5d2a8d0 Convert to function with input arguments
7ab5639 Added printouts
b395dd2 Cleanup, added docstrings
4493681 Create directory inside the zip_and_move_nifti() function
96bb574 Put back .gz extension on output file name
be206ec Fixed duplicated 'sub-' prefix, removed .gz
a332bb5 Added printout for ignored file
85202a6 Added docstrings, added output path
ea22e25 Pushed first prototype that parses subject directory
0d348fc Initial commit
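The same check can also be done programmatically; here is a small sketch that returns the short SHA of the currently checked-out commit (it assumes `git` is installed and on the PATH):

```python
import subprocess

def current_short_sha(repo_path="."):
    """Return the short SHA of HEAD in `repo_path`, e.g. '78129ca'.
    Handy to confirm which version of the repo a script is running from.
    Sketch only: assumes git is installed and repo_path is a git repo."""
    result = subprocess.run(
        ["git", "-C", repo_path, "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Comparing this value against the SHA posted in the thread confirms the local checkout matches.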
Thank you! I'll let you know how it works
I ran the pipeline on all the subjects. The processing time is quite a bit longer (~2h30 for 15 subjects). I split the subjects into groups of 15 to make it smoother. I also had storage space issues, which didn't help... Here are some comments: error.log
Oh dear! In that case, I can try to prioritize the SCT issue so that -jobs 1 can be avoided and the processing time reduced.
Sorry about that. We'll work on a fix that will hopefully solve the underlying issue.
It should not be the case, as per spine-park/batch_processing.sh, lines 78 to 79 (at commit 8e541e9).
Can you please give an example of the full path where the segmentation being overwritten was originally located? I suspect that you did not put the manual segmentation in the right folder (it works on my end, see #17).
Good catch! I've opened an issue: #19
Ah! This is because the name for DTI metrics is generic (eg:
Yes, we should. Issue opened #21
Issue opened on SCT: spinalcordtoolbox/spinalcordtoolbox#4431
The log file says "could the file be damaged?". I'm wondering whether this is a disk issue? You mentioned you had such issues. Possibly caused by #22; I'll fix it.
I answered the question about the segmentation files being overwritten in #17. Sending you the subject images by email.
Note: These two subjects that @jcohenadad has highlighted have different errors:
Given that the GB300 error occurred A) inside
Then I'll rerun the processing in a folder with more space. Though I had the "no storage space left" error in some other subjects, which was more straightforward.
@Kaonashi22 Do you know if any of the filesystems you're working with (e.g. /export02/data/lydiac/, /dagher/dagher11/lydia11/) are "NFS mounts" specifically? You can check this by running:

mount | grep nfs

If so, this might be the key to the QC locking issues: spinalcordtoolbox/spinalcordtoolbox#4423 (comment)
The command returns this:
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
tubal:/home on /home/bic type nfs4 (rw,nodev,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.68.1.181,local_lock=none,addr=132.206.201.20)
nfs.isi.bic.mni.mcgill.ca:/dagher12 on /dagher/dagher12 type nfs4 (rw,nodev,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.68.1.181,local_lock=none,addr=132.216.133.23)
nfs.isi.bic.mni.mcgill.ca:/dagher11 on /dagher/dagher11 type nfs4 (rw,nodev,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.68.1.181,local_lock=none,addr=132.216.133.5)
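For scripting, the same check can be done programmatically. Here is a rough Linux-only sketch that parses /proc/mounts (an illustration only; it ignores octal-escaped mount paths and bind-mount subtleties):

```python
def is_on_nfs(path):
    """Rough Linux-only check: find the longest mount point in /proc/mounts
    that contains `path`, and report whether its filesystem type is NFS.
    Sketch only -- ignores escaped mount paths and bind-mount subtleties."""
    import os
    path = os.path.realpath(path)
    best_mnt, best_type = "", ""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _, mnt, fstype = line.split()[:3]
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) >= len(best_mnt):  # keep the most specific mount
                    best_mnt, best_type = mnt, fstype
    # 'nfsd' (the server-side pseudo-fs seen above) is not a client NFS mount
    return best_type.startswith("nfs") and best_type != "nfsd"

print(is_on_nfs("/tmp"))
```

On the setup shown above, a path under /dagher/dagher11 would report as NFS (type nfs4).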
Aha! I think that explains it then! (/dagher/dagher11 -- where the data lives and where we perform locking for the QC index file -- is an NFS mount.)
I'll take a look at trying portalocker's proposed fix for handling NFS mounted drives. :)
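For context on why NFS matters here: flock()/fcntl-style advisory locks are what tend to misbehave on NFS, whereas exclusive file creation (O_CREAT | O_EXCL) is atomic on modern NFS. The sketch below illustrates that lockfile idea; it is an illustration only, not SCT's or portalocker's actual code:

```python
import os
import time

def acquire_lock(lockfile, timeout=30.0, poll=0.1):
    """Acquire a lock by exclusively creating `lockfile`.
    O_CREAT | O_EXCL creation is atomic on modern NFS, unlike flock(),
    which is roughly the strategy NFS-tolerant lock fixes rely on.
    Sketch only; stale locks from crashed processes are not handled."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, str(os.getpid()).encode())  # record the owner PID
            os.close(fd)
            return
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"could not acquire {lockfile}")
            time.sleep(poll)

def release_lock(lockfile):
    os.remove(lockfile)
```

With a scheme like this, concurrent workers writing the QC index would serialize on the lockfile instead of corrupting the file.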
Sounds good, thanks!
All 4 subjects ran without error, so I suspect the issue was related to disk space and/or the NFS portalocker issue.

Terminal output:

Processing 4 subjects in parallel. (Worker processes used: 4).
Started at 11h44m21s: sub-ER240. See log file /Users/julien/temp/Lydia/results_20240411_114414/log/batch_processing_sub-ER240.log
Started at 11h44m21s: sub-GB300. See log file /Users/julien/temp/Lydia/results_20240411_114414/log/batch_processing_sub-GB300.log
Started at 11h44m21s: sub-GE200. See log file /Users/julien/temp/Lydia/results_20240411_114414/log/batch_processing_sub-GE200.log
Started at 11h44m21s: sub-LD214. See log file /Users/julien/temp/Lydia/results_20240411_114414/log/batch_processing_sub-LD214.log
Hooray! your batch completed successfully :-)
Started: 2024-04-11 11h44m21s | Ended: 12h57m38s | Duration: 01h13m16s
Thanks for this trial.
I'll make sure I have enough storage space for the next run.