
Handling of whole CT Volumes #4

Open
flipmeixner opened this issue Aug 6, 2024 · 0 comments

Hi there,
I am trying to use your model to create captions for CT scans, which works fine for single slices (about 400 slices, i.e. files, per scan in my case). But in order to get one caption per volume, I am trying to pass all files of one scan into the model at once. One volume is about 90 MB, and I am running the model on an A100 80GB GPU. Unfortunately I get a CUDA OOM error telling me the script is trying to allocate an additional 24 GB. Where it gets interesting is how much memory the script / model tries to allocate as the batch size changes:
[Screenshot: table of attempted memory allocation vs. number of files per batch — image not available]
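
For reference, my setup looks roughly like this (simplified sketch; `preprocess` and `model` are placeholders for the repo's actual image transform and captioning model, which are loaded elsewhere):

```python
import glob

import torch
from PIL import Image

# Simplified sketch -- `preprocess` and `model` stand in for the repo's
# actual image transform and captioning model.
slice_paths = sorted(glob.glob("scan_001/*.jpg"))  # ~400 slices for one scan

# Stack every slice of the volume into a single batch: (num_slices, C, H, W)
batch = torch.stack(
    [preprocess(Image.open(p).convert("RGB")) for p in slice_paths]
).to("cuda")

with torch.no_grad():
    caption = model.generate(batch)  # CUDA OOM is raised here for large batches
```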

Especially the extreme jump between inferring 120 files and 130 files leaves me quite clueless. Some additional info:

  • It doesn't matter whether I use DICOM files or JPEG files (the DICOM files are about 10x as big as the JPEGs); memory usage is the same
  • It's not in the table, but inferring with 400 files tries to allocate less memory than with 200 or even 130... (the 24 GB mentioned above)
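
In case it helps with reproducing the numbers above, peak CUDA allocation per number of files could be logged with something like this (a rough sketch; `model.generate` again stands in for the actual captioning call):

```python
import torch

def log_peak_memory(model, batch):
    """Run one forward pass and print the peak CUDA memory it allocated."""
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    with torch.no_grad():
        model.generate(batch)  # placeholder for the actual inference call
    peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
    print(f"{batch.shape[0]} files -> peak CUDA allocation: {peak_gb:.1f} GB")
```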

Any help is appreciated; I am running out of ideas about where the issue could be. Thanks!
