I use the same config: epoch=1, work_num=8, batch_size=32.
It works well in p2ch11; RAM usage is stable at about 6 GB.
But when I run `python -m p2ch12.training --balanced`, RAM usage climbs very quickly and soon exceeds the maximum. After that my computer stops responding and I have to restart it.
What happened?
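A quick way to see where the memory goes is to watch the resident set size of the main process plus the DataLoader worker processes while iterating. This is only a diagnostic sketch, not anything from the repo: `psutil` and the `make_train_dataloader()` helper are assumptions, and summing RSS over forked children double-counts shared pages, so treat the numbers as a trend rather than an exact total.

```python
# Sketch: report approximate resident memory of this process and its
# DataLoader workers while pulling batches, to see whether the growth
# tracks the loader. psutil is assumed to be installed.
import os
import psutil

def report_rss(tag):
    proc = psutil.Process(os.getpid())
    rss = proc.memory_info().rss
    # DataLoader workers are child processes of the training process.
    rss += sum(child.memory_info().rss
               for child in proc.children(recursive=True))
    print(f"{tag}: {rss / 2**30:.2f} GiB resident (main + workers, approximate)")

def probe(loader, max_batches=50):
    report_rss("before iteration")
    for i, batch in enumerate(loader):
        if i % 10 == 0:
            report_rss(f"batch {i}")
        if i >= max_batches:
            break
    report_rss("after iteration")

# Hypothetical usage; make_train_dataloader() stands in for however the
# p2ch12 training DataLoader is actually built:
# probe(make_train_dataloader(num_workers=8, batch_size=32))
```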
I have the same issue with the p2ch11 code: during validation, memory usage explodes.
I am also very interested in what causes this, or why the DataLoader sometimes uses GPU memory efficiently and sometimes floods the system RAM beforehand.
Side note: after I wrote my own version of the architecture for practice, memory explodes during both training and validation, and I have to tweak the number of workers and the batch size to get a good run.
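One pattern that can produce this kind of scaling with worker count (an illustrative sketch, not the book's actual `dsets.py`): with `num_workers > 0`, every worker process gets its own copy of the Dataset, so any in-memory cache inside it is multiplied by the number of workers. If the cache holds whole CT volumes and balanced/shuffled sampling keeps hopping between series, each worker's cache keeps refilling with large arrays.

```python
# Illustrative only: a dataset whose per-process volume cache is duplicated
# in every DataLoader worker, so RAM use scales with num_workers.
import functools
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

@functools.lru_cache(maxsize=4)          # per-process cache: one copy per worker
def load_volume(series_uid):
    # Stand-in for loading a full CT from disk (~128 MB here).
    return np.zeros((128, 512, 512), dtype=np.float32)

class ToyCandidateDataset(Dataset):
    def __init__(self, candidates):
        self.candidates = candidates     # list of (series_uid, index) pairs

    def __len__(self):
        return len(self.candidates)

    def __getitem__(self, ndx):
        series_uid, idx = self.candidates[ndx]
        vol = load_volume(series_uid)    # balanced sampling touches many series,
        chunk = vol[:32, :48, :48]       # so each worker's cache keeps refilling
        return torch.from_numpy(chunk.copy()), idx

# Fewer workers (or a smaller in-dataset cache) keeps the total footprint down:
# cands = [(f"uid-{i}", i) for i in range(100)]
# loader = DataLoader(ToyCandidateDataset(cands), batch_size=32, num_workers=2)
```

Whether this is the actual cause in p2ch12 would need to be confirmed against the repo's caching code; the sketch just shows why lowering `num_workers` or the batch size often brings the footprint back under control.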