I get why everything happens in memory, but sometimes you just cannot fit your whole test file into it.
Or maybe you can, but the combined size of the file plus the RAM footprint of the hungriest compressors makes it a bad day for you.
Or you could, but the "clue" you are looking for only shows up below the bandwidth threshold of your non-volatile memory.
So, putting aside that I/O bottlenecks could be detected and flagged (I suppose the same may be true for RAM in some crazy situation?), could we get some kind of knob to control the "disk type"?
I see three possible levels for this:
1. Compression on disk (since all algorithms should be asymmetrically more demanding here), everything else as usual, meaning the resulting archive is copied back into memory before decompression.
2. An intermediate mode where, once a single file/element/part is decompressed, it may as well be deleted on the spot (making the best-case RAM requirement: algorithm footprint + archive size + sector size).
3. Everything happens on disk.
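To make the intermediate mode concrete, here is a minimal sketch of the idea, not tied to any particular benchmark tool: the archive stays on disk, each member is decompressed to disk, checksummed in fixed-size chunks, and deleted immediately, so peak RAM stays near the algorithm footprint plus one I/O buffer. The function name and the use of `zipfile` are illustrative assumptions, not part of the original request.

```python
import hashlib
import os
import tempfile
import zipfile

def verify_archive_on_disk(archive_path):
    """Hypothetical sketch of the intermediate mode: decompress one member
    at a time to disk, hash it with a small read buffer, then delete it
    on the spot before moving to the next member."""
    digests = {}
    with zipfile.ZipFile(archive_path) as zf, \
         tempfile.TemporaryDirectory() as tmp:
        for name in zf.namelist():
            # Decompress a single member straight to disk.
            out_path = zf.extract(name, path=tmp)
            h = hashlib.sha256()
            with open(out_path, "rb") as f:
                # Read in 64 KiB chunks, standing in for "sector size".
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            digests[name] = h.hexdigest()
            # Delete the decompressed member immediately.
            os.remove(out_path)
    return digests
```

Only one decompressed member ever exists at a time, which is what caps the disk (and RAM) high-water mark in this mode.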