Empty the cache when torch cache is more than 25% free mem.
comfyanonymous committed Oct 22, 2023
Parent: 8b65f5d · Commit: 8594c8b
Showing 1 changed file with 5 additions and 1 deletion.
comfy/model_management.py (5 additions, 1 deletion)
@@ -339,7 +339,11 @@ def free_memory(memory_required, device, keep_loaded=[]):
 
     if unloaded_model:
         soft_empty_cache()
-
+    else:
+        if vram_state != VRAMState.HIGH_VRAM:
+            mem_free_total, mem_free_torch = get_free_memory(device, torch_free_too=True)
+            if mem_free_torch > mem_free_total * 0.25:
+                soft_empty_cache()
 
 def load_models_gpu(models, memory_required=0):
     global vram_state
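In short: when free_memory() runs but no model ends up being unloaded, the new branch (skipped in HIGH_VRAM mode) checks whether PyTorch's cached-but-unused allocator memory accounts for more than 25% of the total free memory, and if so calls soft_empty_cache() to hand that cache back to the GPU. Below is a minimal standalone sketch of that check, assuming get_free_memory() reports device-level free memory (via torch.cuda.mem_get_info) plus the allocator's reserved-but-inactive bytes, and that soft_empty_cache() boils down to torch.cuda.empty_cache(); the helper names here are illustrative, not the repository's exact implementation.

import torch

def get_free_memory_sketch(device, torch_free_too=False):
    # Sketch only: treat "torch free" as memory the CUDA caching allocator has
    # reserved but is not actively using, and "total free" as the device-level
    # free memory plus that reusable cache.
    stats = torch.cuda.memory_stats(device)
    mem_active = stats['active_bytes.all.current']
    mem_reserved = stats['reserved_bytes.all.current']
    mem_free_cuda, _ = torch.cuda.mem_get_info(device)
    mem_free_torch = mem_reserved - mem_active
    mem_free_total = mem_free_cuda + mem_free_torch
    if torch_free_too:
        return (mem_free_total, mem_free_torch)
    return mem_free_total

def maybe_empty_cache(device):
    # The logic added by this commit: only empty the cache when more than a
    # quarter of the "free" memory is actually sitting in torch's cache.
    mem_free_total, mem_free_torch = get_free_memory_sketch(device, torch_free_too=True)
    if mem_free_torch > mem_free_total * 0.25:
        torch.cuda.empty_cache()  # return cached blocks to the driver

The threshold avoids calling empty_cache() on every free_memory() pass, which would defeat the purpose of the caching allocator, while still reclaiming memory when a large share of it is stuck in torch's cache.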
