Try a different type of flux fp16 fix.
comfyanonymous committed Aug 21, 2024
1 parent 904bf58 commit 015f73d
Showing 1 changed file with 2 additions and 2 deletions.
comfy/ldm/flux/layers.py: 4 changes (2 additions, 2 deletions)
@@ -178,7 +178,7 @@ def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor):
         txt += txt_mod2.gate * self.txt_mlp((1 + txt_mod2.scale) * self.txt_norm2(txt) + txt_mod2.shift)

         if txt.dtype == torch.float16:
-            txt = txt.clip(-65504, 65504)
+            txt = torch.nan_to_num(txt, nan=0.0, posinf=65504, neginf=-65504)

         return img, txt

@@ -233,7 +233,7 @@ def forward(self, x: Tensor, vec: Tensor, pe: Tensor) -> Tensor:
         output = self.linear2(torch.cat((attn, self.mlp_act(mlp)), 2))
         x += mod.gate * output
         if x.dtype == torch.float16:
-            x = x.clip(-65504, 65504)
+            x = torch.nan_to_num(x, nan=0.0, posinf=65504, neginf=-65504)
         return x


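For context (not part of the commit itself): Tensor.clip clamps finite and infinite values into the fp16-representable range but leaves NaN untouched, so a NaN produced by an overflowing op earlier in the block keeps propagating, whereas torch.nan_to_num additionally replaces NaN with 0 and maps +/-inf to the fp16 limits. A minimal standalone sketch of the difference on a toy tensor:

    import torch

    # fp16 overflows above 65504; an overflowing op can also produce inf/NaN.
    t = torch.tensor([1.0, float("inf"), float("-inf"), float("nan")], dtype=torch.float16)

    clipped = t.clip(-65504, 65504)
    # -> [1., 65504., -65504., nan]   (NaN survives the clamp and keeps propagating)

    fixed = torch.nan_to_num(t, nan=0.0, posinf=65504, neginf=-65504)
    # -> [1., 65504., -65504., 0.]    (NaN replaced by 0, inf mapped to the fp16 limits)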
