I'm encountering an issue with the dimensions of the text encoder output of a fine-tuned CLIP model. My model, fine-tuned from RN50, returns a text embedding of shape (1, 1024) from encode_text, whereas the output of CLIPTextModel in transformers has shape (1, 77, 768). This causes difficulties when integrating with Stable Diffusion, because the encoder_hidden_states argument of the U-Net expects an embedding of shape (batch, sequence_length, 768).
Can someone help me resolve this? Would reshaping or truncating the output of encode_text to (1, 1, 768) potentially resolve the issue?
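For reference, here is a minimal sketch that reproduces the two shapes side by side. It assumes the fine-tuned RN50 checkpoint is loaded through the openai/CLIP package and uses the stock Stable Diffusion v1 text encoder (openai/clip-vit-large-patch14) for comparison; the prompt is just a placeholder.

```python
import torch
import clip  # openai/CLIP package (assumption: the fine-tuned RN50 is loaded this way)
from transformers import CLIPTokenizer, CLIPTextModel

prompt = ["a photo of a cat"]  # placeholder prompt for illustration

with torch.no_grad():
    # OpenAI CLIP RN50: encode_text returns a single pooled, projected
    # embedding per prompt -- one 1024-dim vector, no per-token states.
    rn50, _ = clip.load("RN50")
    pooled = rn50.encode_text(clip.tokenize(prompt))
    print(pooled.shape)  # torch.Size([1, 1024])

    # Stable Diffusion v1 text encoder (CLIP ViT-L/14 via transformers):
    # last_hidden_state keeps one 768-dim vector per token position,
    # which is the (batch, 77, 768) tensor the U-Net's
    # encoder_hidden_states argument expects.
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
    tokens = tokenizer(prompt, padding="max_length", max_length=77, return_tensors="pt")
    hidden = text_encoder(tokens.input_ids).last_hidden_state
    print(hidden.shape)  # torch.Size([1, 77, 768])
```

The core difference is that encode_text produces one pooled embedding per prompt, while CLIPTextModel exposes the per-token hidden states, which is why the outputs differ in both rank and width.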