Awesome work, first of all.
Is there a reason why you combine both rotary and relative positional embeddings in your Attention class?
I would assume one of the two is enough to incorporate the frame positions into the attention model?
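For concreteness, here is a minimal NumPy sketch (not the repo's actual code, and the function names are hypothetical) of what combining the two mechanisms typically looks like: rotary embeddings rotate the query/key vectors before the dot product, while a relative positional embedding adds a learned bias, indexed by offset, to the attention logits.

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    # x: (seq, dim) with even dim; rotate feature pairs by
    # position-dependent angles (RoPE-style, norm-preserving)
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = np.arange(seq)[:, None] * freqs[None, :]  # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

def attention_scores(q, k, rel_bias):
    # mechanism 1: rotate q and k so their dot product depends
    # only on the relative offset between positions
    q_rot, k_rot = rotary_embed(q), rotary_embed(k)
    scores = q_rot @ k_rot.T / np.sqrt(q.shape[-1])
    # mechanism 2: additionally add a relative-position bias to the logits
    return scores + rel_bias

rng = np.random.default_rng(0)
seq, dim = 4, 8
q = rng.normal(size=(seq, dim))
k = rng.normal(size=(seq, dim))

# stand-in for a learned bias table indexed by offset j - i
offsets = np.arange(seq)[None, :] - np.arange(seq)[:, None]
bias_table = rng.normal(size=2 * seq - 1) * 0.1
rel_bias = bias_table[offsets + seq - 1]

scores = attention_scores(q, k, rel_bias)
print(scores.shape)  # (4, 4)
```

In this sketch the two mechanisms inject position information at different points (into the vectors vs. into the logits), so they are not strictly redundant, but whether using both actually helps is exactly the question here.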