There are two related pieces of code for the position encoding/embedding.
First, when preparing the input tokens, we concatenate a reference template mesh with the image features to serve as the position encoding. You can find the relevant code here.
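This is not code from the repo, just a minimal PyTorch sketch of what that concatenation step might look like. The names and shapes are assumptions: `template_vertices` is taken to be the `(V, 3)` coordinates of the reference template mesh, and `image_features` a single global `(B, D)` feature per image.

```python
import torch

def build_input_tokens(template_vertices: torch.Tensor,
                       image_features: torch.Tensor) -> torch.Tensor:
    """Attach the image feature to every template vertex, so each token
    carries both its 3D template position and the image evidence.
    template_vertices: (V, 3), image_features: (B, D)."""
    B, D = image_features.shape
    V = template_vertices.size(0)
    # Broadcast the template mesh across the batch: (B, V, 3)
    verts = template_vertices.unsqueeze(0).expand(B, V, 3)
    # Repeat the image feature for every vertex token: (B, V, D)
    feats = image_features.unsqueeze(1).expand(B, V, D)
    # Concatenate along the channel dimension: (B, V, 3 + D)
    return torch.cat([verts, feats], dim=-1)
```

The idea is that the template coordinates act as a per-token positional signal, so every vertex token is distinguishable even though the appended image feature is identical across tokens.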
Second, inside the transformer encoder module, we set the position embedding following conventional BERT. The relevant code can be found here.
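Again only a sketch, not the repo's implementation: a BERT-style learned absolute position embedding, i.e. one trainable vector per token index that gets added to the token embeddings. The module name and `max_positions` parameter are hypothetical.

```python
import torch
import torch.nn as nn

class BertStylePositionEmbedding(nn.Module):
    """Learned absolute position embeddings, as in BERT: a trainable
    vector for each token index, added to the input embeddings."""
    def __init__(self, max_positions: int, hidden_size: int):
        super().__init__()
        self.position_embeddings = nn.Embedding(max_positions, hidden_size)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (B, L, H)
        seq_len = token_embeddings.size(1)
        position_ids = torch.arange(seq_len, device=token_embeddings.device)
        # (L, H) broadcasts over the batch dimension
        return token_embeddings + self.position_embeddings(position_ids)
```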
Hello, thanks for your nice work. I'd like to know how you set the position embedding. Do you just follow the BERT setting?