The features are tokenized sentences, and the targets `y_pred` are normalized rankings. A typical model accepts tokenized sentences as inputs and outputs their order/ranks. Which of the loss function implementations is suitable for this kind of data?

If I understood your setup correctly, all implemented loss functions should work. With a pointwise loss you predict the normalized ranking directly; with a pairwise loss you compare two sentences and learn to order them correctly; with a listwise loss you learn to order the entire list at once.
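As a rough illustration of the three loss families (not the repository's actual implementations — the function names, toy scores, and toy targets below are made up for the example), here is a minimal pure-Python sketch:

```python
import math

def pointwise_loss(scores, targets):
    # Pointwise: regress each predicted score onto its normalized rank
    # target independently (mean squared error).
    return sum((s - t) ** 2 for s, t in zip(scores, targets)) / len(scores)

def pairwise_loss(scores, targets):
    # Pairwise: for every pair where one sentence should rank above the
    # other, apply a RankNet-style logistic loss to the score difference.
    losses = [math.log1p(math.exp(-(scores[i] - scores[j])))
              for i in range(len(scores))
              for j in range(len(scores))
              if targets[i] > targets[j]]
    return sum(losses) / len(losses)

def listwise_loss(scores, targets):
    # Listwise: compare the whole list at once -- cross-entropy between
    # the softmax of the targets and the softmax of the scores
    # (ListNet-style).
    def softmax(xs):
        exps = [math.exp(x) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]
    return -sum(pt * math.log(pp)
                for pt, pp in zip(softmax(targets), softmax(scores)))

# Toy example: three sentences; targets are normalized ranks in [0, 1].
scores = [0.9, 0.1, 0.5]    # model outputs, one score per sentence
targets = [1.0, 0.0, 0.5]   # normalized ranking (1.0 = ranked first)
```

All three take the same `(scores, targets)` pair, so in this setup the choice is mainly about how you want the model to learn the ordering, not about the data format.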