I am running into OOM errors when using `encode` to compute token embeddings on a large dataset.

The current solution for OOM with the `encode` method (see #522 and #487) only applies to sentence embeddings, not token embeddings.

I have resolved the issue by generalizing the previous solution to also cover token embeddings, via an added `move_to_cpu` flag. Is there an alternative approach that I have missed? If not, and you agree with the changes, feel free to merge #1812.
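For reference, here is a minimal sketch of how the OOM can be worked around from outside `encode` today: chunk the input and move each chunk's token-embedding tensors to CPU before encoding the next chunk. The helper name, model, and chunk/batch sizes below are illustrative only, and this is not the API proposed in #1812; the `move_to_cpu` flag would instead perform the move per batch inside `encode` itself.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")

def encode_token_embeddings(sentences, chunk_size=256, batch_size=32):
    """Encode token embeddings chunk by chunk, moving each chunk's
    tensors to CPU so they do not accumulate in GPU memory."""
    all_embeddings = []
    for start in range(0, len(sentences), chunk_size):
        chunk = sentences[start : start + chunk_size]
        # With output_value="token_embeddings", encode returns one
        # (num_tokens, dim) tensor per sentence, left on the model's device.
        embeddings = model.encode(
            chunk,
            batch_size=batch_size,
            output_value="token_embeddings",
        )
        # Move each tensor off the GPU before encoding the next chunk.
        all_embeddings.extend(emb.detach().cpu() for emb in embeddings)
    return all_embeddings

token_embeddings = encode_token_embeddings(["first sentence", "second sentence"])
```

This keeps GPU memory bounded by a single chunk, but a flag inside `encode` is cleaner because it also frees memory between the internal batches of a single call.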