🐛 Bug
I realise no one is maintaining this anymore, but this is for anyone who might come across a similar issue, which was hard to debug.

With the default binarized dataset format in `fairseq-preprocess` (`mmap`), it is possible to hit integer overflow errors when preprocessing large datasets. The key snippet of code is in `fairseq/data/indexed_dataset.py`: for some reason, when using multiple workers, some of the values in `sizes` can end up as `np.int32` rather than Python `int`. I have not worked out why this happens, but for a large enough dataset it leads to integer overflow, because `address` also becomes `np.int32` rather than `int`.
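For context, the offset computation in `indexed_dataset.py` looks roughly like the following (a paraphrased sketch, not an exact copy of the fairseq source):

```python
import numpy as np

def _get_pointers(sizes, dtype=np.int64):
    """Compute the byte offset of each sequence in the binarized data file."""
    dtype_size = dtype().itemsize
    address = 0
    pointers = []
    for size in sizes:
        pointers.append(address)
        # If `size` is an np.int32, the product is also np.int32, and `address`
        # silently becomes np.int32 after the first iteration. Once the running
        # offset exceeds 2**31 - 1 bytes, it wraps around and goes negative.
        address += size * dtype_size
    return pointers
```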
The fix is simply to cast the increment to a Python `int`:

`address += int(size * dtype_size)`
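A minimal, self-contained illustration of the failure mode (the sizes and counts below are made up for demonstration and are not from a real dataset):

```python
import numpy as np

dtype_size = 8  # bytes per stored token id, e.g. np.int64

# Hypothetical sizes list whose entries are np.int32 (as can happen with
# multi-worker preprocessing) and whose total byte size exceeds 2**31.
sizes = [np.int32(100_000)] * 3_000  # final offset = 2999 * 800_000 = 2_399_200_000

def get_pointers(sizes, cast_to_int):
    address = 0
    pointers = []
    for size in sizes:
        pointers.append(address)
        step = size * dtype_size                      # np.int32 * int -> np.int32
        address += int(step) if cast_to_int else step
    return pointers

buggy = get_pointers(sizes, cast_to_int=False)
fixed = get_pointers(sizes, cast_to_int=True)

# Newer NumPy versions may also emit an overflow RuntimeWarning for the buggy case.
print(type(buggy[-1]), buggy[-1])  # np.int32, negative (wrapped past 2**31 - 1)
print(type(fixed[-1]), fixed[-1])  # Python int, 2399200000
```

Casting the increment to a Python `int` keeps `address` an arbitrary-precision integer, so the offsets no longer wrap no matter how large the dataset gets.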