Maybe add CLI arguments (index and parallel_count) so that multiple instances can run in parallel.
This could be achieved by having each instance handle only the IPFS hashes that map to its index after running them through a suitable splitting algorithm.
The algorithm should distribute the IPFS hashes across buckets with equal probability; in its simplest form this could be implemented by converting the hash into a number and taking it modulo parallel_count.
However, one must take into account that some byte patterns in the hash may be more likely than others (especially if a common prefix is used). It might therefore make sense to hash the value again beforehand; the new hashing algorithm must of course still allow for an even split.
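A minimal sketch of the idea, assuming a Python implementation: the `--index`/`--parallel-count` flag names, the SHA-256 re-hash, and the sample CIDs are illustrative choices, not the project's actual API.

```python
import argparse
import hashlib


def bucket_of(ipfs_hash: str, parallel_count: int) -> int:
    """Map an IPFS hash to a bucket in [0, parallel_count).

    Re-hashing with SHA-256 first spreads the input uniformly, so a
    common prefix in the CIDs (e.g. "Qm") does not bias the modulo.
    """
    digest = hashlib.sha256(ipfs_hash.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % parallel_count


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--index", type=int, required=True,
                        help="0-based index of this instance")
    parser.add_argument("--parallel-count", type=int, required=True,
                        help="total number of instances running in parallel")
    args = parser.parse_args()

    # Each instance filters the shared stream of hashes down to its own bucket.
    # These are just well-known example CIDs for demonstration.
    sample_hashes = [
        "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG",
        "QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR",
    ]
    for h in sample_hashes:
        if bucket_of(h, args.parallel_count) == args.index:
            print(f"instance {args.index} handles {h}")
```

Since every instance applies the same deterministic function, no coordination is needed: each hash lands in exactly one bucket, and the buckets are roughly equal in size regardless of any shared prefix.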