F_MALLOC: Optimize the free() operation
Commit bdaaf60 introduced a side-effect of gradually shifting the fragment "action" towards the exponential, non-optimized hash table buckets (i.e. buckets 2049 ... 2100). Here, the fragments were inserted in a sorted fashion, with the sorted-insert algorithm costing an O(N) iteration on each free operation instead of a simple O(1).

The user-visible effect was that "dr_reload" operations were stalling for 12 minutes (up from 24 seconds!) when working with large rule sets (millions of rules). Interestingly enough, the stalling was not caused by the caching phase -- malloc() -- but rather by the cleanup phase, when clearing the old rules -- free()!

To address this issue:

* we drop the sorted insertion completely for buckets 2049 ... 2100, and simply do a list prepend operation: O(1), as with the other buckets

* we make all allocation requests from these buckets return the next bucket (!!), thus guaranteeing that the requested fragment fits. Examples:
    malloc(18K) -> now you always get a 32K+ frag, but instantly!
    malloc(37K) -> now you always get a 64K+ frag, but instantly!

* this does not make F_MALLOC more wasteful, since the extra frag space gets split anyway into a new fragment, with the two eventually coalescing back together thanks to commit bdaaf60

(cherry picked from commit e6b4de5)
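A minimal sketch of the two changes follows. The names and constants used here (fm_frag, free_hash, get_hash, a 16 KB optimize boundary with 8-byte granularity) are simplified stand-ins for illustration, not the actual F_MALLOC definitions:

/* Sketch only: simplified stand-ins for the F_MALLOC structures and
 * constants (assumed 16 KB optimize boundary, 8-byte granularity). */
#include <stddef.h>

#define F_MALLOC_OPTIMIZE   (1UL << 14)            /* 16 KB boundary (assumed) */
#define FM_FIXED_BUCKETS    2048                   /* buckets 0 .. 2048: fixed sizes */
#define FM_HASH_SIZE        (FM_FIXED_BUCKETS + 53) /* big buckets: 2049 .. 2100 */

struct fm_frag {
    unsigned long   size;
    struct fm_frag *nxt_free;
};

static struct fm_frag *free_hash[FM_HASH_SIZE];

/* sizes up to the optimize boundary map 1:1 to a bucket; beyond it, each
 * power of two gets its own "big" bucket (2049, 2050, ...) */
static unsigned int get_hash(unsigned long size)
{
    unsigned int h;

    if (size <= F_MALLOC_OPTIMIZE)
        return (unsigned int)(size / 8);
    for (h = FM_FIXED_BUCKETS + 1;
         h < FM_HASH_SIZE - 1 &&
         (F_MALLOC_OPTIMIZE << (h - FM_FIXED_BUCKETS)) < size;
         h++)
        ;
    return h;
}

/* free() path: a plain O(1) list prepend for every bucket -- the old sorted
 * insert for buckets 2049 ... 2100 is gone */
void fm_insert_free(struct fm_frag *frag)
{
    unsigned int hash = get_hash(frag->size);

    frag->nxt_free = free_hash[hash];
    free_hash[hash] = frag;
}

/* malloc() lookup path: a request that lands in a big bucket starts searching
 * at the *next* bucket, where every fragment is guaranteed large enough, so
 * the list head can be taken without any walk */
struct fm_frag *fm_find_free(unsigned long size)
{
    unsigned int hash = get_hash(size);

    if (hash > FM_FIXED_BUCKETS)
        hash++;                  /* e.g. an 18K request goes to the 32K+ bucket */

    for (; hash < FM_HASH_SIZE; hash++)
        if (free_hash[hash])
            return free_hash[hash];   /* surplus space is split off by the caller */

    return NULL;                      /* no suitable fragment available */
}

The trade-off is that a big-bucket request may receive a fragment roughly twice the requested size, but, as noted above, the surplus is split off into a new fragment and later coalesced back, so F_MALLOC does not become more wasteful.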