
Otter is now available as a cache option in Gubernator V3 #15

Merged: 5 commits, May 20, 2024

Conversation

thrawn01 (Collaborator)

Purpose

The #7 benchmark results for the WorkerPool and Cache implementations showed a significant performance increase when using Otter (https://maypok86.github.io/otter/) over a standard LRU cache implementation. This PR gives users the option of using either the Mutex or the Otter cache implementation.

Otter performance benchmark on a 32-core machine
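Concurrent benchmarks of this kind are driven with Go's b.RunParallel, which the change list below notes this PR switched to. A minimal sketch, where the atomic counter is a placeholder for a real cache Get/Set (the names here are illustrative, not Gubernator code):

```go
package main

import (
	"fmt"
	"sync/atomic"
	"testing"
)

// hitCount stands in for a concurrent cache lookup; a real benchmark
// body would call cache.Get / cache.Set here instead.
var hitCount atomic.Int64

// BenchmarkCacheHit shows the b.RunParallel pattern: the body runs on
// GOMAXPROCS goroutines at once, so the cache is exercised under the
// same contention it would see in production.
func BenchmarkCacheHit(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			hitCount.Add(1)
		}
	})
}

func main() {
	// testing.Benchmark lets us run the benchmark outside `go test`.
	result := testing.Benchmark(BenchmarkCacheHit)
	fmt.Println(result.N > 0) // true: the benchmark executed iterations
}
```

Compared with a hand-rolled goroutine loop, b.RunParallel lets the testing package control iteration count and timing, so results are comparable across cache implementations.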

Implementation

  • Removed the WorkerPool implementation, as it showed the worst performance.
  • Introduced CacheManager which takes a similar role to the WorkerPool and provides an abstraction point for possible future management of cache types.
  • Renamed LRUCacheCollector to CacheCollector
  • Fixed some linting issues
  • algorithms.go functions now lock a rate limit before modifying the CacheItem. This avoids race conditions created when using a lock free cache like Otter.
  • Moved cache expiration out of the cache and into algorithms.go. This reduces the garbage collection burden by no longer dropping expired cache items from the cache. Now, if an item is expired, it remains in the cache until normal cache sweep clears it, or it's accessed again. If it's accessed again, the existing item is updated and gets a new expiration time.
  • Introduced a rateContext struct which encapsulates all the state that must be passed between several functions in algorithms.go.
  • The major functions in algorithms.go now call themselves recursively to retry when a race condition occurs. Race conditions are possible when using lock-free data structures like Otter; when one is detected, we retry by calling the method again. This is a common pattern, often used by Prometheus metrics.
  • Switched benchmarks to use b.RunParallel() when performing concurrent benchmarks.
  • Added TestHighContentionFromStore() to trigger race conditions in algorithms.go, which also increases code coverage.
  • Removed the direct dependency on Prometheus from Otter and LRUCache. (Fixed a flapping test)
  • Added GUBER_CACHE_PROVIDER, which defaults to otter.

Commits:

  • Fixed race condition in tokenBucket()
  • Fixed race conditions in leakybucket
  • Added Otter cost func, and reduced the memory and time it takes to run the benchmarks
  • Fixed flapping cache eviction test
  • Added LRUMutexCache, restored LRUCache
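Otter is a cost-aware cache: its builder accepts a per-entry cost function so eviction can bound total memory rather than item count. A sketch of the shape such a function might take; CacheItem and the 64-byte overhead constant are illustrative assumptions, not Gubernator's actual types or figures.

```go
package main

import "fmt"

// CacheItem is a hypothetical stand-in for the cached rate-limit entry.
type CacheItem struct {
	Key   string
	Value []byte
}

// cacheItemCost estimates an entry's memory footprint for a cost-aware
// cache such as Otter. The 64-byte fixed-overhead constant is an
// illustrative guess, not a measured figure.
func cacheItemCost(key string, item *CacheItem) uint32 {
	return uint32(len(key) + len(item.Value) + 64)
}

func main() {
	item := &CacheItem{Key: "account_1", Value: make([]byte, 100)}
	fmt.Println(cacheItemCost(item.Key, item)) // 173
}
```

The cache builder would be given this function at construction time, so each insert is weighed against the configured capacity in bytes rather than in entries.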
@thrawn01 thrawn01 added the enhancement New feature or request label May 17, 2024
@thrawn01 thrawn01 added this to the V3 milestone May 17, 2024
@thrawn01 thrawn01 self-assigned this May 17, 2024
@thrawn01 thrawn01 requested a review from Baliedge as a code owner May 17, 2024 21:29
@thrawn01 thrawn01 mentioned this pull request May 20, 2024
@thrawn01 thrawn01 merged commit 097220f into v3.0 May 20, 2024
1 check passed
@thrawn01 thrawn01 mentioned this pull request Jun 12, 2024