
Refactor Evict #11

Closed
jdegoes opened this issue Jan 11, 2021 · 2 comments

Comments

jdegoes (Member) commented Jan 11, 2021

  1. Pull `Evict` out of `CachePolicy`.
  2. Simplify it so it cannot look at the current time or at `EntryStats`.
  3. Pass it to the cache constructor (`Cache#make`).

Separately, move "ttl" concerns into an `expirationTime` member of `EntryStats` (an alternative idea for #6).
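To make the proposal concrete, here is a minimal sketch of the shape described above. All names and signatures here (`Evict`, `shouldEvict`, `EntryStats`, the `Cache.make` parameters) are assumptions for illustration, not the actual zio-cache API:

```scala
// Illustrative sketch only; these definitions are assumptions, not zio-cache.

// "ttl" folded into EntryStats as an explicit expiration time (see #6).
final case class EntryStats(hits: Long, expirationTime: Option[Long])

// Evict pulled out of CachePolicy and simplified: a pure predicate on the
// value alone, with no access to the current time or to EntryStats.
trait Evict[-Value] {
  def shouldEvict(value: Value): Boolean
}

trait Cache[Key, Value]

object Cache {
  // Evict is passed directly to the constructor rather than via CachePolicy.
  def make[Key, Value](capacity: Int, evict: Evict[Value]): Cache[Key, Value] =
    new Cache[Key, Value] {}
}

// A trivial policy that never evicts on value grounds.
val evictNone: Evict[Any] = _ => false
```

Because the predicate can no longer observe the clock, it is deterministic for a given value, which is what enables the time-based concerns to live in a separate, statically ordered structure.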

adamgfraser (Contributor) commented:

I think that makes sense. It would also support a more efficient implementation: based on time to live, we would know statically which entry is the next to evict and when to evict it, rather than potentially having to traverse the entire cache to check whether items need to be evicted (assuming they aren't accessed, and evicted at that point, first).
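As a sketch of why this is cheaper (plain Scala collections, not zio-cache code): keeping an index sorted by expiration timestamp makes "the next entry to evict, and when" an O(log n) lookup instead of a full scan.

```scala
import scala.collection.immutable.TreeMap

// Entries keyed by their expiration timestamp (millis). A real cache would
// need Set values to allow several entries expiring at the same instant.
val byExpiration: TreeMap[Long, String] = TreeMap(
  4000L -> "c",
  1000L -> "a",
  2500L -> "b"
)

// The next entry to evict, and when to evict it, is statically the smallest key.
val (nextDeadline, nextKey) = byExpiration.head
```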

hollinwilkins commented:

The cache can remain as flexible as it is (priority-based eviction for clearing space when needed) while also supporting TTL-based expiration efficiently. To accomplish this, I believe a couple of changes would need to be made (in line with this story and #6):

  1. Add an efficient data structure that orders items by when they will expire.
  2. Rework the `Evict` logic so it no longer takes `Instant.now` as one of its parameters.
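A before/after sketch of change (2); both trait definitions below are illustrative assumptions, not the actual zio-cache signatures:

```scala
import java.time.Instant

// Before: Evict receives the current time, so every eviction check is
// time-dependent and the cache must consult the clock per entry.
trait EvictBefore[-Value] {
  def evict(now: Instant, value: Value): Boolean
}

// After: expiration is handled by the TTL-ordered structure (change 1),
// so Evict reduces to a pure, clock-free predicate on the value.
trait EvictAfter[-Value] {
  def evict(value: Value): Boolean
}

// Example policy expressible in the reduced form.
val dropNegatives: EvictAfter[Int] = _ < 0
```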

Overview of Potential Cache Strategy

There are two questions we need to answer for caching, taking into account the user-defined priorities:

  1. What is the next item that will expire based on TTL?
  2. What item should I evict now based on my prioritization because I no longer have capacity?

(1) can be used to implement a ticker in a daemon fiber that evicts expired cache entries.
(2) can be used to free up space when it is needed.
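A minimal, synchronous sketch of the expiry sweep behind (1), assuming a mutable map ordered by deadline. In zio-cache this would run on a daemon fiber that sleeps until the next deadline; the names here are hypothetical:

```scala
import scala.collection.mutable

// Remove and return every entry whose deadline is strictly before `now`.
// Locating the expired prefix is cheap because the map is ordered by deadline.
def evictExpired(byDeadline: mutable.TreeMap[Long, String], now: Long): List[String] = {
  val expired = byDeadline.rangeUntil(now).toList // snapshot before mutating
  expired.foreach { case (deadline, _) => byDeadline.remove(deadline) }
  expired.map { case (_, key) => key }
}

val deadlines = mutable.TreeMap(10L -> "a", 20L -> "b", 30L -> "c")
val evicted   = evictExpired(deadlines, now = 25L)
```

After the sweep, the smallest remaining key tells the ticker exactly how long to sleep before the next wake-up.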

In order to support both operations efficiently, the cache needs to be composed of three data structures:

  1. A `Map[Key, Entry[Value]]` - already exists
  2. A `SortedSet` (perhaps switched to a B-tree or binary tree) of entries ordered by priority - already exists
  3. A `SortedSet` (again, potentially a B-tree or binary tree) of entries ordered by TTL
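The three structures might compose roughly like this (plain immutable collections for illustration; zio-cache would guard the state behind something like a `Ref`, and all names here are assumptions):

```scala
import scala.collection.immutable.{SortedSet, TreeMap}

final case class Entry[Value](value: Value, priority: Int, expiresAt: Long)

final case class CacheState[Key, Value](
  entries: Map[Key, Entry[Value]],       // (1) primary key -> entry lookup
  byPriority: SortedSet[(Int, Key)],     // (2) lowest-priority-first eviction order
  byExpiration: TreeMap[Long, Set[Key]]  // (3) deadline-ordered TTL index (Set for ties)
)

// Inserting an entry keeps all three indexes in sync.
def put[Key, Value](
  state: CacheState[Key, Value],
  key: Key,
  entry: Entry[Value]
): CacheState[Key, Value] =
  CacheState(
    state.entries.updated(key, entry),
    state.byPriority + ((entry.priority, key)),
    state.byExpiration.updated(
      entry.expiresAt,
      state.byExpiration.getOrElse(entry.expiresAt, Set.empty[Key]) + key
    )
  )
```

With this shape, question (1) is answered by `byExpiration.head` and question (2) by `byPriority.head`, both without scanning `entries`.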
