Handle max_age: nil #122
Thanks for this @brian-kephart 👍
I had a couple of minor notes on the implementation, but this is looking good otherwise!
app/models/solid_cache/entry.rb (outdated)

elsif cache_full
  uncached do
    order(:id)
      .limit(count)
Could you do `.limit(count * 3)` and `.sample(count)` here, like in the other query?
The idea is that when there are concurrent attempts to delete records, having each one choose a random selection from the latest candidates reduces the overlap between them.
Whether that's a useful thing to do in practice is maybe debatable and this might be removed, but we should be consistent until then.
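The oversample-then-sample pattern being discussed can be sketched in plain Ruby (with an ID array standing in for the `order(:id).limit(count * 3).pluck(:id)` query):

```ruby
# Instead of each deleter taking exactly the oldest `count` ids (which
# would be identical across concurrent deleters), each takes a random
# `count`-sized subset of the oldest `count * 3` candidates, so two
# concurrent selections rarely pick all the same rows.
def candidates_for_deletion(ids, count)
  pool = ids.sort.first(count * 3) # stands in for order(:id).limit(count * 3).pluck(:id)
  pool.sample(count)
end

ids = (1..100).to_a.shuffle
first_deleter  = candidates_for_deletion(ids, 10)
second_deleter = candidates_for_deletion(ids, 10)
# Each selection holds 10 distinct ids drawn from the 30 oldest; the two
# selections usually share only a few ids rather than all 10.
```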
Ah, I was assuming the extra records were to compensate for reducing the count with `filter_map`. I restored that behavior and added a comment and local variable to show the intent.
app/models/solid_cache/entry.rb (outdated)

      .filter_map { |id, created_at| id if cache_full || created_at < min_created_at }
      .sample(count)
  end
elsif cache_full
If we move the `cache_full` check before the `max_age` one, we can remove the `cache_full` check from the `filter_map`.
Something like:
if cache_full
  uncached do
    order(:id).limit(count * 3).pluck(:id).sample(count)
  end
elsif max_age
  uncached do
    order(:id)
      .limit(count * 3)
      .pluck(:id, :created_at)
      .filter_map { |id, created_at| id if created_at < min_created_at }
      .sample(count)
  end
else
  []
end
I went ahead and did this, but went further to try and DRY it up and show the intent of the two branches. Let me know if it doesn't pass muster.
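The final commit isn't shown here, but one hypothetical way to DRY the two branches (shared oversample-and-sample shape, differing only in the filter) can be sketched in standalone Ruby, with an array of `[id, created_at]` pairs standing in for the Active Record scope:

```ruby
# Hypothetical sketch, not the actual commit: both branches take the
# oldest candidates, filter them, and sample, so only the filter differs.
def expiry_candidate_ids(entries, count, cache_full:, min_created_at: nil)
  if cache_full
    # When the cache is over capacity, any of the oldest entries may go.
    oldest(entries, count) { true }
  elsif min_created_at
    # Otherwise only entries older than the cutoff qualify.
    oldest(entries, count) { |created_at| created_at < min_created_at }
  else
    []
  end
end

def oldest(entries, count)
  entries
    .sort_by { |id, _created_at| id }     # stands in for order(:id)
    .first(count * 3)                     # oversample to reduce overlap
    .filter_map { |id, created_at| id if yield(created_at) }
    .sample(count)
end
```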
Force-pushed 044f4a7 to d8fe0a2
app/models/solid_cache/entry.rb (outdated)

@@ -7,7 +7,7 @@ class Entry < Record
  # 2. We avoid the overhead of building queries and active record objects
  class << self
    def write(key, value)
      upsert_all_no_query_cache([ { key: key, value: value } ])
I don't think value omission is available in Ruby 3.0, so we'll need to keep specifying the values (here and below).
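For context, hash-literal value omission was introduced in Ruby 3.1, so a gem that still supports Ruby 3.0 has to keep the explicit form:

```ruby
key = "color"
value = "blue"

# Ruby 3.1+ allows omitting a value whose name matches a local variable:
#   { key:, value: }   # => { key: "color", value: "blue" }
# Ruby 3.0 raises a SyntaxError on that form, so the values stay explicit:
explicit = { key: key, value: value }
```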
Apologies, I didn't intend to change those lines. I'll have a stern discussion with my linter settings.
Force-pushed d8fe0a2 to 473f775
Right now, Solid Cache will accept a config with `max_age: nil` but will throw a `NoMethodError` when attempting to expire entries with that config. Such a config should be handled or disallowed. Since `max_entries: nil` is allowed, this PR allows a nil value for `max_age` as well.
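The failure mode and the fix can be sketched with a nil guard (hypothetical helper names; the real code lives in `SolidCache::Entry`):

```ruby
# Assumption: the gem derives the expiry cutoff from max_age (something
# like max_age.seconds.ago), so max_age: nil raises NoMethodError on nil.
# Guarding on nil disables age-based expiry entirely, mirroring how
# max_entries: nil disables the size-based limit.
def expiry_cutoff(max_age, now: Time.now)
  return nil if max_age.nil? # age-based expiry disabled

  now - max_age # max_age in seconds
end

def expired?(created_at, max_age, now: Time.now)
  cutoff = expiry_cutoff(max_age, now: now)
  !cutoff.nil? && created_at < cutoff
end
```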