Merge #36
36: Use hash table instead of eval-ing methods r=charleskawczynski a=charleskawczynski

This seems to really improve performance! 🚀

Co-authored-by: Charles Kawczynski <kawczynski.charles@gmail.com>
bors[bot] and charleskawczynski authored Jul 2, 2021
2 parents d04fc5a + 18f5eb7 commit b34f5ae
Showing 8 changed files with 112 additions and 78 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "PokerHandEvaluator"
uuid = "18ed25b1-892a-4a3b-b8fc-1036dc9a6a89"
authors = ["Charles Kawczynski <kawczynski.charles@gmail.com>"]
version = "0.2.3"
version = "0.2.4"

[deps]
Combinatorics = "861a8166-3701-5b0c-9a16-15d98fcdc6aa"
40 changes: 36 additions & 4 deletions README.md
@@ -64,14 +64,46 @@ allcards = all_cards(fhe[winner_id]) # = (J♠, T♣, J♡, J♣, 2♣, 3♢, 5

## Performance

Here's a code snippet to measure performance:

!!! note
    This snippet needs additional packages (StatsBase.jl, BenchmarkTools.jl, and Combinatorics.jl) that are not shipped with PokerHandEvaluator.jl.

```julia
using PokerHandEvaluator
phe_dir = dirname(dirname(pathof(PokerHandEvaluator)));
include(joinpath(phe_dir, "perf.jl"))
```

Running this gives:

```julia
julia> using PokerHandEvaluator

julia> phe_dir = dirname(dirname(pathof(PokerHandEvaluator)));

julia> include(joinpath(phe_dir, "perf.jl")) # compile first
Δt_per_hand_eval = 1.4598465e-5

julia> include(joinpath(phe_dir, "perf.jl"))
Δt_per_hand_eval = 1.082814e-6
Δt_per_evaluate5 = 2.0215967156093207e-8
*******5-card hand evaluation benchmark*******
BenchmarkTools.Trial: 10000 samples with 195 evaluations.
Range (min … max):  487.949 ns …   6.095 μs  ┊ GC (min … max): 0.00% … 82.90%
Time (median): 509.082 ns ┊ GC (median): 0.00%
Time (mean ± σ): 549.924 ns ± 194.761 ns ┊ GC (mean ± σ): 1.47% ± 4.24%

▂▆█▄▂▃▂ ▁▁ ▁
██████████████▇▇▇▇███████▆▇▆▇▇▆▆▅▆▆▆▇▇▆▆▆▆▆▅▆▆▆▅▅▅▅▄▅▄▅▃▅▃▅▃▃ █
488 ns Histogram: log(frequency) by time 110 μs <

Memory estimate: 608 bytes, allocs estimate: 8.
*******7-card hand evaluation benchmark*******
BenchmarkTools.Trial: 10000 samples with 15 evaluations.
Range (min … max):  932.067 ns … 57.009 μs  ┊ GC (min … max): 0.00% … 97.53%
Time (median): 1.042 μs ┊ GC (median): 0.00%
Time (mean ± σ): 1.111 μs ± 633.655 ns ┊ GC (mean ± σ): 0.50% ± 0.98%

▅▇█▇▆▅▄▃▁ ▁ ▂
▇█████████▇▆▆▅▅▆▅▃▃▄▁▅▅▃▄▄▁▁▃▁▃▆████▇▇█▇▆▆▅▆▄▅▅▃▄▃▅▅▅▅▅▆▆▅▅▅▅ █
932 ns Histogram: log(frequency) by time 2.69 μs <

Memory estimate: 640 bytes, allocs estimate: 10.
```
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -4,3 +4,4 @@ Combinatorics = "861a8166-3701-5b0c-9a16-15d98fcdc6aa"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
PlayingCards = "ecfe714a-bcc2-4d11-ad00-25525ff8f984"
PokerHandEvaluator = "18ed25b1-892a-4a3b-b8fc-1036dc9a6a89"
StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
10 changes: 4 additions & 6 deletions docs/src/implementation.md
@@ -4,7 +4,7 @@
CurrentModule = PokerHandEvaluator
```

PokerHandEvaluator.jl's approach follows [Cactus Kev](http://suffe.cool/poker/evaluator.html), however, our implementation, described below, is different.
PokerHandEvaluator.jl's approach follows [Cactus Kev](http://suffe.cool/poker/evaluator.html). Here is a brief summary:

There are `combinations(52,5)`, or 2,598,960, unique 5-card hands. However, many of these hands have the exact same `rank` (e.g., (A♡,A♢,K♣,K♠,3♢) and (A♡,A♢,K♣,K♠,3♡)). There are only 7462 unique _hand ranks_:
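The counts quoted above can be double-checked in base Julia. This is a standalone sanity-check sketch (not part of the package); the per-category sizes come from the row ranges annotated later in `src/evaluate5.jl`:

```julia
# Number of distinct 5-card hands from a 52-card deck.
n_hands = binomial(52, 5)
# Distinct hand ranks per category (straight flush, quads, full house,
# flush, straight, trips, two pair, pair, high card):
category_sizes = (10, 156, 156, 1277, 10, 858, 858, 2860, 1277)
n_ranks = sum(category_sizes)
println((n_hands, n_ranks))   # (2598960, 7462)
```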

@@ -20,18 +20,16 @@ const primes = (41,2,3,5,7,11,13,17,19,23,29,31,37)
prime(card::Card) = primes[rank(card)]
```

The product of the primes for a hand is (1) unique to its multiset of ranks and (2) order-agnostic (since multiplication is commutative). This mapping can be implemented in various ways, for example via lookup tables, binary search, etc. PokerHandEvaluator.jl simply loops over the combinations of hands (using [Combinatorics.jl](https://github.com/JuliaMath/Combinatorics.jl)) and `eval`s the methods (by dispatching on types `::Val{prod(prime.(cards))}`) to return the rank directly.
The product of the primes for a hand is (1) unique to its multiset of ranks and (2) order-agnostic (since multiplication is commutative). This mapping can be implemented in various ways, for example via lookup tables, binary search, etc. PokerHandEvaluator.jl leverages [Combinatorics.jl](https://github.com/JuliaMath/Combinatorics.jl) to generate constant dictionaries for the lookup over the combinations of hands.
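As a small standalone illustration of why the prime product works as a key (the `primes` tuple mirrors the constant above; representing ranks as integer indices 1–13 is an assumption made for this sketch):

```julia
# The prime product is the same for any ordering of the same ranks, and
# distinct rank multisets give distinct products (unique factorization).
const primes = (41,2,3,5,7,11,13,17,19,23,29,31,37)
key(ranks) = prod(primes[r] for r in ranks)
# Same ranks, different order -> same key:
println(key((1, 1, 13, 13, 3)) == key((13, 3, 1, 13, 1)))   # true
# Different ranks -> different key:
println(key((1, 1, 13, 13, 3)) == key((1, 1, 13, 13, 4)))   # false
```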

Finally, PokerHandEvaluator.jl checks the cards' `suit`s to disambiguate flush hands from off-suited hands:

```julia
function evaluate5(t::NTuple{N,Card}) where {N}
    if suit(t[1]) == suit(t[2]) == suit(t[3]) == suit(t[4]) == suit(t[5])
        evaluate5_flush(Val(prod(prime.(t))))
        hash_table_suited[prod(prime.(t))]
    else
        evaluate5_offsuit(Val(prod(prime.(t))))
        hash_table_offsuit[prod(prime.(t))]
    end
end
```
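The table lookup itself can be mimicked with a tiny standalone dictionary. This is a hypothetical mini table for illustration only; the real `hash_table_suited`/`hash_table_offsuit` cover all 7462 ranks:

```julia
# Minimal sketch of the lookup idea: map a prime product to a hand rank.
const mini_table = Dict{Int,Int}(2*3*5*7*11 => 1)   # one hypothetical entry
lookup(key::Int) = mini_table[key]
# Any ordering of the same "cards" hits the same entry:
println(lookup(prod((11, 7, 5, 3, 2))))   # 1
```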

This approach has performance / compile-time implications. See the [performance](./perf.md) documentation for more information.
26 changes: 12 additions & 14 deletions docs/src/perf.md
@@ -4,26 +4,24 @@
CurrentModule = PokerHandEvaluator
```

There is a [`perf.jl`](https://github.com/charleskawczynski/PokerHandEvaluator.jl/blob/main/perf.jl) file at the top level of the repo which roughly estimates PokerHandEvaluator.jl's performance. Here is a snapshot example of using [`BenchmarkTools`](https://github.com/JuliaCI/BenchmarkTools.jl) on PokerHandEvaluator.jl's base evaluation method [`evaluate5`](@ref):
Here is a snapshot example of using [`BenchmarkTools`](https://github.com/JuliaCI/BenchmarkTools.jl) on PokerHandEvaluator.jl's base evaluation method [`evaluate5`](@ref):

```@example perf
using BenchmarkTools, InteractiveUtils
using PlayingCards, PokerHandEvaluator
## Introspection

```@example
using InteractiveUtils, PlayingCards, PokerHandEvaluator
@code_typed PokerHandEvaluator.evaluate5((A♡, A♣, A♠, 3♡, 2♢))
```

```@example perf
@btime PokerHandEvaluator.evaluate5($(A♡, A♣, A♠, 3♡, 2♢))
nothing
```
## Benchmark

`eval`ing methods for all unique hands is a bit expensive for the compiler as there are many method definitions. This timing may not be representative of what users should expect, however. Running PokerHandEvaluator.jl's `perf.jl` file shows that performance is around 2 μs:
There is a [`perf.jl`](https://github.com/charleskawczynski/PokerHandEvaluator.jl/blob/main/perf.jl) file at the top level of the repo which estimates PokerHandEvaluator.jl's performance; here's how it can be run:

!!! note
    This `perf.jl` file needs additional packages (StatsBase.jl, BenchmarkTools.jl, and Combinatorics.jl) that are not shipped with PokerHandEvaluator.jl.

```@example
using PokerHandEvaluator
phe_dir = dirname(dirname(pathof(PokerHandEvaluator)))
include(joinpath(phe_dir, "perf.jl")) # compile first
include(joinpath(phe_dir, "perf.jl"))
phe_dir = dirname(dirname(pathof(PokerHandEvaluator)));
@time include(joinpath(phe_dir, "perf.jl"))
```

`perf.jl` is configured to evaluate roughly 4% of all possible hands, but this can easily be adjusted.
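For example, the sample size can be changed by editing the `N_evals` line near the top of `perf.jl` (the values below mirror the comments in that file; this is a config fragment, not runnable on its own):

```julia
# Pick one before `include`-ing perf.jl:
N_evals = 10^5      # ~4% of all 5-card combinations
# N_evals = 10^6    # ~38% of all combinations
# N_evals = 2598960 # all combinations
```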
27 changes: 22 additions & 5 deletions perf.jl
@@ -1,15 +1,14 @@
using PlayingCards
using BenchmarkTools
using Test
using Combinatorics
using PokerHandEvaluator

time_per_eval = true
# N_evals = 1
# N_evals = 2
N_evals = 10^5 # ~4% of all combinations
# N_evals = 10^5 # ~4% of all combinations
# N_evals = 10^6 # ~38% of all combinations
# N_evals = 2598960 # all combinations
N_evals = 2598960 # all combinations

### Only collect hands if N_evals has changed:
(@isdefined N_old) || (N_old = N_evals)
@@ -29,12 +28,30 @@ function main(hands, N_evals, time_per_eval)
@btime benchmark(hands)
elseif time_per_eval
Δt_all_combos = @elapsed benchmark(hands)
Δt_per_hand_eval = Δt_all_combos/N_evals
@show Δt_per_hand_eval
Δt_per_evaluate5 = Δt_all_combos/N_evals
@show Δt_per_evaluate5
return nothing
else
@time benchmark(hands)
end
end

main(hands, N_evals, time_per_eval)

# Practical use-case:
using StatsBase, BenchmarkTools, PlayingCards, PokerHandEvaluator
const cards_buffer5 = Vector{Card}(undef, 5);
println("*******5-card hand evaluation benchmark*******")
bm5 = @benchmark CompactHandEval(Tuple(sample!($(full_deck()), $cards_buffer5; replace=false)))
io = IOBuffer()
show(io, "text/plain", bm5)
println(String(take!(io)))

const cards_buffer7 = Vector{Card}(undef, 7);
println("*******7-card hand evaluation benchmark*******")
bm7 = @benchmark CompactHandEval(Tuple(sample!($(full_deck()), $cards_buffer7; replace=false)))
io = IOBuffer()
show(io, "text/plain", bm7)
println(String(take!(io)))

nothing
80 changes: 34 additions & 46 deletions src/evaluate5.jl
@@ -29,53 +29,41 @@ evaluate5(cards::Card...)::Int = evaluate5(cards)
function evaluate5(t::NTuple{N,Card})::Int where {N}
    @assert N == 5
    if suit(t[1]) == suit(t[2]) == suit(t[3]) == suit(t[4]) == suit(t[5])
        evaluate5_flush(Val(prod(prime.(t))))
        return hash_table_suited[prod(prime.(t))]
    else
        evaluate5_offsuit(Val(prod(prime.(t))))
        return hash_table_offsuit[prod(prime.(t))]
    end
end

for (i,card_ranks) in enumerate(straight_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_flush(::Val{$p})::Int = $i # Rows 1:10 (Straight flush)
end

for (k,card_ranks) in enumerate(quad_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_offsuit(::Val{$p})::Int = 11+$k-1 # Rows 11:166 (4 of a kind)
end

for (k,card_ranks) in enumerate(full_house_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_offsuit(::Val{$p})::Int = 167+$k-1 # Rows 167:322 (full house)
end

for (k,card_ranks) in enumerate(flush_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_flush(::Val{$p})::Int = 323+$k-1 # Rows 323:1599 (flush)
end

for (k,card_ranks) in enumerate(straight_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_offsuit(::Val{$p})::Int = 1600+$k-1 # Rows 1600:1609 (off-suit straight)
end

for (k,card_ranks) in enumerate(trip_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_offsuit(::Val{$p})::Int = 1610+$k-1 # Rows 1610:2467 (trips)
end

for (k,card_ranks) in enumerate(two_pair_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_offsuit(::Val{$p})::Int = 2468+$k-1 # Rows 2468:3325 (two pair)
end

for (k,card_ranks) in enumerate(pair_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_offsuit(::Val{$p})::Int = 3326+$k-1 # Rows 3326:6185 (pair)
end

for (k,card_ranks) in enumerate(high_card_ranks())
    p = prod(prime.(card_ranks))
    @eval evaluate5_offsuit(::Val{$p})::Int = 6186+$k-1 # Rows 6186:7462 (high card)
end
const hash_table_suited = Dict{Int,Int}(hcat(
    map(enumerate(straight_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), i) # Rows 1:10 (Straight flush)
    end...,
    map(enumerate(flush_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 323+i-1) # Rows 323:1599 (flush)
    end...,
))

const hash_table_offsuit = Dict{Int,Int}(hcat(
    map(enumerate(quad_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 11+i-1) # Rows 11:166 (4 of a kind)
    end...,
    map(enumerate(full_house_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 167+i-1) # Rows 167:322 (full house)
    end...,
    map(enumerate(straight_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 1600+i-1) # Rows 1600:1609 (off-suit straight)
    end...,
    map(enumerate(trip_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 1610+i-1) # Rows 1610:2467 (trips)
    end...,
    map(enumerate(two_pair_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 2468+i-1) # Rows 2468:3325 (two pair)
    end...,
    map(enumerate(pair_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 3326+i-1) # Rows 3326:6185 (pair)
    end...,
    map(enumerate(high_card_ranks())) do (i,card_ranks)
        Pair(prod(prime.(card_ranks)), 6186+i-1) # Rows 6186:7462 (high card)
    end...,
))
4 changes: 2 additions & 2 deletions test/runtests.jl
@@ -34,8 +34,8 @@ end
end

@testset "N-methods" begin
    N_offsuit = length(methods(PHE.evaluate5_offsuit))
    N_flush = length(methods(PHE.evaluate5_flush))
    N_offsuit = length(PHE.hash_table_offsuit)
    N_flush = length(PHE.hash_table_suited)
    @test N_offsuit+N_flush == 7462
end


2 comments on commit b34f5ae

@charleskawczynski (Owner)

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/40060

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:

```
git tag -a v0.2.4 -m "<description of version>" b34f5ae02abacc65a7315743d9615279d0405df4
git push origin v0.2.4
```
