Add cache to http server and multithread with eio #119
This substantially improves the throughput of the server: testing with random opcodes gives around 9000 opcodes/sec.
$ wrk2 -t 8 -c64 -d 30s -s stress.lua -R10000 --latency http://localhost:8000
We can see in these tests that the in-memory cache has little effect, as it is always cold (the test opcodes are uniformly random, and there are far more of them than the size of the cache). For real programs the cache hit rate should be higher. The 'varnishcachehot' and 'varnishcachecold' runs are tested by putting the Varnish HTTP cache in front of the server; in the hot case it has pre-cached all the opcodes, and in the cold case it has just been restarted. We can probably get a nix wrapper to do this?
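For reference, a minimal sketch of putting varnish in front of the server for these runs (the ports and cache size here are assumptions, not the exact test setup):

```shell
# Cold run: start a fresh varnish instance in front of the server on :8000.
# -a is the listen address, -b the backend, -s the storage backend/size.
varnishd -a :8080 -b localhost:8000 -s malloc,256m

# Then point wrk2 at the cache instead of the server directly:
wrk2 -t 8 -c64 -d 30s -s stress.lua -R10000 --latency http://localhost:8080
```

For the hot case, the same benchmark is run a second time against the already-warmed instance.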
Testing with varnish:
Before these changes we had:
I think it's possibly worth separating the server and http client into a separate repo.
I also tested with an LRU cache on the cpp side and it didn't have a positive impact; I suspect an LRU cache is not going to be effective with random input.
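A back-of-envelope argument for why: under uniformly random access, recency carries no information, so any cache of k slots over a keyspace of n opcodes hits with probability roughly k/n regardless of eviction policy. A tiny sketch (the sizes below are illustrative, not the server's actual settings):

```ocaml
(* Steady-state hit rate of any fixed-size cache under uniform random
   access: cache_size / keyspace. Eviction policy (LRU or otherwise)
   cannot improve on this, since every key is equally likely next. *)
let expected_hit_rate ~cache_size ~keyspace =
  float_of_int cache_size /. float_of_int keyspace

let () =
  (* e.g. 1024 slots over a million distinct opcodes: ~0.1% hit rate *)
  Printf.printf "%.4f\n" (expected_hit_rate ~cache_size:1024 ~keyspace:1_000_000)
```

This matches the benchmark behaviour above: the in-memory cache only pays off once real programs reuse opcodes.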
Parallelism strategy
We have one request handler thread (where the non-threadsafe in-memory cache lives) and multiple lifter threads which accept work through `Eio.Executor_pool`. Basic benchmarks show that adding threads to the request handler doesn't improve performance, as we are significantly bound by the lifting side.
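The handler/lifter split above can be sketched as follows. This is a minimal illustration, assuming eio >= 0.15 (`Eio.Executor_pool`); `lift` here is a stand-in for the real lifter, and `domain_count` is illustrative:

```ocaml
(* Sketch: one request-handler domain owning a plain (non-threadsafe)
   Hashtbl cache, with lifting work farmed out to worker domains via
   Eio.Executor_pool. Only the handler's domain ever touches the cache,
   so it needs no locking. *)

let lift opcode =
  (* Placeholder for the expensive lifting step. *)
  "lifted:" ^ opcode

let () =
  Eio_main.run @@ fun env ->
  Eio.Switch.run @@ fun sw ->
  let pool =
    Eio.Executor_pool.create ~sw (Eio.Stdenv.domain_mgr env) ~domain_count:4
  in
  let cache : (string, string) Hashtbl.t = Hashtbl.create 1024 in
  let handle opcode =
    match Hashtbl.find_opt cache opcode with
    | Some result -> result            (* cache hit: no lift needed *)
    | None ->
      (* Submit the lift to a worker domain and wait for the result. *)
      let result =
        Eio.Executor_pool.submit_exn pool ~weight:0.1 (fun () -> lift opcode)
      in
      Hashtbl.replace cache opcode result;
      result
  in
  ignore (handle "0xdeadbeef")
```

The handler fiber blocks on `submit_exn` per request, but since requests are served concurrently as fibers, the pool keeps all lifter domains busy while the cache stays single-domain.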