## Problem

We don't have a well-documented, periodic benchmark for a TPC-C-like OLTP workload.

## Summary of changes

# Benchbase TPC-C-like Performance Results

Runs TPC-C-like benchmarks on Neon databases using [Benchbase](https://github.com/cmu-db/benchbase). Docker images are built [here](https://github.com/neondatabase-labs/benchbase-docker-images).

We run the benchmarks at different scale factors, aligned with the different compute sizes we offer to customers. For each scale factor, we determine a max rate (see throughput in the warmup phase) and then run the benchmark at a target rate of approx. 70 % of that max rate (a worked example follows at the end of this description).

We use different warehouse counts, which determine the working set size; it is tuned to the LFC size of the respective pricing tier. With this setup we usually get LFC hit rates above 70 % and good, consistent (non-flaky) latencies.

## Expected performance as of first testing this

| Tier       | CU     | Warehouses    | Terminals | Max TPS | LFC size | Working set size | LFC hit rate | Median latency | p95 latency |
|------------|--------|---------------|-----------|---------|----------|------------------|--------------|----------------|-------------|
| free       | 0.25-2 | 50 - 5 GB     | 150       | 800     | 5 GB     | 6.3 GB           | 95 %         | 170 ms         | 600 ms      |
| serverless | 2-8    | 500 - 50 GB   | 230       | 2000    | 26 GB    | ?? GB            | 91 %         | 50 ms          | 200 ms      |
| business   | 2-16   | 1000 - 100 GB | 330       | 2900    | 51 GB    | 50 GB            | 72 %         | 40 ms          | 180 ms      |

Each run
- first loads the database (not shown in the dashboard),
- then runs a warmup phase for 20 minutes at an unlimited target rate (max rate) to warm up the database and the LFC (highest throughput, but flaky latencies). The warmup phase can be used to determine the max rate and adjust it in the GitHub workflow in case Neon gets faster in the future,
- then runs the benchmark at a target rate of approx. 70 % of the max rate for 1 hour (expecting consistent latencies and throughput).

## Important notes on implementation

- We want to eventually publish the process for reproducing these benchmarks, so we want to keep the dependencies needed to run the benchmark to a minimum. The only things needed are
  - Docker,
  - the Benchbase Docker images referenced above,
  - Python >= 3.9 to run some config generation steps and create diagrams.
- To keep the dependency chain really small we deliberately do NOT reuse our Python fixture test infrastructure, so please don't add a review comment "should reuse fixture xy".
- We also upload all generator Python scripts, generated bash shell scripts and configs, as well as raw results, to an S3 bucket that we intend to publish once this benchmark is reviewed and approved.
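
For illustration, here is a minimal Python sketch of how the ~70 % target rate for the 1-hour measured run could be derived from the max TPS observed during the warmup phase. This is not the actual generator script from this PR; the `TARGET_RATE_FRACTION` constant, the `target_rate` helper, and the script structure are assumptions, while the tier, warehouse, terminal, and max-TPS values come from the table above.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: derive per-tier target rates as ~70 % of the warmup max rate."""

TARGET_RATE_FRACTION = 0.7  # measured run targets approx. 70 % of the warmup max rate

# (tier name, warehouses, terminals, max TPS observed in the 20-minute warmup phase),
# values taken from the "Expected performance" table above
TIERS = [
    ("free",       50,   150,  800),
    ("serverless", 500,  230,  2000),
    ("business",   1000, 330,  2900),
]


def target_rate(max_tps: int, fraction: float = TARGET_RATE_FRACTION) -> int:
    """Target rate for the 1-hour measured run, rounded down to whole TPS."""
    return int(max_tps * fraction)


if __name__ == "__main__":
    for name, warehouses, terminals, max_tps in TIERS:
        print(
            f"{name}: {warehouses} warehouses, {terminals} terminals, "
            f"warmup max {max_tps} TPS -> target rate {target_rate(max_tps)} TPS"
        )
```

If Neon gets faster and the warmup phase shows a higher max rate, the new max TPS is fed back into this calculation and the target rate in the GitHub workflow is adjusted accordingly.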