We want to run some specific pagebench test cases on dedicated hardware to get reproducible results.

run1: 1 client per tenant => characterize throughput with n tenants.
- 500 tenants
- scale 13 (200 MB database)
- 1 hour duration
- ca. 380 GB of layer snapshot files

run2.singleclient: 1 client per tenant => characterize latencies.
run2.manyclient: N clients per tenant => characterize throughput scalability within one tenant.
- 1 tenant with 1 client for latencies
- 1 tenant with 64 clients, because for high connection counts we typically recommend the connection pooler, which defaults to 64 connections (for scalability)
- scale 136 (2048 MB database)
- 20 minutes each
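These runs correspond to pytest parametrizations of the test files in this directory. As a sketch (assuming the pytest setup described in the next section, and that the wrapper script forwards standard pytest flags), you can list the concrete test ids with `--collect-only` and then select a single parametrization with `-k`:

```bash
# Sketch: list the parametrized test ids without running anything,
# then re-run with `-k <substring>` to select one. The file name is the
# one from the full command line below; the flags are standard pytest.
./scripts/pytest --collect-only -q \
  test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py
```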
How to reproduce benchmark results / run these benchmarks interactively.
- Get an EC2 instance with Instance Store. Use the same instance type as was used for the benchmark run.
- Mount the Instance Store => `neon.git/scripts/ps_ec2_setup_instance_store` (see the sketch after this list).
- Use a pytest command line (see the other READMEs further up in the pytest hierarchy).
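For orientation, here is a minimal sketch of the kind of steps such a setup script performs. The authoritative version is `neon.git/scripts/ps_ec2_setup_instance_store`; the device and mount point names below are assumptions:

```bash
# Sketch only: format and mount a single NVMe instance-store device.
# /dev/nvme1n1 and /instance_store are assumed names; check `lsblk` first.
sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /instance_store
sudo mount /dev/nvme1n1 /instance_store
sudo chown "$USER" /instance_store
```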
For tests that take a long time to set up or that consume a lot of storage space,
we use the test suite's repo_dir snapshotting functionality (`from_repo_dir`).
It supports mounting snapshots using overlayfs, which improves iteration time.
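Conceptually, the overlayfs mode mounts the read-only snapshot as the lower layer and sends the test's writes to a scratch upper layer, so the multi-hundred-GB snapshot never has to be copied per test run. A minimal sketch with illustrative paths (the test suite manages the real mounts itself):

```bash
# Sketch: overlay a writable scratch layer on top of a read-only snapshot.
mkdir -p /tmp/repo_dir/{upper,work,merged}
sudo mount -t overlay overlay \
  -o lowerdir=/path/to/snapshot,upperdir=/tmp/repo_dir/upper,workdir=/tmp/repo_dir/work \
  /tmp/repo_dir/merged
```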
Here's a full command line:

```bash
RUST_BACKTRACE=1 NEON_ENV_BUILDER_USE_OVERLAYFS_FOR_SNAPSHOTS=1 DEFAULT_PG_VERSION=15 BUILD_TYPE=release \
./scripts/pytest test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py
```
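In this command, `RUST_BACKTRACE=1` makes the Rust binaries print a backtrace on panic, `NEON_ENV_BUILDER_USE_OVERLAYFS_FOR_SNAPSHOTS=1` enables the overlayfs-based snapshot mounting described above, `DEFAULT_PG_VERSION=15` selects the Postgres version, and `BUILD_TYPE=release` runs against release binaries, which you want for meaningful performance numbers.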