Files
neon/test_runner/performance/test_latency.py
Heikki Linnakangas 53f438a8a8 Rename "Postgres nodes" in control_plane to endpoints.
We use the term "endpoint" for compute Postgres nodes in the web UI
and user-facing documentation now. Adjust the nomenclature in the code.

This changes the name of the "neon_local pg" command to "neon_local
endpoint". Also adjust names of classes, variables etc. in the python
tests accordingly.

This also changes the directory structure so that endpoints are now
stored in:

    .neon/endpoints/<endpoint id>

instead of:

    .neon/pgdatadirs/tenants/<tenant_id>/<endpoint (node) name>

The tenant ID is no longer part of the path. That means that you
cannot have two endpoints with the same name/ID in two different
tenants anymore. That's consistent with how we treat endpoints in the
real control plane and proxy: the endpoint ID must be globally unique.
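The layout change above can be illustrated with a small sketch. The helper names below are hypothetical, not taken from the neon codebase; they only show how dropping the tenant component from the path makes the endpoint ID the sole, globally unique key:

```python
from pathlib import Path


def endpoint_path(repo_dir: Path, endpoint_id: str) -> Path:
    # New layout: the endpoint ID is globally unique, so there is no
    # tenant component in the path.
    return repo_dir / "endpoints" / endpoint_id


def legacy_endpoint_path(repo_dir: Path, tenant_id: str, endpoint_name: str) -> Path:
    # Old layout: scoped under the tenant, so the same endpoint name
    # could exist in two different tenants.
    return repo_dir / "pgdatadirs" / "tenants" / tenant_id / endpoint_name
```

Under the new scheme, two tenants asking for the same endpoint ID would collide on the same directory, which is exactly the uniqueness the real control plane and proxy enforce.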
2023-04-13 14:34:29 +03:00


import threading

import pytest
from fixtures.compare_fixtures import PgCompare
from fixtures.neon_fixtures import PgProtocol
from performance.test_perf_pgbench import get_scales_matrix
from performance.test_wal_backpressure import record_read_latency


def start_write_workload(pg: PgProtocol, scale: int = 10):
    with pg.connect().cursor() as cur:
        cur.execute(f"create table big as select generate_series(1,{scale*100_000})")


# Measure latency of reads on one table, while lots of writes are happening on another table.
# The fine-grained tracking of last-written LSNs helps to keep the latency low. Without it, the reads would
# often need to wait for the WAL records of the unrelated writes to be processed by the pageserver.
@pytest.mark.parametrize("scale", get_scales_matrix(1))
def test_measure_read_latency_heavy_write_workload(neon_with_baseline: PgCompare, scale: int):
    env = neon_with_baseline
    pg = env.pg

    with pg.connect().cursor() as cur:
        cur.execute(f"create table small as select generate_series(1,{scale*100_000})")

    write_thread = threading.Thread(target=start_write_workload, args=(pg, scale * 100))
    write_thread.start()

    record_read_latency(env, lambda: write_thread.is_alive(), "SELECT count(*) from small")
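The test hands `record_read_latency` a liveness predicate and a query, and the fixture times the query in a loop while the write thread runs. A minimal, self-contained sketch of that pattern (the function below is hypothetical, standing in for the real `record_read_latency` from `test_wal_backpressure`):

```python
import time
from typing import Callable, List


def measure_latencies(should_run: Callable[[], bool],
                      read_fn: Callable[[], object]) -> List[float]:
    """Repeatedly time read_fn while should_run() is true.

    Returns the observed latencies in seconds. In the real test,
    should_run is `write_thread.is_alive` and read_fn executes
    "SELECT count(*) from small" against the compute endpoint.
    """
    latencies: List[float] = []
    while should_run():
        start = time.perf_counter()
        read_fn()
        latencies.append(time.perf_counter() - start)
    return latencies
```

With fine-grained last-written-LSN tracking, each read only waits for the pageserver to catch up on the small table's own pages, so these per-query latencies stay low even while the big table is being written.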