Mirror of https://github.com/neondatabase/neon.git, synced 2025-12-26 07:39:58 +00:00
## Problem

- We run the large tenant OLTP workload with a fixed size (larger than existing customers' workloads). Our customers' workloads are continuously growing, and our testing should stay ahead of their production workloads.
- We want to touch all tables in the tenant's database (updates) so that we simulate continuous change in layer files, as in a real production workload.
- Our current OLTP benchmark uses a mixture of read and write transactions; however, we also want a separate test run with read-only transactions only.

## Summary of changes

- Modify the existing workload to have a separate run with pgbench custom scripts that are read-only.
- Create a new workload that
  - grows all large tables in each run (for the reuse branch in the large OLTP tenant's project);
  - updates a percentage of rows in all large tables in each run (to enforce table bloat, auto-vacuum runs, and layer rebuilds in pageservers).

Each run of the new workload increases the logical database size by about 16 GB. We start with 6 runs per day, which gives us about 96-100 GB of growth per day.

---------

Co-authored-by: Alexander Lakhin <alexander.lakhin@neon.tech>
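As a back-of-the-envelope sanity check on the stated growth rate (a sketch only, not part of the workload code; the per-run size and schedule are taken from the description above):

```python
# Estimate daily logical growth of the new workload.
# Assumptions (from the PR description): each run adds ~16 GB of
# logical data, and the workload is scheduled 6 times per day.
GROWTH_PER_RUN_GB = 16
RUNS_PER_DAY = 6

daily_growth_gb = GROWTH_PER_RUN_GB * RUNS_PER_DAY
print(daily_growth_gb)  # 96, consistent with the stated ~96-100 GB/day
```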
```sql
-- update approximately 2500 rows or 1 MB in the values table
-- takes about 3 minutes
UPDATE workflows.values
SET bool_value = NOT bool_value
WHERE ctid IN (
    SELECT ctid
    FROM workflows.values
    TABLESAMPLE SYSTEM (0.0002)
) AND bool_value IS NOT NULL;
```
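For context on the sampling fraction: `TABLESAMPLE SYSTEM` takes its argument as a percentage of the table's pages, so `SYSTEM (0.0002)` samples roughly 0.0002 % of the table. Working backwards from the script's "approximately 2500 rows" comment gives a rough table size (a hedged estimate only, since SYSTEM samples whole pages rather than individual rows):

```python
# Invert the TABLESAMPLE SYSTEM (0.0002) fraction to estimate table size.
# SYSTEM's argument is a percentage, so 0.0002 means ~2 rows per million.
sample_pct = 0.0002    # percent, as passed to TABLESAMPLE SYSTEM
sampled_rows = 2500    # approximate figure from the script's comment

estimated_table_rows = sampled_rows / (sample_pct / 100.0)
print(f"{estimated_table_rows:.0f}")  # roughly 1.25 billion rows
```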