Mirror of https://github.com/neondatabase/neon.git, synced 2025-12-23 06:09:59 +00:00
## Problem

- We run the large tenant oltp workload with a fixed size (larger than existing customers' workloads). Our customers' workloads grow continuously, and our testing should stay ahead of their production workloads.
- We want to touch (update) all tables in the tenant's database so that we simulate continuous change in layer files, as in a real production workload.
- Our current oltp benchmark uses a mixture of read and write transactions; however, we also want a separate test run with read-only transactions only.

## Summary of changes

- Modify the existing workload to have a separate run with read-only pgbench custom scripts.
- Create a new workload that, in each run:
  - grows all large tables (for the reuse branch in the large oltp tenant's project);
  - updates a percentage of rows in all large tables (to enforce table bloat, auto-vacuum runs, and layer rebuilds in pageservers).

Each run of the new workload increases the logical database size by about 16 GB. We start with 6 runs per day, which gives us about 96-100 GB of growth per day.

---------

Co-authored-by: Alexander Lakhin <alexander.lakhin@neon.tech>
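The growth figures above can be sanity-checked with simple arithmetic (a sketch; the 16 GB per run and 6 runs per day are the figures stated in this description, and the per-day total assumes every run completes):

```python
# Back-of-envelope check of the stated growth rate.
GB_PER_RUN = 16    # approximate logical-size increase per workload run
RUNS_PER_DAY = 6   # initial schedule

daily_growth_gb = GB_PER_RUN * RUNS_PER_DAY
print(daily_growth_gb)  # 96, consistent with the "about 96-100 GB per day" estimate
```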
9 lines · 254 B · SQL
```sql
-- update approximately 10000 rows or 2 MB in the vertices table
-- takes about 1 minute
UPDATE workflows.vertices
SET has_been_visited = NOT has_been_visited
WHERE ctid IN (
    SELECT ctid
    FROM workflows.vertices
    TABLESAMPLE SYSTEM (0.0002)
);
```
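The `TABLESAMPLE SYSTEM (0.0002)` argument is a percentage of pages to sample, so the fraction needed for a target row count scales with the table size. A minimal sketch of that relationship (the 5-billion-row table size is a hypothetical figure chosen only to illustrate how 0.0002 % could correspond to roughly 10,000 rows; SYSTEM sampling picks whole pages, so the actual count varies run to run):

```python
def tablesample_percent(target_rows: int, total_rows: int) -> float:
    """Percentage argument for TABLESAMPLE SYSTEM that touches roughly
    target_rows on average. SYSTEM samples whole heap pages, so the
    real count fluctuates with rows-per-page and page selection."""
    return target_rows / total_rows * 100

# Hypothetical table of 5 billion rows: sampling 0.0002 % of it
# covers roughly 10,000 rows.
print(tablesample_percent(10_000, 5_000_000_000))  # roughly 0.0002
```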