From ca9d8761ffb71e9eb12631c9da04ae58b468847b Mon Sep 17 00:00:00 2001 From: Peter Bendel Date: Mon, 7 Jul 2025 08:44:41 +0200 Subject: [PATCH 01/65] Move some perf benchmarks from hetzner to aws arm github runners (#12393) ## Problem We want to move some benchmarks from hetzner runners to aws graviton runners ## Summary of changes Adjust the runner labels for some workflows. Adjust the pagebench number of clients to match the latency knee at 8 cores of the new instance type Add `--security-opt seccomp=unconfined` to docker run command to bypass IO_URING EPERM error. ## New runners https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#Instances:instanceState=running;search=:github-unit-perf-runner-arm;v=3;$case=tags:true%5C,client:false;$regex=tags:false%5C,client:false;sort=tag:Name ## Important Notes I added the run-benchmarks label to get this tested **before we merge it**. [See](https://github.com/neondatabase/neon/actions/runs/15974141360) I also tested a run of pagebench with the new setup from this branch, see https://github.com/neondatabase/neon/actions/runs/15972523054 - Update: the benchmarking workflow had failures, [see](https://github.com/neondatabase/neon/actions/runs/15974141360/job/45055897591) - changed docker run command to avoid io_uring EPERM error, new run [see](https://github.com/neondatabase/neon/actions/runs/15997965633/job/45125689920?pr=12393) Update: the pagebench test run on the new runner [completed successfully](https://github.com/neondatabase/neon/actions/runs/15972523054/job/45046772556) Update 2025-07-07: the latest runs with instance store ext4 have been successful and resolved the direct I/O issues we have been seeing before in some runs. We only had one perf testcase failing (shard split) that had been flaky before. So I think we can merge this now. 
## Follow up if this is merged and works successfully we must create a separate issue to de-provision the hetzner unit-perf runners defined [here](https://github.com/neondatabase/runner/blob/91a41729af5b168d42e8acab0a7c38fde031113e/ansible/inventory/hosts_metal#L111) --- .github/actionlint.yml | 1 + .github/workflows/build_and_test.yml | 4 ++-- .github/workflows/periodic_pagebench.yml | 4 ++-- .github/workflows/proxy-benchmark.yml | 4 ++-- ...test_pageserver_max_throughput_getpage_at_latest_lsn.py | 7 ++++--- 5 files changed, 11 insertions(+), 9 deletions(-) diff --git a/.github/actionlint.yml b/.github/actionlint.yml index b7e0be761a..3142a36fa0 100644 --- a/.github/actionlint.yml +++ b/.github/actionlint.yml @@ -7,6 +7,7 @@ self-hosted-runner: - small-metal - small-arm64 - unit-perf + - unit-perf-aws-arm - us-east-2 config-variables: - AWS_ECR_REGION diff --git a/.github/workflows/build_and_test.yml b/.github/workflows/build_and_test.yml index 456c7b8c92..0ceaa96fb0 100644 --- a/.github/workflows/build_and_test.yml +++ b/.github/workflows/build_and_test.yml @@ -306,14 +306,14 @@ jobs: statuses: write contents: write pull-requests: write - runs-on: [ self-hosted, unit-perf ] + runs-on: [ self-hosted, unit-perf-aws-arm ] container: image: ${{ needs.build-build-tools-image.outputs.image }}-bookworm credentials: username: ${{ github.actor }} password: ${{ secrets.GITHUB_TOKEN }} # for changed limits, see comments on `options:` earlier in this file - options: --init --shm-size=512mb --ulimit memlock=67108864:67108864 + options: --init --shm-size=512mb --ulimit memlock=67108864:67108864 --ulimit nofile=65536:65536 --security-opt seccomp=unconfined strategy: fail-fast: false matrix: diff --git a/.github/workflows/periodic_pagebench.yml b/.github/workflows/periodic_pagebench.yml index 317db94052..728a6d4956 100644 --- a/.github/workflows/periodic_pagebench.yml +++ b/.github/workflows/periodic_pagebench.yml @@ -1,4 +1,4 @@ -name: Periodic pagebench performance test on 
unit-perf hetzner runner +name: Periodic pagebench performance test on unit-perf-aws-arm runners on: schedule: @@ -40,7 +40,7 @@ jobs: statuses: write contents: write pull-requests: write - runs-on: [ self-hosted, unit-perf ] + runs-on: [ self-hosted, unit-perf-aws-arm ] container: image: ghcr.io/neondatabase/build-tools:pinned-bookworm credentials: diff --git a/.github/workflows/proxy-benchmark.yml b/.github/workflows/proxy-benchmark.yml index 3a98ad4e8e..0ae93ce295 100644 --- a/.github/workflows/proxy-benchmark.yml +++ b/.github/workflows/proxy-benchmark.yml @@ -1,4 +1,4 @@ -name: Periodic proxy performance test on unit-perf hetzner runner +name: Periodic proxy performance test on unit-perf-aws-arm runners on: push: # TODO: remove after testing @@ -32,7 +32,7 @@ jobs: statuses: write contents: write pull-requests: write - runs-on: [self-hosted, unit-perf] + runs-on: [self-hosted, unit-perf-aws-arm] timeout-minutes: 60 # 1h timeout container: image: ghcr.io/neondatabase/build-tools:pinned-bookworm diff --git a/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py b/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py index 41696bf887..68bfa81b25 100644 --- a/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py +++ b/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py @@ -55,9 +55,10 @@ def test_pageserver_characterize_throughput_with_n_tenants( @pytest.mark.parametrize("duration", [20 * 60]) @pytest.mark.parametrize("pgbench_scale", [get_scale_for_db(2048)]) # we use 1 client to characterize latencies, and 64 clients to characterize throughput/scalability -# we use 64 clients because typically for a high number of connections we recommend the connection pooler -# which by default uses 64 connections -@pytest.mark.parametrize("n_clients", [1, 64]) +# we use 8 clients because 
we see a latency knee around 6-8 clients on im4gn.2xlarge instance type, +# which we use for this periodic test - at a cpu utilization of around 70 % - which is considered +# a good utilization for pageserver. +@pytest.mark.parametrize("n_clients", [1, 8]) @pytest.mark.parametrize("n_tenants", [1]) @pytest.mark.timeout(2400) def test_pageserver_characterize_latencies_with_1_client_and_throughput_with_many_clients_one_tenant( From 4b5c75b52f62935f42526954d104467f0828e323 Mon Sep 17 00:00:00 2001 From: Dmitrii Kovalkov <34828390+DimasKovas@users.noreply.github.com> Date: Mon, 7 Jul 2025 11:25:15 +0400 Subject: [PATCH 02/65] docs: revise safekeeper migration rfc (#12432) ## Problem The safekeeper migration code/logic slightly diverges from the initial RFC. This PR aims to address these differences. - Part of https://github.com/neondatabase/neon/issues/12192 ## Summary of changes - Adjust the RFC to reflect that we implemented the safekeeper reconciler with in-memory queue. - Add `sk_set_notified_generation` field to the `timelines` table in the RFC to address the "finish migration atomically" problem. - Describe how we are going to make the timeline migration handler fully retriable with in-memory reconciler queue. - Unify type/field/method names in the code and RFC. - Fix typos --- ...35-safekeeper-dynamic-membership-change.md | 133 +++++++++++------- 1 file changed, 79 insertions(+), 54 deletions(-) diff --git a/docs/rfcs/035-safekeeper-dynamic-membership-change.md b/docs/rfcs/035-safekeeper-dynamic-membership-change.md index 9b320c7285..8619f83ff5 100644 --- a/docs/rfcs/035-safekeeper-dynamic-membership-change.md +++ b/docs/rfcs/035-safekeeper-dynamic-membership-change.md @@ -20,7 +20,7 @@ In our case consensus leader is compute (walproposer), and we don't want to wake up all computes for the change. Neither we want to fully reimplement the leader logic second time outside compute. 
Because of that the proposed algorithm relies for issuing configurations on the external fault tolerant (distributed) strongly -consisent storage with simple API: CAS (compare-and-swap) on the single key. +consistent storage with simple API: CAS (compare-and-swap) on the single key. Properly configured postgres suits this. In the system consensus is implemented at the timeline level, so algorithm below @@ -34,7 +34,7 @@ A configuration is ``` struct Configuration { - generation: Generation, // a number uniquely identifying configuration + generation: SafekeeperGeneration, // a number uniquely identifying configuration sk_set: Vec, // current safekeeper set new_sk_set: Optional>, } @@ -81,11 +81,11 @@ configuration generation in them is less than its current one. Namely, it refuses to vote, to truncate WAL in `handle_elected` and to accept WAL. In response it sends its current configuration generation to let walproposer know. -Safekeeper gets `PUT /v1/tenants/{tenant_id}/timelines/{timeline_id}/configuration` -accepting `Configuration`. Safekeeper switches to the given conf it is higher than its +Safekeeper gets `PUT /v1/tenants/{tenant_id}/timelines/{timeline_id}/membership` +accepting `Configuration`. Safekeeper switches to the given conf if it is higher than its current one and ignores it otherwise. In any case it replies with ``` -struct ConfigurationSwitchResponse { +struct TimelineMembershipSwitchResponse { conf: Configuration, term: Term, last_log_term: Term, @@ -108,7 +108,7 @@ establishes this configuration as its own and moves to voting. It should stop talking to safekeepers not listed in the configuration at this point, though it is not unsafe to continue doing so. -To be elected it must receive votes from both majorites if `new_sk_set` is present. +To be elected it must receive votes from both majorities if `new_sk_set` is present. Similarly, to commit WAL it must receive flush acknowledge from both majorities. 
If walproposer hears from safekeeper configuration higher than his own (i.e. @@ -130,7 +130,7 @@ storage are reachable. 1) Fetch current timeline configuration from the configuration storage. 2) If it is already joint one and `new_set` is different from `desired_set` refuse to change. However, assign join conf to (in memory) var - `join_conf` and proceed to step 4 to finish the ongoing change. + `joint_conf` and proceed to step 4 to finish the ongoing change. 3) Else, create joint `joint_conf: Configuration`: increment current conf number `n` and put `desired_set` to `new_sk_set`. Persist it in the configuration storage by doing CAS on the current generation: change happens only if @@ -161,11 +161,11 @@ storage are reachable. because `pull_timeline` already includes it and plus additionally would be broadcast by compute. More importantly, we may proceed to the next step only when `` on the majority of the new set reached - `sync_position`. Similarly, on the happy path no waiting is not needed because + `sync_position`. Similarly, on the happy path no waiting is needed because `pull_timeline` already includes it. However, we should double check to be safe. For example, timeline could have been created earlier e.g. manually or after try-to-migrate, abort, try-to-migrate-again sequence. -7) Create `new_conf: Configuration` incrementing `join_conf` generation and having new +7) Create `new_conf: Configuration` incrementing `joint_conf` generation and having new safekeeper set as `sk_set` and None `new_sk_set`. Write it to configuration storage under one more CAS. 8) Call `PUT` `configuration` on safekeepers from the new set, @@ -178,12 +178,12 @@ spec of it. Description above focuses on safety. To make the flow practical and live, here a few more considerations. -1) It makes sense to ping new set to ensure it we are migrating to live node(s) before +1) It makes sense to ping new set to ensure we are migrating to live node(s) before step 3. 2) If e.g. 
accidentally wrong new sk set has been specified, before CAS in step `6` is completed it is safe to rollback to the old conf with one more CAS. 3) On step 4 timeline might be already created on members of the new set for various reasons; - the simplest is the procedure restart. There are more complicated scenarious like mentioned + the simplest is the procedure restart. There are more complicated scenarios like mentioned in step 5. Deleting and re-doing `pull_timeline` is generally unsafe without involving generations, so seems simpler to treat existing timeline as success. However, this also has a disadvantage: you might imagine an surpassingly unlikely schedule where condition in @@ -192,7 +192,7 @@ considerations. 4) In the end timeline should be locally deleted on the safekeeper(s) which are in the old set but not in the new one, unless they are unreachable. To be safe this also should be done under generation number (deletion proceeds only if - current configuration is <= than one in request and safekeeper is not memeber of it). + current configuration is <= than one in request and safekeeper is not member of it). 5) If current conf fetched on step 1 is already not joint and members equal to `desired_set`, jump to step 7, using it as `new_conf`. @@ -261,14 +261,14 @@ Timeline (branch) creation in cplane should call storage_controller POST Response should be augmented with `safekeepers_generation` and `safekeepers` fields like described in `/notify-safekeepers` above. Initially (currently) these fields may be absent; in this case cplane chooses safekeepers on its own -like it currently does. The call should be retried until succeeds. +like it currently does. The call should be retried until it succeeds. Timeline deletion and tenant deletion in cplane should call appropriate storage_controller endpoints like it currently does for sharded tenants. The calls should be retried until they succeed. 
-When compute receives safekeepers list from control plane it needs to know the -generation to checked whether it should be updated (note that compute may get +When compute receives safekeeper list from control plane it needs to know the +generation to check whether it should be updated (note that compute may get safekeeper list from either cplane or safekeepers). Currently `neon.safekeepers` GUC is just a comma separates list of `host:port`. Let's prefix it with `g#:` to this end, so it will look like @@ -305,8 +305,8 @@ enum MigrationRequest { ``` `FinishPending` requests to run the procedure to ensure state is clean: current -configuration is not joint and majority of safekeepers are aware of it, but do -not attempt to migrate anywhere. If current configuration fetched on step 1 is +configuration is not joint and the majority of safekeepers are aware of it, but do +not attempt to migrate anywhere. If the current configuration fetched on step 1 is not joint it jumps to step 7. It should be run at startup for all timelines (but similarly, in the first version it is ok to trigger it manually). @@ -315,7 +315,7 @@ similarly, in the first version it is ok to trigger it manually). `safekeepers` table mirroring current `nodes` should be added, except that for `scheduling_policy`: it is enough to have at least in the beginning only 3 fields: 1) `active` 2) `paused` (initially means only not assign new tlis there -3) `decomissioned` (node is removed). +3) `decommissioned` (node is removed). `timelines` table: ``` @@ -326,9 +326,10 @@ table! { tenant_id -> Varchar, start_lsn -> pg_lsn, generation -> Int4, - sk_set -> Array, // list of safekeeper ids + sk_set -> Array, // list of safekeeper ids new_sk_set -> Nullable>, // list of safekeeper ids, null if not joint conf cplane_notified_generation -> Int4, + sk_set_notified_generation -> Int4, // the generation a quorum of sk_set knows about deleted_at -> Nullable, } } @@ -338,13 +339,23 @@ table! 
{ might also want to add ancestor_timeline_id to preserve the hierarchy, but for this RFC it is not needed. +`cplane_notified_generation` and `sk_set_notified_generation` fields are used to +track the last stage of the algorithm, when we need to notify safekeeper set and cplane +with the final configuration after it's already committed to DB. + +The timeline is up-to-date (no migration in progress) if `new_sk_set` is null and +`*_notified_generation` fields are up to date with `generation`. + +It's possible to replace `*_notified_generation` with one boolean field `migration_completed`, +but for better observability it's nice to have them separately. + #### API Node management is similar to pageserver: -1) POST `/control/v1/safekeepers` inserts safekeeper. -2) GET `/control/v1/safekeepers` lists safekeepers. -3) GET `/control/v1/safekeepers/:node_id` gets safekeeper. -4) PUT `/control/v1/safekepers/:node_id/status` changes status to e.g. +1) POST `/control/v1/safekeeper` inserts safekeeper. +2) GET `/control/v1/safekeeper` lists safekeepers. +3) GET `/control/v1/safekeeper/:node_id` gets safekeeper. +4) PUT `/control/v1/safekeper/:node_id/scheduling_policy` changes status to e.g. `offline` or `decomissioned`. Initially it is simpler not to schedule any migrations here. @@ -368,8 +379,8 @@ Migration API: the first version is the simplest and the most imperative: all timelines from one safekeeper to another. It accepts json ``` { - "src_sk": u32, - "dst_sk": u32, + "src_sk": NodeId, + "dst_sk": NodeId, "limit": Optional, } ``` @@ -379,12 +390,15 @@ Returns list of scheduled requests. 2) PUT `/control/v1/tenant/:tenant_id/timeline/:timeline_id/safekeeper_migrate` schedules `MigrationRequest` to move single timeline to given set of safekeepers: ``` -{ - "desired_set": Vec, +struct TimelineSafekeeperMigrateRequest { + "new_sk_set": Vec, } ``` -Returns scheduled request. +In the first version the handler migrates the timeline to `new_sk_set` synchronously. 
+Should be retried until success. + +In the future we might change it to asynchronous API and return scheduled request. Similar call should be added for the tenant. @@ -434,6 +448,9 @@ table! { } ``` +We load all pending ops from the table on startup into the memory. +The table is needed only to preserve the state between restarts. + `op_type` can be `include` (seed from peers and ensure generation is up to date), `exclude` (remove locally) and `delete`. Field is actually not strictly needed as it can be computed from current configuration, but gives more explicit @@ -474,7 +491,7 @@ actions must be idempotent. Now, a tricky point here is timeline start LSN. For the initial (tenant creation) call cplane doesn't know it. However, setting start_lsn on safekeepers during creation is a good thing -- it provides a guarantee that walproposer can always find a common point in WAL histories of -safekeeper and its own, and so absense of it would be a clear sign of +safekeeper and its own, and so absence of it would be a clear sign of corruption. The following sequence works: 1) Create timeline (or observe that it exists) on pageserver, figuring out last_record_lsn in response. @@ -497,11 +514,9 @@ corruption. The following sequence works: retries the call until 200 response. There is a small question how request handler (timeline creation in this - case) would interact with per sk reconciler. As always I prefer to do the - simplest possible thing and here it seems to be just waking it up so it - re-reads the db for work to do. Passing work in memory is faster, but - that shouldn't matter, and path to scan db for work will exist anyway, - simpler to reuse it. + case) would interact with per sk reconciler. In the current implementation + we first persist the request in the DB, and then send an in-memory request + to each safekeeper reconciler to process it. 
For pg version / wal segment size: while we may persist them in `timelines` table, it is not necessary as initial creation at step 3 can take them from @@ -509,30 +524,40 @@ pageserver or cplane creation call and later pull_timeline will carry them around. Timeline migration. -1) CAS to the db to create joint conf, and in the same transaction create - `safekeeper_timeline_pending_ops` `include` entries to initialize new members - as well as deliver this conf to current ones; poke per sk reconcilers to work - on it. Also any conf change should also poke cplane notifier task(s). -2) Once it becomes possible per alg description above, get out of joint conf - with another CAS. Task should get wakeups from per sk reconcilers because - conf switch is required for advancement; however retries should be sleep - based as well as LSN advancement might be needed, though in happy path - it isn't. To see whether further transition is possible on wakup migration - executor polls safekeepers per the algorithm. CAS creating new conf with only - new members should again insert entries to `safekeeper_timeline_pending_ops` - to switch them there, as well as `exclude` rows to remove timeline from - old members. +1) CAS to the db to create joint conf. Since this moment the migration is considered to be + "in progress". We can detect all "in-progress" migrations looking into the database. +2) Do steps 4-6 from the algorithm, including `pull_timeline` onto `new_sk_set`, update membership + configuration on all safekeepers, notify cplane, etc. All operations are idempotent, + so we don't need to persist anything in the database at this stage. If any errors occur, + it's safe to retry or abort the migration. +3) Once it becomes possible per alg description above, get out of joint conf + with another CAS. Also should insert `exclude` entries into `safekeeper_timeline_pending_ops` + in the same DB transaction. 
Adding `exclude` entries atomically is necessary because after + CAS we don't have the list of excluded safekeepers in the `timelines` table anymore, but we + need to have them persisted somewhere in case the migration is interrupted right after the CAS. +4) Finish the migration. The final membership configuration is committed to the DB at this stage. + So, the migration can not be aborted anymore. But it can still be retried if the migration fails + past stage 3. To finish the migration we need to send the new membership configuration to + a new quorum of safekeepers, notify cplane with the new safekeeper list and schedule the `exclude` + requests to in-memory queue for safekeeper reconciler. If the algorithm is retried, it's + possible that we have already committed `exclude` requests to DB, but didn't send them to + the in-memory queue. In this case we need to read them from `safekeeper_timeline_pending_ops` + because it's the only place where they are persistent. The fields `sk_set_notified_generation` + and `cplane_notified_generation` are updated after each step. The migration is considered + fully completed when they match the `generation` field. + +In practice, we can report "success" after stage 3 and do the "finish" step in per-timeline +reconciler (if we implement it). But it's wise to at least try to finish them synchronously, +so the timeline is always in a "good state" and doesn't require an old quorum to commit +WAL after the migration reported "success". Timeline deletion: just set `deleted_at` on the timeline row and insert `safekeeper_timeline_pending_ops` entries in the same xact, the rest is done by per sk reconcilers. -When node is removed (set to `decomissioned`), `safekeeper_timeline_pending_ops` +When node is removed (set to `decommissioned`), `safekeeper_timeline_pending_ops` for it must be cleared in the same transaction. 
-One more task pool should infinitely retry notifying control plane about changed -safekeeper sets (trying making `cplane_notified_generation` equal `generation`). - #### Dealing with multiple instances of storage_controller Operations described above executed concurrently might create some errors but do @@ -541,7 +566,7 @@ of storage_controller it is fine to have it temporarily, e.g. during redeploy. To harden against some controller instance creating some work in `safekeeper_timeline_pending_ops` and then disappearing without anyone pickup up -the job per sk reconcilers apart from explicit wakups should scan for work +the job per sk reconcilers apart from explicit wakeups should scan for work periodically. It is possible to remove that though if all db updates are protected with leadership token/term -- then such scans are needed only after leadership is acquired. @@ -563,7 +588,7 @@ There should be following layers of tests: safekeeper communication and pull_timeline need to be mocked and main switch procedure wrapped to as a node (thread) in simulation tests, using these mocks. Test would inject migrations like it currently injects - safekeeper/walproposer restars. Main assert is the same -- committed WAL must + safekeeper/walproposer restarts. Main assert is the same -- committed WAL must not be lost. 3) Since simulation testing injects at relatively high level points (not @@ -613,7 +638,7 @@ Let's have the following implementation bits for gradual rollout: `notify-safekeepers`. Then the rollout for a region would be: -- Current situation: safekeepers are choosen by control_plane. +- Current situation: safekeepers are chosen by control_plane. - We manually migrate some timelines, test moving them around. - Then we enable `--set-safekeepers` so that all new timelines are on storage controller. 
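The membership-change algorithm in the RFC above ultimately rests on generation-guarded compare-and-swap against the strongly consistent configuration storage: a joint configuration is created with one CAS, and the migration is finalized with another. The following is a minimal illustrative sketch of that invariant in Python — the names (`ConfigStore`, `start_migration`, `finish_migration`) are hypothetical and are not the actual storage_controller code, which persists configurations in the `timelines` table.

```python
# Illustrative model of generation-guarded CAS for safekeeper membership
# changes. Hypothetical names; not the real storage_controller implementation.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass(frozen=True)
class Configuration:
    generation: int                          # uniquely identifies this conf
    sk_set: Tuple[int, ...]                  # current safekeeper set
    new_sk_set: Optional[Tuple[int, ...]]    # present only in a joint conf


class ConfigStore:
    """Stands in for the fault-tolerant, strongly consistent storage."""

    def __init__(self, initial: Configuration):
        self._conf = initial

    def get(self) -> Configuration:
        return self._conf

    def cas(self, expected_generation: int, new: Configuration) -> bool:
        # The swap succeeds only if nobody bumped the generation concurrently.
        if self._conf.generation != expected_generation:
            return False
        assert new.generation == expected_generation + 1
        self._conf = new
        return True


def start_migration(store: ConfigStore, desired_set: Tuple[int, ...]) -> bool:
    cur = store.get()
    if cur.new_sk_set is not None:
        return False  # a joint conf already exists; finish that change first
    joint = Configuration(cur.generation + 1, cur.sk_set, desired_set)
    return store.cas(cur.generation, joint)


def finish_migration(store: ConfigStore) -> bool:
    cur = store.get()
    if cur.new_sk_set is None:
        return False  # nothing in progress
    final = Configuration(cur.generation + 1, cur.new_sk_set, None)
    return store.cas(cur.generation, final)
```

Because every transition bumps the generation and is gated on the previously observed one, a stale controller instance retrying an old step simply loses the CAS, which is what makes the migration handler safely retriable.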
From fc10bb9438fa58efebde744179725bd55bbf9ee1 Mon Sep 17 00:00:00 2001 From: Dmitrii Kovalkov <34828390+DimasKovas@users.noreply.github.com> Date: Mon, 7 Jul 2025 13:22:03 +0400 Subject: [PATCH 03/65] storage: rename term -> last_log_term in TimelineMembershipSwitchResponse (#12481) ## Problem Names are not consistent between safekeeper migration RFC and the actual implementation. It's not used anywhere in production yet, so it's safe to rename. We don't need to worry about backward compatibility. - Follow up on https://github.com/neondatabase/neon/pull/12432 ## Summary of changes - rename term -> last_log_term in TimelineMembershipSwitchResponse - add missing fields to TimelineMembershipSwitchResponse in python --- libs/safekeeper_api/src/models.rs | 2 +- safekeeper/src/timeline.rs | 2 +- storage_controller/src/http.rs | 2 +- storage_controller/src/service/safekeeper_service.rs | 6 +++--- test_runner/fixtures/safekeeper/http.py | 8 +++++++- 5 files changed, 13 insertions(+), 7 deletions(-) diff --git a/libs/safekeeper_api/src/models.rs b/libs/safekeeper_api/src/models.rs index 1774489c1c..e87232474b 100644 --- a/libs/safekeeper_api/src/models.rs +++ b/libs/safekeeper_api/src/models.rs @@ -221,7 +221,7 @@ pub struct TimelineMembershipSwitchRequest { pub struct TimelineMembershipSwitchResponse { pub previous_conf: Configuration, pub current_conf: Configuration, - pub term: Term, + pub last_log_term: Term, pub flush_lsn: Lsn, } diff --git a/safekeeper/src/timeline.rs b/safekeeper/src/timeline.rs index 95b5fe6d5d..dbe510a019 100644 --- a/safekeeper/src/timeline.rs +++ b/safekeeper/src/timeline.rs @@ -197,7 +197,7 @@ impl StateSK { Ok(TimelineMembershipSwitchResponse { previous_conf: result.previous_conf, current_conf: result.current_conf, - term: self.state().acceptor_state.term, + last_log_term: self.state().acceptor_state.term, flush_lsn: self.flush_lsn(), }) } diff --git a/storage_controller/src/http.rs b/storage_controller/src/http.rs index ee446ea65d..e5a3a969d4 
100644 --- a/storage_controller/src/http.rs +++ b/storage_controller/src/http.rs @@ -2371,7 +2371,7 @@ pub fn make_router( named_request_span( r, handle_safekeeper_scheduling_policy, - RequestName("v1_safekeeper_status"), + RequestName("v1_safekeeper_scheduling_policy"), ) }) // Tenant Shard operations diff --git a/storage_controller/src/service/safekeeper_service.rs b/storage_controller/src/service/safekeeper_service.rs index 90ea48dd7b..d7179372b2 100644 --- a/storage_controller/src/service/safekeeper_service.rs +++ b/storage_controller/src/service/safekeeper_service.rs @@ -914,13 +914,13 @@ impl Service { // so it isn't counted toward the quorum. if let Some(min_position) = min_position { if let Ok(ok_res) = &res { - if (ok_res.term, ok_res.flush_lsn) < min_position { + if (ok_res.last_log_term, ok_res.flush_lsn) < min_position { // Use Error::Timeout to make this error retriable. res = Err(mgmt_api::Error::Timeout( format!( "safekeeper {} returned position {:?} which is less than minimum required position {:?}", client.node_id_label(), - (ok_res.term, ok_res.flush_lsn), + (ok_res.last_log_term, ok_res.flush_lsn), min_position ) )); @@ -1216,7 +1216,7 @@ impl Service { let mut sync_position = (INITIAL_TERM, Lsn::INVALID); for res in results.into_iter().flatten() { - let sk_position = (res.term, res.flush_lsn); + let sk_position = (res.last_log_term, res.flush_lsn); if sync_position < sk_position { sync_position = sk_position; } diff --git a/test_runner/fixtures/safekeeper/http.py b/test_runner/fixtures/safekeeper/http.py index 839e985419..942b620be6 100644 --- a/test_runner/fixtures/safekeeper/http.py +++ b/test_runner/fixtures/safekeeper/http.py @@ -112,12 +112,18 @@ class TimelineCreateRequest: class TimelineMembershipSwitchResponse: previous_conf: MembershipConfiguration current_conf: MembershipConfiguration + last_log_term: int + flush_lsn: Lsn @classmethod def from_json(cls, d: dict[str, Any]) -> TimelineMembershipSwitchResponse: previous_conf = 
MembershipConfiguration.from_json(d["previous_conf"]) current_conf = MembershipConfiguration.from_json(d["current_conf"]) - return TimelineMembershipSwitchResponse(previous_conf, current_conf) + last_log_term = d["last_log_term"] + flush_lsn = Lsn(d["flush_lsn"]) + return TimelineMembershipSwitchResponse( + previous_conf, current_conf, last_log_term, flush_lsn + ) class SafekeeperHttpClient(requests.Session, MetricsGetter): From 1eef961f09c46ede620b223f6ad71ce69d2e6a41 Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Mon, 7 Jul 2025 14:24:06 +0200 Subject: [PATCH 04/65] pageserver: add gRPC error logging (#12445) ## Problem We don't log gRPC request errors on the server. Touches #11728. ## Summary of changes Automatically log non-OK gRPC response statuses in the observability middleware, and add corresponding logging for the `get_pages` stream. Also adds the peer address and gRPC method to the gRPC tracing span. Example output: ``` 2025-07-02T20:18:16.813718Z WARN grpc:pageservice{peer=127.0.0.1:56698 method=CheckRelExists tenant_id=c7b45faa1924b1958f05c5fdee8b0d04 timeline_id=4a36ee64fd2f97781b9dcc2c3cddd51b shard_id=0000}: request failed with NotFound: Tenant c7b45faa1924b1958f05c5fdee8b0d04 not found ``` --- pageserver/src/page_service.rs | 81 ++++++++++++++++++++++++++++------ 1 file changed, 67 insertions(+), 14 deletions(-) diff --git a/pageserver/src/page_service.rs b/pageserver/src/page_service.rs index 1d824ac846..49928a9036 100644 --- a/pageserver/src/page_service.rs +++ b/pageserver/src/page_service.rs @@ -50,6 +50,7 @@ use tokio::io::{AsyncRead, AsyncReadExt as _, AsyncWrite, AsyncWriteExt as _, Bu use tokio::task::JoinHandle; use tokio_util::sync::CancellationToken; use tonic::service::Interceptor as _; +use tonic::transport::server::TcpConnectInfo; use tracing::*; use utils::auth::{Claims, Scope, SwappableJwtAuth}; use utils::id::{TenantId, TenantTimelineId, TimelineId}; @@ -3685,8 +3686,15 @@ impl proto::PageService for GrpcPageServiceHandler { 
yield match result { Ok(resp) => resp, // Convert per-request errors to GetPageResponses as appropriate, or terminate - // the stream with a tonic::Status. - Err(err) => page_api::GetPageResponse::try_from_status(err, req_id)?.into(), + // the stream with a tonic::Status. Log the error regardless, since + // ObservabilityLayer can't automatically log stream errors. + Err(status) => { + // TODO: it would be nice if we could propagate the get_page() fields here. + span.in_scope(|| { + warn!("request failed with {:?}: {}", status.code(), status.message()); + }); + page_api::GetPageResponse::try_from_status(status, req_id)?.into() + } } } }; @@ -3824,40 +3832,85 @@ impl tonic::server::NamedService for Observabili const NAME: &'static str = S::NAME; // propagate inner service name } -impl tower::Service> for ObservabilityLayerService +impl tower::Service> for ObservabilityLayerService where - S: tower::Service>, + S: tower::Service, Response = http::Response> + Send, S::Future: Send + 'static, { type Response = S::Response; type Error = S::Error; type Future = BoxFuture<'static, Result>; - fn call(&mut self, mut req: http::Request) -> Self::Future { + fn call(&mut self, mut req: http::Request) -> Self::Future { // Record the request start time as a request extension. // // TODO: we should start a timer here instead, but it currently requires a timeline handle // and SmgrQueryType, which we don't have yet. Refactor it to provide it later. req.extensions_mut().insert(ReceivedAt(Instant::now())); - // Create a basic tracing span. Enter the span for the current thread (to use it for inner - // sync code like interceptors), and instrument the future (to use it for inner async code - // like the page service itself). + // Extract the peer address and gRPC method. 
+        let peer = req
+            .extensions()
+            .get::<TcpConnectInfo>()
+            .and_then(|info| info.remote_addr())
+            .map(|addr| addr.to_string())
+            .unwrap_or_default();
+
+        let method = req
+            .uri()
+            .path()
+            .split('/')
+            .nth(2)
+            .unwrap_or(req.uri().path())
+            .to_string();
+
+        // Create a basic tracing span.
+        //
-        // The instrument() call below is not sufficient. It only affects the returned future, and
-        // only takes effect when the caller polls it. Any sync code executed when we call
-        // self.inner.call() below (such as interceptors) runs outside of the returned future, and
-        // is not affected by it. We therefore have to enter the span on the current thread too.
+        // Enter the span for the current thread and instrument the future. It is not sufficient to
+        // only instrument the future, since it only takes effect after the future is returned and
+        // polled, not when the inner service is called below (e.g. during interceptor execution).
         let span = info_span!(
             "grpc:pageservice",
-            // Set by TenantMetadataInterceptor.
+            // These will be populated by TenantMetadataInterceptor.
             tenant_id = field::Empty,
             timeline_id = field::Empty,
             shard_id = field::Empty,
+            // NB: empty fields must be listed first above. Otherwise, the field names will be
+            // clobbered when the empty fields are populated. They will be output last regardless.
+            %peer,
+            %method,
         );
         let _guard = span.enter();
 
-        Box::pin(self.inner.call(req).instrument(span.clone()))
+        // Construct a future for calling the inner service, but don't await it. This avoids having
+        // to clone the inner service into the future below.
+        let call = self.inner.call(req);
+
+        async move {
+            // Await the inner service call.
+            let result = call.await;
+
+            // Log gRPC error statuses. This won't include request info from handler spans, but it
+            // will catch all errors (even those emitted before handler spans are constructed). Only
+            // unary request errors are logged here, not streaming response errors.
+            if let Ok(ref resp) = result
+                && let Some(status) = tonic::Status::from_header_map(resp.headers())
+                && status.code() != tonic::Code::Ok
+            {
+                // TODO: it would be nice if we could propagate the handler span's request fields
+                // here. This could e.g. be done by attaching the request fields to
+                // tonic::Status::metadata via a proc macro.
+                warn!(
+                    "request failed with {:?}: {}",
+                    status.code(),
+                    status.message()
+                );
+            }
+
+            result
+        }
+        .instrument(span.clone())
+        .boxed()
     }
 
     fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {

From 55aef2993d1712dff35045f024baae2b9055e41b Mon Sep 17 00:00:00 2001
From: Conrad Ludgate
Date: Mon, 7 Jul 2025 16:12:02 +0100
Subject: [PATCH 05/65] introduce a JSON serialization lib (#12417)

See #11992 and #11961 for some examples of use cases.

This introduces a JSON serialization lib, designed for more flexibility than
serde_json offers.

## Dynamic construction

Sometimes you have dynamic values you want to serialize that are not already in
a serde-aware model like a struct or a Vec etc. To achieve this with serde, you
need to implement a lot of different traits on a lot of different new-types.
Because of this, it's often easier to give in and pull all the data into a
serde-aware model (serde_json::Value or some intermediate struct), but that is
often not very efficient.

This crate allows full control over the JSON encoding without needing to
implement any extra traits. Just call the relevant functions, and it will
guarantee a correctly encoded JSON value.

## Async construction

Similar to the above, sometimes the values arrive asynchronously. Often
collecting those values in memory is more expensive than writing them as JSON,
since the overheads of `Vec` and `String` are much higher; however, there are
exceptions.

Serializing to JSON all in one go is also more CPU intensive and can cause lag
spikes, whereas serializing values incrementally spreads out the CPU load and
reduces lag.
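The incremental, buffer-first construction described above can be sketched with plain `std` types. This is an illustrative simplification, not the API added by this patch — the real crate additionally handles string escaping, rollback-on-drop, and key/value encoder traits:

```rust
// Minimal sketch: a JSON list writer that appends already-encoded values
// to one byte buffer as they arrive, instead of collecting them first.
struct ListWriter {
    buf: Vec<u8>,
    first: bool,
}

impl ListWriter {
    fn new() -> Self {
        Self { buf: b"[".to_vec(), first: true }
    }

    /// Append one value. `encoded` must already be valid JSON.
    fn push_raw(&mut self, encoded: &str) {
        if !self.first {
            self.buf.push(b',');
        }
        self.first = false;
        self.buf.extend_from_slice(encoded.as_bytes());
    }

    /// Close the list and return the buffer.
    fn finish(mut self) -> Vec<u8> {
        self.buf.push(b']');
        self.buf
    }
}

fn main() {
    let mut w = ListWriter::new();
    // Values could arrive one at a time from an async stream;
    // each is appended to the buffer immediately.
    for v in ["1", "null", "\"hello\""] {
        w.push_raw(v);
    }
    assert_eq!(w.finish(), b"[1,null,\"hello\"]");
}
```

Because the writer only ever appends to a single `Vec<u8>`, no intermediate per-value allocations are kept alive, which is the memory/CPU trade-off the commit message describes.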
--- .config/hakari.toml | 1 + Cargo.lock | 11 + Cargo.toml | 1 + libs/proxy/json/Cargo.toml | 12 + libs/proxy/json/src/lib.rs | 412 ++++++++++++++++++++++++++++++++++ libs/proxy/json/src/macros.rs | 86 +++++++ libs/proxy/json/src/str.rs | 166 ++++++++++++++ libs/proxy/json/src/value.rs | 168 ++++++++++++++ workspace_hack/Cargo.toml | 2 + 9 files changed, 859 insertions(+) create mode 100644 libs/proxy/json/Cargo.toml create mode 100644 libs/proxy/json/src/lib.rs create mode 100644 libs/proxy/json/src/macros.rs create mode 100644 libs/proxy/json/src/str.rs create mode 100644 libs/proxy/json/src/value.rs diff --git a/.config/hakari.toml b/.config/hakari.toml index 3b6d9d8822..9991cd92b0 100644 --- a/.config/hakari.toml +++ b/.config/hakari.toml @@ -33,6 +33,7 @@ workspace-members = [ "compute_api", "consumption_metrics", "desim", + "json", "metrics", "pageserver_api", "postgres_backend", diff --git a/Cargo.lock b/Cargo.lock index 0d4dc10149..237defaec3 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -3489,6 +3489,15 @@ dependencies = [ "wasm-bindgen", ] +[[package]] +name = "json" +version = "0.1.0" +dependencies = [ + "futures", + "itoa", + "ryu", +] + [[package]] name = "json-structural-diff" version = "0.2.0" @@ -8693,8 +8702,10 @@ dependencies = [ "fail", "form_urlencoded", "futures-channel", + "futures-core", "futures-executor", "futures-io", + "futures-sink", "futures-util", "generic-array", "getrandom 0.2.11", diff --git a/Cargo.toml b/Cargo.toml index 68016a08a9..840e3c6036 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -42,6 +42,7 @@ members = [ "libs/walproposer", "libs/wal_decoder", "libs/postgres_initdb", + "libs/proxy/json", "libs/proxy/postgres-protocol2", "libs/proxy/postgres-types2", "libs/proxy/tokio-postgres2", diff --git a/libs/proxy/json/Cargo.toml b/libs/proxy/json/Cargo.toml new file mode 100644 index 0000000000..2f163c141d --- /dev/null +++ b/libs/proxy/json/Cargo.toml @@ -0,0 +1,12 @@ +[package] +name = "json" +version = "0.1.0" +edition.workspace = 
true +license.workspace = true + +[dependencies] +ryu = "1" +itoa = "1" + +[dev-dependencies] +futures = "0.3" diff --git a/libs/proxy/json/src/lib.rs b/libs/proxy/json/src/lib.rs new file mode 100644 index 0000000000..a8b2e6b509 --- /dev/null +++ b/libs/proxy/json/src/lib.rs @@ -0,0 +1,412 @@ +//! A JSON serialization lib, designed for more flexibility than `serde_json` offers. +//! +//! Features: +//! +//! ## Dynamic construction +//! +//! Sometimes you have dynamic values you want to serialize, that are not already in a serde-aware model like a struct or a Vec etc. +//! To achieve this with serde, you need to implement a lot of different traits on a lot of different new-types. +//! Because of this, it's often easier to give-in and pull all the data into a serde-aware model (`serde_json::Value` or some intermediate struct), +//! but that is often not very efficient. +//! +//! This crate allows full control over the JSON encoding without needing to implement any extra traits. Just call the +//! relevant functions, and it will guarantee a correctly encoded JSON value. +//! +//! ## Async construction +//! +//! Similar to the above, sometimes the values arrive asynchronously. Often collecting those values in memory +//! is more expensive than writing them as JSON, since the overheads of `Vec` and `String` is much higher, however +//! there are exceptions. +//! +//! Serializing to JSON all in one go is also more CPU intensive and can cause lag spikes, +//! whereas serializing values incrementally spreads out the CPU load and reduces lag. +//! +//! ## Examples +//! +//! To represent the following JSON as a compact string +//! +//! ```json +//! { +//! "results": { +//! "rows": [ +//! { +//! "id": 1, +//! "value": null +//! }, +//! { +//! "id": 2, +//! "value": "hello" +//! } +//! ] +//! } +//! } +//! ``` +//! +//! We can use the following code: +//! +//! ``` +//! // create the outer object +//! let s = json::value_to_string!(|v| json::value_as_object!(|v| { +//! 
+//!     // create an entry with key "results" and start an object value associated with it.
+//!     let results = v.key("results");
+//!     json::value_as_object!(|results| {
+//!         // create an entry with key "rows" and start a list value associated with it.
+//!         let rows = results.key("rows");
+//!         json::value_as_list!(|rows| {
+//!             // create a list entry and start an object value associated with it.
+//!             let row = rows.entry();
+//!             json::value_as_object!(|row| {
+//!                 // add entry "id": 1
+//!                 row.entry("id", 1);
+//!                 // add entry "value": null
+//!                 row.entry("value", json::Null);
+//!             });
+//!
+//!             // create a list entry and start an object value associated with it.
+//!             let row = rows.entry();
+//!             json::value_as_object!(|row| {
+//!                 // add entry "id": 2
+//!                 row.entry("id", 2);
+//!                 // add entry "value": "hello"
+//!                 row.entry("value", "hello");
+//!             });
+//!         });
+//!     });
+//! }));
+//!
+//! assert_eq!(s, r#"{"results":{"rows":[{"id":1,"value":null},{"id":2,"value":"hello"}]}}"#);
+//! ```
+
+mod macros;
+mod str;
+mod value;
+
+pub use value::{Null, ValueEncoder};
+
+#[must_use]
+/// Serialize a single json value.
+pub struct ValueSer<'buf> {
+    buf: &'buf mut Vec<u8>,
+    start: usize,
+}
+
+impl<'buf> ValueSer<'buf> {
+    /// Create a new json value serializer.
+    pub fn new(buf: &'buf mut Vec<u8>) -> Self {
+        Self { buf, start: 0 }
+    }
+
+    /// Borrow the underlying buffer
+    pub fn as_buffer(&self) -> &[u8] {
+        self.buf
+    }
+
+    #[inline]
+    pub fn value(self, e: impl ValueEncoder) {
+        e.encode(self);
+    }
+
+    /// Write raw bytes to the buf. This must be already JSON encoded.
+    #[inline]
+    pub fn write_raw_json(self, data: &[u8]) {
+        self.buf.extend_from_slice(data);
+        self.finish();
+    }
+
+    /// Start a new object serializer.
+    #[inline]
+    pub fn object(self) -> ObjectSer<'buf> {
+        ObjectSer::new(self)
+    }
+
+    /// Start a new list serializer.
+    #[inline]
+    pub fn list(self) -> ListSer<'buf> {
+        ListSer::new(self)
+    }
+
+    /// Finish the value ser.
+    #[inline]
+    fn finish(self) {
+        // don't trigger the drop handler which triggers a rollback.
+        // this won't cause memory leaks because `ValueSer` owns no allocations.
+        std::mem::forget(self);
+    }
+}
+
+impl Drop for ValueSer<'_> {
+    fn drop(&mut self) {
+        self.buf.truncate(self.start);
+    }
+}
+
+#[must_use]
+/// Serialize a json object.
+pub struct ObjectSer<'buf> {
+    value: ValueSer<'buf>,
+    start: usize,
+}
+
+impl<'buf> ObjectSer<'buf> {
+    /// Start a new object serializer.
+    #[inline]
+    pub fn new(value: ValueSer<'buf>) -> Self {
+        value.buf.push(b'{');
+        let start = value.buf.len();
+        Self { value, start }
+    }
+
+    /// Borrow the underlying buffer
+    pub fn as_buffer(&self) -> &[u8] {
+        self.value.as_buffer()
+    }
+
+    /// Start a new object entry with the given string key, returning a [`ValueSer`] for the associated value.
+    #[inline]
+    pub fn key(&mut self, key: impl KeyEncoder) -> ValueSer<'_> {
+        key.write_key(self)
+    }
+
+    /// Write an entry (key-value pair) to the object.
+    #[inline]
+    pub fn entry(&mut self, key: impl KeyEncoder, val: impl ValueEncoder) {
+        self.key(key).value(val);
+    }
+
+    #[inline]
+    fn entry_inner(&mut self, f: impl FnOnce(&mut Vec<u8>)) -> ValueSer<'_> {
+        // track the start before the separator, so that if the value is rolled back
+        // it also removes the separator.
+        let start = self.value.buf.len();
+
+        // push separator if necessary
+        if self.value.buf.len() > self.start {
+            self.value.buf.push(b',');
+        }
+        // push key
+        f(self.value.buf);
+        // push value separator
+        self.value.buf.push(b':');
+
+        // return value writer.
+        ValueSer {
+            buf: self.value.buf,
+            start,
+        }
+    }
+
+    /// Reset the buffer back to before this object was started.
+    #[inline]
+    pub fn rollback(self) -> ValueSer<'buf> {
+        // Do not fully reset the value, only reset it to before the `{`.
+        // This ensures any `,` before this value are not clobbered.
+        self.value.buf.truncate(self.start - 1);
+        self.value
+    }
+
+    /// Finish the object ser.
+    #[inline]
+    pub fn finish(self) {
+        self.value.buf.push(b'}');
+        self.value.finish();
+    }
+}
+
+pub trait KeyEncoder {
+    fn write_key<'a>(self, obj: &'a mut ObjectSer) -> ValueSer<'a>;
+}
+
+#[must_use]
+/// Serialize a json list.
+pub struct ListSer<'buf> {
+    value: ValueSer<'buf>,
+    start: usize,
+}
+
+impl<'buf> ListSer<'buf> {
+    /// Start a new list serializer.
+    #[inline]
+    pub fn new(value: ValueSer<'buf>) -> Self {
+        value.buf.push(b'[');
+        let start = value.buf.len();
+        Self { value, start }
+    }
+
+    /// Borrow the underlying buffer
+    pub fn as_buffer(&self) -> &[u8] {
+        self.value.as_buffer()
+    }
+
+    /// Write a value to the list.
+    #[inline]
+    pub fn push(&mut self, val: impl ValueEncoder) {
+        self.entry().value(val);
+    }
+
+    /// Start a new value entry in this list.
+    #[inline]
+    pub fn entry(&mut self) -> ValueSer<'_> {
+        // track the start before the separator, so that if the value is rolled back
+        // it also removes the separator.
+        let start = self.value.buf.len();
+
+        // push separator if necessary
+        if self.value.buf.len() > self.start {
+            self.value.buf.push(b',');
+        }
+
+        // return value writer.
+        ValueSer {
+            buf: self.value.buf,
+            start,
+        }
+    }
+
+    /// Reset the buffer back to before this list was started.
+    #[inline]
+    pub fn rollback(self) -> ValueSer<'buf> {
+        // Do not fully reset the value, only reset it to before the `[`.
+        // This ensures any `,` before this value are not clobbered.
+        self.value.buf.truncate(self.start - 1);
+        self.value
+    }
+
+    /// Finish the list ser.
+ #[inline] + pub fn finish(self) { + self.value.buf.push(b']'); + self.value.finish(); + } +} + +#[cfg(test)] +mod tests { + use crate::{Null, ValueSer}; + + #[test] + fn object() { + let mut buf = vec![]; + let mut object = ValueSer::new(&mut buf).object(); + object.entry("foo", "bar"); + object.entry("baz", Null); + object.finish(); + + assert_eq!(buf, br#"{"foo":"bar","baz":null}"#); + } + + #[test] + fn list() { + let mut buf = vec![]; + let mut list = ValueSer::new(&mut buf).list(); + list.entry().value("bar"); + list.entry().value(Null); + list.finish(); + + assert_eq!(buf, br#"["bar",null]"#); + } + + #[test] + fn object_macro() { + let res = crate::value_to_string!(|obj| { + crate::value_as_object!(|obj| { + obj.entry("foo", "bar"); + obj.entry("baz", Null); + }) + }); + + assert_eq!(res, r#"{"foo":"bar","baz":null}"#); + } + + #[test] + fn list_macro() { + let res = crate::value_to_string!(|list| { + crate::value_as_list!(|list| { + list.entry().value("bar"); + list.entry().value(Null); + }) + }); + + assert_eq!(res, r#"["bar",null]"#); + } + + #[test] + fn rollback_on_drop() { + let res = crate::value_to_string!(|list| { + crate::value_as_list!(|list| { + list.entry().value("bar"); + + 'cancel: { + let nested_list = list.entry(); + crate::value_as_list!(|nested_list| { + nested_list.entry().value(1); + + assert_eq!(nested_list.as_buffer(), br#"["bar",[1"#); + if true { + break 'cancel; + } + }) + } + + assert_eq!(list.as_buffer(), br#"["bar""#); + + list.entry().value(Null); + }) + }); + + assert_eq!(res, r#"["bar",null]"#); + } + + #[test] + fn rollback_object() { + let res = crate::value_to_string!(|obj| { + crate::value_as_object!(|obj| { + let entry = obj.key("1"); + entry.value(1_i32); + + let entry = obj.key("2"); + let entry = { + let mut nested_obj = entry.object(); + nested_obj.entry("foo", "bar"); + nested_obj.rollback() + }; + + entry.value(2_i32); + }) + }); + + assert_eq!(res, r#"{"1":1,"2":2}"#); + } + + #[test] + fn rollback_list() { + let 
res = crate::value_to_string!(|list| { + crate::value_as_list!(|list| { + let entry = list.entry(); + entry.value(1_i32); + + let entry = list.entry(); + let entry = { + let mut nested_list = entry.list(); + nested_list.push("foo"); + nested_list.rollback() + }; + + entry.value(2_i32); + }) + }); + + assert_eq!(res, r#"[1,2]"#); + } + + #[test] + fn string_escaping() { + let mut buf = vec![]; + let mut object = ValueSer::new(&mut buf).object(); + + let key = "hello"; + let value = "\n world"; + + object.entry(format_args!("{key:?}"), value); + object.finish(); + + assert_eq!(buf, br#"{"\"hello\"":"\n world"}"#); + } +} diff --git a/libs/proxy/json/src/macros.rs b/libs/proxy/json/src/macros.rs new file mode 100644 index 0000000000..d3b5cfed10 --- /dev/null +++ b/libs/proxy/json/src/macros.rs @@ -0,0 +1,86 @@ +//! # Examples +//! +//! ``` +//! use futures::{StreamExt, TryStream, TryStreamExt}; +//! +//! async fn stream_to_json_list(mut s: S) -> Result +//! where +//! S: TryStream + Unpin, +//! T: json::ValueEncoder +//! { +//! Ok(json::value_to_string!(|val| json::value_as_list!(|val| { +//! // note how we can use `.await` and `?` in here. +//! while let Some(value) = s.try_next().await? { +//! val.push(value); +//! } +//! }))) +//! } +//! +//! let stream = futures::stream::iter([1, 2, 3]).map(Ok::); +//! let json_string = futures::executor::block_on(stream_to_json_list(stream)).unwrap(); +//! assert_eq!(json_string, "[1,2,3]"); +//! ``` + +/// A helper to create a new JSON vec. +/// +/// Implemented as a macro to preserve all control flow. +#[macro_export] +macro_rules! value_to_vec { + (|$val:ident| $body:expr) => {{ + let mut buf = vec![]; + let $val = $crate::ValueSer::new(&mut buf); + let _: () = $body; + buf + }}; +} + +/// A helper to create a new JSON string. +/// +/// Implemented as a macro to preserve all control flow. +#[macro_export] +macro_rules! 
value_to_string { + (|$val:ident| $body:expr) => {{ + ::std::string::String::from_utf8($crate::value_to_vec!(|$val| $body)) + .expect("json should be valid utf8") + }}; +} + +/// A helper that ensures the [`ObjectSer::finish`](crate::ObjectSer::finish) method is called on completion. +/// +/// Consumes `$val` and assigns it as an [`ObjectSer`](crate::ObjectSer) serializer. +/// The serializer is only 'finished' if the body completes. +/// The serializer is rolled back if `break`/`return` escapes the body. +/// +/// Implemented as a macro to preserve all control flow. +#[macro_export] +macro_rules! value_as_object { + (|$val:ident| $body:expr) => {{ + let mut obj = $crate::ObjectSer::new($val); + + let $val = &mut obj; + let res = $body; + + obj.finish(); + res + }}; +} + +/// A helper that ensures the [`ListSer::finish`](crate::ListSer::finish) method is called on completion. +/// +/// Consumes `$val` and assigns it as an [`ListSer`](crate::ListSer) serializer. +/// The serializer is only 'finished' if the body completes. +/// The serializer is rolled back if `break`/`return` escapes the body. +/// +/// Implemented as a macro to preserve all control flow. +#[macro_export] +macro_rules! value_as_list { + (|$val:ident| $body:expr) => {{ + let mut list = $crate::ListSer::new($val); + + let $val = &mut list; + let res = $body; + + list.finish(); + res + }}; +} diff --git a/libs/proxy/json/src/str.rs b/libs/proxy/json/src/str.rs new file mode 100644 index 0000000000..b092fd50ec --- /dev/null +++ b/libs/proxy/json/src/str.rs @@ -0,0 +1,166 @@ +//! Helpers for serializing escaped strings. +//! +//! ## License +//! +//! +//! +//! Licensed by David Tolnay under MIT or Apache-2.0. +//! +//! With modifications by Conrad Ludgate on behalf of Databricks. + +use std::fmt::{self, Write}; + +/// Represents a character escape code in a type-safe manner. 
+pub enum CharEscape {
+    /// An escaped quote `"`
+    Quote,
+    /// An escaped reverse solidus `\`
+    ReverseSolidus,
+    // /// An escaped solidus `/`
+    // Solidus,
+    /// An escaped backspace character (usually escaped as `\b`)
+    Backspace,
+    /// An escaped form feed character (usually escaped as `\f`)
+    FormFeed,
+    /// An escaped line feed character (usually escaped as `\n`)
+    LineFeed,
+    /// An escaped carriage return character (usually escaped as `\r`)
+    CarriageReturn,
+    /// An escaped tab character (usually escaped as `\t`)
+    Tab,
+    /// An escaped ASCII plane control character (usually escaped as
+    /// `\u00XX` where `XX` are two hex characters)
+    AsciiControl(u8),
+}
+
+impl CharEscape {
+    #[inline]
+    fn from_escape_table(escape: u8, byte: u8) -> CharEscape {
+        match escape {
+            self::BB => CharEscape::Backspace,
+            self::TT => CharEscape::Tab,
+            self::NN => CharEscape::LineFeed,
+            self::FF => CharEscape::FormFeed,
+            self::RR => CharEscape::CarriageReturn,
+            self::QU => CharEscape::Quote,
+            self::BS => CharEscape::ReverseSolidus,
+            self::UU => CharEscape::AsciiControl(byte),
+            _ => unreachable!(),
+        }
+    }
+}
+
+pub(crate) fn format_escaped_str(writer: &mut Vec<u8>, value: &str) {
+    writer.reserve(2 + value.len());
+
+    writer.push(b'"');
+
+    let rest = format_escaped_str_contents(writer, value);
+    writer.extend_from_slice(rest);
+
+    writer.push(b'"');
+}
+
+pub(crate) fn format_escaped_fmt(writer: &mut Vec<u8>, args: fmt::Arguments) {
+    writer.push(b'"');
+
+    Collect { buf: writer }
+        .write_fmt(args)
+        .expect("formatting should not error");
+
+    writer.push(b'"');
+}
+
+struct Collect<'buf> {
+    buf: &'buf mut Vec<u8>,
+}
+
+impl fmt::Write for Collect<'_> {
+    fn write_str(&mut self, s: &str) -> fmt::Result {
+        let last = format_escaped_str_contents(self.buf, s);
+        self.buf.extend(last);
+        Ok(())
+    }
+}
+
+// writes any escape sequences, and returns the suffix still needed to be written.
+fn format_escaped_str_contents<'a>(writer: &mut Vec<u8>, value: &'a str) -> &'a [u8] {
+    let bytes = value.as_bytes();
+
+    let mut start = 0;
+
+    for (i, &byte) in bytes.iter().enumerate() {
+        let escape = ESCAPE[byte as usize];
+        if escape == 0 {
+            continue;
+        }
+
+        writer.extend_from_slice(&bytes[start..i]);
+
+        let char_escape = CharEscape::from_escape_table(escape, byte);
+        write_char_escape(writer, char_escape);
+
+        start = i + 1;
+    }
+
+    &bytes[start..]
+}
+
+const BB: u8 = b'b'; // \x08
+const TT: u8 = b't'; // \x09
+const NN: u8 = b'n'; // \x0A
+const FF: u8 = b'f'; // \x0C
+const RR: u8 = b'r'; // \x0D
+const QU: u8 = b'"'; // \x22
+const BS: u8 = b'\\'; // \x5C
+const UU: u8 = b'u'; // \x00...\x1F except the ones above
+const __: u8 = 0;
+
+// Lookup table of escape sequences. A value of b'x' at index i means that byte
+// i is escaped as "\x" in JSON. A value of 0 means that byte i is not escaped.
+static ESCAPE: [u8; 256] = [
+    //   1   2   3   4   5   6   7   8   9   A   B   C   D   E   F
+    UU, UU, UU, UU, UU, UU, UU, UU, BB, TT, NN, UU, FF, RR, UU, UU, // 0
+    UU, UU, UU, UU, UU, UU, UU, UU, UU, UU, UU, UU, UU, UU, UU, UU, // 1
+    __, __, QU, __, __, __, __, __, __, __, __, __, __, __, __, __, // 2
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // 3
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // 4
+    __, __, __, __, __, __, __, __, __, __, __, __, BS, __, __, __, // 5
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // 6
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // 7
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // 8
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // 9
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // A
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // B
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // C
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // D
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // E
+    __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, __, // F
+];
+
+fn write_char_escape(writer: &mut Vec<u8>, char_escape: CharEscape) {
+    let s = match char_escape {
+        CharEscape::Quote => b"\\\"",
+        CharEscape::ReverseSolidus => b"\\\\",
+        // CharEscape::Solidus => b"\\/",
+        CharEscape::Backspace => b"\\b",
+        CharEscape::FormFeed => b"\\f",
+        CharEscape::LineFeed => b"\\n",
+        CharEscape::CarriageReturn => b"\\r",
+        CharEscape::Tab => b"\\t",
+        CharEscape::AsciiControl(byte) => {
+            static HEX_DIGITS: [u8; 16] = *b"0123456789abcdef";
+            let bytes = &[
+                b'\\',
+                b'u',
+                b'0',
+                b'0',
+                HEX_DIGITS[(byte >> 4) as usize],
+                HEX_DIGITS[(byte & 0xF) as usize],
+            ];
+            return writer.extend_from_slice(bytes);
+        }
+    };
+
+    writer.extend_from_slice(s);
+}
diff --git a/libs/proxy/json/src/value.rs b/libs/proxy/json/src/value.rs
new file mode 100644
index 0000000000..705af9603e
--- /dev/null
+++ b/libs/proxy/json/src/value.rs
@@ -0,0 +1,168 @@
+use core::fmt;
+use std::collections::{BTreeMap, HashMap};
+
+use crate::str::{format_escaped_fmt, format_escaped_str};
+use crate::{KeyEncoder, ObjectSer, ValueSer, value_as_list, value_as_object};
+
+/// Write a value to the underlying json representation.
+pub trait ValueEncoder {
+    fn encode(self, v: ValueSer<'_>);
+}
+
+pub(crate) fn write_int(x: impl itoa::Integer, b: &mut Vec<u8>) {
+    b.extend_from_slice(itoa::Buffer::new().format(x).as_bytes());
+}
+
+pub(crate) fn write_float(x: impl ryu::Float, b: &mut Vec<u8>) {
+    b.extend_from_slice(ryu::Buffer::new().format(x).as_bytes());
+}
+
+impl<T: ValueEncoder + Copy> ValueEncoder for &T {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        T::encode(*self, v);
+    }
+}
+
+impl ValueEncoder for &str {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        format_escaped_str(v.buf, self);
+        v.finish();
+    }
+}
+
+impl ValueEncoder for fmt::Arguments<'_> {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        if let Some(s) = self.as_str() {
+            format_escaped_str(v.buf, s);
+        } else {
+            format_escaped_fmt(v.buf, self);
+        }
+        v.finish();
+    }
+}
+
+macro_rules! int {
+    [$($t:ty),*] => {
+        $(
+            impl ValueEncoder for $t {
+                #[inline]
+                fn encode(self, v: ValueSer<'_>) {
+                    write_int(self, v.buf);
+                    v.finish();
+                }
+            }
+        )*
+    };
+}
+
+int![u8, u16, u32, u64, usize, u128];
+int![i8, i16, i32, i64, isize, i128];
+
+macro_rules! float {
+    [$($t:ty),*] => {
+        $(
+            impl ValueEncoder for $t {
+                #[inline]
+                fn encode(self, v: ValueSer<'_>) {
+                    write_float(self, v.buf);
+                    v.finish();
+                }
+            }
+        )*
+    };
+}
+
+float![f32, f64];
+
+impl ValueEncoder for bool {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        v.write_raw_json(if self { b"true" } else { b"false" });
+    }
+}
+
+impl<T: ValueEncoder> ValueEncoder for Option<T> {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        match self {
+            Some(value) => value.encode(v),
+            None => Null.encode(v),
+        }
+    }
+}
+
+impl KeyEncoder for &str {
+    #[inline]
+    fn write_key<'a>(self, obj: &'a mut ObjectSer) -> ValueSer<'a> {
+        let obj = &mut *obj;
+        obj.entry_inner(|b| format_escaped_str(b, self))
+    }
+}
+
+impl KeyEncoder for fmt::Arguments<'_> {
+    #[inline]
+    fn write_key<'a>(self, obj: &'a mut ObjectSer) -> ValueSer<'a> {
+        if let Some(key) = self.as_str() {
+            obj.entry_inner(|b| format_escaped_str(b, key))
+        } else {
+            obj.entry_inner(|b| format_escaped_fmt(b, self))
+        }
+    }
+}
+
+/// Represents the JSON null value.
+pub struct Null;
+
+impl ValueEncoder for Null {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        v.write_raw_json(b"null");
+    }
+}
+
+impl<T: ValueEncoder> ValueEncoder for Vec<T> {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        value_as_list!(|v| {
+            for t in self {
+                v.entry().value(t);
+            }
+        });
+    }
+}
+
+impl<T: ValueEncoder + Copy> ValueEncoder for &[T] {
+    #[inline]
+    fn encode(self, v: ValueSer<'_>) {
+        value_as_list!(|v| {
+            for t in self {
+                v.entry().value(t);
+            }
+        });
+    }
+}
+
+impl<K: KeyEncoder, V: ValueEncoder> ValueEncoder for HashMap<K, V> {
+    #[inline]
+    fn encode(self, o: ValueSer<'_>) {
+        value_as_object!(|o| {
+            for (k, v) in self {
+                o.entry(k, v);
+            }
+        });
+    }
+}
+
+impl<K: KeyEncoder, V: ValueEncoder> ValueEncoder for BTreeMap<K, V> {
+    #[inline]
+    fn encode(self, o: ValueSer<'_>) {
+        value_as_object!(|o| {
+            for (k, v) in self {
+                o.entry(k, v);
+            }
+        });
+    }
+}
diff --git a/workspace_hack/Cargo.toml b/workspace_hack/Cargo.toml
index fb10e27d2a..fc01deb92d 100644
--- a/workspace_hack/Cargo.toml
+++ b/workspace_hack/Cargo.toml
@@ -40,8 +40,10 @@ env_logger = { version = "0.11" }
 fail = { version = "0.5", default-features = false, features = ["failpoints"] }
 form_urlencoded = { version = "1" }
 futures-channel = { version = "0.3", features = ["sink"] }
+futures-core = { version = "0.3" }
 futures-executor = { version = "0.3" }
 futures-io = { version = "0.3" }
+futures-sink = { version = "0.3" }
 futures-util = { version = "0.3", features = ["channel", "io", "sink"] }
 generic-array = { version = "0.14", default-features = false, features = ["more_lengths", "zeroize"] }
 getrandom = { version = "0.2", default-features = false, features = ["std"] }

From e65d5f73697a55bac9966c9cb91124ba5cadaaaa Mon Sep 17 00:00:00 2001
From: Folke Behrens
Date: Mon, 7 Jul 2025 19:46:33 +0200
Subject: [PATCH 06/65] proxy: Remove the endpoint filter cache (#12488)

## Problem

The endpoint filter cache is still unused because it's not yet reliable enough
to be used; meanwhile it only consumes a lot of memory.

## Summary of changes

Remove the code. Needs a new design.
neondatabase/cloud#30634 --- proxy/src/binary/proxy.rs | 23 +- proxy/src/cache/endpoints.rs | 283 ------------------ proxy/src/cache/mod.rs | 1 - proxy/src/config.rs | 75 +---- proxy/src/context/mod.rs | 48 +-- .../control_plane/client/cplane_proxy_v1.rs | 17 -- proxy/src/control_plane/client/mod.rs | 7 +- proxy/src/control_plane/errors.rs | 11 - proxy/src/rate_limiter/limiter.rs | 38 --- proxy/src/rate_limiter/mod.rs | 2 +- proxy/src/types.rs | 10 - 11 files changed, 11 insertions(+), 504 deletions(-) delete mode 100644 proxy/src/cache/endpoints.rs diff --git a/proxy/src/binary/proxy.rs b/proxy/src/binary/proxy.rs index c10678dc68..691709ce2a 100644 --- a/proxy/src/binary/proxy.rs +++ b/proxy/src/binary/proxy.rs @@ -21,7 +21,7 @@ use tokio::net::TcpListener; use tokio::sync::Notify; use tokio::task::JoinSet; use tokio_util::sync::CancellationToken; -use tracing::{Instrument, error, info, warn}; +use tracing::{error, info, warn}; use utils::sentry_init::init_sentry; use utils::{project_build_tag, project_git_version}; @@ -195,7 +195,9 @@ struct ProxyCliArgs { #[clap(long, default_value = config::ProjectInfoCacheOptions::CACHE_DEFAULT_OPTIONS)] project_info_cache: String, /// cache for all valid endpoints - #[clap(long, default_value = config::EndpointCacheConfig::CACHE_DEFAULT_OPTIONS)] + // TODO: remove after a couple of releases. 
+ #[clap(long, default_value_t = String::new())] + #[deprecated] endpoint_cache_config: String, #[clap(flatten)] parquet_upload: ParquetUploadArgs, @@ -558,13 +560,6 @@ pub async fn run() -> anyhow::Result<()> { } } } - - // listen for notifications of new projects/endpoints/branches - let cache = api.caches.endpoints_cache.clone(); - let span = tracing::info_span!("endpoints_cache"); - maintenance_tasks.spawn( - async move { cache.do_read(client, cancellation_token.clone()).await }.instrument(span), - ); } let maintenance = loop { @@ -712,18 +707,15 @@ fn build_auth_backend( let wake_compute_cache_config: CacheOptions = args.wake_compute_cache.parse()?; let project_info_cache_config: ProjectInfoCacheOptions = args.project_info_cache.parse()?; - let endpoint_cache_config: config::EndpointCacheConfig = - args.endpoint_cache_config.parse()?; info!("Using NodeInfoCache (wake_compute) with options={wake_compute_cache_config:?}"); info!( "Using AllowedIpsCache (wake_compute) with options={project_info_cache_config:?}" ); - info!("Using EndpointCacheConfig with options={endpoint_cache_config:?}"); + let caches = Box::leak(Box::new(control_plane::caches::ApiCaches::new( wake_compute_cache_config, project_info_cache_config, - endpoint_cache_config, ))); let config::ConcurrencyLockOptions { @@ -793,18 +785,15 @@ fn build_auth_backend( let wake_compute_cache_config: CacheOptions = args.wake_compute_cache.parse()?; let project_info_cache_config: ProjectInfoCacheOptions = args.project_info_cache.parse()?; - let endpoint_cache_config: config::EndpointCacheConfig = - args.endpoint_cache_config.parse()?; info!("Using NodeInfoCache (wake_compute) with options={wake_compute_cache_config:?}"); info!( "Using AllowedIpsCache (wake_compute) with options={project_info_cache_config:?}" ); - info!("Using EndpointCacheConfig with options={endpoint_cache_config:?}"); + let caches = Box::leak(Box::new(control_plane::caches::ApiCaches::new( wake_compute_cache_config, 
             project_info_cache_config,
-            endpoint_cache_config,
         )));
 
         let config::ConcurrencyLockOptions {
diff --git a/proxy/src/cache/endpoints.rs b/proxy/src/cache/endpoints.rs
deleted file mode 100644
index 3c88e07484..0000000000
--- a/proxy/src/cache/endpoints.rs
+++ /dev/null
@@ -1,283 +0,0 @@
-use std::convert::Infallible;
-use std::future::pending;
-use std::sync::atomic::{AtomicBool, Ordering};
-use std::sync::{Arc, Mutex};
-
-use clashmap::ClashSet;
-use redis::streams::{StreamReadOptions, StreamReadReply};
-use redis::{AsyncCommands, FromRedisValue, Value};
-use serde::Deserialize;
-use tokio_util::sync::CancellationToken;
-use tracing::info;
-
-use crate::config::EndpointCacheConfig;
-use crate::context::RequestContext;
-use crate::ext::LockExt;
-use crate::intern::{BranchIdInt, EndpointIdInt, ProjectIdInt};
-use crate::metrics::{Metrics, RedisErrors, RedisEventsCount};
-use crate::rate_limiter::GlobalRateLimiter;
-use crate::redis::connection_with_credentials_provider::ConnectionWithCredentialsProvider;
-use crate::types::EndpointId;
-
-// TODO: this could be an enum, but events in Redis need to be fixed first.
-// ProjectCreated was sent with type:branch_created. So we ignore type.
-#[derive(Deserialize, Debug, Clone, PartialEq)]
-struct ControlPlaneEvent {
-    endpoint_created: Option<EndpointCreated>,
-    branch_created: Option<BranchCreated>,
-    project_created: Option<ProjectCreated>,
-    #[serde(rename = "type")]
-    _type: Option<String>,
-}
-
-#[derive(Deserialize, Debug, Clone, PartialEq)]
-struct EndpointCreated {
-    endpoint_id: EndpointIdInt,
-}
-
-#[derive(Deserialize, Debug, Clone, PartialEq)]
-struct BranchCreated {
-    branch_id: BranchIdInt,
-}
-
-#[derive(Deserialize, Debug, Clone, PartialEq)]
-struct ProjectCreated {
-    project_id: ProjectIdInt,
-}
-
-impl TryFrom<&Value> for ControlPlaneEvent {
-    type Error = anyhow::Error;
-    fn try_from(value: &Value) -> Result<Self, Self::Error> {
-        let json = String::from_redis_value(value)?;
-        Ok(serde_json::from_str(&json)?)
- } -} - -pub struct EndpointsCache { - config: EndpointCacheConfig, - endpoints: ClashSet<EndpointIdInt>, - branches: ClashSet<BranchIdInt>, - projects: ClashSet<ProjectIdInt>, - ready: AtomicBool, - limiter: Arc<Mutex<GlobalRateLimiter>>, -} - -impl EndpointsCache { - pub(crate) fn new(config: EndpointCacheConfig) -> Self { - Self { - limiter: Arc::new(Mutex::new(GlobalRateLimiter::new( - config.limiter_info.clone(), - ))), - config, - endpoints: ClashSet::new(), - branches: ClashSet::new(), - projects: ClashSet::new(), - ready: AtomicBool::new(false), - } - } - - pub(crate) fn is_valid(&self, ctx: &RequestContext, endpoint: &EndpointId) -> bool { - if !self.ready.load(Ordering::Acquire) { - // the endpoint cache is not yet fully initialised. - return true; - } - - if !self.should_reject(endpoint) { - ctx.set_rejected(false); - return true; - } - - // report that we might want to reject this endpoint - ctx.set_rejected(true); - - // If cache is disabled, just collect the metrics and return. - if self.config.disable_cache { - return true; - } - - // If the limiter allows, we can pretend like it's valid - // (incase it is, due to redis channel lag). - if self.limiter.lock_propagate_poison().check() { - return true; - } - - // endpoint not found, and there's too much load. - false - } - - fn should_reject(&self, endpoint: &EndpointId) -> bool { - if endpoint.is_endpoint() { - let Some(endpoint) = EndpointIdInt::get(endpoint) else { - // if we haven't interned this endpoint, it's not in the cache. - return true; - }; - !self.endpoints.contains(&endpoint) - } else if endpoint.is_branch() { - let Some(branch) = BranchIdInt::get(endpoint) else { - // if we haven't interned this branch, it's not in the cache. - return true; - }; - !self.branches.contains(&branch) - } else { - let Some(project) = ProjectIdInt::get(endpoint) else { - // if we haven't interned this project, it's not in the cache.
- return true; - }; - !self.projects.contains(&project) - } - } - - fn insert_event(&self, event: ControlPlaneEvent) { - if let Some(endpoint_created) = event.endpoint_created { - self.endpoints.insert(endpoint_created.endpoint_id); - Metrics::get() - .proxy - .redis_events_count - .inc(RedisEventsCount::EndpointCreated); - } else if let Some(branch_created) = event.branch_created { - self.branches.insert(branch_created.branch_id); - Metrics::get() - .proxy - .redis_events_count - .inc(RedisEventsCount::BranchCreated); - } else if let Some(project_created) = event.project_created { - self.projects.insert(project_created.project_id); - Metrics::get() - .proxy - .redis_events_count - .inc(RedisEventsCount::ProjectCreated); - } - } - - pub async fn do_read( - &self, - mut con: ConnectionWithCredentialsProvider, - cancellation_token: CancellationToken, - ) -> anyhow::Result<Infallible> { - let mut last_id = "0-0".to_string(); - loop { - if let Err(e) = con.connect().await { - tracing::error!("error connecting to redis: {:?}", e); - self.ready.store(false, Ordering::Release); - } - if let Err(e) = self.read_from_stream(&mut con, &mut last_id).await { - tracing::error!("error reading from redis: {:?}", e); - self.ready.store(false, Ordering::Release); - } - if cancellation_token.is_cancelled() { - info!("cancellation token is cancelled, exiting"); - // Maintenance tasks run forever. Sleep forever when canceled.
- pending::<()>().await; - } - tokio::time::sleep(self.config.retry_interval).await; - } - } - - async fn read_from_stream( - &self, - con: &mut ConnectionWithCredentialsProvider, - last_id: &mut String, - ) -> anyhow::Result<()> { - tracing::info!("reading endpoints/branches/projects from redis"); - self.batch_read( - con, - StreamReadOptions::default().count(self.config.initial_batch_size), - last_id, - true, - ) - .await?; - tracing::info!("ready to filter user requests"); - self.ready.store(true, Ordering::Release); - self.batch_read( - con, - StreamReadOptions::default() - .count(self.config.default_batch_size) - .block(self.config.xread_timeout.as_millis() as usize), - last_id, - false, - ) - .await - } - - async fn batch_read( - &self, - conn: &mut ConnectionWithCredentialsProvider, - opts: StreamReadOptions, - last_id: &mut String, - return_when_finish: bool, - ) -> anyhow::Result<()> { - let mut total: usize = 0; - loop { - let mut res: StreamReadReply = conn - .xread_options(&[&self.config.stream_name], &[last_id.as_str()], &opts) - .await?; - - if res.keys.is_empty() { - if return_when_finish { - if total != 0 { - break; - } - anyhow::bail!( - "Redis stream {} is empty, cannot be used to filter endpoints", - self.config.stream_name - ); - } - // If we are not returning when finish, we should wait for more data. 
- continue; - } - if res.keys.len() != 1 { - anyhow::bail!("Cannot read from redis stream {}", self.config.stream_name); - } - - let key = res.keys.pop().expect("Checked length above"); - let len = key.ids.len(); - for stream_id in key.ids { - total += 1; - for value in stream_id.map.values() { - match value.try_into() { - Ok(event) => self.insert_event(event), - Err(err) => { - Metrics::get().proxy.redis_errors_total.inc(RedisErrors { - channel: &self.config.stream_name, - }); - tracing::error!("error parsing value {value:?}: {err:?}"); - } - } - } - if total.is_power_of_two() { - tracing::debug!("endpoints read {}", total); - } - *last_id = stream_id.id; - } - if return_when_finish && len <= self.config.default_batch_size { - break; - } - } - tracing::info!("read {} endpoints/branches/projects from redis", total); - Ok(()) - } -} - -#[cfg(test)] -mod tests { - use super::*; - - #[test] - fn test_parse_control_plane_event() { - let s = r#"{"branch_created":null,"endpoint_created":{"endpoint_id":"ep-rapid-thunder-w0qqw2q9"},"project_created":null,"type":"endpoint_created"}"#; - - let endpoint_id: EndpointId = "ep-rapid-thunder-w0qqw2q9".into(); - - assert_eq!( - serde_json::from_str::<ControlPlaneEvent>(s).unwrap(), - ControlPlaneEvent { - endpoint_created: Some(EndpointCreated { - endpoint_id: endpoint_id.into(), - }), - branch_created: None, - project_created: None, - _type: Some("endpoint_created".into()), - } - ); - } -} diff --git a/proxy/src/cache/mod.rs b/proxy/src/cache/mod.rs index 6c168144a7..ce7f781213 100644 --- a/proxy/src/cache/mod.rs +++ b/proxy/src/cache/mod.rs @@ -1,5 +1,4 @@ pub(crate) mod common; -pub(crate) mod endpoints; pub(crate) mod project_info; mod timed_lru; diff --git a/proxy/src/config.rs b/proxy/src/config.rs index f97006e206..6157dc8a6a 100644 --- a/proxy/src/config.rs +++ b/proxy/src/config.rs @@ -18,7 +18,7 @@ use crate::control_plane::locks::ApiLocks; use crate::control_plane::messages::{EndpointJwksResponse, JwksSettings}; use crate::ext::TaskExt;
use crate::intern::RoleNameInt; -use crate::rate_limiter::{RateBucketInfo, RateLimitAlgorithm, RateLimiterConfig}; +use crate::rate_limiter::{RateLimitAlgorithm, RateLimiterConfig}; use crate::scram::threadpool::ThreadPool; use crate::serverless::GlobalConnPoolOptions; use crate::serverless::cancel_set::CancelSet; @@ -80,79 +80,6 @@ pub struct AuthenticationConfig { pub console_redirect_confirmation_timeout: tokio::time::Duration, } -#[derive(Debug)] -pub struct EndpointCacheConfig { - /// Batch size to receive all endpoints on the startup. - pub initial_batch_size: usize, - /// Batch size to receive endpoints. - pub default_batch_size: usize, - /// Timeouts for the stream read operation. - pub xread_timeout: Duration, - /// Stream name to read from. - pub stream_name: String, - /// Limiter info (to distinguish when to enable cache). - pub limiter_info: Vec<RateBucketInfo>, - /// Disable cache. - /// If true, cache is ignored, but reports all statistics. - pub disable_cache: bool, - /// Retry interval for the stream read operation. - pub retry_interval: Duration, -} - -impl EndpointCacheConfig { - /// Default options for [`crate::control_plane::NodeInfoCache`]. - /// Notice that by default the limiter is empty, which means that cache is disabled. - pub const CACHE_DEFAULT_OPTIONS: &'static str = "initial_batch_size=1000,default_batch_size=10,xread_timeout=5m,stream_name=controlPlane,disable_cache=true,limiter_info=1000@1s,retry_interval=1s"; - - /// Parse cache options passed via cmdline. - /// Example: [`Self::CACHE_DEFAULT_OPTIONS`].
- fn parse(options: &str) -> anyhow::Result<Self> { - let mut initial_batch_size = None; - let mut default_batch_size = None; - let mut xread_timeout = None; - let mut stream_name = None; - let mut limiter_info = vec![]; - let mut disable_cache = false; - let mut retry_interval = None; - - for option in options.split(',') { - let (key, value) = option - .split_once('=') - .with_context(|| format!("bad key-value pair: {option}"))?; - - match key { - "initial_batch_size" => initial_batch_size = Some(value.parse()?), - "default_batch_size" => default_batch_size = Some(value.parse()?), - "xread_timeout" => xread_timeout = Some(humantime::parse_duration(value)?), - "stream_name" => stream_name = Some(value.to_string()), - "limiter_info" => limiter_info.push(RateBucketInfo::from_str(value)?), - "disable_cache" => disable_cache = value.parse()?, - "retry_interval" => retry_interval = Some(humantime::parse_duration(value)?), - unknown => bail!("unknown key: {unknown}"), - } - } - RateBucketInfo::validate(&mut limiter_info)?; - - Ok(Self { - initial_batch_size: initial_batch_size.context("missing `initial_batch_size`")?, - default_batch_size: default_batch_size.context("missing `default_batch_size`")?, - xread_timeout: xread_timeout.context("missing `xread_timeout`")?, - stream_name: stream_name.context("missing `stream_name`")?, - disable_cache, - limiter_info, - retry_interval: retry_interval.context("missing `retry_interval`")?, - }) - } -} - -impl FromStr for EndpointCacheConfig { - type Err = anyhow::Error; - - fn from_str(options: &str) -> Result<Self, Self::Err> { - let error = || format!("failed to parse endpoint cache options '{options}'"); - Self::parse(options).with_context(error) - } -} #[derive(Debug)] pub struct MetricBackupCollectionConfig { pub remote_storage_config: Option<RemoteStorageConfig>, diff --git a/proxy/src/context/mod.rs b/proxy/src/context/mod.rs index 7b0549e76f..3a8828e70c 100644 --- a/proxy/src/context/mod.rs +++ b/proxy/src/context/mod.rs @@ -7,7 +7,7 @@ use once_cell::sync::OnceCell;
use smol_str::SmolStr; use tokio::sync::mpsc; use tracing::field::display; -use tracing::{Span, debug, error, info_span}; +use tracing::{Span, error, info_span}; use try_lock::TryLock; use uuid::Uuid; @@ -15,10 +15,7 @@ use self::parquet::RequestData; use crate::control_plane::messages::{ColdStartInfo, MetricsAuxInfo}; use crate::error::ErrorKind; use crate::intern::{BranchIdInt, ProjectIdInt}; -use crate::metrics::{ - ConnectOutcome, InvalidEndpointsGroup, LatencyAccumulated, LatencyTimer, Metrics, Protocol, - Waiting, -}; +use crate::metrics::{LatencyAccumulated, LatencyTimer, Metrics, Protocol, Waiting}; use crate::pqproto::StartupMessageParams; use crate::protocol2::{ConnectionInfo, ConnectionInfoExtra}; use crate::types::{DbName, EndpointId, RoleName}; @@ -70,8 +67,6 @@ struct RequestContextInner { // This sender is only used to log the length of session in case of success. disconnect_sender: Option<mpsc::UnboundedSender<RequestData>>, pub(crate) latency_timer: LatencyTimer, - // Whether proxy decided that it's not a valid endpoint end rejected it before going to cplane.
- rejected: Option<bool>, disconnect_timestamp: Option<chrono::DateTime<Utc>>, } @@ -106,7 +101,6 @@ impl Clone for RequestContext { auth_method: inner.auth_method.clone(), jwt_issuer: inner.jwt_issuer.clone(), success: inner.success, - rejected: inner.rejected, cold_start_info: inner.cold_start_info, pg_options: inner.pg_options.clone(), testodrome_query_id: inner.testodrome_query_id.clone(), @@ -151,7 +145,6 @@ impl RequestContext { auth_method: None, jwt_issuer: None, success: false, - rejected: None, cold_start_info: ColdStartInfo::Unknown, pg_options: None, testodrome_query_id: None, @@ -183,11 +176,6 @@ impl RequestContext { ) } - pub(crate) fn set_rejected(&self, rejected: bool) { - let mut this = self.0.try_lock().expect("should not deadlock"); - this.rejected = Some(rejected); - } - pub(crate) fn set_cold_start_info(&self, info: ColdStartInfo) { self.0 .try_lock() @@ -461,38 +449,6 @@ impl RequestContextInner { } fn log_connect(&mut self) { - let outcome = if self.success { - ConnectOutcome::Success - } else { - ConnectOutcome::Failed - }; - - // TODO: get rid of entirely/refactor - // check for false positives - // AND false negatives - if let Some(rejected) = self.rejected { - let ep = self - .endpoint_id - .as_ref() - .map(|x| x.as_str()) - .unwrap_or_default(); - // This makes sense only if cache is disabled - debug!( - ?outcome, - ?rejected, - ?ep, - "check endpoint is valid with outcome" - ); - Metrics::get() - .proxy - .invalid_endpoints_total - .inc(InvalidEndpointsGroup { - protocol: self.protocol, - rejected: rejected.into(), - outcome, - }); - } - if let Some(tx) = self.sender.take() { // If type changes, this error handling needs to be updated.
let tx: mpsc::UnboundedSender<RequestData> = tx; diff --git a/proxy/src/control_plane/client/cplane_proxy_v1.rs b/proxy/src/control_plane/client/cplane_proxy_v1.rs index fbacc97661..fc263b73b1 100644 --- a/proxy/src/control_plane/client/cplane_proxy_v1.rs +++ b/proxy/src/control_plane/client/cplane_proxy_v1.rs @@ -159,13 +159,6 @@ impl NeonControlPlaneClient { ctx: &RequestContext, endpoint: &EndpointId, ) -> Result<Vec<AuthRule>, GetEndpointJwksError> { - if !self - .caches - .endpoints_cache - .is_valid(ctx, &endpoint.normalize()) - { - return Err(GetEndpointJwksError::EndpointNotFound); - } let request_id = ctx.session_id().to_string(); async { let request = self @@ -300,11 +293,6 @@ impl super::ControlPlaneApi for NeonControlPlaneClient { return Ok(secret); } - if !self.caches.endpoints_cache.is_valid(ctx, normalized_ep) { - info!("endpoint is not valid, skipping the request"); - return Err(GetAuthInfoError::UnknownEndpoint); - } - let auth_info = self.do_get_auth_req(ctx, endpoint, role).await?; let control = EndpointAccessControl { @@ -346,11 +334,6 @@ impl super::ControlPlaneApi for NeonControlPlaneClient { return Ok(control); } - if !self.caches.endpoints_cache.is_valid(ctx, normalized_ep) { - info!("endpoint is not valid, skipping the request"); - return Err(GetAuthInfoError::UnknownEndpoint); - } - let auth_info = self.do_get_auth_req(ctx, endpoint, role).await?; let control = EndpointAccessControl { diff --git a/proxy/src/control_plane/client/mod.rs b/proxy/src/control_plane/client/mod.rs index 2ffc589df6..ecd4db29b2 100644 --- a/proxy/src/control_plane/client/mod.rs +++ b/proxy/src/control_plane/client/mod.rs @@ -13,9 +13,8 @@ use tracing::{debug, info}; use super::{EndpointAccessControl, RoleAccessControl}; use crate::auth::backend::ComputeUserInfo; use crate::auth::backend::jwt::{AuthRule, FetchAuthRules, FetchAuthRulesError}; -use crate::cache::endpoints::EndpointsCache; use crate::cache::project_info::ProjectInfoCacheImpl; -use crate::config::{CacheOptions,
EndpointCacheConfig, ProjectInfoCacheOptions}; +use crate::config::{CacheOptions, ProjectInfoCacheOptions}; use crate::context::RequestContext; use crate::control_plane::{CachedNodeInfo, ControlPlaneApi, NodeInfoCache, errors}; use crate::error::ReportableError; @@ -121,15 +120,12 @@ pub struct ApiCaches { pub(crate) node_info: NodeInfoCache, /// Cache which stores project_id -> endpoint_ids mapping. pub project_info: Arc<ProjectInfoCacheImpl>, - /// List of all valid endpoints. - pub endpoints_cache: Arc<EndpointsCache>, } impl ApiCaches { pub fn new( wake_compute_cache_config: CacheOptions, project_info_cache_config: ProjectInfoCacheOptions, - endpoint_cache_config: EndpointCacheConfig, ) -> Self { Self { node_info: NodeInfoCache::new( @@ -139,7 +135,6 @@ impl ApiCaches { true, ), project_info: Arc::new(ProjectInfoCacheImpl::new(project_info_cache_config)), - endpoints_cache: Arc::new(EndpointsCache::new(endpoint_cache_config)), } } } diff --git a/proxy/src/control_plane/errors.rs b/proxy/src/control_plane/errors.rs index 77312c89c5..f640657d90 100644 --- a/proxy/src/control_plane/errors.rs +++ b/proxy/src/control_plane/errors.rs @@ -99,10 +99,6 @@ pub(crate) enum GetAuthInfoError { #[error(transparent)] ApiError(ControlPlaneError), - - /// Proxy does not know about the endpoint in advanced - #[error("endpoint not found in endpoint cache")] - UnknownEndpoint, } // This allows more useful interactions than `#[from]`. @@ -119,8 +115,6 @@ impl UserFacingError for GetAuthInfoError { Self::BadSecret => REQUEST_FAILED.to_owned(), // However, API might return a meaningful error. Self::ApiError(e) => e.to_string_client(), - // pretend like control plane returned an error. - Self::UnknownEndpoint => REQUEST_FAILED.to_owned(), } } } @@ -130,8 +124,6 @@ impl ReportableError for GetAuthInfoError { match self { Self::BadSecret => crate::error::ErrorKind::ControlPlane, Self::ApiError(_) => crate::error::ErrorKind::ControlPlane, - // we only apply endpoint filtering if control plane is under high load.
- Self::UnknownEndpoint => crate::error::ErrorKind::ServiceRateLimit, } } } @@ -200,9 +192,6 @@ impl CouldRetry for WakeComputeError { #[derive(Debug, Error)] pub enum GetEndpointJwksError { - #[error("endpoint not found")] - EndpointNotFound, - #[error("failed to build control plane request: {0}")] RequestBuild(#[source] reqwest::Error), diff --git a/proxy/src/rate_limiter/limiter.rs b/proxy/src/rate_limiter/limiter.rs index 61d4636c2b..fd1b2af023 100644 --- a/proxy/src/rate_limiter/limiter.rs +++ b/proxy/src/rate_limiter/limiter.rs @@ -16,44 +16,6 @@ use super::LeakyBucketConfig; use crate::ext::LockExt; use crate::intern::EndpointIdInt; -pub struct GlobalRateLimiter { - data: Vec<RateBucket>, - info: Vec<RateBucketInfo>, -} - -impl GlobalRateLimiter { - pub fn new(info: Vec<RateBucketInfo>) -> Self { - Self { - data: vec![ - RateBucket { - start: Instant::now(), - count: 0, - }; - info.len() - ], - info, - } - } - - /// Check that number of connections is below `max_rps` rps. - pub fn check(&mut self) -> bool { - let now = Instant::now(); - - let should_allow_request = self - .data - .iter_mut() - .zip(&self.info) - .all(|(bucket, info)| bucket.should_allow_request(info, now, 1)); - - if should_allow_request { - // only increment the bucket counts if the request will actually be accepted - self.data.iter_mut().for_each(|b| b.inc(1)); - } - - should_allow_request - } -} - - // Simple per-endpoint rate limiter. - // - // Check that number of connections to the endpoint is below `max_rps` rps.
diff --git a/proxy/src/rate_limiter/mod.rs b/proxy/src/rate_limiter/mod.rs index 112b95873a..828bb63aac 100644 --- a/proxy/src/rate_limiter/mod.rs +++ b/proxy/src/rate_limiter/mod.rs @@ -8,4 +8,4 @@ pub(crate) use limit_algorithm::aimd::Aimd; pub(crate) use limit_algorithm::{ DynamicLimiter, Outcome, RateLimitAlgorithm, RateLimiterConfig, Token, }; -pub use limiter::{GlobalRateLimiter, RateBucketInfo, WakeComputeRateLimiter}; +pub use limiter::{RateBucketInfo, WakeComputeRateLimiter}; diff --git a/proxy/src/types.rs b/proxy/src/types.rs index d5952d1d8b..43b8dc5b29 100644 --- a/proxy/src/types.rs +++ b/proxy/src/types.rs @@ -107,13 +107,3 @@ smol_str_wrapper!(DbName); // postgres hostname, will likely be a port:ip addr smol_str_wrapper!(Host); - -// Endpoints are a bit tricky. Rare they might be branches or projects. -impl EndpointId { - pub(crate) fn is_endpoint(&self) -> bool { - self.0.starts_with("ep-") - } - pub(crate) fn is_branch(&self) -> bool { - self.0.starts_with("br-") - } -} From 2f3fc7cb57d0ed2c844f41f482a8778e973718a4 Mon Sep 17 00:00:00 2001 From: Aleksandr Sarantsev <99037063+ephemeralsad@users.noreply.github.com> Date: Tue, 8 Jul 2025 12:51:50 +0400 Subject: [PATCH 07/65] Fix keep-failing reconciles test & add logs (#12497) ## Problem Test is flaky due to the following warning in the logs: ``` Keeping extra secondaries: can't determine which of [NodeId(1), NodeId(2)] to remove (some nodes offline?) ``` Some nodes being offline is expected behavior in this test. 
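Such expected warnings are tolerated by matching each scraped log line against a list of allowed-error regexes. A minimal, hypothetical sketch of that mechanism (plain `re`; the helper name and log-level check are illustrative assumptions, not the actual `neon_fixtures` implementation):

```python
import re

# Hypothetical sketch: a log line fails the test only if it looks like a
# warning/error AND matches none of the allowed patterns.
allowed_errors = [
    r".*Keeping extra secondaries.*",
    r".*Shard reconciliation is keep-failing.*",
]

def is_unexpected(line: str) -> bool:
    if "WARN" not in line and "ERROR" not in line:
        return False  # ordinary log lines never fail the test
    return not any(re.match(p, line) for p in allowed_errors)

# The warning from this flaky test is now tolerated:
line = (
    "WARN Keeping extra secondaries: can't determine which of "
    "[NodeId(1), NodeId(2)] to remove (some nodes offline?)"
)
assert not is_unexpected(line)
```

With `.*Keeping extra secondaries.*` in the list, the expected-offline warning no longer fails the run, while unrelated errors still do.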
## Summary of changes - Added `Keeping extra secondaries` to the list of allowed errors - Improved logging for better debugging experience Co-authored-by: Aleksandr Sarantsev --- test_runner/regress/test_storage_controller.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/test_runner/regress/test_storage_controller.py b/test_runner/regress/test_storage_controller.py index 8471ab9f57..10845ef02e 100644 --- a/test_runner/regress/test_storage_controller.py +++ b/test_runner/regress/test_storage_controller.py @@ -1034,16 +1034,19 @@ def test_storage_controller_compute_hook_keep_failing( alive_pageservers = [p for p in env.pageservers if p.id != banned_tenant_ps.id] # Stop pageserver and ban tenant to trigger failed reconciliation + log.info(f"Banning tenant {banned_tenant} and stopping pageserver {banned_tenant_ps.id}") status_by_tenant[banned_tenant] = 423 banned_tenant_ps.stop() env.storage_controller.allowed_errors.append(NOTIFY_BLOCKED_LOG) env.storage_controller.allowed_errors.extend(NOTIFY_FAILURE_LOGS) + env.storage_controller.allowed_errors.append(".*Keeping extra secondaries.*") env.storage_controller.allowed_errors.append(".*Shard reconciliation is keep-failing.*") env.storage_controller.node_configure(banned_tenant_ps.id, {"availability": "Offline"}) # Migrate all allowed tenant shards to the first alive pageserver # to trigger storage controller optimizations due to affinity rules for shard_number in range(shard_count): + log.info(f"Migrating shard {shard_number} of {allowed_tenant} to {alive_pageservers[0].id}") env.storage_controller.tenant_shard_migrate( TenantShardId(allowed_tenant, shard_number, shard_count), alive_pageservers[0].id, From 18796fd1dd2337592b559ffc2fa745a973a5fe77 Mon Sep 17 00:00:00 2001 From: Dmitrii Kovalkov <34828390+DimasKovas@users.noreply.github.com> Date: Tue, 8 Jul 2025 13:15:29 +0400 Subject: [PATCH 08/65] tests: more allowed errors for test_safekeeper_migration (#12495) ## Problem Pageserver now writes errors in the log 
during the safekeeper migration. Some errors are added to allowed errors, but "timeline not found in global map" is not. - Will be properly fixed in https://github.com/neondatabase/neon/issues/12191 ## Summary of changes Add "timeline not found in global map" error in a list of allowed errors in `test_safekeeper_migration_simple` --- test_runner/regress/test_safekeeper_migration.py | 1 + 1 file changed, 1 insertion(+) diff --git a/test_runner/regress/test_safekeeper_migration.py b/test_runner/regress/test_safekeeper_migration.py index 057371175c..b82d7b9bb0 100644 --- a/test_runner/regress/test_safekeeper_migration.py +++ b/test_runner/regress/test_safekeeper_migration.py @@ -27,6 +27,7 @@ def test_safekeeper_migration_simple(neon_env_builder: NeonEnvBuilder): [ ".*Timeline .* was cancelled and cannot be used anymore.*", ".*Timeline .* has been deleted.*", + ".*Timeline .* was not found in global map.*", ".*wal receiver task finished with an error.*", ] ) From 4f16ab3f56ea6fee142e097012d58ad0a5170f12 Mon Sep 17 00:00:00 2001 From: Mikhail Date: Tue, 8 Jul 2025 10:34:01 +0100 Subject: [PATCH 09/65] add lfc offload and prewarm error metrics (#12486) Add `compute_ctl_lfc_prewarm_errors_total` and `compute_ctl_lfc_offload_errors_total` metrics. Add comments in `test_lfc_prewarm`. 
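The new counters are served in `compute_ctl`'s Prometheus text exposition, which the regression test scrapes and parses back into a dict. A stdlib-only sketch of that round trip (the real test uses `prometheus_client`'s parser; the sample exposition below is an assumption for illustration, not captured compute_ctl output):

```python
def parse_counters(text: str) -> dict[str, float]:
    """Read bare `name value` samples from a Prometheus text exposition."""
    counters = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        name, _, value = line.rpartition(" ")
        counters[name] = float(value)
    return counters

# Assumed sample exposition for illustration:
exposition = """\
# HELP compute_ctl_lfc_prewarms_total Total number of LFC prewarms
# TYPE compute_ctl_lfc_prewarms_total counter
compute_ctl_lfc_prewarms_total 1
compute_ctl_lfc_prewarm_errors_total 0
"""
assert parse_counters(exposition)["compute_ctl_lfc_prewarm_errors_total"] == 0
```

Real samples may also carry labels in `{...}`; a full parser (as used in the test) handles those, while this sketch assumes unlabeled counters.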
Correction PR for https://github.com/neondatabase/neon/pull/12447 https://github.com/neondatabase/cloud/issues/19011 --- compute_tools/src/compute_prewarm.rs | 6 ++- compute_tools/src/metrics.rs | 18 +++++++ test_runner/regress/test_lfc_prewarm.py | 63 ++++++++++++++++--------- 3 files changed, 62 insertions(+), 25 deletions(-) diff --git a/compute_tools/src/compute_prewarm.rs b/compute_tools/src/compute_prewarm.rs index 4190580e5e..3f6f9a7ecc 100644 --- a/compute_tools/src/compute_prewarm.rs +++ b/compute_tools/src/compute_prewarm.rs @@ -105,7 +105,8 @@ impl ComputeNode { cloned.state.lock().unwrap().lfc_prewarm_state = LfcPrewarmState::Completed; return; }; - error!(%err); + crate::metrics::LFC_PREWARM_ERRORS.inc(); + error!(%err, "prewarming lfc"); cloned.state.lock().unwrap().lfc_prewarm_state = LfcPrewarmState::Failed { error: err.to_string(), }; @@ -180,7 +181,8 @@ impl ComputeNode { self.state.lock().unwrap().lfc_offload_state = LfcOffloadState::Completed; return; }; - error!(%err); + crate::metrics::LFC_OFFLOAD_ERRORS.inc(); + error!(%err, "offloading lfc"); self.state.lock().unwrap().lfc_offload_state = LfcOffloadState::Failed { error: err.to_string(), }; diff --git a/compute_tools/src/metrics.rs b/compute_tools/src/metrics.rs index 8f81675c49..91dedbb42a 100644 --- a/compute_tools/src/metrics.rs +++ b/compute_tools/src/metrics.rs @@ -105,6 +105,14 @@ pub(crate) static LFC_PREWARMS: Lazy<IntCounter> = Lazy::new(|| { .expect("failed to define a metric") }); +pub(crate) static LFC_PREWARM_ERRORS: Lazy<IntCounter> = Lazy::new(|| { + register_int_counter!( + "compute_ctl_lfc_prewarm_errors_total", + "Total number of LFC prewarms errors requested by compute_ctl or autoprewarm option", + ) + .expect("failed to define a metric") +}); + pub(crate) static LFC_OFFLOADS: Lazy<IntCounter> = Lazy::new(|| { register_int_counter!( "compute_ctl_lfc_offloads_total", @@ -113,6 +121,14 @@ pub(crate) static
LFC_OFFLOAD_ERRORS: Lazy<IntCounter> = Lazy::new(|| { + register_int_counter!( + "compute_ctl_lfc_offload_errors_total", + "Total number of LFC offload errors requested by compute_ctl or lfc_offload_period_seconds option", + ) + .expect("failed to define a metric") +}); + pub fn collect() -> Vec<MetricFamily> { let mut metrics = COMPUTE_CTL_UP.collect(); metrics.extend(INSTALLED_EXTENSIONS.collect()); @@ -123,6 +139,8 @@ pub fn collect() -> Vec<MetricFamily> { metrics.extend(PG_CURR_DOWNTIME_MS.collect()); metrics.extend(PG_TOTAL_DOWNTIME_MS.collect()); metrics.extend(LFC_PREWARMS.collect()); + metrics.extend(LFC_PREWARM_ERRORS.collect()); metrics.extend(LFC_OFFLOADS.collect()); + metrics.extend(LFC_OFFLOAD_ERRORS.collect()); metrics } diff --git a/test_runner/regress/test_lfc_prewarm.py b/test_runner/regress/test_lfc_prewarm.py index 1fa1ead034..ae36bbda79 100644 --- a/test_runner/regress/test_lfc_prewarm.py +++ b/test_runner/regress/test_lfc_prewarm.py @@ -1,7 +1,6 @@ import random import threading from enum import StrEnum -from time import sleep from typing import Any import pytest @@ -20,28 +19,32 @@ class PrewarmMethod(StrEnum): PREWARM_LABEL = "compute_ctl_lfc_prewarms_total" +PREWARM_ERR_LABEL = "compute_ctl_lfc_prewarm_errors_total" OFFLOAD_LABEL = "compute_ctl_lfc_offloads_total" +OFFLOAD_ERR_LABEL = "compute_ctl_lfc_offload_errors_total" METHOD_VALUES = [e for e in PrewarmMethod] METHOD_IDS = [e.value for e in PrewarmMethod] def check_pinned_entries(cur: Cursor): - # some LFC buffer can be temporary locked by autovacuum or background writer - for _ in range(10): + """ + Wait till none of LFC buffers are pinned + """ + + def none_pinned(): cur.execute("select lfc_value from neon_lfc_stats where lfc_key='file_cache_chunks_pinned'") - n_pinned = cur.fetchall()[0][0] - if n_pinned == 0: - break - sleep(1) - assert n_pinned == 0 + assert cur.fetchall()[0][0] == 0 + + wait_until(none_pinned) def prom_parse(client: EndpointHttpClient) -> dict[str, float]: + labels = PREWARM_LABEL, OFFLOAD_LABEL,
PREWARM_ERR_LABEL, OFFLOAD_ERR_LABEL return { - sample.name: sample.value + sample.name: int(sample.value) for family in prom_parse_impl(client.metrics()) for sample in family.samples - if sample.name in (PREWARM_LABEL, OFFLOAD_LABEL) + if sample.name in labels } @@ -54,7 +57,9 @@ def offload_lfc(method: PrewarmMethod, client: EndpointHttpClient, cur: Cursor) assert "error" not in status client.offload_lfc() assert client.prewarm_lfc_status()["status"] == "not_prewarmed" - assert prom_parse(client) == {OFFLOAD_LABEL: 1, PREWARM_LABEL: 0} + parsed = prom_parse(client) + desired = {OFFLOAD_LABEL: 1, PREWARM_LABEL: 0, OFFLOAD_ERR_LABEL: 0, PREWARM_ERR_LABEL: 0} + assert parsed == desired, f"{parsed=} != {desired=}" elif method == PrewarmMethod.POSTGRES: cur.execute("select get_local_cache_state()") return cur.fetchall()[0][0] @@ -81,12 +86,17 @@ def check_prewarmed( assert prom_parse(client)[PREWARM_LABEL] == 1 elif method == PrewarmMethod.COMPUTE_CTL: assert client.prewarm_lfc_status() == desired_status - assert prom_parse(client) == {OFFLOAD_LABEL: 0, PREWARM_LABEL: 1} + desired = {OFFLOAD_LABEL: 0, PREWARM_LABEL: 1, PREWARM_ERR_LABEL: 0, OFFLOAD_ERR_LABEL: 0} + assert prom_parse(client) == desired @pytest.mark.skipif(not USE_LFC, reason="LFC is disabled, skipping") @pytest.mark.parametrize("method", METHOD_VALUES, ids=METHOD_IDS) def test_lfc_prewarm(neon_simple_env: NeonEnv, method: PrewarmMethod): + """ + Test we can offload endpoint's LFC cache to endpoint storage. + Test we can prewarm endpoint with LFC cache loaded from endpoint storage. 
+ """ env = neon_simple_env n_records = 1000000 cfg = [ @@ -140,18 +150,15 @@ def test_lfc_prewarm(neon_simple_env: NeonEnv, method: PrewarmMethod): lfc_used_pages = pg_cur.fetchall()[0][0] log.info(f"Used LFC size: {lfc_used_pages}") pg_cur.execute("select * from get_prewarm_info()") - prewarm_info = pg_cur.fetchall()[0] - log.info(f"Prewarm info: {prewarm_info}") - total, prewarmed, skipped, _ = prewarm_info + total, prewarmed, skipped, _ = pg_cur.fetchall()[0] + log.info(f"Prewarm info: {total=} {prewarmed=} {skipped=}") progress = (prewarmed + skipped) * 100 // total log.info(f"Prewarm progress: {progress}%") - assert lfc_used_pages > 10000 - assert ( - prewarm_info[0] > 0 - and prewarm_info[1] > 0 - and prewarm_info[0] == prewarm_info[1] + prewarm_info[2] - ) + assert total > 0 + assert prewarmed > 0 + assert total == prewarmed + skipped + lfc_cur.execute("select sum(pk) from t") assert lfc_cur.fetchall()[0][0] == n_records * (n_records + 1) / 2 @@ -168,6 +175,9 @@ WORKLOAD_IDS = METHOD_IDS[:-1] @pytest.mark.skipif(not USE_LFC, reason="LFC is disabled, skipping") @pytest.mark.parametrize("method", WORKLOAD_VALUES, ids=WORKLOAD_IDS) def test_lfc_prewarm_under_workload(neon_simple_env: NeonEnv, method: PrewarmMethod): + """ + Test continiously prewarming endpoint when there is a write-heavy workload going in parallel + """ env = neon_simple_env n_records = 10000 n_threads = 4 @@ -247,5 +257,12 @@ def test_lfc_prewarm_under_workload(neon_simple_env: NeonEnv, method: PrewarmMet assert total_balance == 0 check_pinned_entries(pg_cur) - if method != PrewarmMethod.POSTGRES: - assert prom_parse(http_client) == {OFFLOAD_LABEL: 1, PREWARM_LABEL: n_prewarms} + if method == PrewarmMethod.POSTGRES: + return + desired = { + OFFLOAD_LABEL: 1, + PREWARM_LABEL: n_prewarms, + OFFLOAD_ERR_LABEL: 0, + PREWARM_ERR_LABEL: 0, + } + assert prom_parse(http_client) == desired From f51ed4a2c40b863b0d8e993e690270e7307b879c Mon Sep 17 00:00:00 2001 From: Peter Bendel Date: Tue, 8 Jul 2025 
11:38:06 +0200 Subject: [PATCH 10/65] "disable" disk eviction in pagebench periodic benchmark (#12487) ## Problem https://github.com/neondatabase/neon/pull/12464 introduced new defaults for pageserver disk-based eviction, which activated disk-based eviction for the periodic pagebench benchmark. This caused the test case to fail. ## Summary of changes Override the new defaults during test case execution. ## Test run https://github.com/neondatabase/neon/actions/runs/16120217757/job/45483869734 Test run was successful, so merging this now --- .../test_pageserver_max_throughput_getpage_at_latest_lsn.py | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py b/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py index 68bfa81b25..bf998a2a0a 100644 --- a/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py +++ b/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py @@ -104,9 +104,7 @@ def setup_and_run_pagebench_benchmark( # configure cache sizes like in prod page_cache_size = 16384 max_file_descriptors = 500000 - neon_env_builder.pageserver_config_override = ( - f"page_cache_size={page_cache_size}; max_file_descriptors={max_file_descriptors}" - ) + neon_env_builder.pageserver_config_override = f"page_cache_size={page_cache_size}; max_file_descriptors={max_file_descriptors}; disk_usage_based_eviction={{max_usage_pct=99, min_avail_bytes=0, period = '999y'}}" tracing_config = PageserverTracingConfig( sampling_ratio=(0, 1000), From 59e393aef35fea56bbbf5dd1feeebfb3c518731d Mon Sep 17 00:00:00 2001 From: a-masterov <72613290+a-masterov@users.noreply.github.com> Date: Tue, 8 Jul 2025 13:28:39 +0200 Subject: [PATCH 11/65] Enable parallel execution of extension tests (#12118) ## Problem Extension tests were previously run
sequentially, resulting in unnecessary wait time and underutilization of available CPU cores. ## Summary of changes Tests are now executed in a customizable number of parallel threads using separate database branches. This reduces overall test time by approximately 50% (e.g., on my laptop, parallel test lasts 173s, while sequential one lasts 340s) and increases the load on the pageserver, providing better test coverage. --------- Co-authored-by: Alexander Bayandin Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com> Co-authored-by: Alexey Masterov --- .github/workflows/build_and_test.yml | 1 + .gitignore | 1 + compute/compute-node.Dockerfile | 4 +- .../compute_wrapper/shell/compute.sh | 18 ++-- docker-compose/docker-compose.yml | 25 +++-- docker-compose/docker_compose_test.sh | 92 +++++++++++++------ docker-compose/run-tests.sh | 69 +++++++++----- docker-compose/test_extensions_upgrade.sh | 6 +- 8 files changed, 144 insertions(+), 72 deletions(-) diff --git a/.github/workflows/build_and_test.yml b/.github/workflows/build_and_test.yml index 0ceaa96fb0..864abad574 100644 --- a/.github/workflows/build_and_test.yml +++ b/.github/workflows/build_and_test.yml @@ -986,6 +986,7 @@ jobs: - name: Verify docker-compose example and test extensions timeout-minutes: 60 env: + PARALLEL_COMPUTES: 3 TAG: >- ${{ needs.meta.outputs.run-kind == 'compute-rc-pr' diff --git a/.gitignore b/.gitignore index 6574d7b9de..4857972f1d 100644 --- a/.gitignore +++ b/.gitignore @@ -15,6 +15,7 @@ neon.iml /.neon /integration_tests/.neon compaction-suite-results.* +docker-compose/docker-compose-parallel.yml # Coverage *.profraw diff --git a/compute/compute-node.Dockerfile b/compute/compute-node.Dockerfile index 0dd32011fb..39136fe573 100644 --- a/compute/compute-node.Dockerfile +++ b/compute/compute-node.Dockerfile @@ -1915,10 +1915,10 @@ RUN cd /ext-src/pg_repack-src && patch -p1 /etc/ld.so.conf.d/00-neon.conf && /sbin/ldconfig -RUN apt-get update && apt-get 
install -y libtap-parser-sourcehandler-pgtap-perl jq \ +RUN apt-get update && apt-get install -y libtap-parser-sourcehandler-pgtap-perl jq parallel \ && apt clean && rm -rf /ext-src/*.tar.gz /ext-src/*.patch /var/lib/apt/lists/* ENV PATH=/usr/local/pgsql/bin:$PATH -ENV PGHOST=compute +ENV PGHOST=compute1 ENV PGPORT=55433 ENV PGUSER=cloud_admin ENV PGDATABASE=postgres diff --git a/docker-compose/compute_wrapper/shell/compute.sh b/docker-compose/compute_wrapper/shell/compute.sh index 1e62e91fd0..6f36b4358e 100755 --- a/docker-compose/compute_wrapper/shell/compute.sh +++ b/docker-compose/compute_wrapper/shell/compute.sh @@ -54,14 +54,16 @@ else printf '%s\n' "${result}" | jq . fi - echo "Check if a timeline present" - PARAMS=( - -X GET - -H "Content-Type: application/json" - "http://pageserver:9898/v1/tenant/${tenant_id}/timeline" - ) - timeline_id=$(curl "${PARAMS[@]}" | jq -r .[0].timeline_id) - if [[ -z "${timeline_id}" || "${timeline_id}" = null ]]; then + if [[ "${RUN_PARALLEL:-false}" != "true" ]]; then + echo "Check if a timeline present" + PARAMS=( + -X GET + -H "Content-Type: application/json" + "http://pageserver:9898/v1/tenant/${tenant_id}/timeline" + ) + timeline_id=$(curl "${PARAMS[@]}" | jq -r .[0].timeline_id) + fi + if [[ -z "${timeline_id:-}" || "${timeline_id:-}" = null ]]; then generate_id timeline_id PARAMS=( -sbf diff --git a/docker-compose/docker-compose.yml b/docker-compose/docker-compose.yml index 2519b75c7f..19c3bc74e2 100644 --- a/docker-compose/docker-compose.yml +++ b/docker-compose/docker-compose.yml @@ -142,7 +142,7 @@ services: - "storage_broker" - "--listen-addr=0.0.0.0:50051" - compute: + compute1: restart: always build: context: ./compute_wrapper/ @@ -152,6 +152,7 @@ services: - TAG=${COMPUTE_TAG:-${TAG:-latest}} - http_proxy=${http_proxy:-} - https_proxy=${https_proxy:-} + image: built-compute environment: - PG_VERSION=${PG_VERSION:-16} - TENANT_ID=${TENANT_ID:-} @@ -166,6 +167,11 @@ services: - 3080:3080 # http endpoints entrypoint: 
- "/shell/compute.sh" + # Ad an alias for compute1 for compatibility + networks: + default: + aliases: + - compute depends_on: - safekeeper1 - safekeeper2 @@ -174,15 +180,20 @@ services: compute_is_ready: image: postgres:latest + environment: + - PARALLEL_COMPUTES=1 entrypoint: - - "/bin/bash" + - "/bin/sh" - "-c" command: - - "until pg_isready -h compute -p 55433 -U cloud_admin ; do - echo 'Waiting to start compute...' && sleep 1; - done" + - "for i in $(seq 1 $${PARALLEL_COMPUTES}); do + until pg_isready -h compute$$i -p 55433 -U cloud_admin ; do + sleep 1; + done; + done; + echo All computes are started" depends_on: - - compute + - compute1 neon-test-extensions: profiles: ["test-extensions"] @@ -196,4 +207,4 @@ services: command: - sleep 3600 depends_on: - - compute + - compute1 diff --git a/docker-compose/docker_compose_test.sh b/docker-compose/docker_compose_test.sh index 6edf90ca8d..063b8dee85 100755 --- a/docker-compose/docker_compose_test.sh +++ b/docker-compose/docker_compose_test.sh @@ -1,4 +1,4 @@ -#!/bin/bash +#!/usr/bin/env bash # A basic test to ensure Docker images are built correctly. # Build a wrapper around the compute, start all services and runs a simple SQL query. @@ -13,9 +13,36 @@ # set -eux -o pipefail +cd "$(dirname "${0}")" export COMPOSE_FILE='docker-compose.yml' export COMPOSE_PROFILES=test-extensions -cd "$(dirname "${0}")" +export PARALLEL_COMPUTES=${PARALLEL_COMPUTES:-1} +READY_MESSAGE="All computes are started" +COMPUTES=() +for i in $(seq 1 "${PARALLEL_COMPUTES}"); do + COMPUTES+=("compute${i}") +done +CURRENT_TMPDIR=$(mktemp -d) +trap 'rm -rf ${CURRENT_TMPDIR} docker-compose-parallel.yml' EXIT +if [[ ${PARALLEL_COMPUTES} -gt 1 ]]; then + export COMPOSE_FILE=docker-compose-parallel.yml + cp docker-compose.yml docker-compose-parallel.yml + # Replace the environment variable PARALLEL_COMPUTES with the actual value + yq eval -i ".services.compute_is_ready.environment |= map(select(. 
| test(\"^PARALLEL_COMPUTES=\") | not)) + [\"PARALLEL_COMPUTES=${PARALLEL_COMPUTES}\"]" ${COMPOSE_FILE} + for i in $(seq 2 "${PARALLEL_COMPUTES}"); do + # Duplicate compute1 as compute${i} for parallel execution + yq eval -i ".services.compute${i} = .services.compute1" ${COMPOSE_FILE} + # We don't need these sections, so delete them + yq eval -i "(del .services.compute${i}.build) | (del .services.compute${i}.ports) | (del .services.compute${i}.networks)" ${COMPOSE_FILE} + # Let the compute 1 be the only dependence + yq eval -i ".services.compute${i}.depends_on = [\"compute1\"]" ${COMPOSE_FILE} + # Set RUN_PARALLEL=true for compute2. They will generate tenant_id and timeline_id to avoid using the same as other computes + yq eval -i ".services.compute${i}.environment += [\"RUN_PARALLEL=true\"]" ${COMPOSE_FILE} + # Remove TENANT_ID and TIMELINE_ID from the environment variables of the generated computes + # They will create new TENANT_ID and TIMELINE_ID anyway. + yq eval -i ".services.compute${i}.environment |= map(select(. | (test(\"^TENANT_ID=\") or test(\"^TIMELINE_ID=\")) | not))" ${COMPOSE_FILE} + done +fi PSQL_OPTION="-h localhost -U cloud_admin -p 55433 -d postgres" function cleanup() { @@ -27,11 +54,11 @@ function cleanup() { for pg_version in ${TEST_VERSION_ONLY-14 15 16 17}; do pg_version=${pg_version/v/} - echo "clean up containers if exists" + echo "clean up containers if exist" cleanup PG_TEST_VERSION=$((pg_version < 16 ? 16 : pg_version)) - PG_VERSION=${pg_version} PG_TEST_VERSION=${PG_TEST_VERSION} docker compose up --quiet-pull --build -d - + PG_VERSION=${pg_version} PG_TEST_VERSION=${PG_TEST_VERSION} docker compose build compute1 + PG_VERSION=${pg_version} PG_TEST_VERSION=${PG_TEST_VERSION} docker compose up --quiet-pull -d echo "wait until the compute is ready. timeout after 60s. " cnt=0 while sleep 3; do @@ -41,45 +68,50 @@ for pg_version in ${TEST_VERSION_ONLY-14 15 16 17}; do echo "timeout before the compute is ready." 
exit 1 fi - if docker compose logs "compute_is_ready" | grep -q "accepting connections"; then + if docker compose logs compute_is_ready | grep -q "${READY_MESSAGE}"; then echo "OK. The compute is ready to connect." echo "execute simple queries." - docker compose exec compute /bin/bash -c "psql ${PSQL_OPTION} -c 'SELECT 1'" + for compute in "${COMPUTES[@]}"; do + docker compose exec "${compute}" /bin/bash -c "psql ${PSQL_OPTION} -c 'SELECT 1'" + done break fi done if [[ ${pg_version} -ge 16 ]]; then - # This is required for the pg_hint_plan test, to prevent flaky log message causing the test to fail - # It cannot be moved to Dockerfile now because the database directory is created after the start of the container - echo Adding dummy config - docker compose exec compute touch /var/db/postgres/compute/compute_ctl_temp_override.conf - # Prepare for the PostGIS test - docker compose exec compute mkdir -p /tmp/pgis_reg/pgis_reg_tmp - TMPDIR=$(mktemp -d) - docker compose cp neon-test-extensions:/ext-src/postgis-src/raster/test "${TMPDIR}" - docker compose cp neon-test-extensions:/ext-src/postgis-src/regress/00-regress-install "${TMPDIR}" - docker compose exec compute mkdir -p /ext-src/postgis-src/raster /ext-src/postgis-src/regress /ext-src/postgis-src/regress/00-regress-install - docker compose cp "${TMPDIR}/test" compute:/ext-src/postgis-src/raster/test - docker compose cp "${TMPDIR}/00-regress-install" compute:/ext-src/postgis-src/regress - rm -rf "${TMPDIR}" - # The following block copies the files for the pg_hintplan test to the compute node for the extension test in an isolated docker-compose environment - TMPDIR=$(mktemp -d) - docker compose cp neon-test-extensions:/ext-src/pg_hint_plan-src/data "${TMPDIR}/data" - docker compose cp "${TMPDIR}/data" compute:/ext-src/pg_hint_plan-src/ - rm -rf "${TMPDIR}" - # The following block does the same for the contrib/file_fdw test - TMPDIR=$(mktemp -d) - docker compose cp neon-test-extensions:/postgres/contrib/file_fdw/data 
"${TMPDIR}/data" - docker compose cp "${TMPDIR}/data" compute:/postgres/contrib/file_fdw/data - rm -rf "${TMPDIR}" + mkdir "${CURRENT_TMPDIR}"/{pg_hint_plan-src,file_fdw,postgis-src} + docker compose cp neon-test-extensions:/ext-src/postgis-src/raster/test "${CURRENT_TMPDIR}/postgis-src/test" + docker compose cp neon-test-extensions:/ext-src/postgis-src/regress/00-regress-install "${CURRENT_TMPDIR}/postgis-src/00-regress-install" + docker compose cp neon-test-extensions:/ext-src/pg_hint_plan-src/data "${CURRENT_TMPDIR}/pg_hint_plan-src/data" + docker compose cp neon-test-extensions:/postgres/contrib/file_fdw/data "${CURRENT_TMPDIR}/file_fdw/data" + + for compute in "${COMPUTES[@]}"; do + # This is required for the pg_hint_plan test, to prevent flaky log message causing the test to fail + # It cannot be moved to Dockerfile now because the database directory is created after the start of the container + echo Adding dummy config on "${compute}" + docker compose exec "${compute}" touch /var/db/postgres/compute/compute_ctl_temp_override.conf + # Prepare for the PostGIS test + docker compose exec "${compute}" mkdir -p /tmp/pgis_reg/pgis_reg_tmp /ext-src/postgis-src/raster /ext-src/postgis-src/regress /ext-src/postgis-src/regress/00-regress-install + docker compose cp "${CURRENT_TMPDIR}/postgis-src/test" "${compute}":/ext-src/postgis-src/raster/test + docker compose cp "${CURRENT_TMPDIR}/postgis-src/00-regress-install" "${compute}":/ext-src/postgis-src/regress + # The following block copies the files for the pg_hintplan test to the compute node for the extension test in an isolated docker-compose environment + docker compose cp "${CURRENT_TMPDIR}/pg_hint_plan-src/data" "${compute}":/ext-src/pg_hint_plan-src/ + # The following block does the same for the contrib/file_fdw test + docker compose cp "${CURRENT_TMPDIR}/file_fdw/data" "${compute}":/postgres/contrib/file_fdw/data + done # Apply patches docker compose exec -T neon-test-extensions bash -c "(cd /postgres && patch 
-p1)" <"../compute/patches/contrib_pg${pg_version}.patch" # We are running tests now rm -f testout.txt testout_contrib.txt + # We want to run the longest tests first to better utilize parallelization and reduce overall test time. + # Tests listed in the RUN_FIRST variable will be run before others. + # If parallelization is not used, this environment variable will be ignored. + docker compose exec -e USE_PGXS=1 -e SKIP=timescaledb-src,rdkit-src,pg_jsonschema-src,kq_imcx-src,wal2json_2_5-src,rag_jina_reranker_v1_tiny_en-src,rag_bge_small_en_v15-src \ + -e RUN_FIRST=hll-src,postgis-src,pgtap-src -e PARALLEL_COMPUTES="${PARALLEL_COMPUTES}" \ neon-test-extensions /run-tests.sh /ext-src | tee testout.txt && EXT_SUCCESS=1 || EXT_SUCCESS=0 docker compose exec -e SKIP=start-scripts,postgres_fdw,ltree_plpython,jsonb_plpython,jsonb_plperl,hstore_plpython,hstore_plperl,dblink,bool_plperl \ + -e PARALLEL_COMPUTES="${PARALLEL_COMPUTES}" \ neon-test-extensions /run-tests.sh /postgres/contrib | tee testout_contrib.txt && CONTRIB_SUCCESS=1 || CONTRIB_SUCCESS=0 if [[ ${EXT_SUCCESS} -eq 0 || ${CONTRIB_SUCCESS} -eq 0 ]]; then CONTRIB_FAILED= diff --git a/docker-compose/run-tests.sh b/docker-compose/run-tests.sh index 930402ce66..b37b9363fa 100755 --- a/docker-compose/run-tests.sh +++ b/docker-compose/run-tests.sh @@ -1,4 +1,4 @@ -#!/bin/bash +#!/usr/bin/env bash set -x if [[ -v BENCHMARK_CONNSTR ]]; then @@ -26,8 +26,9 @@ if [[ -v BENCHMARK_CONNSTR ]]; then fi fi REGULAR_USER=false -while getopts r arg; do - case $arg in +PARALLEL_COMPUTES=${PARALLEL_COMPUTES:-1} +while getopts pr arg; do + case ${arg} in r) REGULAR_USER=true shift $((OPTIND-1)) @@ -41,26 +42,49 @@ extdir=${1} cd "${extdir}" || exit 2 FAILED= -LIST=$( (echo -e "${SKIP//","/"\n"}"; ls) | sort | uniq -u) -for d in ${LIST}; do - [ -d "${d}" ] || continue - if ! 
psql -w -c "select 1" >/dev/null; then - FAILED="${d} ${FAILED}" - break - fi - if [[ ${REGULAR_USER} = true ]] && [ -f "${d}"/regular-test.sh ]; then - "${d}/regular-test.sh" || FAILED="${d} ${FAILED}" - continue - fi +export FAILED_FILE=/tmp/failed +rm -f ${FAILED_FILE} +mapfile -t LIST < <( (echo -e "${SKIP//","/"\n"}"; ls) | sort | uniq -u) +if [[ ${PARALLEL_COMPUTES} -gt 1 ]]; then + # Avoid errors if RUN_FIRST is not defined + RUN_FIRST=${RUN_FIRST:-} + # Move entries listed in the RUN_FIRST variable to the beginning + ORDERED_LIST=$(printf "%s\n" "${LIST[@]}" | grep -x -Ff <(echo -e "${RUN_FIRST//,/$'\n'}"); printf "%s\n" "${LIST[@]}" | grep -vx -Ff <(echo -e "${RUN_FIRST//,/$'\n'}")) + parallel -j"${PARALLEL_COMPUTES}" "[[ -d {} ]] || exit 0 + export PGHOST=compute{%} + if ! psql -c 'select 1'>/dev/null; then + exit 1 + fi + echo Running on \${PGHOST} + if [[ -f ${extdir}/{}/neon-test.sh ]]; then + echo Running from script + ${extdir}/{}/neon-test.sh || echo {} >> ${FAILED_FILE}; + else + echo Running using make; + USE_PGXS=1 make -C {} installcheck || echo {} >> ${FAILED_FILE}; + fi" ::: ${ORDERED_LIST} + [[ ! -f ${FAILED_FILE} ]] && exit 0 +else + for d in "${LIST[@]}"; do + [ -d "${d}" ] || continue + if ! psql -w -c "select 1" >/dev/null; then + FAILED="${d} ${FAILED}" + break + fi + if [[ ${REGULAR_USER} = true ]] && [ -f "${d}"/regular-test.sh ]; then + "${d}/regular-test.sh" || FAILED="${d} ${FAILED}" + continue + fi - if [ -f "${d}/neon-test.sh" ]; then - "${d}/neon-test.sh" || FAILED="${d} ${FAILED}" - else - USE_PGXS=1 make -C "${d}" installcheck || FAILED="${d} ${FAILED}" - fi -done -[ -z "${FAILED}" ] && exit 0 -for d in ${FAILED}; do + if [ -f "${d}/neon-test.sh" ]; then + "${d}/neon-test.sh" || FAILED="${d} ${FAILED}" + else + USE_PGXS=1 make -C "${d}" installcheck || FAILED="${d} ${FAILED}" + fi + done + [[ -z ${FAILED} ]] && exit 0 +fi +for d in ${FAILED} $([[ ! 
-f ${FAILED_FILE} ]] || cat ${FAILED_FILE}); do cat "$(find $d -name regression.diffs)" done for postgis_diff in /tmp/pgis_reg/*_diff; do @@ -68,4 +92,5 @@ for postgis_diff in /tmp/pgis_reg/*_diff; do cat "${postgis_diff}" done echo "${FAILED}" +cat ${FAILED_FILE} exit 1 diff --git a/docker-compose/test_extensions_upgrade.sh b/docker-compose/test_extensions_upgrade.sh index f1cf17f531..1d39fc029e 100755 --- a/docker-compose/test_extensions_upgrade.sh +++ b/docker-compose/test_extensions_upgrade.sh @@ -1,4 +1,4 @@ -#!/bin/bash +#!/usr/bin/env bash set -eux -o pipefail cd "$(dirname "${0}")" # Takes a variable name as argument. The result is stored in that variable. @@ -60,8 +60,8 @@ function check_timeline() { # Restarts the compute node with the required compute tag and timeline. # Accepts the tag for the compute node and the timeline as parameters. function restart_compute() { - docker compose down compute compute_is_ready - COMPUTE_TAG=${1} TENANT_ID=${tenant_id} TIMELINE_ID=${2} docker compose up --quiet-pull -d --build compute compute_is_ready + docker compose down compute1 compute_is_ready + COMPUTE_TAG=${1} TENANT_ID=${tenant_id} TIMELINE_ID=${2} docker compose up --quiet-pull -d --build compute1 compute_is_ready wait_for_ready check_timeline ${2} } From 2b2a54767183da8784557b5c4ab714674a6694c8 Mon Sep 17 00:00:00 2001 From: Christian Schwarz Date: Tue, 8 Jul 2025 14:56:22 +0200 Subject: [PATCH 12/65] fix(tests): periodic and immediate gc is effectively a no-op in tests (#12431) The introduction of the default lease deadline feature 9 months ago made it so that after PS restart, `.timeline_gc()` calls in Python tests are no-ops for 10 minutes after pageserver startup: the `gc_iteration()` bails with `Skipping GC because lsn lease deadline is not reached`. I did some impact analysis in the following PR.
About 30 Python tests are affected: - https://github.com/neondatabase/neon/pull/12411 Rust tests that don't explicitly enable periodic GC or invoke GC manually are unaffected because we disable periodic GC by default in the `TenantHarness`'s tenant config. Two tests explicitly did `start_paused=true` + `tokio::time::advance()`, but it would add cognitive and code bloat to each existing and future test case that uses TenantHarness if we took that route. So, this PR sets the default lease deadline to zero in both Python and Rust tests. Tests that exercise the feature were thus identified by their failures: - Python tests `test_readonly_node_gc` + `test_lsn_lease_size` - Rust test `test_lsn_lease`. To accomplish the above, I changed the code that computes the initial lease deadline to respect the pageserver.toml's default tenant config, which it didn't before (and I would consider a bug). The Python test harness and the Rust TenantHarness test harness then simply set the default tenant config field to zero.
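The gating behavior can be modeled with a tiny sketch (the function name and return strings here are illustrative, not the pageserver's actual code; the real check lives inside `gc_iteration`):

```python
import time

def make_gc_iteration(lsn_lease_length_secs: float):
    """Model of the initial lease deadline gate: after (re)start, GC
    requests are no-ops until startup + lsn_lease_length has elapsed."""
    deadline = time.monotonic() + lsn_lease_length_secs

    def gc_iteration() -> str:
        # Before the deadline, GC bails out instead of collecting garbage.
        if time.monotonic() < deadline:
            return "Skipping GC because lsn lease deadline is not reached"
        return "GC ran"

    return gc_iteration

# Binary default (10 minutes): an immediate GC call right after startup is a no-op.
gc_default = make_gc_iteration(600.0)
# Test default introduced by this PR (0s): GC runs immediately.
gc_test = make_gc_iteration(0.0)
```

This is why a test that calls `.timeline_gc()` right after pageserver startup silently asserts nothing unless the lease length is zeroed out.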
Drive-by: - `test_lsn_lease_size` was writing a lot of data unnecessarily; reduce the amount and speed up the test refs - PR that introduced default lease deadline: https://github.com/neondatabase/neon/pull/9055/files - fixes https://databricks.atlassian.net/browse/LKB-92 --------- Co-authored-by: Christian Schwarz --- pageserver/src/config.rs | 11 ++++- pageserver/src/tenant.rs | 59 ++++++++++++++--------- pageserver/src/tenant/mgr.rs | 15 +++--- test_runner/fixtures/neon_fixtures.py | 8 +++ test_runner/regress/test_readonly_node.py | 4 +- test_runner/regress/test_tenant_size.py | 20 +++----- 6 files changed, 72 insertions(+), 45 deletions(-) diff --git a/pageserver/src/config.rs b/pageserver/src/config.rs index 75b41b9b60..6e22f9f36e 100644 --- a/pageserver/src/config.rs +++ b/pageserver/src/config.rs @@ -635,7 +635,7 @@ impl PageServerConf { pub fn dummy_conf(repo_dir: Utf8PathBuf) -> Self { let pg_distrib_dir = Utf8PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("../pg_install"); - let config_toml = pageserver_api::config::ConfigToml { + let mut config_toml = pageserver_api::config::ConfigToml { wait_lsn_timeout: Duration::from_secs(60), wal_redo_timeout: Duration::from_secs(60), pg_distrib_dir: Some(pg_distrib_dir), @@ -647,6 +647,15 @@ impl PageServerConf { control_plane_api: Some(Url::parse("http://localhost:6666").unwrap()), ..Default::default() }; + + // Test authors tend to forget about the default 10min initial lease deadline + // when writing tests, which turns their immediate gc requests via mgmt API + // into no-ops. Override the binary default here, such that there is no initial + // lease deadline by default in tests. Tests that care can always override it + // themselves. 
+ // Cf https://databricks.atlassian.net/browse/LKB-92?focusedCommentId=6722329 + config_toml.tenant_config.lsn_lease_length = Duration::from_secs(0); + PageServerConf::parse_and_validate(NodeId(0), config_toml, &repo_dir).unwrap() } } diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index f4877fd763..9aabd6341f 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -34,7 +34,7 @@ use once_cell::sync::Lazy; pub use pageserver_api::models::TenantState; use pageserver_api::models::{self, RelSizeMigration}; use pageserver_api::models::{ - CompactInfoResponse, LsnLease, TimelineArchivalState, TimelineState, TopTenantShardItem, + CompactInfoResponse, TimelineArchivalState, TimelineState, TopTenantShardItem, WalRedoManagerStatus, }; use pageserver_api::shard::{ShardIdentity, ShardStripeSize, TenantShardId}; @@ -180,6 +180,7 @@ pub(super) struct AttachedTenantConf { impl AttachedTenantConf { fn new( + conf: &'static PageServerConf, tenant_conf: pageserver_api::models::TenantConfig, location: AttachedLocationConfig, ) -> Self { @@ -191,9 +192,7 @@ impl AttachedTenantConf { let lsn_lease_deadline = if location.attach_mode == AttachmentMode::Single { Some( tokio::time::Instant::now() - + tenant_conf - .lsn_lease_length - .unwrap_or(LsnLease::DEFAULT_LENGTH), + + TenantShard::get_lsn_lease_length_impl(conf, &tenant_conf), ) } else { // We don't use `lsn_lease_deadline` to delay GC in AttachedMulti and AttachedStale @@ -208,10 +207,13 @@ impl AttachedTenantConf { } } - fn try_from(location_conf: LocationConf) -> anyhow::Result { + fn try_from( + conf: &'static PageServerConf, + location_conf: LocationConf, + ) -> anyhow::Result { match &location_conf.mode { LocationMode::Attached(attach_conf) => { - Ok(Self::new(location_conf.tenant_conf, *attach_conf)) + Ok(Self::new(conf, location_conf.tenant_conf, *attach_conf)) } LocationMode::Secondary(_) => { anyhow::bail!( @@ -4205,10 +4207,16 @@ impl TenantShard { } pub fn get_lsn_lease_length(&self) -> 
Duration { - let tenant_conf = self.tenant_conf.load().tenant_conf.clone(); + Self::get_lsn_lease_length_impl(self.conf, &self.tenant_conf.load().tenant_conf) + } + + pub fn get_lsn_lease_length_impl( + conf: &'static PageServerConf, + tenant_conf: &pageserver_api::models::TenantConfig, + ) -> Duration { tenant_conf .lsn_lease_length - .unwrap_or(self.conf.default_tenant_conf.lsn_lease_length) + .unwrap_or(conf.default_tenant_conf.lsn_lease_length) } pub fn get_timeline_offloading_enabled(&self) -> bool { @@ -6020,11 +6028,14 @@ pub(crate) mod harness { let tenant = Arc::new(TenantShard::new( TenantState::Attaching, self.conf, - AttachedTenantConf::try_from(LocationConf::attached_single( - self.tenant_conf.clone(), - self.generation, - ShardParameters::default(), - )) + AttachedTenantConf::try_from( + self.conf, + LocationConf::attached_single( + self.tenant_conf.clone(), + self.generation, + ShardParameters::default(), + ), + ) .unwrap(), self.shard_identity, Some(walredo_mgr), @@ -6125,7 +6136,7 @@ mod tests { use pageserver_api::keyspace::KeySpace; #[cfg(feature = "testing")] use pageserver_api::keyspace::KeySpaceRandomAccum; - use pageserver_api::models::{CompactionAlgorithm, CompactionAlgorithmSettings}; + use pageserver_api::models::{CompactionAlgorithm, CompactionAlgorithmSettings, LsnLease}; use pageserver_compaction::helpers::overlaps_with; #[cfg(feature = "testing")] use rand::SeedableRng; @@ -6675,17 +6686,13 @@ mod tests { tline.freeze_and_flush().await.map_err(|e| e.into()) } - #[tokio::test(start_paused = true)] + #[tokio::test] async fn test_prohibit_branch_creation_on_garbage_collected_data() -> anyhow::Result<()> { let (tenant, ctx) = TenantHarness::create("test_prohibit_branch_creation_on_garbage_collected_data") .await? .load() .await; - // Advance to the lsn lease deadline so that GC is not blocked by - // initial transition into AttachedSingle. 
- tokio::time::advance(tenant.get_lsn_lease_length()).await; - tokio::time::resume(); let tline = tenant .create_test_timeline(TIMELINE_ID, Lsn(0x10), DEFAULT_PG_VERSION, &ctx) .await?; @@ -9384,17 +9391,21 @@ mod tests { Ok(()) } - #[tokio::test(start_paused = true)] + #[tokio::test] async fn test_lsn_lease() -> anyhow::Result<()> { let (tenant, ctx) = TenantHarness::create("test_lsn_lease") .await .unwrap() .load() .await; - // Advance to the lsn lease deadline so that GC is not blocked by - // initial transition into AttachedSingle. - tokio::time::advance(tenant.get_lsn_lease_length()).await; - tokio::time::resume(); + // set a non-zero lease length to test the feature + tenant + .update_tenant_config(|mut conf| { + conf.lsn_lease_length = Some(LsnLease::DEFAULT_LENGTH); + Ok(conf) + }) + .unwrap(); + let key = Key::from_hex("010000000033333333444444445500000000").unwrap(); let end_lsn = Lsn(0x100); diff --git a/pageserver/src/tenant/mgr.rs b/pageserver/src/tenant/mgr.rs index 95f5c60170..be18b40862 100644 --- a/pageserver/src/tenant/mgr.rs +++ b/pageserver/src/tenant/mgr.rs @@ -664,7 +664,7 @@ pub async fn init_tenant_mgr( tenant_shard_id, &tenant_dir_path, resources.clone(), - AttachedTenantConf::new(location_conf.tenant_conf, attached_conf), + AttachedTenantConf::new(conf, location_conf.tenant_conf, attached_conf), shard_identity, Some(init_order.clone()), SpawnMode::Lazy, @@ -842,8 +842,11 @@ impl TenantManager { // take our fast path and just provide the updated configuration // to the tenant. tenant.set_new_location_config( - AttachedTenantConf::try_from(new_location_config.clone()) - .map_err(UpsertLocationError::BadRequest)?, + AttachedTenantConf::try_from( + self.conf, + new_location_config.clone(), + ) + .map_err(UpsertLocationError::BadRequest)?, ); Some(FastPathModified::Attached(tenant.clone())) @@ -1046,7 +1049,7 @@ impl TenantManager { // Testing hack: if we are configured with no control plane, then drop the generation // from upserts. 
This enables creating generation-less tenants even though neon_local // always uses generations when calling the location conf API. - let attached_conf = AttachedTenantConf::try_from(new_location_config) + let attached_conf = AttachedTenantConf::try_from(self.conf, new_location_config) .map_err(UpsertLocationError::BadRequest)?; let tenant = tenant_spawn( @@ -1250,7 +1253,7 @@ impl TenantManager { tenant_shard_id, &tenant_path, self.resources.clone(), - AttachedTenantConf::try_from(config)?, + AttachedTenantConf::try_from(self.conf, config)?, shard_identity, None, SpawnMode::Eager, @@ -2131,7 +2134,7 @@ impl TenantManager { tenant_shard_id, &tenant_path, self.resources.clone(), - AttachedTenantConf::try_from(config).map_err(Error::DetachReparent)?, + AttachedTenantConf::try_from(self.conf, config).map_err(Error::DetachReparent)?, shard_identity, None, SpawnMode::Eager, diff --git a/test_runner/fixtures/neon_fixtures.py b/test_runner/fixtures/neon_fixtures.py index f2ec022666..f54d5be635 100644 --- a/test_runner/fixtures/neon_fixtures.py +++ b/test_runner/fixtures/neon_fixtures.py @@ -1315,6 +1315,14 @@ class NeonEnv: # This feature is pending rollout. # tenant_config["rel_size_v2_enabled"] = True + # Test authors tend to forget about the default 10min initial lease deadline + # when writing tests, which turns their immediate gc requests via mgmt API + # into no-ops. Override the binary default here, such that there is no initial + # lease deadline by default in tests. Tests that care can always override it + # themselves. 
+ # Cf https://databricks.atlassian.net/browse/LKB-92?focusedCommentId=6722329 + tenant_config["lsn_lease_length"] = "0s" + if self.pageserver_remote_storage is not None: ps_cfg["remote_storage"] = remote_storage_to_toml_dict( self.pageserver_remote_storage diff --git a/test_runner/regress/test_readonly_node.py b/test_runner/regress/test_readonly_node.py index ee934a900d..5612236250 100644 --- a/test_runner/regress/test_readonly_node.py +++ b/test_runner/regress/test_readonly_node.py @@ -201,11 +201,11 @@ def test_readonly_node_gc(neon_env_builder: NeonEnvBuilder): for shard, ps in tenant_get_shards(env, env.initial_tenant): client = ps.http_client() layers_guarded_before_gc = get_layers_protected_by_lease( - client, shard, env.initial_timeline, lease_lsn=lsn + client, shard, env.initial_timeline, lease_lsn=lease_lsn ) gc_result = client.timeline_gc(shard, env.initial_timeline, 0) layers_guarded_after_gc = get_layers_protected_by_lease( - client, shard, env.initial_timeline, lease_lsn=lsn + client, shard, env.initial_timeline, lease_lsn=lease_lsn ) # Note: cannot assert on `layers_removed` here because it could be layers diff --git a/test_runner/regress/test_tenant_size.py b/test_runner/regress/test_tenant_size.py index 190dd914ee..8b291b7cbe 100644 --- a/test_runner/regress/test_tenant_size.py +++ b/test_runner/regress/test_tenant_size.py @@ -740,6 +740,10 @@ def test_lsn_lease_size(neon_env_builder: NeonEnvBuilder, test_output_dir: Path, "pitr_interval": "0s" if zero_gc else "3600s", "gc_period": "0s", "compaction_period": "0s", + # The test exercises the leases API, so we need a non-zero lease length. + # If this test ever does GC, we need to account for the initial lease deadline + # after tenant attach, which is also controlled by this variable.
+ "lsn_lease_length": "600s", } env = neon_env_builder.init_start(initial_tenant_conf=conf) @@ -824,9 +828,7 @@ def insert_with_action( log.info(f"initial size: {initial_size}") with ep.cursor() as cur: - cur.execute( - "CREATE TABLE t0 AS SELECT i::bigint n FROM generate_series(0, 1000000) s(i)" - ) + cur.execute("CREATE TABLE t0 AS SELECT i::bigint n FROM generate_series(0, 10000) s(i)") last_flush_lsn = wait_for_last_flush_lsn(env, ep, tenant, timeline) if action == "lease": @@ -841,15 +843,9 @@ def insert_with_action( raise AssertionError("Invalid action type, only `lease` and `branch`are accepted") with ep.cursor() as cur: - cur.execute( - "CREATE TABLE t1 AS SELECT i::bigint n FROM generate_series(0, 1000000) s(i)" - ) - cur.execute( - "CREATE TABLE t2 AS SELECT i::bigint n FROM generate_series(0, 1000000) s(i)" - ) - cur.execute( - "CREATE TABLE t3 AS SELECT i::bigint n FROM generate_series(0, 1000000) s(i)" - ) + cur.execute("CREATE TABLE t1 AS SELECT i::bigint n FROM generate_series(0, 10000) s(i)") + cur.execute("CREATE TABLE t2 AS SELECT i::bigint n FROM generate_series(0, 10000) s(i)") + cur.execute("CREATE TABLE t3 AS SELECT i::bigint n FROM generate_series(0, 10000) s(i)") last_flush_lsn = wait_for_last_flush_lsn(env, ep, tenant, timeline) From 38384c37ac50f04fe992795c27f0d5cf4291c2a5 Mon Sep 17 00:00:00 2001 From: Aleksandr Sarantsev <99037063+ephemeralsad@users.noreply.github.com> Date: Tue, 8 Jul 2025 17:15:14 +0400 Subject: [PATCH 13/65] Make node deletion context-aware (#12494) ## Problem Deletion process does not calculate preferred nodes correctly - it doesn't consider current tenant-shard layout among all pageservers. 
## Summary of changes - Added a schedule context calculation for node deletion Co-authored-by: Aleksandr Sarantsev --- storage_controller/src/service.rs | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/storage_controller/src/service.rs b/storage_controller/src/service.rs index 9360225396..403ae15b59 100644 --- a/storage_controller/src/service.rs +++ b/storage_controller/src/service.rs @@ -7208,6 +7208,12 @@ impl Service { let mut locked = self.inner.write().unwrap(); let (nodes, tenants, scheduler) = locked.parts_mut(); + // Calculate a schedule context here to avoid borrow checker issues. + let mut schedule_context = ScheduleContext::default(); + for (_, shard) in tenants.range(TenantShardId::tenant_range(tid.tenant_id)) { + schedule_context.avoid(&shard.intent.all_pageservers()); + } + let tenant_shard = match tenants.get_mut(&tid) { Some(tenant_shard) => tenant_shard, None => { @@ -7233,9 +7239,6 @@ impl Service { } if tenant_shard.deref_node(node_id) { - // TODO(ephemeralsad): we should process all shards in a tenant at once, so - // we can avoid settling the tenant unevenly. - let mut schedule_context = ScheduleContext::new(ScheduleMode::Normal); if let Err(e) = tenant_shard.schedule(scheduler, &mut schedule_context) { tracing::error!( "Refusing to delete node, shard {} can't be rescheduled: {e}", From 7458d031b1f7adfc6cbc652cc0903d58141f6746 Mon Sep 17 00:00:00 2001 From: Christian Schwarz Date: Tue, 8 Jul 2025 15:59:45 +0200 Subject: [PATCH 14/65] clippy: fix unfounded warning on macOS (#12501) Before this PR, macOS builds would get clippy warning ``` warning: `tokio_epoll_uring::thread_local_system` does not refer to an existing function ``` The reason is that the `thread_local_system` function is only defined on Linux. Add `allow-invalid = true` to make macOS clippy pass, and manually test that on Linux builds, clippy still fails when we use it. 
refs - https://databricks.slack.com/archives/C09254R641L/p1751917655527099 Co-authored-by: Christian Schwarz --- clippy.toml | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/clippy.toml b/clippy.toml index 408232488c..c03059053a 100644 --- a/clippy.toml +++ b/clippy.toml @@ -1,9 +1,12 @@ disallowed-methods = [ "tokio::task::block_in_place", + # Allow this for now, to deny it later once we stop using Handle::block_on completely # "tokio::runtime::Handle::block_on", - # use tokio_epoll_uring_ext instead - "tokio_epoll_uring::thread_local_system", + + # tokio-epoll-uring: + # - allow-invalid because the method doesn't exist on macOS + { path = "tokio_epoll_uring::thread_local_system", replacement = "tokio_epoll_uring_ext module inside pageserver crate", allow-invalid = true } ] disallowed-macros = [ From f72115d0a99c148953a15ec5fbf5a49f86ca741f Mon Sep 17 00:00:00 2001 From: Mikhail Date: Tue, 8 Jul 2025 15:37:24 +0100 Subject: [PATCH 15/65] Endpoint storage openapi spec (#12361) https://github.com/neondatabase/cloud/issues/19011 --- endpoint_storage/src/app.rs | 2 + endpoint_storage/src/openapi_spec.yml | 146 ++++++++++++++++++++++++++ 2 files changed, 148 insertions(+) create mode 100644 endpoint_storage/src/openapi_spec.yml diff --git a/endpoint_storage/src/app.rs b/endpoint_storage/src/app.rs index 42431c0066..a7a18743ef 100644 --- a/endpoint_storage/src/app.rs +++ b/endpoint_storage/src/app.rs @@ -13,6 +13,8 @@ use utils::backoff::retry; pub fn app(state: Arc) -> Router<()> { use axum::routing::{delete as _delete, get as _get}; let delete_prefix = _delete(delete_prefix); + // NB: On any changes do not forget to update the OpenAPI spec + // in /endpoint_storage/src/openapi_spec.yml. 
Router::new() .route( "/{tenant_id}/{timeline_id}/{endpoint_id}/{*path}", diff --git a/endpoint_storage/src/openapi_spec.yml b/endpoint_storage/src/openapi_spec.yml new file mode 100644 index 0000000000..8d9abf902c --- /dev/null +++ b/endpoint_storage/src/openapi_spec.yml @@ -0,0 +1,146 @@ +openapi: "3.0.2" +info: + title: Endpoint Storage API + description: Endpoint Storage API + version: "1.0" + license: + name: "Apache" + url: https://github.com/neondatabase/neon/blob/main/LICENSE +servers: + - url: "" +paths: + /status: + description: Healthcheck endpoint + get: + description: Healthcheck + security: [] + responses: + "200": + description: OK + + /{tenant_id}/{timeline_id}/{endpoint_id}/{key}: + parameters: + - name: tenant_id + in: path + required: true + schema: + type: string + - name: timeline_id + in: path + required: true + schema: + type: string + - name: endpoint_id + in: path + required: true + schema: + type: string + - name: key + in: path + required: true + schema: + type: string + get: + description: Get file from blob storage + responses: + "200": + description: "File stream from blob storage" + content: + application/octet-stream: + schema: + type: string + format: binary + "400": + description: File was not found + "403": + description: JWT does not authorize request to this route + put: + description: Insert file into blob storage. 
If file exists, override it + requestBody: + content: + application/octet-stream: + schema: + type: string + format: binary + responses: + "200": + description: File was inserted successfully + "403": + description: JWT does not authorize request to this route + delete: + description: Delete file from blob storage + responses: + "200": + description: File was successfully deleted or not found + "403": + description: JWT does not authorize request to this route + + /{tenant_id}/{timeline_id}/{endpoint_id}: + parameters: + - name: tenant_id + in: path + required: true + schema: + type: string + - name: timeline_id + in: path + required: true + schema: + type: string + - name: endpoint_id + in: path + required: true + schema: + type: string + delete: + description: Delete endpoint data from blob storage + responses: + "200": + description: Endpoint data was deleted + "403": + description: JWT does not authorize request to this route + + /{tenant_id}/{timeline_id}: + parameters: + - name: tenant_id + in: path + required: true + schema: + type: string + - name: timeline_id + in: path + required: true + schema: + type: string + delete: + description: Delete timeline data from blob storage + responses: + "200": + description: Timeline data was deleted + "403": + description: JWT does not authorize request to this route + + /{tenant_id}: + parameters: + - name: tenant_id + in: path + required: true + schema: + type: string + delete: + description: Delete tenant data from blob storage + responses: + "200": + description: Tenant data was deleted + "403": + description: JWT does not authorize request to this route + +components: + securitySchemes: + JWT: + type: http + scheme: bearer + bearerFormat: JWT + +security: + - JWT: [] From 8a042fb8ed62d56641836577fb01ccee4103dfb8 Mon Sep 17 00:00:00 2001 From: Christian Schwarz Date: Tue, 8 Jul 2025 17:03:32 +0200 Subject: [PATCH 16/65] refactor(compaction): eliminate `CompactionError::Offload` variant, map to `::Other` (#12505) 
Looks can be deceiving: the match blocks in `maybe_trip_compaction_breaker` and at the end of `compact_with_options` seem like differentiated error handling, but in reality, these branches are unreachable at runtime because the only source of `CompactionError::Offload` within the compaction code is at the end of `Tenant::compaction_iteration`. We can simply map offload cancellation to CompactionError::Cancelled and all other offload errors to ::Other, since there's no differentiated handling for them in the compaction code. Also, the OffloadError::RemoteStorage variant has no differentiated handling, but was wrapping the remote storage anyhow::Error in an `anyhow(thiserror(anyhow))` sandwich. This PR removes that variant, mapping all RemoteStorage errors to `OffloadError::Other`. Thereby, the sandwich is gone and we will get a proper anyhow backtrace to the remote storage error location when we debug-print the OffloadError (or the CompactionError if we map it to that). refs - https://databricks.atlassian.net/browse/LKB-182 - the observation that there's no need for differentiated handling of CompactionError::Offload was made in https://databricks.slack.com/archives/C09254R641L/p1751286453930269?thread_ts=1751284317.955159&cid=C09254R641L --- pageserver/src/http/routes.rs | 1 - pageserver/src/tenant.rs | 7 +++---- pageserver/src/tenant/tasks.rs | 1 - pageserver/src/tenant/timeline.rs | 17 ----------------- pageserver/src/tenant/timeline/offload.rs | 4 +--- 5 files changed, 4 insertions(+), 26 deletions(-) diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index 02094e6aa9..23a090045a 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -2501,7 +2501,6 @@ async fn timeline_checkpoint_handler( .map_err(|e| match e { CompactionError::ShuttingDown => ApiError::ShuttingDown, - CompactionError::Offload(e) => ApiError::InternalServerError(anyhow::anyhow!(e)), CompactionError::CollectKeySpaceError(e) =>
ApiError::InternalServerError(anyhow::anyhow!(e)), CompactionError::Other(e) => ApiError::InternalServerError(e), CompactionError::AlreadyRunning(_) => ApiError::InternalServerError(anyhow::anyhow!(e)), diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index 9aabd6341f..ad767a1672 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -3288,7 +3288,9 @@ impl TenantShard { // Ignore this, we likely raced with unarchival. OffloadError::NotArchived => Ok(()), OffloadError::AlreadyInProgress => Ok(()), - err => Err(err), + OffloadError::Cancelled => Err(CompactionError::ShuttingDown), + // don't break the anyhow chain + OffloadError::Other(err) => Err(CompactionError::Other(err)), })?; } @@ -3319,9 +3321,6 @@ impl TenantShard { match err { err if err.is_cancel() => {} CompactionError::ShuttingDown => (), - // Offload failures don't trip the circuit breaker, since they're cheap to retry and - // shouldn't block compaction. - CompactionError::Offload(_) => {} CompactionError::CollectKeySpaceError(err) => { // CollectKeySpaceError::Cancelled and PageRead::Cancelled are handled in `err.is_cancel` branch. 
self.compaction_circuit_breaker diff --git a/pageserver/src/tenant/tasks.rs b/pageserver/src/tenant/tasks.rs index 954dd38bb4..2ba1ad2674 100644 --- a/pageserver/src/tenant/tasks.rs +++ b/pageserver/src/tenant/tasks.rs @@ -303,7 +303,6 @@ pub(crate) fn log_compaction_error( let level = match err { e if e.is_cancel() => return, ShuttingDown => return, - Offload(_) => Level::ERROR, AlreadyRunning(_) => Level::ERROR, CollectKeySpaceError(_) => Level::ERROR, _ if task_cancelled => Level::INFO, diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index aeced98859..c2b49c0296 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -40,7 +40,6 @@ use layer_manager::{ Shutdown, }; -use offload::OffloadError; use once_cell::sync::Lazy; use pageserver_api::config::tenant_conf_defaults::DEFAULT_PITR_INTERVAL; use pageserver_api::key::{ @@ -2078,9 +2077,6 @@ impl Timeline { // Cancelled errors are covered by the `Err(e) if e.is_cancel()` branch. self.compaction_failed.store(true, AtomicOrdering::Relaxed) } - // Don't change the current value on offload failure or shutdown. We don't want to - // abruptly stall nor resume L0 flushes in these cases. - Err(CompactionError::Offload(_)) => {} }; result @@ -6017,9 +6013,6 @@ impl Drop for Timeline { pub(crate) enum CompactionError { #[error("The timeline or pageserver is shutting down")] ShuttingDown, - /// Compaction tried to offload a timeline and failed - #[error("Failed to offload timeline: {0}")] - Offload(OffloadError), /// Compaction cannot be done right now; page reconstruction and so on. 
#[error("Failed to collect keyspace: {0}")] CollectKeySpaceError(#[from] CollectKeySpaceError), @@ -6040,7 +6033,6 @@ impl CompactionError { | Self::CollectKeySpaceError(CollectKeySpaceError::PageRead( PageReconstructError::Cancelled )) - | Self::Offload(OffloadError::Cancelled) ) } @@ -6058,15 +6050,6 @@ impl CompactionError { } } -impl From for CompactionError { - fn from(e: OffloadError) -> Self { - match e { - OffloadError::Cancelled => Self::ShuttingDown, - _ => Self::Offload(e), - } - } -} - impl From for CompactionError { fn from(value: super::upload_queue::NotInitialized) -> Self { match value { diff --git a/pageserver/src/tenant/timeline/offload.rs b/pageserver/src/tenant/timeline/offload.rs index 9464f034c7..e9cf2e9aa7 100644 --- a/pageserver/src/tenant/timeline/offload.rs +++ b/pageserver/src/tenant/timeline/offload.rs @@ -17,8 +17,6 @@ pub(crate) enum OffloadError { Cancelled, #[error("Timeline is not archived")] NotArchived, - #[error(transparent)] - RemoteStorage(anyhow::Error), #[error("Offload or deletion already in progress")] AlreadyInProgress, #[error("Unexpected offload error: {0}")] @@ -29,7 +27,7 @@ impl From for OffloadError { fn from(e: TenantManifestError) -> Self { match e { TenantManifestError::Cancelled => Self::Cancelled, - TenantManifestError::RemoteStorage(e) => Self::RemoteStorage(e), + TenantManifestError::RemoteStorage(e) => Self::Other(e), } } } From 29d73e140453cc18acf96475c8a1dbf57f354468 Mon Sep 17 00:00:00 2001 From: Folke Behrens Date: Tue, 8 Jul 2025 17:49:42 +0200 Subject: [PATCH 17/65] http-utils: Temporarily accept duplicate params (#12504) ## Problem Grafana Alloy in cluster mode seems to send duplicate "seconds" scrape URL parameters when one of its instances is disrupted. ## Summary of changes Temporarily accept duplicate parameters as long as their value is identical. 
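The accept/reject rule the new `reduce` implements is easiest to see in isolation. Below is a minimal, self-contained sketch of that rule; the helper name `dedup_param` and the plain `String` error type are illustrative stand-ins, not the `http-utils` API:

```rust
// Hypothetical stand-alone version of the dedup rule: duplicates of a query
// parameter are tolerated only if every occurrence carries the same value.
fn dedup_param<'a>(
    values: impl Iterator<Item = &'a str>,
) -> Result<Option<&'a str>, String> {
    values
        .map(Ok)
        .reduce(|acc, v| match (acc, v) {
            // An earlier conflict wins; stay in the error state.
            (Err(e), _) => Err(e),
            // Identical duplicate: keep the first value.
            (Ok(a), Ok(b)) if a == b => Ok(a),
            // Conflicting duplicate: reject, as before the workaround.
            _ => Err("param specified more than once".to_string()),
        })
        .transpose() // Option<Result<..>> -> Result<Option<..>, ..>
}

fn main() {
    // Duplicates with identical values are accepted (the Alloy case).
    assert_eq!(dedup_param(["10", "10"].into_iter()), Ok(Some("10")));
    // Conflicting values are still rejected.
    assert!(dedup_param(["10", "20"].into_iter()).is_err());
    // A missing parameter still yields None.
    assert_eq!(dedup_param(std::iter::empty::<&str>()), Ok(None));
}
```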
--- libs/http-utils/src/request.rs | 68 ++++++++++++++++++++++++++++++---- 1 file changed, 61 insertions(+), 7 deletions(-) diff --git a/libs/http-utils/src/request.rs b/libs/http-utils/src/request.rs index 9024a90a82..afb2ae8f47 100644 --- a/libs/http-utils/src/request.rs +++ b/libs/http-utils/src/request.rs @@ -41,17 +41,35 @@ pub fn get_query_param<'a>( Some(q) => q, None => return Ok(None), }; - let mut values = url::form_urlencoded::parse(query.as_bytes()) + let values = url::form_urlencoded::parse(query.as_bytes()) .filter_map(|(k, v)| if k == param_name { Some(v) } else { None }) // we call .next() twice below. If it's None the first time, .fuse() ensures it's None afterwards .fuse(); - let value1 = values.next(); - if values.next().is_some() { - return Err(ApiError::BadRequest(anyhow!( - "param {param_name} specified more than once" - ))); - } + // Work around an issue with Alloy's pyroscope scrape where the "seconds" + // parameter is added several times. https://github.com/grafana/alloy/issues/3026 + // TODO: revert after Alloy is fixed. + let value1 = values + .map(Ok) + .reduce(|acc, i| { + match acc { + Err(_) => acc, + + // It's okay to have duplicates as long as they have the same value.
+ Ok(ref a) if a == &i.unwrap() => acc, + + _ => Err(ApiError::BadRequest(anyhow!( + "param {param_name} specified more than once" + ))), + } + }) + .transpose()?; + // if values.next().is_some() { + // return Err(ApiError::BadRequest(anyhow!( + // "param {param_name} specified more than once" + // ))); + // } + Ok(value1) } @@ -92,3 +110,39 @@ pub async fn ensure_no_body(request: &mut Request) -> Result<(), ApiError> None => Ok(()), } } + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_get_query_param_duplicate() { + let req = Request::builder() + .uri("http://localhost:12345/testuri?testparam=1") + .body(hyper::Body::empty()) + .unwrap(); + let value = get_query_param(&req, "testparam").unwrap(); + assert_eq!(value.unwrap(), "1"); + + let req = Request::builder() + .uri("http://localhost:12345/testuri?testparam=1&testparam=1") + .body(hyper::Body::empty()) + .unwrap(); + let value = get_query_param(&req, "testparam").unwrap(); + assert_eq!(value.unwrap(), "1"); + + let req = Request::builder() + .uri("http://localhost:12345/testuri") + .body(hyper::Body::empty()) + .unwrap(); + let value = get_query_param(&req, "testparam").unwrap(); + assert!(value.is_none()); + + let req = Request::builder() + .uri("http://localhost:12345/testuri?testparam=1&testparam=2&testparam=3") + .body(hyper::Body::empty()) + .unwrap(); + let value = get_query_param(&req, "testparam"); + assert!(value.is_err()); + } +} From f9b05a42d7a408bc98f485463c13f58496101160 Mon Sep 17 00:00:00 2001 From: Christian Schwarz Date: Tue, 8 Jul 2025 18:45:34 +0200 Subject: [PATCH 18/65] refactor(compaction): remove `CompactionError::AlreadyRunning` variant, use `::Other` instead (#12512) The only call stack that can emit the `::AlreadyRunning` variant is ``` -> iteration_inner -> iteration -> compaction_iteration -> compaction_loop -> start_background_loops ``` And on that call stack, the only differentiated handling of it is its invocations of `log_compaction_error -> 
CompactionError::is_cancel()`, which returns `true` for `::AlreadyRunning`. I think the condition of `AlreadyRunning` is severe; it really shouldn't happen. So, this PR starts treating it as something that is to be logged at `ERROR` / `WARN` level, depending on the `degrade_to_warning` argument to `log_compaction_error`. refs - https://databricks.atlassian.net/browse/LKB-182 --- pageserver/src/http/routes.rs | 3 +-- pageserver/src/tenant.rs | 1 - pageserver/src/tenant/tasks.rs | 1 - pageserver/src/tenant/timeline.rs | 6 ------ pageserver/src/tenant/timeline/compaction.rs | 8 ++++---- 5 files changed, 5 insertions(+), 14 deletions(-) diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index 23a090045a..55582659df 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -2503,7 +2503,6 @@ async fn timeline_checkpoint_handler( CompactionError::ShuttingDown => ApiError::ShuttingDown, CompactionError::CollectKeySpaceError(e) => ApiError::InternalServerError(anyhow::anyhow!(e)), CompactionError::Other(e) => ApiError::InternalServerError(e), - CompactionError::AlreadyRunning(_) => ApiError::InternalServerError(anyhow::anyhow!(e)), } )?; } @@ -3697,7 +3696,7 @@ async fn tenant_evaluate_feature_flag( let tenant = state .tenant_manager .get_attached_tenant_shard(tenant_shard_id)?; - // TODO: the properties we get here might be stale right after it is collected. But such races are rare (updated every 10s) + // TODO: the properties we get here might be stale right after it is collected. But such races are rare (updated every 10s)
let properties = tenant.feature_resolver.collect_properties(); if as_type.as_deref() == Some("boolean") { diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index ad767a1672..49b92915da 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -3334,7 +3334,6 @@ impl TenantShard { .unwrap() .fail(&CIRCUIT_BREAKERS_BROKEN, err); } - CompactionError::AlreadyRunning(_) => {} } } diff --git a/pageserver/src/tenant/tasks.rs b/pageserver/src/tenant/tasks.rs index 2ba1ad2674..356f495972 100644 --- a/pageserver/src/tenant/tasks.rs +++ b/pageserver/src/tenant/tasks.rs @@ -303,7 +303,6 @@ pub(crate) fn log_compaction_error( let level = match err { e if e.is_cancel() => return, ShuttingDown => return, - AlreadyRunning(_) => Level::ERROR, CollectKeySpaceError(_) => Level::ERROR, _ if task_cancelled => Level::INFO, Other(err) => { diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index c2b49c0296..296b922599 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -2067,9 +2067,6 @@ impl Timeline { Err(CompactionError::ShuttingDown) => { // Covered by the `Err(e) if e.is_cancel()` branch. } - Err(CompactionError::AlreadyRunning(_)) => { - // Covered by the `Err(e) if e.is_cancel()` branch. 
- } Err(CompactionError::Other(_)) => { self.compaction_failed.store(true, AtomicOrdering::Relaxed) } @@ -6018,8 +6015,6 @@ pub(crate) enum CompactionError { CollectKeySpaceError(#[from] CollectKeySpaceError), #[error(transparent)] Other(anyhow::Error), - #[error("Compaction already running: {0}")] - AlreadyRunning(&'static str), } impl CompactionError { @@ -6028,7 +6023,6 @@ impl CompactionError { matches!( self, Self::ShuttingDown - | Self::AlreadyRunning(_) | Self::CollectKeySpaceError(CollectKeySpaceError::Cancelled) | Self::CollectKeySpaceError(CollectKeySpaceError::PageRead( PageReconstructError::Cancelled diff --git a/pageserver/src/tenant/timeline/compaction.rs b/pageserver/src/tenant/timeline/compaction.rs index ac3930fb71..2c0b98c1e2 100644 --- a/pageserver/src/tenant/timeline/compaction.rs +++ b/pageserver/src/tenant/timeline/compaction.rs @@ -572,7 +572,7 @@ impl GcCompactionQueue { match res { Ok(res) => Ok(res), Err(CompactionError::ShuttingDown) => Err(CompactionError::ShuttingDown), - Err(_) => { + Err(CompactionError::CollectKeySpaceError(_) | CompactionError::Other(_)) => { // There are some cases where traditional gc might collect some layer // files causing gc-compaction cannot read the full history of the key. // This needs to be resolved in the long-term by improving the compaction @@ -591,9 +591,9 @@ impl GcCompactionQueue { timeline: &Arc, ) -> Result { let Ok(_one_op_at_a_time_guard) = self.consumer_lock.try_lock() else { - return Err(CompactionError::AlreadyRunning( - "cannot run gc-compaction because another gc-compaction is running. This should not happen because we only call this function from the gc-compaction queue.", - )); + return Err(CompactionError::Other(anyhow::anyhow!( + "cannot run gc-compaction because another gc-compaction is running. This should not happen because we only call this function from the gc-compaction queue." 
+ ))); }; let has_pending_tasks; let mut yield_for_l0 = false; From 477ab12b691be1e44f557d45dbe64294009259d4 Mon Sep 17 00:00:00 2001 From: Vlad Lazar Date: Tue, 8 Jul 2025 17:46:55 +0100 Subject: [PATCH 19/65] pageserver: touch up broker subscription reset (#12503) ## Problem The goal of this code was to test out if resetting the broker subscription helps alleviate the issues we've been seeing in staging. Looks like it did the trick. However, the original version was too eager. ## Summary of Changes Only reset the stream when: * we are waiting for WAL * there are no connection candidates lined up * we're not already connected to a safekeeper --- .../timeline/walreceiver/connection_manager.rs | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/pageserver/src/tenant/timeline/walreceiver/connection_manager.rs b/pageserver/src/tenant/timeline/walreceiver/connection_manager.rs index 9b151d2449..aba94244a3 100644 --- a/pageserver/src/tenant/timeline/walreceiver/connection_manager.rs +++ b/pageserver/src/tenant/timeline/walreceiver/connection_manager.rs @@ -182,12 +182,19 @@ pub(super) async fn connection_manager_loop_step( } }, + // If we've not received any updates from the broker for a while, are waiting for WAL + // and have no safekeeper connection or connection candidates, then it might be that + // the broker subscription is wedged. Drop the current subscription and re-subscribe + // with the goal of unblocking it. _ = broker_reset_interval.tick() => { - if wait_lsn_status.borrow().is_some() { - tracing::warn!("No broker updates received for a while, but waiting for WAL.
Re-setting stream ...") - } + let awaiting_lsn = wait_lsn_status.borrow().is_some(); + let no_candidates = connection_manager_state.wal_stream_candidates.is_empty(); + let no_connection = connection_manager_state.wal_connection.is_none(); - broker_subscription = subscribe_for_timeline_updates(broker_client, id, cancel).await?; + if awaiting_lsn && no_candidates && no_connection { + tracing::warn!("No broker updates received for a while, but waiting for WAL. Re-setting stream ..."); + broker_subscription = subscribe_for_timeline_updates(broker_client, id, cancel).await?; + } }, new_event = async { From a06c560ad05ecec0c13901f97916807259665bfa Mon Sep 17 00:00:00 2001 From: "Alex Chi Z." <4198311+skyzh@users.noreply.github.com> Date: Tue, 8 Jul 2025 12:55:00 -0400 Subject: [PATCH 20/65] feat(pageserver): critical path feature flags (#12449) ## Problem Some feature flags are used heavily on the critical path and we want the "get feature flag" operation as cheap as possible. ## Summary of changes Add a `test_remote_size_flag` as an example of such flags. In the future, we can use macro to generate all those fields. The flag is updated in the housekeeping loop. The retrieval of the flag is simply reading an atomic flag. 
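The update-in-housekeeping / read-on-critical-path split can be sketched as follows; the struct and method names below are simplified stand-ins for the actual `TenantFeatureResolver` fields, not the real API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Simplified sketch: the expensive feature-flag evaluation runs only in the
// periodic housekeeping loop, which caches the result in an atomic; the
// critical path then pays a single relaxed atomic load per check.
struct CriticalPathFlags {
    test_remote_size: AtomicBool,
}

impl CriticalPathFlags {
    // Called from the housekeeping loop; `evaluate` stands in for the
    // (comparatively slow) feature-flag resolver.
    fn refresh(&self, evaluate: impl Fn() -> bool) {
        self.test_remote_size.store(evaluate(), Ordering::Relaxed);
    }

    // Called on the critical path: one atomic read, no allocation, no locks.
    fn test_remote_size_enabled(&self) -> bool {
        self.test_remote_size.load(Ordering::Relaxed)
    }
}

fn main() {
    let flags = CriticalPathFlags { test_remote_size: AtomicBool::new(false) };
    assert!(!flags.test_remote_size_enabled()); // default until first refresh
    flags.refresh(|| true); // housekeeping observes the flag as enabled
    assert!(flags.test_remote_size_enabled());
}
```

The trade-off of this pattern is staleness bounded by the housekeeping interval, which is acceptable for feature flags that are already only refreshed periodically.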
--------- Signed-off-by: Alex Chi Z --- pageserver/src/feature_resolver.rs | 29 ++++++++++++++++++++++++----- pageserver/src/tenant.rs | 8 ++++---- pageserver/src/tenant/timeline.rs | 6 +++--- 3 files changed, 31 insertions(+), 12 deletions(-) diff --git a/pageserver/src/feature_resolver.rs b/pageserver/src/feature_resolver.rs index 6ce4522080..65cac8eea1 100644 --- a/pageserver/src/feature_resolver.rs +++ b/pageserver/src/feature_resolver.rs @@ -1,4 +1,8 @@ -use std::{collections::HashMap, sync::Arc, time::Duration}; +use std::{ + collections::HashMap, + sync::{Arc, atomic::AtomicBool}, + time::Duration, +}; use arc_swap::ArcSwap; use pageserver_api::config::NodeMetadata; @@ -355,11 +359,17 @@ impl PerTenantProperties { } } -#[derive(Clone)] pub struct TenantFeatureResolver { inner: FeatureResolver, tenant_id: TenantId, - cached_tenant_properties: Arc>>, + cached_tenant_properties: ArcSwap>, + + // Add feature flag on the critical path below. + // + // If a feature flag will be used on the critical path, we will update it in the tenant housekeeping loop instead of + // resolving directly by calling `evaluate_multivariate` or `evaluate_boolean`. Remember to update the flag in the + // housekeeping loop. The user should directly read this atomic flag instead of using the set of evaluate functions. + pub feature_test_remote_size_flag: AtomicBool, } impl TenantFeatureResolver { @@ -367,7 +377,8 @@ impl TenantFeatureResolver { Self { inner, tenant_id, - cached_tenant_properties: Arc::new(ArcSwap::new(Arc::new(HashMap::new()))), + cached_tenant_properties: ArcSwap::new(Arc::new(HashMap::new())), + feature_test_remote_size_flag: AtomicBool::new(false), } } @@ -396,7 +407,8 @@ impl TenantFeatureResolver { self.inner.is_feature_flag_boolean(flag_key) } - pub fn update_cached_tenant_properties(&self, tenant_shard: &TenantShard) { + /// Refresh the cached properties and flags on the critical path.
+ pub fn refresh_properties_and_flags(&self, tenant_shard: &TenantShard) { let mut remote_size_mb = None; for timeline in tenant_shard.list_timelines() { let size = timeline.metrics.resident_physical_size_get(); @@ -410,5 +422,12 @@ impl TenantFeatureResolver { self.cached_tenant_properties.store(Arc::new( PerTenantProperties { remote_size_mb }.into_posthog_properties(), )); + + // BEGIN: Update the feature flag on the critical path. + self.feature_test_remote_size_flag.store( + self.evaluate_boolean("test-remote-size-flag").is_ok(), + std::sync::atomic::Ordering::Relaxed, + ); + // END: Update the feature flag on the critical path. } } diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index 49b92915da..96ed4672a6 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -388,7 +388,7 @@ pub struct TenantShard { l0_flush_global_state: L0FlushGlobalState, - pub(crate) feature_resolver: TenantFeatureResolver, + pub(crate) feature_resolver: Arc, } impl std::fmt::Debug for TenantShard { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { @@ -3411,7 +3411,7 @@ impl TenantShard { } // Update the feature resolver with the latest tenant-spcific data. 
- self.feature_resolver.update_cached_tenant_properties(self); + self.feature_resolver.refresh_properties_and_flags(self); } pub fn timeline_has_no_attached_children(&self, timeline_id: TimelineId) -> bool { @@ -4500,10 +4500,10 @@ impl TenantShard { gc_block: Default::default(), l0_flush_global_state, basebackup_cache, - feature_resolver: TenantFeatureResolver::new( + feature_resolver: Arc::new(TenantFeatureResolver::new( feature_resolver, tenant_shard_id.tenant_id, - ), + )), } } diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index 296b922599..44a4f1e911 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -201,7 +201,7 @@ pub struct TimelineResources { pub l0_compaction_trigger: Arc, pub l0_flush_global_state: l0_flush::L0FlushGlobalState, pub basebackup_cache: Arc, - pub feature_resolver: TenantFeatureResolver, + pub feature_resolver: Arc, } pub struct Timeline { @@ -449,7 +449,7 @@ pub struct Timeline { /// A channel to send async requests to prepare a basebackup for the basebackup cache. basebackup_cache: Arc, - feature_resolver: TenantFeatureResolver, + feature_resolver: Arc, } pub(crate) enum PreviousHeatmap { @@ -3122,7 +3122,7 @@ impl Timeline { basebackup_cache: resources.basebackup_cache, - feature_resolver: resources.feature_resolver, + feature_resolver: resources.feature_resolver.clone(), }; result.repartition_threshold = From 81e7218c278288cd8e95ed3f50f4e4a423391dc8 Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Tue, 8 Jul 2025 20:15:13 +0200 Subject: [PATCH 21/65] pageserver: tighten up gRPC `page_api::Client` (#12396) This patch tightens up `page_api::Client`. It's mostly superficial changes, but also adds a new constructor that takes an existing gRPC channel, for use with the communicator connection pool. 
--- compute_tools/src/compute.rs | 2 +- compute_tools/src/lsn_lease.rs | 2 +- pageserver/page_api/src/client.rs | 327 +++++++++--------- pageserver/pagebench/src/cmd/basebackup.rs | 2 +- .../pagebench/src/cmd/getpage_latest_lsn.rs | 2 +- 5 files changed, 159 insertions(+), 176 deletions(-) diff --git a/compute_tools/src/compute.rs b/compute_tools/src/compute.rs index e870cecc58..f25aff1110 100644 --- a/compute_tools/src/compute.rs +++ b/compute_tools/src/compute.rs @@ -1057,7 +1057,7 @@ impl ComputeNode { }; let (reader, connected) = tokio::runtime::Handle::current().block_on(async move { - let mut client = page_api::Client::new( + let mut client = page_api::Client::connect( shard0_connstr, spec.tenant_id, spec.timeline_id, diff --git a/compute_tools/src/lsn_lease.rs b/compute_tools/src/lsn_lease.rs index 3346c18c0d..bb0828429d 100644 --- a/compute_tools/src/lsn_lease.rs +++ b/compute_tools/src/lsn_lease.rs @@ -192,7 +192,7 @@ fn acquire_lsn_lease_grpc( lsn: Lsn, ) -> Result> { tokio::runtime::Handle::current().block_on(async move { - let mut client = page_api::Client::new( + let mut client = page_api::Client::connect( connstring.to_string(), tenant_shard_id.tenant_id, timeline_id, diff --git a/pageserver/page_api/src/client.rs b/pageserver/page_api/src/client.rs index 65e41540b8..6523d00d3d 100644 --- a/pageserver/page_api/src/client.rs +++ b/pageserver/page_api/src/client.rs @@ -1,23 +1,151 @@ -use anyhow::Result; +use anyhow::Context as _; use futures::{Stream, StreamExt as _, TryStreamExt as _}; use tokio::io::AsyncRead; use tokio_util::io::StreamReader; +use tonic::codec::CompressionEncoding; use tonic::metadata::AsciiMetadataValue; -use tonic::metadata::errors::InvalidMetadataValue; -use tonic::transport::Channel; -use tonic::{Request, Streaming}; +use tonic::service::Interceptor; +use tonic::service::interceptor::InterceptedService; +use tonic::transport::{Channel, Endpoint}; -use utils::id::TenantId; -use utils::id::TimelineId; +use utils::id::{TenantId, 
TimelineId}; use utils::shard::ShardIndex; -use crate::model; +use crate::model::*; use crate::proto; -/// -/// AuthInterceptor adds tenant, timeline, and auth header to the channel. These -/// headers are required at the pageserver. -/// +/// A basic Pageserver gRPC client, for a single tenant shard. This API uses native Rust domain +/// types from `model` rather than generated Protobuf types. +pub struct Client { + inner: proto::PageServiceClient>, +} + +impl Client { + /// Connects to the given gRPC endpoint. + pub async fn connect( + endpoint: E, + tenant_id: TenantId, + timeline_id: TimelineId, + shard_id: ShardIndex, + auth_token: Option, + compression: Option, + ) -> anyhow::Result + where + E: TryInto + Send + Sync + 'static, + >::Error: std::error::Error + Send + Sync, + { + let endpoint: Endpoint = endpoint.try_into().context("invalid endpoint")?; + let channel = endpoint.connect().await?; + Self::new( + channel, + tenant_id, + timeline_id, + shard_id, + auth_token, + compression, + ) + } + + /// Creates a new client using the given gRPC channel. + pub fn new( + channel: Channel, + tenant_id: TenantId, + timeline_id: TimelineId, + shard_id: ShardIndex, + auth_token: Option, + compression: Option, + ) -> anyhow::Result { + let auth = AuthInterceptor::new(tenant_id, timeline_id, shard_id, auth_token)?; + let mut inner = proto::PageServiceClient::with_interceptor(channel, auth); + + if let Some(compression) = compression { + // TODO: benchmark this (including network latency). + inner = inner + .accept_compressed(compression) + .send_compressed(compression); + } + + Ok(Self { inner }) + } + + /// Returns whether a relation exists. + pub async fn check_rel_exists( + &mut self, + req: CheckRelExistsRequest, + ) -> tonic::Result { + let req = proto::CheckRelExistsRequest::from(req); + let resp = self.inner.check_rel_exists(req).await?.into_inner(); + Ok(resp.into()) + } + + /// Fetches a base backup. 
+ pub async fn get_base_backup( + &mut self, + req: GetBaseBackupRequest, + ) -> tonic::Result> { + let req = proto::GetBaseBackupRequest::from(req); + let chunks = self.inner.get_base_backup(req).await?.into_inner(); + Ok(StreamReader::new( + chunks + .map_ok(|resp| resp.chunk) + .map_err(std::io::Error::other), + )) + } + + /// Returns the total size of a database, as # of bytes. + pub async fn get_db_size(&mut self, req: GetDbSizeRequest) -> tonic::Result { + let req = proto::GetDbSizeRequest::from(req); + let resp = self.inner.get_db_size(req).await?.into_inner(); + Ok(resp.into()) + } + + /// Fetches pages. + /// + /// This is implemented as a bidirectional streaming RPC for performance. Per-request errors are + /// typically returned as status_code instead of errors, to avoid tearing down the entire stream + /// via a tonic::Status error. + pub async fn get_pages( + &mut self, + reqs: impl Stream + Send + 'static, + ) -> tonic::Result> + Send + 'static> { + let reqs = reqs.map(proto::GetPageRequest::from); + let resps = self.inner.get_pages(reqs).await?.into_inner(); + Ok(resps.map_ok(GetPageResponse::from)) + } + + /// Returns the size of a relation, as # of blocks. + pub async fn get_rel_size( + &mut self, + req: GetRelSizeRequest, + ) -> tonic::Result { + let req = proto::GetRelSizeRequest::from(req); + let resp = self.inner.get_rel_size(req).await?.into_inner(); + Ok(resp.into()) + } + + /// Fetches an SLRU segment. + pub async fn get_slru_segment( + &mut self, + req: GetSlruSegmentRequest, + ) -> tonic::Result { + let req = proto::GetSlruSegmentRequest::from(req); + let resp = self.inner.get_slru_segment(req).await?.into_inner(); + Ok(resp.try_into()?) + } + + /// Acquires or extends a lease on the given LSN. This guarantees that the Pageserver won't + /// garbage collect the LSN until the lease expires. Must be acquired on all relevant shards. 
+ /// + /// Returns the lease expiration time, or a FailedPrecondition status if the lease could not be + /// acquired because the LSN has already been garbage collected. + pub async fn lease_lsn(&mut self, req: LeaseLsnRequest) -> tonic::Result<SystemTime> { + let req = proto::LeaseLsnRequest::from(req); + let resp = self.inner.lease_lsn(req).await?.into_inner(); + Ok(resp.try_into()?) + } +} + +/// Adds authentication metadata to gRPC requests. #[derive(Clone)] struct AuthInterceptor { tenant_id: AsciiMetadataValue, @@ -30,174 +158,29 @@ impl AuthInterceptor { fn new( tenant_id: TenantId, timeline_id: TimelineId, - auth_token: Option, shard_id: ShardIndex, - ) -> Result { - let tenant_ascii: AsciiMetadataValue = tenant_id.to_string().try_into()?; - let timeline_ascii: AsciiMetadataValue = timeline_id.to_string().try_into()?; - let shard_ascii: AsciiMetadataValue = shard_id.to_string().try_into()?; - - let auth_header: Option = match auth_token { - Some(token) => Some(format!("Bearer {token}").try_into()?), - None => None, - }; - + auth_token: Option<String>, + ) -> anyhow::Result<Self> { Ok(Self { - tenant_id: tenant_ascii, - shard_id: shard_ascii, - timeline_id: timeline_ascii, - auth_header, + tenant_id: tenant_id.to_string().try_into()?, + timeline_id: timeline_id.to_string().try_into()?, + shard_id: shard_id.to_string().try_into()?, + auth_header: auth_token + .map(|token| format!("Bearer {token}").try_into()) + .transpose()?, }) } } -impl tonic::service::Interceptor for AuthInterceptor { - fn call(&mut self, mut req: tonic::Request<()>) -> Result, tonic::Status> { - req.metadata_mut() - .insert("neon-tenant-id", self.tenant_id.clone()); - req.metadata_mut() - .insert("neon-shard-id", self.shard_id.clone()); - req.metadata_mut() - .insert("neon-timeline-id", self.timeline_id.clone()); - if let Some(auth_header) = &self.auth_header { - req.metadata_mut() - .insert("authorization", auth_header.clone()); +impl Interceptor for AuthInterceptor { + fn call(&mut self, mut req:
tonic::Request<()>) -> tonic::Result<tonic::Request<()>> { + let metadata = req.metadata_mut(); + metadata.insert("neon-tenant-id", self.tenant_id.clone()); + metadata.insert("neon-timeline-id", self.timeline_id.clone()); + metadata.insert("neon-shard-id", self.shard_id.clone()); + if let Some(ref auth_header) = self.auth_header { + metadata.insert("authorization", auth_header.clone()); } Ok(req) } } - -#[derive(Clone)] -pub struct Client { - client: proto::PageServiceClient< - tonic::service::interceptor::InterceptedService, - >, -} - -impl Client { - pub async fn new + Send + Sync + 'static>( - into_endpoint: T, - tenant_id: TenantId, - timeline_id: TimelineId, - shard_id: ShardIndex, - auth_header: Option, - compression: Option, - ) -> anyhow::Result { - let endpoint: tonic::transport::Endpoint = into_endpoint - .try_into() - .map_err(|_e| anyhow::anyhow!("failed to convert endpoint"))?; - let channel = endpoint.connect().await?; - let auth = AuthInterceptor::new(tenant_id, timeline_id, auth_header, shard_id) - .map_err(|e| anyhow::anyhow!(e.to_string()))?; - let mut client = proto::PageServiceClient::with_interceptor(channel, auth); - - if let Some(compression) = compression { - // TODO: benchmark this (including network latency). - client = client - .accept_compressed(compression) - .send_compressed(compression); - } - - Ok(Self { client }) - } - - /// Returns whether a relation exists. - pub async fn check_rel_exists( - &mut self, - req: model::CheckRelExistsRequest, - ) -> Result { - let proto_req = proto::CheckRelExistsRequest::from(req); - - let response = self.client.check_rel_exists(proto_req).await?; - - let proto_resp = response.into_inner(); - Ok(proto_resp.into()) - } - - /// Fetches a base backup.
- pub async fn get_base_backup( - &mut self, - req: model::GetBaseBackupRequest, - ) -> Result, tonic::Status> { - let req = proto::GetBaseBackupRequest::from(req); - let chunks = self.client.get_base_backup(req).await?.into_inner(); - let reader = StreamReader::new( - chunks - .map_ok(|resp| resp.chunk) - .map_err(std::io::Error::other), - ); - Ok(reader) - } - - /// Returns the total size of a database, as # of bytes. - pub async fn get_db_size( - &mut self, - req: model::GetDbSizeRequest, - ) -> Result { - let proto_req = proto::GetDbSizeRequest::from(req); - - let response = self.client.get_db_size(proto_req).await?; - Ok(response.into_inner().into()) - } - - /// Fetches pages. - /// - /// This is implemented as a bidirectional streaming RPC for performance. - /// Per-request errors are often returned as status_code instead of errors, - /// to avoid tearing down the entire stream via tonic::Status. - pub async fn get_pages( - &mut self, - inbound: ReqSt, - ) -> Result< - impl Stream> + Send + 'static, - tonic::Status, - > - where - ReqSt: Stream + Send + 'static, - { - let outbound_proto = inbound.map(|domain_req| domain_req.into()); - - let req_new = Request::new(outbound_proto); - - let response_stream: Streaming = - self.client.get_pages(req_new).await?.into_inner(); - - let domain_stream = response_stream.map_ok(model::GetPageResponse::from); - - Ok(domain_stream) - } - - /// Returns the size of a relation, as # of blocks. - pub async fn get_rel_size( - &mut self, - req: model::GetRelSizeRequest, - ) -> Result { - let proto_req = proto::GetRelSizeRequest::from(req); - let response = self.client.get_rel_size(proto_req).await?; - let proto_resp = response.into_inner(); - Ok(proto_resp.into()) - } - - /// Fetches an SLRU segment. 
- pub async fn get_slru_segment( - &mut self, - req: model::GetSlruSegmentRequest, - ) -> Result { - let proto_req = proto::GetSlruSegmentRequest::from(req); - let response = self.client.get_slru_segment(proto_req).await?; - Ok(response.into_inner().try_into()?) - } - - /// Acquires or extends a lease on the given LSN. This guarantees that the Pageserver won't - /// garbage collect the LSN until the lease expires. Must be acquired on all relevant shards. - /// - /// Returns the lease expiration time, or a FailedPrecondition status if the lease could not be - /// acquired because the LSN has already been garbage collected. - pub async fn lease_lsn( - &mut self, - req: model::LeaseLsnRequest, - ) -> Result { - let req = proto::LeaseLsnRequest::from(req); - Ok(self.client.lease_lsn(req).await?.into_inner().try_into()?) - } -} diff --git a/pageserver/pagebench/src/cmd/basebackup.rs b/pageserver/pagebench/src/cmd/basebackup.rs index 4b7a70504a..c14bb73136 100644 --- a/pageserver/pagebench/src/cmd/basebackup.rs +++ b/pageserver/pagebench/src/cmd/basebackup.rs @@ -326,7 +326,7 @@ impl GrpcClient { ttid: TenantTimelineId, compression: bool, ) -> anyhow::Result { - let inner = page_api::Client::new( + let inner = page_api::Client::connect( connstring.to_string(), ttid.tenant_id, ttid.timeline_id, diff --git a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs index a297819e9b..f14caf548c 100644 --- a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs +++ b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs @@ -625,7 +625,7 @@ impl GrpcClient { ttid: TenantTimelineId, compression: bool, ) -> anyhow::Result { - let mut client = page_api::Client::new( + let mut client = page_api::Client::connect( connstring.to_string(), ttid.tenant_id, ttid.timeline_id, From 3dad4698ec134e5e5491fa7c9c5195041216e248 Mon Sep 17 00:00:00 2001 From: HaoyuHuang Date: Tue, 8 Jul 2025 12:43:01 -0700 Subject: [PATCH 22/65] PS changes #1 (#12467) # 
TLDR All changes are no-op except: 1. publishing additional metrics; 2. the Problem VI fix. ## Problem I It has come to my attention that the Neon Storage Controller doesn't correctly update its "observed" state of tenants previously associated with PSs that have come back up after a local data loss. It still thinks that the old tenants are attached to page servers and won't ask more questions. The pageserver has enough information from the reattach request/response to tell that something is wrong, but it doesn't do anything about it either. We need to detect this situation in production while I work on a fix. (I think there is just some misunderstanding about how Neon manages their pageserver deployments which got me confused about all the invariants.) ## Summary of changes I Added a `pageserver_local_data_loss_suspected` gauge metric that will be set to 1 if we detect a problematic situation from the reattach response. The problematic situation is when the PS doesn't have any local tenants but receives a reattach response containing tenants. We can set up an alert using this metric. The alert should be raised whenever this metric reports a non-zero number. Also added an HTTP PUT `http://pageserver/hadron-internal/reset_alert_gauges` API on the pageserver that can be used to reset the gauge and the alert once we manually rectify the situation (by restarting the HCC). ## Problem II Azure upload is 3x slower than AWS, which makes ingestion 3x slower. The reason is that Azure upload in the page server is much slower => higher flush latency => disk consistent LSN falls behind => higher back pressure. ## Summary of changes II Use the Azure put_block API to upload a 1 GB layer file in 8 blocks in parallel. I set the put_block block size to 128 MB by default in the azure config. To minimize Neon changes, the upload function passes the layer file path to the Azure upload code through the storage metadata.
This allows the Azure put_block code to use FileChunkStreamRead to stream-read one partition of the file instead of loading all file data in memory and splitting it into 8 128 MB chunks. ## How is this tested? II 1. The Rust test_real_azure test covers the put_block change. 2. I deployed the change in Azure dev and saw flush latency drop from ~30 seconds to 10 seconds. 3. I also ran a bunch of stress tests using sqlsmith and 100 GB TPCDS runs. ## Problem III Currently Neon limits the number of compaction tasks to 3/4 * CPU cores. This limits the overall compaction throughput and can easily cause head-of-the-line blocking problems when a few large tenants are compacting. ## Summary of changes III This PR increases the limit of compaction tasks to `BG_TASKS_PER_THREAD` (default 4) * CPU cores. Note that `CONCURRENT_BACKGROUND_TASKS` also limits some other tasks, such as `logical_size_calculation` and `layer eviction`, but compaction should be the most frequent and time-consuming task. ## Summary of changes IV This PR adds the following PageServer metrics: 1. `pageserver_disk_usage_based_eviction_evicted_bytes_total`: captures the total number of bytes evicted. It's more straightforward to see the bytes directly instead of layers. 2. `pageserver_active_storage_operations_count`: captures the active storage operations, e.g., flush, L0 compaction, image creation, etc. It's useful to visualize these active operations to get a better idea of what PageServers are spending cycles on in the background. ## Summary of changes V When investigating data corruptions, it's useful to search the base image and all WAL records of a page up to an LSN, i.e., a breakdown of a GetPage@LSN request. This PR implements this functionality with two tools: 1. Extended `pagectl` with a new command to search the layer files for a given key up to a given LSN from the `index_part.json` file. The output can be used to download the files from S3 and then search the file contents using the second tool.
Example usage: ``` cargo run --bin pagectl index-part search --tenant-id 09b99ea3239bbb3b2d883a59f087659d --timeline-id 7bedf4a6995baff7c0421ff9aebbcdab --path ~/Downloads/corruption/index_part.json-0000000c-formatted --key 000000067F000080140000802100000D61BD --lsn 70C/BF3D61D8 ``` Example output: ``` tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000067F0000801400000B180000000002-000000067F0000801400008028000002FEFF__000007089F0B5381-0000070C7679EEB9-0000000c tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000000000000000000000000000000000-000000067F0000801400008028000002F3F1__000006DD95B6F609-000006E2BA14C369-0000000c tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000067F0000801400000B180000000002-000000067F000080140000802100001B0973__000006D33429F539-000006DD95B6F609-0000000c tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000067F0000801400000B180000000002-000000067F00008014000080210000164D81__000006C6343B2D31-000006D33429F539-0000000b tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000067F0000801400000B180000000002-000000067F0000801400008021000017687B__000006BA344FA7F1-000006C6343B2D31-0000000b tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000067F0000801400000B180000000002-000000067F00008014000080210000165BAB__000006AD34613D19-000006BA344FA7F1-0000000b tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000067F0000801400000B180000000002-000000067F00008014000080210000137A39__0000069F34773461-000006AD34613D19-0000000b tenants/09b99ea3239bbb3b2d883a59f087659d-0304/timelines/7bedf4a6995baff7c0421ff9aebbcdab/000000067F000080140000802100000D4000-000000067F000080140000802100000F0000__0000069F34773460-0000000b ``` 2. 
Added a unit test to search the layer file contents. It's not implemented as part of `pagectl` because it depends on some test harness code, which can only be used by unit tests. Example usage: ``` cargo test --package pageserver --lib -- tenant::debug::test_search_key --exact --nocapture -- --tenant-id 09b99ea3239bbb3b2d883a59f087659d --timeline-id 7bedf4a6995baff7c0421ff9aebbcdab --data-dir /Users/chen.luo/Downloads/corruption --key 000000067F000080140000802100000D61BD --lsn 70C/BF3D61D8 ``` Example output: ``` # omitted image for brevity delta: 69F/769D8180: will_init: false, "OgAAALGkuwXwYp12nwYAAECGAAASIqLHAAAAAH8GAAAUgAAAIYAAAL1hDQD/DLGkuwUDAAAAEAAWAA==" delta: 69F/769CB6D8: will_init: false, "PQAAALGkuwXotZx2nwYAABAJAAAFk7tpACAGAH8GAAAUgAAAIYAAAL1hDQD/CQUAEAASALExuwUBAAAAAA==" ``` ## Problem VI Currently, when the page service resolves shards from page numbers, it doesn't fully support the case where the shard is split in the middle. This will lead to query failures during the tenant split for either commit or abort cases (it's mostly for abort). ## Summary of changes VI This PR adds retry logic in `Cache::get()` to deal with shard resolution errors more gracefully. Specifically, it'll clear the cache and retry, instead of failing the query immediately. It also reduces the internal timeout to make retries faster. The PR also fixes a very obvious bug in `TenantManager::resolve_attached_shard` where the code tries to cache the computed shard number, but forgets to recompute when the shard count is different.
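The clear-cache-and-retry behavior described above can be sketched roughly as follows. This is an illustrative standalone sketch, not the actual `Cache::get()` code from `timeline/handle.rs`; all type and function names here (`ShardCache`, `get_with_retry`, `clear_and_refresh`) are hypothetical:

```rust
use std::collections::HashMap;

// Illustrative stand-in for the per-connection shard cache. The real cache
// holds timeline handles resolved through the tenant manager, not strings.
struct ShardCache {
    map: HashMap<u32, String>, // page number -> resolved shard (illustrative)
}

impl ShardCache {
    fn resolve(&self, page: u32) -> Option<String> {
        self.map.get(&page).cloned()
    }

    // In the real code, clearing the cache forces re-resolution through the
    // tenant manager, which picks up the post-split shard layout.
    fn clear_and_refresh(&mut self) {
        self.map.clear();
        self.map.insert(7, "shard after split".to_string());
    }
}

// Retry shard resolution instead of failing the query immediately: on a miss
// (e.g. a stale mapping while a shard split is in flight), drop the cached
// state and resolve again, up to a bounded number of attempts.
fn get_with_retry(cache: &mut ShardCache, page: u32, max_attempts: u32) -> Result<String, String> {
    for _ in 0..max_attempts {
        if let Some(shard) = cache.resolve(page) {
            return Ok(shard);
        }
        cache.clear_and_refresh();
    }
    Err(format!("could not resolve shard for page {page}"))
}

fn main() {
    // Start with a stale (empty) cache: the first attempt misses, the cache
    // is refreshed, and the second attempt succeeds.
    let mut cache = ShardCache { map: HashMap::new() };
    println!("{}", get_with_retry(&mut cache, 7, 2).unwrap());
}
```

The bounded attempt count matters: combined with the reduced internal timeout mentioned above, it keeps retries fast while still failing promptly for pages that genuinely cannot be resolved.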
--------- Co-authored-by: William Huang Co-authored-by: Haoyu Huang Co-authored-by: Chen Luo Co-authored-by: Vlad Lazar Co-authored-by: Vlad Lazar --- Cargo.lock | 3 + libs/remote_storage/Cargo.toml | 3 + libs/remote_storage/src/azure_blob.rs | 143 ++++++- libs/remote_storage/src/config.rs | 18 + libs/remote_storage/tests/common/mod.rs | 34 +- libs/remote_storage/tests/test_real_azure.rs | 3 + pageserver/Cargo.toml | 1 + pageserver/ctl/src/index_part.rs | 102 ++++- pageserver/src/disk_usage_eviction_task.rs | 3 + pageserver/src/http/routes.rs | 15 + pageserver/src/metrics.rs | 91 ++++- pageserver/src/page_service.rs | 6 +- pageserver/src/tenant.rs | 16 +- pageserver/src/tenant/debug.rs | 366 ++++++++++++++++++ pageserver/src/tenant/mgr.rs | 17 +- .../tenant/remote_timeline_client/upload.rs | 22 +- pageserver/src/tenant/tasks.rs | 15 + pageserver/src/tenant/timeline.rs | 2 +- pageserver/src/tenant/timeline/handle.rs | 49 ++- pageserver/src/walredo.rs | 70 ++-- test_runner/fixtures/metrics.py | 3 + .../fixtures/pageserver/allowed_errors.py | 8 + test_runner/regress/test_sharding.py | 168 ++++++++ 23 files changed, 1097 insertions(+), 61 deletions(-) create mode 100644 pageserver/src/tenant/debug.rs diff --git a/Cargo.lock b/Cargo.lock index 237defaec3..39c43d94a3 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4339,6 +4339,7 @@ dependencies = [ "arc-swap", "async-compression", "async-stream", + "base64 0.22.1", "bincode", "bit_field", "byteorder", @@ -5684,6 +5685,8 @@ dependencies = [ "azure_identity", "azure_storage", "azure_storage_blobs", + "base64 0.22.1", + "byteorder", "bytes", "camino", "camino-tempfile", diff --git a/libs/remote_storage/Cargo.toml b/libs/remote_storage/Cargo.toml index bd18d80915..69316fd493 100644 --- a/libs/remote_storage/Cargo.toml +++ b/libs/remote_storage/Cargo.toml @@ -13,6 +13,7 @@ aws-smithy-async.workspace = true aws-smithy-types.workspace = true aws-config.workspace = true aws-sdk-s3.workspace = true +base64.workspace = true 
bytes.workspace = true camino = { workspace = true, features = ["serde1"] } humantime-serde.workspace = true @@ -41,6 +42,8 @@ http-body-util.workspace = true itertools.workspace = true sync_wrapper = { workspace = true, features = ["futures"] } +byteorder = "1.4" + [dev-dependencies] camino-tempfile.workspace = true test-context.workspace = true diff --git a/libs/remote_storage/src/azure_blob.rs b/libs/remote_storage/src/azure_blob.rs index e9c24ac723..db30829216 100644 --- a/libs/remote_storage/src/azure_blob.rs +++ b/libs/remote_storage/src/azure_blob.rs @@ -14,17 +14,25 @@ use anyhow::{Context, Result, anyhow}; use azure_core::request_options::{IfMatchCondition, MaxResults, Metadata, Range}; use azure_core::{Continuable, HttpClient, RetryOptions, TransportOptions}; use azure_storage::StorageCredentials; -use azure_storage_blobs::blob::operations::GetBlobBuilder; +use azure_storage_blobs::blob::BlobBlockType; +use azure_storage_blobs::blob::BlockList; use azure_storage_blobs::blob::{Blob, CopyStatus}; use azure_storage_blobs::container::operations::ListBlobsBuilder; -use azure_storage_blobs::prelude::{ClientBuilder, ContainerClient}; +use azure_storage_blobs::prelude::ClientBuilder; +use azure_storage_blobs::{blob::operations::GetBlobBuilder, prelude::ContainerClient}; +use base64::{Engine as _, engine::general_purpose::URL_SAFE}; +use byteorder::{BigEndian, ByteOrder}; use bytes::Bytes; +use camino::Utf8Path; use futures::FutureExt; use futures::future::Either; use futures::stream::Stream; use futures_util::{StreamExt, TryStreamExt}; use http_types::{StatusCode, Url}; use scopeguard::ScopeGuard; +use tokio::fs::File; +use tokio::io::AsyncReadExt; +use tokio::io::AsyncSeekExt; use tokio_util::sync::CancellationToken; use tracing::debug; use utils::backoff; @@ -51,6 +59,9 @@ pub struct AzureBlobStorage { // Alternative timeout used for metadata objects which are expected to be small pub small_timeout: Duration, + /* BEGIN_HADRON */ + pub put_block_size_mb: 
Option, + /* END_HADRON */ } impl AzureBlobStorage { @@ -107,6 +118,9 @@ impl AzureBlobStorage { concurrency_limiter: ConcurrencyLimiter::new(azure_config.concurrency_limit.get()), timeout, small_timeout, + /* BEGIN_HADRON */ + put_block_size_mb: azure_config.put_block_size_mb, + /* END_HADRON */ }) } @@ -583,31 +597,137 @@ impl RemoteStorage for AzureBlobStorage { let started_at = start_measuring_requests(kind); - let op = async { + let mut metadata_map = metadata.unwrap_or([].into()); + let timeline_file_path = metadata_map.0.remove("databricks_azure_put_block"); + + /* BEGIN_HADRON */ + let op = async move { let blob_client = self.client.blob_client(self.relative_path_to_name(to)); + let put_block_size = self.put_block_size_mb.unwrap_or(0) * 1024 * 1024; + if timeline_file_path.is_none() || put_block_size == 0 { + // Use put_block_blob directly. + let from: Pin< + Box> + Send + Sync + 'static>, + > = Box::pin(from); + let from = NonSeekableStream::new(from, data_size_bytes); + let body = azure_core::Body::SeekableStream(Box::new(from)); - let from: Pin> + Send + Sync + 'static>> = - Box::pin(from); + let mut builder = blob_client.put_block_blob(body); + if !metadata_map.0.is_empty() { + builder = builder.metadata(to_azure_metadata(metadata_map)); + } + let fut = builder.into_future(); + let fut = tokio::time::timeout(self.timeout, fut); + let result = fut.await; + match result { + Ok(Ok(_response)) => return Ok(()), + Ok(Err(azure)) => return Err(azure.into()), + Err(_timeout) => return Err(TimeoutOrCancel::Timeout.into()), + }; + } + // Upload chunks concurrently using Put Block. + // Each PutBlock uploads put_block_size bytes of the file. 
+ let mut upload_futures: Vec>> = + vec![]; + let mut block_list = BlockList::default(); + let mut start_bytes = 0u64; + let mut remaining_bytes = data_size_bytes; + let mut block_list_count = 0; - let from = NonSeekableStream::new(from, data_size_bytes); + while remaining_bytes > 0 { + let block_size = std::cmp::min(remaining_bytes, put_block_size); + let end_bytes = start_bytes + block_size as u64; + let block_id = block_list_count; + let timeout = self.timeout; + let blob_client = blob_client.clone(); + let timeline_file = timeline_file_path.clone().unwrap().clone(); - let body = azure_core::Body::SeekableStream(Box::new(from)); + let mut encoded_block_id = [0u8; 8]; + BigEndian::write_u64(&mut encoded_block_id, block_id); + URL_SAFE.encode(encoded_block_id); - let mut builder = blob_client.put_block_blob(body); + // Put one block. + let part_fut = async move { + let mut file = File::open(Utf8Path::new(&timeline_file.clone())).await?; + file.seek(io::SeekFrom::Start(start_bytes)).await?; + let limited_reader = file.take(block_size as u64); + let file_chunk_stream = + tokio_util::io::ReaderStream::with_capacity(limited_reader, 1024 * 1024); + let file_chunk_stream_pin: Pin< + Box> + Send + Sync + 'static>, + > = Box::pin(file_chunk_stream); + let stream_wrapper = NonSeekableStream::new(file_chunk_stream_pin, block_size); + let body = azure_core::Body::SeekableStream(Box::new(stream_wrapper)); + // Azure put block takes URL-encoded block ids and all blocks must have the same byte length. 
+ // https://learn.microsoft.com/en-us/rest/api/storageservices/put-block?tabs=microsoft-entra-id#uri-parameters + let builder = blob_client.put_block(encoded_block_id.to_vec(), body); + let fut = builder.into_future(); + let fut = tokio::time::timeout(timeout, fut); + let result = fut.await; + tracing::debug!( + "azure put block id-{} size {} start {} end {} file {} response {:#?}", + block_id, + block_size, + start_bytes, + end_bytes, + timeline_file, + result + ); + match result { + Ok(Ok(_response)) => Ok(()), + Ok(Err(azure)) => Err(azure), + Err(_timeout) => Err(azure_core::Error::new( + azure_core::error::ErrorKind::Io, + std::io::Error::new( + std::io::ErrorKind::TimedOut, + "Operation timed out", + ), + )), + } + }; + upload_futures.push(tokio::spawn(part_fut)); - if let Some(metadata) = metadata { - builder = builder.metadata(to_azure_metadata(metadata)); + block_list_count += 1; + remaining_bytes -= block_size; + start_bytes += block_size as u64; + + block_list + .blocks + .push(BlobBlockType::Uncommitted(encoded_block_id.to_vec().into())); } + tracing::debug!( + "azure put blocks {} total MB: {} chunk size MB: {}", + block_list_count, + data_size_bytes / 1024 / 1024, + put_block_size / 1024 / 1024 + ); + // Wait for all blocks to be uploaded. + let upload_results = futures::future::try_join_all(upload_futures).await; + if upload_results.is_err() { + return Err(anyhow::anyhow!(format!( + "Failed to upload all blocks {:#?}", + upload_results.unwrap_err() + ))); + } + + // Commit the blocks. 
+ let mut builder = blob_client.put_block_list(block_list); + if !metadata_map.0.is_empty() { + builder = builder.metadata(to_azure_metadata(metadata_map)); + } let fut = builder.into_future(); let fut = tokio::time::timeout(self.timeout, fut); + let result = fut.await; + tracing::debug!("azure put block list response {:#?}", result); - match fut.await { + match result { Ok(Ok(_response)) => Ok(()), Ok(Err(azure)) => Err(azure.into()), Err(_timeout) => Err(TimeoutOrCancel::Timeout.into()), } }; + /* END_HADRON */ let res = tokio::select! { res = op => res, @@ -622,7 +742,6 @@ impl RemoteStorage for AzureBlobStorage { crate::metrics::BUCKET_METRICS .req_seconds .observe_elapsed(kind, outcome, started_at); - res } diff --git a/libs/remote_storage/src/config.rs b/libs/remote_storage/src/config.rs index 5bc1f678ae..e13e17d544 100644 --- a/libs/remote_storage/src/config.rs +++ b/libs/remote_storage/src/config.rs @@ -195,8 +195,19 @@ pub struct AzureConfig { pub max_keys_per_list_response: Option, #[serde(default = "default_azure_conn_pool_size")] pub conn_pool_size: usize, + /* BEGIN_HADRON */ + #[serde(default = "default_azure_put_block_size_mb")] + pub put_block_size_mb: Option, + /* END_HADRON */ } +/* BEGIN_HADRON */ +fn default_azure_put_block_size_mb() -> Option { + // Disable parallel upload by default. 
+ Some(0) +} +/* END_HADRON */ + fn default_remote_storage_azure_concurrency_limit() -> NonZeroUsize { NonZeroUsize::new(DEFAULT_REMOTE_STORAGE_AZURE_CONCURRENCY_LIMIT).unwrap() } @@ -213,6 +224,9 @@ impl Debug for AzureConfig { "max_keys_per_list_response", &self.max_keys_per_list_response, ) + /* BEGIN_HADRON */ + .field("put_block_size_mb", &self.put_block_size_mb) + /* END_HADRON */ .finish() } } @@ -352,6 +366,7 @@ timeout = '5s'"; upload_storage_class = 'INTELLIGENT_TIERING' timeout = '7s' conn_pool_size = 8 + put_block_size_mb = 1024 "; let config = parse(toml).unwrap(); @@ -367,6 +382,9 @@ timeout = '5s'"; concurrency_limit: default_remote_storage_azure_concurrency_limit(), max_keys_per_list_response: DEFAULT_MAX_KEYS_PER_LIST_RESPONSE, conn_pool_size: 8, + /* BEGIN_HADRON */ + put_block_size_mb: Some(1024), + /* END_HADRON */ }), timeout: Duration::from_secs(7), small_timeout: RemoteStorageConfig::DEFAULT_SMALL_TIMEOUT diff --git a/libs/remote_storage/tests/common/mod.rs b/libs/remote_storage/tests/common/mod.rs index daab05d91a..fb7d6fd482 100644 --- a/libs/remote_storage/tests/common/mod.rs +++ b/libs/remote_storage/tests/common/mod.rs @@ -165,10 +165,42 @@ pub(crate) async fn upload_remote_data( let (data, data_len) = upload_stream(format!("remote blob data {i}").into_bytes().into()); + + /* BEGIN_HADRON */ + let mut metadata = None; + if matches!(&*task_client, GenericRemoteStorage::AzureBlob(_)) { + let file_path = "/tmp/dbx_upload_tmp_file.txt"; + { + // Open the file in append mode + let mut file = std::fs::OpenOptions::new() + .append(true) + .create(true) // Create the file if it doesn't exist + .open(file_path)?; + // Append some bytes to the file + std::io::Write::write_all( + &mut file, + &format!("remote blob data {i}").into_bytes(), + )?; + file.sync_all()?; + } + metadata = Some(remote_storage::StorageMetadata::from([( + "databricks_azure_put_block", + file_path, + )])); + } + /* END_HADRON */ + task_client - .upload(data, data_len, 
&blob_path, None, &cancel) + .upload(data, data_len, &blob_path, metadata, &cancel) .await?; + // TODO: Check upload is using the put_block upload. + // We cannot consume data here since data is moved inside the upload. + // let total_bytes = data.fold(0, |acc, chunk| async move { + // acc + chunk.map(|bytes| bytes.len()).unwrap_or(0) + // }).await; + // assert_eq!(total_bytes, data_len); + Ok::<_, anyhow::Error>((blob_prefix, blob_path)) }); } diff --git a/libs/remote_storage/tests/test_real_azure.rs b/libs/remote_storage/tests/test_real_azure.rs index 31c9ca3200..4d7caabd39 100644 --- a/libs/remote_storage/tests/test_real_azure.rs +++ b/libs/remote_storage/tests/test_real_azure.rs @@ -219,6 +219,9 @@ async fn create_azure_client( concurrency_limit: NonZeroUsize::new(100).unwrap(), max_keys_per_list_response, conn_pool_size: 8, + /* BEGIN_HADRON */ + put_block_size_mb: Some(1), + /* END_HADRON */ }), timeout: RemoteStorageConfig::DEFAULT_TIMEOUT, small_timeout: RemoteStorageConfig::DEFAULT_SMALL_TIMEOUT, diff --git a/pageserver/Cargo.toml b/pageserver/Cargo.toml index 8a2e2ed3be..1fd0dccff0 100644 --- a/pageserver/Cargo.toml +++ b/pageserver/Cargo.toml @@ -112,6 +112,7 @@ twox-hash.workspace = true procfs.workspace = true [dev-dependencies] +base64.workspace = true criterion.workspace = true hex-literal.workspace = true tokio = { workspace = true, features = ["process", "sync", "fs", "rt", "io-util", "time", "test-util"] } diff --git a/pageserver/ctl/src/index_part.rs b/pageserver/ctl/src/index_part.rs index 6cce2844c7..838d00e490 100644 --- a/pageserver/ctl/src/index_part.rs +++ b/pageserver/ctl/src/index_part.rs @@ -1,10 +1,101 @@ +use std::str::FromStr; + use anyhow::Context; use camino::Utf8PathBuf; -use pageserver::tenant::IndexPart; +use pageserver::tenant::{ + IndexPart, + layer_map::{LayerMap, SearchResult}, + remote_timeline_client::remote_layer_path, + storage_layer::{PersistentLayerDesc, ReadableLayerWeak}, +}; +use pageserver_api::key::Key; +use 
utils::{ + id::{TenantId, TimelineId}, + lsn::Lsn, + shard::TenantShardId, +}; #[derive(clap::Subcommand)] pub(crate) enum IndexPartCmd { - Dump { path: Utf8PathBuf }, + Dump { + path: Utf8PathBuf, + }, + /// Find all layers that need to be searched to construct the given page at the given LSN. + Search { + #[arg(long)] + tenant_id: String, + #[arg(long)] + timeline_id: String, + #[arg(long)] + path: Utf8PathBuf, + #[arg(long)] + key: String, + #[arg(long)] + lsn: String, + }, +} + +async fn search_layers( + tenant_id: &str, + timeline_id: &str, + path: &Utf8PathBuf, + key: &str, + lsn: &str, +) -> anyhow::Result<()> { + let tenant_id = TenantId::from_str(tenant_id).unwrap(); + let tenant_shard_id = TenantShardId::unsharded(tenant_id); + let timeline_id = TimelineId::from_str(timeline_id).unwrap(); + let index_json = { + let bytes = tokio::fs::read(path).await?; + IndexPart::from_json_bytes(&bytes).unwrap() + }; + let mut layer_map = LayerMap::default(); + { + let mut updates = layer_map.batch_update(); + for (key, value) in index_json.layer_metadata.iter() { + updates.insert_historic(PersistentLayerDesc::from_filename( + tenant_shard_id, + timeline_id, + key.clone(), + value.file_size, + )); + } + } + let key = Key::from_hex(key)?; + + let lsn = Lsn::from_str(lsn).unwrap(); + let mut end_lsn = lsn; + loop { + let result = layer_map.search(key, end_lsn); + match result { + Some(SearchResult { layer, lsn_floor }) => { + let disk_layer = match layer { + ReadableLayerWeak::PersistentLayer(layer) => layer, + ReadableLayerWeak::InMemoryLayer(_) => { + anyhow::bail!("unexpected in-memory layer") + } + }; + + let metadata = index_json + .layer_metadata + .get(&disk_layer.layer_name()) + .unwrap(); + println!( + "{}", + remote_layer_path( + &tenant_id, + &timeline_id, + metadata.shard, + &disk_layer.layer_name(), + metadata.generation + ) + ); + end_lsn = lsn_floor; + } + None => break, + } + } + Ok(()) } pub(crate) async fn main(cmd: &IndexPartCmd) -> anyhow::Result<()> { 
@@ -16,5 +107,12 @@ pub(crate) async fn main(cmd: &IndexPartCmd) -> anyhow::Result<()> { println!("{output}"); Ok(()) } + IndexPartCmd::Search { + tenant_id, + timeline_id, + path, + key, + lsn, + } => search_layers(tenant_id, timeline_id, path, key, lsn).await, } } diff --git a/pageserver/src/disk_usage_eviction_task.rs b/pageserver/src/disk_usage_eviction_task.rs index f13b3709f5..e6529fb201 100644 --- a/pageserver/src/disk_usage_eviction_task.rs +++ b/pageserver/src/disk_usage_eviction_task.rs @@ -458,6 +458,9 @@ pub(crate) async fn disk_usage_eviction_task_iteration_impl( match next { Ok(Ok(file_size)) => { METRICS.layers_evicted.inc(); + /*BEGIN_HADRON */ + METRICS.bytes_evicted.inc_by(file_size); + /*END_HADRON */ usage_assumed.add_available_bytes(file_size); } Ok(Err(( diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index 55582659df..0e40dbcd15 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -61,6 +61,7 @@ use crate::context; use crate::context::{DownloadBehavior, RequestContext, RequestContextBuilder}; use crate::deletion_queue::DeletionQueueClient; use crate::feature_resolver::FeatureResolver; +use crate::metrics::LOCAL_DATA_LOSS_SUSPECTED; use crate::pgdatadir_mapping::LsnForTimestamp; use crate::task_mgr::TaskKind; use crate::tenant::config::LocationConf; @@ -3628,6 +3629,17 @@ async fn activate_post_import_handler( .await } +// [Hadron] Reset gauge metrics that are used to raise alerts. We need this API as a stop-gap measure to reset alerts +// after we manually rectify situations such as local SSD data loss. We will eventually automate this. +async fn hadron_reset_alert_gauges( + request: Request<Body>, + _cancel: CancellationToken, +) -> Result<Response<Body>, ApiError> { + check_permission(&request, None)?; + LOCAL_DATA_LOSS_SUSPECTED.set(0); + json_response(StatusCode::OK, ()) +} + /// Read the end of a tar archive. /// /// A tar archive normally ends with two consecutive blocks of zeros, 512 bytes each.
@@ -4154,5 +4166,8 @@ pub fn make_router( .post("/v1/feature_flag_spec", |r| { api_handler(r, update_feature_flag_spec) }) + .post("/hadron-internal/reset_alert_gauges", |r| { + api_handler(r, hadron_reset_alert_gauges) + }) .any(handler_404)) } diff --git a/pageserver/src/metrics.rs b/pageserver/src/metrics.rs index 21faceef49..eb89e166b2 100644 --- a/pageserver/src/metrics.rs +++ b/pageserver/src/metrics.rs @@ -1,3 +1,4 @@ +use std::cell::Cell; use std::collections::HashMap; use std::num::NonZeroUsize; use std::os::fd::RawFd; @@ -102,7 +103,18 @@ pub(crate) static STORAGE_TIME_COUNT_PER_TIMELINE: Lazy = Lazy::n .expect("failed to define a metric") }); -// Buckets for background operation duration in seconds, like compaction, GC, size calculation. +/* BEGIN_HADRON */ +pub(crate) static STORAGE_ACTIVE_COUNT_PER_TIMELINE: Lazy = Lazy::new(|| { + register_int_gauge_vec!( + "pageserver_active_storage_operations_count", + "Count of active storage operations with operation, tenant and timeline dimensions", + &["operation", "tenant_id", "shard_id", "timeline_id"], + ) + .expect("failed to define a metric") +}); +/*END_HADRON */ + +// Buckets for background operations like compaction, GC, size calculation const STORAGE_OP_BUCKETS: &[f64] = &[0.010, 0.100, 1.0, 10.0, 100.0, 1000.0]; pub(crate) static STORAGE_TIME_GLOBAL: Lazy = Lazy::new(|| { @@ -2810,6 +2822,31 @@ pub(crate) static WALRECEIVER_CANDIDATES_ADDED: Lazy = pub(crate) static WALRECEIVER_CANDIDATES_REMOVED: Lazy = Lazy::new(|| WALRECEIVER_CANDIDATES_EVENTS.with_label_values(&["remove"])); +pub(crate) static LOCAL_DATA_LOSS_SUSPECTED: Lazy = Lazy::new(|| { + register_int_gauge!( + "pageserver_local_data_loss_suspected", + "Non-zero value indicates that pageserver local data loss is suspected (and highly likely)." + ) + .expect("failed to define a metric") +}); + +// Counter keeping track of misrouted PageStream requests. 
Spelling out PageStream requests here to distinguish +// it from other types of requests (SK wal replication, http requests, etc.). PageStream requests are used by +// Postgres compute to fetch data from pageservers. +// A misrouted PageStream request is registered if the pageserver cannot find the tenant identified in the +// request, or if the pageserver is not the "primary" serving the tenant shard. These errors almost always indicate +// issues with compute configuration, caused by either the compute node itself being stuck in the wrong +// configuration or Storage Controller reconciliation bugs. Misrouted requests are expected during tenant migration +// and/or during recovery following a pageserver failure, but persistently high rates of misrouted requests +// are indicative of bugs (and unavailability). +pub(crate) static MISROUTED_PAGESTREAM_REQUESTS: Lazy<IntCounter> = Lazy::new(|| { + register_int_counter!( + "pageserver_misrouted_pagestream_requests_total", + "Number of pageserver pagestream requests that were routed to the wrong pageserver" + ) + .expect("failed to define a metric") +}); + // Metrics collected on WAL redo operations // // We collect the time spent in actual WAL redo ('redo'), and time waiting @@ -3048,13 +3085,19 @@ pub(crate) static WAL_REDO_PROCESS_COUNTERS: Lazy = pub(crate) struct StorageTimeMetricsTimer { metrics: StorageTimeMetrics, start: Instant, + stopped: Cell<bool>, } impl StorageTimeMetricsTimer { fn new(metrics: StorageTimeMetrics) -> Self { + /*BEGIN_HADRON */ + // record the active operation as the timer starts + metrics.timeline_active_count.inc(); + /*END_HADRON */ Self { metrics, start: Instant::now(), + stopped: Cell::new(false), } } @@ -3070,6 +3113,10 @@ impl StorageTimeMetricsTimer { self.metrics.timeline_sum.inc_by(seconds); self.metrics.timeline_count.inc(); self.metrics.global_histogram.observe(seconds); + /* BEGIN_HADRON*/ + self.stopped.set(true); + self.metrics.timeline_active_count.dec(); + /*END_HADRON */ duration } @@
-3080,6 +3127,16 @@ impl StorageTimeMetricsTimer { } } +/*BEGIN_HADRON */ +impl Drop for StorageTimeMetricsTimer { + fn drop(&mut self) { + if !self.stopped.get() { + self.metrics.timeline_active_count.dec(); + } + } +} +/*END_HADRON */ + pub(crate) struct AlwaysRecordingStorageTimeMetricsTimer(Option); impl Drop for AlwaysRecordingStorageTimeMetricsTimer { @@ -3105,6 +3162,10 @@ pub(crate) struct StorageTimeMetrics { timeline_sum: Counter, /// Number of oeprations, per operation, tenant_id and timeline_id timeline_count: IntCounter, + /*BEGIN_HADRON */ + /// Number of active operations per operation, tenant_id, and timeline_id + timeline_active_count: IntGauge, + /*END_HADRON */ /// Global histogram having only the "operation" label. global_histogram: Histogram, } @@ -3124,6 +3185,11 @@ impl StorageTimeMetrics { let timeline_count = STORAGE_TIME_COUNT_PER_TIMELINE .get_metric_with_label_values(&[operation, tenant_id, shard_id, timeline_id]) .unwrap(); + /*BEGIN_HADRON */ + let timeline_active_count = STORAGE_ACTIVE_COUNT_PER_TIMELINE + .get_metric_with_label_values(&[operation, tenant_id, shard_id, timeline_id]) + .unwrap(); + /*END_HADRON */ let global_histogram = STORAGE_TIME_GLOBAL .get_metric_with_label_values(&[operation]) .unwrap(); @@ -3131,6 +3197,7 @@ impl StorageTimeMetrics { StorageTimeMetrics { timeline_sum, timeline_count, + timeline_active_count, global_histogram, } } @@ -3544,6 +3611,14 @@ impl TimelineMetrics { shard_id, timeline_id, ]); + /* BEGIN_HADRON */ + let _ = STORAGE_ACTIVE_COUNT_PER_TIMELINE.remove_label_values(&[ + op, + tenant_id, + shard_id, + timeline_id, + ]); + /*END_HADRON */ } for op in StorageIoSizeOperation::VARIANTS { @@ -4336,6 +4411,9 @@ pub(crate) mod disk_usage_based_eviction { pub(crate) layers_collected: IntCounter, pub(crate) layers_selected: IntCounter, pub(crate) layers_evicted: IntCounter, + /*BEGIN_HADRON */ + pub(crate) bytes_evicted: IntCounter, + /*END_HADRON */ } impl Default for Metrics { @@ -4372,12 +4450,21 @@ 
pub(crate) mod disk_usage_based_eviction { ) .unwrap(); + /*BEGIN_HADRON */ + let bytes_evicted = register_int_counter!( + "pageserver_disk_usage_based_eviction_evicted_bytes_total", + "Amount of bytes successfully evicted" + ) + .unwrap(); + /*END_HADRON */ + Self { tenant_collection_time, tenant_layer_count, layers_collected, layers_selected, layers_evicted, + bytes_evicted, } } } @@ -4497,6 +4584,7 @@ pub fn preinitialize_metrics( &CIRCUIT_BREAKERS_UNBROKEN, &PAGE_SERVICE_SMGR_FLUSH_INPROGRESS_MICROS_GLOBAL, &WAIT_LSN_IN_PROGRESS_GLOBAL_MICROS, + &MISROUTED_PAGESTREAM_REQUESTS, ] .into_iter() .for_each(|c| { @@ -4534,6 +4622,7 @@ pub fn preinitialize_metrics( // gauges WALRECEIVER_ACTIVE_MANAGERS.get(); + LOCAL_DATA_LOSS_SUSPECTED.get(); // histograms [ diff --git a/pageserver/src/page_service.rs b/pageserver/src/page_service.rs index 49928a9036..6b614deac8 100644 --- a/pageserver/src/page_service.rs +++ b/pageserver/src/page_service.rs @@ -70,7 +70,7 @@ use crate::context::{ }; use crate::metrics::{ self, COMPUTE_COMMANDS_COUNTERS, ComputeCommandKind, GetPageBatchBreakReason, LIVE_CONNECTIONS, - SmgrOpTimer, TimelineMetrics, + MISROUTED_PAGESTREAM_REQUESTS, SmgrOpTimer, TimelineMetrics, }; use crate::pgdatadir_mapping::{LsnRange, Version}; use crate::span::{ @@ -91,7 +91,8 @@ use crate::{CancellableTask, PERF_TRACE_TARGET, timed_after_cancellation}; /// is not yet in state [`TenantState::Active`]. /// /// NB: this is a different value than [`crate::http::routes::ACTIVE_TENANT_TIMEOUT`]. -const ACTIVE_TENANT_TIMEOUT: Duration = Duration::from_millis(30000); +/// HADRON: reduced timeout and we will retry in Cache::get(). +const ACTIVE_TENANT_TIMEOUT: Duration = Duration::from_millis(5000); /// Threshold at which to log slow GetPage requests. 
const LOG_SLOW_GETPAGE_THRESHOLD: Duration = Duration::from_secs(30); @@ -1128,6 +1129,7 @@ impl PageServerHandler { // Closing the connection by returning ``::Reconnect` has the side effect of rate-limiting above message, via // client's reconnect backoff, as well as hopefully prompting the client to load its updated configuration // and talk to a different pageserver. + MISROUTED_PAGESTREAM_REQUESTS.inc(); return respond_error!( span, PageStreamError::Reconnect( diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index 96ed4672a6..b0969a96c1 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -142,6 +142,9 @@ mod gc_block; mod gc_result; pub(crate) mod throttle; +#[cfg(test)] +pub mod debug; + pub(crate) use timeline::{LogicalSizeCalculationCause, PageReconstructError, Timeline}; pub(crate) use crate::span::debug_assert_current_span_has_tenant_and_timeline_id; @@ -6015,12 +6018,11 @@ pub(crate) mod harness { } #[instrument(skip_all, fields(tenant_id=%self.tenant_shard_id.tenant_id, shard_id=%self.tenant_shard_id.shard_slug()))] - pub(crate) async fn do_try_load( + pub(crate) async fn do_try_load_with_redo( &self, + walredo_mgr: Arc, ctx: &RequestContext, ) -> anyhow::Result> { - let walredo_mgr = Arc::new(WalRedoManager::from(TestRedoManager)); - let (basebackup_cache, _) = BasebackupCache::new(Utf8PathBuf::new(), None); let tenant = Arc::new(TenantShard::new( @@ -6058,6 +6060,14 @@ pub(crate) mod harness { Ok(tenant) } + pub(crate) async fn do_try_load( + &self, + ctx: &RequestContext, + ) -> anyhow::Result> { + let walredo_mgr = Arc::new(WalRedoManager::from(TestRedoManager)); + self.do_try_load_with_redo(walredo_mgr, ctx).await + } + pub fn timeline_path(&self, timeline_id: &TimelineId) -> Utf8PathBuf { self.conf.timeline_path(&self.tenant_shard_id, timeline_id) } diff --git a/pageserver/src/tenant/debug.rs b/pageserver/src/tenant/debug.rs new file mode 100644 index 0000000000..604f7f265e --- /dev/null +++ 
b/pageserver/src/tenant/debug.rs @@ -0,0 +1,366 @@ +use std::{ops::Range, str::FromStr, sync::Arc}; + +use crate::walredo::RedoAttemptType; +use base64::{Engine as _, engine::general_purpose::STANDARD}; +use bytes::{Bytes, BytesMut}; +use camino::Utf8PathBuf; +use clap::Parser; +use itertools::Itertools; +use pageserver_api::{ + key::Key, + keyspace::KeySpace, + shard::{ShardIdentity, ShardStripeSize}, +}; +use postgres_ffi::PgMajorVersion; +use postgres_ffi::{BLCKSZ, page_is_new, page_set_lsn}; +use tracing::Instrument; +use utils::{ + generation::Generation, + id::{TenantId, TimelineId}, + lsn::Lsn, + shard::{ShardCount, ShardIndex, ShardNumber}, +}; +use wal_decoder::models::record::NeonWalRecord; + +use crate::{ + context::{DownloadBehavior, RequestContext}, + task_mgr::TaskKind, + tenant::storage_layer::ValueReconstructState, + walredo::harness::RedoHarness, +}; + +use super::{ + WalRedoManager, WalredoManagerId, + harness::TenantHarness, + remote_timeline_client::LayerFileMetadata, + storage_layer::{AsLayerDesc, IoConcurrency, Layer, LayerName, ValuesReconstructState}, +}; + +fn process_page_image(next_record_lsn: Lsn, is_fpw: bool, img_bytes: Bytes) -> Bytes { + // To match the logic in libs/wal_decoder/src/serialized_batch.rs + let mut new_image: BytesMut = img_bytes.into(); + if is_fpw && !page_is_new(&new_image) { + page_set_lsn(&mut new_image, next_record_lsn); + } + assert_eq!(new_image.len(), BLCKSZ as usize); + new_image.freeze() +} + +async fn redo_wals(input: &str, key: Key) -> anyhow::Result<()> { + let tenant_id = TenantId::generate(); + let timeline_id = TimelineId::generate(); + let redo_harness = RedoHarness::new()?; + let span = redo_harness.span(); + let tenant_conf = pageserver_api::models::TenantConfig { + ..Default::default() + }; + + let ctx = RequestContext::new(TaskKind::DebugTool, DownloadBehavior::Error); + let tenant = TenantHarness::create_custom( + "search_key", + tenant_conf, + tenant_id, + ShardIdentity::unsharded(), + 
Generation::new(1), + ) + .await? + .do_try_load_with_redo( + Arc::new(WalRedoManager::Prod( + WalredoManagerId::next(), + redo_harness.manager, + )), + &ctx, + ) + .await + .unwrap(); + let timeline = tenant + .create_test_timeline(timeline_id, Lsn(0x10), PgMajorVersion::PG16, &ctx) + .await?; + let contents = tokio::fs::read_to_string(input) + .await + .map_err(|e| anyhow::Error::msg(format!("Failed to read input file {input}: {e}"))) + .unwrap(); + let lines = contents.lines(); + let mut last_wal_lsn: Option<Lsn> = None; + let state = { + let mut state = ValueReconstructState::default(); + let mut is_fpw = false; + let mut is_first_line = true; + for line in lines { + if is_first_line { + is_first_line = false; + if line.trim() == "FPW" { + is_fpw = true; + } + continue; // Skip the first line. + } + // Each input line is in the "<lsn>,<base64 payload>" format. + let (lsn_str, payload_b64) = line + .split_once(',') + .expect("Invalid input format: expected '<lsn>,<base64 payload>'"); + + // Parse the LSN and decode the payload. + let lsn = Lsn::from_str(lsn_str.trim()).expect("Invalid LSN format"); + let bytes = Bytes::from( + STANDARD + .decode(payload_b64.trim()) + .expect("Invalid base64 payload"), + ); + + // The first line is considered the base image, the rest are WAL records.
+ if state.img.is_none() { + state.img = Some((lsn, process_page_image(lsn, is_fpw, bytes))); + } else { + let wal_record = NeonWalRecord::Postgres { + will_init: false, + rec: bytes, + }; + state.records.push((lsn, wal_record)); + last_wal_lsn.replace(lsn); + } + } + state + }; + + assert!(state.img.is_some(), "No base image found"); + assert!(!state.records.is_empty(), "No WAL records found"); + let result = timeline + .reconstruct_value(key, last_wal_lsn.unwrap(), state, RedoAttemptType::ReadPage) + .instrument(span.clone()) + .await?; + + eprintln!("final image: {:?}", STANDARD.encode(result)); + + Ok(()) +} + +async fn search_key( + tenant_id: TenantId, + timeline_id: TimelineId, + dir: String, + key: Key, + lsn: Lsn, +) -> anyhow::Result<()> { + let shard_index = ShardIndex { + shard_number: ShardNumber(0), + shard_count: ShardCount(4), + }; + + let redo_harness = RedoHarness::new()?; + let span = redo_harness.span(); + let tenant_conf = pageserver_api::models::TenantConfig { + ..Default::default() + }; + let ctx = RequestContext::new(TaskKind::DebugTool, DownloadBehavior::Error); + let tenant = TenantHarness::create_custom( + "search_key", + tenant_conf, + tenant_id, + ShardIdentity::new( + shard_index.shard_number, + shard_index.shard_count, + ShardStripeSize(32768), + ) + .unwrap(), + Generation::new(1), + ) + .await? 
+ .do_try_load_with_redo( + Arc::new(WalRedoManager::Prod( + WalredoManagerId::next(), + redo_harness.manager, + )), + &ctx, + ) + .await + .unwrap(); + + let timeline = tenant + .create_test_timeline(timeline_id, Lsn(0x10), PgMajorVersion::PG16, &ctx) + .await?; + + let mut delta_layers: Vec = Vec::new(); + let mut img_layer: Option = Option::None; + let mut dir = tokio::fs::read_dir(dir).await?; + loop { + let entry = dir.next_entry().await?; + if entry.is_none() || !entry.as_ref().unwrap().file_type().await?.is_file() { + break; + } + let path = Utf8PathBuf::from_path_buf(entry.unwrap().path()).unwrap(); + let layer_name = match LayerName::from_str(path.file_name().unwrap()) { + Ok(name) => name, + Err(_) => { + eprintln!("Skipped invalid layer: {path}"); + continue; + } + }; + let layer = Layer::for_resident( + tenant.conf, + &timeline, + path.clone(), + layer_name, + LayerFileMetadata::new( + tokio::fs::metadata(path.clone()).await?.len(), + Generation::new(1), + shard_index, + ), + ); + if layer.layer_desc().is_delta() { + delta_layers.push(layer.into()); + } else if img_layer.is_none() { + img_layer = Some(layer.into()); + } else { + anyhow::bail!("Found multiple image layers"); + } + } + // sort delta layers based on the descending order of LSN + delta_layers.sort_by(|a, b| { + b.layer_desc() + .get_lsn_range() + .start + .cmp(&a.layer_desc().get_lsn_range().start) + }); + + let mut state = ValuesReconstructState::new(IoConcurrency::Sequential); + + let key_space = KeySpace::single(Range { + start: key, + end: key.next(), + }); + let lsn_range = Range { + start: img_layer + .as_ref() + .map_or(Lsn(0x00), |img| img.layer_desc().image_layer_lsn()), + end: lsn, + }; + for delta_layer in delta_layers.iter() { + delta_layer + .get_values_reconstruct_data(key_space.clone(), lsn_range.clone(), &mut state, &ctx) + .await?; + } + + img_layer + .as_ref() + .unwrap() + .get_values_reconstruct_data(key_space.clone(), lsn_range.clone(), &mut state, &ctx) + .await?; + + 
for (_key, result) in std::mem::take(&mut state.keys) { + let state = result.collect_pending_ios().await?; + if state.img.is_some() { + eprintln!( + "image: {}: {:x?}", + state.img.as_ref().unwrap().0, + STANDARD.encode(state.img.as_ref().unwrap().1.clone()) + ); + } + for delta in state.records.iter() { + match &delta.1 { + NeonWalRecord::Postgres { will_init, rec } => { + eprintln!( + "delta: {}: will_init: {}, {:x?}", + delta.0, + will_init, + STANDARD.encode(rec) + ); + } + _ => { + eprintln!("delta: {}: {:x?}", delta.0, delta.1); + } + } + } + + let result = timeline + .reconstruct_value(key, lsn_range.end, state, RedoAttemptType::ReadPage) + .instrument(span.clone()) + .await?; + eprintln!("final image: {lsn} : {result:?}"); + } + + Ok(()) +} + +/// Redo all WALs against the base image in the input file. Return the base64-encoded final image. +/// Each line in the input file must be in the form "<lsn>,<base64 payload>" where: +/// * `<lsn>` is a PostgreSQL LSN in hexadecimal notation, e.g. `0/16ABCDE`. +/// * `<base64 payload>` is the base64-encoded page image (first line) or WAL record (subsequent lines). +/// +/// The first line provides the base image of a page. The LSN is the LSN of "next record" following +/// the record containing the FPI. For example, if the FPI was extracted from a WAL record occupying +/// [0/1, 0/200) in the WAL stream, the LSN appearing alongside the page image here should be 0/200. +/// +/// The subsequent lines are WAL records, ordered from the oldest to the newest. The LSN is the +/// record LSN of the WAL record, not the "next record" LSN. For example, if the WAL record here +/// occupies [0/1, 0/200) in the WAL stream, the LSN appearing alongside the WAL record here should +/// be 0/1.
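The input-file convention documented above pairs a hexadecimal PostgreSQL LSN with a base64 payload, joined by the first comma. As a standalone illustration (not part of the patch; names are hypothetical and base64 decoding is elided), the parsing boils down to:

```rust
// Illustrative sketch: split one input line into its LSN and payload halves,
// and parse the "hi/lo" hex LSN notation into a single u64, mirroring what
// the redo tool above does via `Lsn::from_str` and `split_once(',')`.
fn parse_lsn(s: &str) -> Option<u64> {
    // "0/16ABCDE" -> high and low 32-bit halves in hex.
    let (hi, lo) = s.trim().split_once('/')?;
    let hi = u64::from_str_radix(hi, 16).ok()?;
    let lo = u64::from_str_radix(lo, 16).ok()?;
    Some((hi << 32) | lo)
}

fn parse_line(line: &str) -> Option<(u64, &str)> {
    // Split on the first comma only; base64 payloads never contain commas.
    let (lsn_str, payload_b64) = line.split_once(',')?;
    Some((parse_lsn(lsn_str)?, payload_b64.trim()))
}

fn main() {
    let (lsn, payload) = parse_line("0/16ABCDE,AAEC").unwrap();
    assert_eq!(lsn, 0x16ABCDE);
    println!("lsn={lsn:#x} payload={payload}");
}
```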
+#[derive(Parser)] +struct RedoWalsCmd { + #[clap(long)] + input: String, + #[clap(long)] + key: String, +} + +#[tokio::test] +async fn test_redo_wals() -> anyhow::Result<()> { + let args = std::env::args().collect_vec(); + let pos = args + .iter() + .position(|arg| arg == "--") + .unwrap_or(args.len()); + let slice = &args[pos..args.len()]; + let cmd = match RedoWalsCmd::try_parse_from(slice) { + Ok(cmd) => cmd, + Err(err) => { + eprintln!("{err}"); + return Ok(()); + } + }; + + let key = Key::from_hex(&cmd.key).unwrap(); + redo_wals(&cmd.input, key).await?; + + Ok(()) +} + +/// Search for a page at the given LSN in all layers of the data_dir. +/// Return the base64-encoded image and all WAL records, as well as the final reconstructed image. +#[derive(Parser)] +struct SearchKeyCmd { + #[clap(long)] + tenant_id: String, + #[clap(long)] + timeline_id: String, + #[clap(long)] + data_dir: String, + #[clap(long)] + key: String, + #[clap(long)] + lsn: String, +} + +#[tokio::test] +async fn test_search_key() -> anyhow::Result<()> { + let args = std::env::args().collect_vec(); + let pos = args + .iter() + .position(|arg| arg == "--") + .unwrap_or(args.len()); + let slice = &args[pos..args.len()]; + let cmd = match SearchKeyCmd::try_parse_from(slice) { + Ok(cmd) => cmd, + Err(err) => { + eprintln!("{err}"); + return Ok(()); + } + }; + + let tenant_id = TenantId::from_str(&cmd.tenant_id).unwrap(); + let timeline_id = TimelineId::from_str(&cmd.timeline_id).unwrap(); + let key = Key::from_hex(&cmd.key).unwrap(); + let lsn = Lsn::from_str(&cmd.lsn).unwrap(); + search_key(tenant_id, timeline_id, cmd.data_dir, key, lsn).await?; + + Ok(()) +} diff --git a/pageserver/src/tenant/mgr.rs b/pageserver/src/tenant/mgr.rs index be18b40862..15853d3614 100644 --- a/pageserver/src/tenant/mgr.rs +++ b/pageserver/src/tenant/mgr.rs @@ -43,7 +43,7 @@ use crate::controller_upcall_client::{ }; use crate::deletion_queue::DeletionQueueClient; use crate::http::routes::ACTIVE_TENANT_TIMEOUT; -use 
crate::metrics::{TENANT, TENANT_MANAGER as METRICS}; +use crate::metrics::{LOCAL_DATA_LOSS_SUSPECTED, TENANT, TENANT_MANAGER as METRICS}; use crate::task_mgr::{BACKGROUND_RUNTIME, TaskKind}; use crate::tenant::config::{ AttachedLocationConfig, AttachmentMode, LocationConf, LocationMode, SecondaryLocationConfig, @@ -538,6 +538,21 @@ pub async fn init_tenant_mgr( // Determine which tenants are to be secondary or attached, and in which generation let tenant_modes = init_load_generations(conf, &tenant_configs, resources, cancel).await?; + // Hadron local SSD check: Raise an alert if our local filesystem does not contain any tenants but the re-attach request returned tenants. + // This can happen if the PS suffered a Kubernetes node failure resulting in loss of all local data, but recovered quickly on another node + // so the Storage Controller has not had the time to move tenants out. + let data_loss_suspected = if let Some(tenant_modes) = &tenant_modes { + tenant_configs.is_empty() && !tenant_modes.is_empty() + } else { + false + }; + if data_loss_suspected { + tracing::error!( + "Local data loss suspected: no tenants found on local filesystem, but re-attach request returned tenants" + ); + } + LOCAL_DATA_LOSS_SUSPECTED.set(if data_loss_suspected { 1 } else { 0 }); + tracing::info!( "Attaching {} tenants at startup, warming up {} at a time", tenant_configs.len(), diff --git a/pageserver/src/tenant/remote_timeline_client/upload.rs b/pageserver/src/tenant/remote_timeline_client/upload.rs index ffb4717d9f..f2fbf656a6 100644 --- a/pageserver/src/tenant/remote_timeline_client/upload.rs +++ b/pageserver/src/tenant/remote_timeline_client/upload.rs @@ -141,11 +141,29 @@ pub(super) async fn upload_timeline_layer<'a>( let fs_size = usize::try_from(fs_size) .with_context(|| format!("convert {local_path:?} size {fs_size} usize"))?; - + /* BEGIN_HADRON */ + let mut metadata = None; + match storage { + // Pass the file path as a storage metadata to minimize changes to neon. 
+ // Otherwise, we need to change the upload interface. + GenericRemoteStorage::AzureBlob(s) => { + let block_size_mb = s.put_block_size_mb.unwrap_or(0); + if block_size_mb > 0 && fs_size > block_size_mb * 1024 * 1024 { + metadata = Some(remote_storage::StorageMetadata::from([( + "databricks_azure_put_block", + local_path.as_str(), + )])); + } + } + GenericRemoteStorage::LocalFs(_) => {} + GenericRemoteStorage::AwsS3(_) => {} + GenericRemoteStorage::Unreliable(_) => {} + }; + /* END_HADRON */ let reader = tokio_util::io::ReaderStream::with_capacity(source_file, super::BUFFER_SIZE); storage - .upload(reader, fs_size, remote_path, None, cancel) + .upload(reader, fs_size, remote_path, metadata, cancel) .await .with_context(|| format!("upload layer from local path '{local_path}'")) } diff --git a/pageserver/src/tenant/tasks.rs b/pageserver/src/tenant/tasks.rs index 356f495972..2ae6b7ff3d 100644 --- a/pageserver/src/tenant/tasks.rs +++ b/pageserver/src/tenant/tasks.rs @@ -34,6 +34,21 @@ use crate::virtual_file::owned_buffers_io::write::FlushTaskError; /// We use 3/4 Tokio threads, to avoid blocking all threads in case we do any CPU-heavy work. static CONCURRENT_BACKGROUND_TASKS: Lazy = Lazy::new(|| { let total_threads = TOKIO_WORKER_THREADS.get(); + + /*BEGIN_HADRON*/ + // ideally we should run at least one compaction task per tenant in order to (1) maximize + // compaction throughput (2) avoid head-of-line blocking of large compactions. However doing + // that may create too many compaction tasks with lots of memory overheads. So we limit the + // number of compaction tasks based on the available CPU core count. + // Need to revisit. 
+ // let tasks_per_thread = std::env::var("BG_TASKS_PER_THREAD") + // .ok() + // .and_then(|s| s.parse().ok()) + // .unwrap_or(4); + // let permits = usize::max(1, total_threads * tasks_per_thread); + // // assert!(permits < total_threads, "need threads for other work"); + /*END_HADRON*/ + let permits = max(1, (total_threads * 3).checked_div(4).unwrap_or(0)); assert_ne!(permits, 0, "we will not be adding in permits later"); assert!(permits < total_threads, "need threads for other work"); diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index 44a4f1e911..293f3c484d 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -6742,7 +6742,7 @@ impl Timeline { } /// Reconstruct a value, using the given base image and WAL records in 'data'. - async fn reconstruct_value( + pub(crate) async fn reconstruct_value( &self, key: Key, request_lsn: Lsn, diff --git a/pageserver/src/tenant/timeline/handle.rs b/pageserver/src/tenant/timeline/handle.rs index 2dbff20ab2..33c97287c0 100644 --- a/pageserver/src/tenant/timeline/handle.rs +++ b/pageserver/src/tenant/timeline/handle.rs @@ -212,8 +212,12 @@ //! to the parent shard during a shard split. Eventually, the shard split task will //! shut down the parent => case (1). -use std::collections::{HashMap, hash_map}; -use std::sync::{Arc, Mutex, Weak}; +use std::collections::HashMap; +use std::collections::hash_map; +use std::sync::Arc; +use std::sync::Mutex; +use std::sync::Weak; +use std::time::Duration; use pageserver_api::shard::ShardIdentity; use tracing::{instrument, trace}; @@ -333,6 +337,44 @@ enum RoutingResult { } impl Cache { + /* BEGIN_HADRON */ + /// A wrapper of do_get to resolve the tenant shard for a get page request. 
+ #[instrument(level = "trace", skip_all)] + pub(crate) async fn get( + &mut self, + timeline_id: TimelineId, + shard_selector: ShardSelector, + tenant_manager: &T::TenantManager, + ) -> Result, GetError> { + const GET_MAX_RETRIES: usize = 10; + const RETRY_BACKOFF: Duration = Duration::from_millis(100); + let mut attempt = 0; + loop { + attempt += 1; + match self + .do_get(timeline_id, shard_selector, tenant_manager) + .await + { + Ok(handle) => return Ok(handle), + Err(e) => { + // Retry on tenant manager error to handle tenant split more gracefully + if attempt < GET_MAX_RETRIES { + tracing::warn!( + "Fail to resolve tenant shard in attempt {}: {:?}. Retrying...", + attempt, + e + ); + tokio::time::sleep(RETRY_BACKOFF).await; + continue; + } else { + return Err(e); + } + } + } + } + } + /* END_HADRON */ + /// See module-level comment for details. /// /// Does NOT check for the shutdown state of [`Types::Timeline`]. @@ -341,7 +383,7 @@ impl Cache { /// and if so, return an error that causes the page service to /// close the connection. 
#[instrument(level = "trace", skip_all)] - pub(crate) async fn get( + async fn do_get( &mut self, timeline_id: TimelineId, shard_selector: ShardSelector, @@ -879,6 +921,7 @@ mod tests { .await .err() .expect("documented behavior: can't get new handle after shutdown"); + assert_eq!(cache.map.len(), 1, "next access cleans up the cache"); cache diff --git a/pageserver/src/walredo.rs b/pageserver/src/walredo.rs index b17b5a15f9..c6d3cafe9a 100644 --- a/pageserver/src/walredo.rs +++ b/pageserver/src/walredo.rs @@ -566,22 +566,55 @@ impl PostgresRedoManager { } } +#[cfg(test)] +pub(crate) mod harness { + use super::PostgresRedoManager; + use crate::config::PageServerConf; + use utils::{id::TenantId, shard::TenantShardId}; + + pub struct RedoHarness { + // underscored because unused, except for removal at drop + _repo_dir: camino_tempfile::Utf8TempDir, + pub manager: PostgresRedoManager, + tenant_shard_id: TenantShardId, + } + + impl RedoHarness { + pub fn new() -> anyhow::Result { + crate::tenant::harness::setup_logging(); + + let repo_dir = camino_tempfile::tempdir()?; + let conf = PageServerConf::dummy_conf(repo_dir.path().to_path_buf()); + let conf = Box::leak(Box::new(conf)); + let tenant_shard_id = TenantShardId::unsharded(TenantId::generate()); + + let manager = PostgresRedoManager::new(conf, tenant_shard_id); + + Ok(RedoHarness { + _repo_dir: repo_dir, + manager, + tenant_shard_id, + }) + } + pub fn span(&self) -> tracing::Span { + tracing::info_span!("RedoHarness", tenant_id=%self.tenant_shard_id.tenant_id, shard_id=%self.tenant_shard_id.shard_slug()) + } + } +} + #[cfg(test)] mod tests { use std::str::FromStr; use bytes::Bytes; use pageserver_api::key::Key; - use pageserver_api::shard::TenantShardId; use postgres_ffi::PgMajorVersion; use tracing::Instrument; - use utils::id::TenantId; use utils::lsn::Lsn; use wal_decoder::models::record::NeonWalRecord; - use super::PostgresRedoManager; - use crate::config::PageServerConf; use crate::walredo::RedoAttemptType; + 
use crate::walredo::harness::RedoHarness; #[tokio::test] async fn test_ping() { @@ -692,33 +725,4 @@ mod tests { ) ] } - - struct RedoHarness { - // underscored because unused, except for removal at drop - _repo_dir: camino_tempfile::Utf8TempDir, - manager: PostgresRedoManager, - tenant_shard_id: TenantShardId, - } - - impl RedoHarness { - fn new() -> anyhow::Result { - crate::tenant::harness::setup_logging(); - - let repo_dir = camino_tempfile::tempdir()?; - let conf = PageServerConf::dummy_conf(repo_dir.path().to_path_buf()); - let conf = Box::leak(Box::new(conf)); - let tenant_shard_id = TenantShardId::unsharded(TenantId::generate()); - - let manager = PostgresRedoManager::new(conf, tenant_shard_id); - - Ok(RedoHarness { - _repo_dir: repo_dir, - manager, - tenant_shard_id, - }) - } - fn span(&self) -> tracing::Span { - tracing::info_span!("RedoHarness", tenant_id=%self.tenant_shard_id.tenant_id, shard_id=%self.tenant_shard_id.shard_slug()) - } - } } diff --git a/test_runner/fixtures/metrics.py b/test_runner/fixtures/metrics.py index 1dd4fe8316..6e600b5a86 100644 --- a/test_runner/fixtures/metrics.py +++ b/test_runner/fixtures/metrics.py @@ -159,6 +159,9 @@ PAGESERVER_GLOBAL_METRICS: tuple[str, ...] = ( ) PAGESERVER_PER_TENANT_METRICS: tuple[str, ...] 
= ( + # BEGIN_HADRON + "pageserver_active_storage_operations_count", + # END_HADRON "pageserver_current_logical_size", "pageserver_resident_physical_size", "pageserver_io_operations_bytes_total", diff --git a/test_runner/fixtures/pageserver/allowed_errors.py b/test_runner/fixtures/pageserver/allowed_errors.py index 6a715c4b93..0e4dd571c0 100755 --- a/test_runner/fixtures/pageserver/allowed_errors.py +++ b/test_runner/fixtures/pageserver/allowed_errors.py @@ -111,6 +111,14 @@ DEFAULT_PAGESERVER_ALLOWED_ERRORS = ( ".*stalling layer flushes for compaction backpressure.*", ".*layer roll waiting for flush due to compaction backpressure.*", ".*BatchSpanProcessor.*", + # Can happen in tests that purposely wipe pageserver "local disk" data. + ".*Local data loss suspected.*", + # Too many frozen layers error is normal during intensive benchmarks + ".*too many frozen layers.*", + # Transient errors when resolving tenant shards by page service + ".*Fail to resolve tenant shard in attempt.*", + # Expected warnings when pageserver has not refreshed GC info yet + ".*pitr LSN/interval not found, skipping force image creation LSN calculation.*", ".*No broker updates received for a while.*", *( [ diff --git a/test_runner/regress/test_sharding.py b/test_runner/regress/test_sharding.py index 93c621f564..8ff767eca4 100644 --- a/test_runner/regress/test_sharding.py +++ b/test_runner/regress/test_sharding.py @@ -1,8 +1,11 @@ from __future__ import annotations import os +import random +import threading import time from collections import defaultdict +from threading import Event from typing import TYPE_CHECKING, Any import pytest @@ -1505,6 +1508,171 @@ def test_sharding_split_failures( env.storage_controller.consistency_check() +@pytest.mark.skip(reason="The backpressure change has not been merged yet.") +def test_back_pressure_during_split(neon_env_builder: NeonEnvBuilder): + """ + Test backpressure can ignore new shards during tenant split so that if we abort the split, + PG can 
continue without being blocked. + """ + DBNAME = "regression" + + init_shard_count = 4 + neon_env_builder.num_pageservers = init_shard_count + stripe_size = 32 + + env = neon_env_builder.init_start( + initial_tenant_shard_count=init_shard_count, initial_tenant_shard_stripe_size=stripe_size + ) + + env.storage_controller.allowed_errors.extend( + [ + # All split failures log a warning when they enqueue the abort operation + ".*Enqueuing background abort.*", + # Tolerate any error logs that mention a failpoint + ".*failpoint.*", + ] + ) + + endpoint = env.endpoints.create( + "main", + config_lines=[ + "max_replication_write_lag = 1MB", + "databricks.max_wal_mb_per_second = 1", + "neon.max_cluster_size = 10GB", + ], + ) + endpoint.respec(skip_pg_catalog_updates=False) # Needed for databricks_system to get created. + endpoint.start() + + endpoint.safe_psql(f"CREATE DATABASE {DBNAME}") + + endpoint.safe_psql("CREATE TABLE usertable ( YCSB_KEY INT, FIELD0 TEXT);") + write_done = Event() + + def write_data(write_done): + while not write_done.is_set(): + endpoint.safe_psql( + "INSERT INTO usertable SELECT random(), repeat('a', 1000);", log_query=False + ) + log.info("write_data thread exiting") + + writer_thread = threading.Thread(target=write_data, args=(write_done,)) + writer_thread.start() + + env.storage_controller.configure_failpoints(("shard-split-pre-complete", "return(1)")) + # split the tenant + with pytest.raises(StorageControllerApiException): + env.storage_controller.tenant_shard_split(env.initial_tenant, shard_count=16) + + write_done.set() + writer_thread.join() + + # writing more data to page servers after split is aborted + for _i in range(5000): + endpoint.safe_psql( + "INSERT INTO usertable SELECT random(), repeat('a', 1000);", log_query=False + ) + + # wait until write lag becomes 0 + def check_write_lag_is_zero(): + res = endpoint.safe_psql( + """ + SELECT + pg_wal_lsn_diff(pg_current_wal_flush_lsn(), received_lsn) as received_lsn_lag + FROM
neon.backpressure_lsns();
+            """,
+            dbname="databricks_system",
+            log_query=False,
+        )
+        log.info(f"received_lsn_lag = {res[0][0]}")
+        assert res[0][0] == 0
+
+    wait_until(check_write_lag_is_zero)
+    endpoint.stop_and_destroy()
+
+
+# BEGIN_HADRON
+def test_shard_resolve_during_split_abort(neon_env_builder: NeonEnvBuilder):
+    """
+    Tests that the page service is able to resolve the correct shard during a tenant split
+    without causing query errors.
+    """
+    DBNAME = "regression"
+    WORKER_THREADS = 16
+    ROW_COUNT = 10000
+
+    init_shard_count = 4
+    neon_env_builder.num_pageservers = 1
+    stripe_size = 16
+
+    env = neon_env_builder.init_start(
+        initial_tenant_shard_count=init_shard_count, initial_tenant_shard_stripe_size=stripe_size
+    )
+
+    env.storage_controller.allowed_errors.extend(
+        [
+            # All split failures log a warning when they enqueue the abort operation
+            ".*Enqueuing background abort.*",
+            # Tolerate any error logs that mention a failpoint
+            ".*failpoint.*",
+        ]
+    )
+
+    endpoint = env.endpoints.create("main")
+    endpoint.respec(skip_pg_catalog_updates=False)  # Needed for databricks_system to get created.
+    endpoint.start()
+
+    endpoint.safe_psql(f"CREATE DATABASE {DBNAME}")
+
+    # generate 10MB of data
+    endpoint.safe_psql(
+        f"CREATE TABLE usertable AS SELECT s AS KEY, repeat('a', 1000) as VALUE from generate_series(1, {ROW_COUNT}) s;"
+    )
+    read_done = Event()
+
+    def read_data(read_done):
+        i = 0
+        while not read_done.is_set() or i < 10:
+            endpoint.safe_psql(
+                f"SELECT * FROM usertable where KEY = {random.randint(1, ROW_COUNT)}",
+                log_query=False,
+            )
+            i += 1
+        log.info(f"read_data thread exiting.
Executed {i} queries.") + + reader_threads = [] + for _i in range(WORKER_THREADS): + reader_thread = threading.Thread(target=read_data, args=(read_done,)) + reader_thread.start() + reader_threads.append(reader_thread) + + env.storage_controller.configure_failpoints(("shard-split-pre-complete", "return(1)")) + # split the tenant + with pytest.raises(StorageControllerApiException): + env.storage_controller.tenant_shard_split(env.initial_tenant, shard_count=16) + + # wait until abort is done + def check_tenant_status(): + active_count = 0 + for i in range(init_shard_count): + status = env.pageserver.http_client().tenant_status( + TenantShardId(env.initial_tenant, i, init_shard_count) + ) + if status["state"]["slug"] == "Active": + active_count += 1 + assert active_count == 4 + + wait_until(check_tenant_status) + + read_done.set() + for thread in reader_threads: + thread.join() + + endpoint.stop() + + +# END_HADRON + + def test_sharding_backpressure(neon_env_builder: NeonEnvBuilder): """ Check a scenario when one of the shards is much slower than others. From 8223c1ba9d9806ad4bf267d418d065717d70a71f Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Tue, 8 Jul 2025 22:58:18 +0200 Subject: [PATCH 23/65] pageserver/client_grpc: add initial gRPC client pools (#12434) ## Problem The communicator will need gRPC channel/client/stream pools for efficient reuse across many backends. Touches #11735. Requires #12396. ## Summary of changes Adds three nested resource pools: * `ChannelPool` for gRPC channels (i.e. TCP connections). * `ClientPool` for gRPC clients (i.e. `page_api::Client`). Acquires channels from `ChannelPool`. * `StreamPool` for gRPC GetPage streams. Acquires clients from `ClientPool`. These are minimal functional implementations that will need further improvements and performance optimization. However, the overall structure is expected to be roughly final, so reviews should focus on that. 
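The channel-sharing scheme described above (fill lower-numbered channels first, with a per-channel client cap) can be sketched in plain Rust. This is a simplified, std-only stand-in for illustration, not the actual crate: the real `ChannelPool` hands out tonic `Channel`s via RAII guards, while the hypothetical pool below only tracks per-channel client counts.

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};

const CLIENTS_PER_CHANNEL: usize = 16;

/// Simplified stand-in for the PR's `ChannelPool`: tracks how many clients
/// share each "channel", filling lower-numbered channels first so that
/// higher-numbered ones drain and can eventually be reaped.
struct ChannelPool {
    channels: Mutex<BTreeMap<usize, usize>>, // channel id -> client count
    next_id: Mutex<usize>,
}

impl ChannelPool {
    fn new() -> Arc<Self> {
        Arc::new(Self {
            channels: Mutex::new(BTreeMap::new()),
            next_id: Mutex::new(0),
        })
    }

    /// Acquires a channel slot, reusing the lowest-numbered channel with
    /// spare capacity and creating a new channel only when all are full.
    fn get(self: &Arc<Self>) -> usize {
        let mut channels = self.channels.lock().unwrap();
        for (&id, count) in channels.iter_mut() {
            if *count < CLIENTS_PER_CHANNEL {
                *count += 1;
                return id;
            }
        }
        let mut next = self.next_id.lock().unwrap();
        let id = *next;
        *next += 1;
        channels.insert(id, 1);
        id
    }

    /// Returns a slot (the real pool does this in a guard's `Drop` impl).
    fn put(&self, id: usize) {
        let mut channels = self.channels.lock().unwrap();
        let count = channels.get_mut(&id).expect("unknown channel");
        assert!(*count > 0, "channel underflow");
        *count -= 1;
    }
}

fn main() {
    let pool = ChannelPool::new();
    // 17 concurrent clients with a per-channel cap of 16 need a second channel.
    let ids: Vec<usize> = (0..17).map(|_| pool.get()).collect();
    assert_eq!(ids.iter().filter(|&&id| id == 0).count(), 16);
    assert_eq!(ids.iter().filter(|&&id| id == 1).count(), 1);
    for id in ids {
        pool.put(id);
    }
    println!("ok");
}
```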
The pools are not yet in use, but will form the foundation of a rich gRPC Pageserver client used by the communicator (see #12462). This PR also adds the initial crate scaffolding for that client. See doc comments for details. --- Cargo.lock | 15 + Cargo.toml | 1 + pageserver/client_grpc/Cargo.toml | 16 + pageserver/client_grpc/src/lib.rs | 14 + pageserver/client_grpc/src/pool.rs | 586 +++++++++++++++++++++++++++++ 5 files changed, 632 insertions(+) create mode 100644 pageserver/client_grpc/Cargo.toml create mode 100644 pageserver/client_grpc/src/lib.rs create mode 100644 pageserver/client_grpc/src/pool.rs diff --git a/Cargo.lock b/Cargo.lock index 39c43d94a3..893932fb9d 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4493,6 +4493,21 @@ dependencies = [ "workspace_hack", ] +[[package]] +name = "pageserver_client_grpc" +version = "0.1.0" +dependencies = [ + "anyhow", + "futures", + "pageserver_page_api", + "tokio", + "tokio-stream", + "tonic 0.13.1", + "tracing", + "utils", + "workspace_hack", +] + [[package]] name = "pageserver_compaction" version = "0.1.0" diff --git a/Cargo.toml b/Cargo.toml index 840e3c6036..14f2cfcb56 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -8,6 +8,7 @@ members = [ "pageserver/compaction", "pageserver/ctl", "pageserver/client", + "pageserver/client_grpc", "pageserver/pagebench", "pageserver/page_api", "proxy", diff --git a/pageserver/client_grpc/Cargo.toml b/pageserver/client_grpc/Cargo.toml new file mode 100644 index 0000000000..5a3a2761c2 --- /dev/null +++ b/pageserver/client_grpc/Cargo.toml @@ -0,0 +1,16 @@ +[package] +name = "pageserver_client_grpc" +version = "0.1.0" +edition.workspace = true +license.workspace = true + +[dependencies] +anyhow.workspace = true +futures.workspace = true +pageserver_page_api.workspace = true +tokio.workspace = true +tokio-stream.workspace = true +tonic.workspace = true +tracing.workspace = true +utils.workspace = true +workspace_hack.workspace = true diff --git a/pageserver/client_grpc/src/lib.rs 
b/pageserver/client_grpc/src/lib.rs new file mode 100644 index 0000000000..c900e1a939 --- /dev/null +++ b/pageserver/client_grpc/src/lib.rs @@ -0,0 +1,14 @@ +//! A rich Pageserver gRPC client. This client is more capable than the basic `page_api::Client` +//! gRPC client, and supports: +//! +//! * Sharded tenants across multiple Pageservers. +//! * Pooling of connections, clients, and streams for efficient resource use. +//! * Concurrent use by many callers. +//! * Internal handling of GetPage bidirectional streams. +//! * Automatic retries. +//! * Observability. +//! +//! The client is under development, this package is just a shell. + +#[allow(unused)] +mod pool; diff --git a/pageserver/client_grpc/src/pool.rs b/pageserver/client_grpc/src/pool.rs new file mode 100644 index 0000000000..518e4e5b84 --- /dev/null +++ b/pageserver/client_grpc/src/pool.rs @@ -0,0 +1,586 @@ +//! This module provides various Pageserver gRPC client resource pools. +//! +//! These pools are designed to reuse gRPC resources (connections, clients, and streams) across +//! multiple concurrent callers (i.e. Postgres backends). This avoids the resource cost and latency +//! of creating dedicated TCP connections and server tasks for every Postgres backend. +//! +//! Each resource has its own, nested pool. The pools are custom-built for the properties of each +//! resource -- they are different enough that a generic pool isn't suitable. +//! +//! * ChannelPool: manages gRPC channels (TCP connections) to a single Pageserver. Multiple clients +//! can acquire and use the same channel concurrently (via HTTP/2 stream multiplexing), up to a +//! per-channel client limit. Channels may be closed when they are no longer used by any clients. +//! +//! * ClientPool: manages gRPC clients for a single tenant shard. Each client acquires a (shared) +//! channel from the ChannelPool for the client's lifetime. A client can only be acquired by a +//! 
single caller at a time, and is returned to the pool when dropped. Idle clients may be removed
+//!   from the pool after some time, to free up the channel.
+//!
+//! * StreamPool: manages bidirectional gRPC GetPage streams. Each stream acquires a client from the
+//!   ClientPool for the stream's lifetime. Internal streams are not exposed to callers; instead, it
+//!   returns a guard that can be used to send a single request, to properly enforce queue depth and
+//!   route responses. Internally, the pool will reuse or spin up a suitable stream for the request,
+//!   possibly pipelining multiple requests from multiple callers on the same stream (up to some
+//!   queue depth). Idle streams may be removed from the pool after a while to free up the client.
+//!
+//! Each channel corresponds to one TCP connection. Each client unary request and each stream
+//! corresponds to one HTTP/2 stream and server task.
+//!
+//! TODO: error handling (including custom error types).
+//! TODO: observability.
+
+use std::collections::{BTreeMap, HashMap};
+use std::ops::{Deref, DerefMut};
+use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::{Arc, Mutex, Weak};
+
+use futures::StreamExt as _;
+use tokio::sync::mpsc::{Receiver, Sender};
+use tokio::sync::{OwnedSemaphorePermit, Semaphore, mpsc, oneshot};
+use tonic::transport::{Channel, Endpoint};
+use tracing::{error, warn};
+
+use pageserver_page_api as page_api;
+use utils::id::{TenantId, TimelineId};
+use utils::shard::ShardIndex;
+
+/// Max number of concurrent clients per channel.
+///
+/// TODO: tune these constants, and make them configurable.
+/// TODO: consider separate limits for unary and streaming clients, so we don't fill up channels
+/// with only streams.
+const CLIENTS_PER_CHANNEL: usize = 16;
+
+/// Maximum number of concurrent clients per `ClientPool`.
+const CLIENT_LIMIT: usize = 64;
+
+/// Max number of pipelined requests per gRPC GetPage stream.
+const STREAM_QUEUE_DEPTH: usize = 2;
+
+/// A gRPC channel pool, for a single Pageserver. A channel is shared by many clients (via HTTP/2
+/// stream multiplexing), up to `CLIENTS_PER_CHANNEL`. The pool does not limit the number of
+/// channels, and instead relies on `ClientPool` to limit the number of concurrent clients.
+///
+/// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads.
+///
+/// TODO: reap idle channels.
+/// TODO: consider prewarming a set of channels, to avoid initial connection latency.
+/// TODO: consider adding a circuit breaker for errors and fail fast.
+pub struct ChannelPool {
+    /// Pageserver endpoint to connect to.
+    endpoint: Endpoint,
+    /// Open channels.
+    channels: Mutex<BTreeMap<ChannelID, ChannelEntry>>,
+    /// Channel ID generator.
+    next_channel_id: AtomicUsize,
+}
+
+type ChannelID = usize;
+
+struct ChannelEntry {
+    /// The gRPC channel (i.e. TCP connection). Shared by multiple clients.
+    channel: Channel,
+    /// Number of clients using this channel.
+    clients: usize,
+}
+
+impl ChannelPool {
+    /// Creates a new channel pool for the given Pageserver endpoint.
+    pub fn new<E>(endpoint: E) -> anyhow::Result<Arc<Self>>
+    where
+        E: TryInto<Endpoint> + Send + Sync + 'static,
+        <E as TryInto<Endpoint>>::Error: std::error::Error + Send + Sync,
+    {
+        Ok(Arc::new(Self {
+            endpoint: endpoint.try_into()?,
+            channels: Mutex::default(),
+            next_channel_id: AtomicUsize::default(),
+        }))
+    }
+
+    /// Acquires a gRPC channel for a client. Multiple clients may acquire the same channel.
+    ///
+    /// This never blocks (except for mutex acquisition). The channel is connected lazily on first
+    /// use, and the `ChannelPool` does not have a channel limit. Channels will be re-established
+    /// automatically on failure (TODO: verify).
+    ///
+    /// Callers should not clone the returned channel, and must hold onto the returned guard as long
+    /// as the channel is in use.
It is unfortunately not possible to enforce this: the Protobuf
+    /// client requires an owned `Channel` and we don't have access to the channel's internal
+    /// refcount.
+    ///
+    /// This is not performance-sensitive. It is only called when creating a new client, and clients
+    /// are pooled and reused by `ClientPool`. The total number of channels will also be small. O(n)
+    /// performance is therefore okay.
+    pub fn get(self: &Arc<Self>) -> ChannelGuard {
+        let mut channels = self.channels.lock().unwrap();
+
+        // Try to find an existing channel with available capacity. We check entries in BTreeMap
+        // order, to fill up the lower-ordered channels first. The ClientPool also prefers clients
+        // with lower-ordered channel IDs first. This will cluster clients in lower-ordered
+        // channels, and free up higher-ordered channels such that they can be reaped.
+        for (&id, entry) in channels.iter_mut() {
+            assert!(entry.clients <= CLIENTS_PER_CHANNEL, "channel overflow");
+            if entry.clients < CLIENTS_PER_CHANNEL {
+                entry.clients += 1;
+                return ChannelGuard {
+                    pool: Arc::downgrade(self),
+                    id,
+                    channel: Some(entry.channel.clone()),
+                };
+            }
+        }
+
+        // Create a new channel. We connect lazily on first use, such that we don't block here and
+        // other clients can join onto the same channel while it's connecting.
+        let channel = self.endpoint.connect_lazy();
+
+        let id = self.next_channel_id.fetch_add(1, Ordering::Relaxed);
+        let entry = ChannelEntry {
+            channel: channel.clone(),
+            clients: 1, // account for the guard below
+        };
+        channels.insert(id, entry);
+
+        ChannelGuard {
+            pool: Arc::downgrade(self),
+            id,
+            channel: Some(channel),
+        }
+    }
+}
+
+/// Tracks a channel acquired from the pool. The owned inner channel can be obtained with `take()`,
+/// since the gRPC client requires an owned `Channel`.
+pub struct ChannelGuard {
+    pool: Weak<ChannelPool>,
+    id: ChannelID,
+    channel: Option<Channel>,
+}
+
+impl ChannelGuard {
+    /// Returns the inner owned channel. Panics if called more than once.
The caller must hold onto
+    /// the guard as long as the channel is in use, and should not clone it.
+    pub fn take(&mut self) -> Channel {
+        self.channel.take().expect("channel already taken")
+    }
+}
+
+/// Returns the channel to the pool.
+impl Drop for ChannelGuard {
+    fn drop(&mut self) {
+        let Some(pool) = self.pool.upgrade() else {
+            return; // pool was dropped
+        };
+        let mut channels = pool.channels.lock().unwrap();
+        let entry = channels.get_mut(&self.id).expect("unknown channel");
+        assert!(entry.clients > 0, "channel underflow");
+        entry.clients -= 1;
+    }
+}
+
+/// A pool of gRPC clients for a single tenant shard. Each client acquires a channel from the inner
+/// `ChannelPool`. A client is only given out to a single caller at a time. The pool limits the
+/// total number of concurrent clients to `CLIENT_LIMIT` via a semaphore.
+///
+/// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads.
+///
+/// TODO: reap idle clients.
+pub struct ClientPool {
+    /// Tenant ID.
+    tenant_id: TenantId,
+    /// Timeline ID.
+    timeline_id: TimelineId,
+    /// Shard ID.
+    shard_id: ShardIndex,
+    /// Authentication token, if any.
+    auth_token: Option<String>,
+    /// Channel pool to acquire channels from.
+    channel_pool: Arc<ChannelPool>,
+    /// Limits the max number of concurrent clients for this pool.
+    limiter: Arc<Semaphore>,
+    /// Idle pooled clients. Acquired clients are removed from here and returned on drop.
+    ///
+    /// The first client in the map will be acquired next. The map is sorted by client ID, which in
+    /// turn is sorted by its channel ID, such that we prefer acquiring idle clients from
+    /// lower-ordered channels. This allows us to free up and reap higher-numbered channels as idle
+    /// clients are reaped.
+    idle: Mutex<BTreeMap<ClientID, ClientEntry>>,
+    /// Unique client ID generator.
+    next_client_id: AtomicUsize,
+}
+
+type ClientID = (ChannelID, usize);
+
+struct ClientEntry {
+    /// The pooled gRPC client.
+    client: page_api::Client,
+    /// The channel guard for the channel used by the client.
+    channel_guard: ChannelGuard,
+}
+
+impl ClientPool {
+    /// Creates a new client pool for the given tenant shard. Channels are acquired from the given
+    /// `ChannelPool`, which must point to a Pageserver that hosts the tenant shard.
+    pub fn new(
+        channel_pool: Arc<ChannelPool>,
+        tenant_id: TenantId,
+        timeline_id: TimelineId,
+        shard_id: ShardIndex,
+        auth_token: Option<String>,
+    ) -> Arc<Self> {
+        Arc::new(Self {
+            tenant_id,
+            timeline_id,
+            shard_id,
+            auth_token,
+            channel_pool,
+            idle: Mutex::default(),
+            limiter: Arc::new(Semaphore::new(CLIENT_LIMIT)),
+            next_client_id: AtomicUsize::default(),
+        })
+    }
+
+    /// Gets a client from the pool, or creates a new one if necessary. Connections are established
+    /// lazily and do not block, but this call can block if the pool is at `CLIENT_LIMIT`. The
+    /// client is returned to the pool when the guard is dropped.
+    ///
+    /// This is moderately performance-sensitive. It is called for every unary request, but these
+    /// establish a new gRPC stream per request so they're already expensive. GetPage requests use
+    /// the `StreamPool` instead.
+    pub async fn get(self: &Arc<Self>) -> anyhow::Result<ClientGuard> {
+        let permit = self
+            .limiter
+            .clone()
+            .acquire_owned()
+            .await
+            .expect("never closed");
+
+        // Fast path: acquire an idle client from the pool.
+        if let Some((id, entry)) = self.idle.lock().unwrap().pop_first() {
+            return Ok(ClientGuard {
+                pool: Arc::downgrade(self),
+                id,
+                client: Some(entry.client),
+                channel_guard: Some(entry.channel_guard),
+                permit,
+            });
+        }
+
+        // Slow path: construct a new client.
+        let mut channel_guard = self.channel_pool.get();
+        let client = page_api::Client::new(
+            channel_guard.take(),
+            self.tenant_id,
+            self.timeline_id,
+            self.shard_id,
+            self.auth_token.clone(),
+            None,
+        )?;
+
+        Ok(ClientGuard {
+            pool: Arc::downgrade(self),
+            id: (
+                channel_guard.id,
+                self.next_client_id.fetch_add(1, Ordering::Relaxed),
+            ),
+            client: Some(client),
+            channel_guard: Some(channel_guard),
+            permit,
+        })
+    }
+}
+
+/// A client acquired from the pool. The inner client can be accessed via Deref. The client is
+/// returned to the pool when dropped.
+pub struct ClientGuard {
+    pool: Weak<ClientPool>,
+    id: ClientID,
+    client: Option<page_api::Client>, // Some until dropped
+    channel_guard: Option<ChannelGuard>, // Some until dropped
+    permit: OwnedSemaphorePermit,
+}
+
+impl Deref for ClientGuard {
+    type Target = page_api::Client;
+
+    fn deref(&self) -> &Self::Target {
+        self.client.as_ref().expect("not dropped")
+    }
+}
+
+impl DerefMut for ClientGuard {
+    fn deref_mut(&mut self) -> &mut Self::Target {
+        self.client.as_mut().expect("not dropped")
+    }
+}
+
+/// Returns the client to the pool.
+impl Drop for ClientGuard {
+    fn drop(&mut self) {
+        let Some(pool) = self.pool.upgrade() else {
+            return; // pool was dropped
+        };
+        let entry = ClientEntry {
+            client: self.client.take().expect("dropped once"),
+            channel_guard: self.channel_guard.take().expect("dropped once"),
+        };
+        pool.idle.lock().unwrap().insert(self.id, entry);
+
+        _ = self.permit; // returned on drop, referenced for visibility
+    }
+}
+
+/// A pool of bidirectional gRPC streams. Currently only used for GetPage streams. Each stream
+/// acquires a client from the inner `ClientPool` for the stream's lifetime.
+///
+/// Individual streams are not exposed to callers -- instead, the returned guard can be used to send
+/// a single request and await the response. Internally, requests are multiplexed across streams and
+/// channels. This allows proper queue depth enforcement and response routing.
+///
+/// TODO: reap idle streams.
+/// TODO: consider making this generic over request and response types; not currently needed.
+pub struct StreamPool {
+    /// The client pool to acquire clients from.
+    client_pool: Arc<ClientPool>,
+    /// All pooled streams.
+    ///
+    /// Incoming requests will be sent over an existing stream with available capacity. If all
+    /// streams are full, a new one is spun up and added to the pool (up to the `ClientPool` limit).
+    /// Each stream has an associated Tokio task that processes requests and responses.
+    streams: Arc<Mutex<HashMap<StreamID, StreamEntry>>>,
+    /// Limits the max number of concurrent requests (not streams).
+    limiter: Arc<Semaphore>,
+    /// Stream ID generator.
+    next_stream_id: AtomicUsize,
+}
+
+type StreamID = usize;
+type RequestSender = Sender<(page_api::GetPageRequest, ResponseSender)>;
+type RequestReceiver = Receiver<(page_api::GetPageRequest, ResponseSender)>;
+type ResponseSender = oneshot::Sender<tonic::Result<page_api::GetPageResponse>>;
+
+struct StreamEntry {
+    /// Sends caller requests to the stream task. The stream task exits when this is dropped.
+    sender: RequestSender,
+    /// Number of in-flight requests on this stream. This is an atomic to allow decrementing it on
+    /// completion without acquiring the `StreamPool::streams` lock.
+    queue_depth: Arc<AtomicUsize>,
+}
+
+impl StreamPool {
+    /// Creates a new stream pool, using the given client pool.
+    ///
+    /// NB: the stream pool should use a dedicated client pool. Otherwise, long-lived streams may
+    /// fill up the client pool and starve out unary requests. Client pools can share the same
+    /// `ChannelPool` though, since the channel pool is unbounded.
+    pub fn new(client_pool: Arc<ClientPool>) -> Arc<Self> {
+        Arc::new(Self {
+            client_pool,
+            streams: Arc::default(),
+            limiter: Arc::new(Semaphore::new(CLIENT_LIMIT * STREAM_QUEUE_DEPTH)),
+            next_stream_id: AtomicUsize::default(),
+        })
+    }
+
+    /// Acquires an available stream from the pool, or spins up a new stream async if all streams
+    /// are full.
Returns a guard that can be used to send a single request on the stream and await
+    /// the response, with queue depth quota already acquired. Blocks if the pool is at capacity
+    /// (i.e. `CLIENT_LIMIT * STREAM_QUEUE_DEPTH` requests in flight).
+    ///
+    /// This is very performance-sensitive, as it is on the GetPage hot path.
+    ///
+    /// TODO: this must do something more sophisticated for performance. We want:
+    ///
+    /// * Cheap, concurrent access in the common case where we can use a pooled stream.
+    /// * Quick acquisition of pooled streams with available capacity.
+    /// * Prefer streams that belong to lower-numbered channels, to reap idle channels.
+    /// * Prefer filling up existing streams' queue depth before spinning up new streams.
+    /// * Don't hold a lock while spinning up new streams.
+    /// * Allow concurrent clients to join onto streams while they're spun up.
+    /// * Allow spinning up multiple streams concurrently, but don't overshoot limits.
+    ///
+    /// For now, we just do something simple and functional, but very inefficient (linear scan).
+    pub async fn get(&self) -> StreamGuard {
+        let permit = self
+            .limiter
+            .clone()
+            .acquire_owned()
+            .await
+            .expect("never closed");
+        let mut streams = self.streams.lock().unwrap();
+
+        // Look for a pooled stream with available capacity.
+        for entry in streams.values() {
+            assert!(
+                entry.queue_depth.load(Ordering::Relaxed) <= STREAM_QUEUE_DEPTH,
+                "stream queue overflow"
+            );
+            if entry
+                .queue_depth
+                .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |queue_depth| {
+                    // Increment the queue depth via compare-and-swap.
+                    // TODO: review ordering.
+                    (queue_depth < STREAM_QUEUE_DEPTH).then_some(queue_depth + 1)
+                })
+                .is_ok()
+            {
+                return StreamGuard {
+                    sender: entry.sender.clone(),
+                    queue_depth: entry.queue_depth.clone(),
+                    permit,
+                };
+            }
+        }
+
+        // No available stream, spin up a new one. We install the stream entry in the pool first and
+        // return the guard, while spinning up the stream task async.
This allows other callers to
+        // join onto this stream and also create additional streams concurrently if this fills up.
+        let id = self.next_stream_id.fetch_add(1, Ordering::Relaxed);
+        let queue_depth = Arc::new(AtomicUsize::new(1)); // reserve quota for this caller
+        let (req_tx, req_rx) = mpsc::channel(STREAM_QUEUE_DEPTH);
+        let entry = StreamEntry {
+            sender: req_tx.clone(),
+            queue_depth: queue_depth.clone(),
+        };
+        streams.insert(id, entry);
+
+        // NB: make sure we don't overshoot the client limit. The semaphore limit is CLIENT_LIMIT *
+        // STREAM_QUEUE_DEPTH, but if we were to misaccount queue depth we'd try to spin up more
+        // streams than CLIENT_LIMIT and block on the client pool ~forever. This should not happen
+        // because we only acquire queue depth under lock and after acquiring a semaphore permit.
+        assert!(streams.len() <= CLIENT_LIMIT, "stream overflow");
+
+        let client_pool = self.client_pool.clone();
+        let streams = self.streams.clone();
+
+        tokio::spawn(async move {
+            if let Err(err) = Self::run_stream(client_pool, req_rx).await {
+                error!("stream failed: {err}");
+            }
+            // Remove stream from pool on exit.
+            let entry = streams.lock().unwrap().remove(&id);
+            assert!(entry.is_some(), "unknown stream ID: {id}");
+        });
+
+        StreamGuard {
+            sender: req_tx,
+            queue_depth,
+            permit,
+        }
+    }
+
+    /// Runs a stream task. This acquires a client from the `ClientPool` and establishes a
+    /// bidirectional GetPage stream, then forwards requests and responses between callers and the
+    /// stream. It does not track or enforce queue depths -- that's done by `get()` since it must be
+    /// atomic with pool stream acquisition.
+    ///
+    /// The task exits when the request channel is closed, or on a stream error. The caller is
+    /// responsible for removing the stream from the pool on exit.
+    async fn run_stream(
+        client_pool: Arc<ClientPool>,
+        mut caller_rx: RequestReceiver,
+    ) -> anyhow::Result<()> {
+        // Acquire a client from the pool and create a stream.
+        let mut client = client_pool.get().await?;
+
+        let (req_tx, req_rx) = mpsc::channel(STREAM_QUEUE_DEPTH);
+        let req_stream = tokio_stream::wrappers::ReceiverStream::new(req_rx);
+        let mut resp_stream = client.get_pages(req_stream).await?;
+
+        // Track caller response channels by request ID. If the task returns early, these response
+        // channels will be dropped and the waiting callers will receive an error.
+        let mut callers = HashMap::with_capacity(STREAM_QUEUE_DEPTH);
+
+        // Process requests and responses.
+        loop {
+            // NB: this can trip if the server doesn't respond to a request, so only debug_assert.
+            debug_assert!(callers.len() <= STREAM_QUEUE_DEPTH, "stream queue overflow");
+
+            tokio::select! {
+                // Receive requests from callers and send them to the stream.
+                req = caller_rx.recv() => {
+                    // Shut down if request channel is closed.
+                    let Some((req, resp_tx)) = req else {
+                        return Ok(());
+                    };
+
+                    // Store the response channel by request ID.
+                    if callers.contains_key(&req.request_id) {
+                        // Error on request ID duplicates. Ignore callers that went away.
+                        _ = resp_tx.send(Err(tonic::Status::invalid_argument(
+                            format!("duplicate request ID: {}", req.request_id),
+                        )));
+                        continue;
+                    }
+                    callers.insert(req.request_id, resp_tx);
+
+                    // Send the request on the stream. Bail out if the send fails.
+                    req_tx.send(req).await.map_err(|_| {
+                        tonic::Status::unavailable("stream closed")
+                    })?;
+                }
+
+                // Receive responses from the stream and send them to callers.
+                resp = resp_stream.next() => {
+                    // Shut down if the stream is closed, and bail out on stream errors.
+                    let Some(resp) = resp.transpose()? else {
+                        return Ok(())
+                    };
+
+                    // Send the response to the caller. Ignore errors if the caller went away.
+                    let Some(resp_tx) = callers.remove(&resp.request_id) else {
+                        warn!("received response for unknown request ID: {}", resp.request_id);
+                        continue;
+                    };
+                    _ = resp_tx.send(Ok(resp));
+                }
+            }
+        }
+    }
+}
+
+/// A pooled stream reference.
Can be used to send a single request, to properly enforce queue
+/// depth. Queue depth is already reserved and will be returned on drop.
+pub struct StreamGuard {
+    sender: RequestSender,
+    queue_depth: Arc<AtomicUsize>,
+    permit: OwnedSemaphorePermit,
+}
+
+impl StreamGuard {
+    /// Sends a request on the stream and awaits the response. Consumes the guard, since it's only
+    /// valid for a single request (to enforce queue depth). This also drops the guard on return and
+    /// returns the queue depth quota to the pool.
+    ///
+    /// The `GetPageRequest::request_id` must be unique across in-flight requests.
+    ///
+    /// NB: errors are often returned as `GetPageResponse::status_code` instead of `tonic::Status`
+    /// to avoid tearing down the stream for per-request errors. Callers must check this.
+    pub async fn send(
+        self,
+        req: page_api::GetPageRequest,
+    ) -> tonic::Result<page_api::GetPageResponse> {
+        let (resp_tx, resp_rx) = oneshot::channel();
+
+        self.sender
+            .send((req, resp_tx))
+            .await
+            .map_err(|_| tonic::Status::unavailable("stream closed"))?;
+
+        resp_rx
+            .await
+            .map_err(|_| tonic::Status::unavailable("stream closed"))?
+    }
+}
+
+impl Drop for StreamGuard {
+    fn drop(&mut self) {
+        // Release the queue depth reservation on drop. This can prematurely decrement it if dropped
+        // before the response is received, but that's okay.
+        let prev_queue_depth = self.queue_depth.fetch_sub(1, Ordering::SeqCst);
+        assert!(prev_queue_depth > 0, "stream queue underflow");
+
+        _ = self.permit; // returned on drop, referenced for visibility
+    }
+}

From 09ff22a4d42ee0c97724e61dc08f325b2ea39111 Mon Sep 17 00:00:00 2001
From: Suhas Thalanki <54014218+thesuhas@users.noreply.github.com>
Date: Tue, 8 Jul 2025 17:12:26 -0400
Subject: [PATCH 24/65] fix(compute): removing `NEON_EXT_INT_UPD` log statement
 added for debugging verbosity (#12509)

Removes the `NEON_EXT_INT_UPD` log statement that was added for debugging verbosity.
--- compute_tools/src/compute.rs | 8 -------- 1 file changed, 8 deletions(-) diff --git a/compute_tools/src/compute.rs b/compute_tools/src/compute.rs index f25aff1110..ec6e6c1634 100644 --- a/compute_tools/src/compute.rs +++ b/compute_tools/src/compute.rs @@ -2433,19 +2433,11 @@ LIMIT 100", // If the value is -1, we never suspend so set the value to default collection. // If the value is 0, it means default, we will just continue to use the default. if spec.suspend_timeout_seconds == -1 || spec.suspend_timeout_seconds == 0 { - info!( - "[NEON_EXT_INT_UPD] Spec Timeout: {}, New Timeout: {}", - spec.suspend_timeout_seconds, DEFAULT_INSTALLED_EXTENSIONS_COLLECTION_INTERVAL - ); self.params.installed_extensions_collection_interval.store( DEFAULT_INSTALLED_EXTENSIONS_COLLECTION_INTERVAL, std::sync::atomic::Ordering::SeqCst, ); } else { - info!( - "[NEON_EXT_INT_UPD] Spec Timeout: {}", - spec.suspend_timeout_seconds - ); self.params.installed_extensions_collection_interval.store( spec.suspend_timeout_seconds as u64, std::sync::atomic::Ordering::SeqCst, From 4dee2bfd82b221f90b934e5d68343caa3a0e531f Mon Sep 17 00:00:00 2001 From: Trung Dinh Date: Tue, 8 Jul 2025 14:14:04 -0700 Subject: [PATCH 25/65] pageserver: Introduce config to enable/disable eviction task (#12496) ## Problem We lost capability to explicitly disable the global eviction task (for testing). ## Summary of changes Add an `enabled` flag to `DiskUsageEvictionTaskConfig` to indicate whether we should run the eviction job or not. 
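The change above replaces an `Option<DiskUsageEvictionTaskConfig>` (where `None` meant "use defaults") with a non-optional config carrying an explicit `enabled` flag that defaults to `true`. A std-only sketch of that pattern follows; the field types are simplified (`u8` instead of `Percent`, no serde attributes), and `should_run_eviction` is a hypothetical helper, not code from this patch:

```rust
use std::time::Duration;

/// Simplified stand-in for `DiskUsageEvictionTaskConfig`: instead of wrapping
/// the whole config in `Option` (with `None` meaning disabled), it carries an
/// `enabled` flag whose `Default` impl turns the task on.
#[derive(Debug, Clone, PartialEq)]
struct DiskUsageEvictionTaskConfig {
    max_usage_pct: u8,
    min_avail_bytes: u64,
    period: Duration,
    enabled: bool,
}

impl Default for DiskUsageEvictionTaskConfig {
    fn default() -> Self {
        // Values mirror the defaults in the patch.
        Self {
            max_usage_pct: 80,
            min_avail_bytes: 2_000_000_000,
            period: Duration::from_secs(60),
            enabled: true,
        }
    }
}

/// Hypothetical call-site helper: previously this would have been
/// `config.is_some()`; now the switch is an explicit, testable flag.
fn should_run_eviction(config: &DiskUsageEvictionTaskConfig) -> bool {
    config.enabled
}

fn main() {
    let default_config = DiskUsageEvictionTaskConfig::default();
    assert!(should_run_eviction(&default_config));

    // Tests can now disable the task while keeping every other default.
    let disabled = DiskUsageEvictionTaskConfig {
        enabled: false,
        ..Default::default()
    };
    assert!(!should_run_eviction(&disabled));
    println!("ok");
}
```

The benefit over the `Option` shape is that partial overrides compose with struct-update syntax, and the task loop only ever sees one config type.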
--- libs/pageserver_api/src/config.rs | 21 ++++- pageserver/src/config.rs | 85 ++++++++++++++----- pageserver/src/disk_usage_eviction_task.rs | 4 +- pageserver/src/utilization.rs | 7 +- ...er_max_throughput_getpage_at_latest_lsn.py | 18 +++- 5 files changed, 103 insertions(+), 32 deletions(-) diff --git a/libs/pageserver_api/src/config.rs b/libs/pageserver_api/src/config.rs index 00d6b61399..dc7e9aed7f 100644 --- a/libs/pageserver_api/src/config.rs +++ b/libs/pageserver_api/src/config.rs @@ -5,6 +5,7 @@ mod tests; use const_format::formatcp; use posthog_client_lite::PostHogClientConfig; +use utils::serde_percent::Percent; pub const DEFAULT_PG_LISTEN_PORT: u16 = 64000; pub const DEFAULT_PG_LISTEN_ADDR: &str = formatcp!("127.0.0.1:{DEFAULT_PG_LISTEN_PORT}"); pub const DEFAULT_HTTP_LISTEN_PORT: u16 = 9898; @@ -223,7 +224,7 @@ pub struct ConfigToml { pub metric_collection_bucket: Option, #[serde(with = "humantime_serde")] pub synthetic_size_calculation_interval: Duration, - pub disk_usage_based_eviction: Option, + pub disk_usage_based_eviction: DiskUsageEvictionTaskConfig, pub test_remote_failures: u64, pub ondemand_download_behavior_treat_error_as_warn: bool, #[serde(with = "humantime_serde")] @@ -273,6 +274,7 @@ pub struct ConfigToml { } #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] +#[serde(default)] pub struct DiskUsageEvictionTaskConfig { pub max_usage_pct: utils::serde_percent::Percent, pub min_avail_bytes: u64, @@ -283,6 +285,21 @@ pub struct DiskUsageEvictionTaskConfig { /// Select sorting for evicted layers #[serde(default)] pub eviction_order: EvictionOrder, + pub enabled: bool, +} + +impl Default for DiskUsageEvictionTaskConfig { + fn default() -> Self { + Self { + max_usage_pct: Percent::new(80).unwrap(), + min_avail_bytes: 2_000_000_000, + period: Duration::from_secs(60), + #[cfg(feature = "testing")] + mock_statvfs: None, + eviction_order: EvictionOrder::default(), + enabled: true, + } + } } #[derive(Debug, Clone, PartialEq, 
Eq, serde::Serialize, serde::Deserialize)] @@ -738,7 +755,7 @@ impl Default for ConfigToml { metric_collection_bucket: (None), - disk_usage_based_eviction: (None), + disk_usage_based_eviction: DiskUsageEvictionTaskConfig::default(), test_remote_failures: (0), diff --git a/pageserver/src/config.rs b/pageserver/src/config.rs index 6e22f9f36e..99d7e0ca3a 100644 --- a/pageserver/src/config.rs +++ b/pageserver/src/config.rs @@ -28,7 +28,6 @@ use reqwest::Url; use storage_broker::Uri; use utils::id::{NodeId, TimelineId}; use utils::logging::{LogFormat, SecretString}; -use utils::serde_percent::Percent; use crate::tenant::storage_layer::inmemory_layer::IndexEntry; use crate::tenant::{TENANTS_SEGMENT_NAME, TIMELINES_SEGMENT_NAME}; @@ -146,7 +145,7 @@ pub struct PageServerConf { pub metric_collection_bucket: Option, pub synthetic_size_calculation_interval: Duration, - pub disk_usage_based_eviction: Option, + pub disk_usage_based_eviction: DiskUsageEvictionTaskConfig, pub test_remote_failures: u64, @@ -460,16 +459,7 @@ impl PageServerConf { metric_collection_endpoint, metric_collection_bucket, synthetic_size_calculation_interval, - disk_usage_based_eviction: Some(disk_usage_based_eviction.unwrap_or( - DiskUsageEvictionTaskConfig { - max_usage_pct: Percent::new(80).unwrap(), - min_avail_bytes: 2_000_000_000, - period: Duration::from_secs(60), - #[cfg(feature = "testing")] - mock_statvfs: None, - eviction_order: Default::default(), - }, - )), + disk_usage_based_eviction, test_remote_failures, ondemand_download_behavior_treat_error_as_warn, background_task_maximum_delay, @@ -719,8 +709,9 @@ mod tests { use std::time::Duration; use camino::Utf8PathBuf; + use pageserver_api::config::{DiskUsageEvictionTaskConfig, EvictionOrder}; use rstest::rstest; - use utils::id::NodeId; + use utils::{id::NodeId, serde_percent::Percent}; use super::PageServerConf; @@ -820,19 +811,69 @@ mod tests { .expect("parse_and_validate"); } - #[test] - fn test_config_disk_usage_based_eviction_is_valid() { 
- let input = r#" + #[rstest] + #[ + case::omit_the_whole_config( + DiskUsageEvictionTaskConfig { + max_usage_pct: Percent::new(80).unwrap(), + min_avail_bytes: 2_000_000_000, + period: Duration::from_secs(60), + eviction_order: Default::default(), + #[cfg(feature = "testing")] + mock_statvfs: None, + enabled: true, + }, + r#" control_plane_api = "http://localhost:6666" - "#; + "#, + )] + #[ + case::omit_enabled_field( + DiskUsageEvictionTaskConfig { + max_usage_pct: Percent::new(80).unwrap(), + min_avail_bytes: 1_000_000_000, + period: Duration::from_secs(60), + eviction_order: EvictionOrder::RelativeAccessed { + highest_layer_count_loses_first: true, + }, + #[cfg(feature = "testing")] + mock_statvfs: None, + enabled: true, + }, + r#" + control_plane_api = "http://localhost:6666" + disk_usage_based_eviction = { max_usage_pct = 80, min_avail_bytes = 1000000000, period = "60s" } + "#, + )] + #[case::disabled( + DiskUsageEvictionTaskConfig { + max_usage_pct: Percent::new(80).unwrap(), + min_avail_bytes: 2_000_000_000, + period: Duration::from_secs(60), + eviction_order: EvictionOrder::RelativeAccessed { + highest_layer_count_loses_first: true, + }, + #[cfg(feature = "testing")] + mock_statvfs: None, + enabled: false, + }, + r#" + control_plane_api = "http://localhost:6666" + disk_usage_based_eviction = { enabled = false } + "# + )] + fn test_config_disk_usage_based_eviction_is_valid( + #[case] expected_disk_usage_based_eviction: DiskUsageEvictionTaskConfig, + #[case] input: &str, + ) { let config_toml = toml_edit::de::from_str::(input) .expect("disk_usage_based_eviction is valid"); let workdir = Utf8PathBuf::from("/nonexistent"); let config = PageServerConf::parse_and_validate(NodeId(0), config_toml, &workdir).unwrap(); - let disk_usage_based_eviction = config.disk_usage_based_eviction.unwrap(); - assert_eq!(disk_usage_based_eviction.max_usage_pct.get(), 80); - assert_eq!(disk_usage_based_eviction.min_avail_bytes, 2_000_000_000); - 
assert_eq!(disk_usage_based_eviction.period, Duration::from_secs(60)); - assert_eq!(disk_usage_based_eviction.eviction_order, Default::default()); + let disk_usage_based_eviction = config.disk_usage_based_eviction; + assert_eq!( + expected_disk_usage_based_eviction, + disk_usage_based_eviction + ); } } diff --git a/pageserver/src/disk_usage_eviction_task.rs b/pageserver/src/disk_usage_eviction_task.rs index e6529fb201..f1d34664a8 100644 --- a/pageserver/src/disk_usage_eviction_task.rs +++ b/pageserver/src/disk_usage_eviction_task.rs @@ -171,7 +171,8 @@ pub fn launch_disk_usage_global_eviction_task( tenant_manager: Arc, background_jobs_barrier: completion::Barrier, ) -> Option { - let Some(task_config) = &conf.disk_usage_based_eviction else { + let task_config = &conf.disk_usage_based_eviction; + if !task_config.enabled { info!("disk usage based eviction task not configured"); return None; }; @@ -1268,6 +1269,7 @@ mod filesystem_level_usage { #[cfg(feature = "testing")] mock_statvfs: None, eviction_order: pageserver_api::config::EvictionOrder::default(), + enabled: true, }, total_bytes: 100_000, avail_bytes: 0, diff --git a/pageserver/src/utilization.rs b/pageserver/src/utilization.rs index 29d1a31aaf..ccfad7a391 100644 --- a/pageserver/src/utilization.rs +++ b/pageserver/src/utilization.rs @@ -45,9 +45,10 @@ pub(crate) fn regenerate( let (disk_wanted_bytes, shard_count) = tenant_manager.calculate_utilization()?; // Fetch the fraction of disk space which may be used - let disk_usable_pct = match conf.disk_usage_based_eviction.clone() { - Some(e) => e.max_usage_pct, - None => Percent::new(100).unwrap(), + let disk_usable_pct = if conf.disk_usage_based_eviction.enabled { + conf.disk_usage_based_eviction.max_usage_pct + } else { + Percent::new(100).unwrap() }; // Express a static value for how many shards we may schedule on one node diff --git a/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py 
b/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py index bf998a2a0a..8e7055ef78 100644 --- a/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py +++ b/test_runner/performance/pageserver/pagebench/test_pageserver_max_throughput_getpage_at_latest_lsn.py @@ -71,7 +71,13 @@ def test_pageserver_characterize_latencies_with_1_client_and_throughput_with_man n_clients: int, ): setup_and_run_pagebench_benchmark( - neon_env_builder, zenbenchmark, pg_bin, n_tenants, pgbench_scale, duration, n_clients + neon_env_builder, + zenbenchmark, + pg_bin, + n_tenants, + pgbench_scale, + duration, + n_clients, ) @@ -86,7 +92,8 @@ def setup_and_run_pagebench_benchmark( ): def record(metric, **kwargs): zenbenchmark.record( - metric_name=f"pageserver_max_throughput_getpage_at_latest_lsn.{metric}", **kwargs + metric_name=f"pageserver_max_throughput_getpage_at_latest_lsn.{metric}", + **kwargs, ) params: dict[str, tuple[Any, dict[str, Any]]] = {} @@ -104,7 +111,7 @@ def setup_and_run_pagebench_benchmark( # configure cache sizes like in prod page_cache_size = 16384 max_file_descriptors = 500000 - neon_env_builder.pageserver_config_override = f"page_cache_size={page_cache_size}; max_file_descriptors={max_file_descriptors}; disk_usage_based_eviction={{max_usage_pct=99, min_avail_bytes=0, period = '999y'}}" + neon_env_builder.pageserver_config_override = f"page_cache_size={page_cache_size}; max_file_descriptors={max_file_descriptors}; disk_usage_based_eviction={{enabled = false}}" tracing_config = PageserverTracingConfig( sampling_ratio=(0, 1000), @@ -120,7 +127,10 @@ def setup_and_run_pagebench_benchmark( page_cache_size * 8192, {"unit": "byte"}, ), - "pageserver_config_override.max_file_descriptors": (max_file_descriptors, {"unit": ""}), + "pageserver_config_override.max_file_descriptors": ( + max_file_descriptors, + {"unit": ""}, + ), "pageserver_config_override.sampling_ratio": (ratio, {"unit": 
""}), } ) From c848b995b296124b686b4eeec54b08aee3e539a1 Mon Sep 17 00:00:00 2001 From: Vlad Lazar Date: Tue, 8 Jul 2025 22:24:59 +0100 Subject: [PATCH 26/65] safekeeper: trim dead senders before adding more (#12490) ## Problem We only trim the senders if we tried to send a message to them and discovered that the channel is closed. This is problematic if the pageserver keeps connecting while there's nothing to send back for the shard. In this scenario we never trim down the senders list and can panic due to the u8 limit. ## Summary of Changes Trim down the dead senders before adding a new one. Closes LKB-178 --- safekeeper/src/send_interpreted_wal.rs | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/safekeeper/src/send_interpreted_wal.rs b/safekeeper/src/send_interpreted_wal.rs index 3797ac39d1..72a436e25f 100644 --- a/safekeeper/src/send_interpreted_wal.rs +++ b/safekeeper/src/send_interpreted_wal.rs @@ -561,6 +561,20 @@ impl InterpretedWalReader { // Update internal and external state, then reset the WAL stream // if required. let senders = self.shard_senders.entry(shard_id).or_default(); + + // Clean up any shard senders that have dropped out before adding the new + // one. This avoids a build up of dead senders. + senders.retain(|sender| { + let closed = sender.tx.is_closed(); + + if closed { + let sender_id = ShardSenderId::new(shard_id, sender.sender_id); + tracing::info!("Removed shard sender {}", sender_id); + } + + !closed + }); + let new_sender_id = match senders.last() { Some(sender) => sender.sender_id.next(), None => SenderId::first() From 43dbded8c871ef51f778b84fd1b1c264ea54ad44 Mon Sep 17 00:00:00 2001 From: "Alex Chi Z." <4198311+skyzh@users.noreply.github.com> Date: Tue, 8 Jul 2025 18:32:51 -0400 Subject: [PATCH 27/65] fix(pageserver): disallow lease creation below the applied gc cutoff (#12489) ## Problem close LKB-209 ## Summary of changes - We should not allow lease creation below the applied gc cutoff. 
- Also removed the condition for `AttachedSingle`. We should always check the lease against the gc cutoff in all attach modes. --------- Signed-off-by: Alex Chi Z --- pageserver/src/tenant/timeline.rs | 42 +++++++++++++++---------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index 293f3c484d..4a08172337 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -118,7 +118,6 @@ use crate::pgdatadir_mapping::{ MAX_AUX_FILE_V2_DELTAS, MetricsUpdate, }; use crate::task_mgr::TaskKind; -use crate::tenant::config::AttachmentMode; use crate::tenant::gc_result::GcResult; use crate::tenant::layer_map::LayerMap; use crate::tenant::metadata::TimelineMetadata; @@ -1771,30 +1770,31 @@ impl Timeline { existing_lease.clone() } Entry::Vacant(vacant) => { - // Reject already GC-ed LSN if we are in AttachedSingle and - // not blocked by the lsn lease deadline. + // Never allow a lease to be requested for an LSN below the applied GC cutoff. The data could have been deleted. + let latest_gc_cutoff_lsn = self.get_applied_gc_cutoff_lsn(); + if lsn < *latest_gc_cutoff_lsn { + bail!( + "tried to request an lsn lease for an lsn below the latest gc cutoff. requested at {} gc cutoff {}", + lsn, + *latest_gc_cutoff_lsn + ); + } + + // We allow create lease for those below the planned gc cutoff if we are still within the grace period + // of GC blocking. let validate = { let conf = self.tenant_conf.load(); - conf.location.attach_mode == AttachmentMode::Single - && !conf.is_gc_blocked_by_lsn_lease_deadline() + !conf.is_gc_blocked_by_lsn_lease_deadline() }; - if init || validate { - let latest_gc_cutoff_lsn = self.get_applied_gc_cutoff_lsn(); - if lsn < *latest_gc_cutoff_lsn { - bail!( - "tried to request an lsn lease for an lsn below the latest gc cutoff. 
requested at {} gc cutoff {}", - lsn, - *latest_gc_cutoff_lsn - ); - } - if lsn < planned_cutoff { - bail!( - "tried to request an lsn lease for an lsn below the planned gc cutoff. requested at {} planned gc cutoff {}", - lsn, - planned_cutoff - ); - } + // Do not allow initial lease creation to be below the planned gc cutoff. The client (compute_ctl) determines + // whether it is a initial lease creation or a renewal. + if (init || validate) && lsn < planned_cutoff { + bail!( + "tried to request an lsn lease for an lsn below the planned gc cutoff. requested at {} planned gc cutoff {}", + lsn, + planned_cutoff + ); } let dt: DateTime = valid_until.into(); From aac1f8efb1086b6db7c599c26912920f08d479b3 Mon Sep 17 00:00:00 2001 From: Christian Schwarz Date: Wed, 9 Jul 2025 10:41:36 +0200 Subject: [PATCH 28/65] refactor(compaction): eliminate `CompactionError::CollectKeyspaceError` variant (#12517) The only differentiated handling of it is for `is_critical`, which in turn is a `matches!()` on several variants of the `enum CollectKeyspaceError` which is the value contained insided `CompactionError::CollectKeyspaceError`. This PR introduces a new error for `repartition()`, allowing its immediate callers to inspect it like `is_critical` did. A drive-by fix is more precise classification of WaitLsnError::BadState when mapping to `tonic::Status`. 
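The refactor described above leans on a convention used across these pageserver patches: each error enum exposes `is_cancel()` (benign shutdown/cancellation) plus an `into_anyhow()`-style escape hatch, and wrapper errors fold inner cancellation into their own shutdown variant at the conversion boundary instead of keeping a dedicated passthrough variant. A minimal self-contained sketch of that pattern follows; the type and variant names are hypothetical stand-ins (no `anyhow`/`thiserror` dependency), not the actual pageserver types:

```rust
// Hypothetical leaf error: either the task was cancelled (benign) or
// something genuinely failed while reading data.
#[derive(Debug)]
enum CollectError {
    Cancelled,
    Decode(String),
}

impl CollectError {
    fn is_cancel(&self) -> bool {
        matches!(self, CollectError::Cancelled)
    }
}

// Wrapper error for the calling operation. Instead of a dedicated
// variant holding the inner error (like the removed
// `CompactionError::CollectKeySpaceError`), cancellation is mapped to
// `ShuttingDown` at the conversion point and everything else becomes
// an opaque `Other`.
#[derive(Debug)]
enum CompactError {
    ShuttingDown,
    Other(String),
}

impl CompactError {
    fn is_cancel(&self) -> bool {
        matches!(self, CompactError::ShuttingDown)
    }

    fn from_collect(err: CollectError) -> Self {
        if err.is_cancel() {
            CompactError::ShuttingDown
        } else {
            CompactError::Other(format!("{err:?}"))
        }
    }
}

fn main() {
    // Cancellation propagates as a cancel-like error...
    assert!(CompactError::from_collect(CollectError::Cancelled).is_cancel());
    // ...while real failures stay non-cancel and get logged/alerted.
    let failed = CompactError::from_collect(CollectError::Decode("bad page".into()));
    assert!(!failed.is_cancel());
    println!("ok");
}
```

The payoff, as the commit message notes, is that callers only ever branch on `is_cancel()` rather than on a growing `matches!` over nested variants.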
refs - https://databricks.atlassian.net/browse/LKB-182 --- pageserver/src/http/routes.rs | 1 - pageserver/src/pgdatadir_mapping.rs | 17 +++ pageserver/src/tenant.rs | 7 - pageserver/src/tenant/tasks.rs | 3 +- pageserver/src/tenant/timeline.rs | 129 ++++++++++++------- pageserver/src/tenant/timeline/compaction.rs | 43 +++++-- 6 files changed, 130 insertions(+), 70 deletions(-) diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index 0e40dbcd15..2995a37089 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -2502,7 +2502,6 @@ async fn timeline_checkpoint_handler( .map_err(|e| match e { CompactionError::ShuttingDown => ApiError::ShuttingDown, - CompactionError::CollectKeySpaceError(e) => ApiError::InternalServerError(anyhow::anyhow!(e)), CompactionError::Other(e) => ApiError::InternalServerError(e), } )?; diff --git a/pageserver/src/pgdatadir_mapping.rs b/pageserver/src/pgdatadir_mapping.rs index 31f38d485f..8532a6938f 100644 --- a/pageserver/src/pgdatadir_mapping.rs +++ b/pageserver/src/pgdatadir_mapping.rs @@ -141,6 +141,23 @@ pub(crate) enum CollectKeySpaceError { Cancelled, } +impl CollectKeySpaceError { + pub(crate) fn is_cancel(&self) -> bool { + match self { + CollectKeySpaceError::Decode(_) => false, + CollectKeySpaceError::PageRead(e) => e.is_cancel(), + CollectKeySpaceError::Cancelled => true, + } + } + pub(crate) fn into_anyhow(self) -> anyhow::Error { + match self { + CollectKeySpaceError::Decode(e) => anyhow::Error::new(e), + CollectKeySpaceError::PageRead(e) => anyhow::Error::new(e), + CollectKeySpaceError::Cancelled => anyhow::Error::new(self), + } + } +} + impl From for CollectKeySpaceError { fn from(err: PageReconstructError) -> Self { match err { diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index b0969a96c1..f576119db8 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -3324,13 +3324,6 @@ impl TenantShard { match err { err if err.is_cancel() => {} 
CompactionError::ShuttingDown => (), - CompactionError::CollectKeySpaceError(err) => { - // CollectKeySpaceError::Cancelled and PageRead::Cancelled are handled in `err.is_cancel` branch. - self.compaction_circuit_breaker - .lock() - .unwrap() - .fail(&CIRCUIT_BREAKERS_BROKEN, err); - } CompactionError::Other(err) => { self.compaction_circuit_breaker .lock() diff --git a/pageserver/src/tenant/tasks.rs b/pageserver/src/tenant/tasks.rs index 2ae6b7ff3d..bcece5589a 100644 --- a/pageserver/src/tenant/tasks.rs +++ b/pageserver/src/tenant/tasks.rs @@ -318,7 +318,6 @@ pub(crate) fn log_compaction_error( let level = match err { e if e.is_cancel() => return, ShuttingDown => return, - CollectKeySpaceError(_) => Level::ERROR, _ if task_cancelled => Level::INFO, Other(err) => { let root_cause = err.root_cause(); @@ -328,7 +327,7 @@ pub(crate) fn log_compaction_error( .is_some_and(|e| e.is_stopping()); let timeline = root_cause .downcast_ref::() - .is_some_and(|e| e.is_stopping()); + .is_some_and(|e| e.is_cancel()); let buffered_writer_flush_task_canelled = root_cause .downcast_ref::() .is_some_and(|e| e.is_cancel()); diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index 4a08172337..6088f40669 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -585,6 +585,28 @@ pub(crate) enum PageReconstructError { MissingKey(Box), } +impl PageReconstructError { + pub(crate) fn is_cancel(&self) -> bool { + match self { + PageReconstructError::Other(_) => false, + PageReconstructError::AncestorLsnTimeout(e) => e.is_cancel(), + PageReconstructError::Cancelled => true, + PageReconstructError::WalRedo(_) => false, + PageReconstructError::MissingKey(_) => false, + } + } + #[allow(dead_code)] // we use the is_cancel + into_anyhow pattern in quite a few places, this one will follow soon enough + pub(crate) fn into_anyhow(self) -> anyhow::Error { + match self { + PageReconstructError::Other(e) => e, + 
PageReconstructError::AncestorLsnTimeout(e) => e.into_anyhow(), + PageReconstructError::Cancelled => anyhow::Error::new(self), + PageReconstructError::WalRedo(e) => e, + PageReconstructError::MissingKey(_) => anyhow::Error::new(self), + } + } +} + impl From for PageReconstructError { fn from(value: anyhow::Error) -> Self { // with walingest.rs many PageReconstructError are wrapped in as anyhow::Error @@ -738,17 +760,6 @@ impl std::fmt::Display for MissingKeyError { } } -impl PageReconstructError { - /// Returns true if this error indicates a tenant/timeline shutdown alike situation - pub(crate) fn is_stopping(&self) -> bool { - use PageReconstructError::*; - match self { - Cancelled => true, - Other(_) | AncestorLsnTimeout(_) | WalRedo(_) | MissingKey(_) => false, - } - } -} - #[derive(thiserror::Error, Debug)] pub(crate) enum CreateImageLayersError { #[error("timeline shutting down")] @@ -951,13 +962,35 @@ pub enum WaitLsnError { Timeout(String), } +impl WaitLsnError { + pub(crate) fn is_cancel(&self) -> bool { + match self { + WaitLsnError::Shutdown => true, + WaitLsnError::BadState(timeline_state) => match timeline_state { + TimelineState::Loading => false, + TimelineState::Active => false, + TimelineState::Stopping => true, + TimelineState::Broken { .. 
} => false, + }, + WaitLsnError::Timeout(_) => false, + } + } + pub(crate) fn into_anyhow(self) -> anyhow::Error { + match self { + WaitLsnError::Shutdown => anyhow::Error::new(self), + WaitLsnError::BadState(_) => anyhow::Error::new(self), + WaitLsnError::Timeout(_) => anyhow::Error::new(self), + } + } +} + impl From for tonic::Status { fn from(err: WaitLsnError) -> Self { use tonic::Code; - let code = match &err { - WaitLsnError::Timeout(_) => Code::Internal, - WaitLsnError::BadState(_) => Code::Internal, - WaitLsnError::Shutdown => Code::Unavailable, + let code = if err.is_cancel() { + Code::Unavailable + } else { + Code::Internal }; tonic::Status::new(code, err.to_string()) } @@ -1084,6 +1117,26 @@ enum ImageLayerCreationOutcome { Skip, } +enum RepartitionError { + Other(anyhow::Error), + CollectKeyspace(CollectKeySpaceError), +} + +impl RepartitionError { + fn is_cancel(&self) -> bool { + match self { + RepartitionError::Other(_) => false, + RepartitionError::CollectKeyspace(e) => e.is_cancel(), + } + } + fn into_anyhow(self) -> anyhow::Error { + match self { + RepartitionError::Other(e) => e, + RepartitionError::CollectKeyspace(e) => e.into_anyhow(), + } + } +} + /// Public interface functions impl Timeline { /// Get the LSN where this branch was created @@ -2070,10 +2123,6 @@ impl Timeline { Err(CompactionError::Other(_)) => { self.compaction_failed.store(true, AtomicOrdering::Relaxed) } - Err(CompactionError::CollectKeySpaceError(_)) => { - // Cancelled errors are covered by the `Err(e) if e.is_cancel()` branch. 
- self.compaction_failed.store(true, AtomicOrdering::Relaxed) - } }; result @@ -4963,7 +5012,7 @@ impl Timeline { ctx, ) .await - .map_err(|e| FlushLayerError::from_anyhow(self, e.into()))?; + .map_err(|e| FlushLayerError::from_anyhow(self, e.into_anyhow()))?; if self.cancel.is_cancelled() { return Err(FlushLayerError::Cancelled); @@ -5213,18 +5262,18 @@ impl Timeline { partition_size: u64, flags: EnumSet, ctx: &RequestContext, - ) -> Result<((KeyPartitioning, SparseKeyPartitioning), Lsn), CompactionError> { + ) -> Result<((KeyPartitioning, SparseKeyPartitioning), Lsn), RepartitionError> { let Ok(mut guard) = self.partitioning.try_write_guard() else { // NB: there are two callers, one is the compaction task, of which there is only one per struct Tenant and hence Timeline. // The other is the initdb optimization in flush_frozen_layer, used by `boostrap_timeline`, which runs before `.activate()` // and hence before the compaction task starts. - return Err(CompactionError::Other(anyhow!( + return Err(RepartitionError::Other(anyhow!( "repartition() called concurrently" ))); }; let ((dense_partition, sparse_partition), partition_lsn) = &*guard.read(); if lsn < *partition_lsn { - return Err(CompactionError::Other(anyhow!( + return Err(RepartitionError::Other(anyhow!( "repartition() called with LSN going backwards, this should not happen" ))); } @@ -5245,7 +5294,10 @@ impl Timeline { )); } - let (dense_ks, sparse_ks) = self.collect_keyspace(lsn, ctx).await?; + let (dense_ks, sparse_ks) = self + .collect_keyspace(lsn, ctx) + .await + .map_err(RepartitionError::CollectKeyspace)?; let dense_partitioning = dense_ks.partition( &self.shard_identity, partition_size, @@ -6010,9 +6062,6 @@ impl Drop for Timeline { pub(crate) enum CompactionError { #[error("The timeline or pageserver is shutting down")] ShuttingDown, - /// Compaction cannot be done right now; page reconstruction and so on. 
- #[error("Failed to collect keyspace: {0}")] - CollectKeySpaceError(#[from] CollectKeySpaceError), #[error(transparent)] Other(anyhow::Error), } @@ -6020,27 +6069,15 @@ pub(crate) enum CompactionError { impl CompactionError { /// Errors that can be ignored, i.e., cancel and shutdown. pub fn is_cancel(&self) -> bool { - matches!( - self, - Self::ShuttingDown - | Self::CollectKeySpaceError(CollectKeySpaceError::Cancelled) - | Self::CollectKeySpaceError(CollectKeySpaceError::PageRead( - PageReconstructError::Cancelled - )) - ) + matches!(self, Self::ShuttingDown) } - /// Critical errors that indicate data corruption. - pub fn is_critical(&self) -> bool { - matches!( - self, - Self::CollectKeySpaceError( - CollectKeySpaceError::Decode(_) - | CollectKeySpaceError::PageRead( - PageReconstructError::MissingKey(_) | PageReconstructError::WalRedo(_), - ) - ) - ) + pub fn from_collect_keyspace(err: CollectKeySpaceError) -> Self { + if err.is_cancel() { + Self::ShuttingDown + } else { + Self::Other(err.into_anyhow()) + } } } diff --git a/pageserver/src/tenant/timeline/compaction.rs b/pageserver/src/tenant/timeline/compaction.rs index 2c0b98c1e2..c263df1eb2 100644 --- a/pageserver/src/tenant/timeline/compaction.rs +++ b/pageserver/src/tenant/timeline/compaction.rs @@ -16,7 +16,8 @@ use super::{ Timeline, }; -use crate::tenant::timeline::DeltaEntry; +use crate::pgdatadir_mapping::CollectKeySpaceError; +use crate::tenant::timeline::{DeltaEntry, RepartitionError}; use crate::walredo::RedoAttemptType; use anyhow::{Context, anyhow}; use bytes::Bytes; @@ -64,7 +65,7 @@ use crate::tenant::timeline::{ DeltaLayerWriter, ImageLayerCreationOutcome, ImageLayerWriter, IoConcurrency, Layer, ResidentLayer, drop_layer_manager_rlock, }; -use crate::tenant::{DeltaLayer, MaybeOffloaded}; +use crate::tenant::{DeltaLayer, MaybeOffloaded, PageReconstructError}; use crate::virtual_file::{MaybeFatalIo, VirtualFile}; /// Maximum number of deltas before generating an image layer in bottom-most 
compaction. @@ -572,7 +573,7 @@ impl GcCompactionQueue { match res { Ok(res) => Ok(res), Err(CompactionError::ShuttingDown) => Err(CompactionError::ShuttingDown), - Err(CompactionError::CollectKeySpaceError(_) | CompactionError::Other(_)) => { + Err(CompactionError::Other(_)) => { // There are some cases where traditional gc might collect some layer // files causing gc-compaction cannot read the full history of the key. // This needs to be resolved in the long-term by improving the compaction @@ -1417,22 +1418,33 @@ impl Timeline { } // Suppress errors when cancelled. - Err(_) if self.cancel.is_cancelled() => {} + // + // Log other errors but continue. Failure to repartition is normal, if the timeline was just created + // as an empty timeline. Also in unit tests, when we use the timeline as a simple + // key-value store, ignoring the datadir layout. Log the error but continue. + // + // TODO: + // 1. shouldn't we return early here if we observe cancellation + // 2. Experiment: can we stop checking self.cancel here? + Err(_) if self.cancel.is_cancelled() => {} // TODO: try how we fare removing this branch Err(err) if err.is_cancel() => {} - - // Alert on critical errors that indicate data corruption. - Err(err) if err.is_critical() => { + Err(RepartitionError::CollectKeyspace( + e @ CollectKeySpaceError::Decode(_) + | e @ CollectKeySpaceError::PageRead( + PageReconstructError::MissingKey(_) | PageReconstructError::WalRedo(_), + ), + )) => { + // Alert on critical errors that indicate data corruption. critical_timeline!( self.tenant_shard_id, self.timeline_id, - "could not compact, repartitioning keyspace failed: {err:?}" + "could not compact, repartitioning keyspace failed: {e:?}" ); } - - // Log other errors. No partitioning? This is normal, if the timeline was just created - // as an empty timeline. Also in unit tests, when we use the timeline as a simple - // key-value store, ignoring the datadir layout. Log the error but continue. 
- Err(err) => error!("could not compact, repartitioning keyspace failed: {err:?}"), + Err(e) => error!( + "could not compact, repartitioning keyspace failed: {:?}", + e.into_anyhow() + ), }; let partition_count = self.partitioning.read().0.0.parts.len(); @@ -2518,7 +2530,10 @@ impl Timeline { return Err(CompactionError::ShuttingDown); } - let (dense_ks, _sparse_ks) = self.collect_keyspace(end_lsn, ctx).await?; + let (dense_ks, _sparse_ks) = self + .collect_keyspace(end_lsn, ctx) + .await + .map_err(CompactionError::from_collect_keyspace)?; // TODO(chi): ignore sparse_keyspace for now, compact it in the future. let mut adaptor = TimelineAdaptor::new(self, (end_lsn, dense_ks)); From 5ea0bb2d4fa1a9bccac73321512554e3a148427f Mon Sep 17 00:00:00 2001 From: Folke Behrens Date: Wed, 9 Jul 2025 11:58:46 +0200 Subject: [PATCH 29/65] proxy: Drop unused metrics (#12521) * proxy_control_plane_token_acquire_seconds * proxy_allowed_ips_cache_misses * proxy_vpc_endpoint_id_cache_stats * proxy_access_blocker_flags_cache_stats * proxy_requests_auth_rate_limits_total * proxy_endpoints_auth_rate_limits * proxy_invalid_endpoints_total --- proxy/src/metrics.rs | 26 +------------------------- 1 file changed, 1 insertion(+), 25 deletions(-) diff --git a/proxy/src/metrics.rs b/proxy/src/metrics.rs index 7a21e4ecee..9d1a3d4358 100644 --- a/proxy/src/metrics.rs +++ b/proxy/src/metrics.rs @@ -10,7 +10,7 @@ use measured::{ Counter, CounterVec, FixedCardinalityLabel, Gauge, Histogram, HistogramVec, LabelGroup, MetricGroup, }; -use metrics::{CounterPairAssoc, CounterPairVec, HyperLogLog, HyperLogLogVec}; +use metrics::{CounterPairAssoc, CounterPairVec, HyperLogLogVec}; use tokio::time::{self, Instant}; use crate::control_plane::messages::ColdStartInfo; @@ -36,7 +36,6 @@ impl Metrics { metrics.proxy.redis_errors_total.init_all_dense(); metrics.proxy.redis_events_count.init_all_dense(); metrics.proxy.retries_metric.init_all_dense(); - metrics.proxy.invalid_endpoints_total.init_all_dense(); 
metrics.proxy.connection_failures_total.init_all_dense(); SELF.set(metrics) @@ -80,11 +79,6 @@ pub struct ProxyMetrics { )] pub console_request_latency: HistogramVec, - /// Time it takes to acquire a token to call console plane. - // largest bucket = 3^16 * 0.05ms = 2.15s - #[metric(metadata = Thresholds::exponential_buckets(0.00005, 3.0))] - pub control_plane_token_acquire_seconds: Histogram<16>, - /// Size of the HTTP request body lengths. // smallest bucket = 16 bytes // largest bucket = 4^12 * 16 bytes = 256MB @@ -98,19 +92,10 @@ pub struct ProxyMetrics { /// Number of opened connections to a database. pub http_pool_opened_connections: Gauge, - /// Number of cache hits/misses for allowed ips. - pub allowed_ips_cache_misses: CounterVec>, - /// Number of allowed ips #[metric(metadata = Thresholds::with_buckets([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 10.0, 20.0, 50.0, 100.0]))] pub allowed_ips_number: Histogram<10>, - /// Number of cache hits/misses for VPC endpoint IDs. - pub vpc_endpoint_id_cache_stats: CounterVec>, - - /// Number of cache hits/misses for access blocker flags. - pub access_blocker_flags_cache_stats: CounterVec>, - /// Number of allowed VPC endpoints IDs #[metric(metadata = Thresholds::with_buckets([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 10.0, 20.0, 50.0, 100.0]))] pub allowed_vpc_endpoint_ids: Histogram<10>, @@ -139,21 +124,12 @@ pub struct ProxyMetrics { /// Number of TLS handshake failures pub tls_handshake_failures: Counter, - /// Number of connection requests affected by authentication rate limits - pub requests_auth_rate_limits_total: Counter, - /// HLL approximate cardinality of endpoints that are connecting pub connecting_endpoints: HyperLogLogVec, 32>, /// Number of endpoints affected by errors of a given classification pub endpoints_affected_by_errors: HyperLogLogVec, 32>, - /// Number of endpoints affected by authentication rate limits - pub endpoints_auth_rate_limits: HyperLogLog<32>, - - /// Number of invalid endpoints (per protocol, per rejected). 
- pub invalid_endpoints_total: CounterVec, - /// Number of retries (per outcome, per retry_type). #[metric(metadata = Thresholds::with_buckets([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]))] pub retries_metric: HistogramVec, From 39159955306d505f9bae5d3677192daba9d005be Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Wed, 9 Jul 2025 13:42:46 +0200 Subject: [PATCH 30/65] pageserver/client_grpc: add rich Pageserver gRPC client (#12462) ## Problem For the communicator, we need a rich Pageserver gRPC client. Touches #11735. Requires #12434. ## Summary of changes This patch adds an initial rich Pageserver gRPC client. It supports: * Sharded tenants across multiple Pageservers. * Pooling of connections, clients, and streams for efficient resource use. * Concurrent use by many callers. * Internal handling of GetPage bidirectional streams, with pipelining and error handling. * Automatic retries. * Observability. The client is still under development. In particular, it needs GetPage batch splitting, shard map updates, and performance optimization. This will be addressed in follow-up PRs. 
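The sharded client added by this patch routes each request by computing the owning shard from the page key (the diff makes `key_to_shard_number` public for this). A simplified, self-contained sketch of stripe-based routing follows; the real implementation additionally mixes relation fields into the stripe index via `hash_combine`, so the function below only illustrates the striping idea, and its names and constants are illustrative:

```rust
// Simplified stripe-based shard routing: consecutive stripes of
// `stripe_size` blocks are assigned to shards round-robin. The actual
// pageserver `key_to_shard_number` also hashes key fields together with
// the stripe index; this sketch omits that step.
fn shard_for_block(block_number: u32, shard_count: u8, stripe_size: u32) -> u8 {
    // Unsharded tenants always map to shard 0 (mirrors the fast path
    // for `ShardCount` < 2 in the real code).
    if shard_count < 2 {
        return 0;
    }
    let stripe = block_number / stripe_size;
    (stripe % shard_count as u32) as u8
}

fn main() {
    let stripe_size = 2048; // blocks per stripe (illustrative)
    // Blocks within one stripe land on the same shard...
    assert_eq!(
        shard_for_block(0, 4, stripe_size),
        shard_for_block(2047, 4, stripe_size)
    );
    // ...and the next stripe moves to the next shard.
    assert_eq!(shard_for_block(2048, 4, stripe_size), 1);
    // A single-shard tenant always routes to shard 0.
    assert_eq!(shard_for_block(123_456, 1, stripe_size), 0);
    println!("ok");
}
```

With a routing function like this in hand, the client's per-shard pools (`ChannelPool`/`ClientPool`/`StreamPool` in the diff) can be indexed directly by the computed shard number.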
--- Cargo.lock | 2 + libs/compute_api/src/spec.rs | 2 +- libs/pageserver_api/src/shard.rs | 6 +- libs/utils/src/shard.rs | 6 + pageserver/client_grpc/Cargo.toml | 2 + pageserver/client_grpc/src/client.rs | 303 +++++++++++++++++++++++++++ pageserver/client_grpc/src/lib.rs | 17 +- pageserver/client_grpc/src/retry.rs | 151 +++++++++++++ pageserver/page_api/src/model.rs | 15 ++ pageserver/src/page_service.rs | 2 + 10 files changed, 491 insertions(+), 15 deletions(-) create mode 100644 pageserver/client_grpc/src/client.rs create mode 100644 pageserver/client_grpc/src/retry.rs diff --git a/Cargo.lock b/Cargo.lock index 893932fb9d..558e8e2295 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4498,7 +4498,9 @@ name = "pageserver_client_grpc" version = "0.1.0" dependencies = [ "anyhow", + "compute_api", "futures", + "pageserver_api", "pageserver_page_api", "tokio", "tokio-stream", diff --git a/libs/compute_api/src/spec.rs b/libs/compute_api/src/spec.rs index 60311aa3e6..0eeab2bebc 100644 --- a/libs/compute_api/src/spec.rs +++ b/libs/compute_api/src/spec.rs @@ -442,7 +442,7 @@ pub struct JwksSettings { } /// Protocol used to connect to a Pageserver. Parsed from the connstring scheme. -#[derive(Clone, Copy, Debug, Default)] +#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)] pub enum PageserverProtocol { /// The original protocol based on libpq and COPY. Uses postgresql:// or postgres:// scheme. #[default] diff --git a/libs/pageserver_api/src/shard.rs b/libs/pageserver_api/src/shard.rs index 5a13aace64..d6f4cd5e66 100644 --- a/libs/pageserver_api/src/shard.rs +++ b/libs/pageserver_api/src/shard.rs @@ -332,7 +332,11 @@ fn hash_combine(mut a: u32, mut b: u32) -> u32 { /// /// The mapping of key to shard is not stable across changes to ShardCount: this is intentional /// and will be handled at higher levels when shards are split. 
-fn key_to_shard_number(count: ShardCount, stripe_size: ShardStripeSize, key: &Key) -> ShardNumber { +pub fn key_to_shard_number( + count: ShardCount, + stripe_size: ShardStripeSize, + key: &Key, +) -> ShardNumber { // Fast path for un-sharded tenants or broadcast keys if count < ShardCount(2) || key_is_shard0(key) { return ShardNumber(0); diff --git a/libs/utils/src/shard.rs b/libs/utils/src/shard.rs index f2b81373e2..5a0edf8cea 100644 --- a/libs/utils/src/shard.rs +++ b/libs/utils/src/shard.rs @@ -171,6 +171,12 @@ impl std::fmt::Display for ShardNumber { } } +impl std::fmt::Display for ShardCount { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + self.0.fmt(f) + } +} + impl std::fmt::Display for ShardSlug<'_> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!( diff --git a/pageserver/client_grpc/Cargo.toml b/pageserver/client_grpc/Cargo.toml index 5a3a2761c2..0a8bcad2ef 100644 --- a/pageserver/client_grpc/Cargo.toml +++ b/pageserver/client_grpc/Cargo.toml @@ -6,7 +6,9 @@ license.workspace = true [dependencies] anyhow.workspace = true +compute_api.workspace = true futures.workspace = true +pageserver_api.workspace = true pageserver_page_api.workspace = true tokio.workspace = true tokio-stream.workspace = true diff --git a/pageserver/client_grpc/src/client.rs b/pageserver/client_grpc/src/client.rs new file mode 100644 index 0000000000..5bccdeede3 --- /dev/null +++ b/pageserver/client_grpc/src/client.rs @@ -0,0 +1,303 @@ +use std::collections::HashMap; +use std::sync::Arc; + +use anyhow::anyhow; +use tracing::instrument; + +use crate::pool::{ChannelPool, ClientGuard, ClientPool, StreamGuard, StreamPool}; +use crate::retry::Retry; +use compute_api::spec::PageserverProtocol; +use pageserver_api::key::{Key, rel_block_to_key}; +use pageserver_api::shard::{ShardStripeSize, key_to_shard_number}; +use pageserver_page_api as page_api; +use utils::id::{TenantId, TimelineId}; +use utils::shard::{ShardCount, ShardIndex, 
ShardNumber}; + +/// A rich Pageserver gRPC client for a single tenant timeline. This client is more capable than the +/// basic `page_api::Client` gRPC client, and supports: +/// +/// * Sharded tenants across multiple Pageservers. +/// * Pooling of connections, clients, and streams for efficient resource use. +/// * Concurrent use by many callers. +/// * Internal handling of GetPage bidirectional streams, with pipelining and error handling. +/// * Automatic retries. +/// * Observability. +/// +/// TODO: this client does not support base backups or LSN leases, as these are only used by +/// compute_ctl. Consider adding this, but LSN leases need concurrent requests on all shards. +pub struct PageserverClient { + // TODO: support swapping out the shard map, e.g. via an ArcSwap. + shards: Shards, + retry: Retry, +} + +impl PageserverClient { + /// Creates a new Pageserver client for a given tenant and timeline. Uses the Pageservers given + /// in the shard map, which must be complete and must use gRPC URLs. + pub fn new( + tenant_id: TenantId, + timeline_id: TimelineId, + shard_map: HashMap<ShardIndex, String>, + stripe_size: ShardStripeSize, + auth_token: Option<String>, + ) -> anyhow::Result<Self> { + let shards = Shards::new(tenant_id, timeline_id, shard_map, stripe_size, auth_token)?; + Ok(Self { + shards, + retry: Retry, + }) + } + + /// Returns whether a relation exists. + #[instrument(skip_all, fields(rel=%req.rel, lsn=%req.read_lsn))] + pub async fn check_rel_exists( + &self, + req: page_api::CheckRelExistsRequest, + ) -> tonic::Result<page_api::CheckRelExistsResponse> { + self.retry + .with(async || { + // Relation metadata is only available on shard 0. + let mut client = self.shards.get_zero().client().await?; + client.check_rel_exists(req).await + }) + .await + } + + /// Returns the total size of a database, as # of bytes.
+ #[instrument(skip_all, fields(db_oid=%req.db_oid, lsn=%req.read_lsn))] + pub async fn get_db_size( + &self, + req: page_api::GetDbSizeRequest, + ) -> tonic::Result<page_api::GetDbSizeResponse> { + self.retry + .with(async || { + // Relation metadata is only available on shard 0. + let mut client = self.shards.get_zero().client().await?; + client.get_db_size(req).await + }) + .await + } + + /// Fetches a page. The `request_id` must be unique across all in-flight requests. + /// + /// Unlike the `page_api::Client`, this client automatically converts `status_code` into + /// `tonic::Status` errors. All responses will have `GetPageStatusCode::Ok`. + #[instrument(skip_all, fields( + req_id = %req.request_id, + rel = %req.rel, + blkno = %req.block_numbers[0], + blks = %req.block_numbers.len(), + lsn = %req.read_lsn, + ))] + pub async fn get_page( + &self, + req: page_api::GetPageRequest, + ) -> tonic::Result<page_api::GetPageResponse> { + // TODO: this needs to split batch requests across shards and reassemble responses into a + // single response. It must also re-split the batch in case the shard map changes. For now, + // just use the first page. + let key = rel_block_to_key( + req.rel, + req.block_numbers + .first() + .copied() + .ok_or_else(|| tonic::Status::invalid_argument("no block numbers provided"))?, + ); + + self.retry + .with(async || { + let stream = self.shards.get_for_key(key).stream().await; + let resp = stream.send(req.clone()).await?; + + if resp.status_code != page_api::GetPageStatusCode::Ok { + return Err(tonic::Status::new( + resp.status_code.into(), + resp.reason.unwrap_or_else(|| String::from("unknown error")), + )); + } + + Ok(resp) + }) + .await + } + + /// Returns the size of a relation, as # of blocks. + #[instrument(skip_all, fields(rel=%req.rel, lsn=%req.read_lsn))] + pub async fn get_rel_size( + &self, + req: page_api::GetRelSizeRequest, + ) -> tonic::Result<page_api::GetRelSizeResponse> { + self.retry + .with(async || { + // Relation metadata is only available on shard 0.
+ let mut client = self.shards.get_zero().client().await?; + client.get_rel_size(req).await + }) + .await + } + + /// Fetches an SLRU segment. + #[instrument(skip_all, fields(kind=%req.kind, segno=%req.segno, lsn=%req.read_lsn))] + pub async fn get_slru_segment( + &self, + req: page_api::GetSlruSegmentRequest, + ) -> tonic::Result<page_api::GetSlruSegmentResponse> { + self.retry + .with(async || { + // SLRU segments are only available on shard 0. + let mut client = self.shards.get_zero().client().await?; + client.get_slru_segment(req).await + }) + .await + } +} + +/// Tracks the tenant's shards. +struct Shards { + /// The shard count. + /// + /// NB: this is 0 for unsharded tenants, following `ShardIndex::unsharded()` convention. + count: ShardCount, + /// The stripe size. Only used for sharded tenants. + stripe_size: ShardStripeSize, + /// Shards by shard index. + /// + /// NB: unsharded tenants use count 0, like `ShardIndex::unsharded()`. + /// + /// INVARIANT: every shard 0..count is present. + /// INVARIANT: shard 0 is always present. + map: HashMap<ShardIndex, Shard>, +} + +impl Shards { + /// Creates a new set of shards based on a shard map. + fn new( + tenant_id: TenantId, + timeline_id: TimelineId, + shard_map: HashMap<ShardIndex, String>, + stripe_size: ShardStripeSize, + auth_token: Option<String>, + ) -> anyhow::Result<Self> { + let count = match shard_map.len() { + 0 => return Err(anyhow!("no shards provided")), + 1 => ShardCount::new(0), // NB: unsharded tenants use 0, like `ShardIndex::unsharded()` + n if n > u8::MAX as usize => return Err(anyhow!("too many shards: {n}")), + n => ShardCount::new(n as u8), + }; + + let mut map = HashMap::new(); + for (shard_id, url) in shard_map { + // The shard index must match the computed shard count, even for unsharded tenants. + if shard_id.shard_count != count { + return Err(anyhow!("invalid shard index {shard_id}, expected {count}")); + } + // The shard index's number and count must be consistent.
+ if !shard_id.is_unsharded() && shard_id.shard_number.0 >= shard_id.shard_count.0 { + return Err(anyhow!("invalid shard index {shard_id}")); + } + // The above conditions guarantee that we have all shards 0..count: len() matches count, + // shard number < count, and numbers are unique (via hashmap). + let shard = Shard::new(url, tenant_id, timeline_id, shard_id, auth_token.clone())?; + map.insert(shard_id, shard); + } + + Ok(Self { + count, + stripe_size, + map, + }) + } + + /// Looks up the given shard. + #[allow(clippy::result_large_err)] // TODO: check perf impact + fn get(&self, shard_id: ShardIndex) -> tonic::Result<&Shard> { + self.map + .get(&shard_id) + .ok_or_else(|| tonic::Status::not_found(format!("unknown shard {shard_id}"))) + } + + /// Looks up the shard that owns the given key. + fn get_for_key(&self, key: Key) -> &Shard { + let shard_number = key_to_shard_number(self.count, self.stripe_size, &key); + self.get(ShardIndex::new(shard_number, self.count)) + .expect("must exist") + } + + /// Returns shard 0. + fn get_zero(&self) -> &Shard { + self.get(ShardIndex::new(ShardNumber(0), self.count)) + .expect("always present") + } +} + +/// A single shard. +/// +/// TODO: consider separate pools for normal and bulk traffic, with different settings. +struct Shard { + /// Dedicated channel pool for this shard. Shared by all clients/streams in this shard. + _channel_pool: Arc<ChannelPool>, + /// Unary gRPC client pool for this shard. Uses the shared channel pool. + client_pool: Arc<ClientPool>, + /// GetPage stream pool for this shard. Uses a dedicated client pool, but shares the channel + /// pool with unary clients. + stream_pool: Arc<StreamPool>, +} + +impl Shard { + /// Creates a new shard. It has its own dedicated resource pools. + fn new( + url: String, + tenant_id: TenantId, + timeline_id: TimelineId, + shard_id: ShardIndex, + auth_token: Option<String>, + ) -> anyhow::Result<Self> { + // Sanity-check that the URL uses gRPC. + if PageserverProtocol::from_connstring(&url)?
!= PageserverProtocol::Grpc { + return Err(anyhow!("invalid shard URL {url}: must use gRPC")); + } + + // Use a common channel pool for all clients, to multiplex unary and stream requests across + // the same TCP connections. The channel pool is unbounded (but client pools are bounded). + let channel_pool = ChannelPool::new(url)?; + + // Dedicated client pool for unary requests. + let client_pool = ClientPool::new( + channel_pool.clone(), + tenant_id, + timeline_id, + shard_id, + auth_token.clone(), + ); + + // Stream pool with dedicated client pool. If this shared a client pool with unary requests, + // long-lived streams could fill up the client pool and starve out unary requests. It shares + // the same underlying channel pool with unary clients though, which is unbounded. + let stream_pool = StreamPool::new(ClientPool::new( + channel_pool.clone(), + tenant_id, + timeline_id, + shard_id, + auth_token, + )); + + Ok(Self { + _channel_pool: channel_pool, + client_pool, + stream_pool, + }) + } + + /// Returns a pooled client for this shard. + async fn client(&self) -> tonic::Result<ClientGuard> { + self.client_pool + .get() + .await + .map_err(|err| tonic::Status::internal(format!("failed to get client: {err}"))) + } + + /// Returns a pooled stream for this shard. + async fn stream(&self) -> StreamGuard { + self.stream_pool.get().await + } +} diff --git a/pageserver/client_grpc/src/lib.rs b/pageserver/client_grpc/src/lib.rs index c900e1a939..2a59f9868c 100644 --- a/pageserver/client_grpc/src/lib.rs +++ b/pageserver/client_grpc/src/lib.rs @@ -1,14 +1,5 @@ -//! A rich Pageserver gRPC client. This client is more capable than the basic `page_api::Client` -//! gRPC client, and supports: -//! -//! * Sharded tenants across multiple Pageservers. -//! * Pooling of connections, clients, and streams for efficient resource use. -//! * Concurrent use by many callers. -//! * Internal handling of GetPage bidirectional streams. -//! * Automatic retries. -//! * Observability. -//! -//!
The client is under development, this package is just a shell. - -#[allow(unused)] +mod client; mod pool; +mod retry; + +pub use client::PageserverClient; diff --git a/pageserver/client_grpc/src/retry.rs b/pageserver/client_grpc/src/retry.rs new file mode 100644 index 0000000000..b0473204d7 --- /dev/null +++ b/pageserver/client_grpc/src/retry.rs @@ -0,0 +1,151 @@ +use std::time::Duration; + +use tokio::time::Instant; +use tracing::{error, info, warn}; + +use utils::backoff::exponential_backoff_duration; + +/// A retry handler for Pageserver gRPC requests. +/// +/// This is used instead of backoff::retry for better control and observability. +pub struct Retry; + +impl Retry { + /// The per-request timeout. + // TODO: tune these, and/or make them configurable. Should we retry forever? + const REQUEST_TIMEOUT: Duration = Duration::from_secs(10); + /// The total timeout across all attempts. + const TOTAL_TIMEOUT: Duration = Duration::from_secs(60); + /// The initial backoff duration. + const BASE_BACKOFF: Duration = Duration::from_millis(10); + /// The maximum backoff duration. + const MAX_BACKOFF: Duration = Duration::from_secs(10); + /// If true, log successful requests. For debugging. + const LOG_SUCCESS: bool = false; + + /// Runs the given async closure with timeouts and retries (exponential backoff). Logs errors, + /// using the current tracing span for context. + /// + /// Only certain gRPC status codes are retried, see [`Self::should_retry`]. For default + /// timeouts, see [`Self::REQUEST_TIMEOUT`] and [`Self::TOTAL_TIMEOUT`]. + pub async fn with<F, O, R>(&self, mut f: F) -> tonic::Result<R> + where + F: FnMut() -> O, + O: Future<Output = tonic::Result<R>>, + { + let started = Instant::now(); + let deadline = started + Self::TOTAL_TIMEOUT; + let mut last_error = None; + let mut retries = 0; + loop { + // Set up a future to wait for the backoff (if any) and run the request with a timeout. + let backoff_and_try = async { + // NB: sleep() always sleeps 1ms, even when given a 0 argument.
See: + // https://github.com/tokio-rs/tokio/issues/6866 + if let Some(backoff) = Self::backoff_duration(retries) { + tokio::time::sleep(backoff).await; + } + + let request_started = Instant::now(); + tokio::time::timeout(Self::REQUEST_TIMEOUT, f()) + .await + .map_err(|_| { + tonic::Status::deadline_exceeded(format!( + "request timed out after {:.3}s", + request_started.elapsed().as_secs_f64() + )) + })? + }; + + // Wait for the backoff and request, or bail out if the total timeout is exceeded. + let result = tokio::select! { + result = backoff_and_try => result, + + _ = tokio::time::sleep_until(deadline) => { + let last_error = last_error.unwrap_or_else(|| { + tonic::Status::deadline_exceeded(format!( + "request timed out after {:.3}s", + started.elapsed().as_secs_f64() + )) + }); + error!( + "giving up after {:.3}s and {retries} retries, last error {:?}: {}", + started.elapsed().as_secs_f64(), last_error.code(), last_error.message(), + ); + return Err(last_error); + } + }; + + match result { + // Success, return the result. + Ok(result) => { + if retries > 0 || Self::LOG_SUCCESS { + info!( + "request succeeded after {retries} retries in {:.3}s", + started.elapsed().as_secs_f64(), + ); + } + + return Ok(result); + } + + // Error, retry or bail out. + Err(status) => { + let (code, message) = (status.code(), status.message()); + let attempt = retries + 1; + + if !Self::should_retry(code) { + // NB: include the attempt here too. This isn't necessarily the first + // attempt, because the error may change between attempts. + error!( + "request failed with {code:?}: {message}, not retrying (attempt {attempt})" + ); + return Err(status); + } + + warn!("request failed with {code:?}: {message}, retrying (attempt {attempt})"); + + retries += 1; + last_error = Some(status); + } + } + } + } + + /// Returns the backoff duration for the given retry attempt, or None for no backoff. 
+ fn backoff_duration(retry: usize) -> Option<Duration> { + let backoff = exponential_backoff_duration( + retry as u32, + Self::BASE_BACKOFF.as_secs_f64(), + Self::MAX_BACKOFF.as_secs_f64(), + ); + (!backoff.is_zero()).then_some(backoff) + } + + /// Returns true if the given status code should be retried. + fn should_retry(code: tonic::Code) -> bool { + match code { + tonic::Code::Ok => panic!("unexpected Ok status code"), + + // These codes are transient, so retry them. + tonic::Code::Aborted => true, + tonic::Code::Cancelled => true, + tonic::Code::DeadlineExceeded => true, // maybe transient slowness + tonic::Code::Internal => true, // maybe transient failure? + tonic::Code::ResourceExhausted => true, + tonic::Code::Unavailable => true, + + // The following codes will likely continue to fail, so don't retry. + tonic::Code::AlreadyExists => false, + tonic::Code::DataLoss => false, + tonic::Code::FailedPrecondition => false, + tonic::Code::InvalidArgument => false, + tonic::Code::NotFound => false, + tonic::Code::OutOfRange => false, + tonic::Code::PermissionDenied => false, + tonic::Code::Unauthenticated => false, + tonic::Code::Unimplemented => false, + tonic::Code::Unknown => false, + } + } +} diff --git a/pageserver/page_api/src/model.rs b/pageserver/page_api/src/model.rs index 4497fc6fc7..c5b6f06879 100644 --- a/pageserver/page_api/src/model.rs +++ b/pageserver/page_api/src/model.rs @@ -602,6 +602,21 @@ impl TryFrom for GetPageStatusCode { } } +impl From<GetPageStatusCode> for tonic::Code { + fn from(status_code: GetPageStatusCode) -> Self { + use tonic::Code; + + match status_code { + GetPageStatusCode::Unknown => Code::Unknown, + GetPageStatusCode::Ok => Code::Ok, + GetPageStatusCode::NotFound => Code::NotFound, + GetPageStatusCode::InvalidRequest => Code::InvalidArgument, + GetPageStatusCode::InternalError => Code::Internal, + GetPageStatusCode::SlowDown => Code::ResourceExhausted, + } + } +} + // Fetches the size of a relation at a given LSN, as # of blocks.
Only valid on shard 0, other // shards will error. #[derive(Clone, Copy, Debug)] diff --git a/pageserver/src/page_service.rs b/pageserver/src/page_service.rs index 6b614deac8..70fdb2e789 100644 --- a/pageserver/src/page_service.rs +++ b/pageserver/src/page_service.rs @@ -3353,6 +3353,8 @@ impl GrpcPageServiceHandler { /// NB: errors returned from here are intercepted in get_pages(), and may be converted to a /// GetPageResponse with an appropriate status code to avoid terminating the stream. /// + /// TODO: verify that the requested pages belong to this shard. + /// /// TODO: get_vectored() currently enforces a batch limit of 32. Postgres will typically send /// batches up to effective_io_concurrency = 100. Either we have to accept large batches, or /// split them up in the client or server. From 7049003cf790768eddf38a2e004d046e282adfb2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Arpad=20M=C3=BCller?= Date: Wed, 9 Jul 2025 14:02:44 +0200 Subject: [PATCH 31/65] storcon: print viability of --timelines-onto-safekeepers (#12485) The `--timelines-onto-safekeepers` flag is very consequential in the sense that it controls every single timeline creation. However, we don't have any automatic insight into whether enabling the option will break things or not. The main way things can break is via misconfigured safekeepers, say because they are marked as paused in the storcon db. The best insight we can currently obtain is by manually connecting via storcon_cli and listing safekeepers, but this is cumbersome and manual, and thus prone to human error. So at storcon startup, do a simulated "test creation" in which we call `timelines_onto_safekeepers` with the configuration provided to us, and print whether it was successful or not. No actual timeline is created, and nothing is written into the storcon db. The heartbeat info will not have reached us at that point yet, but that's okay, because we still fall back to safekeepers that don't have any heartbeat.
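The startup check sketched in this commit message boils down to: count the safekeepers whose scheduling policy makes them eligible and compare against the required replication factor. Below is a minimal illustrative sketch of that idea; `SkPolicy`, `check_viability`, and `required` are hypothetical names, not storcon's actual API, and a missing heartbeat is deliberately not treated as disqualifying.

```rust
// Scheduling policies, mirroring the states counted in this patch
// (the spelling of `Decomissioned` follows the source identifier).
enum SkPolicy {
    Active,
    Activating,
    Pause,
    Decomissioned,
}

/// Report whether a new timeline could be placed: Ok(eligible count) or an
/// error explaining why placement would fail. Nothing is created or written.
fn check_viability(policies: &[SkPolicy], required: usize) -> Result<usize, String> {
    // Paused/decommissioned safekeepers are never eligible. Absence of a
    // heartbeat is NOT checked here: at startup none has arrived yet.
    let eligible = policies
        .iter()
        .filter(|p| matches!(p, SkPolicy::Active | SkPolicy::Activating))
        .count();
    if eligible >= required {
        Ok(eligible)
    } else {
        Err(format!(
            "only {eligible} schedulable safekeepers, need {required}"
        ))
    }
}

fn main() {
    use SkPolicy::*;
    assert_eq!(check_viability(&[Active, Active, Activating, Pause], 3), Ok(3));
    assert!(check_viability(&[Active, Pause, Decomissioned], 3).is_err());
    println!("viability ok");
}
```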
Also print some general scheduling policy stats on initial safekeeper load. Part of #11670. --- libs/pageserver_api/src/models.rs | 2 +- storage_controller/src/service.rs | 24 +++++++++++++++++++++++- 2 files changed, 24 insertions(+), 2 deletions(-) diff --git a/libs/pageserver_api/src/models.rs b/libs/pageserver_api/src/models.rs index 16545364c1..6735320484 100644 --- a/libs/pageserver_api/src/models.rs +++ b/libs/pageserver_api/src/models.rs @@ -384,7 +384,7 @@ pub struct SafekeepersInfo { pub safekeepers: Vec<SafekeeperInfo>, } -#[derive(Serialize, Deserialize, Clone)] +#[derive(Serialize, Deserialize, Clone, Debug)] pub struct SafekeeperInfo { pub id: NodeId, pub hostname: String, diff --git a/storage_controller/src/service.rs b/storage_controller/src/service.rs index 403ae15b59..ed6643d641 100644 --- a/storage_controller/src/service.rs +++ b/storage_controller/src/service.rs @@ -1677,7 +1677,21 @@ impl Service { .collect::>>()?; let safekeepers: HashMap<NodeId, Safekeeper> = safekeepers.into_iter().map(|n| (n.get_id(), n)).collect(); - tracing::info!("Loaded {} safekeepers from database.", safekeepers.len()); + let count_policy = |policy| { + safekeepers + .iter() + .filter(|sk| sk.1.scheduling_policy() == policy) + .count() + }; + let active_sk_count = count_policy(SkSchedulingPolicy::Active); + let activating_sk_count = count_policy(SkSchedulingPolicy::Activating); + let pause_sk_count = count_policy(SkSchedulingPolicy::Pause); + let decom_sk_count = count_policy(SkSchedulingPolicy::Decomissioned); + tracing::info!( + "Loaded {} safekeepers from database.
Active {active_sk_count}, activating {activating_sk_count}, \ + paused {pause_sk_count}, decomissioned {decom_sk_count}.", + safekeepers.len() + ); metrics::METRICS_REGISTRY .metrics_group .storage_controller_safekeeper_nodes @@ -1969,6 +1983,14 @@ impl Service { } }); + // Check that there are enough safekeepers configured so that we can create new timelines + let test_sk_res = this.safekeepers_for_new_timeline().await; + tracing::info!( + timeline_safekeeper_count = config.timeline_safekeeper_count, + timelines_onto_safekeepers = config.timelines_onto_safekeepers, + "viability test result (test timeline creation on safekeepers): {test_sk_res:?}", + ); + + Ok(this) } From 4ee0da0a2056d31d6d2194d862354e43193cbe23 Mon Sep 17 00:00:00 2001 From: Konstantin Knizhnik Date: Wed, 9 Jul 2025 15:49:21 +0300 Subject: [PATCH 32/65] Check prefetch response before assignment to slot (#12371) ## Problem See [Slack Channel](https://databricks.enterprise.slack.com/archives/C091LHU6NNB) Dropping the connection without resetting prefetch state can cause a request/response mismatch. And the lack of response-correctness checks in communicator_prefetch_lookupv can cause data corruption. ## Summary of changes 1. Validate response before assignment to prefetch slot. 2. Consume prefetch requests before sending any other requests.
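The core of change 1 is "validate the response against its slot before storing it". The following is a minimal sketch of that pattern with simplified stand-in types; `ReqKey`, `Slot`, and `accept_response` are hypothetical names. The actual C code compares reqid, both LSNs, the relation, fork, and block number, and PANICs on a mismatch (since the connection state is then corrupt) rather than returning an error.

```rust
/// Identifying fields a response must echo back from its request.
#[derive(Clone, Copy, PartialEq, Debug)]
struct ReqKey {
    reqid: u64,
    lsn: u64,
    blkno: u32,
}

/// A prefetch slot: the request key plus the page data, once received.
struct Slot {
    key: ReqKey,
    response: Option<Vec<u8>>,
}

/// Store `page` in `slot` only if the response echoes the request key.
/// A mismatch (e.g. a stale response after a dropped connection) is rejected,
/// so bad data can never be handed to the caller.
fn accept_response(slot: &mut Slot, echoed: ReqKey, page: Vec<u8>) -> Result<(), String> {
    if echoed != slot.key {
        return Err(format!(
            "response {echoed:?} does not match request {:?}",
            slot.key
        ));
    }
    slot.response = Some(page);
    Ok(())
}

fn main() {
    let key = ReqKey { reqid: 1, lsn: 0x10, blkno: 7 };
    let mut slot = Slot { key, response: None };
    assert!(accept_response(&mut slot, key, vec![0u8; 8]).is_ok());

    // A stale response (e.g. left over from a dropped connection) is rejected.
    let stale = ReqKey { reqid: 0, ..key };
    let mut slot2 = Slot { key, response: None };
    assert!(accept_response(&mut slot2, stale, vec![]).is_err());
    println!("ok");
}
```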
--------- Co-authored-by: Kosntantin Knizhnik Co-authored-by: Konstantin Knizhnik --- pgxn/neon/communicator.c | 219 ++++++++++++++++++++++++--------------- 1 file changed, 135 insertions(+), 84 deletions(-) diff --git a/pgxn/neon/communicator.c b/pgxn/neon/communicator.c index 7c84be7d15..bd53855eab 100644 --- a/pgxn/neon/communicator.c +++ b/pgxn/neon/communicator.c @@ -65,6 +65,7 @@ #include "port/pg_iovec.h" #include "postmaster/interrupt.h" #include "replication/walsender.h" +#include "storage/ipc.h" #include "utils/timeout.h" #include "bitmap.h" @@ -412,6 +413,47 @@ compact_prefetch_buffers(void) return false; } +/* + * Check that prefetch response matches the slot + */ +static void +check_getpage_response(PrefetchRequest* slot, NeonResponse* resp) +{ + if (resp->tag != T_NeonGetPageResponse && resp->tag != T_NeonErrorResponse) + { + neon_shard_log(slot->shard_no, PANIC, "Unexpected prefetch response %d, ring_receive=%ld, ring_flush=%ld, ring_unused=%ld", + resp->tag, MyPState->ring_receive, MyPState->ring_flush, MyPState->ring_unused); + } + if (neon_protocol_version >= 3) + { + NRelFileInfo rinfo = BufTagGetNRelFileInfo(slot->buftag); + if (resp->tag == T_NeonGetPageResponse) + { + NeonGetPageResponse * getpage_resp = (NeonGetPageResponse *)resp; + if (resp->reqid != slot->reqid || + resp->lsn != slot->request_lsns.request_lsn || + resp->not_modified_since != slot->request_lsns.not_modified_since || + !RelFileInfoEquals(getpage_resp->req.rinfo, rinfo) || + getpage_resp->req.forknum != slot->buftag.forkNum || + getpage_resp->req.blkno != slot->buftag.blockNum) + { + NEON_PANIC_CONNECTION_STATE(slot->shard_no, PANIC, + "Receive unexpected getpage response {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u, block=%u} to get page request {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u, block=%u}", + resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), RelFileInfoFmt(getpage_resp->req.rinfo), getpage_resp->req.forknum, 
getpage_resp->req.blkno, + slot->reqid, LSN_FORMAT_ARGS(slot->request_lsns.request_lsn), LSN_FORMAT_ARGS(slot->request_lsns.not_modified_since), RelFileInfoFmt(rinfo), slot->buftag.forkNum, slot->buftag.blockNum); + } + } + else if (resp->reqid != slot->reqid || + resp->lsn != slot->request_lsns.request_lsn || + resp->not_modified_since != slot->request_lsns.not_modified_since) + { + elog(WARNING, NEON_TAG "Error message {reqid=%lx,lsn=%X/%08X, since=%X/%08X} doesn't match exists request {reqid=%lx,lsn=%X/%08X, since=%X/%08X}", + resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), + slot->reqid, LSN_FORMAT_ARGS(slot->request_lsns.request_lsn), LSN_FORMAT_ARGS(slot->request_lsns.not_modified_since)); + } + } +} + /* * If there might be responses still in the TCP buffer, then we should try to * use those, to reduce any TCP backpressure on the OS/PS side. @@ -446,15 +488,18 @@ communicator_prefetch_pump_state(void) if (response == NULL) break; + check_getpage_response(slot, response); + /* The slot should still be valid */ if (slot->status != PRFS_REQUESTED || slot->response != NULL || slot->my_ring_index != MyPState->ring_receive) - neon_shard_log(slot->shard_no, ERROR, + { + neon_shard_log(slot->shard_no, PANIC, "Incorrect prefetch slot state after receive: status=%d response=%p my=%lu receive=%lu", slot->status, slot->response, (long) slot->my_ring_index, (long) MyPState->ring_receive); - + } /* update prefetch state */ MyPState->n_responses_buffered += 1; MyPState->n_requests_inflight -= 1; @@ -593,6 +638,21 @@ readahead_buffer_resize(int newsize, void *extra) } +/* + * Callback to be called on backend exit to ensure correct state of compute-PS communication + * in case of backend cancel + */ +static void +prefetch_on_exit(int code, Datum arg) +{ + if (code != 0) /* do disconnect only on abnormal backend termination */ + { + shardno_t shard_no = DatumGetInt32(arg); + prefetch_on_ps_disconnect(); + page_server->disconnect(shard_no); 
+ } +} + /* * Make sure that there are no responses still in the buffer. @@ -605,6 +665,11 @@ consume_prefetch_responses(void) { if (MyPState->ring_receive < MyPState->ring_unused) prefetch_wait_for(MyPState->ring_unused - 1); + /* + * We know for sure we're not working on any prefetch pages after + * this. + */ + END_PREFETCH_RECEIVE_WORK(); } static void @@ -722,10 +787,12 @@ prefetch_read(PrefetchRequest *slot) if (slot->status != PRFS_REQUESTED || slot->response != NULL || slot->my_ring_index != MyPState->ring_receive) - neon_shard_log(slot->shard_no, ERROR, + { + neon_shard_log(slot->shard_no, PANIC, "Incorrect prefetch read: status=%d response=%p my=%lu receive=%lu", slot->status, slot->response, (long)slot->my_ring_index, (long)MyPState->ring_receive); + } /* * Copy the request info so that if an error happens and the prefetch @@ -741,14 +808,18 @@ prefetch_read(PrefetchRequest *slot) MemoryContextSwitchTo(old); if (response) { + check_getpage_response(slot, response); + /* The slot should still be valid */ if (slot->status != PRFS_REQUESTED || slot->response != NULL || slot->my_ring_index != MyPState->ring_receive) - neon_shard_log(shard_no, ERROR, + { + neon_shard_log(shard_no, PANIC, "Incorrect prefetch slot state after receive: status=%d response=%p my=%lu receive=%lu", slot->status, slot->response, (long) slot->my_ring_index, (long) MyPState->ring_receive); + } /* update prefetch state */ MyPState->n_responses_buffered += 1; @@ -820,11 +891,10 @@ communicator_prefetch_receive(BufferTag tag) void prefetch_on_ps_disconnect(void) { - bool save_readpage_reentrant_guard = readpage_reentrant_guard; MyPState->ring_flush = MyPState->ring_unused; - /* Prohibit callig of prefetch_pump_state */ - START_PREFETCH_RECEIVE_WORK(); + /* Nothing should cancel disconnect: we should not leave connection in opaque state */ + HOLD_INTERRUPTS(); while (MyPState->ring_receive < MyPState->ring_unused) { @@ -854,9 +924,6 @@ prefetch_on_ps_disconnect(void) 
MyNeonCounters->getpage_prefetch_discards_total += 1; } - /* Restore guard */ - readpage_reentrant_guard = save_readpage_reentrant_guard; - /* * We can have gone into retry due to network error, so update stats with * the latest available @@ -865,6 +932,8 @@ prefetch_on_ps_disconnect(void) MyPState->n_requests_inflight; MyNeonCounters->getpage_prefetches_buffered = MyPState->n_responses_buffered; + + RESUME_INTERRUPTS(); } /* @@ -1027,16 +1096,11 @@ communicator_prefetch_lookupv(NRelFileInfo rinfo, ForkNumber forknum, BlockNumbe /* * Ignore errors */ - if (slot->response->tag != T_NeonGetPageResponse) + if (slot->response->tag == T_NeonErrorResponse) { - if (slot->response->tag != T_NeonErrorResponse) - { - NEON_PANIC_CONNECTION_STATE(slot->shard_no, PANIC, - "Expected GetPage (0x%02x) or Error (0x%02x) response to GetPageRequest, but got 0x%02x", - T_NeonGetPageResponse, T_NeonErrorResponse, slot->response->tag); - } continue; } + Assert(slot->response->tag == T_NeonGetPageResponse); /* checked by check_getpage_response when response was assigned to the slot */ memcpy(buffers[i], ((NeonGetPageResponse*)slot->response)->page, BLCKSZ); @@ -1351,7 +1415,7 @@ equal_requests(NeonRequest* a, NeonRequest* b) static NeonResponse * page_server_request(void const *req) { - NeonResponse *resp; + NeonResponse *resp = NULL; BufferTag tag = {0}; shardno_t shard_no; @@ -1371,7 +1435,7 @@ page_server_request(void const *req) tag.blockNum = ((NeonGetPageRequest *) req)->blkno; break; default: - neon_log(ERROR, "Unexpected request tag: %d", messageTag(req)); + neon_log(PANIC, "Unexpected request tag: %d", messageTag(req)); } shard_no = get_shard_number(&tag); @@ -1384,9 +1448,12 @@ page_server_request(void const *req) shard_no = 0; } - do + consume_prefetch_responses(); + + PG_TRY(); { - PG_TRY(); + before_shmem_exit(prefetch_on_exit, Int32GetDatum(shard_no)); + do { while (!page_server->send(shard_no, (NeonRequest *) req) || !page_server->flush(shard_no)) @@ -1394,30 +1461,24 @@ 
page_server_request(void const *req) /* do nothing */ } MyNeonCounters->pageserver_open_requests++; - consume_prefetch_responses(); resp = page_server->receive(shard_no); MyNeonCounters->pageserver_open_requests--; - } - PG_CATCH(); - { - /* - * Cancellation in this code needs to be handled better at some - * point, but this currently seems fine for now. - */ - page_server->disconnect(shard_no); - MyNeonCounters->pageserver_open_requests = 0; + } while (resp == NULL); + cancel_before_shmem_exit(prefetch_on_exit, Int32GetDatum(shard_no)); + } + PG_CATCH(); + { + cancel_before_shmem_exit(prefetch_on_exit, Int32GetDatum(shard_no)); + /* Nothing should cancel disconnect: we should not leave connection in opaque state */ + HOLD_INTERRUPTS(); + page_server->disconnect(shard_no); + MyNeonCounters->pageserver_open_requests = 0; + RESUME_INTERRUPTS(); - /* - * We know for sure we're not working on any prefetch pages after - * this. - */ - END_PREFETCH_RECEIVE_WORK(); + PG_RE_THROW(); + } + PG_END_TRY(); - PG_RE_THROW(); - } - PG_END_TRY(); - - } while (resp == NULL); return resp; } @@ -1502,7 +1563,7 @@ nm_pack_request(NeonRequest *msg) case T_NeonDbSizeResponse: case T_NeonGetSlruSegmentResponse: default: - neon_log(ERROR, "unexpected neon message tag 0x%02x", msg->tag); + neon_log(PANIC, "unexpected neon message tag 0x%02x", msg->tag); break; } return s; @@ -1654,7 +1715,7 @@ nm_unpack_response(StringInfo s) case T_NeonDbSizeRequest: case T_NeonGetSlruSegmentRequest: default: - neon_log(ERROR, "unexpected neon message tag 0x%02x", tag); + neon_log(PANIC, "unexpected neon message tag 0x%02x", tag); break; } @@ -1983,7 +2044,7 @@ communicator_exists(NRelFileInfo rinfo, ForkNumber forkNum, neon_request_lsns *r !RelFileInfoEquals(exists_resp->req.rinfo, request.rinfo) || exists_resp->req.forknum != request.forknum) { - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Unexpect response {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u} to 
exits request {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u}", resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), RelFileInfoFmt(exists_resp->req.rinfo), exists_resp->req.forknum, request.hdr.reqid, LSN_FORMAT_ARGS(request.hdr.lsn), LSN_FORMAT_ARGS(request.hdr.not_modified_since), RelFileInfoFmt(request.rinfo), request.forknum); @@ -2014,7 +2075,7 @@ communicator_exists(NRelFileInfo rinfo, ForkNumber forkNum, neon_request_lsns *r break; default: - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Expected Exists (0x%02x) or Error (0x%02x) response to ExistsRequest, but got 0x%02x", T_NeonExistsResponse, T_NeonErrorResponse, resp->tag); } @@ -2158,6 +2219,7 @@ Retry: Assert(memcmp(&hashkey.buftag, &slot->buftag, sizeof(BufferTag)) == 0); Assert(hashkey.buftag.blockNum == base_blockno + i); + /* We already checked that response match request when storing it in slot */ resp = slot->response; switch (resp->tag) @@ -2165,21 +2227,6 @@ Retry: case T_NeonGetPageResponse: { NeonGetPageResponse* getpage_resp = (NeonGetPageResponse *) resp; - if (neon_protocol_version >= 3) - { - if (resp->reqid != slot->reqid || - resp->lsn != slot->request_lsns.request_lsn || - resp->not_modified_since != slot->request_lsns.not_modified_since || - !RelFileInfoEquals(getpage_resp->req.rinfo, rinfo) || - getpage_resp->req.forknum != forkNum || - getpage_resp->req.blkno != base_blockno + i) - { - NEON_PANIC_CONNECTION_STATE(-1, PANIC, - "Unexpect response {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u, block=%u} to get page request {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u, block=%u}", - resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), RelFileInfoFmt(getpage_resp->req.rinfo), getpage_resp->req.forknum, getpage_resp->req.blkno, - slot->reqid, LSN_FORMAT_ARGS(slot->request_lsns.request_lsn), LSN_FORMAT_ARGS(slot->request_lsns.not_modified_since), RelFileInfoFmt(rinfo), 
forkNum, base_blockno + i); - } - } memcpy(buffer, getpage_resp->page, BLCKSZ); /* @@ -2192,17 +2239,6 @@ Retry: break; } case T_NeonErrorResponse: - if (neon_protocol_version >= 3) - { - if (resp->reqid != slot->reqid || - resp->lsn != slot->request_lsns.request_lsn || - resp->not_modified_since != slot->request_lsns.not_modified_since) - { - elog(WARNING, NEON_TAG "Error message {reqid=%lx,lsn=%X/%08X, since=%X/%08X} doesn't match get relsize request {reqid=%lx,lsn=%X/%08X, since=%X/%08X}", - resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), - slot->reqid, LSN_FORMAT_ARGS(slot->request_lsns.request_lsn), LSN_FORMAT_ARGS(slot->request_lsns.not_modified_since)); - } - } ereport(ERROR, (errcode(ERRCODE_IO_ERROR), errmsg(NEON_TAG "[shard %d, reqid %lx] could not read block %u in rel %u/%u/%u.%u from page server at lsn %X/%08X", @@ -2257,7 +2293,7 @@ communicator_nblocks(NRelFileInfo rinfo, ForkNumber forknum, neon_request_lsns * !RelFileInfoEquals(relsize_resp->req.rinfo, request.rinfo) || relsize_resp->req.forknum != forknum) { - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Unexpect response {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u} to get relsize request {reqid=%lx,lsn=%X/%08X, since=%X/%08X, rel=%u/%u/%u.%u}", resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), RelFileInfoFmt(relsize_resp->req.rinfo), relsize_resp->req.forknum, request.hdr.reqid, LSN_FORMAT_ARGS(request.hdr.lsn), LSN_FORMAT_ARGS(request.hdr.not_modified_since), RelFileInfoFmt(request.rinfo), forknum); @@ -2288,7 +2324,7 @@ communicator_nblocks(NRelFileInfo rinfo, ForkNumber forknum, neon_request_lsns * break; default: - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Expected Nblocks (0x%02x) or Error (0x%02x) response to NblocksRequest, but got 0x%02x", T_NeonNblocksResponse, T_NeonErrorResponse, resp->tag); } @@ -2327,7 +2363,7 @@ 
communicator_dbsize(Oid dbNode, neon_request_lsns *request_lsns) if (!equal_requests(resp, &request.hdr) || dbsize_resp->req.dbNode != dbNode) { - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Unexpect response {reqid=%lx,lsn=%X/%08X, since=%X/%08X, dbNode=%u} to get DB size request {reqid=%lx,lsn=%X/%08X, since=%X/%08X, dbNode=%u}", resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), dbsize_resp->req.dbNode, request.hdr.reqid, LSN_FORMAT_ARGS(request.hdr.lsn), LSN_FORMAT_ARGS(request.hdr.not_modified_since), dbNode); @@ -2356,7 +2392,7 @@ communicator_dbsize(Oid dbNode, neon_request_lsns *request_lsns) break; default: - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Expected DbSize (0x%02x) or Error (0x%02x) response to DbSizeRequest, but got 0x%02x", T_NeonDbSizeResponse, T_NeonErrorResponse, resp->tag); } @@ -2372,7 +2408,7 @@ communicator_read_slru_segment(SlruKind kind, int64 segno, neon_request_lsns *re { int n_blocks; shardno_t shard_no = 0; /* All SLRUs are at shard 0 */ - NeonResponse *resp; + NeonResponse *resp = NULL; NeonGetSlruSegmentRequest request; request = (NeonGetSlruSegmentRequest) { @@ -2383,14 +2419,29 @@ communicator_read_slru_segment(SlruKind kind, int64 segno, neon_request_lsns *re .segno = segno }; - do + consume_prefetch_responses(); + + PG_TRY(); { - while (!page_server->send(shard_no, &request.hdr) || !page_server->flush(shard_no)); + before_shmem_exit(prefetch_on_exit, Int32GetDatum(shard_no)); + do + { + while (!page_server->send(shard_no, &request.hdr) || !page_server->flush(shard_no)); + resp = page_server->receive(shard_no); + } while (resp == NULL); + cancel_before_shmem_exit(prefetch_on_exit, Int32GetDatum(shard_no)); + } + PG_CATCH(); + { + cancel_before_shmem_exit(prefetch_on_exit, Int32GetDatum(shard_no)); + /* Nothing should cancel disconnect: we should not leave connection in opaque state */ + HOLD_INTERRUPTS(); + 
page_server->disconnect(shard_no); + RESUME_INTERRUPTS(); - consume_prefetch_responses(); - - resp = page_server->receive(shard_no); - } while (resp == NULL); + PG_RE_THROW(); + } + PG_END_TRY(); switch (resp->tag) { @@ -2403,7 +2454,7 @@ communicator_read_slru_segment(SlruKind kind, int64 segno, neon_request_lsns *re slru_resp->req.kind != kind || slru_resp->req.segno != segno) { - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Unexpect response {reqid=%lx,lsn=%X/%08X, since=%X/%08X, kind=%u, segno=%u} to get SLRU segment request {reqid=%lx,lsn=%X/%08X, since=%X/%08X, kind=%u, segno=%lluu}", resp->reqid, LSN_FORMAT_ARGS(resp->lsn), LSN_FORMAT_ARGS(resp->not_modified_since), slru_resp->req.kind, slru_resp->req.segno, request.hdr.reqid, LSN_FORMAT_ARGS(request.hdr.lsn), LSN_FORMAT_ARGS(request.hdr.not_modified_since), kind, (unsigned long long) segno); @@ -2435,7 +2486,7 @@ communicator_read_slru_segment(SlruKind kind, int64 segno, neon_request_lsns *re break; default: - NEON_PANIC_CONNECTION_STATE(-1, PANIC, + NEON_PANIC_CONNECTION_STATE(0, PANIC, "Expected GetSlruSegment (0x%02x) or Error (0x%02x) response to GetSlruSegmentRequest, but got 0x%02x", T_NeonGetSlruSegmentResponse, T_NeonErrorResponse, resp->tag); } From e7d18bc1884ae7f5448fff99bd02f6d9390d5566 Mon Sep 17 00:00:00 2001 From: Mikhail Date: Wed, 9 Jul 2025 13:55:10 +0100 Subject: [PATCH 33/65] Replica promotion in compute_ctl (#12183) Add `/promote` method for `compute_ctl` promoting secondary replica to primary, depends on secondary being prewarmed. 
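At its core the promotion flow is a catch-up loop: poll `pg_last_wal_replay_lsn()` on the replica until it reaches the primary's last WAL flush LSN, then run `pg_promote()`. A minimal Python sketch of that wait loop, with the Postgres queries stubbed out (names such as `get_replay_lsn` are illustrative, not part of the `compute_ctl` API):

```python
def wait_for_catchup(get_replay_lsn, primary_flush_lsn, retries=20):
    """Poll the replica's replay LSN until it catches up with the primary.

    get_replay_lsn: callable returning the replica's current replay LSN as an
    int (stands in for `SELECT pg_last_wal_replay_lsn()`).
    Returns True once replay LSN >= primary flush LSN, False if retries run out.
    """
    for _ in range(retries + 1):
        if get_replay_lsn() >= primary_flush_lsn:
            return True
        # the real loop sleeps 1s between polls; omitted in this sketch
    return False

# Stubbed replica that advances its replay LSN on each poll.
lsns = iter([100, 150, 200])
assert wait_for_catchup(lambda: next(lsns), 200)
# A replica stuck behind the primary never catches up.
assert not wait_for_catchup(lambda: 100, 200, retries=3)
```

Only after this loop succeeds does the implementation set `neon.safekeepers`, reload the config, and call `pg_promote()`; if the replica never catches up within the retry budget, promotion fails.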
Add `compute-ctl` mode to `test_replica_promotes`, testing happy path only (no corner cases yet) Add openapi spec for `/promote` and `/lfc` handlers https://github.com/neondatabase/cloud/issues/19011 Resolves: https://github.com/neondatabase/cloud/issues/29807 --- Cargo.lock | 1 + compute_tools/Cargo.toml | 2 +- compute_tools/src/compute.rs | 7 +- compute_tools/src/compute_promote.rs | 132 +++++++++++++++ compute_tools/src/http/openapi_spec.yaml | 145 +++++++++++++++++ compute_tools/src/http/routes/mod.rs | 1 + compute_tools/src/http/routes/promote.rs | 14 ++ compute_tools/src/http/server.rs | 3 +- compute_tools/src/lib.rs | 1 + libs/compute_api/src/responses.rs | 30 +++- test_runner/fixtures/endpoint/http.py | 15 +- test_runner/regress/test_replica_promotes.py | 162 ++++++++++++------- 12 files changed, 448 insertions(+), 65 deletions(-) create mode 100644 compute_tools/src/compute_promote.rs create mode 100644 compute_tools/src/http/routes/promote.rs diff --git a/Cargo.lock b/Cargo.lock index 558e8e2295..c49a2daba7 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -1348,6 +1348,7 @@ dependencies = [ "p256 0.13.2", "pageserver_page_api", "postgres", + "postgres-types", "postgres_initdb", "postgres_versioninfo", "regex", diff --git a/compute_tools/Cargo.toml b/compute_tools/Cargo.toml index 1a03022d89..910bae3bda 100644 --- a/compute_tools/Cargo.toml +++ b/compute_tools/Cargo.toml @@ -66,7 +66,7 @@ url.workspace = true uuid.workspace = true walkdir.workspace = true x509-cert.workspace = true - +postgres-types.workspace = true postgres_versioninfo.workspace = true postgres_initdb.workspace = true compute_api.workspace = true diff --git a/compute_tools/src/compute.rs b/compute_tools/src/compute.rs index ec6e6c1634..0496d38e67 100644 --- a/compute_tools/src/compute.rs +++ b/compute_tools/src/compute.rs @@ -3,7 +3,7 @@ use chrono::{DateTime, Utc}; use compute_api::privilege::Privilege; use compute_api::responses::{ ComputeConfig, ComputeCtlConfig, ComputeMetrics, 
ComputeStatus, LfcOffloadState, - LfcPrewarmState, TlsConfig, + LfcPrewarmState, PromoteState, TlsConfig, }; use compute_api::spec::{ ComputeAudit, ComputeFeature, ComputeMode, ComputeSpec, ExtVersion, PageserverProtocol, PgIdent, @@ -29,8 +29,7 @@ use std::sync::atomic::{AtomicU32, AtomicU64, Ordering}; use std::sync::{Arc, Condvar, Mutex, RwLock}; use std::time::{Duration, Instant}; use std::{env, fs}; -use tokio::task::JoinHandle; -use tokio::{spawn, time}; +use tokio::{spawn, sync::watch, task::JoinHandle, time}; use tracing::{Instrument, debug, error, info, instrument, warn}; use url::Url; use utils::id::{TenantId, TimelineId}; @@ -175,6 +174,7 @@ pub struct ComputeState { /// WAL flush LSN that is set after terminating Postgres and syncing safekeepers if /// mode == ComputeMode::Primary. None otherwise pub terminate_flush_lsn: Option, + pub promote_state: Option>, pub metrics: ComputeMetrics, } @@ -192,6 +192,7 @@ impl ComputeState { lfc_prewarm_state: LfcPrewarmState::default(), lfc_offload_state: LfcOffloadState::default(), terminate_flush_lsn: None, + promote_state: None, } } diff --git a/compute_tools/src/compute_promote.rs b/compute_tools/src/compute_promote.rs new file mode 100644 index 0000000000..42256faa22 --- /dev/null +++ b/compute_tools/src/compute_promote.rs @@ -0,0 +1,132 @@ +use crate::compute::ComputeNode; +use anyhow::{Context, Result, bail}; +use compute_api::{ + responses::{LfcPrewarmState, PromoteState, SafekeepersLsn}, + spec::ComputeMode, +}; +use std::{sync::Arc, time::Duration}; +use tokio::time::sleep; +use utils::lsn::Lsn; + +impl ComputeNode { + /// Returns only when promote fails or succeeds. If a network error occurs + /// and http client disconnects, this does not stop promotion, and subsequent + /// calls block until promote finishes. 
+ /// Called by control plane on secondary after primary endpoint is terminated + pub async fn promote(self: &Arc, safekeepers_lsn: SafekeepersLsn) -> PromoteState { + let cloned = self.clone(); + let start_promotion = || { + let (tx, rx) = tokio::sync::watch::channel(PromoteState::NotPromoted); + tokio::spawn(async move { + tx.send(match cloned.promote_impl(safekeepers_lsn).await { + Ok(_) => PromoteState::Completed, + Err(err) => { + tracing::error!(%err, "promoting"); + PromoteState::Failed { + error: err.to_string(), + } + } + }) + }); + rx + }; + + let mut task; + // self.state is unlocked after block ends so we lock it in promote_impl + // and task.changed() is reached + { + task = self + .state + .lock() + .unwrap() + .promote_state + .get_or_insert_with(start_promotion) + .clone() + } + task.changed().await.expect("promote sender dropped"); + task.borrow().clone() + } + + // Why do we have to supply safekeepers? + // For secondary we use primary_connection_conninfo so safekeepers field is empty + async fn promote_impl(&self, safekeepers_lsn: SafekeepersLsn) -> Result<()> { + { + let state = self.state.lock().unwrap(); + let mode = &state.pspec.as_ref().unwrap().spec.mode; + if *mode != ComputeMode::Replica { + bail!("{} is not replica", mode.to_type_str()); + } + + // we don't need to query Postgres so not self.lfc_prewarm_state() + match &state.lfc_prewarm_state { + LfcPrewarmState::NotPrewarmed | LfcPrewarmState::Prewarming => { + bail!("prewarm not requested or pending") + } + LfcPrewarmState::Failed { error } => { + tracing::warn!(%error, "replica prewarm failed") + } + _ => {} + } + } + + let client = ComputeNode::get_maintenance_client(&self.tokio_conn_conf) + .await + .context("connecting to postgres")?; + + let primary_lsn = safekeepers_lsn.wal_flush_lsn; + let mut last_wal_replay_lsn: Lsn = Lsn::INVALID; + const RETRIES: i32 = 20; + for i in 0..=RETRIES { + let row = client + .query_one("SELECT pg_last_wal_replay_lsn()", &[]) + .await + 
.context("getting last replay lsn")?; + let lsn: u64 = row.get::(0).into(); + last_wal_replay_lsn = lsn.into(); + if last_wal_replay_lsn >= primary_lsn { + break; + } + tracing::info!("Try {i}, replica lsn {last_wal_replay_lsn}, primary lsn {primary_lsn}"); + sleep(Duration::from_secs(1)).await; + } + if last_wal_replay_lsn < primary_lsn { + bail!("didn't catch up with primary in {RETRIES} retries"); + } + + // using $1 doesn't work with ALTER SYSTEM SET + let safekeepers_sql = format!( + "ALTER SYSTEM SET neon.safekeepers='{}'", + safekeepers_lsn.safekeepers + ); + client + .query(&safekeepers_sql, &[]) + .await + .context("setting safekeepers")?; + client + .query("SELECT pg_reload_conf()", &[]) + .await + .context("reloading postgres config")?; + let row = client + .query_one("SELECT * FROM pg_promote()", &[]) + .await + .context("pg_promote")?; + if !row.get::(0) { + bail!("pg_promote() returned false"); + } + + let client = ComputeNode::get_maintenance_client(&self.tokio_conn_conf) + .await + .context("connecting to postgres")?; + let row = client + .query_one("SHOW transaction_read_only", &[]) + .await + .context("getting transaction_read_only")?; + if row.get::(0) == "on" { + bail!("replica in read only mode after promotion"); + } + + let mut state = self.state.lock().unwrap(); + state.pspec.as_mut().unwrap().spec.mode = ComputeMode::Primary; + Ok(()) + } +} diff --git a/compute_tools/src/http/openapi_spec.yaml b/compute_tools/src/http/openapi_spec.yaml index bbdb7d0917..eaf33d1f82 100644 --- a/compute_tools/src/http/openapi_spec.yaml +++ b/compute_tools/src/http/openapi_spec.yaml @@ -83,6 +83,87 @@ paths: schema: $ref: "#/components/schemas/DbsAndRoles" + /promote: + post: + tags: + - Promotion + summary: Promote secondary replica to primary + description: "" + operationId: promoteReplica + requestBody: + description: Promote requests data + required: true + content: + application/json: + schema: + $ref: "#/components/schemas/SafekeepersLsn" + responses: + 
200: + description: Promote succeeded or wasn't started + content: + application/json: + schema: + $ref: "#/components/schemas/PromoteState" + 500: + description: Promote failed + content: + application/json: + schema: + $ref: "#/components/schemas/PromoteState" + + /lfc/prewarm: + post: + summary: Request LFC Prewarm + parameters: + - name: from_endpoint + in: query + schema: + type: string + description: "" + operationId: lfcPrewarm + responses: + 202: + description: LFC prewarm started + 429: + description: LFC prewarm ongoing + get: + tags: + - Prewarm + summary: Get LFC prewarm state + description: "" + operationId: getLfcPrewarmState + responses: + 200: + description: Prewarm state + content: + application/json: + schema: + $ref: "#/components/schemas/LfcPrewarmState" + + /lfc/offload: + post: + summary: Request LFC offload + description: "" + operationId: lfcOffload + responses: + 202: + description: LFC offload started + 429: + description: LFC offload ongoing + get: + tags: + - Prewarm + summary: Get LFC offloading state + description: "" + operationId: getLfcOffloadState + responses: + 200: + description: Offload state + content: + application/json: + schema: + $ref: "#/components/schemas/LfcOffloadState" + /database_schema: get: tags: @@ -497,6 +578,70 @@ components: type: string example: "1.0.0" + SafekeepersLsn: + type: object + required: + - safekeepers + - wal_flush_lsn + properties: + safekeepers: + description: Primary replica safekeepers + type: string + wal_flush_lsn: + description: Primary last WAL flush LSN + type: string + + LfcPrewarmState: + type: object + required: + - status + - total + - prewarmed + - skipped + properties: + status: + description: Lfc prewarm status + enum: [not_prewarmed, prewarming, completed, failed] + type: string + error: + description: Lfc prewarm error, if any + type: string + total: + description: Total pages processed + type: integer + prewarmed: + description: Total pages prewarmed + type: integer + skipped: + 
description: Pages processed but not prewarmed + type: integer + + LfcOffloadState: + type: object + required: + - status + properties: + status: + description: Lfc offload status + enum: [not_offloaded, offloading, completed, failed] + type: string + error: + description: Lfc offload error, if any + type: string + + PromoteState: + type: object + required: + - status + properties: + status: + description: Promote result + enum: [not_promoted, completed, failed] + type: string + error: + description: Promote error, if any + type: string + InstalledExtensions: type: object properties: diff --git a/compute_tools/src/http/routes/mod.rs b/compute_tools/src/http/routes/mod.rs index 432e66a830..dd71f663eb 100644 --- a/compute_tools/src/http/routes/mod.rs +++ b/compute_tools/src/http/routes/mod.rs @@ -14,6 +14,7 @@ pub(in crate::http) mod insights; pub(in crate::http) mod lfc; pub(in crate::http) mod metrics; pub(in crate::http) mod metrics_json; +pub(in crate::http) mod promote; pub(in crate::http) mod status; pub(in crate::http) mod terminate; diff --git a/compute_tools/src/http/routes/promote.rs b/compute_tools/src/http/routes/promote.rs new file mode 100644 index 0000000000..bc5f93b4da --- /dev/null +++ b/compute_tools/src/http/routes/promote.rs @@ -0,0 +1,14 @@ +use crate::http::JsonResponse; +use axum::Form; +use http::StatusCode; + +pub(in crate::http) async fn promote( + compute: axum::extract::State>, + Form(safekeepers_lsn): Form, +) -> axum::response::Response { + let state = compute.promote(safekeepers_lsn).await; + if let compute_api::responses::PromoteState::Failed { error } = state { + return JsonResponse::error(StatusCode::INTERNAL_SERVER_ERROR, error); + } + JsonResponse::success(StatusCode::OK, state) +} diff --git a/compute_tools/src/http/server.rs b/compute_tools/src/http/server.rs index d5d2427971..17939e39d4 100644 --- a/compute_tools/src/http/server.rs +++ b/compute_tools/src/http/server.rs @@ -23,7 +23,7 @@ use super::{ 
middleware::authorize::Authorize, routes::{ check_writability, configure, database_schema, dbs_and_roles, extension_server, extensions, - grants, insights, lfc, metrics, metrics_json, status, terminate, + grants, insights, lfc, metrics, metrics_json, promote, status, terminate, }, }; use crate::compute::ComputeNode; @@ -87,6 +87,7 @@ impl From<&Server> for Router> { let authenticated_router = Router::>::new() .route("/lfc/prewarm", get(lfc::prewarm_state).post(lfc::prewarm)) .route("/lfc/offload", get(lfc::offload_state).post(lfc::offload)) + .route("/promote", post(promote::promote)) .route("/check_writability", post(check_writability::is_writable)) .route("/configure", post(configure::configure)) .route("/database_schema", get(database_schema::get_schema_dump)) diff --git a/compute_tools/src/lib.rs b/compute_tools/src/lib.rs index 3899a1ca76..2d5d4565b7 100644 --- a/compute_tools/src/lib.rs +++ b/compute_tools/src/lib.rs @@ -12,6 +12,7 @@ pub mod logger; pub mod catalog; pub mod compute; pub mod compute_prewarm; +pub mod compute_promote; pub mod disk_quota; pub mod extension_server; pub mod installed_extensions; diff --git a/libs/compute_api/src/responses.rs b/libs/compute_api/src/responses.rs index a54411b06a..e10c381fb4 100644 --- a/libs/compute_api/src/responses.rs +++ b/libs/compute_api/src/responses.rs @@ -46,7 +46,7 @@ pub struct ExtensionInstallResponse { pub version: ExtVersion, } -#[derive(Serialize, Default, Debug, Clone)] +#[derive(Serialize, Default, Debug, Clone, PartialEq)] #[serde(tag = "status", rename_all = "snake_case")] pub enum LfcPrewarmState { #[default] @@ -58,6 +58,17 @@ pub enum LfcPrewarmState { }, } +impl Display for LfcPrewarmState { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + match self { + LfcPrewarmState::NotPrewarmed => f.write_str("NotPrewarmed"), + LfcPrewarmState::Prewarming => f.write_str("Prewarming"), + LfcPrewarmState::Completed => f.write_str("Completed"), + LfcPrewarmState::Failed { error } => 
write!(f, "Error({error})"), + } + } +} + #[derive(Serialize, Default, Debug, Clone, PartialEq)] #[serde(tag = "status", rename_all = "snake_case")] pub enum LfcOffloadState { @@ -70,6 +81,23 @@ pub enum LfcOffloadState { }, } +#[derive(Serialize, Debug, Clone, PartialEq)] +#[serde(tag = "status", rename_all = "snake_case")] +/// Response of /promote +pub enum PromoteState { + NotPromoted, + Completed, + Failed { error: String }, +} + +#[derive(Deserialize, Serialize, Default, Debug, Clone)] +#[serde(rename_all = "snake_case")] +/// Result of /safekeepers_lsn +pub struct SafekeepersLsn { + pub safekeepers: String, + pub wal_flush_lsn: utils::lsn::Lsn, +} + /// Response of the /status API #[derive(Serialize, Debug, Deserialize)] #[serde(rename_all = "snake_case")] diff --git a/test_runner/fixtures/endpoint/http.py b/test_runner/fixtures/endpoint/http.py index 294c52321b..1d278095ce 100644 --- a/test_runner/fixtures/endpoint/http.py +++ b/test_runner/fixtures/endpoint/http.py @@ -2,11 +2,12 @@ from __future__ import annotations import urllib.parse from enum import StrEnum -from typing import TYPE_CHECKING, final +from typing import TYPE_CHECKING, Any, final import requests from requests.adapters import HTTPAdapter from requests.auth import AuthBase +from requests.exceptions import ReadTimeout from typing_extensions import override from fixtures.log_helper import log @@ -102,6 +103,18 @@ class EndpointHttpClient(requests.Session): wait_until(offloaded) + def promote(self, safekeepers_lsn: dict[str, Any], disconnect: bool = False): + url = f"http://localhost:{self.external_port}/promote" + if disconnect: + try: # send first request to start promote and disconnect + self.post(url, data=safekeepers_lsn, timeout=0.001) + except ReadTimeout: + pass # wait on second request which returns on promotion finish + res = self.post(url, data=safekeepers_lsn) + res.raise_for_status() + json: dict[str, str] = res.json() + return json + def database_schema(self, database: str): res = 
self.get( f"http://localhost:{self.external_port}/database_schema?database={urllib.parse.quote(database, safe='')}", diff --git a/test_runner/regress/test_replica_promotes.py b/test_runner/regress/test_replica_promotes.py index 4486901bae..1f26269f40 100644 --- a/test_runner/regress/test_replica_promotes.py +++ b/test_runner/regress/test_replica_promotes.py @@ -1,29 +1,51 @@ """ -File with secondary->primary promotion testing. - -This far, only contains a test that we don't break and that the data is persisted. +Secondary -> primary promotion testing """ +from enum import StrEnum from typing import cast import psycopg2 +import pytest from fixtures.common_types import Lsn from fixtures.log_helper import log from fixtures.neon_fixtures import Endpoint, NeonEnv, wait_replica_caughtup -from fixtures.pg_version import PgVersion +from fixtures.utils import USE_LFC +from psycopg2.extensions import cursor as Cursor from pytest import raises def stop_and_check_lsn(ep: Endpoint, expected_lsn: Lsn | None): ep.stop(mode="immediate-terminate") lsn = ep.terminate_flush_lsn - if expected_lsn is not None: + assert (lsn is not None) == (expected_lsn is not None), f"{lsn=}, {expected_lsn=}" + if lsn is not None: assert lsn >= expected_lsn, f"{expected_lsn=} < {lsn=}" - else: - assert lsn == expected_lsn, f"{expected_lsn=} != {lsn=}" -def test_replica_promotes(neon_simple_env: NeonEnv, pg_version: PgVersion): +def get_lsn_triple(cur: Cursor) -> tuple[str, str, str]: + cur.execute( + """ + SELECT pg_current_wal_insert_lsn(), + pg_current_wal_lsn(), + pg_current_wal_flush_lsn() + """ + ) + return cast("tuple[str, str, str]", cur.fetchone()) + + +class PromoteMethod(StrEnum): + COMPUTE_CTL = "compute-ctl" + POSTGRES = "postgres" + + +METHOD_OPTIONS = [e for e in PromoteMethod] +METHOD_IDS = [e.value for e in PromoteMethod] + + +@pytest.mark.skipif(not USE_LFC, reason="LFC is disabled, skipping") +@pytest.mark.parametrize("method", METHOD_OPTIONS, ids=METHOD_IDS) +def 
test_replica_promote(neon_simple_env: NeonEnv, method: PromoteMethod): """ Test that a replica safely promotes, and can commit data updates which show up when the primary boots up after the promoted secondary endpoint @@ -38,29 +60,26 @@ def test_replica_promotes(neon_simple_env: NeonEnv, pg_version: PgVersion): with primary.connect() as primary_conn: primary_cur = primary_conn.cursor() + primary_cur.execute("create extension neon") primary_cur.execute( "create table t(pk bigint GENERATED ALWAYS AS IDENTITY, payload integer)" ) primary_cur.execute("INSERT INTO t(payload) SELECT generate_series(1, 100)") - primary_cur.execute( - """ - SELECT pg_current_wal_insert_lsn(), - pg_current_wal_lsn(), - pg_current_wal_flush_lsn() - """ - ) - lsn_triple = cast("tuple[str, str, str]", primary_cur.fetchone()) + + lsn_triple = get_lsn_triple(primary_cur) log.info(f"Primary: Current LSN after workload is {lsn_triple}") expected_primary_lsn: Lsn = Lsn(lsn_triple[2]) primary_cur.execute("show neon.safekeepers") safekeepers = primary_cur.fetchall()[0][0] - wait_replica_caughtup(primary, secondary) + if method == PromoteMethod.COMPUTE_CTL: + primary.http_client().offload_lfc() + else: + wait_replica_caughtup(primary, secondary) with secondary.connect() as secondary_conn: secondary_cur = secondary_conn.cursor() secondary_cur.execute("select count(*) from t") - assert secondary_cur.fetchone() == (100,) with raises(psycopg2.Error): @@ -71,28 +90,30 @@ def test_replica_promotes(neon_simple_env: NeonEnv, pg_version: PgVersion): secondary_cur.execute("select count(*) from t") assert secondary_cur.fetchone() == (100,) + primary_endpoint_id = primary.endpoint_id stop_and_check_lsn(primary, expected_primary_lsn) # Reconnect to the secondary to make sure we get a read-write connection promo_conn = secondary.connect() promo_cur = promo_conn.cursor() - promo_cur.execute(f"alter system set neon.safekeepers='{safekeepers}'") - promo_cur.execute("select pg_reload_conf()") + if method == 
PromoteMethod.COMPUTE_CTL: + client = secondary.http_client() + client.prewarm_lfc(primary_endpoint_id) + # control plane knows safekeepers, simulate it by querying primary + assert (lsn := primary.terminate_flush_lsn) + safekeepers_lsn = {"safekeepers": safekeepers, "wal_flush_lsn": lsn} + assert client.promote(safekeepers_lsn)["status"] == "completed" + else: + promo_cur.execute(f"alter system set neon.safekeepers='{safekeepers}'") + promo_cur.execute("select pg_reload_conf()") + promo_cur.execute("SELECT * FROM pg_promote()") + assert promo_cur.fetchone() == (True,) - promo_cur.execute("SELECT * FROM pg_promote()") - assert promo_cur.fetchone() == (True,) - promo_cur.execute( - """ - SELECT pg_current_wal_insert_lsn(), - pg_current_wal_lsn(), - pg_current_wal_flush_lsn() - """ - ) - log.info(f"Secondary: LSN after promotion is {promo_cur.fetchone()}") + lsn_triple = get_lsn_triple(promo_cur) + log.info(f"Secondary: LSN after promotion is {lsn_triple}") # Reconnect to the secondary to make sure we get a read-write connection - with secondary.connect() as new_primary_conn: - new_primary_cur = new_primary_conn.cursor() + with secondary.connect() as conn, conn.cursor() as new_primary_cur: new_primary_cur.execute("select count(*) from t") assert new_primary_cur.fetchone() == (100,) @@ -101,43 +122,34 @@ def test_replica_promotes(neon_simple_env: NeonEnv, pg_version: PgVersion): ) assert new_primary_cur.fetchall() == [(it,) for it in range(101, 201)] - new_primary_cur = new_primary_conn.cursor() + new_primary_cur = conn.cursor() new_primary_cur.execute("select payload from t") assert new_primary_cur.fetchall() == [(it,) for it in range(1, 201)] new_primary_cur.execute("select count(*) from t") assert new_primary_cur.fetchone() == (200,) - new_primary_cur.execute( - """ - SELECT pg_current_wal_insert_lsn(), - pg_current_wal_lsn(), - pg_current_wal_flush_lsn() - """ - ) - log.info(f"Secondary: LSN after workload is {new_primary_cur.fetchone()}") - with 
secondary.connect() as second_viewpoint_conn: - new_primary_cur = second_viewpoint_conn.cursor() + lsn_triple = get_lsn_triple(new_primary_cur) + log.info(f"Secondary: LSN after workload is {lsn_triple}") + expected_promoted_lsn = Lsn(lsn_triple[2]) + + with secondary.connect() as conn, conn.cursor() as new_primary_cur: new_primary_cur.execute("select payload from t") assert new_primary_cur.fetchall() == [(it,) for it in range(1, 201)] - # wait_for_last_flush_lsn(env, secondary, env.initial_tenant, env.initial_timeline) - - # secondaries don't sync safekeepers on finish so LSN will be None - stop_and_check_lsn(secondary, None) + if method == PromoteMethod.COMPUTE_CTL: + # compute_ctl's /promote switches replica type to Primary so it syncs + # safekeepers on finish + stop_and_check_lsn(secondary, expected_promoted_lsn) + else: + # on testing postgres, we don't update replica type, secondaries don't + # sync so lsn should be None + stop_and_check_lsn(secondary, None) primary = env.endpoints.create_start(branch_name="main", endpoint_id="primary2") - with primary.connect() as new_primary: - new_primary_cur = new_primary.cursor() - new_primary_cur.execute( - """ - SELECT pg_current_wal_insert_lsn(), - pg_current_wal_lsn(), - pg_current_wal_flush_lsn() - """ - ) - lsn_triple = cast("tuple[str, str, str]", new_primary_cur.fetchone()) + with primary.connect() as new_primary, new_primary.cursor() as new_primary_cur: + lsn_triple = get_lsn_triple(new_primary_cur) expected_primary_lsn = Lsn(lsn_triple[2]) log.info(f"New primary: Boot LSN is {lsn_triple}") @@ -146,5 +158,39 @@ def test_replica_promotes(neon_simple_env: NeonEnv, pg_version: PgVersion): new_primary_cur.execute("INSERT INTO t (payload) SELECT generate_series(201, 300)") new_primary_cur.execute("select count(*) from t") assert new_primary_cur.fetchone() == (300,) - stop_and_check_lsn(primary, expected_primary_lsn) + + +@pytest.mark.skipif(not USE_LFC, reason="LFC is disabled, skipping") +def 
test_replica_promote_handler_disconnects(neon_simple_env: NeonEnv): + """ + Test that if a handler disconnects from /promote route of compute_ctl, promotion still happens + once, and no error is thrown + """ + env: NeonEnv = neon_simple_env + primary: Endpoint = env.endpoints.create_start(branch_name="main", endpoint_id="primary") + secondary: Endpoint = env.endpoints.new_replica_start(origin=primary, endpoint_id="secondary") + + with primary.connect() as conn, conn.cursor() as cur: + cur.execute("create extension neon") + cur.execute("create table t(pk bigint GENERATED ALWAYS AS IDENTITY, payload integer)") + cur.execute("INSERT INTO t(payload) SELECT generate_series(1, 100)") + cur.execute("show neon.safekeepers") + safekeepers = cur.fetchall()[0][0] + + primary.http_client().offload_lfc() + primary_endpoint_id = primary.endpoint_id + primary.stop(mode="immediate-terminate") + assert (lsn := primary.terminate_flush_lsn) + + client = secondary.http_client() + client.prewarm_lfc(primary_endpoint_id) + safekeepers_lsn = {"safekeepers": safekeepers, "wal_flush_lsn": lsn} + assert client.promote(safekeepers_lsn, disconnect=True)["status"] == "completed" + + with secondary.connect() as conn, conn.cursor() as cur: + cur.execute("select count(*) from t") + assert cur.fetchone() == (100,) + cur.execute("INSERT INTO t (payload) SELECT generate_series(101, 200) RETURNING payload") + cur.execute("select count(*) from t") + assert cur.fetchone() == (200,) From 8f3351fa91c5170dbeb42b0fd84fd5ceec7b3ead Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Wed, 9 Jul 2025 16:17:22 +0200 Subject: [PATCH 34/65] pageserver/client_grpc: split GetPage batches across shards (#12469) ## Problem The rich gRPC Pageserver client needs to split GetPage batches that straddle multiple shards. Touches #11735. Requires #12462. ## Summary of changes Adds a `GetPageSplitter` which splits `GetPageRequest` that span multiple shards, and then reassembles the responses. 
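The split-and-reassemble idea can be sketched as follows (Python pseudocode; the simple `stripe_size` striping stands in for the real `key_to_shard_number` mapping, and all names here are illustrative):

```python
def shard_of(block_no, shard_count, stripe_size):
    """Toy stand-in for key_to_shard_number: blocks are striped across
    shards in runs of `stripe_size`."""
    return (block_no // stripe_size) % shard_count

def split_get_page(block_nos, shard_count, stripe_size=2):
    """Split one GetPage batch into per-shard batches, remembering each
    block's position so responses can be reassembled in request order."""
    per_shard = {}
    for idx, blkno in enumerate(block_nos):
        shard = shard_of(blkno, shard_count, stripe_size)
        per_shard.setdefault(shard, []).append((idx, blkno))
    return per_shard

def reassemble(per_shard_responses, total):
    """Merge per-shard responses (lists of (original_index, page)) back
    into the original request order."""
    pages = [None] * total
    for responses in per_shard_responses.values():
        for idx, page in responses:
            pages[idx] = page
    return pages

blocks = [0, 1, 2, 3, 4, 5]
split = split_get_page(blocks, shard_count=2, stripe_size=2)
assert split == {0: [(0, 0), (1, 1), (4, 4), (5, 5)], 1: [(2, 2), (3, 3)]}
# Pretend each shard "reads" page contents equal to the block number.
responses = {s: [(i, b) for i, b in reqs] for s, reqs in split.items()}
assert reassemble(responses, len(blocks)) == blocks
```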
Dispatches per-shard requests in parallel. --- Cargo.lock | 1 + pageserver/client_grpc/Cargo.toml | 1 + pageserver/client_grpc/src/client.rs | 90 ++++++++++---- pageserver/client_grpc/src/lib.rs | 1 + pageserver/client_grpc/src/split.rs | 172 +++++++++++++++++++++++++++ 5 files changed, 240 insertions(+), 25 deletions(-) create mode 100644 pageserver/client_grpc/src/split.rs diff --git a/Cargo.lock b/Cargo.lock index c49a2daba7..caed814d5f 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4499,6 +4499,7 @@ name = "pageserver_client_grpc" version = "0.1.0" dependencies = [ "anyhow", + "bytes", "compute_api", "futures", "pageserver_api", diff --git a/pageserver/client_grpc/Cargo.toml b/pageserver/client_grpc/Cargo.toml index 0a8bcad2ef..84e27abb84 100644 --- a/pageserver/client_grpc/Cargo.toml +++ b/pageserver/client_grpc/Cargo.toml @@ -6,6 +6,7 @@ license.workspace = true [dependencies] anyhow.workspace = true +bytes.workspace = true compute_api.workspace = true futures.workspace = true pageserver_api.workspace = true diff --git a/pageserver/client_grpc/src/client.rs b/pageserver/client_grpc/src/client.rs index 5bccdeede3..c21ce2e47d 100644 --- a/pageserver/client_grpc/src/client.rs +++ b/pageserver/client_grpc/src/client.rs @@ -2,13 +2,15 @@ use std::collections::HashMap; use std::sync::Arc; use anyhow::anyhow; +use futures::stream::FuturesUnordered; +use futures::{FutureExt as _, StreamExt as _}; use tracing::instrument; use crate::pool::{ChannelPool, ClientGuard, ClientPool, StreamGuard, StreamPool}; use crate::retry::Retry; +use crate::split::GetPageSplitter; use compute_api::spec::PageserverProtocol; -use pageserver_api::key::{Key, rel_block_to_key}; -use pageserver_api::shard::{ShardStripeSize, key_to_shard_number}; +use pageserver_api::shard::ShardStripeSize; use pageserver_page_api as page_api; use utils::id::{TenantId, TimelineId}; use utils::shard::{ShardCount, ShardIndex, ShardNumber}; @@ -78,10 +80,11 @@ impl PageserverClient { .await } - /// Fetches a page. 
The `request_id` must be unique across all in-flight requests. + /// Fetches pages. The `request_id` must be unique across all in-flight requests. Automatically + /// splits requests that straddle shard boundaries, and assembles the responses. /// - /// Unlike the `page_api::Client`, this client automatically converts `status_code` into - /// `tonic::Status` errors. All responses will have `GetPageStatusCode::Ok`. + /// Unlike `page_api::Client`, this automatically converts `status_code` into `tonic::Status` + /// errors. All responses will have `GetPageStatusCode::Ok`. #[instrument(skip_all, fields( req_id = %req.request_id, rel = %req.rel, @@ -93,22 +96,55 @@ impl PageserverClient { &self, req: page_api::GetPageRequest, ) -> tonic::Result { - // TODO: this needs to split batch requests across shards and reassemble responses into a - // single response. It must also re-split the batch in case the shard map changes. For now, - // just use the first page. - let key = rel_block_to_key( - req.rel, - req.block_numbers - .first() - .copied() - .ok_or_else(|| tonic::Status::invalid_argument("no block numbers provided"))?, - ); + // Make sure we have at least one page. + if req.block_numbers.is_empty() { + return Err(tonic::Status::invalid_argument("no block number")); + } - self.retry + // Fast path: request is for a single shard. + if let Some(shard_id) = + GetPageSplitter::is_single_shard(&req, self.shards.count, self.shards.stripe_size) + { + return self.get_page_for_shard(shard_id, req).await; + } + + // Request spans multiple shards. Split it, dispatch concurrent per-shard requests, and + // reassemble the responses. + // + // TODO: when we support shard map updates, we need to detect when it changes and re-split + // the request on errors. 
+ let mut splitter = GetPageSplitter::split(req, self.shards.count, self.shards.stripe_size); + + let mut shard_requests: FuturesUnordered<_> = splitter + .drain_requests() + .map(|(shard_id, shard_req)| { + // NB: each request will retry internally. + self.get_page_for_shard(shard_id, shard_req) + .map(move |result| result.map(|resp| (shard_id, resp))) + }) + .collect(); + + while let Some((shard_id, shard_response)) = shard_requests.next().await.transpose()? { + splitter.add_response(shard_id, shard_response)?; + } + + splitter.assemble_response() + } + + /// Fetches pages that belong to the given shard. + #[instrument(skip_all, fields(shard = %shard_id))] + async fn get_page_for_shard( + &self, + shard_id: ShardIndex, + req: page_api::GetPageRequest, + ) -> tonic::Result { + let resp = self + .retry .with(async || { - let stream = self.shards.get_for_key(key).stream().await; + let stream = self.shards.get(shard_id)?.stream().await; let resp = stream.send(req.clone()).await?; + // Convert per-request errors into a tonic::Status. if resp.status_code != page_api::GetPageStatusCode::Ok { return Err(tonic::Status::new( resp.status_code.into(), @@ -118,7 +154,18 @@ impl PageserverClient { Ok(resp) }) - .await + .await?; + + // Make sure we got the right number of pages. + // NB: check outside of the retry loop, since we don't want to retry this. + let (expected, actual) = (req.block_numbers.len(), resp.page_images.len()); + if expected != actual { + return Err(tonic::Status::internal(format!( + "expected {expected} pages for shard {shard_id}, got {actual}", + ))); + } + + Ok(resp) } /// Returns the size of a relation, as # of blocks. @@ -216,13 +263,6 @@ impl Shards { .ok_or_else(|| tonic::Status::not_found(format!("unknown shard {shard_id}"))) } - /// Looks up the shard that owns the given key. 
- fn get_for_key(&self, key: Key) -> &Shard { - let shard_number = key_to_shard_number(self.count, self.stripe_size, &key); - self.get(ShardIndex::new(shard_number, self.count)) - .expect("must exist") - } - /// Returns shard 0. fn get_zero(&self) -> &Shard { self.get(ShardIndex::new(ShardNumber(0), self.count)) diff --git a/pageserver/client_grpc/src/lib.rs b/pageserver/client_grpc/src/lib.rs index 2a59f9868c..3fc7178be2 100644 --- a/pageserver/client_grpc/src/lib.rs +++ b/pageserver/client_grpc/src/lib.rs @@ -1,5 +1,6 @@ mod client; mod pool; mod retry; +mod split; pub use client::PageserverClient; diff --git a/pageserver/client_grpc/src/split.rs b/pageserver/client_grpc/src/split.rs new file mode 100644 index 0000000000..5bbcaab393 --- /dev/null +++ b/pageserver/client_grpc/src/split.rs @@ -0,0 +1,172 @@ +use std::collections::HashMap; + +use bytes::Bytes; + +use pageserver_api::key::rel_block_to_key; +use pageserver_api::shard::{ShardStripeSize, key_to_shard_number}; +use pageserver_page_api as page_api; +use utils::shard::{ShardCount, ShardIndex}; + +/// Splits GetPageRequests that straddle shard boundaries and assembles the responses. +/// TODO: add tests for this. +pub struct GetPageSplitter { + /// The original request ID. Used for all shard requests. + request_id: page_api::RequestID, + /// Split requests by shard index. + requests: HashMap, + /// Maps the offset in `GetPageRequest::block_numbers` to the owning shard. Used to assemble + /// the response pages in the same order as the original request. + block_shards: Vec, + /// Page responses by shard index. Will be assembled into a single response. + responses: HashMap>, +} + +impl GetPageSplitter { + /// Checks if the given request only touches a single shard, and returns the shard ID. This is + /// the common case, so we check first in order to avoid unnecessary allocations and overhead. + /// The caller must ensure that the request has at least one block number, or this will panic. 
+ pub fn is_single_shard( + req: &page_api::GetPageRequest, + count: ShardCount, + stripe_size: ShardStripeSize, + ) -> Option { + // Fast path: unsharded tenant. + if count.is_unsharded() { + return Some(ShardIndex::unsharded()); + } + + // Find the base shard index for the first page, and compare with the rest. + let key = rel_block_to_key(req.rel, *req.block_numbers.first().expect("no pages")); + let shard_number = key_to_shard_number(count, stripe_size, &key); + + req.block_numbers + .iter() + .skip(1) // computed above + .all(|&blkno| { + let key = rel_block_to_key(req.rel, blkno); + key_to_shard_number(count, stripe_size, &key) == shard_number + }) + .then_some(ShardIndex::new(shard_number, count)) + } + + /// Splits the given request. + pub fn split( + req: page_api::GetPageRequest, + count: ShardCount, + stripe_size: ShardStripeSize, + ) -> Self { + // The caller should make sure we don't split requests unnecessarily. + debug_assert!( + Self::is_single_shard(&req, count, stripe_size).is_none(), + "unnecessary request split" + ); + + // Split the requests by shard index. + let mut requests = HashMap::with_capacity(2); // common case + let mut block_shards = Vec::with_capacity(req.block_numbers.len()); + for blkno in req.block_numbers { + let key = rel_block_to_key(req.rel, blkno); + let shard_number = key_to_shard_number(count, stripe_size, &key); + let shard_id = ShardIndex::new(shard_number, count); + + let shard_req = requests + .entry(shard_id) + .or_insert_with(|| page_api::GetPageRequest { + request_id: req.request_id, + request_class: req.request_class, + rel: req.rel, + read_lsn: req.read_lsn, + block_numbers: Vec::new(), + }); + shard_req.block_numbers.push(blkno); + block_shards.push(shard_id); + } + + Self { + request_id: req.request_id, + responses: HashMap::with_capacity(requests.len()), + requests, + block_shards, + } + } + + /// Drains the per-shard requests, moving them out of the hashmap to avoid extra allocations. 
+ pub fn drain_requests( + &mut self, + ) -> impl Iterator { + self.requests.drain() + } + + /// Adds a response from the given shard. + #[allow(clippy::result_large_err)] + pub fn add_response( + &mut self, + shard_id: ShardIndex, + response: page_api::GetPageResponse, + ) -> tonic::Result<()> { + // The caller should already have converted status codes into tonic::Status. + assert_eq!(response.status_code, page_api::GetPageStatusCode::Ok); + + // Make sure the response matches the request ID. + if response.request_id != self.request_id { + return Err(tonic::Status::internal(format!( + "response ID {} does not match request ID {}", + response.request_id, self.request_id + ))); + } + + // Add the response data to the map. + let old = self.responses.insert(shard_id, response.page_images); + + if old.is_some() { + return Err(tonic::Status::internal(format!( + "duplicate response for shard {shard_id}", + ))); + } + + Ok(()) + } + + /// Assembles the shard responses into a single response. Responses must be present for all + /// relevant shards, and the total number of pages must match the original request. + #[allow(clippy::result_large_err)] + pub fn assemble_response(self) -> tonic::Result { + let mut response = page_api::GetPageResponse { + request_id: self.request_id, + status_code: page_api::GetPageStatusCode::Ok, + reason: None, + page_images: Vec::with_capacity(self.block_shards.len()), + }; + + // Set up per-shard page iterators we can pull from. + let mut shard_responses = HashMap::with_capacity(self.responses.len()); + for (shard_id, responses) in self.responses { + shard_responses.insert(shard_id, responses.into_iter()); + } + + // Reassemble the responses in the same order as the original request. + for shard_id in &self.block_shards { + let page = shard_responses + .get_mut(shard_id) + .ok_or_else(|| { + tonic::Status::internal(format!("missing response for shard {shard_id}")) + })? 
+ .next() + .ok_or_else(|| { + tonic::Status::internal(format!("missing page from shard {shard_id}")) + })?; + response.page_images.push(page); + } + + // Make sure there are no additional pages. + for (shard_id, mut pages) in shard_responses { + if pages.next().is_some() { + return Err(tonic::Status::internal(format!( + "extra pages returned from shard {shard_id}" + ))); + } + } + + Ok(response) + } +} From bc6a756f1c41145c7eba01b105c7d3a3e08829cf Mon Sep 17 00:00:00 2001 From: Mikhail Date: Wed, 9 Jul 2025 15:29:45 +0100 Subject: [PATCH 35/65] ci: lint openapi specs using redocly (#12510) We need to lint specs for pageserver, endpoint storage, and safekeeper #0000 --- .github/workflows/build_and_test.yml | 18 ++++++++++++ Makefile | 9 ++++++ compute_tools/src/http/openapi_spec.yaml | 29 ------------------- pageserver/src/http/openapi_spec.yml | 37 +++++++++--------------- 4 files changed, 41 insertions(+), 52 deletions(-) diff --git a/.github/workflows/build_and_test.yml b/.github/workflows/build_and_test.yml index 864abad574..cc9534f05d 100644 --- a/.github/workflows/build_and_test.yml +++ b/.github/workflows/build_and_test.yml @@ -87,6 +87,24 @@ jobs: uses: ./.github/workflows/build-build-tools-image.yml secrets: inherit + lint-openapi-spec: + runs-on: ubuntu-22.04 + needs: [ meta, check-permissions ] + # We do need to run this in `.*-rc-pr` because of hotfixes. 
+ if: ${{ contains(fromJSON('["pr", "push-main", "storage-rc-pr", "proxy-rc-pr", "compute-rc-pr"]'), needs.meta.outputs.run-kind) }} + steps: + - name: Harden the runner (Audit all outbound calls) + uses: step-security/harden-runner@4d991eb9b905ef189e4c376166672c3f2f230481 # v2.11.0 + with: + egress-policy: audit + - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2 + - uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0 + with: + registry: ghcr.io + username: ${{ github.actor }} + password: ${{ secrets.GITHUB_TOKEN }} + - run: make lint-openapi-spec + check-codestyle-python: needs: [ meta, check-permissions, build-build-tools-image ] # No need to run on `main` because we this in the merge queue. We do need to run this in `.*-rc-pr` because of hotfixes. diff --git a/Makefile b/Makefile index 4b31e26810..d07ac907b4 100644 --- a/Makefile +++ b/Makefile @@ -220,6 +220,15 @@ neon-pgindent: postgres-v17-pg-bsd-indent neon-pg-ext-v17 setup-pre-commit-hook: ln -s -f $(ROOT_PROJECT_DIR)/pre-commit.py .git/hooks/pre-commit +.PHONY: lint-openapi-spec +lint-openapi-spec: + # operation-2xx-response: pageserver timeline delete returns 404 on success + find . -iname "openapi_spec.y*ml" -exec\ + docker run --rm -v ${PWD}:/spec ghcr.io/redocly/cli:1.34.4\ + --skip-rule=operation-operationId --skip-rule=operation-summary --extends=minimal\ + --skip-rule=no-server-example.com --skip-rule=operation-2xx-response\ + lint {} \+ + # Targets for building PostgreSQL are defined in postgres.mk. # # But if the caller has indicated that PostgreSQL is already diff --git a/compute_tools/src/http/openapi_spec.yaml b/compute_tools/src/http/openapi_spec.yaml index eaf33d1f82..3c58b284b3 100644 --- a/compute_tools/src/http/openapi_spec.yaml +++ b/compute_tools/src/http/openapi_spec.yaml @@ -416,15 +416,6 @@ components: total_startup_ms: type: integer - Info: - type: object - description: Information about VM/Pod. 
- required: - - num_cpus - properties: - num_cpus: - type: integer - DbsAndRoles: type: object description: Databases and Roles @@ -642,26 +633,6 @@ components: description: Promote error, if any type: string - InstalledExtensions: - type: object - properties: - extensions: - description: Contains list of installed extensions. - type: array - items: - type: object - properties: - extname: - type: string - version: - type: string - items: - type: string - n_databases: - type: integer - owned_by_superuser: - type: integer - SetRoleGrantsRequest: type: object required: diff --git a/pageserver/src/http/openapi_spec.yml b/pageserver/src/http/openapi_spec.yml index e8d1367d6c..3ffc80f19a 100644 --- a/pageserver/src/http/openapi_spec.yml +++ b/pageserver/src/http/openapi_spec.yml @@ -116,26 +116,6 @@ paths: schema: type: string - /v1/tenant/{tenant_id}/timeline: - parameters: - - name: tenant_id - in: path - required: true - schema: - type: string - get: - description: Get timelines for tenant - responses: - "200": - description: TimelineInfo - content: - application/json: - schema: - type: array - items: - $ref: "#/components/schemas/TimelineInfo" - - /v1/tenant/{tenant_id}/timeline/{timeline_id}: parameters: - name: tenant_id @@ -618,7 +598,7 @@ paths: schema: $ref: "#/components/schemas/SecondaryProgress" - /v1/tenant/{tenant_id}/timeline/: + /v1/tenant/{tenant_id}/timeline: parameters: - name: tenant_id in: path @@ -685,6 +665,17 @@ paths: application/json: schema: $ref: "#/components/schemas/Error" + get: + description: Get timelines for tenant + responses: + "200": + description: TimelineInfo + content: + application/json: + schema: + type: array + items: + $ref: "#/components/schemas/TimelineInfo" /v1/tenant/{tenant_shard_id}/timeline/{timeline_id}/detach_ancestor: parameters: @@ -767,7 +758,7 @@ paths: $ref: "#/components/schemas/ServiceUnavailableError" - /v1/tenant/: + /v1/tenant: get: description: Get tenants list responses: @@ -847,7 +838,7 @@ paths: items: 
$ref: "#/components/schemas/TenantInfo" - /v1/tenant/{tenant_id}/config/: + /v1/tenant/{tenant_id}/config: parameters: - name: tenant_id in: path From 5c0de4ee8ccc39f67d718c9575897b6ca0c388b8 Mon Sep 17 00:00:00 2001 From: Alexander Lakhin Date: Wed, 9 Jul 2025 18:22:54 +0300 Subject: [PATCH 36/65] Fix parameter name in workload for test_multiple_subscription_branching (#12522) ## Problem As discovered in https://github.com/neondatabase/neon/issues/12394, test_multiple_subscription_branching generates skewed data distribution, that leads to test failures when the unevenly filled last table receives even more data. for table t0: pub_res = (42001,), sub_res = (42001,) for table t1: pub_res = (29001,), sub_res = (29001,) for table t2: pub_res = (21001,), sub_res = (21001,) for table t3: pub_res = (21001,), sub_res = (21001,) for table t4: pub_res = (1711001,), sub_res = (1711001,) ## Summary of changes Fix the name of the workload parameter to generate data as expected. --- test_runner/regress/test_subscriber_branching.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test_runner/regress/test_subscriber_branching.py b/test_runner/regress/test_subscriber_branching.py index 83bebc19be..63772f7cd4 100644 --- a/test_runner/regress/test_subscriber_branching.py +++ b/test_runner/regress/test_subscriber_branching.py @@ -332,7 +332,7 @@ def test_multiple_subscription_branching(neon_simple_env: NeonEnv): last_insert_lsn = query_scalar(cursor, "select pg_current_wal_insert_lsn();") - def start_publisher_workload(table_num: int, duration: int): + def start_publisher_workload(i: int, duration: int): start = time.time() with endpoint.cursor(dbname="publisher_db") as cur: while time.time() - start < duration: From 78a6daa874b216711e3afa204bbbb0eb1af3b64b Mon Sep 17 00:00:00 2001 From: a-masterov <72613290+a-masterov@users.noreply.github.com> Date: Wed, 9 Jul 2025 17:28:04 +0200 Subject: [PATCH 37/65] Add retrying in Random ops test if parent branch is not found. 
(#12513) ## Problem Due to a lag in replication, we sometimes cannot get the parent branch definition just after completion of the Public API restore call. This leads to the test failures. https://databricks.atlassian.net/browse/LKB-279 ## Summary of changes The workaround is implemented. Now test retries up to 30 seconds, waiting for the branch definition to appear. --------- Co-authored-by: Alexey Masterov --- test_runner/random_ops/test_random_ops.py | 103 ++++++++++++---------- 1 file changed, 55 insertions(+), 48 deletions(-) diff --git a/test_runner/random_ops/test_random_ops.py b/test_runner/random_ops/test_random_ops.py index d3815c40bb..5c43b06bc5 100644 --- a/test_runner/random_ops/test_random_ops.py +++ b/test_runner/random_ops/test_random_ops.py @@ -117,7 +117,9 @@ class NeonBranch: def create_child_branch(self) -> NeonBranch | None: return self.project.create_branch(self.id) - def create_ro_endpoint(self) -> NeonEndpoint: + def create_ro_endpoint(self) -> NeonEndpoint | None: + if not self.project.check_limit_endpoints(): + return None return NeonEndpoint( self.project, self.neon_api.create_endpoint(self.project_id, self.id, "read_only", {})["endpoint"], @@ -151,11 +153,26 @@ class NeonBranch: return self.updated_at = datetime.fromisoformat(res["branch"]["updated_at"]) parent_id: str = res["branch"]["parent_id"] + # XXX Retry get parent details to work around the issue + # https://databricks.atlassian.net/browse/LKB-279 + target_time = datetime.now() + timedelta(seconds=30) + while datetime.now() < target_time: + try: + parent_def = self.neon_api.get_branch_details(self.project_id, parent_id) + except HTTPError as he: + if he.response.status_code == 404: + log.info("Branch not found, waiting...") + time.sleep(1) + else: + raise HTTPError(he) from he + else: + break + else: + raise RuntimeError(f"Branch {parent_id} not found") + # Creates an object for the parent branch # After the reset operation a new parent branch is created - parent = NeonBranch( - 
self.project, self.neon_api.get_branch_details(self.project_id, parent_id), True - ) + parent = NeonBranch(self.project, parent_def, True) self.project.branches[parent_id] = parent self.parent = parent parent.children[self.id] = self @@ -168,29 +185,21 @@ class NeonBranch: source_timestamp: str | None = None, preserve_under_name: str | None = None, ) -> dict[str, Any] | None: + if not self.project.check_limit_branches(): + return None endpoints = [ep for ep in self.endpoints.values() if ep.type == "read_only"] # Terminate all the benchmarks running to prevent errors. Errors in benchmark during pgbench are expected for ep in endpoints: ep.terminate_benchmark() self.terminate_benchmark() - try: - res: dict[str, Any] = self.neon_api.restore_branch( - self.project_id, - self.id, - source_branch_id, - source_lsn, - source_timestamp, - preserve_under_name, - ) - except HTTPError as he: - if ( - he.response.status_code == 422 - and he.response.json()["code"] == "BRANCHES_LIMIT_EXCEEDED" - ): - log.info("Branch limit exceeded, skipping") - return None - else: - raise HTTPError(he) from he + res: dict[str, Any] = self.neon_api.restore_branch( + self.project_id, + self.id, + source_branch_id, + source_lsn, + source_timestamp, + preserve_under_name, + ) self.project.wait() self.start_benchmark() for ep in endpoints: @@ -239,19 +248,30 @@ class NeonProject: def delete(self) -> None: self.neon_api.delete_project(self.id) + def check_limit_branches(self) -> bool: + if self.limits["max_branches"] == -1 or len(self.branches) < self.limits["max_branches"]: + return True + log.info("branch limit exceeded (%s/%s)", len(self.branches), self.limits["max_branches"]) + return False + + def check_limit_endpoints(self) -> bool: + if ( + self.limits["max_read_only_endpoints"] == -1 + or self.read_only_endpoints_total < self.limits["max_read_only_endpoints"] + ): + return True + log.info( + "Maximum read only endpoint limit exceeded (%s/%s)", + self.read_only_endpoints_total, + 
self.limits["max_read_only_endpoints"], + ) + return False + def create_branch(self, parent_id: str | None = None) -> NeonBranch | None: self.wait() - try: - branch_def = self.neon_api.create_branch(self.id, parent_id=parent_id) - except HTTPError as he: - if ( - he.response.status_code == 422 - and he.response.json()["code"] == "BRANCHES_LIMIT_EXCEEDED" - ): - log.info("Branch limit exceeded, skipping") - return None - else: - raise HTTPError(he) from he + if not self.check_limit_branches(): + return None + branch_def = self.neon_api.create_branch(self.id, parent_id=parent_id) new_branch = NeonBranch(self, branch_def) self.wait() return new_branch @@ -388,17 +408,9 @@ def do_action(project: NeonProject, action: str) -> bool: log.info("Action: %s", action) if action == "new_branch": log.info("Trying to create a new branch") - if 0 <= project.limits["max_branches"] <= len(project.branches): - log.info( - "Maximum branch limit exceeded (%s of %s)", - len(project.branches), - project.limits["max_branches"], - ) - return False parent = project.branches[ random.choice(list(set(project.branches.keys()) - project.reset_branches)) ] - log.info("Parent: %s", parent) child = parent.create_child_branch() if child is None: return False @@ -413,16 +425,11 @@ def do_action(project: NeonProject, action: str) -> bool: log.info("Leaf branches not found, skipping") return False elif action == "new_ro_endpoint": - if 0 <= project.limits["max_read_only_endpoints"] <= project.read_only_endpoints_total: - log.info( - "Maximum read only endpoint limit exceeded (%s of %s)", - project.read_only_endpoints_total, - project.limits["max_read_only_endpoints"], - ) - return False ep = random.choice( [br for br in project.branches.values() if br.id not in project.reset_branches] ).create_ro_endpoint() + if ep is None: + return False log.info("Created the RO endpoint with id %s branch: %s", ep.id, ep.branch.id) ep.start_benchmark() elif action == "delete_ro_endpoint": From 
5ec82105cccf5a8a6c1d28e6f817c703d4b77a26 Mon Sep 17 00:00:00 2001 From: "Alex Chi Z." <4198311+skyzh@users.noreply.github.com> Date: Wed, 9 Jul 2025 11:35:19 -0400 Subject: [PATCH 38/65] fix(pageserver): ensure remote size gets computed (#12520) ## Problem Follow up of #12400 ## Summary of changes We didn't set remote_size_mb to Some when initialized so it never gets computed :( Also added a new API to force refresh the properties. Signed-off-by: Alex Chi Z --- pageserver/src/feature_resolver.rs | 3 ++- pageserver/src/http/routes.rs | 20 ++++++++++++++++++++ test_runner/fixtures/pageserver/http.py | 7 +++++++ test_runner/regress/test_feature_flag.py | 9 +++++++++ 4 files changed, 38 insertions(+), 1 deletion(-) diff --git a/pageserver/src/feature_resolver.rs b/pageserver/src/feature_resolver.rs index 65cac8eea1..f0178fd9b3 100644 --- a/pageserver/src/feature_resolver.rs +++ b/pageserver/src/feature_resolver.rs @@ -409,11 +409,12 @@ impl TenantFeatureResolver { /// Refresh the cached properties and flags on the critical path. 
pub fn refresh_properties_and_flags(&self, tenant_shard: &TenantShard) { - let mut remote_size_mb = None; + let mut remote_size_mb = Some(0.0); for timeline in tenant_shard.list_timelines() { let size = timeline.metrics.resident_physical_size_get(); if size == 0 { remote_size_mb = None; + break; } if let Some(ref mut remote_size_mb) = remote_size_mb { *remote_size_mb += size as f64 / 1024.0 / 1024.0; diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index 2995a37089..3612686b5d 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -3691,6 +3691,23 @@ async fn read_tar_eof(mut reader: (impl tokio::io::AsyncRead + Unpin)) -> anyhow Ok(()) } +async fn force_refresh_feature_flag( + request: Request, + _cancel: CancellationToken, +) -> Result, ApiError> { + let tenant_shard_id: TenantShardId = parse_request_param(&request, "tenant_shard_id")?; + check_permission(&request, Some(tenant_shard_id.tenant_id))?; + + let state = get_state(&request); + let tenant = state + .tenant_manager + .get_attached_tenant_shard(tenant_shard_id)?; + tenant + .feature_resolver + .refresh_properties_and_flags(&tenant); + json_response(StatusCode::OK, ()) +} + async fn tenant_evaluate_feature_flag( request: Request, _cancel: CancellationToken, @@ -4156,6 +4173,9 @@ pub fn make_router( .get("/v1/tenant/:tenant_shard_id/feature_flag/:flag_key", |r| { api_handler(r, tenant_evaluate_feature_flag) }) + .post("/v1/tenant/:tenant_shard_id/force_refresh_feature_flag", |r| { + api_handler(r, force_refresh_feature_flag) + }) .put("/v1/feature_flag/:flag_key", |r| { testing_api_handler("force override feature flag - put", r, force_override_feature_flag_for_testing_put) }) diff --git a/test_runner/fixtures/pageserver/http.py b/test_runner/fixtures/pageserver/http.py index d9037f2d08..79cfba8da6 100644 --- a/test_runner/fixtures/pageserver/http.py +++ b/test_runner/fixtures/pageserver/http.py @@ -1247,3 +1247,10 @@ class 
PageserverHttpClient(requests.Session, MetricsGetter): ) self.verbose_error(res) return res.json() + + def force_refresh_feature_flag(self, tenant_id: TenantId | TenantShardId): + res = self.post( + f"http://localhost:{self.port}/v1/tenant/{tenant_id}/force_refresh_feature_flag", + ) + self.verbose_error(res) + return res.json() diff --git a/test_runner/regress/test_feature_flag.py b/test_runner/regress/test_feature_flag.py index 2712d13dcc..c6c192b6f1 100644 --- a/test_runner/regress/test_feature_flag.py +++ b/test_runner/regress/test_feature_flag.py @@ -49,3 +49,12 @@ def test_feature_flag(neon_env_builder: NeonEnvBuilder): env.initial_tenant, "test-feature-flag" )["result"] ) + + env.pageserver.http_client().force_refresh_feature_flag(env.initial_tenant) + + # Check if the properties exist + result = env.pageserver.http_client().evaluate_feature_flag_multivariate( + env.initial_tenant, "test-feature-flag" + ) + assert "tenant_remote_size_mb" in result["properties"] + assert "tenant_id" in result["properties"] From 2f71eda00fbfea3ee3382fef3024b6376676726e Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Wed, 9 Jul 2025 18:12:59 +0200 Subject: [PATCH 39/65] pageserver/client_grpc: add separate pools for bulk requests (#12475) ## Problem GetPage bulk requests such as prefetches and vacuum can head-of-line block foreground requests, causing increased latency. Touches #11735. Requires #12469. ## Summary of changes * Use dedicated channel/client/stream pools for bulk GetPage requests. * Use lower concurrency but higher queue depth for bulk pools. * Make pool limits configurable. * Require unbounded client pool for stream pool, to avoid accidental starvation. 
--- pageserver/client_grpc/src/client.rs | 112 +++++++++++++++----- pageserver/client_grpc/src/pool.rs | 148 +++++++++++++++------------ pageserver/page_api/src/model.rs | 15 ++- 3 files changed, 180 insertions(+), 95 deletions(-) diff --git a/pageserver/client_grpc/src/client.rs b/pageserver/client_grpc/src/client.rs index c21ce2e47d..63852868c3 100644 --- a/pageserver/client_grpc/src/client.rs +++ b/pageserver/client_grpc/src/client.rs @@ -1,4 +1,5 @@ use std::collections::HashMap; +use std::num::NonZero; use std::sync::Arc; use anyhow::anyhow; @@ -15,6 +16,32 @@ use pageserver_page_api as page_api; use utils::id::{TenantId, TimelineId}; use utils::shard::{ShardCount, ShardIndex, ShardNumber}; +/// Max number of concurrent clients per channel (i.e. TCP connection). New channels will be spun up +/// when full. +/// +/// TODO: tune all of these constants, and consider making them configurable. +/// TODO: consider separate limits for unary and streaming clients, so we don't fill up channels +/// with only streams. +const MAX_CLIENTS_PER_CHANNEL: NonZero = NonZero::new(16).unwrap(); + +/// Max number of concurrent unary request clients per shard. +const MAX_UNARY_CLIENTS: NonZero = NonZero::new(64).unwrap(); + +/// Max number of concurrent GetPage streams per shard. The max number of concurrent GetPage +/// requests is given by `MAX_STREAMS * MAX_STREAM_QUEUE_DEPTH`. +const MAX_STREAMS: NonZero = NonZero::new(64).unwrap(); + +/// Max number of pipelined requests per stream. +const MAX_STREAM_QUEUE_DEPTH: NonZero = NonZero::new(2).unwrap(); + +/// Max number of concurrent bulk GetPage streams per shard, used e.g. for prefetches. Because these +/// are more throughput-oriented, we have a smaller limit but higher queue depth. +const MAX_BULK_STREAMS: NonZero = NonZero::new(16).unwrap(); + +/// Max number of pipelined requests per bulk stream. These are more throughput-oriented and thus +/// get a larger queue depth. 
+const MAX_BULK_STREAM_QUEUE_DEPTH: NonZero = NonZero::new(4).unwrap(); + /// A rich Pageserver gRPC client for a single tenant timeline. This client is more capable than the /// basic `page_api::Client` gRPC client, and supports: /// @@ -87,6 +114,7 @@ impl PageserverClient { /// errors. All responses will have `GetPageStatusCode::Ok`. #[instrument(skip_all, fields( req_id = %req.request_id, + class = %req.request_class, rel = %req.rel, blkno = %req.block_numbers[0], blks = %req.block_numbers.len(), @@ -141,7 +169,11 @@ impl PageserverClient { let resp = self .retry .with(async || { - let stream = self.shards.get(shard_id)?.stream().await; + let stream = self + .shards + .get(shard_id)? + .stream(req.request_class.is_bulk()) + .await; let resp = stream.send(req.clone()).await?; // Convert per-request errors into a tonic::Status. @@ -270,17 +302,22 @@ impl Shards { } } -/// A single shard. +/// A single shard. Uses dedicated resource pools with the following structure: /// -/// TODO: consider separate pools for normal and bulk traffic, with different settings. +/// * Channel pool: unbounded. +/// * Unary client pool: MAX_UNARY_CLIENTS. +/// * Stream client pool: unbounded. +/// * Stream pool: MAX_STREAMS and MAX_STREAM_QUEUE_DEPTH. +/// * Bulk channel pool: unbounded. +/// * Bulk client pool: unbounded. +/// * Bulk stream pool: MAX_BULK_STREAMS and MAX_BULK_STREAM_QUEUE_DEPTH. struct Shard { - /// Dedicated channel pool for this shard. Shared by all clients/streams in this shard. - _channel_pool: Arc, - /// Unary gRPC client pool for this shard. Uses the shared channel pool. + /// Unary gRPC client pool. client_pool: Arc, - /// GetPage stream pool for this shard. Uses a dedicated client pool, but shares the channel - /// pool with unary clients. + /// GetPage stream pool. stream_pool: Arc, + /// GetPage stream pool for bulk requests, e.g. prefetches. 
+ bulk_stream_pool: Arc, } impl Shard { @@ -297,34 +334,53 @@ impl Shard { return Err(anyhow!("invalid shard URL {url}: must use gRPC")); } - // Use a common channel pool for all clients, to multiplex unary and stream requests across - // the same TCP connections. The channel pool is unbounded (but client pools are bounded). - let channel_pool = ChannelPool::new(url)?; + // Common channel pool for unary and stream requests. Bounded by client/stream pools. + let channel_pool = ChannelPool::new(url.clone(), MAX_CLIENTS_PER_CHANNEL)?; - // Dedicated client pool for unary requests. + // Client pool for unary requests. let client_pool = ClientPool::new( channel_pool.clone(), tenant_id, timeline_id, shard_id, auth_token.clone(), + Some(MAX_UNARY_CLIENTS), ); - // Stream pool with dedicated client pool. If this shared a client pool with unary requests, - // long-lived streams could fill up the client pool and starve out unary requests. It shares - // the same underlying channel pool with unary clients though, which is unbounded. - let stream_pool = StreamPool::new(ClientPool::new( - channel_pool.clone(), - tenant_id, - timeline_id, - shard_id, - auth_token, - )); + // GetPage stream pool. Uses a dedicated client pool to avoid starving out unary clients, + // but shares a channel pool with it (as it's unbounded). + let stream_pool = StreamPool::new( + ClientPool::new( + channel_pool.clone(), + tenant_id, + timeline_id, + shard_id, + auth_token.clone(), + None, // unbounded, limited by stream pool + ), + Some(MAX_STREAMS), + MAX_STREAM_QUEUE_DEPTH, + ); + + // Bulk GetPage stream pool, e.g. for prefetches. Uses dedicated channel/client/stream pools + // to avoid head-of-line blocking of latency-sensitive requests. 
+ let bulk_stream_pool = StreamPool::new( + ClientPool::new( + ChannelPool::new(url, MAX_CLIENTS_PER_CHANNEL)?, + tenant_id, + timeline_id, + shard_id, + auth_token, + None, // unbounded, limited by stream pool + ), + Some(MAX_BULK_STREAMS), + MAX_BULK_STREAM_QUEUE_DEPTH, + ); Ok(Self { - _channel_pool: channel_pool, client_pool, stream_pool, + bulk_stream_pool, }) } @@ -336,8 +392,12 @@ impl Shard { .map_err(|err| tonic::Status::internal(format!("failed to get client: {err}"))) } - /// Returns a pooled stream for this shard. - async fn stream(&self) -> StreamGuard { - self.stream_pool.get().await + /// Returns a pooled stream for this shard. If `bulk` is `true`, uses the dedicated bulk stream + /// pool (e.g. for prefetches). + async fn stream(&self, bulk: bool) -> StreamGuard { + match bulk { + false => self.stream_pool.get().await, + true => self.bulk_stream_pool.get().await, + } } } diff --git a/pageserver/client_grpc/src/pool.rs b/pageserver/client_grpc/src/pool.rs index 518e4e5b84..5a50004fd1 100644 --- a/pageserver/client_grpc/src/pool.rs +++ b/pageserver/client_grpc/src/pool.rs @@ -30,6 +30,7 @@ //! TODO: observability. use std::collections::{BTreeMap, HashMap}; +use std::num::NonZero; use std::ops::{Deref, DerefMut}; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::{Arc, Mutex, Weak}; @@ -44,22 +45,10 @@ use pageserver_page_api as page_api; use utils::id::{TenantId, TimelineId}; use utils::shard::ShardIndex; -/// Max number of concurrent clients per channel. -/// -/// TODO: tune these constants, and make them configurable. -/// TODO: consider separate limits for unary and streaming clients, so we don't fill up channels -/// with only streams. -const CLIENTS_PER_CHANNEL: usize = 16; - -/// Maximum number of concurrent clients per `ClientPool`. -const CLIENT_LIMIT: usize = 64; - -/// Max number of pipelined requests per gRPC GetPage stream. -const STREAM_QUEUE_DEPTH: usize = 2; - /// A gRPC channel pool, for a single Pageserver. 
A channel is shared by many clients (via HTTP/2 -/// stream multiplexing), up to `CLIENTS_PER_CHANNEL`. The pool does not limit the number of -/// channels, and instead relies on `ClientPool` to limit the number of concurrent clients. +/// stream multiplexing), up to `clients_per_channel` -- a new channel will be spun up beyond this. +/// The pool does not limit the number of channels, and instead relies on `ClientPool` or +/// `StreamPool` to limit the number of concurrent clients. /// /// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads. /// @@ -69,6 +58,8 @@ const STREAM_QUEUE_DEPTH: usize = 2; pub struct ChannelPool { /// Pageserver endpoint to connect to. endpoint: Endpoint, + /// Max number of clients per channel. Beyond this, a new channel will be created. + max_clients_per_channel: NonZero, /// Open channels. channels: Mutex>, /// Channel ID generator. @@ -86,13 +77,14 @@ struct ChannelEntry { impl ChannelPool { /// Creates a new channel pool for the given Pageserver endpoint. - pub fn new(endpoint: E) -> anyhow::Result> + pub fn new(endpoint: E, max_clients_per_channel: NonZero) -> anyhow::Result> where E: TryInto + Send + Sync + 'static, >::Error: std::error::Error + Send + Sync, { Ok(Arc::new(Self { endpoint: endpoint.try_into()?, + max_clients_per_channel, channels: Mutex::default(), next_channel_id: AtomicUsize::default(), })) @@ -120,8 +112,11 @@ impl ChannelPool { // with lower-ordered channel IDs first. This will cluster clients in lower-ordered // channels, and free up higher-ordered channels such that they can be reaped. 
for (&id, entry) in channels.iter_mut() { - assert!(entry.clients <= CLIENTS_PER_CHANNEL, "channel overflow"); - if entry.clients < CLIENTS_PER_CHANNEL { + assert!( + entry.clients <= self.max_clients_per_channel.get(), + "channel overflow" + ); + if entry.clients < self.max_clients_per_channel.get() { entry.clients += 1; return ChannelGuard { pool: Arc::downgrade(self), @@ -181,7 +176,7 @@ impl Drop for ChannelGuard { /// A pool of gRPC clients for a single tenant shard. Each client acquires a channel from the inner /// `ChannelPool`. A client is only given out to single caller at a time. The pool limits the total -/// number of concurrent clients to `CLIENT_LIMIT` via semaphore. +/// number of concurrent clients to `max_clients` via semaphore. /// /// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads. /// @@ -197,8 +192,8 @@ pub struct ClientPool { auth_token: Option, /// Channel pool to acquire channels from. channel_pool: Arc, - /// Limits the max number of concurrent clients for this pool. - limiter: Arc, + /// Limits the max number of concurrent clients for this pool. None if the pool is unbounded. + limiter: Option>, /// Idle pooled clients. Acquired clients are removed from here and returned on drop. /// /// The first client in the map will be acquired next. The map is sorted by client ID, which in @@ -221,13 +216,15 @@ struct ClientEntry { impl ClientPool { /// Creates a new client pool for the given tenant shard. Channels are acquired from the given - /// `ChannelPool`, which must point to a Pageserver that hosts the tenant shard. + /// `ChannelPool`, which must point to a Pageserver that hosts the tenant shard. Allows up to + /// `max_clients` concurrent clients, or unbounded if None. 
pub fn new( channel_pool: Arc, tenant_id: TenantId, timeline_id: TimelineId, shard_id: ShardIndex, auth_token: Option, + max_clients: Option>, ) -> Arc { Arc::new(Self { tenant_id, @@ -236,25 +233,24 @@ impl ClientPool { auth_token, channel_pool, idle: Mutex::default(), - limiter: Arc::new(Semaphore::new(CLIENT_LIMIT)), + limiter: max_clients.map(|max| Arc::new(Semaphore::new(max.get()))), next_client_id: AtomicUsize::default(), }) } /// Gets a client from the pool, or creates a new one if necessary. Connections are established - /// lazily and do not block, but this call can block if the pool is at `CLIENT_LIMIT`. The - /// client is returned to the pool when the guard is dropped. + /// lazily and do not block, but this call can block if the pool is at `max_clients`. The client + /// is returned to the pool when the guard is dropped. /// /// This is moderately performance-sensitive. It is called for every unary request, but these /// establish a new gRPC stream per request so they're already expensive. GetPage requests use /// the `StreamPool` instead. pub async fn get(self: &Arc) -> anyhow::Result { - let permit = self - .limiter - .clone() - .acquire_owned() - .await - .expect("never closed"); + // Acquire a permit if the pool is bounded. + let mut permit = None; + if let Some(limiter) = self.limiter.clone() { + permit = Some(limiter.acquire_owned().await.expect("never closed")); + } // Fast path: acquire an idle client from the pool. if let Some((id, entry)) = self.idle.lock().unwrap().pop_first() { @@ -296,9 +292,9 @@ impl ClientPool { pub struct ClientGuard { pool: Weak, id: ClientID, - client: Option, // Some until dropped - channel_guard: Option, // Some until dropped - permit: OwnedSemaphorePermit, + client: Option, // Some until dropped + channel_guard: Option, // Some until dropped + permit: Option, // None if pool is unbounded } impl Deref for ClientGuard { @@ -341,16 +337,21 @@ impl Drop for ClientGuard { /// TODO: reap idle streams. 
/// TODO: consider making this generic over request and response types; not currently needed. pub struct StreamPool { - /// The client pool to acquire clients from. + /// The client pool to acquire clients from. Must be unbounded. client_pool: Arc, /// All pooled streams. /// /// Incoming requests will be sent over an existing stream with available capacity. If all - /// streams are full, a new one is spun up and added to the pool (up to the `ClientPool` limit). - /// Each stream has an associated Tokio task that processes requests and responses. + /// streams are full, a new one is spun up and added to the pool (up to `max_streams`). Each + /// stream has an associated Tokio task that processes requests and responses. streams: Arc>>, - /// Limits the max number of concurrent requests (not streams). - limiter: Arc, + /// The max number of concurrent streams, or None if unbounded. + max_streams: Option>, + /// The max number of concurrent requests per stream. + max_queue_depth: NonZero, + /// Limits the max number of concurrent requests, given by `max_streams * max_queue_depth`. + /// None if the pool is unbounded. + limiter: Option>, /// Stream ID generator. next_stream_id: AtomicUsize, } @@ -369,16 +370,27 @@ struct StreamEntry { } impl StreamPool { - /// Creates a new stream pool, using the given client pool. + /// Creates a new stream pool, using the given client pool. It will send up to `max_queue_depth` + /// concurrent requests on each stream, and use up to `max_streams` concurrent streams. /// - /// NB: the stream pool should use a dedicated client pool. Otherwise, long-lived streams may - /// fill up the client pool and starve out unary requests. Client pools can share the same - /// `ChannelPool` though, since the channel pool is unbounded. - pub fn new(client_pool: Arc) -> Arc { + /// The client pool must be unbounded. 
The stream pool will enforce its own limits, and because + /// streams are long-lived they can cause persistent starvation if they exhaust the client pool. + /// The stream pool should generally have its own dedicated client pool (but it can share a + /// channel pool with others since these are always unbounded). + pub fn new( + client_pool: Arc, + max_streams: Option>, + max_queue_depth: NonZero, + ) -> Arc { + assert!(client_pool.limiter.is_none(), "bounded client pool"); Arc::new(Self { client_pool, streams: Arc::default(), - limiter: Arc::new(Semaphore::new(CLIENT_LIMIT * STREAM_QUEUE_DEPTH)), + limiter: max_streams.map(|max_streams| { + Arc::new(Semaphore::new(max_streams.get() * max_queue_depth.get())) + }), + max_streams, + max_queue_depth, next_stream_id: AtomicUsize::default(), }) } @@ -402,18 +414,17 @@ impl StreamPool { /// /// For now, we just do something simple and functional, but very inefficient (linear scan). pub async fn get(&self) -> StreamGuard { - let permit = self - .limiter - .clone() - .acquire_owned() - .await - .expect("never closed"); + // Acquire a permit if the pool is bounded. + let mut permit = None; + if let Some(limiter) = self.limiter.clone() { + permit = Some(limiter.acquire_owned().await.expect("never closed")); + } let mut streams = self.streams.lock().unwrap(); // Look for a pooled stream with available capacity. for entry in streams.values() { assert!( - entry.queue_depth.load(Ordering::Relaxed) <= STREAM_QUEUE_DEPTH, + entry.queue_depth.load(Ordering::Relaxed) <= self.max_queue_depth.get(), "stream queue overflow" ); if entry @@ -421,7 +432,7 @@ impl StreamPool { .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |queue_depth| { // Increment the queue depth via compare-and-swap. // TODO: review ordering. 
- (queue_depth < STREAM_QUEUE_DEPTH).then_some(queue_depth + 1) + (queue_depth < self.max_queue_depth.get()).then_some(queue_depth + 1) }) .is_ok() { @@ -438,18 +449,16 @@ impl StreamPool { // join onto this stream and also create additional streams concurrently if this fills up. let id = self.next_stream_id.fetch_add(1, Ordering::Relaxed); let queue_depth = Arc::new(AtomicUsize::new(1)); // reserve quota for this caller - let (req_tx, req_rx) = mpsc::channel(STREAM_QUEUE_DEPTH); + let (req_tx, req_rx) = mpsc::channel(self.max_queue_depth.get()); let entry = StreamEntry { sender: req_tx.clone(), queue_depth: queue_depth.clone(), }; streams.insert(id, entry); - // NB: make sure we don't overshoot the client limit. The semaphore limit is CLIENT_LIMIT * - // STREAM_QUEUE_DEPTH, but if we were to misaccount queue depth we'd try to spin up more - // streams than CLIENT_LIMIT and block on the client pool ~forever. This should not happen - // because we only acquire queue depth under lock and after acquiring a semaphore permit. - assert!(streams.len() <= CLIENT_LIMIT, "stream overflow"); + if let Some(max_streams) = self.max_streams { + assert!(streams.len() <= max_streams.get(), "stream overflow"); + }; let client_pool = self.client_pool.clone(); let streams = self.streams.clone(); @@ -484,19 +493,22 @@ impl StreamPool { // Acquire a client from the pool and create a stream. let mut client = client_pool.get().await?; - let (req_tx, req_rx) = mpsc::channel(STREAM_QUEUE_DEPTH); - let req_stream = tokio_stream::wrappers::ReceiverStream::new(req_rx); + // NB: use an unbounded channel such that the stream send never blocks. Otherwise, we could + // theoretically deadlock if both the client and server block on sends (since we're not + // reading responses while sending). This is unlikely to happen due to gRPC/TCP buffers and + // low queue depths, but it was seen to happen with the libpq protocol so better safe than + // sorry. 
It should never buffer more than the queue depth anyway, but using an unbounded + // channel guarantees that it will never block. + let (req_tx, req_rx) = mpsc::unbounded_channel(); + let req_stream = tokio_stream::wrappers::UnboundedReceiverStream::new(req_rx); let mut resp_stream = client.get_pages(req_stream).await?; // Track caller response channels by request ID. If the task returns early, these response // channels will be dropped and the waiting callers will receive an error. - let mut callers = HashMap::with_capacity(STREAM_QUEUE_DEPTH); + let mut callers = HashMap::new(); // Process requests and responses. loop { - // NB: this can trip if the server doesn't respond to a request, so only debug_assert. - debug_assert!(callers.len() <= STREAM_QUEUE_DEPTH, "stream queue overflow"); - tokio::select! { // Receive requests from callers and send them to the stream. req = caller_rx.recv() => { @@ -515,8 +527,8 @@ impl StreamPool { } callers.insert(req.request_id, resp_tx); - // Send the request on the stream. Bail out if the send fails. - req_tx.send(req).await.map_err(|_| { + // Send the request on the stream. Bail out if the stream is closed. + req_tx.send(req).map_err(|_| { tonic::Status::unavailable("stream closed") })?; } @@ -545,7 +557,7 @@ impl StreamPool { pub struct StreamGuard { sender: RequestSender, queue_depth: Arc, - permit: OwnedSemaphorePermit, + permit: Option, // None if pool is unbounded } impl StreamGuard { diff --git a/pageserver/page_api/src/model.rs b/pageserver/page_api/src/model.rs index c5b6f06879..d0d3517d41 100644 --- a/pageserver/page_api/src/model.rs +++ b/pageserver/page_api/src/model.rs @@ -384,7 +384,7 @@ impl From for proto::GetPageRequest { pub type RequestID = u64; /// A GetPage request class. -#[derive(Clone, Copy, Debug)] +#[derive(Clone, Copy, Debug, strum_macros::Display)] pub enum GetPageClass { /// Unknown class. 
For backwards compatibility: used when an older client version sends a class /// that a newer server version has removed. @@ -397,6 +397,19 @@ pub enum GetPageClass { Background, } +impl GetPageClass { + /// Returns true if this is considered a bulk request (i.e. more throughput-oriented rather than + /// latency-sensitive). + pub fn is_bulk(&self) -> bool { + match self { + Self::Unknown => false, + Self::Normal => false, + Self::Prefetch => true, + Self::Background => true, + } + } +} + impl From for GetPageClass { fn from(pb: proto::GetPageClass) -> Self { match pb { From 12c26243fc5e8a4522a8f848923f6e3eb7e0ad7c Mon Sep 17 00:00:00 2001 From: Tristan Partin Date: Wed, 9 Jul 2025 11:47:21 -0500 Subject: [PATCH 40/65] Fix typo in migration testing related to pg_monitor (#12530) We should be joining on the neon_superuser roleid, not the pg_monitor roleid. Signed-off-by: Tristan Partin --- .../tests/0004-grant_pg_monitor_to_neon_superuser.sql | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql b/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql index acb8dd417d..deb7a364af 100644 --- a/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql +++ b/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql @@ -6,7 +6,7 @@ BEGIN admin_option AS admin INTO monitor FROM pg_auth_members - WHERE roleid = 'pg_monitor'::regrole + WHERE roleid = 'neon_superuser'::regrole AND member = 'pg_monitor'::regrole; IF NOT monitor.member THEN From 4bbabc092a7b675eb5b624c3d647e216e98dbe2d Mon Sep 17 00:00:00 2001 From: Dmitrii Kovalkov <34828390+DimasKovas@users.noreply.github.com> Date: Wed, 9 Jul 2025 21:16:06 +0400 Subject: [PATCH 41/65] tests: wait for flush lsn in test_branch_creation_before_gc (#12527) ## Problem Test `test_branch_creation_before_gc` is flaky in the internal repo. 
The pageserver sometimes lags behind the write LSN. When we call GC, it might not yet have reached the LSN at which we try to create the branch. ## Summary of changes - Wait until the flush LSN on the pageserver reaches the latest LSN before calling GC. --- test_runner/regress/test_branch_and_gc.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/test_runner/regress/test_branch_and_gc.py b/test_runner/regress/test_branch_and_gc.py index 8447c9bf2d..148f469a95 100644 --- a/test_runner/regress/test_branch_and_gc.py +++ b/test_runner/regress/test_branch_and_gc.py @@ -7,6 +7,7 @@ from typing import TYPE_CHECKING import pytest from fixtures.common_types import Lsn, TimelineId from fixtures.log_helper import log +from fixtures.neon_fixtures import wait_for_last_flush_lsn from fixtures.pageserver.http import TimelineCreate406 from fixtures.utils import query_scalar, skip_in_debug_build @@ -162,6 +163,9 @@ def test_branch_creation_before_gc(neon_simple_env: NeonEnv): ) lsn = Lsn(res[2][0][0]) + # Wait for all WAL to reach the pageserver, so GC cutoff LSN is greater than `lsn`. + wait_for_last_flush_lsn(env, endpoint0, tenant, b0) + # Use `failpoint=sleep` and `threading` to make the GC iteration triggers *before* the # branch creation task but the individual timeline GC iteration happens *after* # the branch creation task. From fe0ddb7169514b3c755c01a519d1067b696d2925 Mon Sep 17 00:00:00 2001 From: Vlad Lazar Date: Wed, 9 Jul 2025 18:41:34 +0100 Subject: [PATCH 42/65] libs: make remote storage failure injection probabilistic (#12526) Change the unreliable storage wrapper to fail probabilistically while failure attempts remain.
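The gating logic introduced by this patch can be sketched as follows. This is a simplified, deterministic model (the random roll is passed in as a parameter rather than drawn from `rand`, and the function name is illustrative, not part of the actual code):

```rust
/// Decide whether to inject a failure for an operation that has already
/// failed `attempts_before` times. `attempts_to_fail` and
/// `failure_probability` mirror the `test_remote_failures` and
/// `test_remote_failures_probability` settings; `roll` stands in for a
/// value drawn uniformly from 0..=100.
fn should_inject_failure(
    attempts_before: u64,
    attempts_to_fail: u64,
    roll: u64,
    failure_probability: u64,
) -> bool {
    // Clamp the probability to [0, 100], as the wrapper's constructor does.
    let p = failure_probability.min(100);
    // Fail only while failure attempts remain, and then only when the
    // roll falls below the configured probability.
    attempts_before < attempts_to_fail && roll < p
}

fn main() {
    // With probability 100, early attempts fail.
    assert!(should_inject_failure(0, 1, 99, 100));
    // Once the failure budget is exhausted, the operation succeeds.
    assert!(!should_inject_failure(1, 1, 0, 100));
    // With probability 0, nothing fails even while attempts remain.
    assert!(!should_inject_failure(0, 1, 0, 0));
}
```

With `failure_probability` at its default of 100, this degenerates to the previous behavior of always failing the first `attempts_to_fail` attempts.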
Co-authored-by: Yecheng Yang --- libs/pageserver_api/src/config.rs | 2 + libs/remote_storage/Cargo.toml | 1 + libs/remote_storage/src/lib.rs | 10 +++- libs/remote_storage/src/simulate_failures.rs | 31 ++++++++-- libs/utils/src/env.rs | 59 ++++++++++++++++++++ pageserver/src/bin/pageserver.rs | 7 ++- pageserver/src/config.rs | 6 ++ proxy/src/context/parquet.rs | 2 +- 8 files changed, 107 insertions(+), 11 deletions(-) diff --git a/libs/pageserver_api/src/config.rs b/libs/pageserver_api/src/config.rs index dc7e9aed7f..22815955c1 100644 --- a/libs/pageserver_api/src/config.rs +++ b/libs/pageserver_api/src/config.rs @@ -226,6 +226,7 @@ pub struct ConfigToml { pub synthetic_size_calculation_interval: Duration, pub disk_usage_based_eviction: DiskUsageEvictionTaskConfig, pub test_remote_failures: u64, + pub test_remote_failures_probability: u64, pub ondemand_download_behavior_treat_error_as_warn: bool, #[serde(with = "humantime_serde")] pub background_task_maximum_delay: Duration, @@ -758,6 +759,7 @@ impl Default for ConfigToml { disk_usage_based_eviction: DiskUsageEvictionTaskConfig::default(), test_remote_failures: (0), + test_remote_failures_probability: (100), ondemand_download_behavior_treat_error_as_warn: (false), diff --git a/libs/remote_storage/Cargo.toml b/libs/remote_storage/Cargo.toml index 69316fd493..0ae13552b8 100644 --- a/libs/remote_storage/Cargo.toml +++ b/libs/remote_storage/Cargo.toml @@ -43,6 +43,7 @@ itertools.workspace = true sync_wrapper = { workspace = true, features = ["futures"] } byteorder = "1.4" +rand = "0.8.5" [dev-dependencies] camino-tempfile.workspace = true diff --git a/libs/remote_storage/src/lib.rs b/libs/remote_storage/src/lib.rs index ed416b2811..5885c3e791 100644 --- a/libs/remote_storage/src/lib.rs +++ b/libs/remote_storage/src/lib.rs @@ -732,9 +732,15 @@ impl GenericRemoteStorage { }) } - pub fn unreliable_wrapper(s: Self, fail_first: u64) -> Self { - Self::Unreliable(Arc::new(UnreliableWrapper::new(s, fail_first))) + /* 
BEGIN_HADRON */ + pub fn unreliable_wrapper(s: Self, fail_first: u64, fail_probability: u64) -> Self { + Self::Unreliable(Arc::new(UnreliableWrapper::new( + s, + fail_first, + fail_probability, + ))) } + /* END_HADRON */ /// See [`RemoteStorage::upload`], which this method calls with `None` as metadata. pub async fn upload_storage_object( diff --git a/libs/remote_storage/src/simulate_failures.rs b/libs/remote_storage/src/simulate_failures.rs index f9856a5856..30d116f57c 100644 --- a/libs/remote_storage/src/simulate_failures.rs +++ b/libs/remote_storage/src/simulate_failures.rs @@ -1,6 +1,8 @@ //! This module provides a wrapper around a real RemoteStorage implementation that //! causes the first N attempts at each upload or download operatio to fail. For //! testing purposes. +use rand::Rng; +use std::cmp; use std::collections::HashMap; use std::collections::hash_map::Entry; use std::num::NonZeroU32; @@ -25,6 +27,12 @@ pub struct UnreliableWrapper { // Tracks how many failed attempts of each operation has been made. attempts: Mutex>, + + /* BEGIN_HADRON */ + // This the probability of failure for each operation, ranged from [0, 100]. + // The probability is default to 100, which means that all operations will fail. + attempt_failure_probability: u64, + /* END_HADRON */ } /// Used to identify retries of different unique operation. 
@@ -40,7 +48,11 @@ enum RemoteOp { } impl UnreliableWrapper { - pub fn new(inner: crate::GenericRemoteStorage, attempts_to_fail: u64) -> Self { + pub fn new( + inner: crate::GenericRemoteStorage, + attempts_to_fail: u64, + attempt_failure_probability: u64, + ) -> Self { assert!(attempts_to_fail > 0); let inner = match inner { GenericRemoteStorage::AwsS3(s) => GenericRemoteStorage::AwsS3(s), @@ -51,9 +63,11 @@ impl UnreliableWrapper { panic!("Can't wrap unreliable wrapper unreliably") } }; + let actual_attempt_failure_probability = cmp::min(attempt_failure_probability, 100); UnreliableWrapper { inner, attempts_to_fail, + attempt_failure_probability: actual_attempt_failure_probability, attempts: Mutex::new(HashMap::new()), } } @@ -66,6 +80,7 @@ impl UnreliableWrapper { /// fn attempt(&self, op: RemoteOp) -> anyhow::Result { let mut attempts = self.attempts.lock().unwrap(); + let mut rng = rand::thread_rng(); match attempts.entry(op) { Entry::Occupied(mut e) => { @@ -75,15 +90,19 @@ impl UnreliableWrapper { *p }; - if attempts_before_this >= self.attempts_to_fail { - // let it succeed - e.remove(); - Ok(attempts_before_this) - } else { + /* BEGIN_HADRON */ + // While failure attempts remain, fail the request with the configured probability.
+ if (attempts_before_this < self.attempts_to_fail) + && (rng.gen_range(0..=100) < self.attempt_failure_probability) + { let error = anyhow::anyhow!("simulated failure of remote operation {:?}", e.key()); Err(error) + } else { + e.remove(); + Ok(attempts_before_this) } + /* END_HADRON */ } Entry::Vacant(e) => { let error = anyhow::anyhow!("simulated failure of remote operation {:?}", e.key()); diff --git a/libs/utils/src/env.rs b/libs/utils/src/env.rs index 2a85f54a01..cc1cbf8009 100644 --- a/libs/utils/src/env.rs +++ b/libs/utils/src/env.rs @@ -44,3 +44,62 @@ where } } } + +/* BEGIN_HADRON */ +pub enum DeploymentMode { + Dev, + Staging, + Prod, +} + +pub fn get_deployment_mode() -> Option { + match std::env::var("DEPLOYMENT_MODE") { + Ok(env) => match env.as_str() { + "development" => Some(DeploymentMode::Dev), + "staging" => Some(DeploymentMode::Staging), + "production" => Some(DeploymentMode::Prod), + _ => { + tracing::error!("Unexpected DEPLOYMENT_MODE: {}", env); + None + } + }, + Err(_) => { + tracing::error!("DEPLOYMENT_MODE not set"); + None + } + } +} + +pub fn is_dev_or_staging() -> bool { + matches!( + get_deployment_mode(), + Some(DeploymentMode::Dev) | Some(DeploymentMode::Staging) + ) +} + +pub enum TestingMode { + Chaos, + Stress, +} + +pub fn get_test_mode() -> Option { + match std::env::var("HADRON_TEST_MODE") { + Ok(env) => match env.as_str() { + "chaos" => Some(TestingMode::Chaos), + "stress" => Some(TestingMode::Stress), + _ => { + tracing::error!("Unexpected HADRON_TEST_MODE: {}", env); + None + } + }, + Err(_) => { + tracing::error!("HADRON_TEST_MODE not set"); + None + } + } +} + +pub fn is_chaos_testing() -> bool { + matches!(get_test_mode(), Some(TestingMode::Chaos)) +} +/* END_HADRON */ diff --git a/pageserver/src/bin/pageserver.rs b/pageserver/src/bin/pageserver.rs index 327384fd82..78aba25d2e 100644 --- a/pageserver/src/bin/pageserver.rs +++ b/pageserver/src/bin/pageserver.rs @@ -889,8 +889,11 @@ async fn create_remote_storage_client( 
"Simulating remote failures for first {} attempts of each op", conf.test_remote_failures ); - remote_storage = - GenericRemoteStorage::unreliable_wrapper(remote_storage, conf.test_remote_failures); + remote_storage = GenericRemoteStorage::unreliable_wrapper( + remote_storage, + conf.test_remote_failures, + conf.test_remote_failures_probability, + ); } Ok(remote_storage) diff --git a/pageserver/src/config.rs b/pageserver/src/config.rs index 99d7e0ca3a..15ec31b0a6 100644 --- a/pageserver/src/config.rs +++ b/pageserver/src/config.rs @@ -147,7 +147,11 @@ pub struct PageServerConf { pub disk_usage_based_eviction: DiskUsageEvictionTaskConfig, + // The number of allowed failures in remote storage operations. pub test_remote_failures: u64, + // The probability of failure in remote storage operations. Only works when test_remote_failures > 1. + // Use 100 for 100% failure, 0 for no failure. + pub test_remote_failures_probability: u64, pub ondemand_download_behavior_treat_error_as_warn: bool, @@ -392,6 +396,7 @@ impl PageServerConf { synthetic_size_calculation_interval, disk_usage_based_eviction, test_remote_failures, + test_remote_failures_probability, ondemand_download_behavior_treat_error_as_warn, background_task_maximum_delay, control_plane_api, @@ -461,6 +466,7 @@ impl PageServerConf { synthetic_size_calculation_interval, disk_usage_based_eviction, test_remote_failures, + test_remote_failures_probability, ondemand_download_behavior_treat_error_as_warn, background_task_maximum_delay, control_plane_api: control_plane_api diff --git a/proxy/src/context/parquet.rs b/proxy/src/context/parquet.rs index b55cc14532..4d8df19476 100644 --- a/proxy/src/context/parquet.rs +++ b/proxy/src/context/parquet.rs @@ -267,7 +267,7 @@ async fn worker_inner( ) -> anyhow::Result<()> { #[cfg(any(test, feature = "testing"))] let storage = if config.test_remote_failures > 0 { - GenericRemoteStorage::unreliable_wrapper(storage, config.test_remote_failures) + 
GenericRemoteStorage::unreliable_wrapper(storage, config.test_remote_failures, 100) } else { storage }; From 28f604d628bfefa26b3016421756f91c8b6b2817 Mon Sep 17 00:00:00 2001 From: Tristan Partin Date: Wed, 9 Jul 2025 13:45:50 -0500 Subject: [PATCH 43/65] Make pg_monitor neon_superuser test more robust (#12532) Make sure to check for NULL just in case. Signed-off-by: Tristan Partin Co-authored-by: Vikas Jain --- .../0004-grant_pg_monitor_to_neon_superuser.sql | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql b/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql index deb7a364af..3464a2b1cf 100644 --- a/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql +++ b/compute_tools/src/migrations/tests/0004-grant_pg_monitor_to_neon_superuser.sql @@ -6,14 +6,18 @@ BEGIN admin_option AS admin INTO monitor FROM pg_auth_members - WHERE roleid = 'neon_superuser'::regrole - AND member = 'pg_monitor'::regrole; + WHERE roleid = 'pg_monitor'::regrole + AND member = 'neon_superuser'::regrole; - IF NOT monitor.member THEN + IF monitor IS NULL THEN + RAISE EXCEPTION 'no entry in pg_auth_members for neon_superuser and pg_monitor'; + END IF; + + IF monitor.admin IS NULL OR NOT monitor.member THEN RAISE EXCEPTION 'neon_superuser is not a member of pg_monitor'; END IF; - IF NOT monitor.admin THEN + IF monitor.admin IS NULL OR NOT monitor.admin THEN RAISE EXCEPTION 'neon_superuser cannot grant pg_monitor'; END IF; END $$; From 0b639ba608d65b90df468346e72777e8953a4f42 Mon Sep 17 00:00:00 2001 From: "Alex Chi Z." <4198311+skyzh@users.noreply.github.com> Date: Wed, 9 Jul 2025 16:22:55 -0400 Subject: [PATCH 44/65] fix(storcon): correctly pass through lease error code (#12519) ## Problem close LKB-199 ## Summary of changes We always return the error as 500 to the cplane if a LSN lease request fails. 
This causes issues for the cplane, as it doesn't retry on 500. This patch correctly passes the error through and assigns the error code so that the cplane can know whether it is a retryable error. (TODO: look at the cplane code and learn the retry logic.) Note that this patch does not resolve LKB-253 -- we need to handle the not-found error separately in the LSN lease path, e.g. wait until the tenant gets attached, or return 503 so that the cplane can retry. --------- Signed-off-by: Alex Chi Z --- storage_controller/src/service.rs | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/storage_controller/src/service.rs b/storage_controller/src/service.rs index ed6643d641..d2f7287be9 100644 --- a/storage_controller/src/service.rs +++ b/storage_controller/src/service.rs @@ -4428,7 +4428,7 @@ impl Service { .await; let mut failed = 0; - for (tid, result) in targeted_tenant_shards.iter().zip(results.into_iter()) { + for (tid, (_, result)) in targeted_tenant_shards.iter().zip(results.into_iter()) { match result { Ok(ok) => { if tid.is_shard_zero() { @@ -4795,7 +4795,7 @@ impl Service { .await; let mut valid_until = None; - for r in res { + for (node, r) in res { match r { Ok(lease) => { if let Some(ref mut valid_until) = valid_until { @@ -4805,7 +4805,7 @@ impl Service { } } Err(e) => { - return Err(ApiError::InternalServerError(anyhow::anyhow!(e))); + return Err(passthrough_api_error(&node, e)); } } } @@ -4919,7 +4919,7 @@ impl Service { max_retries: u32, timeout: Duration, cancel: &CancellationToken, - ) -> Vec> + ) -> Vec<(Node, mgmt_api::Result)> where O: Fn(TenantShardId, PageserverClient) -> F + Copy, F: std::future::Future>, @@ -4940,16 +4940,16 @@ impl Service { cancel, ) .await; - (idx, r) + (idx, node, r) }); } - while let Some((idx, r)) = futs.next().await { - results.push((idx, r.unwrap_or(Err(mgmt_api::Error::Cancelled)))); + while let Some((idx, node, r)) = futs.next().await { + results.push((idx, node, 
r.unwrap_or(Err(mgmt_api::Error::Cancelled)))); } - results.sort_by_key(|(idx, _)| *idx); - results.into_iter().map(|(_, r)| r).collect() + results.sort_by_key(|(idx, _, _)| *idx); + results.into_iter().map(|(_, node, r)| (node, r)).collect() } /// Helper for safely working with the shards in a tenant remotely on pageservers, for example @@ -5862,7 +5862,7 @@ impl Service { return; } - for result in self + for (_, result) in self .tenant_for_shards_api( attached, |tenant_shard_id, client| async move { @@ -5881,7 +5881,7 @@ impl Service { } } - for result in self + for (_, result) in self .tenant_for_shards_api( secondary, |tenant_shard_id, client| async move { @@ -8768,7 +8768,7 @@ impl Service { ) .await; - for ((tenant_shard_id, node, optimization), secondary_status) in + for ((tenant_shard_id, node, optimization), (_, secondary_status)) in want_secondary_status.into_iter().zip(results.into_iter()) { match secondary_status { From 2edd59aefbcd79288735cbbed335a27880597529 Mon Sep 17 00:00:00 2001 From: Christian Schwarz Date: Wed, 9 Jul 2025 23:15:44 +0200 Subject: [PATCH 45/65] impr(compaction): unify checking of `CompactionError` for cancellation reason (#12392) There are a couple of places that call `CompactionError::is_cancel` but don't check the `::Other` variant via downcasting for root cause being cancellation. The only place that does it is `log_compaction_error`. It's sad we have to do it, but, until we get around cleaning up all the culprits, a step forward is to unify the behavior so that all places that inspect a `CompactionError` for cancellation reason follow the same behavior. Thus, this PR ... - moves the downcasting checks against the `::Other` variant from `log_compaction_error` into `is_cancel()` and - enforces via type system that `.is_cancel()` is used to check whether a CompactionError is due to cancellation (matching on the `CompactionError::ShuttingDown` will cause a compile-time error). 
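The "forbid matching" trick described above can be shown in isolation. The sketch below is illustrative only (simplified names, `String` standing in for `anyhow::Error`): the `ShuttingDown` variant carries a field whose type is private to the module, so outside code cannot construct the variant or name the field type in a pattern, steering callers toward `is_cancel()`:

```rust
// Illustrative sketch of the pattern, not the actual pageserver types.
mod compaction_error {
    /// Private marker type: code outside this module cannot name it, so it
    /// cannot build `ShuttingDown(ForbidMatching)` directly. `new_cancelled()`
    /// and `is_cancel()` become the supported entry points.
    #[derive(Debug)]
    struct ForbidMatching;

    #[derive(Debug)]
    pub enum CompactionError {
        #[allow(private_interfaces)]
        ShuttingDown(ForbidMatching),
        Other(String),
    }

    impl CompactionError {
        pub fn new_cancelled() -> Self {
            Self::ShuttingDown(ForbidMatching)
        }

        /// Errors that can be ignored, i.e., cancel and shutdown.
        pub fn is_cancel(&self) -> bool {
            matches!(self, Self::ShuttingDown(_))
        }
    }
}

use compaction_error::CompactionError;

fn main() {
    assert!(CompactionError::new_cancelled().is_cancel());
    assert!(!CompactionError::Other("disk full".to_string()).is_cancel());
    println!("ok");
}
```

In the real patch, `is_cancel()` additionally downcasts the root cause of the `Other` variant against a list of known cancellation error types, so every call site shares one behavior.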
I don't think there's a _serious_ case right now where matching instead of using `is_cancel` causes problems. The worst case I could find is the circuit breaker and `compaction_failed`, which don't really matter if we're shutting down the timeline anyway. But it's unaesthetic and might cause log/alert noise down the line, so, this PR fixes that at least. Refs - https://databricks.atlassian.net/browse/LKB-182 - slack conversation about this PR: https://databricks.slack.com/archives/C09254R641L/p1751284317955159 --- pageserver/src/http/routes.rs | 11 +- pageserver/src/tenant.rs | 17 ++- pageserver/src/tenant/tasks.rs | 46 +------- pageserver/src/tenant/timeline.rs | 115 ++++++++++++++----- pageserver/src/tenant/timeline/compaction.rs | 26 ++--- 5 files changed, 117 insertions(+), 98 deletions(-) diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index 3612686b5d..767bba49e2 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -79,8 +79,8 @@ use crate::tenant::storage_layer::{IoConcurrency, LayerAccessStatsReset, LayerNa use crate::tenant::timeline::layer_manager::LayerManagerLockHolder; use crate::tenant::timeline::offload::{OffloadError, offload_timeline}; use crate::tenant::timeline::{ - CompactFlags, CompactOptions, CompactRequest, CompactionError, MarkInvisibleRequest, Timeline, - WaitLsnTimeout, WaitLsnWaiter, import_pgdata, + CompactFlags, CompactOptions, CompactRequest, MarkInvisibleRequest, Timeline, WaitLsnTimeout, + WaitLsnWaiter, import_pgdata, }; use crate::tenant::{ GetTimelineError, LogicalSizeCalculationCause, OffloadedTimeline, PageReconstructError, @@ -2500,9 +2500,10 @@ async fn timeline_checkpoint_handler( .compact(&cancel, flags, &ctx) .await .map_err(|e| - match e { - CompactionError::ShuttingDown => ApiError::ShuttingDown, - CompactionError::Other(e) => ApiError::InternalServerError(e), + if e.is_cancel() { + ApiError::ShuttingDown + } else { + ApiError::InternalServerError(e.into_anyhow()) } 
)?; } diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index f576119db8..240ba36236 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -3291,7 +3291,7 @@ impl TenantShard { // Ignore this, we likely raced with unarchival. OffloadError::NotArchived => Ok(()), OffloadError::AlreadyInProgress => Ok(()), - OffloadError::Cancelled => Err(CompactionError::ShuttingDown), + OffloadError::Cancelled => Err(CompactionError::new_cancelled()), // don't break the anyhow chain OffloadError::Other(err) => Err(CompactionError::Other(err)), })?; @@ -3321,16 +3321,13 @@ impl TenantShard { /// Trips the compaction circuit breaker if appropriate. pub(crate) fn maybe_trip_compaction_breaker(&self, err: &CompactionError) { - match err { - err if err.is_cancel() => {} - CompactionError::ShuttingDown => (), - CompactionError::Other(err) => { - self.compaction_circuit_breaker - .lock() - .unwrap() - .fail(&CIRCUIT_BREAKERS_BROKEN, err); - } + if err.is_cancel() { + return; } + self.compaction_circuit_breaker + .lock() + .unwrap() + .fail(&CIRCUIT_BREAKERS_BROKEN, err); } /// Cancel scheduled compaction tasks diff --git a/pageserver/src/tenant/tasks.rs b/pageserver/src/tenant/tasks.rs index bcece5589a..08fc7d61a5 100644 --- a/pageserver/src/tenant/tasks.rs +++ b/pageserver/src/tenant/tasks.rs @@ -17,17 +17,14 @@ use tracing::*; use utils::backoff::exponential_backoff_duration; use utils::completion::Barrier; use utils::pausable_failpoint; -use utils::sync::gate::GateError; use crate::context::{DownloadBehavior, RequestContext}; use crate::metrics::{self, BackgroundLoopSemaphoreMetricsRecorder, TENANT_TASK_EVENTS}; use crate::task_mgr::{self, BACKGROUND_RUNTIME, TOKIO_WORKER_THREADS, TaskKind}; -use crate::tenant::blob_io::WriteBlobError; use crate::tenant::throttle::Stats; use crate::tenant::timeline::CompactionError; use crate::tenant::timeline::compaction::CompactionOutcome; use crate::tenant::{TenantShard, TenantState}; -use 
crate::virtual_file::owned_buffers_io::write::FlushTaskError; /// Semaphore limiting concurrent background tasks (across all tenants). /// @@ -310,45 +307,12 @@ pub(crate) fn log_compaction_error( task_cancelled: bool, degrade_to_warning: bool, ) { - use CompactionError::*; + let is_cancel = err.is_cancel(); - use crate::tenant::PageReconstructError; - use crate::tenant::upload_queue::NotInitialized; - - let level = match err { - e if e.is_cancel() => return, - ShuttingDown => return, - _ if task_cancelled => Level::INFO, - Other(err) => { - let root_cause = err.root_cause(); - - let upload_queue = root_cause - .downcast_ref::() - .is_some_and(|e| e.is_stopping()); - let timeline = root_cause - .downcast_ref::() - .is_some_and(|e| e.is_cancel()); - let buffered_writer_flush_task_canelled = root_cause - .downcast_ref::() - .is_some_and(|e| e.is_cancel()); - let write_blob_cancelled = root_cause - .downcast_ref::() - .is_some_and(|e| e.is_cancel()); - let gate_closed = root_cause - .downcast_ref::() - .is_some_and(|e| e.is_cancel()); - let is_stopping = upload_queue - || timeline - || buffered_writer_flush_task_canelled - || write_blob_cancelled - || gate_closed; - - if is_stopping { - Level::INFO - } else { - Level::ERROR - } - } + let level = if is_cancel || task_cancelled { + Level::INFO + } else { + Level::ERROR }; if let Some((error_count, sleep_duration)) = retry_info { diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index 6088f40669..0a026d288e 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -1002,7 +1002,7 @@ impl From for tonic::Status { impl From for CompactionError { fn from(e: CreateImageLayersError) -> Self { match e { - CreateImageLayersError::Cancelled => CompactionError::ShuttingDown, + CreateImageLayersError::Cancelled => CompactionError::new_cancelled(), CreateImageLayersError::Other(e) => { CompactionError::Other(e.context("create image layers")) } @@ -2117,12 +2117,7 @@ 
impl Timeline { match &result { Ok(_) => self.compaction_failed.store(false, AtomicOrdering::Relaxed), Err(e) if e.is_cancel() => {} - Err(CompactionError::ShuttingDown) => { - // Covered by the `Err(e) if e.is_cancel()` branch. - } - Err(CompactionError::Other(_)) => { - self.compaction_failed.store(true, AtomicOrdering::Relaxed) - } + Err(_) => self.compaction_failed.store(true, AtomicOrdering::Relaxed), }; result @@ -6057,26 +6052,88 @@ impl Drop for Timeline { } } -/// Top-level failure to compact. -#[derive(Debug, thiserror::Error)] -pub(crate) enum CompactionError { - #[error("The timeline or pageserver is shutting down")] - ShuttingDown, - #[error(transparent)] - Other(anyhow::Error), -} +pub(crate) use compaction_error::CompactionError; +/// In a private mod to enforce that [`CompactionError::is_cancel`] is used +/// instead of `match`ing on [`CompactionError::ShuttingDown`]. +mod compaction_error { + use utils::sync::gate::GateError; -impl CompactionError { - /// Errors that can be ignored, i.e., cancel and shutdown. - pub fn is_cancel(&self) -> bool { - matches!(self, Self::ShuttingDown) + use crate::{ + pgdatadir_mapping::CollectKeySpaceError, + tenant::{PageReconstructError, blob_io::WriteBlobError, upload_queue::NotInitialized}, + virtual_file::owned_buffers_io::write::FlushTaskError, + }; + + /// Top-level failure to compact. Use [`Self::is_cancel`]. + #[derive(Debug, thiserror::Error)] + pub(crate) enum CompactionError { + /// Use [`Self::is_cancel`] instead of checking for this variant. + #[error("The timeline or pageserver is shutting down")] + #[allow(private_interfaces)] + ShuttingDown(ForbidMatching), // private ForbidMatching enforces use of [`Self::is_cancel`]. 
+ #[error(transparent)] + Other(anyhow::Error), } - pub fn from_collect_keyspace(err: CollectKeySpaceError) -> Self { - if err.is_cancel() { - Self::ShuttingDown - } else { - Self::Other(err.into_anyhow()) + #[derive(Debug)] + struct ForbidMatching; + + impl CompactionError { + pub fn new_cancelled() -> Self { + Self::ShuttingDown(ForbidMatching) + } + /// Errors that can be ignored, i.e., cancel and shutdown. + pub fn is_cancel(&self) -> bool { + let other = match self { + CompactionError::ShuttingDown(_) => return true, + CompactionError::Other(other) => other, + }; + + // The write path of compaction in particular often lacks differentiated + // handling errors stemming from cancellation from other errors. + // So, if requested, we also check the ::Other variant by downcasting. + // The list below has been found empirically from flaky tests and production logs. + // The process is simple: on ::Other(), compaction will print the enclosed + // anyhow::Error in debug mode, i.e., with backtrace. That backtrace contains the + // line where the write path / compaction code does undifferentiated error handling + // from a non-anyhow type to an anyhow type. Add the type to the list of downcasts + // below, following the same is_cancel() pattern. 
+ + let root_cause = other.root_cause(); + + let upload_queue = root_cause + .downcast_ref::() + .is_some_and(|e| e.is_stopping()); + let timeline = root_cause + .downcast_ref::() + .is_some_and(|e| e.is_cancel()); + let buffered_writer_flush_task_canelled = root_cause + .downcast_ref::() + .is_some_and(|e| e.is_cancel()); + let write_blob_cancelled = root_cause + .downcast_ref::() + .is_some_and(|e| e.is_cancel()); + let gate_closed = root_cause + .downcast_ref::() + .is_some_and(|e| e.is_cancel()); + upload_queue + || timeline + || buffered_writer_flush_task_canelled + || write_blob_cancelled + || gate_closed + } + pub fn into_anyhow(self) -> anyhow::Error { + match self { + CompactionError::ShuttingDown(ForbidMatching) => anyhow::Error::new(self), + CompactionError::Other(e) => e, + } + } + pub fn from_collect_keyspace(err: CollectKeySpaceError) -> Self { + if err.is_cancel() { + Self::new_cancelled() + } else { + Self::Other(err.into_anyhow()) + } } } } @@ -6088,7 +6145,7 @@ impl From for CompactionError { CompactionError::Other(anyhow::anyhow!(value)) } super::upload_queue::NotInitialized::ShuttingDown - | super::upload_queue::NotInitialized::Stopped => CompactionError::ShuttingDown, + | super::upload_queue::NotInitialized::Stopped => CompactionError::new_cancelled(), } } } @@ -6098,7 +6155,7 @@ impl From for CompactionError { match e { super::storage_layer::layer::DownloadError::TimelineShutdown | super::storage_layer::layer::DownloadError::DownloadCancelled => { - CompactionError::ShuttingDown + CompactionError::new_cancelled() } super::storage_layer::layer::DownloadError::ContextAndConfigReallyDeniesDownloads | super::storage_layer::layer::DownloadError::DownloadRequired @@ -6117,14 +6174,14 @@ impl From for CompactionError { impl From for CompactionError { fn from(_: layer_manager::Shutdown) -> Self { - CompactionError::ShuttingDown + CompactionError::new_cancelled() } } impl From for CompactionError { fn from(e: super::storage_layer::errors::PutError) -> 
Self { if e.is_cancel() { - CompactionError::ShuttingDown + CompactionError::new_cancelled() } else { CompactionError::Other(e.into_anyhow()) } @@ -6223,7 +6280,7 @@ impl Timeline { let mut guard = tokio::select! { guard = self.layers.write(LayerManagerLockHolder::Compaction) => guard, _ = self.cancel.cancelled() => { - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } }; diff --git a/pageserver/src/tenant/timeline/compaction.rs b/pageserver/src/tenant/timeline/compaction.rs index c263df1eb2..18a0ca852d 100644 --- a/pageserver/src/tenant/timeline/compaction.rs +++ b/pageserver/src/tenant/timeline/compaction.rs @@ -572,8 +572,8 @@ impl GcCompactionQueue { } match res { Ok(res) => Ok(res), - Err(CompactionError::ShuttingDown) => Err(CompactionError::ShuttingDown), - Err(CompactionError::Other(_)) => { + Err(e) if e.is_cancel() => Err(e), + Err(_) => { // There are some cases where traditional gc might collect some layer // files causing gc-compaction cannot read the full history of the key. // This needs to be resolved in the long-term by improving the compaction @@ -1260,7 +1260,7 @@ impl Timeline { // Is the timeline being deleted? 
if self.is_stopping() { trace!("Dropping out of compaction on timeline shutdown"); - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } let target_file_size = self.get_checkpoint_distance(); @@ -1624,7 +1624,7 @@ impl Timeline { for (i, layer) in layers_to_rewrite.into_iter().enumerate() { if self.cancel.is_cancelled() { - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } info!(layer=%layer, "rewriting layer after shard split: {}/{}", i, total); @@ -1722,7 +1722,7 @@ impl Timeline { Ok(()) => {}, Err(WaitCompletionError::NotInitialized(ni)) => return Err(CompactionError::from(ni)), Err(WaitCompletionError::UploadQueueShutDownOrStopped) => { - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } }, // Don't wait if there's L0 compaction to do. We don't need to update the outcome @@ -1985,7 +1985,7 @@ impl Timeline { let mut all_keys = Vec::new(); for l in deltas_to_compact.iter() { if self.cancel.is_cancelled() { - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } let delta = l.get_as_delta(ctx).await.map_err(CompactionError::Other)?; let keys = delta @@ -2078,7 +2078,7 @@ impl Timeline { stats.read_lock_held_compute_holes_micros = stats.read_lock_held_key_sort_micros.till_now(); if self.cancel.is_cancelled() { - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } stats.read_lock_drop_micros = stats.read_lock_held_compute_holes_micros.till_now(); @@ -2186,7 +2186,7 @@ impl Timeline { // avoid hitting the cancellation token on every key. in benches, we end up // shuffling an order of million keys per layer, this means we'll check it // around tens of times per layer. 
- return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } let same_key = prev_key == Some(key); @@ -2271,7 +2271,7 @@ impl Timeline { if writer.is_none() { if self.cancel.is_cancelled() { // to be somewhat responsive to cancellation, check for each new layer - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } // Create writer if not initiaized yet writer = Some( @@ -2527,7 +2527,7 @@ impl Timeline { // Is the timeline being deleted? if self.is_stopping() { trace!("Dropping out of compaction on timeline shutdown"); - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } let (dense_ks, _sparse_ks) = self @@ -3189,7 +3189,7 @@ impl Timeline { let gc_lock = async { tokio::select! { guard = self.gc_lock.lock() => Ok(guard), - _ = cancel.cancelled() => Err(CompactionError::ShuttingDown), + _ = cancel.cancelled() => Err(CompactionError::new_cancelled()), } }; @@ -3462,7 +3462,7 @@ impl Timeline { } total_layer_size += layer.layer_desc().file_size; if cancel.is_cancelled() { - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } let should_yield = yield_for_l0 && self @@ -3609,7 +3609,7 @@ impl Timeline { } if cancel.is_cancelled() { - return Err(CompactionError::ShuttingDown); + return Err(CompactionError::new_cancelled()); } let should_yield = yield_for_l0 From 13e38a58a14c60da94486904d60a8b9e8e391503 Mon Sep 17 00:00:00 2001 From: Tristan Partin Date: Wed, 9 Jul 2025 16:35:39 -0500 Subject: [PATCH 46/65] Grant pg_signal_backend to neon_superuser (#12533) Allow neon_superuser to cancel backends from non-neon_superusers, excluding Postgres superusers. 
Signed-off-by: Tristan Partin Co-authored-by: Vikas Jain --- ...nt_pg_signal_backend_to_neon_superuser.sql | 1 + ...nt_pg_signal_backend_to_neon_superuser.sql | 23 +++++++++++++++++++ compute_tools/src/spec.rs | 1 + 3 files changed, 25 insertions(+) create mode 100644 compute_tools/src/migrations/0012-grant_pg_signal_backend_to_neon_superuser.sql create mode 100644 compute_tools/src/migrations/tests/0012-grant_pg_signal_backend_to_neon_superuser.sql diff --git a/compute_tools/src/migrations/0012-grant_pg_signal_backend_to_neon_superuser.sql b/compute_tools/src/migrations/0012-grant_pg_signal_backend_to_neon_superuser.sql new file mode 100644 index 0000000000..36e31544be --- /dev/null +++ b/compute_tools/src/migrations/0012-grant_pg_signal_backend_to_neon_superuser.sql @@ -0,0 +1 @@ +GRANT pg_signal_backend TO neon_superuser WITH ADMIN OPTION; diff --git a/compute_tools/src/migrations/tests/0012-grant_pg_signal_backend_to_neon_superuser.sql b/compute_tools/src/migrations/tests/0012-grant_pg_signal_backend_to_neon_superuser.sql new file mode 100644 index 0000000000..e62b742d30 --- /dev/null +++ b/compute_tools/src/migrations/tests/0012-grant_pg_signal_backend_to_neon_superuser.sql @@ -0,0 +1,23 @@ +DO $$ +DECLARE + signal_backend record; +BEGIN + SELECT pg_has_role('neon_superuser', 'pg_signal_backend', 'member') AS member, + admin_option AS admin + INTO signal_backend + FROM pg_auth_members + WHERE roleid = 'pg_signal_backend'::regrole + AND member = 'neon_superuser'::regrole; + + IF signal_backend IS NULL THEN + RAISE EXCEPTION 'no entry in pg_auth_members for neon_superuser and pg_signal_backend'; + END IF; + + IF signal_backend.member IS NULL OR NOT signal_backend.member THEN + RAISE EXCEPTION 'neon_superuser is not a member of pg_signal_backend'; + END IF; + + IF signal_backend.admin IS NULL OR NOT signal_backend.admin THEN + RAISE EXCEPTION 'neon_superuser cannot grant pg_signal_backend'; + END IF; +END $$; diff --git a/compute_tools/src/spec.rs 
b/compute_tools/src/spec.rs index 43cfbb48f7..b6382b2f56 100644 --- a/compute_tools/src/spec.rs +++ b/compute_tools/src/spec.rs @@ -197,6 +197,7 @@ pub async fn handle_migrations(client: &mut Client) -> Result<()> { include_str!( "./migrations/0011-grant_pg_show_replication_origin_status_to_neon_superuser.sql" ), + include_str!("./migrations/0012-grant_pg_signal_backend_to_neon_superuser.sql"), ]; MigrationRunner::new(client, &migrations) From 1a45b2ec900b37c40608ca94a1d9d37cad12fce8 Mon Sep 17 00:00:00 2001 From: Dimitri Fontaine Date: Thu, 10 Jul 2025 10:06:33 +0200 Subject: [PATCH 47/65] Review security model for executing Event Trigger code. (#12463) When a function is owned by a superuser (bootstrap user or otherwise), we consider it safe to run it. Only a superuser could have installed it, typically from CREATE EXTENSION script: we trust the code to execute. ## Problem This is intended to solve running pg_graphql Event Triggers graphql_watch_ddl and graphql_watch_drop which are executing the secdef function graphql.increment_schema_version(). ## Summary of changes Allow executing Event Trigger function owned by a superuser and with SECURITY DEFINER properties. The Event Trigger code runs with superuser privileges, and we consider that it's fine. 
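The resulting firing policy is small enough to state as a truth table: only the combination "superuser session, trigger function owned by a non-superuser" is skipped; everything else, including a superuser-owned SECURITY DEFINER function fired by a non-superuser, runs. A hedged sketch of that decision (the real check is C code in `neon_ddl_handler.c`; the function name here is made up):

```rust
/// Illustrative model of the event-trigger policy after this patch. Returns
/// true if the trigger function should run, false if it is replaced with a
/// no-op (the real code calls `force_noop` on the FmgrInfo instead).
fn should_fire_event_trigger(role_is_super: bool, function_is_owned_by_super: bool) -> bool {
    // The only remaining refusal: a superuser session must not run
    // user-owned trigger code during infrastructure maintenance.
    !(role_is_super && !function_is_owned_by_super)
}

fn main() {
    // Non-superuser DDL firing a superuser-owned (possibly SECURITY DEFINER)
    // trigger is now allowed -- the pg_graphql case from the problem statement.
    assert!(should_fire_event_trigger(false, true));
    // A superuser running user-owned trigger code is still skipped.
    assert!(!should_fire_event_trigger(true, false));
    // Matching privilege levels fire in both directions.
    assert!(should_fire_event_trigger(true, true));
    assert!(should_fire_event_trigger(false, false));
    println!("ok");
}
```

Note that SECURITY DEFINER no longer enters the decision at all; only ownership and the current role matter.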
--------- Co-authored-by: Tristan Partin --- pgxn/neon/neon_ddl_handler.c | 33 +----- test_runner/fixtures/neon_fixtures.py | 27 +++++ .../test_event_trigger_extension--1.0.sql | 32 ++++++ .../test_event_trigger_extension.control | 8 ++ .../regress/test_download_extensions.py | 22 ---- .../regress/test_event_trigger_extension.py | 102 ++++++++++++++++++ 6 files changed, 174 insertions(+), 50 deletions(-) create mode 100644 test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension--1.0.sql create mode 100644 test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension.control create mode 100644 test_runner/regress/test_event_trigger_extension.py diff --git a/pgxn/neon/neon_ddl_handler.c b/pgxn/neon/neon_ddl_handler.c index 2ce7b0086b..1f03e52c67 100644 --- a/pgxn/neon/neon_ddl_handler.c +++ b/pgxn/neon/neon_ddl_handler.c @@ -953,7 +953,9 @@ neon_fmgr_hook(FmgrHookEventType event, FmgrInfo *flinfo, Datum *private) /* * Fire Event Trigger if both function owner and current user are - * superuser, or none of them are. + * superuser. Allow executing Event Trigger function that belongs to a + * superuser when connected as a non-superuser, even when the function is + * SECURITY DEFINER. */ else if (event == FHET_START /* still enable it to pass pg_regress tests */ @@ -976,32 +978,7 @@ neon_fmgr_hook(FmgrHookEventType event, FmgrInfo *flinfo, Datum *private) function_is_owned_by_super = superuser_arg(function_owner); /* - * 1. Refuse to run SECURITY DEFINER function that belongs to a - * superuser when the current user is not a superuser itself. 
- */ - if (!role_is_super - && function_is_owned_by_super - && function_is_secdef) - { - char *func_name = get_func_name(flinfo->fn_oid); - - ereport(WARNING, - (errmsg("Skipping Event Trigger"), - errdetail("Event Trigger function \"%s\" is owned by \"%s\" " - "and is SECURITY DEFINER", - func_name, - GetUserNameFromId(function_owner, false)))); - - /* - * we can't skip execution directly inside the fmgr_hook so - * instead we change the event trigger function to a noop - * function. - */ - force_noop(flinfo); - } - - /* - * 2. Refuse to run functions that belongs to a non-superuser when the + * Refuse to run functions that belongs to a non-superuser when the * current user is a superuser. * * We could run a SECURITY DEFINER user-function here and be safe with @@ -1009,7 +986,7 @@ neon_fmgr_hook(FmgrHookEventType event, FmgrInfo *flinfo, Datum *private) * infrastructure maintenance operations, where we prefer to skip * running user-defined code. */ - else if (role_is_super && !function_is_owned_by_super) + if (role_is_super && !function_is_owned_by_super) { char *func_name = get_func_name(flinfo->fn_oid); diff --git a/test_runner/fixtures/neon_fixtures.py b/test_runner/fixtures/neon_fixtures.py index f54d5be635..42924f9b83 100644 --- a/test_runner/fixtures/neon_fixtures.py +++ b/test_runner/fixtures/neon_fixtures.py @@ -1795,6 +1795,33 @@ def neon_env_builder( record_property("preserve_database_files", builder.preserve_database_files) +@pytest.fixture(scope="function") +def neon_env_builder_local( + neon_env_builder: NeonEnvBuilder, + test_output_dir: Path, + pg_distrib_dir: Path, +) -> NeonEnvBuilder: + """ + Fixture to create a Neon environment for test with its own pg_install copy. + + This allows the test to edit the list of available extensions in the + local instance of Postgres used for the test, and install extensions via + downloading them when a remote extension is tested, for instance, or + copying files around for local extension testing. 
+ """ + test_local_pginstall = test_output_dir / "pg_install" + log.info(f"copy {pg_distrib_dir} to {test_local_pginstall}") + + # We can't copy only the version that we are currently testing because other + # binaries like the storage controller need specific Postgres versions. + shutil.copytree(pg_distrib_dir, test_local_pginstall) + + neon_env_builder.pg_distrib_dir = test_local_pginstall + log.info(f"local neon_env_builder.pg_distrib_dir: {neon_env_builder.pg_distrib_dir}") + + return neon_env_builder + + @dataclass class PageserverPort: pg: int diff --git a/test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension--1.0.sql b/test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension--1.0.sql new file mode 100644 index 0000000000..2b82102802 --- /dev/null +++ b/test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension--1.0.sql @@ -0,0 +1,32 @@ +\echo Use "CREATE EXTENSION test_event_trigger_extension" to load this file. 
\quit + +CREATE SCHEMA event_trigger; + +create sequence if not exists event_trigger.seq_schema_version as int cycle; + +create or replace function event_trigger.increment_schema_version() + returns event_trigger + security definer + language plpgsql +as $$ +begin + perform pg_catalog.nextval('event_trigger.seq_schema_version'); +end; +$$; + +create or replace function event_trigger.get_schema_version() + returns int + security definer + language sql +as $$ + select last_value from event_trigger.seq_schema_version; +$$; + +-- On DDL event, increment the schema version number +create event trigger event_trigger_watch_ddl + on ddl_command_end + execute procedure event_trigger.increment_schema_version(); + +create event trigger event_trigger_watch_drop + on sql_drop + execute procedure event_trigger.increment_schema_version(); diff --git a/test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension.control b/test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension.control new file mode 100644 index 0000000000..4fe8c3341b --- /dev/null +++ b/test_runner/regress/data/test_event_trigger_extension/test_event_trigger_extension.control @@ -0,0 +1,8 @@ +default_version = '1.0' +comment = 'Test extension with Event Trigger' + +# make sure the extension objects are owned by the bootstrap user +# to check that the SECURITY DEFINER event trigger function is still +# called during non-superuser DDL events. 
+superuser = true +trusted = true diff --git a/test_runner/regress/test_download_extensions.py b/test_runner/regress/test_download_extensions.py index fe3b220c67..d7f78afac8 100644 --- a/test_runner/regress/test_download_extensions.py +++ b/test_runner/regress/test_download_extensions.py @@ -2,7 +2,6 @@ from __future__ import annotations import os import platform -import shutil import tarfile from enum import StrEnum from pathlib import Path @@ -31,27 +30,6 @@ if TYPE_CHECKING: from werkzeug.wrappers.request import Request -# use neon_env_builder_local fixture to override the default neon_env_builder fixture -# and use a test-specific pg_install instead of shared one -@pytest.fixture(scope="function") -def neon_env_builder_local( - neon_env_builder: NeonEnvBuilder, - test_output_dir: Path, - pg_distrib_dir: Path, -) -> NeonEnvBuilder: - test_local_pginstall = test_output_dir / "pg_install" - log.info(f"copy {pg_distrib_dir} to {test_local_pginstall}") - - # We can't copy only the version that we are currently testing because other - # binaries like the storage controller need specific Postgres versions. 
- shutil.copytree(pg_distrib_dir, test_local_pginstall) - - neon_env_builder.pg_distrib_dir = test_local_pginstall - log.info(f"local neon_env_builder.pg_distrib_dir: {neon_env_builder.pg_distrib_dir}") - - return neon_env_builder - - @final class RemoteExtension(StrEnum): SQL_ONLY = "test_extension_sql_only" diff --git a/test_runner/regress/test_event_trigger_extension.py b/test_runner/regress/test_event_trigger_extension.py new file mode 100644 index 0000000000..ac4351dcd5 --- /dev/null +++ b/test_runner/regress/test_event_trigger_extension.py @@ -0,0 +1,102 @@ +from __future__ import annotations + +import shutil +from pathlib import Path +from typing import TYPE_CHECKING, cast + +import pytest +from fixtures.log_helper import log +from fixtures.paths import BASE_DIR + +if TYPE_CHECKING: + from pathlib import Path + + from fixtures.neon_fixtures import ( + NeonEnvBuilder, + ) + from fixtures.pg_version import PgVersion + + +# use neon_env_builder_local fixture to override the default neon_env_builder fixture +# and use a test-specific pg_install instead of shared one +@pytest.fixture(scope="function") +def neon_env_builder_event_trigger_extension( + neon_env_builder_local: NeonEnvBuilder, + test_output_dir: Path, + pg_version: PgVersion, +) -> NeonEnvBuilder: + test_local_pginstall = test_output_dir / "pg_install" + + # Now copy the SQL only extension test_event_trigger_extension in the local + # pginstall extension directory on-disk + test_event_trigger_extension_dir = ( + BASE_DIR / "test_runner" / "regress" / "data" / "test_event_trigger_extension" + ) + + test_local_extension_dir = ( + test_local_pginstall / f"v{pg_version}" / "share" / "postgresql" / "extension" + ) + + log.info(f"copy {test_event_trigger_extension_dir} to {test_local_extension_dir}") + + for f in [ + test_event_trigger_extension_dir / "test_event_trigger_extension.control", + test_event_trigger_extension_dir / "test_event_trigger_extension--1.0.sql", + ]: + shutil.copy(f, 
test_local_extension_dir) + + return neon_env_builder_local + + +def test_event_trigger_extension(neon_env_builder_event_trigger_extension: NeonEnvBuilder): + """ + Test installing an extension that contains an Event Trigger. + + The Event Trigger function is owned by the extension owner, which at + CREATE EXTENSION is going to be the Postgres bootstrap user, per the + extension control file where both superuser = true and trusted = true. + + Also this function is SECURTY DEFINER, to allow for making changes to + the extension SQL objects, in our case a sequence. + + This test makes sure that the event trigger function is fired correctly + by non-privileged user DDL actions such as CREATE TABLE. + """ + env = neon_env_builder_event_trigger_extension.init_start() + env.create_branch("test_event_trigger_extension") + + endpoint = env.endpoints.create_start("test_event_trigger_extension") + extension = "test_event_trigger_extension" + database = "test_event_trigger_extension" + + endpoint.safe_psql(f"CREATE DATABASE {database}") + endpoint.safe_psql(f"CREATE EXTENSION {extension}", dbname=database) + + # check that the extension is owned by the bootstrap superuser (cloud_admin) + pg_bootstrap_superuser_name = "cloud_admin" + with endpoint.connect(dbname=database) as pg_conn: + with pg_conn.cursor() as cur: + cur.execute( + f"select rolname from pg_roles r join pg_extension e on r.oid = e.extowner where extname = '{extension}'" + ) + owner = cast("tuple[str]", cur.fetchone())[0] + assert owner == pg_bootstrap_superuser_name, ( + f"extension {extension} is not owned by bootstrap user '{pg_bootstrap_superuser_name}'" + ) + + # test that the SQL-only Event Trigger (SECURITY DEFINER function) runs + # correctly now that the extension has been installed + # + # create table to trigger the event trigger, twice, check sequence count + with endpoint.connect(dbname=database) as pg_conn: + log.info("creating SQL objects (tables)") + with pg_conn.cursor() as cur: + 
cur.execute("CREATE TABLE foo1(id int primary key)") + cur.execute("CREATE TABLE foo2(id int)") + + cur.execute("SELECT event_trigger.get_schema_version()") + res = cast("tuple[int]", cur.fetchone()) + ver = res[0] + + log.info(f"schema version is now {ver}") + assert ver == 2, "schema version is not 2" From 08b19f001c77afbcd3fbde06a11e495d6222967a Mon Sep 17 00:00:00 2001 From: Vlad Lazar Date: Thu, 10 Jul 2025 11:07:21 +0100 Subject: [PATCH 48/65] pageserver: optionally force image layer creation on timeout (#12529) This PR introduces an `image_creation_timeout` to page servers so that we can force image creation after a certain period. This is set to 1 day on dev/staging for now, and will roll out to production 1-2 weeks later. The majority of the PR is boilerplate code to add the new knob. The specific changes are: 1. During L0 compaction, check if we should force a compaction: min(LSN) of all delta layers < force_image_creation LSN. 2. During image creation, check if we should force image creation: the image's LSN < force_image_creation LSN and there are newer deltas with overlapping key ranges. 3. Also tweaked the image creation check interval to make sure we honor image_creation_timeout. Vlad's note: This should be a no-op. I added an extra PS config for the large timeline threshold to enable this. 
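The forcing decision described above can be sketched as a small Python model (illustrative only; the helper name and signature are not from the patch, but the predicate mirrors the check added to `Timeline::time_for_new_image_layer`):

```python
def should_force_image_creation(min_image_lsn, max_deltas, force_image_creation_lsn):
    """Force image creation when some key range is only covered by an image
    older than the force-creation LSN and newer deltas exist on top of it.
    LSNs are modeled as plain integers here."""
    if force_image_creation_lsn is None:
        return False  # knob disabled: only the delta-count threshold applies
    return min_image_lsn < force_image_creation_lsn and max_deltas > 0
```

With the knob disabled (`None`), or with no deltas stacked on top of the old image, the check never fires, which is why the rollout should be a no-op until the timeout is configured.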
--------- Co-authored-by: Chen Luo --- control_plane/src/pageserver.rs | 6 ++ libs/pageserver_api/src/config.rs | 9 ++ libs/pageserver_api/src/models.rs | 18 ++++ pageserver/src/config.rs | 6 ++ pageserver/src/tenant.rs | 9 ++ pageserver/src/tenant/timeline.rs | 68 ++++++++++-- pageserver/src/tenant/timeline/compaction.rs | 100 ++++++++++++++++-- .../regress/test_attach_tenant_config.py | 1 + test_runner/regress/test_compaction.py | 75 +++++++++++++ 9 files changed, 275 insertions(+), 17 deletions(-) diff --git a/control_plane/src/pageserver.rs b/control_plane/src/pageserver.rs index 3f66960edd..3673d1f4f2 100644 --- a/control_plane/src/pageserver.rs +++ b/control_plane/src/pageserver.rs @@ -452,6 +452,12 @@ impl PageServerNode { .map(|x| x.parse::()) .transpose() .context("Failed to parse 'image_creation_threshold' as non zero integer")?, + // HADRON + image_layer_force_creation_period: settings + .remove("image_layer_force_creation_period") + .map(humantime::parse_duration) + .transpose() + .context("Failed to parse 'image_layer_force_creation_period' as duration")?, image_layer_creation_check_threshold: settings .remove("image_layer_creation_check_threshold") .map(|x| x.parse::()) diff --git a/libs/pageserver_api/src/config.rs b/libs/pageserver_api/src/config.rs index 22815955c1..9e9c7a4dcb 100644 --- a/libs/pageserver_api/src/config.rs +++ b/libs/pageserver_api/src/config.rs @@ -272,6 +272,8 @@ pub struct ConfigToml { pub timeline_import_config: TimelineImportConfig, #[serde(skip_serializing_if = "Option::is_none")] pub basebackup_cache_config: Option, + #[serde(skip_serializing_if = "Option::is_none")] + pub image_layer_generation_large_timeline_threshold: Option, } #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] @@ -561,6 +563,11 @@ pub struct TenantConfigToml { pub gc_period: Duration, // Delta layer churn threshold to create L1 image layers. 
pub image_creation_threshold: usize, + // HADRON + // When the timeout is reached, PageServer will (1) force compact any remaining L0 deltas and + // (2) create image layers if there are any L1 deltas. + #[serde(with = "humantime_serde")] + pub image_layer_force_creation_period: Option, // Determines how much history is retained, to allow // branching and read replicas at an older point in time. // The unit is time. @@ -823,6 +830,7 @@ impl Default for ConfigToml { }, basebackup_cache_config: None, posthog_config: None, + image_layer_generation_large_timeline_threshold: Some(2 * 1024 * 1024 * 1024), } } } @@ -916,6 +924,7 @@ impl Default for TenantConfigToml { gc_period: humantime::parse_duration(DEFAULT_GC_PERIOD) .expect("cannot parse default gc period"), image_creation_threshold: DEFAULT_IMAGE_CREATION_THRESHOLD, + image_layer_force_creation_period: None, pitr_interval: humantime::parse_duration(DEFAULT_PITR_INTERVAL) .expect("cannot parse default PITR interval"), walreceiver_connect_timeout: humantime::parse_duration( diff --git a/libs/pageserver_api/src/models.rs b/libs/pageserver_api/src/models.rs index 6735320484..56dd95eab3 100644 --- a/libs/pageserver_api/src/models.rs +++ b/libs/pageserver_api/src/models.rs @@ -597,6 +597,9 @@ pub struct TenantConfigPatch { pub gc_period: FieldPatch, #[serde(skip_serializing_if = "FieldPatch::is_noop")] pub image_creation_threshold: FieldPatch, + // HADRON + #[serde(skip_serializing_if = "FieldPatch::is_noop")] + pub image_layer_force_creation_period: FieldPatch, #[serde(skip_serializing_if = "FieldPatch::is_noop")] pub pitr_interval: FieldPatch, #[serde(skip_serializing_if = "FieldPatch::is_noop")] @@ -700,6 +703,11 @@ pub struct TenantConfig { #[serde(skip_serializing_if = "Option::is_none")] pub image_creation_threshold: Option, + // HADRON + #[serde(skip_serializing_if = "Option::is_none")] + #[serde(with = "humantime_serde")] + pub image_layer_force_creation_period: Option, + #[serde(skip_serializing_if = 
"Option::is_none")] #[serde(with = "humantime_serde")] pub pitr_interval: Option, @@ -798,6 +806,7 @@ impl TenantConfig { mut gc_horizon, mut gc_period, mut image_creation_threshold, + mut image_layer_force_creation_period, mut pitr_interval, mut walreceiver_connect_timeout, mut lagging_wal_timeout, @@ -861,6 +870,11 @@ impl TenantConfig { patch .image_creation_threshold .apply(&mut image_creation_threshold); + // HADRON + patch + .image_layer_force_creation_period + .map(|v| humantime::parse_duration(&v))? + .apply(&mut image_layer_force_creation_period); patch .pitr_interval .map(|v| humantime::parse_duration(&v))? @@ -942,6 +956,7 @@ impl TenantConfig { gc_horizon, gc_period, image_creation_threshold, + image_layer_force_creation_period, pitr_interval, walreceiver_connect_timeout, lagging_wal_timeout, @@ -1016,6 +1031,9 @@ impl TenantConfig { image_creation_threshold: self .image_creation_threshold .unwrap_or(global_conf.image_creation_threshold), + image_layer_force_creation_period: self + .image_layer_force_creation_period + .or(global_conf.image_layer_force_creation_period), pitr_interval: self.pitr_interval.unwrap_or(global_conf.pitr_interval), walreceiver_connect_timeout: self .walreceiver_connect_timeout diff --git a/pageserver/src/config.rs b/pageserver/src/config.rs index 15ec31b0a6..f64c5838ff 100644 --- a/pageserver/src/config.rs +++ b/pageserver/src/config.rs @@ -252,6 +252,10 @@ pub struct PageServerConf { pub timeline_import_config: pageserver_api::config::TimelineImportConfig, pub basebackup_cache_config: Option, + + /// Defines what is a big tenant for the purpose of image layer generation. 
+ /// See Timeline::should_check_if_image_layers_required + pub image_layer_generation_large_timeline_threshold: Option, } /// Token for authentication to safekeepers @@ -432,6 +436,7 @@ impl PageServerConf { posthog_config, timeline_import_config, basebackup_cache_config, + image_layer_generation_large_timeline_threshold, } = config_toml; let mut conf = PageServerConf { @@ -490,6 +495,7 @@ impl PageServerConf { dev_mode, timeline_import_config, basebackup_cache_config, + image_layer_generation_large_timeline_threshold, // ------------------------------------------------------------ // fields that require additional validation or custom handling diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index 240ba36236..7e2e6d96b8 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -4171,6 +4171,15 @@ impl TenantShard { .unwrap_or(self.conf.default_tenant_conf.image_creation_threshold) } + // HADRON + pub fn get_image_creation_timeout(&self) -> Option { + let tenant_conf = self.tenant_conf.load().tenant_conf.clone(); + tenant_conf.image_layer_force_creation_period.or(self + .conf + .default_tenant_conf + .image_layer_force_creation_period) + } + pub fn get_pitr_interval(&self) -> Duration { let tenant_conf = self.tenant_conf.load().tenant_conf.clone(); tenant_conf diff --git a/pageserver/src/tenant/timeline.rs b/pageserver/src/tenant/timeline.rs index 0a026d288e..a9bc0a060b 100644 --- a/pageserver/src/tenant/timeline.rs +++ b/pageserver/src/tenant/timeline.rs @@ -351,6 +351,13 @@ pub struct Timeline { last_image_layer_creation_check_at: AtomicLsn, last_image_layer_creation_check_instant: std::sync::Mutex>, + // HADRON + /// If a key range has writes with LSN > force_image_creation_lsn, then we should force image layer creation + /// on this key range. + force_image_creation_lsn: AtomicLsn, + /// The last time instant when force_image_creation_lsn is computed. 
+ force_image_creation_lsn_computed_at: std::sync::Mutex>, + /// Current logical size of the "datadir", at the last LSN. current_logical_size: LogicalSize, @@ -2846,6 +2853,18 @@ impl Timeline { .unwrap_or(self.conf.default_tenant_conf.image_creation_threshold) } + // HADRON + fn get_image_creation_timeout(&self) -> Option { + let tenant_conf = self.tenant_conf.load(); + tenant_conf + .tenant_conf + .image_layer_force_creation_period + .or(self + .conf + .default_tenant_conf + .image_layer_force_creation_period) + } + fn get_compaction_algorithm_settings(&self) -> CompactionAlgorithmSettings { let tenant_conf = &self.tenant_conf.load(); tenant_conf @@ -3115,7 +3134,9 @@ impl Timeline { repartition_threshold: 0, last_image_layer_creation_check_at: AtomicLsn::new(0), last_image_layer_creation_check_instant: Mutex::new(None), - + // HADRON + force_image_creation_lsn: AtomicLsn::new(0), + force_image_creation_lsn_computed_at: std::sync::Mutex::new(None), last_received_wal: Mutex::new(None), rel_size_latest_cache: RwLock::new(HashMap::new()), rel_size_snapshot_cache: Mutex::new(LruCache::new(relsize_snapshot_cache_capacity)), @@ -5036,6 +5057,7 @@ impl Timeline { .create_image_layers( &partitions, self.initdb_lsn, + None, ImageLayerCreationMode::Initial, ctx, LastImageLayerCreationStatus::Initial, @@ -5307,14 +5329,19 @@ impl Timeline { } // Is it time to create a new image layer for the given partition? True if we want to generate. 
- async fn time_for_new_image_layer(&self, partition: &KeySpace, lsn: Lsn) -> bool { + async fn time_for_new_image_layer( + &self, + partition: &KeySpace, + lsn: Lsn, + force_image_creation_lsn: Option, + ) -> bool { let threshold = self.get_image_creation_threshold(); let guard = self.layers.read(LayerManagerLockHolder::Compaction).await; let Ok(layers) = guard.layer_map() else { return false; }; - + let mut min_image_lsn: Lsn = Lsn::MAX; let mut max_deltas = 0; for part_range in &partition.ranges { let image_coverage = layers.image_coverage(part_range, lsn); @@ -5349,9 +5376,22 @@ impl Timeline { return true; } } + min_image_lsn = min(min_image_lsn, img_lsn); } } + // HADRON + if min_image_lsn < force_image_creation_lsn.unwrap_or(Lsn(0)) && max_deltas > 0 { + info!( + "forcing image creation for partitioned range {}-{}. Min image LSN: {}, force image creation LSN: {}", + partition.ranges[0].start, + partition.ranges[0].end, + min_image_lsn, + force_image_creation_lsn.unwrap() + ); + return true; + } + debug!( max_deltas, "none of the partitioned ranges had >= {threshold} deltas" @@ -5577,7 +5617,7 @@ impl Timeline { /// suffer from the lack of image layers /// 2. 
For small tenants (that can mostly fit in RAM), we use a much longer interval fn should_check_if_image_layers_required(self: &Arc, lsn: Lsn) -> bool { - const LARGE_TENANT_THRESHOLD: u64 = 2 * 1024 * 1024 * 1024; + let large_timeline_threshold = self.conf.image_layer_generation_large_timeline_threshold; let last_checks_at = self.last_image_layer_creation_check_at.load(); let distance = lsn @@ -5591,12 +5631,12 @@ impl Timeline { let mut time_based_decision = false; let mut last_check_instant = self.last_image_layer_creation_check_instant.lock().unwrap(); if let CurrentLogicalSize::Exact(logical_size) = self.current_logical_size.current_size() { - let check_required_after = if Into::::into(&logical_size) >= LARGE_TENANT_THRESHOLD - { - self.get_checkpoint_timeout() - } else { - Duration::from_secs(3600 * 48) - }; + let check_required_after = + if Some(Into::::into(&logical_size)) >= large_timeline_threshold { + self.get_checkpoint_timeout() + } else { + Duration::from_secs(3600 * 48) + }; time_based_decision = match *last_check_instant { Some(last_check) => { @@ -5624,10 +5664,12 @@ impl Timeline { /// true = we have generate all image layers, false = we preempt the process for L0 compaction. /// /// `partition_mode` is only for logging purpose and is not used anywhere in this function. 
+ #[allow(clippy::too_many_arguments)] async fn create_image_layers( self: &Arc, partitioning: &KeyPartitioning, lsn: Lsn, + force_image_creation_lsn: Option, mode: ImageLayerCreationMode, ctx: &RequestContext, last_status: LastImageLayerCreationStatus, @@ -5731,7 +5773,11 @@ impl Timeline { } else if let ImageLayerCreationMode::Try = mode { // check_for_image_layers = false -> skip // check_for_image_layers = true -> check time_for_new_image_layer -> skip/generate - if !check_for_image_layers || !self.time_for_new_image_layer(partition, lsn).await { + if !check_for_image_layers + || !self + .time_for_new_image_layer(partition, lsn, force_image_creation_lsn) + .await + { start = img_range.end; continue; } diff --git a/pageserver/src/tenant/timeline/compaction.rs b/pageserver/src/tenant/timeline/compaction.rs index 18a0ca852d..171f9d1284 100644 --- a/pageserver/src/tenant/timeline/compaction.rs +++ b/pageserver/src/tenant/timeline/compaction.rs @@ -4,10 +4,11 @@ //! //! The old legacy algorithm is implemented directly in `timeline.rs`. 
+use std::cmp::min; use std::collections::{BinaryHeap, HashMap, HashSet, VecDeque}; use std::ops::{Deref, Range}; use std::sync::Arc; -use std::time::{Duration, Instant}; +use std::time::{Duration, Instant, SystemTime}; use super::layer_manager::LayerManagerLockHolder; use super::{ @@ -33,6 +34,7 @@ use pageserver_api::models::{CompactInfoResponse, CompactKeyRange}; use pageserver_api::shard::{ShardCount, ShardIdentity, TenantShardId}; use pageserver_compaction::helpers::{fully_contains, overlaps_with}; use pageserver_compaction::interface::*; +use postgres_ffi::to_pg_timestamp; use serde::Serialize; use tokio::sync::{OwnedSemaphorePermit, Semaphore}; use tokio_util::sync::CancellationToken; @@ -45,6 +47,7 @@ use wal_decoder::models::value::Value; use crate::context::{AccessStatsBehavior, RequestContext, RequestContextBuilder}; use crate::page_cache; +use crate::pgdatadir_mapping::LsnForTimestamp; use crate::statvfs::Statvfs; use crate::tenant::checks::check_valid_layermap; use crate::tenant::gc_block::GcBlock; @@ -1267,6 +1270,12 @@ impl Timeline { // Define partitioning schema if needed + // HADRON + let force_image_creation_lsn = self + .get_or_compute_force_image_creation_lsn(cancel, ctx) + .await + .map_err(CompactionError::Other)?; + // 1. L0 Compact let l0_outcome = { let timer = self.metrics.compact_time_histo.start_timer(); @@ -1274,6 +1283,7 @@ impl Timeline { .compact_level0( target_file_size, options.flags.contains(CompactFlags::ForceL0Compaction), + force_image_creation_lsn, ctx, ) .await?; @@ -1376,6 +1386,7 @@ impl Timeline { .create_image_layers( &partitioning, lsn, + force_image_creation_lsn, mode, &image_ctx, self.last_image_layer_creation_status @@ -1472,6 +1483,63 @@ impl Timeline { Ok(CompactionOutcome::Done) } + /* BEGIN_HADRON */ + // Get the force image creation LSN. Compute it if the last computed LSN is too old. 
+ async fn get_or_compute_force_image_creation_lsn( + self: &Arc, + cancel: &CancellationToken, + ctx: &RequestContext, + ) -> anyhow::Result> { + const FORCE_IMAGE_CREATION_LSN_COMPUTE_INTERVAL: Duration = Duration::from_secs(10 * 60); // 10 minutes + let image_layer_force_creation_period = self.get_image_creation_timeout(); + if image_layer_force_creation_period.is_none() { + return Ok(None); + } + + let image_layer_force_creation_period = image_layer_force_creation_period.unwrap(); + let force_image_creation_lsn_computed_at = + *self.force_image_creation_lsn_computed_at.lock().unwrap(); + if force_image_creation_lsn_computed_at.is_none() + || force_image_creation_lsn_computed_at.unwrap().elapsed() + > FORCE_IMAGE_CREATION_LSN_COMPUTE_INTERVAL + { + let now: SystemTime = SystemTime::now(); + let timestamp = now + .checked_sub(image_layer_force_creation_period) + .ok_or_else(|| { + anyhow::anyhow!( + "image creation timeout is too large: {image_layer_force_creation_period:?}" + ) + })?; + let timestamp = to_pg_timestamp(timestamp); + let force_image_creation_lsn = match self + .find_lsn_for_timestamp(timestamp, cancel, ctx) + .await? 
+ { + LsnForTimestamp::Present(lsn) | LsnForTimestamp::Future(lsn) => lsn, + _ => { + let gc_lsn = *self.get_applied_gc_cutoff_lsn(); + tracing::info!( + "no LSN found for timestamp {timestamp:?}, using latest GC cutoff LSN {}", + gc_lsn + ); + gc_lsn + } + }; + self.force_image_creation_lsn + .store(force_image_creation_lsn); + *self.force_image_creation_lsn_computed_at.lock().unwrap() = Some(Instant::now()); + tracing::info!( + "computed force image creation LSN: {}", + force_image_creation_lsn + ); + Ok(Some(force_image_creation_lsn)) + } else { + Ok(Some(self.force_image_creation_lsn.load())) + } + } + /* END_HADRON */ + /// Check for layers that are elegible to be rewritten: /// - Shard splitting: After a shard split, ancestor layers beyond pitr_interval, so that /// we don't indefinitely retain keys in this shard that aren't needed. @@ -1801,6 +1869,7 @@ impl Timeline { self: &Arc, target_file_size: u64, force_compaction_ignore_threshold: bool, + force_compaction_lsn: Option, ctx: &RequestContext, ) -> Result { let CompactLevel0Phase1Result { @@ -1821,6 +1890,7 @@ impl Timeline { stats, target_file_size, force_compaction_ignore_threshold, + force_compaction_lsn, &ctx, ) .instrument(phase1_span) @@ -1843,6 +1913,7 @@ impl Timeline { mut stats: CompactLevel0Phase1StatsBuilder, target_file_size: u64, force_compaction_ignore_threshold: bool, + force_compaction_lsn: Option, ctx: &RequestContext, ) -> Result { let begin = tokio::time::Instant::now(); @@ -1872,11 +1943,28 @@ impl Timeline { return Ok(CompactLevel0Phase1Result::default()); } } else { - debug!( - level0_deltas = level0_deltas.len(), - threshold, "too few deltas to compact" - ); - return Ok(CompactLevel0Phase1Result::default()); + // HADRON + let min_lsn = level0_deltas + .iter() + .map(|a| a.get_lsn_range().start) + .reduce(min); + if force_compaction_lsn.is_some() + && min_lsn.is_some() + && min_lsn.unwrap() < force_compaction_lsn.unwrap() + { + info!( + "forcing L0 compaction of {} L0 deltas. 
Min lsn: {}, force compaction lsn: {}", + level0_deltas.len(), + min_lsn.unwrap(), + force_compaction_lsn.unwrap() + ); + } else { + debug!( + level0_deltas = level0_deltas.len(), + threshold, "too few deltas to compact" + ); + return Ok(CompactLevel0Phase1Result::default()); + } } } diff --git a/test_runner/regress/test_attach_tenant_config.py b/test_runner/regress/test_attach_tenant_config.py index 7788faceb4..eaaa3014a5 100644 --- a/test_runner/regress/test_attach_tenant_config.py +++ b/test_runner/regress/test_attach_tenant_config.py @@ -165,6 +165,7 @@ def test_fully_custom_config(positive_env: NeonEnv): "gc_horizon": 23 * (1024 * 1024), "gc_period": "2h 13m", "image_creation_threshold": 7, + "image_layer_force_creation_period": "1m", "pitr_interval": "1m", "lagging_wal_timeout": "23m", "lazy_slru_download": True, diff --git a/test_runner/regress/test_compaction.py b/test_runner/regress/test_compaction.py index 1570d40ae9..e67161c6b7 100644 --- a/test_runner/regress/test_compaction.py +++ b/test_runner/regress/test_compaction.py @@ -944,3 +944,78 @@ def test_image_layer_compression(neon_env_builder: NeonEnvBuilder, enabled: bool f"SELECT count(*) FROM foo WHERE id={v} and val=repeat('abcde{v:0>3}', 500)" ) assert res[0][0] == 1 + + +# BEGIN_HADRON +def get_layer_map(env, tenant_shard_id, timeline_id, ps_id): + client = env.pageservers[ps_id].http_client() + layer_map = client.layer_map_info(tenant_shard_id, timeline_id) + image_layer_count = 0 + delta_layer_count = 0 + for layer in layer_map.historic_layers: + if layer.kind == "Image": + image_layer_count += 1 + elif layer.kind == "Delta": + delta_layer_count += 1 + return image_layer_count, delta_layer_count + + +def test_image_creation_timeout(neon_env_builder: NeonEnvBuilder): + """ + Tests that page server can force creating new images if image creation timeout is enabled + """ + # use large knobs to disable L0 compaction/image creation except for the force image creation + tenant_conf = { + 
"compaction_threshold": "100", + "image_creation_threshold": "100", + "image_layer_creation_check_threshold": "1", + "checkpoint_distance": 10 * 1024, + "checkpoint_timeout": "1s", + "image_layer_force_creation_period": "1s", + # The lsn for forced image layer creations is calculated once every 10 minutes. + # Hence, drive compaction manually such that the test doesn't compute it at the + # wrong time. + "compaction_period": "0s", + } + + # consider every tenant large to run the image layer generation check more eagerly + neon_env_builder.pageserver_config_override = ( + "image_layer_generation_large_timeline_threshold=0" + ) + + neon_env_builder.num_pageservers = 1 + neon_env_builder.num_safekeepers = 1 + env = neon_env_builder.init_start(initial_tenant_conf=tenant_conf) + + tenant_id = env.initial_tenant + timeline_id = env.initial_timeline + + endpoint = env.endpoints.create_start("main") + endpoint.safe_psql("CREATE TABLE foo (id INTEGER PRIMARY KEY, val text)") + # Generate some rows. + for v in range(10): + endpoint.safe_psql(f"INSERT INTO foo (id, val) VALUES ({v}, repeat('abcde{v:0>3}', 500))") + + # Sleep a bit such that the inserts are considered when calculating the forced image layer creation LSN. 
+ time.sleep(2) + + def check_force_image_creation(): + ps_http = env.pageserver.http_client() + ps_http.timeline_compact(tenant_id, timeline_id) + image, delta = get_layer_map(env, tenant_id, timeline_id, 0) + log.info(f"images: {image}, deltas: {delta}") + assert image > 0 + + env.pageserver.assert_log_contains("forcing L0 compaction of") + env.pageserver.assert_log_contains("forcing image creation for partitioned range") + + wait_until(check_force_image_creation) + + endpoint.stop_and_destroy() + + env.pageserver.allowed_errors.append( + ".*created delta file of size.*larger than double of target.*" + ) + + +# END_HADRON From f4b03ddd7b4cb858430b16d642f5f80f11e8b5b1 Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Thu, 10 Jul 2025 12:18:37 +0200 Subject: [PATCH 49/65] pageserver/client_grpc: reap idle pool resources (#12476) ## Problem The gRPC client pools don't reap idle resources. Touches #11735. Requires #12475. ## Summary of changes Reap idle pool resources (channels/clients/streams) after 3 minutes of inactivity. Also restructure the `StreamPool` to use a mutex rather than atomics for synchronization, for simplicity. This will be optimized later. 
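The reaping scheme can be illustrated with a toy Python model (hypothetical class, not the patch's Rust code): each pooled entry records when it went idle, and a periodic reaper drops entries whose `idle_since` is older than a cutoff, mirroring `REAP_IDLE_THRESHOLD` / `reap_idle` in the patch.

```python
import time


class IdlePool:
    """Toy model of the idle-reaping pattern used by the gRPC pools:
    entries are either busy (idle_since is None) or idle since some instant,
    and reap_idle() retains only busy entries and recently idle ones."""

    def __init__(self, reap_idle_threshold: float):
        self.reap_idle_threshold = reap_idle_threshold
        self.entries = {}  # entry id -> idle_since timestamp, or None while busy

    def mark_busy(self, entry_id):
        self.entries[entry_id] = None

    def mark_idle(self, entry_id, now=None):
        self.entries[entry_id] = time.monotonic() if now is None else now

    def reap_idle(self, now=None):
        """Drop entries idle since before the cutoff; busy entries survive."""
        now = time.monotonic() if now is None else now
        cutoff = now - self.reap_idle_threshold
        self.entries = {
            k: v for k, v in self.entries.items() if v is None or v >= cutoff
        }
```

Note the invariant this models: an entry is marked idle exactly when its client/stream count drops to zero, and reaping only ever removes entries that are provably unused at that instant.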
--- Cargo.lock | 1 + pageserver/client_grpc/Cargo.toml | 4 + pageserver/client_grpc/src/pool.rs | 241 ++++++++++++++++++++++++----- 3 files changed, 207 insertions(+), 39 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index caed814d5f..4150944ad0 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4506,6 +4506,7 @@ dependencies = [ "pageserver_page_api", "tokio", "tokio-stream", + "tokio-util", "tonic 0.13.1", "tracing", "utils", diff --git a/pageserver/client_grpc/Cargo.toml b/pageserver/client_grpc/Cargo.toml index 84e27abb84..ca224900ac 100644 --- a/pageserver/client_grpc/Cargo.toml +++ b/pageserver/client_grpc/Cargo.toml @@ -4,6 +4,9 @@ version = "0.1.0" edition.workspace = true license.workspace = true +[features] +testing = ["pageserver_api/testing"] + [dependencies] anyhow.workspace = true bytes.workspace = true @@ -13,6 +16,7 @@ pageserver_api.workspace = true pageserver_page_api.workspace = true tokio.workspace = true tokio-stream.workspace = true +tokio-util.workspace = true tonic.workspace = true tracing.workspace = true utils.workspace = true diff --git a/pageserver/client_grpc/src/pool.rs b/pageserver/client_grpc/src/pool.rs index 5a50004fd1..89b3bd646f 100644 --- a/pageserver/client_grpc/src/pool.rs +++ b/pageserver/client_grpc/src/pool.rs @@ -34,10 +34,12 @@ use std::num::NonZero; use std::ops::{Deref, DerefMut}; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::{Arc, Mutex, Weak}; +use std::time::{Duration, Instant}; use futures::StreamExt as _; use tokio::sync::mpsc::{Receiver, Sender}; use tokio::sync::{OwnedSemaphorePermit, Semaphore, mpsc, oneshot}; +use tokio_util::sync::CancellationToken; use tonic::transport::{Channel, Endpoint}; use tracing::{error, warn}; @@ -45,6 +47,25 @@ use pageserver_page_api as page_api; use utils::id::{TenantId, TimelineId}; use utils::shard::ShardIndex; +/// Reap channels/clients/streams that have been idle for this long. +/// +/// TODO: this is per-pool. 
For nested pools, it can take up to 3x as long for a TCP connection to +/// be reaped. First, we must wait for an idle stream to be reaped, which marks its client as idle. +/// Then, we must wait for the idle client to be reaped, which marks its channel as idle. Then, we +/// must wait for the idle channel to be reaped. Is that a problem? Maybe not, we just have to +/// account for it when setting the reap threshold. Alternatively, we can immediately reap empty +/// channels, and/or stream pool clients. +const REAP_IDLE_THRESHOLD: Duration = match cfg!(any(test, feature = "testing")) { + false => Duration::from_secs(180), + true => Duration::from_secs(1), // exercise reaping in tests +}; + +/// Reap idle resources with this interval. +const REAP_IDLE_INTERVAL: Duration = match cfg!(any(test, feature = "testing")) { + false => Duration::from_secs(10), + true => Duration::from_secs(1), // exercise reaping in tests +}; + /// A gRPC channel pool, for a single Pageserver. A channel is shared by many clients (via HTTP/2 /// stream multiplexing), up to `clients_per_channel` -- a new channel will be spun up beyond this. /// The pool does not limit the number of channels, and instead relies on `ClientPool` or @@ -52,7 +73,6 @@ use utils::shard::ShardIndex; /// /// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads. /// -/// TODO: reap idle channels. /// TODO: consider prewarming a set of channels, to avoid initial connection latency. /// TODO: consider adding a circuit breaker for errors and fail fast. pub struct ChannelPool { @@ -62,6 +82,8 @@ pub struct ChannelPool { max_clients_per_channel: NonZero, /// Open channels. channels: Mutex>, + /// Reaps idle channels. + idle_reaper: Reaper, /// Channel ID generator. next_channel_id: AtomicUsize, } @@ -73,6 +95,9 @@ struct ChannelEntry { channel: Channel, /// Number of clients using this channel. clients: usize, + /// The channel has been idle (no clients) since this time. 
None if channel is in use. + /// INVARIANT: Some if clients == 0, otherwise None. + idle_since: Option, } impl ChannelPool { @@ -82,12 +107,15 @@ impl ChannelPool { E: TryInto + Send + Sync + 'static, >::Error: std::error::Error + Send + Sync, { - Ok(Arc::new(Self { + let pool = Arc::new(Self { endpoint: endpoint.try_into()?, max_clients_per_channel, channels: Mutex::default(), + idle_reaper: Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL), next_channel_id: AtomicUsize::default(), - })) + }); + pool.idle_reaper.spawn(&pool); + Ok(pool) } /// Acquires a gRPC channel for a client. Multiple clients may acquire the same channel. @@ -116,8 +144,14 @@ impl ChannelPool { entry.clients <= self.max_clients_per_channel.get(), "channel overflow" ); + assert_eq!( + entry.idle_since.is_some(), + entry.clients == 0, + "incorrect channel idle state" + ); if entry.clients < self.max_clients_per_channel.get() { entry.clients += 1; + entry.idle_since = None; return ChannelGuard { pool: Arc::downgrade(self), id, @@ -134,6 +168,7 @@ impl ChannelPool { let entry = ChannelEntry { channel: channel.clone(), clients: 1, // account for the guard below + idle_since: None, }; channels.insert(id, entry); @@ -145,6 +180,20 @@ impl ChannelPool { } } +impl Reapable for ChannelPool { + /// Reaps channels that have been idle since before the cutoff. + fn reap_idle(&self, cutoff: Instant) { + self.channels.lock().unwrap().retain(|_, entry| { + let Some(idle_since) = entry.idle_since else { + assert_ne!(entry.clients, 0, "empty channel not marked idle"); + return true; + }; + assert_eq!(entry.clients, 0, "idle channel has clients"); + idle_since >= cutoff + }) + } +} + /// Tracks a channel acquired from the pool. The owned inner channel can be obtained with `take()`, /// since the gRPC client requires an owned `Channel`. 
pub struct ChannelGuard { @@ -167,10 +216,15 @@ impl Drop for ChannelGuard { let Some(pool) = self.pool.upgrade() else { return; // pool was dropped }; + let mut channels = pool.channels.lock().unwrap(); let entry = channels.get_mut(&self.id).expect("unknown channel"); + assert!(entry.idle_since.is_none(), "active channel marked idle"); assert!(entry.clients > 0, "channel underflow"); entry.clients -= 1; + if entry.clients == 0 { + entry.idle_since = Some(Instant::now()); // mark channel as idle + } } } @@ -179,8 +233,6 @@ impl Drop for ChannelGuard { /// number of concurrent clients to `max_clients` via semaphore. /// /// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads. -/// -/// TODO: reap idle clients. pub struct ClientPool { /// Tenant ID. tenant_id: TenantId, @@ -201,6 +253,8 @@ pub struct ClientPool { /// lower-ordered channels. This allows us to free up and reap higher-numbered channels as idle /// clients are reaped. idle: Mutex>, + /// Reaps idle clients. + idle_reaper: Reaper, /// Unique client ID generator. next_client_id: AtomicUsize, } @@ -212,6 +266,9 @@ struct ClientEntry { client: page_api::Client, /// The channel guard for the channel used by the client. channel_guard: ChannelGuard, + /// The client has been idle since this time. All clients in `ClientPool::idle` are idle by + /// definition, so this is the time when it was added back to the pool. + idle_since: Instant, } impl ClientPool { @@ -226,16 +283,19 @@ impl ClientPool { auth_token: Option, max_clients: Option>, ) -> Arc { - Arc::new(Self { + let pool = Arc::new(Self { tenant_id, timeline_id, shard_id, auth_token, channel_pool, idle: Mutex::default(), + idle_reaper: Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL), limiter: max_clients.map(|max| Arc::new(Semaphore::new(max.get()))), next_client_id: AtomicUsize::default(), - }) + }); + pool.idle_reaper.spawn(&pool); + pool } /// Gets a client from the pool, or creates a new one if necessary. 
Connections are established @@ -287,6 +347,16 @@ impl ClientPool { } } +impl Reapable for ClientPool { + /// Reaps clients that have been idle since before the cutoff. + fn reap_idle(&self, cutoff: Instant) { + self.idle + .lock() + .unwrap() + .retain(|_, entry| entry.idle_since >= cutoff) + } +} + /// A client acquired from the pool. The inner client can be accessed via Deref. The client is /// returned to the pool when dropped. pub struct ClientGuard { @@ -317,9 +387,11 @@ impl Drop for ClientGuard { let Some(pool) = self.pool.upgrade() else { return; // pool was dropped }; + let entry = ClientEntry { client: self.client.take().expect("dropped once"), channel_guard: self.channel_guard.take().expect("dropped once"), + idle_since: Instant::now(), }; pool.idle.lock().unwrap().insert(self.id, entry); @@ -334,7 +406,6 @@ impl Drop for ClientGuard { /// a single request and await the response. Internally, requests are multiplexed across streams and /// channels. This allows proper queue depth enforcement and response routing. /// -/// TODO: reap idle streams. /// TODO: consider making this generic over request and response types; not currently needed. pub struct StreamPool { /// The client pool to acquire clients from. Must be unbounded. @@ -344,7 +415,7 @@ pub struct StreamPool { /// Incoming requests will be sent over an existing stream with available capacity. If all /// streams are full, a new one is spun up and added to the pool (up to `max_streams`). Each /// stream has an associated Tokio task that processes requests and responses. - streams: Arc>>, + streams: Mutex>, /// The max number of concurrent streams, or None if unbounded. max_streams: Option>, /// The max number of concurrent requests per stream. @@ -352,6 +423,8 @@ pub struct StreamPool { /// Limits the max number of concurrent requests, given by `max_streams * max_queue_depth`. /// None if the pool is unbounded. limiter: Option>, + /// Reaps idle streams. 
+ idle_reaper: Reaper, /// Stream ID generator. next_stream_id: AtomicUsize, } @@ -364,9 +437,11 @@ type ResponseSender = oneshot::Sender>; struct StreamEntry { /// Sends caller requests to the stream task. The stream task exits when this is dropped. sender: RequestSender, - /// Number of in-flight requests on this stream. This is an atomic to allow decrementing it on - /// completion without acquiring the `StreamPool::streams` lock. - queue_depth: Arc, + /// Number of in-flight requests on this stream. + queue_depth: usize, + /// The time when this stream went idle (queue_depth == 0). + /// INVARIANT: Some if queue_depth == 0, otherwise None. + idle_since: Option, } impl StreamPool { @@ -383,16 +458,19 @@ impl StreamPool { max_queue_depth: NonZero, ) -> Arc { assert!(client_pool.limiter.is_none(), "bounded client pool"); - Arc::new(Self { + let pool = Arc::new(Self { client_pool, - streams: Arc::default(), + streams: Mutex::default(), limiter: max_streams.map(|max_streams| { Arc::new(Semaphore::new(max_streams.get() * max_queue_depth.get())) }), max_streams, max_queue_depth, + idle_reaper: Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL), next_stream_id: AtomicUsize::default(), - }) + }); + pool.idle_reaper.spawn(&pool); + pool } /// Acquires an available stream from the pool, or spins up a new stream async if all streams @@ -412,8 +490,8 @@ impl StreamPool { /// * Allow concurrent clients to join onto streams while they're spun up. /// * Allow spinning up multiple streams concurrently, but don't overshoot limits. /// - /// For now, we just do something simple and functional, but very inefficient (linear scan). - pub async fn get(&self) -> StreamGuard { + /// For now, we just do something simple but inefficient (linear scan under mutex). + pub async fn get(self: &Arc) -> StreamGuard { // Acquire a permit if the pool is bounded. 
let mut permit = None; if let Some(limiter) = self.limiter.clone() { @@ -422,23 +500,23 @@ impl StreamPool { let mut streams = self.streams.lock().unwrap(); // Look for a pooled stream with available capacity. - for entry in streams.values() { + for (&id, entry) in streams.iter_mut() { assert!( - entry.queue_depth.load(Ordering::Relaxed) <= self.max_queue_depth.get(), + entry.queue_depth <= self.max_queue_depth.get(), "stream queue overflow" ); - if entry - .queue_depth - .fetch_update(Ordering::SeqCst, Ordering::SeqCst, |queue_depth| { - // Increment the queue depth via compare-and-swap. - // TODO: review ordering. - (queue_depth < self.max_queue_depth.get()).then_some(queue_depth + 1) - }) - .is_ok() - { + assert_eq!( + entry.idle_since.is_some(), + entry.queue_depth == 0, + "incorrect stream idle state" + ); + if entry.queue_depth < self.max_queue_depth.get() { + entry.queue_depth += 1; + entry.idle_since = None; return StreamGuard { + pool: Arc::downgrade(self), + id, sender: entry.sender.clone(), - queue_depth: entry.queue_depth.clone(), permit, }; } @@ -448,11 +526,11 @@ impl StreamPool { // return the guard, while spinning up the stream task async. This allows other callers to // join onto this stream and also create additional streams concurrently if this fills up. 
let id = self.next_stream_id.fetch_add(1, Ordering::Relaxed); - let queue_depth = Arc::new(AtomicUsize::new(1)); // reserve quota for this caller let (req_tx, req_rx) = mpsc::channel(self.max_queue_depth.get()); let entry = StreamEntry { sender: req_tx.clone(), - queue_depth: queue_depth.clone(), + queue_depth: 1, // reserve quota for this caller + idle_since: None, }; streams.insert(id, entry); @@ -461,20 +539,23 @@ impl StreamPool { }; let client_pool = self.client_pool.clone(); - let streams = self.streams.clone(); + let pool = Arc::downgrade(self); tokio::spawn(async move { if let Err(err) = Self::run_stream(client_pool, req_rx).await { error!("stream failed: {err}"); } - // Remove stream from pool on exit. - let entry = streams.lock().unwrap().remove(&id); - assert!(entry.is_some(), "unknown stream ID: {id}"); + // Remove stream from pool on exit. Weak reference to avoid holding the pool alive. + if let Some(pool) = pool.upgrade() { + let entry = pool.streams.lock().unwrap().remove(&id); + assert!(entry.is_some(), "unknown stream ID: {id}"); + } }); StreamGuard { + pool: Arc::downgrade(self), + id, sender: req_tx, - queue_depth, permit, } } @@ -552,11 +633,26 @@ impl StreamPool { } } +impl Reapable for StreamPool { + /// Reaps streams that have been idle since before the cutoff. + fn reap_idle(&self, cutoff: Instant) { + self.streams.lock().unwrap().retain(|_, entry| { + let Some(idle_since) = entry.idle_since else { + assert_ne!(entry.queue_depth, 0, "empty stream not marked idle"); + return true; + }; + assert_eq!(entry.queue_depth, 0, "idle stream has requests"); + idle_since >= cutoff + }); + } +} + /// A pooled stream reference. Can be used to send a single request, to properly enforce queue /// depth. Queue depth is already reserved and will be returned on drop. 
pub struct StreamGuard { + pool: Weak, + id: StreamID, sender: RequestSender, - queue_depth: Arc, permit: Option, // None if pool is unbounded } @@ -588,11 +684,78 @@ impl StreamGuard { impl Drop for StreamGuard { fn drop(&mut self) { + let Some(pool) = self.pool.upgrade() else { + return; // pool was dropped + }; + // Release the queue depth reservation on drop. This can prematurely decrement it if dropped // before the response is received, but that's okay. - let prev_queue_depth = self.queue_depth.fetch_sub(1, Ordering::SeqCst); - assert!(prev_queue_depth > 0, "stream queue underflow"); + let mut streams = pool.streams.lock().unwrap(); + let entry = streams.get_mut(&self.id).expect("unknown stream"); + assert!(entry.idle_since.is_none(), "active stream marked idle"); + assert!(entry.queue_depth > 0, "stream queue underflow"); + entry.queue_depth -= 1; + if entry.queue_depth == 0 { + entry.idle_since = Some(Instant::now()); // mark stream as idle + } _ = self.permit; // returned on drop, referenced for visibility } } + +/// Periodically reaps idle resources from a pool. +struct Reaper { + /// The task check interval. + interval: Duration, + /// The threshold for reaping idle resources. + threshold: Duration, + /// Cancels the reaper task. Cancelled when the reaper is dropped. + cancel: CancellationToken, +} + +impl Reaper { + /// Creates a new reaper. + pub fn new(threshold: Duration, interval: Duration) -> Self { + Self { + cancel: CancellationToken::new(), + threshold, + interval, + } + } + + /// Spawns a task to periodically reap idle resources from the given task pool. The task is + /// cancelled when the reaper is dropped. + pub fn spawn(&self, pool: &Arc) { + // NB: hold a weak pool reference, otherwise the task will prevent dropping the pool. + let pool = Arc::downgrade(pool); + let cancel = self.cancel.clone(); + let (interval, threshold) = (self.interval, self.threshold); + + tokio::spawn(async move { + loop { + tokio::select! 
{ + _ = tokio::time::sleep(interval) => { + let Some(pool) = pool.upgrade() else { + return; // pool was dropped + }; + pool.reap_idle(Instant::now() - threshold); + } + + _ = cancel.cancelled() => return, + } + } + }); + } +} + +impl Drop for Reaper { + fn drop(&mut self) { + self.cancel.cancel(); // cancel reaper task + } +} + +/// A reapable resource pool. +trait Reapable: Send + Sync + 'static { + /// Reaps resources that have been idle since before the given cutoff. + fn reap_idle(&self, cutoff: Instant); +} From bdca5b500b078eb9afb528fd464f496e07c97024 Mon Sep 17 00:00:00 2001 From: Mikhail Date: Thu, 10 Jul 2025 12:11:53 +0100 Subject: [PATCH 50/65] Fix test_lfc_prewarm: reduce number of prewarms, sleep before LFC offloading (#12515) Fixes: - Sleep before LFC offloading in `test_lfc_prewarm[autoprewarm]` to ensure offloaded LFC is the one exported after all writes finish - Reduce number of prewarms and increase timeout in `test_lfc_prewarm_under_workload` as debug builds were failing due to timeout. 
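The timeout bump mentioned above hinges on a polling helper. A minimal sketch of such a `wait_until`-style loop (hypothetical signature — the real test fixture may differ) shows why slower debug builds simply need a larger time budget rather than a weaker assertion:

```python
import time

def wait_until(check, timeout=20.0, interval=0.5, clock=time.monotonic, sleep=time.sleep):
    """Poll `check` until it stops raising AssertionError or `timeout` elapses."""
    deadline = clock() + timeout
    while True:
        try:
            return check()
        except AssertionError:
            if clock() >= deadline:
                raise  # budget exhausted: surface the last failure
            sleep(interval)
```

A debug build that takes longer per iteration just consumes more of the budget before `check` finally passes, which is why the fix raises the timeout from the default to 40s instead of loosening the condition being checked.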
Additional changes: - Remove `check_pinned_entries`: https://github.com/neondatabase/neon/pull/12447#discussion_r2185946210 - Fix LFC error metrics description: https://github.com/neondatabase/neon/pull/12486#discussion_r2190763107 --- compute_tools/src/metrics.rs | 4 +-- test_runner/regress/test_lfc_prewarm.py | 44 +++++++++++-------------- 2 files changed, 21 insertions(+), 27 deletions(-) diff --git a/compute_tools/src/metrics.rs b/compute_tools/src/metrics.rs index 91dedbb42a..6e4df73c0f 100644 --- a/compute_tools/src/metrics.rs +++ b/compute_tools/src/metrics.rs @@ -108,7 +108,7 @@ pub(crate) static LFC_PREWARMS: Lazy = Lazy::new(|| { pub(crate) static LFC_PREWARM_ERRORS: Lazy = Lazy::new(|| { register_int_counter!( "compute_ctl_lfc_prewarm_errors_total", - "Total number of LFC prewarms errors requested by compute_ctl or autoprewarm option", + "Total number of LFC prewarm errors", ) .expect("failed to define a metric") }); @@ -124,7 +124,7 @@ pub(crate) static LFC_OFFLOADS: Lazy = Lazy::new(|| { pub(crate) static LFC_OFFLOAD_ERRORS: Lazy = Lazy::new(|| { register_int_counter!( "compute_ctl_lfc_offload_errors_total", - "Total number of LFC offload errors requested by compute_ctl or lfc_offload_period_seconds option", + "Total number of LFC offload errors", ) .expect("failed to define a metric") }); diff --git a/test_runner/regress/test_lfc_prewarm.py b/test_runner/regress/test_lfc_prewarm.py index ae36bbda79..22e5bf576f 100644 --- a/test_runner/regress/test_lfc_prewarm.py +++ b/test_runner/regress/test_lfc_prewarm.py @@ -1,6 +1,7 @@ import random import threading from enum import StrEnum +from time import sleep from typing import Any import pytest @@ -24,18 +25,7 @@ OFFLOAD_LABEL = "compute_ctl_lfc_offloads_total" OFFLOAD_ERR_LABEL = "compute_ctl_lfc_offload_errors_total" METHOD_VALUES = [e for e in PrewarmMethod] METHOD_IDS = [e.value for e in PrewarmMethod] - - -def check_pinned_entries(cur: Cursor): - """ - Wait till none of LFC buffers are pinned - """ - - 
def none_pinned(): - cur.execute("select lfc_value from neon_lfc_stats where lfc_key='file_cache_chunks_pinned'") - assert cur.fetchall()[0][0] == 0 - - wait_until(none_pinned) +AUTOOFFLOAD_INTERVAL_SECS = 2 def prom_parse(client: EndpointHttpClient) -> dict[str, float]: @@ -49,9 +39,18 @@ def prom_parse(client: EndpointHttpClient) -> dict[str, float]: def offload_lfc(method: PrewarmMethod, client: EndpointHttpClient, cur: Cursor) -> Any: + if method == PrewarmMethod.POSTGRES: + cur.execute("select get_local_cache_state()") + return cur.fetchall()[0][0] + if method == PrewarmMethod.AUTOPREWARM: + # With autoprewarm, we need to be sure LFC was offloaded after all writes + # finish, so we sleep. Otherwise we'll have less prewarmed pages than we want + sleep(AUTOOFFLOAD_INTERVAL_SECS) client.offload_lfc_wait() - elif method == PrewarmMethod.COMPUTE_CTL: + return + + if method == PrewarmMethod.COMPUTE_CTL: status = client.prewarm_lfc_status() assert status["status"] == "not_prewarmed" assert "error" not in status @@ -60,11 +59,9 @@ def offload_lfc(method: PrewarmMethod, client: EndpointHttpClient, cur: Cursor) parsed = prom_parse(client) desired = {OFFLOAD_LABEL: 1, PREWARM_LABEL: 0, OFFLOAD_ERR_LABEL: 0, PREWARM_ERR_LABEL: 0} assert parsed == desired, f"{parsed=} != {desired=}" - elif method == PrewarmMethod.POSTGRES: - cur.execute("select get_local_cache_state()") - return cur.fetchall()[0][0] - else: - raise AssertionError(f"{method} not in PrewarmMethod") + return + + raise AssertionError(f"{method} not in PrewarmMethod") def prewarm_endpoint( @@ -106,14 +103,13 @@ def test_lfc_prewarm(neon_simple_env: NeonEnv, method: PrewarmMethod): "neon.file_cache_size_limit=1GB", "neon.file_cache_prewarm_limit=1000", ] - offload_secs = 2 if method == PrewarmMethod.AUTOPREWARM: endpoint = env.endpoints.create_start( branch_name="main", config_lines=cfg, autoprewarm=True, - offload_lfc_interval_seconds=offload_secs, + offload_lfc_interval_seconds=AUTOOFFLOAD_INTERVAL_SECS, ) 
else: endpoint = env.endpoints.create_start(branch_name="main", config_lines=cfg) @@ -135,7 +131,7 @@ def test_lfc_prewarm(neon_simple_env: NeonEnv, method: PrewarmMethod): endpoint.stop() if method == PrewarmMethod.AUTOPREWARM: - endpoint.start(autoprewarm=True, offload_lfc_interval_seconds=offload_secs) + endpoint.start(autoprewarm=True, offload_lfc_interval_seconds=AUTOOFFLOAD_INTERVAL_SECS) else: endpoint.start() @@ -162,7 +158,6 @@ def test_lfc_prewarm(neon_simple_env: NeonEnv, method: PrewarmMethod): lfc_cur.execute("select sum(pk) from t") assert lfc_cur.fetchall()[0][0] == n_records * (n_records + 1) / 2 - check_pinned_entries(pg_cur) desired = {"status": "completed", "total": total, "prewarmed": prewarmed, "skipped": skipped} check_prewarmed(method, client, desired) @@ -243,9 +238,9 @@ def test_lfc_prewarm_under_workload(neon_simple_env: NeonEnv, method: PrewarmMet prewarm_thread.start() def prewarmed(): - assert n_prewarms > 5 + assert n_prewarms > 3 - wait_until(prewarmed) + wait_until(prewarmed, timeout=40) # debug builds don't finish in 20s running = False for t in workload_threads: @@ -256,7 +251,6 @@ def test_lfc_prewarm_under_workload(neon_simple_env: NeonEnv, method: PrewarmMet total_balance = lfc_cur.fetchall()[0][0] assert total_balance == 0 - check_pinned_entries(pg_cur) if method == PrewarmMethod.POSTGRES: return desired = { From ffeede085e3008616872372ac98edcef573c8677 Mon Sep 17 00:00:00 2001 From: Vlad Lazar Date: Thu, 10 Jul 2025 12:58:22 +0100 Subject: [PATCH 51/65] libs: move metric collection for pageserver and safekeeper in a background task (#12525) ## Problem Safekeeper and pageserver metrics collection might time out. We've seen this in both hadron and neon. ## Summary of changes This PR moves metrics collection in PS/SK to the background so that we will always get some metrics, despite there may be some delays. Will leave it to the future work to reduce metrics collection time. 
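The caching scheme this patch introduces — a background task gathers metrics on an interval, and the scrape endpoint serves the last snapshot unless the caller forces a fresh collection — can be sketched as follows (names and shapes are illustrative, not the actual Rust API):

```python
import threading
import time

class MetricsCollector:
    """Caches the most recent metrics snapshot so scrapes never block on gathering."""

    def __init__(self, gather):
        self._gather = gather  # callable returning the current metric families
        self._lock = threading.Lock()
        self._last = ([], time.monotonic())  # (metrics, collected_at)

    def run_once(self, cache=True):
        snapshot = (self._gather(), time.monotonic())
        if cache:
            with self._lock:
                self._last = snapshot
        return snapshot

    def last_collected(self):
        with self._lock:
            return self._last

def handle_scrape(collector, use_latest):
    # use_latest=True forces a fresh collection; otherwise serve the cached
    # snapshot and report how stale it is, mirroring the `use_latest` query
    # parameter and the staleness gauge added by this patch.
    if use_latest:
        metrics, collected_at = collector.run_once(cache=False)
    else:
        metrics, collected_at = collector.last_collected()
    stale_ms = int((time.monotonic() - collected_at) * 1000)
    return metrics, stale_ms
```

The key property is that a slow `gather` only delays the background task, never the scrape itself — at worst the scraper sees a snapshot up to one collection interval old.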
--------- Co-authored-by: Chen Luo --- libs/http-utils/src/endpoint.rs | 37 ++++++++- libs/pageserver_api/src/config.rs | 2 + libs/utils/src/lib.rs | 2 + libs/utils/src/metrics_collector.rs | 75 +++++++++++++++++++ pageserver/src/bin/pageserver.rs | 41 +++++++++- pageserver/src/config.rs | 6 ++ pageserver/src/http/routes.rs | 7 +- pageserver/src/lib.rs | 12 +++ safekeeper/src/bin/safekeeper.rs | 27 +++++++ safekeeper/src/http/routes.rs | 9 ++- safekeeper/src/lib.rs | 2 + safekeeper/src/wal_backup.rs | 2 +- .../tests/walproposer_sim/safekeeper.rs | 1 + test_runner/fixtures/pageserver/http.py | 2 +- test_runner/fixtures/safekeeper/http.py | 2 +- 15 files changed, 217 insertions(+), 10 deletions(-) create mode 100644 libs/utils/src/metrics_collector.rs diff --git a/libs/http-utils/src/endpoint.rs b/libs/http-utils/src/endpoint.rs index f32ced1180..a61bf8e08a 100644 --- a/libs/http-utils/src/endpoint.rs +++ b/libs/http-utils/src/endpoint.rs @@ -20,6 +20,7 @@ use tokio_stream::wrappers::ReceiverStream; use tokio_util::io::ReaderStream; use tracing::{Instrument, debug, info, info_span, warn}; use utils::auth::{AuthError, Claims, SwappableJwtAuth}; +use utils::metrics_collector::{METRICS_COLLECTOR, METRICS_STALE_MILLIS}; use crate::error::{ApiError, api_error_handler, route_error_handler}; use crate::request::{get_query_param, parse_query_param}; @@ -250,9 +251,28 @@ impl std::io::Write for ChannelWriter { } } -pub async fn prometheus_metrics_handler(_req: Request) -> Result, ApiError> { +pub async fn prometheus_metrics_handler( + req: Request, + force_metric_collection_on_scrape: bool, +) -> Result, ApiError> { SERVE_METRICS_COUNT.inc(); + // HADRON + let requested_use_latest = parse_query_param(&req, "use_latest")?; + + let use_latest = match requested_use_latest { + None => force_metric_collection_on_scrape, + Some(true) => true, + Some(false) => { + if force_metric_collection_on_scrape { + // We don't cache in this case + true + } else { + false + } + } + }; + let 
started_at = std::time::Instant::now(); let (tx, rx) = mpsc::channel(1); @@ -277,12 +297,18 @@ pub async fn prometheus_metrics_handler(_req: Request) -> Result) -> Result { tracing::info!( @@ -303,6 +333,7 @@ pub async fn prometheus_metrics_handler(_req: Request) -> Result, #[serde(skip_serializing_if = "Option::is_none")] pub image_layer_generation_large_timeline_threshold: Option, + pub force_metric_collection_on_scrape: bool, } #[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)] @@ -831,6 +832,7 @@ impl Default for ConfigToml { basebackup_cache_config: None, posthog_config: None, image_layer_generation_large_timeline_threshold: Some(2 * 1024 * 1024 * 1024), + force_metric_collection_on_scrape: true, } } } diff --git a/libs/utils/src/lib.rs b/libs/utils/src/lib.rs index 11f787562c..2b81da017d 100644 --- a/libs/utils/src/lib.rs +++ b/libs/utils/src/lib.rs @@ -99,6 +99,8 @@ pub mod elapsed_accum; #[cfg(target_os = "linux")] pub mod linux_socket_ioctl; +pub mod metrics_collector; + // Re-export used in macro. Avoids adding git-version as dep in target crates. 
#[doc(hidden)] pub use git_version; diff --git a/libs/utils/src/metrics_collector.rs b/libs/utils/src/metrics_collector.rs new file mode 100644 index 0000000000..9e57fcd643 --- /dev/null +++ b/libs/utils/src/metrics_collector.rs @@ -0,0 +1,75 @@ +use std::{ + sync::{Arc, RwLock}, + time::{Duration, Instant}, +}; + +use metrics::{IntGauge, proto::MetricFamily, register_int_gauge}; +use once_cell::sync::Lazy; + +pub static METRICS_STALE_MILLIS: Lazy = Lazy::new(|| { + register_int_gauge!( + "metrics_metrics_stale_milliseconds", + "The current metrics stale time in milliseconds" + ) + .expect("failed to define a metric") +}); + +#[derive(Debug)] +pub struct CollectedMetrics { + pub metrics: Vec, + pub collected_at: Instant, +} + +impl CollectedMetrics { + fn new(metrics: Vec) -> Self { + Self { + metrics, + collected_at: Instant::now(), + } + } +} + +#[derive(Debug)] +pub struct MetricsCollector { + last_collected: RwLock>, +} + +impl MetricsCollector { + pub fn new() -> Self { + Self { + last_collected: RwLock::new(Arc::new(CollectedMetrics::new(vec![]))), + } + } + + #[tracing::instrument(name = "metrics_collector", skip_all)] + pub fn run_once(&self, cache_metrics: bool) -> Arc { + let started = Instant::now(); + let metrics = metrics::gather(); + let collected = Arc::new(CollectedMetrics::new(metrics)); + if cache_metrics { + let mut guard = self.last_collected.write().unwrap(); + *guard = collected.clone(); + } + tracing::info!( + "Collected {} metric families in {} ms", + collected.metrics.len(), + started.elapsed().as_millis() + ); + collected + } + + pub fn last_collected(&self) -> Arc { + self.last_collected.read().unwrap().clone() + } +} + +impl Default for MetricsCollector { + fn default() -> Self { + Self::new() + } +} + +// Interval for metrics collection. 
Currently hard-coded to be the same as the metrics scape interval from the obs agent +pub static METRICS_COLLECTION_INTERVAL: Duration = Duration::from_secs(30); + +pub static METRICS_COLLECTOR: Lazy = Lazy::new(MetricsCollector::default); diff --git a/pageserver/src/bin/pageserver.rs b/pageserver/src/bin/pageserver.rs index 78aba25d2e..299fe7e159 100644 --- a/pageserver/src/bin/pageserver.rs +++ b/pageserver/src/bin/pageserver.rs @@ -29,8 +29,8 @@ use pageserver::task_mgr::{ }; use pageserver::tenant::{TenantSharedResources, mgr, secondary}; use pageserver::{ - CancellableTask, ConsumptionMetricsTasks, HttpEndpointListener, HttpsEndpointListener, http, - page_cache, page_service, task_mgr, virtual_file, + CancellableTask, ConsumptionMetricsTasks, HttpEndpointListener, HttpsEndpointListener, + MetricsCollectionTask, http, page_cache, page_service, task_mgr, virtual_file, }; use postgres_backend::AuthType; use remote_storage::GenericRemoteStorage; @@ -41,6 +41,7 @@ use tracing_utils::OtelGuard; use utils::auth::{JwtAuth, SwappableJwtAuth}; use utils::crashsafe::syncfs; use utils::logging::TracingErrorLayerEnablement; +use utils::metrics_collector::{METRICS_COLLECTION_INTERVAL, METRICS_COLLECTOR}; use utils::sentry_init::init_sentry; use utils::{failpoint_support, logging, project_build_tag, project_git_version, tcp_listener}; @@ -763,6 +764,41 @@ fn start_pageserver( (http_task, https_task) }; + /* BEGIN_HADRON */ + let metrics_collection_task = { + let cancel = shutdown_pageserver.child_token(); + let task = crate::BACKGROUND_RUNTIME.spawn({ + let cancel = cancel.clone(); + let background_jobs_barrier = background_jobs_barrier.clone(); + async move { + if conf.force_metric_collection_on_scrape { + return; + } + + // first wait until background jobs are cleared to launch. + tokio::select! 
{ + _ = cancel.cancelled() => { return; }, + _ = background_jobs_barrier.wait() => {} + }; + let mut interval = tokio::time::interval(METRICS_COLLECTION_INTERVAL); + loop { + tokio::select! { + _ = cancel.cancelled() => { + tracing::info!("cancelled metrics collection task, exiting..."); + break; + }, + _ = interval.tick() => {} + } + tokio::task::spawn_blocking(|| { + METRICS_COLLECTOR.run_once(true); + }); + } + } + }); + MetricsCollectionTask(CancellableTask { task, cancel }) + }; + /* END_HADRON */ + let consumption_metrics_tasks = { let cancel = shutdown_pageserver.child_token(); let task = crate::BACKGROUND_RUNTIME.spawn({ @@ -844,6 +880,7 @@ fn start_pageserver( https_endpoint_listener, page_service, page_service_grpc, + metrics_collection_task, consumption_metrics_tasks, disk_usage_eviction_task, &tenant_manager, diff --git a/pageserver/src/config.rs b/pageserver/src/config.rs index f64c5838ff..bb73ae1dd5 100644 --- a/pageserver/src/config.rs +++ b/pageserver/src/config.rs @@ -256,6 +256,10 @@ pub struct PageServerConf { /// Defines what is a big tenant for the purpose of image layer generation. /// See Timeline::should_check_if_image_layers_required pub image_layer_generation_large_timeline_threshold: Option, + + /// Controls whether to collect all metrics on each scrape or to return potentially stale + /// results. 
+ pub force_metric_collection_on_scrape: bool, } /// Token for authentication to safekeepers @@ -437,6 +441,7 @@ impl PageServerConf { timeline_import_config, basebackup_cache_config, image_layer_generation_large_timeline_threshold, + force_metric_collection_on_scrape, } = config_toml; let mut conf = PageServerConf { @@ -496,6 +501,7 @@ impl PageServerConf { timeline_import_config, basebackup_cache_config, image_layer_generation_large_timeline_threshold, + force_metric_collection_on_scrape, // ------------------------------------------------------------ // fields that require additional validation or custom handling diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index 767bba49e2..ed0a5440cb 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -3938,9 +3938,14 @@ pub fn make_router( .expect("construct launch timestamp header middleware"), ); + let force_metric_collection_on_scrape = state.conf.force_metric_collection_on_scrape; + + let prometheus_metrics_handler_wrapper = + move |req| prometheus_metrics_handler(req, force_metric_collection_on_scrape); + Ok(router .data(state) - .get("/metrics", |r| request_span(r, prometheus_metrics_handler)) + .get("/metrics", move |r| request_span(r, prometheus_metrics_handler_wrapper)) .get("/profile/cpu", |r| request_span(r, profile_cpu_handler)) .get("/profile/heap", |r| request_span(r, profile_heap_handler)) .get("/v1/status", |r| api_handler(r, status_handler)) diff --git a/pageserver/src/lib.rs b/pageserver/src/lib.rs index 0dd3c465e0..0864026f6b 100644 --- a/pageserver/src/lib.rs +++ b/pageserver/src/lib.rs @@ -73,6 +73,9 @@ pub struct HttpEndpointListener(pub CancellableTask); pub struct HttpsEndpointListener(pub CancellableTask); pub struct ConsumptionMetricsTasks(pub CancellableTask); pub struct DiskUsageEvictionTask(pub CancellableTask); +// HADRON +pub struct MetricsCollectionTask(pub CancellableTask); + impl CancellableTask { pub async fn shutdown(self) { 
self.cancel.cancel(); @@ -87,6 +90,7 @@ pub async fn shutdown_pageserver( https_listener: Option, page_service: page_service::Listener, grpc_task: Option, + metrics_collection_task: MetricsCollectionTask, consumption_metrics_worker: ConsumptionMetricsTasks, disk_usage_eviction_task: Option, tenant_manager: &TenantManager, @@ -211,6 +215,14 @@ pub async fn shutdown_pageserver( // Best effort to persist any outstanding deletions, to avoid leaking objects deletion_queue.shutdown(Duration::from_secs(5)).await; + // HADRON + timed( + metrics_collection_task.0.shutdown(), + "shutdown metrics collections metrics", + Duration::from_secs(1), + ) + .await; + timed( consumption_metrics_worker.0.shutdown(), "shutdown consumption metrics", diff --git a/safekeeper/src/bin/safekeeper.rs b/safekeeper/src/bin/safekeeper.rs index 8fda625817..b2d5976ef4 100644 --- a/safekeeper/src/bin/safekeeper.rs +++ b/safekeeper/src/bin/safekeeper.rs @@ -37,6 +37,7 @@ use tracing::*; use utils::auth::{JwtAuth, Scope, SwappableJwtAuth}; use utils::id::NodeId; use utils::logging::{self, LogFormat, SecretString}; +use utils::metrics_collector::{METRICS_COLLECTION_INTERVAL, METRICS_COLLECTOR}; use utils::sentry_init::init_sentry; use utils::{pid_file, project_build_tag, project_git_version, tcp_listener}; @@ -243,6 +244,11 @@ struct Args { #[arg(long)] enable_tls_wal_service_api: bool, + /// Controls whether to collect all metrics on each scrape or to return potentially stale + /// results. 
+ #[arg(long, default_value_t = true)] + force_metric_collection_on_scrape: bool, + /// Run in development mode (disables security checks) #[arg(long, help = "Run in development mode (disables security checks)")] dev: bool, @@ -428,6 +434,7 @@ async fn main() -> anyhow::Result<()> { ssl_ca_certs, use_https_safekeeper_api: args.use_https_safekeeper_api, enable_tls_wal_service_api: args.enable_tls_wal_service_api, + force_metric_collection_on_scrape: args.force_metric_collection_on_scrape, }); // initialize sentry if SENTRY_DSN is provided @@ -640,6 +647,26 @@ async fn start_safekeeper(conf: Arc) -> Result<()> { .map(|res| ("broker main".to_owned(), res)); tasks_handles.push(Box::pin(broker_task_handle)); + /* BEGIN_HADRON */ + if conf.force_metric_collection_on_scrape { + let metrics_handle = current_thread_rt + .as_ref() + .unwrap_or_else(|| BACKGROUND_RUNTIME.handle()) + .spawn(async move { + let mut interval: tokio::time::Interval = + tokio::time::interval(METRICS_COLLECTION_INTERVAL); + loop { + interval.tick().await; + tokio::task::spawn_blocking(|| { + METRICS_COLLECTOR.run_once(true); + }); + } + }) + .map(|res| ("broker main".to_owned(), res)); + tasks_handles.push(Box::pin(metrics_handle)); + } + /* END_HADRON */ + set_build_info_metric(GIT_VERSION, BUILD_TAG); // TODO: update tokio-stream, convert to real async Stream with diff --git a/safekeeper/src/http/routes.rs b/safekeeper/src/http/routes.rs index 384c582678..4b061c65d9 100644 --- a/safekeeper/src/http/routes.rs +++ b/safekeeper/src/http/routes.rs @@ -699,6 +699,11 @@ pub fn make_router( })) } + let force_metric_collection_on_scrape = conf.force_metric_collection_on_scrape; + + let prometheus_metrics_handler_wrapper = + move |req| prometheus_metrics_handler(req, force_metric_collection_on_scrape); + // NB: on any changes do not forget to update the OpenAPI spec // located nearby (/safekeeper/src/http/openapi_spec.yaml). 
let auth = conf.http_auth.clone(); @@ -706,7 +711,9 @@ pub fn make_router( .data(conf) .data(global_timelines) .data(auth) - .get("/metrics", |r| request_span(r, prometheus_metrics_handler)) + .get("/metrics", move |r| { + request_span(r, prometheus_metrics_handler_wrapper) + }) .get("/profile/cpu", |r| request_span(r, profile_cpu_handler)) .get("/profile/heap", |r| request_span(r, profile_heap_handler)) .get("/v1/status", |r| request_span(r, status_handler)) diff --git a/safekeeper/src/lib.rs b/safekeeper/src/lib.rs index c461c071da..c0b5403ebf 100644 --- a/safekeeper/src/lib.rs +++ b/safekeeper/src/lib.rs @@ -134,6 +134,7 @@ pub struct SafeKeeperConf { pub ssl_ca_certs: Vec, pub use_https_safekeeper_api: bool, pub enable_tls_wal_service_api: bool, + pub force_metric_collection_on_scrape: bool, } impl SafeKeeperConf { @@ -183,6 +184,7 @@ impl SafeKeeperConf { ssl_ca_certs: Vec::new(), use_https_safekeeper_api: false, enable_tls_wal_service_api: false, + force_metric_collection_on_scrape: true, } } } diff --git a/safekeeper/src/wal_backup.rs b/safekeeper/src/wal_backup.rs index 7e10847a1b..0e8dfd64c3 100644 --- a/safekeeper/src/wal_backup.rs +++ b/safekeeper/src/wal_backup.rs @@ -166,7 +166,7 @@ fn hadron_determine_offloader(mgr: &Manager, state: &StateSnapshot) -> (Option) -> Result<()> { ssl_ca_certs: Vec::new(), use_https_safekeeper_api: false, enable_tls_wal_service_api: false, + force_metric_collection_on_scrape: true, }; let mut global = GlobalMap::new(disk, conf.clone())?; diff --git a/test_runner/fixtures/pageserver/http.py b/test_runner/fixtures/pageserver/http.py index 79cfba8da6..8e7d957b22 100644 --- a/test_runner/fixtures/pageserver/http.py +++ b/test_runner/fixtures/pageserver/http.py @@ -1002,7 +1002,7 @@ class PageserverHttpClient(requests.Session, MetricsGetter): def get_metrics_str(self) -> str: """You probably want to use get_metrics() instead.""" - res = self.get(f"http://localhost:{self.port}/metrics") + res = 
self.get(f"http://localhost:{self.port}/metrics?use_latest=true") self.verbose_error(res) return res.text diff --git a/test_runner/fixtures/safekeeper/http.py b/test_runner/fixtures/safekeeper/http.py index 942b620be6..ceb00c0f90 100644 --- a/test_runner/fixtures/safekeeper/http.py +++ b/test_runner/fixtures/safekeeper/http.py @@ -143,7 +143,7 @@ class SafekeeperHttpClient(requests.Session, MetricsGetter): def get_metrics_str(self) -> str: """You probably want to use get_metrics() instead.""" - request_result = self.get(f"http://localhost:{self.port}/metrics") + request_result = self.get(f"http://localhost:{self.port}/metrics?use_latest=true") request_result.raise_for_status() return request_result.text From d33b3c7457e1bbe15f1961ebec749249c6f77f5e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Arpad=20M=C3=BCller?= Date: Thu, 10 Jul 2025 16:03:20 +0200 Subject: [PATCH 52/65] Print viability via custom printing impl (#12544) As per https://github.com/neondatabase/neon/pull/12485#issuecomment-3056525882 , we don't want to print the viability error via a debug impl as it prints the backtrace. SafekeeperInfo doesn't have a display impl, so fall back to `Debug` for the `Ok` case. It gives single line output so it's okay to use `Debug` for it. 
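The formatting idea above — human-readable output for the error, terse debug output for the success value — can be illustrated with a Python sketch, using `str` vs `repr` as stand-ins for Rust's `Display` vs `Debug` (the error type here is hypothetical):

```python
class ViabilityError(Exception):
    """Stand-in for an error whose Debug output would include a backtrace."""
    def __repr__(self):
        # The noisy form we deliberately avoid in logs.
        return f"ViabilityError({self.args[0]!r}, backtrace=...)"

def describe_result(result):
    # Ok values get repr (single-line, safe to show); errors get str (their
    # display form), so the backtrace never reaches the log line.
    ok, value = result
    return f"Ok({value!r})" if ok else f"Err({value})"
```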
Follow up of https://github.com/neondatabase/neon/pull/12485 --- storage_controller/src/service.rs | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/storage_controller/src/service.rs b/storage_controller/src/service.rs index d2f7287be9..3844570b47 100644 --- a/storage_controller/src/service.rs +++ b/storage_controller/src/service.rs @@ -1984,11 +1984,14 @@ impl Service { }); // Check that there is enough safekeepers configured that we can create new timelines - let test_sk_res = this.safekeepers_for_new_timeline().await; + let test_sk_res_str = match this.safekeepers_for_new_timeline().await { + Ok(v) => format!("Ok({v:?})"), + Err(v) => format!("Err({v:})"), + }; tracing::info!( timeline_safekeeper_count = config.timeline_safekeeper_count, timelines_onto_safekeepers = config.timelines_onto_safekeepers, - "viability test result (test timeline creation on safekeepers): {test_sk_res:?}", + "viability test result (test timeline creation on safekeepers): {test_sk_res_str}", ); Ok(this) From be5bbaecadda71478638608c469c184aaf124bf5 Mon Sep 17 00:00:00 2001 From: "Alex Chi Z." <4198311+skyzh@users.noreply.github.com> Date: Thu, 10 Jul 2025 10:28:58 -0400 Subject: [PATCH 53/65] fix(storcon): correctly handle 404 error in lsn lease (#12537) ## Problem close LKB-253 ## Summary of changes 404 for timeline requests could happen when the tenant is intended to be on a pageserver but not attached yet. This patch adds handling for the lease request. In the future, we should extend this handling to more operations. 
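The decision can be modeled as a pure function over the observed location state (a simplified model, not the storage controller's actual types): a 404 is only retryable when the shard might simply not be attached yet.

```python
ATTACHED_MODES = ("AttachedSingle", "AttachedMulti")

def should_retry_on_404(observed_location):
    """Return True if a 404 from the pageserver may just mean 'not attached yet'."""
    if observed_location is None:
        return True  # shard not observed on the intended pageserver yet
    conf = observed_location.get("conf")
    if conf is None:
        return True  # location conf not available yet
    # Attached as primary: a 404 is unexpected, surface it to the caller.
    return conf["mode"] not in ATTACHED_MODES

def handle_lease_response(status, retry_if_not_attached):
    if status == 404 and retry_if_not_attached:
        return 503  # tell the client to retry once the attach completes
    return status
```

Mapping the retryable 404 to a 503 pushes the retry loop to the client, which matches how transient unavailability is usually signalled elsewhere in the API.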
--------- Signed-off-by: Alex Chi Z --- storage_controller/src/service.rs | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/storage_controller/src/service.rs b/storage_controller/src/service.rs index 3844570b47..9c1b81d261 100644 --- a/storage_controller/src/service.rs +++ b/storage_controller/src/service.rs @@ -4761,6 +4761,7 @@ impl Service { ) .await; + let mut retry_if_not_attached = false; let targets = { let locked = self.inner.read().unwrap(); let mut targets = Vec::new(); @@ -4777,6 +4778,24 @@ impl Service { .expect("Pageservers may not be deleted while referenced"); targets.push((*tenant_shard_id, node.clone())); + + if let Some(location) = shard.observed.locations.get(node_id) { + if let Some(ref conf) = location.conf { + if conf.mode != LocationConfigMode::AttachedSingle + && conf.mode != LocationConfigMode::AttachedMulti + { + // If the shard is attached as secondary, we need to retry if 404. + retry_if_not_attached = true; + } + // If the shard is attached as primary, we should succeed. + } else { + // Location conf is not available yet, retry if 404. + retry_if_not_attached = true; + } + } else { + // The shard is not attached to the intended pageserver yet, retry if 404. + retry_if_not_attached = true; + } } } targets @@ -4807,6 +4826,18 @@ impl Service { valid_until = Some(lease.valid_until); } } + Err(mgmt_api::Error::ApiError(StatusCode::NOT_FOUND, _)) + if retry_if_not_attached => + { + // This is expected if the attach is not finished yet. Return 503 so that the client can retry. 
+ return Err(ApiError::ResourceUnavailable( + format!( + "Timeline is not attached to the pageserver {} yet, please retry", + node.get_id() + ) + .into(), + )); + } Err(e) => { return Err(passthrough_api_error(&node, e)); } From 2c6b327be6619e66038427e5cbfe42159498a949 Mon Sep 17 00:00:00 2001 From: HaoyuHuang Date: Thu, 10 Jul 2025 07:39:38 -0700 Subject: [PATCH 54/65] A few PS changes (#12540) # TLDR All changes are no-op except some metrics. ## Summary of changes I ### Pageserver Added a new global counter metric `pageserver_pagestream_handler_results_total` that categorizes pagestream request results according to their outcomes: 1. Success 2. Internal errors 3. Other errors Internal errors include: 1. Page reconstruction error: This probably indicates a pageserver bug/corruption 2. LSN timeout error: Could indicate overload or bugs with PS's ability to reach other components 3. Misrouted request error: Indicates bugs in the Storage Controller/HCC Other errors include transient errors that are expected during normal operation or errors indicating bugs with other parts of the system (e.g., malformed requests, errors due to cancelled operations during PS shutdown, etc.) ## Summary of changes II This PR adds a pageserver endpoint and its counterpart in storage controller to list visible size of all tenant shards. This will be a prerequisite of the tenant rebalance command. ## Problem III We need a way to download WAL segments/layerfiles from S3 and replay WAL records. We cannot access production S3 from our laptops directly, and we also can't transfer any user data out of production systems for GDPR compliance, so we need solutions. ## Summary of changes III This PR adds a couple of tools to support the debugging workflow in production: 1. A new `pagectl download-remote-object` command that can be used to download remote storage objects assuming the correct access is set up. 
## Summary of changes IV This PR adds a command to list all visible delta and image layers from index_part. This is useful to debug compaction issues as index_part often contain a lot of covered layers due to PITR. --------- Co-authored-by: William Huang Co-authored-by: Chen Luo Co-authored-by: Vlad Lazar --- Cargo.lock | 1 + pageserver/client/src/mgmt_api.rs | 66 ++++++++++- pageserver/ctl/Cargo.toml | 1 + pageserver/ctl/src/download_remote_object.rs | 85 ++++++++++++++ pageserver/ctl/src/index_part.rs | 110 +++++++++++++++--- pageserver/ctl/src/main.rs | 6 + pageserver/src/http/routes.rs | 29 ++++- pageserver/src/metrics.rs | 18 +++ pageserver/src/page_service.rs | 53 ++++++++- pageserver/src/tenant.rs | 10 ++ .../src/tenant/storage_layer/layer_name.rs | 2 +- safekeeper/src/metrics.rs | 9 ++ safekeeper/src/safekeeper.rs | 19 ++- test_runner/fixtures/pageserver/http.py | 7 ++ test_runner/regress/test_pageserver_api.py | 13 +++ 15 files changed, 404 insertions(+), 25 deletions(-) create mode 100644 pageserver/ctl/src/download_remote_object.rs diff --git a/Cargo.lock b/Cargo.lock index 4150944ad0..85080f8473 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4323,6 +4323,7 @@ dependencies = [ "pageserver_api", "postgres_ffi", "remote_storage", + "serde", "serde_json", "svg_fmt", "thiserror 1.0.69", diff --git a/pageserver/client/src/mgmt_api.rs b/pageserver/client/src/mgmt_api.rs index af4be23b9b..fe1ddc2e7d 100644 --- a/pageserver/client/src/mgmt_api.rs +++ b/pageserver/client/src/mgmt_api.rs @@ -1,4 +1,4 @@ -use std::collections::HashMap; +use std::collections::{BTreeMap, HashMap}; use std::error::Error as _; use std::time::Duration; @@ -251,6 +251,70 @@ impl Client { Ok(()) } + pub async fn tenant_timeline_compact( + &self, + tenant_shard_id: TenantShardId, + timeline_id: TimelineId, + force_image_layer_creation: bool, + must_force_image_layer_creation: bool, + scheduled: bool, + wait_until_done: bool, + ) -> Result<()> { + let mut path = reqwest::Url::parse(&format!( 
+ "{}/v1/tenant/{tenant_shard_id}/timeline/{timeline_id}/compact", + self.mgmt_api_endpoint + )) + .expect("Cannot build URL"); + + if force_image_layer_creation { + path.query_pairs_mut() + .append_pair("force_image_layer_creation", "true"); + } + + if must_force_image_layer_creation { + path.query_pairs_mut() + .append_pair("must_force_image_layer_creation", "true"); + } + + if scheduled { + path.query_pairs_mut().append_pair("scheduled", "true"); + } + if wait_until_done { + path.query_pairs_mut() + .append_pair("wait_until_scheduled_compaction_done", "true"); + path.query_pairs_mut() + .append_pair("wait_until_uploaded", "true"); + } + self.request(Method::PUT, path, ()).await?; + Ok(()) + } + + /* BEGIN_HADRON */ + pub async fn tenant_timeline_describe( + &self, + tenant_shard_id: &TenantShardId, + timeline_id: &TimelineId, + ) -> Result { + let mut path = reqwest::Url::parse(&format!( + "{}/v1/tenant/{tenant_shard_id}/timeline/{timeline_id}", + self.mgmt_api_endpoint + )) + .expect("Cannot build URL"); + path.query_pairs_mut() + .append_pair("include-image-consistent-lsn", "true"); + + let response: reqwest::Response = self.request(Method::GET, path, ()).await?; + let body = response.json().await.map_err(Error::ReceiveBody)?; + Ok(body) + } + + pub async fn list_tenant_visible_size(&self) -> Result> { + let uri = format!("{}/v1/list_tenant_visible_size", self.mgmt_api_endpoint); + let resp = self.get(&uri).await?; + resp.json().await.map_err(Error::ReceiveBody) + } + /* END_HADRON */ + pub async fn tenant_scan_remote_storage( &self, tenant_id: TenantId, diff --git a/pageserver/ctl/Cargo.toml b/pageserver/ctl/Cargo.toml index 7b70f0dc87..ba34fa1f69 100644 --- a/pageserver/ctl/Cargo.toml +++ b/pageserver/ctl/Cargo.toml @@ -17,6 +17,7 @@ pageserver = { path = ".." 
} pageserver_api.workspace = true remote_storage = { path = "../../libs/remote_storage" } postgres_ffi.workspace = true +serde.workspace = true thiserror.workspace = true tokio.workspace = true tokio-util.workspace = true diff --git a/pageserver/ctl/src/download_remote_object.rs b/pageserver/ctl/src/download_remote_object.rs new file mode 100644 index 0000000000..aa09774701 --- /dev/null +++ b/pageserver/ctl/src/download_remote_object.rs @@ -0,0 +1,85 @@ +use camino::Utf8PathBuf; +use clap::Parser; +use tokio_util::sync::CancellationToken; + +/// Download a specific object from remote storage to a local file. +/// +/// The remote storage configuration is supplied via the `REMOTE_STORAGE_CONFIG` environment +/// variable, in the same TOML format that the pageserver itself understands. This allows the +/// command to work with any cloud supported by the `remote_storage` crate (currently AWS S3, +/// Azure Blob Storage and local files), as long as the credentials are available via the +/// standard environment variables expected by the underlying SDKs. +/// +/// Examples for setting the environment variable: +/// +/// ```bash +/// # AWS S3 (region can also be provided via AWS_REGION) +/// export REMOTE_STORAGE_CONFIG='remote_storage = { bucket_name = "my-bucket", bucket_region = "us-east-2" }' +/// +/// # Azure Blob Storage (account key picked up from AZURE_STORAGE_ACCOUNT_KEY) +/// export REMOTE_STORAGE_CONFIG='remote_storage = { container = "my-container", account = "my-account" }' +/// ``` +#[derive(Parser)] +pub(crate) struct DownloadRemoteObjectCmd { + /// Key / path of the object to download (relative to the remote storage prefix). + /// + /// Examples: + /// "wal/3aa8f.../00000001000000000000000A" + /// "pageserver/v1/tenants//timelines//layer_12345" + pub remote_path: String, + + /// Path of the local file to create. Existing file will be overwritten. 
+ /// + /// Examples: + /// "./segment" + /// "/tmp/layer_12345.parquet" + pub output_file: Utf8PathBuf, +} + +pub(crate) async fn main(cmd: &DownloadRemoteObjectCmd) -> anyhow::Result<()> { + use remote_storage::{DownloadOpts, GenericRemoteStorage, RemotePath, RemoteStorageConfig}; + + // Fetch remote storage configuration from the environment + let config_str = std::env::var("REMOTE_STORAGE_CONFIG").map_err(|_| { + anyhow::anyhow!( + "'REMOTE_STORAGE_CONFIG' environment variable must be set to a valid remote storage TOML config" + ) + })?; + + let config = RemoteStorageConfig::from_toml_str(&config_str)?; + + // Initialise remote storage client + let storage = GenericRemoteStorage::from_config(&config).await?; + + // RemotePath must be relative – leading slashes confuse the parser. + let remote_path_str = cmd.remote_path.trim_start_matches('/'); + let remote_path = RemotePath::from_string(remote_path_str)?; + + let cancel = CancellationToken::new(); + + println!( + "Downloading '{remote_path}' from remote storage bucket {:?} ...", + config.storage.bucket_name() + ); + + // Start the actual download + let download = storage + .download(&remote_path, &DownloadOpts::default(), &cancel) + .await?; + + // Stream to file + let mut reader = tokio_util::io::StreamReader::new(download.download_stream); + let tmp_path = cmd.output_file.with_extension("tmp"); + let mut file = tokio::fs::File::create(&tmp_path).await?; + tokio::io::copy(&mut reader, &mut file).await?; + file.sync_all().await?; + // Atomically move into place + tokio::fs::rename(&tmp_path, &cmd.output_file).await?; + + println!( + "Downloaded to '{}'. 
Last modified: {:?}, etag: {}", + cmd.output_file, download.last_modified, download.etag + ); + + Ok(()) +} diff --git a/pageserver/ctl/src/index_part.rs b/pageserver/ctl/src/index_part.rs index 838d00e490..9801f3c9dc 100644 --- a/pageserver/ctl/src/index_part.rs +++ b/pageserver/ctl/src/index_part.rs @@ -1,14 +1,16 @@ use std::str::FromStr; -use anyhow::Context; +use anyhow::{Context, Ok}; use camino::Utf8PathBuf; use pageserver::tenant::{ IndexPart, layer_map::{LayerMap, SearchResult}, - remote_timeline_client::remote_layer_path, - storage_layer::{PersistentLayerDesc, ReadableLayerWeak}, + remote_timeline_client::{index::LayerFileMetadata, remote_layer_path}, + storage_layer::{LayerName, LayerVisibilityHint, PersistentLayerDesc, ReadableLayerWeak}, }; use pageserver_api::key::Key; +use serde::Serialize; +use std::collections::BTreeMap; use utils::{ id::{TenantId, TimelineId}, lsn::Lsn, @@ -33,6 +35,31 @@ pub(crate) enum IndexPartCmd { #[arg(long)] lsn: String, }, + /// List all visible delta and image layers at the latest LSN. 
+ ListVisibleLayers { + #[arg(long)] + path: Utf8PathBuf, + }, +} + +fn create_layer_map_from_index_part( + index_part: &IndexPart, + tenant_shard_id: TenantShardId, + timeline_id: TimelineId, +) -> LayerMap { + let mut layer_map = LayerMap::default(); + { + let mut updates = layer_map.batch_update(); + for (key, value) in index_part.layer_metadata.iter() { + updates.insert_historic(PersistentLayerDesc::from_filename( + tenant_shard_id, + timeline_id, + key.clone(), + value.file_size, + )); + } + } + layer_map } async fn search_layers( @@ -49,18 +76,7 @@ async fn search_layers( let bytes = tokio::fs::read(path).await?; IndexPart::from_json_bytes(&bytes).unwrap() }; - let mut layer_map = LayerMap::default(); - { - let mut updates = layer_map.batch_update(); - for (key, value) in index_json.layer_metadata.iter() { - updates.insert_historic(PersistentLayerDesc::from_filename( - tenant_shard_id, - timeline_id, - key.clone(), - value.file_size, - )); - } - } + let layer_map = create_layer_map_from_index_part(&index_json, tenant_shard_id, timeline_id); let key = Key::from_hex(key)?; let lsn = Lsn::from_str(lsn).unwrap(); @@ -98,6 +114,69 @@ async fn search_layers( Ok(()) } +#[derive(Debug, Clone, Serialize)] +struct VisibleLayers { + pub total_images: u64, + pub total_image_bytes: u64, + pub total_deltas: u64, + pub total_delta_bytes: u64, + pub layer_metadata: BTreeMap, +} + +impl VisibleLayers { + pub fn new() -> Self { + Self { + layer_metadata: BTreeMap::new(), + total_images: 0, + total_image_bytes: 0, + total_deltas: 0, + total_delta_bytes: 0, + } + } + + pub fn add_layer(&mut self, name: LayerName, layer: LayerFileMetadata) { + match name { + LayerName::Image(_) => { + self.total_images += 1; + self.total_image_bytes += layer.file_size; + } + LayerName::Delta(_) => { + self.total_deltas += 1; + self.total_delta_bytes += layer.file_size; + } + } + self.layer_metadata.insert(name, layer); + } +} + +async fn list_visible_layers(path: &Utf8PathBuf) -> 
anyhow::Result<()> { + let tenant_id = TenantId::generate(); + let tenant_shard_id = TenantShardId::unsharded(tenant_id); + let timeline_id = TimelineId::generate(); + + let bytes = tokio::fs::read(path).await.context("read file")?; + let index_part = IndexPart::from_json_bytes(&bytes).context("deserialize")?; + let layer_map = create_layer_map_from_index_part(&index_part, tenant_shard_id, timeline_id); + let mut visible_layers = VisibleLayers::new(); + let (layers, _key_space) = layer_map.get_visibility(Vec::new()); + for (layer, visibility) in layers { + if visibility == LayerVisibilityHint::Visible { + visible_layers.add_layer( + layer.layer_name(), + index_part + .layer_metadata + .get(&layer.layer_name()) + .unwrap() + .clone(), + ); + } + } + let output = serde_json::to_string_pretty(&visible_layers).context("serialize output")?; + println!("{output}"); + + Ok(()) +} + pub(crate) async fn main(cmd: &IndexPartCmd) -> anyhow::Result<()> { match cmd { IndexPartCmd::Dump { path } => { @@ -114,5 +193,6 @@ pub(crate) async fn main(cmd: &IndexPartCmd) -> anyhow::Result<()> { key, lsn, } => search_layers(tenant_id, timeline_id, path, key, lsn).await, + IndexPartCmd::ListVisibleLayers { path } => list_visible_layers(path).await, } } diff --git a/pageserver/ctl/src/main.rs b/pageserver/ctl/src/main.rs index 3cd4faaf2e..e84ad2c87f 100644 --- a/pageserver/ctl/src/main.rs +++ b/pageserver/ctl/src/main.rs @@ -4,6 +4,7 @@ //! //! Separate, `metadata` subcommand allows to print and update pageserver's metadata file. 
+mod download_remote_object; mod draw_timeline_dir; mod index_part; mod key; @@ -16,6 +17,7 @@ use std::time::{Duration, SystemTime}; use camino::{Utf8Path, Utf8PathBuf}; use clap::{Parser, Subcommand}; +use download_remote_object::DownloadRemoteObjectCmd; use index_part::IndexPartCmd; use layers::LayerCmd; use page_trace::PageTraceCmd; @@ -63,6 +65,7 @@ enum Commands { /// Debug print a hex key found from logs Key(key::DescribeKeyCommand), PageTrace(PageTraceCmd), + DownloadRemoteObject(DownloadRemoteObjectCmd), } /// Read and update pageserver metadata file @@ -185,6 +188,9 @@ async fn main() -> anyhow::Result<()> { } Commands::Key(dkc) => dkc.execute(), Commands::PageTrace(cmd) => page_trace::main(&cmd)?, + Commands::DownloadRemoteObject(cmd) => { + download_remote_object::main(&cmd).await?; + } }; Ok(()) } diff --git a/pageserver/src/http/routes.rs b/pageserver/src/http/routes.rs index ed0a5440cb..7030ac368d 100644 --- a/pageserver/src/http/routes.rs +++ b/pageserver/src/http/routes.rs @@ -2,7 +2,9 @@ //! Management HTTP API //! 
use std::cmp::Reverse; -use std::collections::{BinaryHeap, HashMap}; +use std::collections::BTreeMap; +use std::collections::BinaryHeap; +use std::collections::HashMap; use std::str::FromStr; use std::sync::Arc; use std::time::Duration; @@ -3214,6 +3216,30 @@ async fn get_utilization( .map_err(ApiError::InternalServerError) } +/// HADRON +async fn list_tenant_visible_size_handler( + request: Request, + _cancel: CancellationToken, +) -> Result, ApiError> { + check_permission(&request, None)?; + let state = get_state(&request); + + let mut map = BTreeMap::new(); + for (tenant_shard_id, slot) in state.tenant_manager.list() { + match slot { + TenantSlot::Attached(tenant) => { + let visible_size = tenant.get_visible_size(); + map.insert(tenant_shard_id, visible_size); + } + TenantSlot::Secondary(_) | TenantSlot::InProgress(_) => { + continue; + } + } + } + + json_response(StatusCode::OK, map) +} + async fn list_aux_files( mut request: Request, _cancel: CancellationToken, @@ -4151,6 +4177,7 @@ pub fn make_router( .put("/v1/io_engine", |r| api_handler(r, put_io_engine_handler)) .put("/v1/io_mode", |r| api_handler(r, put_io_mode_handler)) .get("/v1/utilization", |r| api_handler(r, get_utilization)) + .get("/v1/list_tenant_visible_size", |r| api_handler(r, list_tenant_visible_size_handler)) .post( "/v1/tenant/:tenant_shard_id/timeline/:timeline_id/ingest_aux_files", |r| testing_api_handler("ingest_aux_files", r, ingest_aux_files), diff --git a/pageserver/src/metrics.rs b/pageserver/src/metrics.rs index eb89e166b2..1b783326a0 100644 --- a/pageserver/src/metrics.rs +++ b/pageserver/src/metrics.rs @@ -2847,6 +2847,24 @@ pub(crate) static MISROUTED_PAGESTREAM_REQUESTS: Lazy = Lazy::new(|| .expect("failed to define a metric") }); +// Global counter for PageStream request results by outcome. Outcomes are divided into 3 categories: +// - success +// - internal_error: errors that indicate bugs in the storage cluster (e.g. 
page reconstruction errors, misrouted requests, LSN timeout errors) +// - other_error: transient error conditions that are expected in normal operation or indicate bugs with other parts of the system (e.g. error due to pageserver shutdown, malformed requests etc.) +pub(crate) static PAGESTREAM_HANDLER_RESULTS_TOTAL: Lazy = Lazy::new(|| { + register_int_counter_vec!( + "pageserver_pagestream_handler_results_total", + "Number of pageserver pagestream handler results by outcome (success, internal_error, other_error)", + &["outcome"] + ) + .expect("failed to define a metric") +}); + +// Constants for pageserver_pagestream_handler_results_total's outcome labels +pub(crate) const PAGESTREAM_HANDLER_OUTCOME_SUCCESS: &str = "success"; +pub(crate) const PAGESTREAM_HANDLER_OUTCOME_INTERNAL_ERROR: &str = "internal_error"; +pub(crate) const PAGESTREAM_HANDLER_OUTCOME_OTHER_ERROR: &str = "other_error"; + // Metrics collected on WAL redo operations // // We collect the time spent in actual WAL redo ('redo'), and time waiting diff --git a/pageserver/src/page_service.rs b/pageserver/src/page_service.rs index 70fdb2e789..ebb1addcdb 100644 --- a/pageserver/src/page_service.rs +++ b/pageserver/src/page_service.rs @@ -70,7 +70,7 @@ use crate::context::{ }; use crate::metrics::{ self, COMPUTE_COMMANDS_COUNTERS, ComputeCommandKind, GetPageBatchBreakReason, LIVE_CONNECTIONS, - MISROUTED_PAGESTREAM_REQUESTS, SmgrOpTimer, TimelineMetrics, + MISROUTED_PAGESTREAM_REQUESTS, PAGESTREAM_HANDLER_RESULTS_TOTAL, SmgrOpTimer, TimelineMetrics, }; use crate::pgdatadir_mapping::{LsnRange, Version}; use crate::span::{ @@ -1441,20 +1441,57 @@ impl PageServerHandler { let (response_msg, ctx) = match handler_result { Err(e) => match &e.err { PageStreamError::Shutdown => { + // BEGIN HADRON + PAGESTREAM_HANDLER_RESULTS_TOTAL + .with_label_values(&[metrics::PAGESTREAM_HANDLER_OUTCOME_OTHER_ERROR]) + .inc(); + // END HADRON + // If we fail to fulfil a request during shutdown, which may be _because_ of // 
shutdown, then do not send the error to the client. Instead just drop the // connection. span.in_scope(|| info!("dropping connection due to shutdown")); return Err(QueryError::Shutdown); } - PageStreamError::Reconnect(reason) => { - span.in_scope(|| info!("handler requested reconnect: {reason}")); + PageStreamError::Reconnect(_reason) => { + span.in_scope(|| { + // BEGIN HADRON + // We can get here because the compute node is pointing at the wrong PS. We + // already have a metric to keep track of this so suppressing this log to + // reduce log spam. The information in this log message is not going to be that + // helpful given the volume of logs that can be generated. + // info!("handler requested reconnect: {reason}") + // END HADRON + }); + // BEGIN HADRON + PAGESTREAM_HANDLER_RESULTS_TOTAL + .with_label_values(&[ + metrics::PAGESTREAM_HANDLER_OUTCOME_INTERNAL_ERROR, + ]) + .inc(); + // END HADRON return Err(QueryError::Reconnect); } PageStreamError::Read(_) | PageStreamError::LsnTimeout(_) | PageStreamError::NotFound(_) | PageStreamError::BadRequest(_) => { + // BEGIN HADRON + if let PageStreamError::Read(_) | PageStreamError::LsnTimeout(_) = &e.err { + PAGESTREAM_HANDLER_RESULTS_TOTAL + .with_label_values(&[ + metrics::PAGESTREAM_HANDLER_OUTCOME_INTERNAL_ERROR, + ]) + .inc(); + } else { + PAGESTREAM_HANDLER_RESULTS_TOTAL + .with_label_values(&[ + metrics::PAGESTREAM_HANDLER_OUTCOME_OTHER_ERROR, + ]) + .inc(); + } + // END HADRON + // print the all details to the log with {:#}, but for the client the // error message is enough. Do not log if shutting down, as the anyhow::Error // here includes cancellation which is not an error. 
@@ -1472,7 +1509,15 @@ impl PageServerHandler { ) } }, - Ok((response_msg, _op_timer_already_observed, ctx)) => (response_msg, Some(ctx)), + Ok((response_msg, _op_timer_already_observed, ctx)) => { + // BEGIN HADRON + PAGESTREAM_HANDLER_RESULTS_TOTAL + .with_label_values(&[metrics::PAGESTREAM_HANDLER_OUTCOME_SUCCESS]) + .inc(); + // END HADRON + + (response_msg, Some(ctx)) + } }; let ctx = ctx.map(|req_ctx| { diff --git a/pageserver/src/tenant.rs b/pageserver/src/tenant.rs index 7e2e6d96b8..f67269851a 100644 --- a/pageserver/src/tenant.rs +++ b/pageserver/src/tenant.rs @@ -5719,6 +5719,16 @@ impl TenantShard { .unwrap_or(0) } + /// HADRON + /// Return the visible size of all timelines in this tenant. + pub(crate) fn get_visible_size(&self) -> u64 { + let timelines = self.timelines.lock().unwrap(); + timelines + .values() + .map(|t| t.metrics.visible_physical_size_gauge.get()) + .sum() + } + /// Builds a new tenant manifest, and uploads it if it differs from the last-known tenant /// manifest in `Self::remote_tenant_manifest`. 
/// diff --git a/pageserver/src/tenant/storage_layer/layer_name.rs b/pageserver/src/tenant/storage_layer/layer_name.rs index 0f7995f87b..973852defc 100644 --- a/pageserver/src/tenant/storage_layer/layer_name.rs +++ b/pageserver/src/tenant/storage_layer/layer_name.rs @@ -225,7 +225,7 @@ impl fmt::Display for ImageLayerName { /// storage and object names in remote storage consist of the LayerName plus some extra qualifiers /// that uniquely identify the physical incarnation of a layer (see [crate::tenant::remote_timeline_client::remote_layer_path]) /// and [`crate::tenant::storage_layer::layer::local_layer_path`]) -#[derive(Debug, PartialEq, Eq, Hash, Clone)] +#[derive(Debug, PartialEq, Eq, Hash, Clone, Ord, PartialOrd)] pub enum LayerName { Image(ImageLayerName), Delta(DeltaLayerName), diff --git a/safekeeper/src/metrics.rs b/safekeeper/src/metrics.rs index 9baa80f73a..1f98651e71 100644 --- a/safekeeper/src/metrics.rs +++ b/safekeeper/src/metrics.rs @@ -59,6 +59,15 @@ pub static FLUSH_WAL_SECONDS: Lazy = Lazy::new(|| { .expect("Failed to register safekeeper_flush_wal_seconds histogram") }); /* BEGIN_HADRON */ +// Counter of all ProposerAcceptorMessage requests received +pub static PROPOSER_ACCEPTOR_MESSAGES_TOTAL: Lazy = Lazy::new(|| { + register_int_counter_vec!( + "safekeeper_proposer_acceptor_messages_total", + "Total number of ProposerAcceptorMessage requests received by the Safekeeper.", + &["outcome"] + ) + .expect("Failed to register safekeeper_proposer_acceptor_messages_total counter") +}); pub static WAL_DISK_IO_ERRORS: Lazy = Lazy::new(|| { register_int_counter!( "safekeeper_wal_disk_io_errors", diff --git a/safekeeper/src/safekeeper.rs b/safekeeper/src/safekeeper.rs index 4d15fc9de3..09ca041e22 100644 --- a/safekeeper/src/safekeeper.rs +++ b/safekeeper/src/safekeeper.rs @@ -24,7 +24,7 @@ use utils::id::{NodeId, TenantId, TimelineId}; use utils::lsn::Lsn; use utils::pageserver_feedback::PageserverFeedback; -use crate::metrics::MISC_OPERATION_SECONDS; +use 
crate::metrics::{MISC_OPERATION_SECONDS, PROPOSER_ACCEPTOR_MESSAGES_TOTAL}; use crate::state::TimelineState; use crate::{control_file, wal_storage}; @@ -938,7 +938,7 @@ where &mut self, msg: &ProposerAcceptorMessage, ) -> Result> { - match msg { + let res = match msg { ProposerAcceptorMessage::Greeting(msg) => self.handle_greeting(msg).await, ProposerAcceptorMessage::VoteRequest(msg) => self.handle_vote_request(msg).await, ProposerAcceptorMessage::Elected(msg) => self.handle_elected(msg).await, @@ -949,7 +949,20 @@ where self.handle_append_request(msg, false).await } ProposerAcceptorMessage::FlushWAL => self.handle_flush().await, - } + }; + + // BEGIN HADRON + match &res { + Ok(_) => PROPOSER_ACCEPTOR_MESSAGES_TOTAL + .with_label_values(&["success"]) + .inc(), + Err(_) => PROPOSER_ACCEPTOR_MESSAGES_TOTAL + .with_label_values(&["error"]) + .inc(), + }; + + res + // END HADRON } /// Handle initial message from proposer: check its sanity and send my diff --git a/test_runner/fixtures/pageserver/http.py b/test_runner/fixtures/pageserver/http.py index 8e7d957b22..23b9d1c8c9 100644 --- a/test_runner/fixtures/pageserver/http.py +++ b/test_runner/fixtures/pageserver/http.py @@ -333,6 +333,13 @@ class PageserverHttpClient(requests.Session, MetricsGetter): res = self.post(f"http://localhost:{self.port}/v1/reload_auth_validation_keys") self.verbose_error(res) + def list_tenant_visible_size(self) -> dict[TenantShardId, int]: + res = self.get(f"http://localhost:{self.port}/v1/list_tenant_visible_size") + self.verbose_error(res) + res_json = res.json() + assert isinstance(res_json, dict) + return res_json + def tenant_list(self) -> list[dict[Any, Any]]: res = self.get(f"http://localhost:{self.port}/v1/tenant") self.verbose_error(res) diff --git a/test_runner/regress/test_pageserver_api.py b/test_runner/regress/test_pageserver_api.py index 7f9207047e..92889e5de3 100644 --- a/test_runner/regress/test_pageserver_api.py +++ b/test_runner/regress/test_pageserver_api.py @@ -3,6 +3,7 @@ 
from __future__ import annotations from typing import TYPE_CHECKING from fixtures.common_types import Lsn, TenantId, TimelineId +from fixtures.log_helper import log from fixtures.neon_fixtures import ( DEFAULT_BRANCH_NAME, NeonEnv, @@ -164,3 +165,15 @@ def test_pageserver_http_index_part_force_patch(neon_env_builder: NeonEnvBuilder {"rel_size_migration": "legacy"}, ) assert client.timeline_detail(tenant_id, timeline_id)["rel_size_migration"] == "legacy" + + +def test_pageserver_get_tenant_visible_size(neon_env_builder: NeonEnvBuilder): + neon_env_builder.num_pageservers = 1 + env = neon_env_builder.init_start() + env.create_tenant(shard_count=4) + env.create_tenant(shard_count=2) + + json = env.pageserver.http_client().list_tenant_visible_size() + log.info(f"{json}") + # initial tenant + 2 newly created tenants + assert len(json) == 7 From 2fc77c836b3c0e88946254fb9235ded1db60dd75 Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Thu, 10 Jul 2025 17:46:39 +0200 Subject: [PATCH 55/65] pageserver/client_grpc: add shard map updates (#12480) ## Problem The communicator gRPC client must support changing the shard map on splits. Touches #11735. Requires #12476. ## Summary of changes * Wrap the shard set in an `ArcSwap` to allow swapping it out. * Add a new `ShardSpec` parameter struct to pass validated shard info to the client. * Add `update_shards()` to change the shard set. In-flight requests are allowed to complete using the old shards. * Restructure `get_page` to use a stable view of the shard map, and retry errors at the top (pre-split) level to pick up shard map changes. * Also mark `tonic::Status::Internal` as non-retryable, so that we can use it for client-side invariant checks without continually retrying these.
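The snapshot-and-swap pattern can be sketched with the standard library alone (the real client uses the `arc-swap` crate for lock-free loads; `RwLock<Arc<_>>` below is a simplified stand-in with the same semantics, and the type names are illustrative):

```rust
use std::sync::{Arc, RwLock};

// Simplified stand-in for the client's shard set.
#[derive(Debug)]
struct Shards {
    count: u8,
    stripe_size: u32,
}

struct Client {
    shards: RwLock<Arc<Shards>>,
}

impl Client {
    fn new(shards: Shards) -> Self {
        Self { shards: RwLock::new(Arc::new(shards)) }
    }

    /// Take a stable snapshot: an in-flight request keeps using the Arc it
    /// loaded even if the shard set is swapped out underneath it.
    fn load(&self) -> Arc<Shards> {
        self.shards.read().unwrap().clone()
    }

    /// Swap in a new shard set; reject a shrinking shard count, mirroring
    /// the validation described above.
    fn update_shards(&self, new: Shards) -> Result<(), String> {
        let old = self.load();
        if new.count < old.count {
            return Err(format!(
                "can't reduce shard count from {} to {}",
                old.count, new.count
            ));
        }
        *self.shards.write().unwrap() = Arc::new(new);
        Ok(())
    }
}

fn main() {
    let client = Client::new(Shards { count: 2, stripe_size: 2048 });
    let snapshot = client.load(); // stable view for one request
    client
        .update_shards(Shards { count: 4, stripe_size: 2048 })
        .unwrap();
    assert_eq!(snapshot.count, 2); // old view still usable
    assert_eq!(client.load().count, 4); // new requests see the split
    assert!(client
        .update_shards(Shards { count: 1, stripe_size: 2048 })
        .is_err()); // shrinking is rejected
}
```

A request that fails mid-flight can then simply re-`load()` at the top level to pick up the post-split shard map, which is the retry behavior the `get_page` restructuring above describes.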
--- Cargo.lock | 1 + pageserver/client_grpc/Cargo.toml | 1 + pageserver/client_grpc/src/client.rs | 257 ++++++++++++++++++--------- pageserver/client_grpc/src/lib.rs | 2 +- pageserver/client_grpc/src/retry.rs | 5 +- pageserver/client_grpc/src/split.rs | 31 ++-- 6 files changed, 199 insertions(+), 98 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 85080f8473..1d68b8f862 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4500,6 +4500,7 @@ name = "pageserver_client_grpc" version = "0.1.0" dependencies = [ "anyhow", + "arc-swap", "bytes", "compute_api", "futures", diff --git a/pageserver/client_grpc/Cargo.toml b/pageserver/client_grpc/Cargo.toml index ca224900ac..e2741ad839 100644 --- a/pageserver/client_grpc/Cargo.toml +++ b/pageserver/client_grpc/Cargo.toml @@ -9,6 +9,7 @@ testing = ["pageserver_api/testing"] [dependencies] anyhow.workspace = true +arc-swap.workspace = true bytes.workspace = true compute_api.workspace = true futures.workspace = true diff --git a/pageserver/client_grpc/src/client.rs b/pageserver/client_grpc/src/client.rs index 63852868c3..ee09c1f13c 100644 --- a/pageserver/client_grpc/src/client.rs +++ b/pageserver/client_grpc/src/client.rs @@ -3,6 +3,7 @@ use std::num::NonZero; use std::sync::Arc; use anyhow::anyhow; +use arc_swap::ArcSwap; use futures::stream::FuturesUnordered; use futures::{FutureExt as _, StreamExt as _}; use tracing::instrument; @@ -55,28 +56,74 @@ const MAX_BULK_STREAM_QUEUE_DEPTH: NonZero = NonZero::new(4).unwrap(); /// TODO: this client does not support base backups or LSN leases, as these are only used by /// compute_ctl. Consider adding this, but LSN leases need concurrent requests on all shards. pub struct PageserverClient { - // TODO: support swapping out the shard map, e.g. via an ArcSwap. - shards: Shards, + /// The tenant ID. + tenant_id: TenantId, + /// The timeline ID. + timeline_id: TimelineId, + /// The JWT auth token for this tenant, if any. + auth_token: Option, + /// The shards for this tenant. 
+ shards: ArcSwap, + /// The retry configuration. retry: Retry, } impl PageserverClient { /// Creates a new Pageserver client for a given tenant and timeline. Uses the Pageservers given - /// in the shard map, which must be complete and must use gRPC URLs. + /// in the shard spec, which must be complete and must use gRPC URLs. pub fn new( tenant_id: TenantId, timeline_id: TimelineId, - shard_map: HashMap, - stripe_size: ShardStripeSize, + shard_spec: ShardSpec, auth_token: Option, ) -> anyhow::Result { - let shards = Shards::new(tenant_id, timeline_id, shard_map, stripe_size, auth_token)?; + let shards = Shards::new(tenant_id, timeline_id, shard_spec, auth_token.clone())?; Ok(Self { - shards, + tenant_id, + timeline_id, + auth_token, + shards: ArcSwap::new(Arc::new(shards)), retry: Retry, }) } + /// Updates the shards from the given shard spec. In-flight requests will complete using the + /// existing shards, but may retry with the new shards if they fail. + /// + /// TODO: verify that in-flight requests are allowed to complete, and that the old pools are + /// properly spun down and dropped afterwards. + pub fn update_shards(&self, shard_spec: ShardSpec) -> anyhow::Result<()> { + // Validate the shard spec. We should really use `ArcSwap::rcu` for this, to avoid races + // with concurrent updates, but that involves creating a new `Shards` on every attempt, + // which spins up a bunch of Tokio tasks and such. These should already be checked elsewhere + // in the stack, and if they're violated then we already have problems elsewhere, so a + // best-effort but possibly-racy check is okay here. 
+ let old = self.shards.load_full(); + if shard_spec.count < old.count { + return Err(anyhow!( + "can't reduce shard count from {} to {}", + old.count, + shard_spec.count + )); + } + if !old.count.is_unsharded() && shard_spec.stripe_size != old.stripe_size { + return Err(anyhow!( + "can't change stripe size from {} to {}", + old.stripe_size, + shard_spec.stripe_size + )); + } + + let shards = Shards::new( + self.tenant_id, + self.timeline_id, + shard_spec, + self.auth_token.clone(), + )?; + self.shards.store(Arc::new(shards)); + Ok(()) + } + /// Returns whether a relation exists. #[instrument(skip_all, fields(rel=%req.rel, lsn=%req.read_lsn))] pub async fn check_rel_exists( @@ -86,7 +133,7 @@ impl PageserverClient { self.retry .with(async || { // Relation metadata is only available on shard 0. - let mut client = self.shards.get_zero().client().await?; + let mut client = self.shards.load_full().get_zero().client().await?; client.check_rel_exists(req).await }) .await @@ -101,7 +148,7 @@ impl PageserverClient { self.retry .with(async || { // Relation metadata is only available on shard 0. - let mut client = self.shards.get_zero().client().await?; + let mut client = self.shards.load_full().get_zero().client().await?; client.get_db_size(req).await }) .await @@ -129,28 +176,42 @@ impl PageserverClient { return Err(tonic::Status::invalid_argument("no block number")); } + // The shards may change while we're fetching pages. We execute the request using a stable + // view of the shards (especially important for requests that span shards), but retry the + // top-level (pre-split) request to pick up shard changes. This can lead to unnecessary + // retries and re-splits in some cases where requests span shards, but these are expected to + // be rare. + // + // TODO: the gRPC server and client doesn't yet properly support shard splits. Revisit this + // once we figure out how to handle these. 
+ self.retry + .with(async || Self::get_page_with_shards(req.clone(), &self.shards.load_full()).await) + .await + } + + /// Fetches pages using the given shards. This uses a stable view of the shards, regardless of + /// concurrent shard updates. Does not retry internally, but is retried by `get_page()`. + async fn get_page_with_shards( + req: page_api::GetPageRequest, + shards: &Shards, + ) -> tonic::Result { // Fast path: request is for a single shard. if let Some(shard_id) = - GetPageSplitter::is_single_shard(&req, self.shards.count, self.shards.stripe_size) + GetPageSplitter::is_single_shard(&req, shards.count, shards.stripe_size) { - return self.get_page_for_shard(shard_id, req).await; + return Self::get_page_with_shard(req, shards.get(shard_id)?).await; } // Request spans multiple shards. Split it, dispatch concurrent per-shard requests, and // reassemble the responses. - // - // TODO: when we support shard map updates, we need to detect when it changes and re-split - // the request on errors. - let mut splitter = GetPageSplitter::split(req, self.shards.count, self.shards.stripe_size); + let mut splitter = GetPageSplitter::split(req, shards.count, shards.stripe_size); - let mut shard_requests: FuturesUnordered<_> = splitter - .drain_requests() - .map(|(shard_id, shard_req)| { - // NB: each request will retry internally. - self.get_page_for_shard(shard_id, shard_req) - .map(move |result| result.map(|resp| (shard_id, resp))) - }) - .collect(); + let mut shard_requests = FuturesUnordered::new(); + for (shard_id, shard_req) in splitter.drain_requests() { + let future = Self::get_page_with_shard(shard_req, shards.get(shard_id)?) + .map(move |result| result.map(|resp| (shard_id, resp))); + shard_requests.push(future); + } while let Some((shard_id, shard_response)) = shard_requests.next().await.transpose()? 
{ splitter.add_response(shard_id, shard_response)?; @@ -159,41 +220,28 @@ impl PageserverClient { splitter.assemble_response() } - /// Fetches pages that belong to the given shard. - #[instrument(skip_all, fields(shard = %shard_id))] - async fn get_page_for_shard( - &self, - shard_id: ShardIndex, + /// Fetches pages on the given shard. Does not retry internally. + async fn get_page_with_shard( req: page_api::GetPageRequest, + shard: &Shard, ) -> tonic::Result { - let resp = self - .retry - .with(async || { - let stream = self - .shards - .get(shard_id)? - .stream(req.request_class.is_bulk()) - .await; - let resp = stream.send(req.clone()).await?; + let expected = req.block_numbers.len(); + let stream = shard.stream(req.request_class.is_bulk()).await; + let resp = stream.send(req).await?; - // Convert per-request errors into a tonic::Status. - if resp.status_code != page_api::GetPageStatusCode::Ok { - return Err(tonic::Status::new( - resp.status_code.into(), - resp.reason.unwrap_or_else(|| String::from("unknown error")), - )); - } + // Convert per-request errors into a tonic::Status. + if resp.status_code != page_api::GetPageStatusCode::Ok { + return Err(tonic::Status::new( + resp.status_code.into(), + resp.reason.unwrap_or_else(|| String::from("unknown error")), + )); + } - Ok(resp) - }) - .await?; - - // Make sure we got the right number of pages. - // NB: check outside of the retry loop, since we don't want to retry this. - let (expected, actual) = (req.block_numbers.len(), resp.page_images.len()); + // Check that we received the expected number of pages. + let actual = resp.page_images.len(); if expected != actual { return Err(tonic::Status::internal(format!( - "expected {expected} pages for shard {shard_id}, got {actual}", + "expected {expected} pages, got {actual}", ))); } @@ -209,7 +257,7 @@ impl PageserverClient { self.retry .with(async || { // Relation metadata is only available on shard 0. 
- let mut client = self.shards.get_zero().client().await?; + let mut client = self.shards.load_full().get_zero().client().await?; client.get_rel_size(req).await }) .await @@ -224,48 +272,51 @@ impl PageserverClient { self.retry .with(async || { // SLRU segments are only available on shard 0. - let mut client = self.shards.get_zero().client().await?; + let mut client = self.shards.load_full().get_zero().client().await?; client.get_slru_segment(req).await }) .await } } -/// Tracks the tenant's shards. -struct Shards { +/// Shard specification for a PageserverClient. +pub struct ShardSpec { + /// Maps shard indices to gRPC URLs. + /// + /// INVARIANT: every shard 0..count is present, and shard 0 is always present. + /// INVARIANT: every URL is valid and uses grpc:// scheme. + urls: HashMap, /// The shard count. /// /// NB: this is 0 for unsharded tenants, following `ShardIndex::unsharded()` convention. count: ShardCount, - /// The stripe size. Only used for sharded tenants. + /// The stripe size for these shards. stripe_size: ShardStripeSize, - /// Shards by shard index. - /// - /// NB: unsharded tenants use count 0, like `ShardIndex::unsharded()`. - /// - /// INVARIANT: every shard 0..count is present. - /// INVARIANT: shard 0 is always present. - map: HashMap, } -impl Shards { - /// Creates a new set of shards based on a shard map. - fn new( - tenant_id: TenantId, - timeline_id: TimelineId, - shard_map: HashMap, - stripe_size: ShardStripeSize, - auth_token: Option, +impl ShardSpec { + /// Creates a new shard spec with the given URLs and stripe size. All shards must be given. + /// The stripe size may be omitted for unsharded tenants. + pub fn new( + urls: HashMap, + stripe_size: Option, ) -> anyhow::Result { - let count = match shard_map.len() { + // Compute the shard count. 
+ let count = match urls.len() { 0 => return Err(anyhow!("no shards provided")), 1 => ShardCount::new(0), // NB: unsharded tenants use 0, like `ShardIndex::unsharded()` n if n > u8::MAX as usize => return Err(anyhow!("too many shards: {n}")), n => ShardCount::new(n as u8), }; - let mut map = HashMap::new(); - for (shard_id, url) in shard_map { + // Determine the stripe size. It doesn't matter for unsharded tenants. + if stripe_size.is_none() && !count.is_unsharded() { + return Err(anyhow!("stripe size must be given for sharded tenants")); + } + let stripe_size = stripe_size.unwrap_or_default(); + + // Validate the shard spec. + for (shard_id, url) in &urls { // The shard index must match the computed shard count, even for unsharded tenants. if shard_id.shard_count != count { return Err(anyhow!("invalid shard index {shard_id}, expected {count}")); @@ -276,21 +327,64 @@ impl Shards { } // The above conditions guarantee that we have all shards 0..count: len() matches count, // shard number < count, and numbers are unique (via hashmap). - let shard = Shard::new(url, tenant_id, timeline_id, shard_id, auth_token.clone())?; - map.insert(shard_id, shard); + + // Validate the URL. + if PageserverProtocol::from_connstring(url)? != PageserverProtocol::Grpc { + return Err(anyhow!("invalid shard URL {url}: must use gRPC")); + } } Ok(Self { + urls, count, stripe_size, - map, + }) + } +} + +/// Tracks the tenant's shards. +struct Shards { + /// Shards by shard index. + /// + /// INVARIANT: every shard 0..count is present. + /// INVARIANT: shard 0 is always present. + by_index: HashMap, + /// The shard count. + /// + /// NB: this is 0 for unsharded tenants, following `ShardIndex::unsharded()` convention. + count: ShardCount, + /// The stripe size. Only used for sharded tenants. + stripe_size: ShardStripeSize, +} + +impl Shards { + /// Creates a new set of shards based on a shard spec. 
+ fn new( + tenant_id: TenantId, + timeline_id: TimelineId, + shard_spec: ShardSpec, + auth_token: Option, + ) -> anyhow::Result { + // NB: the shard spec has already been validated when constructed. + let mut shards = HashMap::with_capacity(shard_spec.urls.len()); + for (shard_id, url) in shard_spec.urls { + shards.insert( + shard_id, + Shard::new(url, tenant_id, timeline_id, shard_id, auth_token.clone())?, + ); + } + + Ok(Self { + by_index: shards, + count: shard_spec.count, + stripe_size: shard_spec.stripe_size, }) } /// Looks up the given shard. #[allow(clippy::result_large_err)] // TODO: check perf impact fn get(&self, shard_id: ShardIndex) -> tonic::Result<&Shard> { - self.map + self.by_index .get(&shard_id) .ok_or_else(|| tonic::Status::not_found(format!("unknown shard {shard_id}"))) } @@ -329,11 +423,6 @@ impl Shard { shard_id: ShardIndex, auth_token: Option, ) -> anyhow::Result { - // Sanity-check that the URL uses gRPC. - if PageserverProtocol::from_connstring(&url)? != PageserverProtocol::Grpc { - return Err(anyhow!("invalid shard URL {url}: must use gRPC")); - } - // Common channel pool for unary and stream requests. Bounded by client/stream pools. 
let channel_pool = ChannelPool::new(url.clone(), MAX_CLIENTS_PER_CHANNEL)?; diff --git a/pageserver/client_grpc/src/lib.rs b/pageserver/client_grpc/src/lib.rs index 3fc7178be2..14fb3fbd5a 100644 --- a/pageserver/client_grpc/src/lib.rs +++ b/pageserver/client_grpc/src/lib.rs @@ -3,4 +3,4 @@ mod pool; mod retry; mod split; -pub use client::PageserverClient; +pub use client::{PageserverClient, ShardSpec}; diff --git a/pageserver/client_grpc/src/retry.rs b/pageserver/client_grpc/src/retry.rs index b0473204d7..a4d4b19870 100644 --- a/pageserver/client_grpc/src/retry.rs +++ b/pageserver/client_grpc/src/retry.rs @@ -131,7 +131,6 @@ impl Retry { tonic::Code::Aborted => true, tonic::Code::Cancelled => true, tonic::Code::DeadlineExceeded => true, // maybe transient slowness - tonic::Code::Internal => true, // maybe transient failure? tonic::Code::ResourceExhausted => true, tonic::Code::Unavailable => true, @@ -139,6 +138,10 @@ impl Retry { tonic::Code::AlreadyExists => false, tonic::Code::DataLoss => false, tonic::Code::FailedPrecondition => false, + // NB: don't retry Internal. It is intended for serious errors such as invariant + // violations, and is also used for client-side invariant checks that would otherwise + // result in retry loops. + tonic::Code::Internal => false, tonic::Code::InvalidArgument => false, tonic::Code::NotFound => false, tonic::Code::OutOfRange => false, diff --git a/pageserver/client_grpc/src/split.rs b/pageserver/client_grpc/src/split.rs index 5bbcaab393..57c9299b96 100644 --- a/pageserver/client_grpc/src/split.rs +++ b/pageserver/client_grpc/src/split.rs @@ -97,7 +97,8 @@ impl GetPageSplitter { self.requests.drain() } - /// Adds a response from the given shard. + /// Adds a response from the given shard. The response must match the request ID and have an OK + /// status code. A response must not already exist for the given shard ID. 
#[allow(clippy::result_large_err)] pub fn add_response( &mut self, @@ -105,24 +106,30 @@ impl GetPageSplitter { response: page_api::GetPageResponse, ) -> tonic::Result<()> { // The caller should already have converted status codes into tonic::Status. - assert_eq!(response.status_code, page_api::GetPageStatusCode::Ok); + if response.status_code != page_api::GetPageStatusCode::Ok { + return Err(tonic::Status::internal(format!( + "unexpected non-OK response for shard {shard_id}: {:?}", + response.status_code + ))); + } - // Make sure the response matches the request ID. + // The stream pool ensures the response matches the request ID. if response.request_id != self.request_id { return Err(tonic::Status::internal(format!( - "response ID {} does not match request ID {}", - response.request_id, self.request_id + "response ID mismatch for shard {shard_id}: expected {}, got {}", + self.request_id, response.request_id + ))); + } + + // We only dispatch one request per shard. + if self.responses.contains_key(&shard_id) { + return Err(tonic::Status::internal(format!( + "duplicate response for shard {shard_id}" ))); } // Add the response data to the map. - let old = self.responses.insert(shard_id, response.page_images); - - if old.is_some() { - return Err(tonic::Status::internal(format!( - "duplicate response for shard {shard_id}", - ))); - } + self.responses.insert(shard_id, response.page_images); Ok(()) } From 8630d37f5e52a851a48b5936acd409cac5044bb0 Mon Sep 17 00:00:00 2001 From: Alexander Bayandin Date: Thu, 10 Jul 2025 16:53:38 +0100 Subject: [PATCH 56/65] test_runner: manually reuse ports in PortDistributor (#12423) ## Problem Sometimes we run out of free ports in `PortDistributor`. 
This affects particularly failed tests that we rerun automatically up to 3 times (which makes it use up to 3x more ports) ## Summary of changes - Cycle over the range of ports to reuse freed ports from previous tests Ref: LKB-62 --- test_runner/fixtures/port_distributor.py | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/test_runner/fixtures/port_distributor.py b/test_runner/fixtures/port_distributor.py index 6a829a9399..e51d08e16e 100644 --- a/test_runner/fixtures/port_distributor.py +++ b/test_runner/fixtures/port_distributor.py @@ -3,6 +3,7 @@ from __future__ import annotations import re import socket from contextlib import closing +from itertools import cycle from fixtures.log_helper import log @@ -34,15 +35,23 @@ def can_bind(host: str, port: int) -> bool: class PortDistributor: def __init__(self, base_port: int, port_number: int): - self.iterator = iter(range(base_port, base_port + port_number)) + self.base_port = base_port + self.port_number = port_number + self.cycle = cycle(range(base_port, base_port + port_number)) self.port_map: dict[int, int] = {} def get_port(self) -> int: - for port in self.iterator: + checked = 0 + for port in self.cycle: if can_bind("localhost", port): return port + elif checked < self.port_number: + checked += 1 + else: + break + raise RuntimeError( - "port range configured for test is exhausted, consider enlarging the range" + f"port range ({self.base_port}..{self.base_port + self.port_number}) configured for test is exhausted, consider enlarging the range" ) def replace_with_new_port(self, value: int | str) -> int | str: From dcdfe80bf015e93b991c0aa86ffbbffbcd18c198 Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Thu, 10 Jul 2025 19:30:09 +0200 Subject: [PATCH 57/65] pagebench: add support for rich gRPC client (#12477) ## Problem We need to benchmark the rich gRPC client `client_grpc::PageserverClient` against the basic, no-frills `page_api::Client` to determine how much overhead it adds. 
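The fix in this patch cycles over the port range so that ports freed by earlier tests can be handed out again. As a rough standalone sketch of that allocation scheme (with `can_bind` replaced by an in-memory stub, since the real fixture probes availability with a socket):

```python
from itertools import cycle


class CyclingAllocator:
    """Sketch of the patch's approach: iterate the fixed port range with
    itertools.cycle and re-check availability on each pass, so ports
    released by finished tests become allocatable again."""

    def __init__(self, base_port: int, port_number: int):
        self.base_port = base_port
        self.port_number = port_number
        self.cycle = cycle(range(base_port, base_port + port_number))
        self.in_use: set[int] = set()

    def can_bind(self, port: int) -> bool:
        # Stand-in for the real socket bind() probe.
        return port not in self.in_use

    def get_port(self) -> int:
        checked = 0
        for port in self.cycle:
            if self.can_bind(port):
                self.in_use.add(port)
                return port
            elif checked < self.port_number:
                checked += 1
            else:
                break
        raise RuntimeError("port range exhausted")

    def release(self, port: int) -> None:
        self.in_use.discard(port)
```

With the old one-shot iterator, releasing a port could not help a later allocation; with the cycle, a released port is found again on the next pass over the range.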
Touches #11735. Requires #12476. ## Summary of changes Add a `pagebench --rich-client` parameter to use `client_grpc::PageserverClient`. Also adds a compression parameter to the client. --- Cargo.lock | 1 + Cargo.toml | 1 + pageserver/client_grpc/src/client.rs | 28 ++++++- pageserver/client_grpc/src/pool.rs | 7 +- pageserver/pagebench/Cargo.toml | 3 +- .../pagebench/src/cmd/getpage_latest_lsn.rs | 84 +++++++++++++++++++ 6 files changed, 120 insertions(+), 4 deletions(-) diff --git a/Cargo.lock b/Cargo.lock index 1d68b8f862..c528354053 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -4294,6 +4294,7 @@ dependencies = [ "humantime-serde", "pageserver_api", "pageserver_client", + "pageserver_client_grpc", "pageserver_page_api", "rand 0.8.5", "reqwest", diff --git a/Cargo.toml b/Cargo.toml index 14f2cfcb56..0d521ee4d9 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -262,6 +262,7 @@ neon-shmem = { version = "0.1", path = "./libs/neon-shmem/" } pageserver = { path = "./pageserver" } pageserver_api = { version = "0.1", path = "./libs/pageserver_api/" } pageserver_client = { path = "./pageserver/client" } +pageserver_client_grpc = { path = "./pageserver/client_grpc" } pageserver_compaction = { version = "0.1", path = "./pageserver/compaction/" } pageserver_page_api = { path = "./pageserver/page_api" } postgres_backend = { version = "0.1", path = "./libs/postgres_backend/" } diff --git a/pageserver/client_grpc/src/client.rs b/pageserver/client_grpc/src/client.rs index ee09c1f13c..e790f4018e 100644 --- a/pageserver/client_grpc/src/client.rs +++ b/pageserver/client_grpc/src/client.rs @@ -6,6 +6,7 @@ use anyhow::anyhow; use arc_swap::ArcSwap; use futures::stream::FuturesUnordered; use futures::{FutureExt as _, StreamExt as _}; +use tonic::codec::CompressionEncoding; use tracing::instrument; use crate::pool::{ChannelPool, ClientGuard, ClientPool, StreamGuard, StreamPool}; @@ -62,6 +63,8 @@ pub struct PageserverClient { timeline_id: TimelineId, /// The JWT auth token for this tenant, if 
any. auth_token: Option, + /// The compression to use, if any. + compression: Option, /// The shards for this tenant. shards: ArcSwap, /// The retry configuration. @@ -76,12 +79,20 @@ impl PageserverClient { timeline_id: TimelineId, shard_spec: ShardSpec, auth_token: Option, + compression: Option, ) -> anyhow::Result { - let shards = Shards::new(tenant_id, timeline_id, shard_spec, auth_token.clone())?; + let shards = Shards::new( + tenant_id, + timeline_id, + shard_spec, + auth_token.clone(), + compression, + )?; Ok(Self { tenant_id, timeline_id, auth_token, + compression, shards: ArcSwap::new(Arc::new(shards)), retry: Retry, }) @@ -119,6 +130,7 @@ impl PageserverClient { self.timeline_id, shard_spec, self.auth_token.clone(), + self.compression, )?; self.shards.store(Arc::new(shards)); Ok(()) @@ -364,13 +376,21 @@ impl Shards { timeline_id: TimelineId, shard_spec: ShardSpec, auth_token: Option, + compression: Option, ) -> anyhow::Result { // NB: the shard spec has already been validated when constructed. let mut shards = HashMap::with_capacity(shard_spec.urls.len()); for (shard_id, url) in shard_spec.urls { shards.insert( shard_id, - Shard::new(url, tenant_id, timeline_id, shard_id, auth_token.clone())?, + Shard::new( + url, + tenant_id, + timeline_id, + shard_id, + auth_token.clone(), + compression, + )?, ); } @@ -422,6 +442,7 @@ impl Shard { timeline_id: TimelineId, shard_id: ShardIndex, auth_token: Option, + compression: Option, ) -> anyhow::Result { // Common channel pool for unary and stream requests. Bounded by client/stream pools. 
let channel_pool = ChannelPool::new(url.clone(), MAX_CLIENTS_PER_CHANNEL)?; @@ -433,6 +454,7 @@ impl Shard { timeline_id, shard_id, auth_token.clone(), + compression, Some(MAX_UNARY_CLIENTS), ); @@ -445,6 +467,7 @@ impl Shard { timeline_id, shard_id, auth_token.clone(), + compression, None, // unbounded, limited by stream pool ), Some(MAX_STREAMS), @@ -460,6 +483,7 @@ impl Shard { timeline_id, shard_id, auth_token, + compression, None, // unbounded, limited by stream pool ), Some(MAX_BULK_STREAMS), diff --git a/pageserver/client_grpc/src/pool.rs b/pageserver/client_grpc/src/pool.rs index 89b3bd646f..2dde40b5b4 100644 --- a/pageserver/client_grpc/src/pool.rs +++ b/pageserver/client_grpc/src/pool.rs @@ -40,6 +40,7 @@ use futures::StreamExt as _; use tokio::sync::mpsc::{Receiver, Sender}; use tokio::sync::{OwnedSemaphorePermit, Semaphore, mpsc, oneshot}; use tokio_util::sync::CancellationToken; +use tonic::codec::CompressionEncoding; use tonic::transport::{Channel, Endpoint}; use tracing::{error, warn}; @@ -242,6 +243,8 @@ pub struct ClientPool { shard_id: ShardIndex, /// Authentication token, if any. auth_token: Option, + /// Compression to use. + compression: Option, /// Channel pool to acquire channels from. channel_pool: Arc, /// Limits the max number of concurrent clients for this pool. None if the pool is unbounded. 
@@ -281,6 +284,7 @@ impl ClientPool { timeline_id: TimelineId, shard_id: ShardIndex, auth_token: Option, + compression: Option, max_clients: Option>, ) -> Arc { let pool = Arc::new(Self { @@ -288,6 +292,7 @@ impl ClientPool { timeline_id, shard_id, auth_token, + compression, channel_pool, idle: Mutex::default(), idle_reaper: Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL), @@ -331,7 +336,7 @@ impl ClientPool { self.timeline_id, self.shard_id, self.auth_token.clone(), - None, + self.compression, )?; Ok(ClientGuard { diff --git a/pageserver/pagebench/Cargo.toml b/pageserver/pagebench/Cargo.toml index f5dfc0db25..4086213830 100644 --- a/pageserver/pagebench/Cargo.toml +++ b/pageserver/pagebench/Cargo.toml @@ -27,8 +27,9 @@ tokio-util.workspace = true tonic.workspace = true url.workspace = true -pageserver_client.workspace = true pageserver_api.workspace = true +pageserver_client.workspace = true +pageserver_client_grpc.workspace = true pageserver_page_api.workspace = true utils = { path = "../../libs/utils/" } workspace_hack = { version = "0.1", path = "../../workspace_hack" } diff --git a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs index f14caf548c..42c7e40489 100644 --- a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs +++ b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs @@ -10,12 +10,14 @@ use anyhow::Context; use async_trait::async_trait; use bytes::Bytes; use camino::Utf8PathBuf; +use futures::stream::FuturesUnordered; use futures::{Stream, StreamExt as _}; use pageserver_api::key::Key; use pageserver_api::keyspace::KeySpaceAccum; use pageserver_api::pagestream_api::{PagestreamGetPageRequest, PagestreamRequest}; use pageserver_api::reltag::RelTag; use pageserver_api::shard::TenantShardId; +use pageserver_client_grpc::{self as client_grpc, ShardSpec}; use pageserver_page_api as page_api; use rand::prelude::*; use tokio::task::JoinSet; @@ -37,6 +39,10 @@ pub(crate) struct Args { /// Pageserver 
connection string. Supports postgresql:// and grpc:// protocols. #[clap(long, default_value = "postgres://postgres@localhost:64000")] page_service_connstring: String, + /// Use the rich gRPC Pageserver client `client_grpc::PageserverClient`, rather than the basic + /// no-frills `page_api::Client`. Only valid with grpc:// connstrings. + #[clap(long)] + rich_client: bool, #[clap(long)] pageserver_jwt: Option, #[clap(long, default_value = "1")] @@ -332,6 +338,7 @@ async fn main_impl( let client: Box = match scheme.as_str() { "postgresql" | "postgres" => { assert!(!args.compression, "libpq does not support compression"); + assert!(!args.rich_client, "rich client requires grpc://"); Box::new( LibpqClient::new(&args.page_service_connstring, worker_id.timeline) .await @@ -339,6 +346,16 @@ async fn main_impl( ) } + "grpc" if args.rich_client => Box::new( + RichGrpcClient::new( + &args.page_service_connstring, + worker_id.timeline, + args.compression, + ) + .await + .unwrap(), + ), + "grpc" => Box::new( GrpcClient::new( &args.page_service_connstring, @@ -680,3 +697,70 @@ impl Client for GrpcClient { Ok((resp.request_id, resp.page_images)) } } + +/// A rich gRPC Pageserver client. 
+struct RichGrpcClient { + inner: Arc, + requests: FuturesUnordered< + Pin> + Send>>, + >, +} + +impl RichGrpcClient { + async fn new( + connstring: &str, + ttid: TenantTimelineId, + compression: bool, + ) -> anyhow::Result { + let inner = Arc::new(client_grpc::PageserverClient::new( + ttid.tenant_id, + ttid.timeline_id, + ShardSpec::new( + [(ShardIndex::unsharded(), connstring.to_string())].into(), + None, + )?, + None, + compression.then_some(tonic::codec::CompressionEncoding::Zstd), + )?); + Ok(Self { + inner, + requests: FuturesUnordered::new(), + }) + } +} + +#[async_trait] +impl Client for RichGrpcClient { + async fn send_get_page( + &mut self, + req_id: u64, + req_lsn: Lsn, + mod_lsn: Lsn, + rel: RelTag, + blks: Vec, + ) -> anyhow::Result<()> { + let req = page_api::GetPageRequest { + request_id: req_id, + request_class: page_api::GetPageClass::Normal, + read_lsn: page_api::ReadLsn { + request_lsn: req_lsn, + not_modified_since_lsn: Some(mod_lsn), + }, + rel, + block_numbers: blks, + }; + let inner = self.inner.clone(); + self.requests.push(Box::pin(async move { + inner + .get_page(req) + .await + .map_err(|err| anyhow::anyhow!("{err}")) + })); + Ok(()) + } + + async fn recv_get_page(&mut self) -> anyhow::Result<(u64, Vec)> { + let resp = self.requests.next().await.unwrap()?; + Ok((resp.request_id, resp.page_images)) + } +} From 13b5e7b26fe009c711b3e57436433cf8e4d140d6 Mon Sep 17 00:00:00 2001 From: "Alex Chi Z." <4198311+skyzh@users.noreply.github.com> Date: Thu, 10 Jul 2025 14:02:54 -0400 Subject: [PATCH 58/65] fix(compute_ctl): reload config before applying spec (#12551) ## Problem If we have catalog update AND a pageserver migration batched in a single spec, we will not be able to apply the spec (running the SQL) because the compute is not attached to the right pageserver and we are not able to read anything if we don't pick up the latest pageserver connstring. 
This is not an issue for now, because cplane always schedules shard split / pageserver migrations with `skip_pg_catalog_updates` (I suppose). Context: https://databricks.slack.com/archives/C09254R641L/p1752163559259399?thread_ts=1752160163.141149&cid=C09254R641L With this fix, backpressure will likely not be able to affect reconfigurations. ## Summary of changes Do `pg_reload_conf` before we apply specs in SQL. --------- Signed-off-by: Alex Chi Z --- compute_tools/src/compute.rs | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/compute_tools/src/compute.rs b/compute_tools/src/compute.rs index 0496d38e67..4a29c232ac 100644 --- a/compute_tools/src/compute.rs +++ b/compute_tools/src/compute.rs @@ -1805,6 +1805,8 @@ impl ComputeNode { tls_config, )?; + self.pg_reload_conf()?; + if !spec.skip_pg_catalog_updates { let max_concurrent_connections = spec.reconfigure_concurrency; // Temporarily reset max_cluster_size in config @@ -1824,10 +1826,9 @@ impl ComputeNode { Ok(()) })?; + self.pg_reload_conf()?; } - self.pg_reload_conf()?; - let unknown_op = "unknown".to_string(); let op_id = spec.operation_uuid.as_ref().unwrap_or(&unknown_op); info!( From c5aaf1ae21df31233a4bc81eef88d56e95b2a33e Mon Sep 17 00:00:00 2001 From: Mikhail Date: Thu, 10 Jul 2025 19:37:54 +0100 Subject: [PATCH 59/65] Qualify call to neon extension in compute_ctl's prewarming (#12554) https://github.com/neondatabase/cloud/issues/19011 Calls without `neon.` failed on staging.
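For illustration, the failure mode can be sketched in SQL (function names taken from this patch; the exact error depends on `search_path`):

```sql
-- The updated tests install the extension into a dedicated schema:
create schema neon;
create extension neon with schema neon;

-- Unqualified call: resolves only if the neon schema is on search_path.
select get_local_cache_state();
-- Schema-qualified call: works regardless of search_path.
select neon.get_local_cache_state();
```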
Also fix local tests to work with qualified calls --- compute_tools/src/compute_prewarm.rs | 6 +++--- test_runner/regress/test_lfc_prewarm.py | 14 ++++++++------ test_runner/regress/test_replica_promotes.py | 4 ++-- 3 files changed, 13 insertions(+), 11 deletions(-) diff --git a/compute_tools/src/compute_prewarm.rs b/compute_tools/src/compute_prewarm.rs index 3f6f9a7ecc..d014a5bb72 100644 --- a/compute_tools/src/compute_prewarm.rs +++ b/compute_tools/src/compute_prewarm.rs @@ -70,7 +70,7 @@ impl ComputeNode { } }; let row = match client - .query_one("select * from get_prewarm_info()", &[]) + .query_one("select * from neon.get_prewarm_info()", &[]) .await { Ok(row) => row, @@ -146,7 +146,7 @@ impl ComputeNode { ComputeNode::get_maintenance_client(&self.tokio_conn_conf) .await .context("connecting to postgres")? - .query_one("select prewarm_local_cache($1)", &[&uncompressed]) + .query_one("select neon.prewarm_local_cache($1)", &[&uncompressed]) .await .context("loading LFC state into postgres") .map(|_| ()) @@ -196,7 +196,7 @@ impl ComputeNode { ComputeNode::get_maintenance_client(&self.tokio_conn_conf) .await .context("connecting to postgres")? - .query_one("select get_local_cache_state()", &[]) + .query_one("select neon.get_local_cache_state()", &[]) .await .context("querying LFC state")? 
.try_get::(0) diff --git a/test_runner/regress/test_lfc_prewarm.py b/test_runner/regress/test_lfc_prewarm.py index 22e5bf576f..0f0cf4cc6d 100644 --- a/test_runner/regress/test_lfc_prewarm.py +++ b/test_runner/regress/test_lfc_prewarm.py @@ -40,7 +40,7 @@ def prom_parse(client: EndpointHttpClient) -> dict[str, float]: def offload_lfc(method: PrewarmMethod, client: EndpointHttpClient, cur: Cursor) -> Any: if method == PrewarmMethod.POSTGRES: - cur.execute("select get_local_cache_state()") + cur.execute("select neon.get_local_cache_state()") return cur.fetchall()[0][0] if method == PrewarmMethod.AUTOPREWARM: @@ -72,7 +72,7 @@ def prewarm_endpoint( elif method == PrewarmMethod.COMPUTE_CTL: client.prewarm_lfc() elif method == PrewarmMethod.POSTGRES: - cur.execute("select prewarm_local_cache(%s)", (lfc_state,)) + cur.execute("select neon.prewarm_local_cache(%s)", (lfc_state,)) def check_prewarmed( @@ -116,7 +116,7 @@ def test_lfc_prewarm(neon_simple_env: NeonEnv, method: PrewarmMethod): pg_conn = endpoint.connect() pg_cur = pg_conn.cursor() - pg_cur.execute("create extension neon") + pg_cur.execute("create schema neon; create extension neon with schema neon") pg_cur.execute("create database lfc") lfc_conn = endpoint.connect(dbname="lfc") @@ -142,10 +142,12 @@ def test_lfc_prewarm(neon_simple_env: NeonEnv, method: PrewarmMethod): lfc_cur = lfc_conn.cursor() prewarm_endpoint(method, client, pg_cur, lfc_state) - pg_cur.execute("select lfc_value from neon_lfc_stats where lfc_key='file_cache_used_pages'") + pg_cur.execute( + "select lfc_value from neon.neon_lfc_stats where lfc_key='file_cache_used_pages'" + ) lfc_used_pages = pg_cur.fetchall()[0][0] log.info(f"Used LFC size: {lfc_used_pages}") - pg_cur.execute("select * from get_prewarm_info()") + pg_cur.execute("select * from neon.get_prewarm_info()") total, prewarmed, skipped, _ = pg_cur.fetchall()[0] log.info(f"Prewarm info: {total=} {prewarmed=} {skipped=}") progress = (prewarmed + skipped) * 100 // total @@ -186,7 +188,7 
@@ def test_lfc_prewarm_under_workload(neon_simple_env: NeonEnv, method: PrewarmMet pg_conn = endpoint.connect() pg_cur = pg_conn.cursor() - pg_cur.execute("create extension neon") + pg_cur.execute("create schema neon; create extension neon with schema neon") pg_cur.execute("CREATE DATABASE lfc") lfc_conn = endpoint.connect(dbname="lfc") diff --git a/test_runner/regress/test_replica_promotes.py b/test_runner/regress/test_replica_promotes.py index 1f26269f40..8d39ac123a 100644 --- a/test_runner/regress/test_replica_promotes.py +++ b/test_runner/regress/test_replica_promotes.py @@ -60,7 +60,7 @@ def test_replica_promote(neon_simple_env: NeonEnv, method: PromoteMethod): with primary.connect() as primary_conn: primary_cur = primary_conn.cursor() - primary_cur.execute("create extension neon") + primary_cur.execute("create schema neon;create extension neon with schema neon") primary_cur.execute( "create table t(pk bigint GENERATED ALWAYS AS IDENTITY, payload integer)" ) @@ -172,7 +172,7 @@ def test_replica_promote_handler_disconnects(neon_simple_env: NeonEnv): secondary: Endpoint = env.endpoints.new_replica_start(origin=primary, endpoint_id="secondary") with primary.connect() as conn, conn.cursor() as cur: - cur.execute("create extension neon") + cur.execute("create schema neon;create extension neon with schema neon") cur.execute("create table t(pk bigint GENERATED ALWAYS AS IDENTITY, payload integer)") cur.execute("INSERT INTO t(payload) SELECT generate_series(1, 100)") cur.execute("show neon.safekeepers") From 3593fe195a55441b76874e64bb168acf71f6b4c4 Mon Sep 17 00:00:00 2001 From: Mikhail Date: Thu, 10 Jul 2025 20:28:10 +0100 Subject: [PATCH 60/65] split TerminationPending into two values, keeping ComputeStatus stateless (#12506) After https://github.com/neondatabase/neon/pull/12240 we observed issues in our Go code, since `ComputeStatus` is no longer stateless and thus doesn't deserialize as a string.
``` could not check compute activity: json: cannot unmarshal object into Go struct field ComputeState.status of type computeclient.ComputeStatus ``` - Fix this by splitting this status into two. - Update compute OpenAPI spec to reflect changes to `/terminate` in previous PR --- compute_tools/README.md | 9 +++-- compute_tools/src/compute.rs | 17 +++++++--- compute_tools/src/http/openapi_spec.yaml | 39 ++++++++++++++++++++-- compute_tools/src/http/routes/terminate.rs | 14 ++++---- compute_tools/src/monitor.rs | 3 +- control_plane/src/endpoint.rs | 3 +- libs/compute_api/src/responses.rs | 18 ++++++++-- 7 files changed, 81 insertions(+), 22 deletions(-) diff --git a/compute_tools/README.md b/compute_tools/README.md index 8d84031efc..49f1368f0e 100644 --- a/compute_tools/README.md +++ b/compute_tools/README.md @@ -46,11 +46,14 @@ stateDiagram-v2 Configuration --> Failed : Failed to configure the compute Configuration --> Running : Compute has been configured Empty --> Init : Compute spec is immediately available - Empty --> TerminationPending : Requested termination + Empty --> TerminationPendingFast : Requested termination + Empty --> TerminationPendingImmediate : Requested termination Init --> Failed : Failed to start Postgres Init --> Running : Started Postgres - Running --> TerminationPending : Requested termination - TerminationPending --> Terminated : Terminated compute + Running --> TerminationPendingFast : Requested termination + Running --> TerminationPendingImmediate : Requested termination + TerminationPendingFast --> Terminated : Terminated compute with 30s delay for cplane to inspect status + TerminationPendingImmediate --> Terminated : Terminated compute immediately Failed --> [*] : Compute exited Terminated --> [*] : Compute exited ``` diff --git a/compute_tools/src/compute.rs b/compute_tools/src/compute.rs index 4a29c232ac..c05cc229a2 100644 --- a/compute_tools/src/compute.rs +++ b/compute_tools/src/compute.rs @@ -956,14 +956,20 @@ impl ComputeNode { None }; - let
mut delay_exit = false; let mut state = self.state.lock().unwrap(); state.terminate_flush_lsn = lsn; - if let ComputeStatus::TerminationPending { mode } = state.status { + + let delay_exit = state.status == ComputeStatus::TerminationPendingFast; + if state.status == ComputeStatus::TerminationPendingFast + || state.status == ComputeStatus::TerminationPendingImmediate + { + info!( + "Changing compute status from {} to {}", + state.status, + ComputeStatus::Terminated + ); state.status = ComputeStatus::Terminated; self.state_changed.notify_all(); - // we were asked to terminate gracefully, don't exit to avoid restart - delay_exit = mode == compute_api::responses::TerminateMode::Fast } drop(state); @@ -1901,7 +1907,8 @@ impl ComputeNode { // exit loop ComputeStatus::Failed - | ComputeStatus::TerminationPending { .. } + | ComputeStatus::TerminationPendingFast + | ComputeStatus::TerminationPendingImmediate | ComputeStatus::Terminated => break 'cert_update, // wait diff --git a/compute_tools/src/http/openapi_spec.yaml b/compute_tools/src/http/openapi_spec.yaml index 3c58b284b3..93a357e160 100644 --- a/compute_tools/src/http/openapi_spec.yaml +++ b/compute_tools/src/http/openapi_spec.yaml @@ -371,9 +371,28 @@ paths: summary: Terminate Postgres and wait for it to exit description: "" operationId: terminate + parameters: + - name: mode + in: query + description: "Terminate mode: fast (wait 30s before returning) and immediate" + required: false + schema: + type: string + enum: ["fast", "immediate"] + default: fast responses: 200: description: Result + content: + application/json: + schema: + $ref: "#/components/schemas/TerminateResponse" + 201: + description: Result if compute is already terminated + content: + application/json: + schema: + $ref: "#/components/schemas/TerminateResponse" 412: description: "wrong state" content: @@ -530,11 +549,14 @@ components: type: string enum: - empty - - init - - failed - - running - configuration_pending + - init + - running - 
configuration + - failed + - termination_pending_fast + - termination_pending_immediate + - terminated example: running ExtensionInstallRequest: @@ -660,6 +682,17 @@ components: description: Role name. example: "neon" + TerminateResponse: + type: object + required: + - lsn + properties: + lsn: + type: string + nullable: true + description: "last WAL flush LSN" + example: "0/028F10D8" + SetRoleGrantsResponse: type: object required: diff --git a/compute_tools/src/http/routes/terminate.rs b/compute_tools/src/http/routes/terminate.rs index 32d90a5990..5b30b020c8 100644 --- a/compute_tools/src/http/routes/terminate.rs +++ b/compute_tools/src/http/routes/terminate.rs @@ -3,7 +3,7 @@ use crate::http::JsonResponse; use axum::extract::State; use axum::response::Response; use axum_extra::extract::OptionalQuery; -use compute_api::responses::{ComputeStatus, TerminateResponse}; +use compute_api::responses::{ComputeStatus, TerminateMode, TerminateResponse}; use http::StatusCode; use serde::Deserialize; use std::sync::Arc; @@ -12,7 +12,7 @@ use tracing::info; #[derive(Deserialize, Default)] pub struct TerminateQuery { - mode: compute_api::responses::TerminateMode, + mode: TerminateMode, } /// Terminate the compute. 
@@ -24,16 +24,16 @@ pub(in crate::http) async fn terminate( { let mut state = compute.state.lock().unwrap(); if state.status == ComputeStatus::Terminated { - return JsonResponse::success(StatusCode::CREATED, state.terminate_flush_lsn); + let response = TerminateResponse { + lsn: state.terminate_flush_lsn, + }; + return JsonResponse::success(StatusCode::CREATED, response); } if !matches!(state.status, ComputeStatus::Empty | ComputeStatus::Running) { return JsonResponse::invalid_status(state.status); } - state.set_status( - ComputeStatus::TerminationPending { mode }, - &compute.state_changed, - ); + state.set_status(mode.into(), &compute.state_changed); } forward_termination_signal(false); diff --git a/compute_tools/src/monitor.rs b/compute_tools/src/monitor.rs index 8a2f6addad..fa01545856 100644 --- a/compute_tools/src/monitor.rs +++ b/compute_tools/src/monitor.rs @@ -84,7 +84,8 @@ impl ComputeMonitor { if matches!( compute_status, ComputeStatus::Terminated - | ComputeStatus::TerminationPending { .. } + | ComputeStatus::TerminationPendingFast + | ComputeStatus::TerminationPendingImmediate | ComputeStatus::Failed ) { info!( diff --git a/control_plane/src/endpoint.rs b/control_plane/src/endpoint.rs index 74ab15dc97..ad2067e0f2 100644 --- a/control_plane/src/endpoint.rs +++ b/control_plane/src/endpoint.rs @@ -922,7 +922,8 @@ impl Endpoint { ComputeStatus::Empty | ComputeStatus::ConfigurationPending | ComputeStatus::Configuration - | ComputeStatus::TerminationPending { .. 
} + | ComputeStatus::TerminationPendingFast + | ComputeStatus::TerminationPendingImmediate | ComputeStatus::Terminated => { bail!("unexpected compute status: {:?}", state.status) } diff --git a/libs/compute_api/src/responses.rs b/libs/compute_api/src/responses.rs index e10c381fb4..2fe233214a 100644 --- a/libs/compute_api/src/responses.rs +++ b/libs/compute_api/src/responses.rs @@ -121,6 +121,15 @@ pub enum TerminateMode { Immediate, } +impl From<TerminateMode> for ComputeStatus { + fn from(mode: TerminateMode) -> Self { + match mode { + TerminateMode::Fast => ComputeStatus::TerminationPendingFast, + TerminateMode::Immediate => ComputeStatus::TerminationPendingImmediate, + } + } +} + #[derive(Serialize, Clone, Copy, Debug, Deserialize, PartialEq, Eq)] #[serde(rename_all = "snake_case")] pub enum ComputeStatus { @@ -141,7 +150,9 @@ pub enum ComputeStatus { // control-plane to terminate it. Failed, // Termination requested - TerminationPending { mode: TerminateMode }, + TerminationPendingFast, + // Termination requested, without waiting 30s before returning from /terminate + TerminationPendingImmediate, // Terminated Postgres Terminated, } @@ -160,7 +171,10 @@ impl Display for ComputeStatus { ComputeStatus::Running => f.write_str("running"), ComputeStatus::Configuration => f.write_str("configuration"), ComputeStatus::Failed => f.write_str("failed"), - ComputeStatus::TerminationPending { .. } => f.write_str("termination-pending"), + ComputeStatus::TerminationPendingFast => f.write_str("termination-pending-fast"), + ComputeStatus::TerminationPendingImmediate => { + f.write_str("termination-pending-immediate") + } ComputeStatus::Terminated => f.write_str("terminated"), } } From 1b7339b53e2483ab9d8af553007b1af038440c6e Mon Sep 17 00:00:00 2001 From: Tristan Partin Date: Thu, 10 Jul 2025 15:34:11 -0500 Subject: [PATCH 61/65] PG: add max_wal_rate (#12470) ## Problem One PG tenant may write too fast and overwhelm the PS.
The other tenants sharing the same PSs will get very little bandwidth. We had one experiment that two tenants sharing the same PSs. One tenant runs a large ingestion that delivers hundreds of MB/s while the other only get < 10 MB/s. ## Summary of changes Rate limit how fast PG can generate WALs. The default is -1. We may scale the default value with the CPU count. Need to run some experiments to verify. ## How is this tested? CI. PGBench. No limit first. Then set to 1 MB/s and you can see the tps drop. Then reverted the change and tps increased again. pgbench -i -s 10 -p 55432 -h 127.0.0.1 -U cloud_admin -d postgres pgbench postgres -c 10 -j 10 -T 6000000 -P 1 -b tpcb-like -h 127.0.0.1 -U cloud_admin -p 55432 progress: 33.0 s, 986.0 tps, lat 10.142 ms stddev 3.856 progress: 34.0 s, 973.0 tps, lat 10.299 ms stddev 3.857 progress: 35.0 s, 1004.0 tps, lat 9.939 ms stddev 3.604 progress: 36.0 s, 984.0 tps, lat 10.183 ms stddev 3.713 progress: 37.0 s, 998.0 tps, lat 10.004 ms stddev 3.668 progress: 38.0 s, 648.9 tps, lat 12.947 ms stddev 24.970 progress: 39.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 40.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 41.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 42.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 43.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 44.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 45.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 46.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 47.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 48.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 49.0 s, 347.3 tps, lat 321.560 ms stddev 1805.633 progress: 50.0 s, 346.8 tps, lat 9.898 ms stddev 3.809 progress: 51.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 52.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 53.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 54.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 55.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 56.0 s, 0.0 tps, lat 
0.000 ms stddev 0.000 progress: 57.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 58.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 59.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 60.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 61.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 62.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 63.0 s, 494.5 tps, lat 276.504 ms stddev 1853.689 progress: 64.0 s, 488.0 tps, lat 20.530 ms stddev 71.981 progress: 65.0 s, 407.8 tps, lat 9.502 ms stddev 3.329 progress: 66.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 67.0 s, 0.0 tps, lat 0.000 ms stddev 0.000 progress: 68.0 s, 504.5 tps, lat 71.627 ms stddev 397.733 progress: 69.0 s, 371.0 tps, lat 24.898 ms stddev 29.007 progress: 70.0 s, 541.0 tps, lat 19.684 ms stddev 24.094 progress: 71.0 s, 342.0 tps, lat 29.542 ms stddev 54.935 Co-authored-by: Haoyu Huang --- libs/walproposer/src/api_bindings.rs | 7 ++++ pgxn/neon/walproposer.h | 17 ++++++++ pgxn/neon/walproposer_pg.c | 62 +++++++++++++++++++++++++++- 3 files changed, 84 insertions(+), 2 deletions(-) diff --git a/libs/walproposer/src/api_bindings.rs b/libs/walproposer/src/api_bindings.rs index 7c6abf252e..5f856a44d4 100644 --- a/libs/walproposer/src/api_bindings.rs +++ b/libs/walproposer/src/api_bindings.rs @@ -428,6 +428,12 @@ pub fn empty_shmem() -> crate::bindings::WalproposerShmemState { shard_number: 0, }; + let empty_wal_rate_limiter = crate::bindings::WalRateLimiter { + should_limit: crate::bindings::pg_atomic_uint32 { value: 0 }, + sent_bytes: 0, + last_recorded_time_us: 0, + }; + crate::bindings::WalproposerShmemState { propEpochStartLsn: crate::bindings::pg_atomic_uint64 { value: 0 }, donor_name: [0; 64], @@ -441,6 +447,7 @@ pub fn empty_shmem() -> crate::bindings::WalproposerShmemState { num_shards: 0, replica_promote: false, min_ps_feedback: empty_feedback, + wal_rate_limiter: empty_wal_rate_limiter, } } diff --git a/pgxn/neon/walproposer.h b/pgxn/neon/walproposer.h index 4b223b6b18..e3a4022664 
100644 --- a/pgxn/neon/walproposer.h +++ b/pgxn/neon/walproposer.h @@ -376,6 +376,18 @@ typedef struct PageserverFeedback uint32 shard_number; } PageserverFeedback; +/* BEGIN_HADRON */ +typedef struct WalRateLimiter +{ + /* If the value is 1, PG backends will hit backpressure. */ + pg_atomic_uint32 should_limit; + /* The number of bytes sent in the current second. */ + uint64 sent_bytes; + /* The last recorded time in microsecond. */ + TimestampTz last_recorded_time_us; +} WalRateLimiter; +/* END_HADRON */ + typedef struct WalproposerShmemState { pg_atomic_uint64 propEpochStartLsn; @@ -395,6 +407,11 @@ typedef struct WalproposerShmemState /* aggregated feedback with min LSNs across shards */ PageserverFeedback min_ps_feedback; + + /* BEGIN_HADRON */ + /* The WAL rate limiter */ + WalRateLimiter wal_rate_limiter; + /* END_HADRON */ } WalproposerShmemState; /* diff --git a/pgxn/neon/walproposer_pg.c b/pgxn/neon/walproposer_pg.c index 185fc83ace..aaf8f43eeb 100644 --- a/pgxn/neon/walproposer_pg.c +++ b/pgxn/neon/walproposer_pg.c @@ -66,6 +66,9 @@ int wal_acceptor_reconnect_timeout = 1000; int wal_acceptor_connection_timeout = 10000; int safekeeper_proto_version = 3; char *safekeeper_conninfo_options = ""; +/* BEGIN_HADRON */ +int databricks_max_wal_mb_per_second = -1; +/* END_HADRON */ /* Set to true in the walproposer bgw. */ static bool am_walproposer; @@ -252,6 +255,18 @@ nwp_register_gucs(void) PGC_POSTMASTER, 0, NULL, NULL, NULL); + + /* BEGIN_HADRON */ + DefineCustomIntVariable( + "databricks.max_wal_mb_per_second", + "The maximum WAL MB per second allowed. If breached, sending WAL hit the backpressure. 
Setting to -1 disables the limit.", + NULL, + &databricks_max_wal_mb_per_second, + -1, -1, INT_MAX, + PGC_SUSET, + GUC_UNIT_MB, + NULL, NULL, NULL); + /* END_HADRON */ } @@ -393,6 +408,7 @@ assign_neon_safekeepers(const char *newval, void *extra) static uint64 backpressure_lag_impl(void) { + struct WalproposerShmemState* state = NULL; if (max_replication_apply_lag > 0 || max_replication_flush_lag > 0 || max_replication_write_lag > 0) { XLogRecPtr writePtr; @@ -426,6 +442,18 @@ backpressure_lag_impl(void) return (myFlushLsn - applyPtr - max_replication_apply_lag * MB); } } + + /* BEGIN_HADRON */ + if (databricks_max_wal_mb_per_second == -1) { + return 0; + } + + state = GetWalpropShmemState(); + if (state != NULL && pg_atomic_read_u32(&state->wal_rate_limiter.should_limit) == 1) + { + return 1; + } + /* END_HADRON */ return 0; } @@ -472,6 +500,9 @@ WalproposerShmemInit(void) pg_atomic_init_u64(&walprop_shared->mineLastElectedTerm, 0); pg_atomic_init_u64(&walprop_shared->backpressureThrottlingTime, 0); pg_atomic_init_u64(&walprop_shared->currentClusterSize, 0); + /* BEGIN_HADRON */ + pg_atomic_init_u32(&walprop_shared->wal_rate_limiter.should_limit, 0); + /* END_HADRON */ } LWLockRelease(AddinShmemInitLock); @@ -487,6 +518,9 @@ WalproposerShmemInit_SyncSafekeeper(void) pg_atomic_init_u64(&walprop_shared->propEpochStartLsn, 0); pg_atomic_init_u64(&walprop_shared->mineLastElectedTerm, 0); pg_atomic_init_u64(&walprop_shared->backpressureThrottlingTime, 0); + /* BEGIN_HADRON */ + pg_atomic_init_u32(&walprop_shared->wal_rate_limiter.should_limit, 0); + /* END_HADRON */ } #define BACK_PRESSURE_DELAY 10000L // 0.01 sec @@ -521,7 +555,6 @@ backpressure_throttling_impl(void) if (lag == 0) return retry; - old_status = get_ps_display(&len); new_status = (char *) palloc(len + 64 + 1); memcpy(new_status, old_status, len); @@ -1458,6 +1491,8 @@ XLogBroadcastWalProposer(WalProposer *wp) { XLogRecPtr startptr; XLogRecPtr endptr; + struct WalproposerShmemState *state = NULL; + 
TimestampTz now = 0; /* Start from the last sent position */ startptr = sentPtr; @@ -1502,13 +1537,36 @@ XLogBroadcastWalProposer(WalProposer *wp) * that arbitrary LSN is eventually reported as written, flushed and * applied, so that it can measure the elapsed time. */ - LagTrackerWrite(endptr, GetCurrentTimestamp()); + now = GetCurrentTimestamp(); + LagTrackerWrite(endptr, now); /* Do we have any work to do? */ Assert(startptr <= endptr); if (endptr <= startptr) return; + /* BEGIN_HADRON */ + state = GetWalpropShmemState(); + if (databricks_max_wal_mb_per_second != -1 && state != NULL) + { + uint64 max_wal_bytes = (uint64) databricks_max_wal_mb_per_second * 1024 * 1024; + struct WalRateLimiter *limiter = &state->wal_rate_limiter; + + if (now - limiter->last_recorded_time_us > USECS_PER_SEC) + { + /* Reset the rate limiter */ + limiter->last_recorded_time_us = now; + limiter->sent_bytes = 0; + pg_atomic_exchange_u32(&limiter->should_limit, 0); + } + limiter->sent_bytes += (endptr - startptr); + if (limiter->sent_bytes > max_wal_bytes) + { + pg_atomic_exchange_u32(&limiter->should_limit, 1); + } + } + /* END_HADRON */ + WalProposerBroadcast(wp, startptr, endptr); sentPtr = endptr; From 44ea17b7b24240937fc214a1bc5453da0d840ece Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Thu, 10 Jul 2025 22:39:42 +0200 Subject: [PATCH 62/65] pageserver/page_api: add attempt to GetPage request ID (#12536) ## Problem `GetPageRequest::request_id` is supposed to be a unique ID for a request. It's not, because we may retry the request using the same ID. This causes assertion failures and confusion. Touches #11735. Requires #12480. ## Summary of changes Extend the request ID with a retry attempt, and handle it in the gRPC client and server. 
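The `(id, attempt)` scheme described above can be sketched in a few lines: the retry loop passes the attempt number into the request closure, so a retried request reuses its base ID but stays distinguishable from the original in-flight attempt. A minimal model (Python stand-ins for the Rust `RequestID` and `Retry::with`; names mirror the patch but the logic is a sketch):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RequestID:
    """Models the proto RequestID: a base id plus a retry attempt."""
    id: int
    attempt: int = 0

    def __str__(self):
        return f"{self.id}.{self.attempt}"

def with_retries(make_request, max_attempts=3):
    """Retry loop in the spirit of Retry::with: the closure receives the
    attempt number, so each retry sends a distinct (id, attempt) pair."""
    for attempt in range(max_attempts):
        try:
            return make_request(attempt)
        except TimeoutError:
            continue
    raise TimeoutError("all attempts failed")

sent = []
def fake_send(attempt):
    req_id = RequestID(id=42, attempt=attempt)
    sent.append(str(req_id))
    if attempt < 2:
        raise TimeoutError  # simulate a timed-out request
    return "page data"

assert with_retries(fake_send) == "page data"
assert sent == ["42.0", "42.1", "42.2"]  # same base ID, unique per attempt
```

This is why the server can tell a retry apart from the still-pending original without the caller having to allocate a fresh base ID.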
--- pageserver/client_grpc/src/client.rs | 23 +++++--- pageserver/client_grpc/src/pool.rs | 13 +++++ pageserver/client_grpc/src/retry.rs | 8 +-- pageserver/page_api/proto/page_service.proto | 12 +++- pageserver/page_api/src/model.rs | 58 +++++++++++++++++-- .../pagebench/src/cmd/getpage_latest_lsn.rs | 8 +-- pageserver/src/page_service.rs | 19 +++--- 7 files changed, 110 insertions(+), 31 deletions(-) diff --git a/pageserver/client_grpc/src/client.rs b/pageserver/client_grpc/src/client.rs index e790f4018e..393f89819a 100644 --- a/pageserver/client_grpc/src/client.rs +++ b/pageserver/client_grpc/src/client.rs @@ -143,7 +143,7 @@ impl PageserverClient { req: page_api::CheckRelExistsRequest, ) -> tonic::Result { self.retry - .with(async || { + .with(async |_| { // Relation metadata is only available on shard 0. let mut client = self.shards.load_full().get_zero().client().await?; client.check_rel_exists(req).await @@ -158,7 +158,7 @@ impl PageserverClient { req: page_api::GetDbSizeRequest, ) -> tonic::Result { self.retry - .with(async || { + .with(async |_| { // Relation metadata is only available on shard 0. let mut client = self.shards.load_full().get_zero().client().await?; client.get_db_size(req).await @@ -166,8 +166,9 @@ impl PageserverClient { .await } - /// Fetches pages. The `request_id` must be unique across all in-flight requests. Automatically - /// splits requests that straddle shard boundaries, and assembles the responses. + /// Fetches pages. The `request_id` must be unique across all in-flight requests, and the + /// `attempt` must be 0 (incremented on retry). Automatically splits requests that straddle + /// shard boundaries, and assembles the responses. /// /// Unlike `page_api::Client`, this automatically converts `status_code` into `tonic::Status` /// errors. All responses will have `GetPageStatusCode::Ok`. 
@@ -187,6 +188,10 @@ impl PageserverClient { if req.block_numbers.is_empty() { return Err(tonic::Status::invalid_argument("no block number")); } + // The request attempt must be 0. The client will increment it internally. + if req.request_id.attempt != 0 { + return Err(tonic::Status::invalid_argument("request attempt must be 0")); + } // The shards may change while we're fetching pages. We execute the request using a stable // view of the shards (especially important for requests that span shards), but retry the @@ -197,7 +202,11 @@ impl PageserverClient { // TODO: the gRPC server and client doesn't yet properly support shard splits. Revisit this // once we figure out how to handle these. self.retry - .with(async || Self::get_page_with_shards(req.clone(), &self.shards.load_full()).await) + .with(async |attempt| { + let mut req = req.clone(); + req.request_id.attempt = attempt as u32; + Self::get_page_with_shards(req, &self.shards.load_full()).await + }) .await } @@ -267,7 +276,7 @@ impl PageserverClient { req: page_api::GetRelSizeRequest, ) -> tonic::Result { self.retry - .with(async || { + .with(async |_| { // Relation metadata is only available on shard 0. let mut client = self.shards.load_full().get_zero().client().await?; client.get_rel_size(req).await @@ -282,7 +291,7 @@ impl PageserverClient { req: page_api::GetSlruSegmentRequest, ) -> tonic::Result { self.retry - .with(async || { + .with(async |_| { // SLRU segments are only available on shard 0. let mut client = self.shards.load_full().get_zero().client().await?; client.get_slru_segment(req).await diff --git a/pageserver/client_grpc/src/pool.rs b/pageserver/client_grpc/src/pool.rs index 2dde40b5b4..906872e091 100644 --- a/pageserver/client_grpc/src/pool.rs +++ b/pageserver/client_grpc/src/pool.rs @@ -591,6 +591,10 @@ impl StreamPool { // Track caller response channels by request ID. If the task returns early, these response // channels will be dropped and the waiting callers will receive an error. 
+ // + // NB: this will leak entries if the server doesn't respond to a request (by request ID). + // It shouldn't happen, and if it does it will often hold onto queue depth quota anyway and + // block further use. But we could consider reaping closed channels after some time. let mut callers = HashMap::new(); // Process requests and responses. @@ -695,6 +699,15 @@ impl Drop for StreamGuard { // Release the queue depth reservation on drop. This can prematurely decrement it if dropped // before the response is received, but that's okay. + // + // TODO: actually, it's probably not okay. Queue depth release should be moved into the + // stream task, such that it continues to account for the queue depth slot until the server + // responds. Otherwise, if a slow request times out and keeps blocking the stream, the + // server will keep waiting on it and we can pile on subsequent requests (including the + // timeout retry) in the same stream and get blocked. But we may also want to avoid blocking + // requests on e.g. LSN waits and layer downloads, instead returning early to free up the + // stream. Or just scale out streams with a queue depth of 1 to sidestep all head-of-line + // blocking. TBD. let mut streams = pool.streams.lock().unwrap(); let entry = streams.get_mut(&self.id).expect("unknown stream"); assert!(entry.idle_since.is_none(), "active stream marked idle"); diff --git a/pageserver/client_grpc/src/retry.rs b/pageserver/client_grpc/src/retry.rs index a4d4b19870..a1e0b8636f 100644 --- a/pageserver/client_grpc/src/retry.rs +++ b/pageserver/client_grpc/src/retry.rs @@ -23,14 +23,14 @@ impl Retry { /// If true, log successful requests. For debugging. const LOG_SUCCESS: bool = false; - /// Runs the given async closure with timeouts and retries (exponential backoff). Logs errors, - /// using the current tracing span for context. + /// Runs the given async closure with timeouts and retries (exponential backoff), passing the + /// attempt number starting at 0. 
Logs errors, using the current tracing span for context. /// /// Only certain gRPC status codes are retried, see [`Self::should_retry`]. For default /// timeouts, see [`Self::REQUEST_TIMEOUT`] and [`Self::TOTAL_TIMEOUT`]. pub async fn with(&self, mut f: F) -> tonic::Result where - F: FnMut() -> O, + F: FnMut(usize) -> O, // takes attempt number, starting at 0 O: Future>, { let started = Instant::now(); @@ -47,7 +47,7 @@ impl Retry { } let request_started = Instant::now(); - tokio::time::timeout(Self::REQUEST_TIMEOUT, f()) + tokio::time::timeout(Self::REQUEST_TIMEOUT, f(retries)) .await .map_err(|_| { tonic::Status::deadline_exceeded(format!( diff --git a/pageserver/page_api/proto/page_service.proto b/pageserver/page_api/proto/page_service.proto index 1d6c230916..b1f266d910 100644 --- a/pageserver/page_api/proto/page_service.proto +++ b/pageserver/page_api/proto/page_service.proto @@ -153,7 +153,7 @@ message GetDbSizeResponse { message GetPageRequest { // A request ID. Will be included in the response. Should be unique for // in-flight requests on the stream. - uint64 request_id = 1; + RequestID request_id = 1; // The request class. GetPageClass request_class = 2; // The LSN to read at. @@ -177,6 +177,14 @@ message GetPageRequest { repeated uint32 block_number = 5; } +// A Request ID. Should be unique for in-flight requests on a stream. Included in the response. +message RequestID { + // The base request ID. + uint64 id = 1; + // The request attempt. Starts at 0, incremented on each retry. + uint32 attempt = 2; +} + // A GetPageRequest class. Primarily intended for observability, but may also be // used for prioritization in the future. enum GetPageClass { @@ -199,7 +207,7 @@ enum GetPageClass { // the entire batch is ready, so no one can make use of the individual pages. message GetPageResponse { // The original request's ID. - uint64 request_id = 1; + RequestID request_id = 1; // The response status code. 
GetPageStatusCode status_code = 2; // A string describing the status, if any. diff --git a/pageserver/page_api/src/model.rs b/pageserver/page_api/src/model.rs index d0d3517d41..4db8237ad8 100644 --- a/pageserver/page_api/src/model.rs +++ b/pageserver/page_api/src/model.rs @@ -356,7 +356,10 @@ impl TryFrom<proto::GetPageRequest> for GetPageRequest { return Err(ProtocolError::Missing("block_number")); } Ok(Self { - request_id: pb.request_id, + request_id: pb + .request_id + .ok_or(ProtocolError::Missing("request_id"))? + .into(), request_class: pb.request_class.into(), read_lsn: pb .read_lsn @@ -371,7 +374,7 @@ impl From<GetPageRequest> for proto::GetPageRequest { fn from(request: GetPageRequest) -> Self { Self { - request_id: request.request_id, + request_id: Some(request.request_id.into()), request_class: request.request_class.into(), read_lsn: Some(request.read_lsn.into()), rel: Some(request.rel.into()), @@ -380,8 +383,51 @@ impl From<GetPageRequest> for proto::GetPageRequest { } } -/// A GetPage request ID. -pub type RequestID = u64; +/// A GetPage request ID and retry attempt. Should be unique for in-flight requests on a stream. +#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Hash, PartialOrd, Ord)] +pub struct RequestID { + /// The base request ID. + pub id: u64, + // The request attempt. Starts at 0, incremented on each retry. + pub attempt: u32, +} + +impl RequestID { + /// Creates a new RequestID with the given ID and an initial attempt of 0.
+ pub fn new(id: u64) -> Self { + Self { id, attempt: 0 } + } +} + +impl Display for RequestID { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + write!(f, "{}.{}", self.id, self.attempt) + } +} + +impl From<proto::RequestId> for RequestID { + fn from(pb: proto::RequestId) -> Self { + Self { + id: pb.id, + attempt: pb.attempt, + } + } +} + +impl From<u64> for RequestID { + fn from(id: u64) -> Self { + Self::new(id) + } +} + +impl From<RequestID> for proto::RequestId { + fn from(request_id: RequestID) -> Self { + Self { + id: request_id.id, + attempt: request_id.attempt, + } + } +} /// A GetPage request class. #[derive(Clone, Copy, Debug, strum_macros::Display)] @@ -467,7 +513,7 @@ pub struct GetPageResponse { impl From<proto::GetPageResponse> for GetPageResponse { fn from(pb: proto::GetPageResponse) -> Self { Self { - request_id: pb.request_id, + request_id: pb.request_id.unwrap_or_default().into(), status_code: pb.status_code.into(), reason: Some(pb.reason).filter(|r| !r.is_empty()), page_images: pb.page_image, @@ -478,7 +524,7 @@ impl From<GetPageResponse> for proto::GetPageResponse { fn from(response: GetPageResponse) -> Self { Self { - request_id: response.request_id, + request_id: Some(response.request_id.into()), status_code: response.status_code.into(), reason: response.reason.unwrap_or_default(), page_image: response.page_images, diff --git a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs index 42c7e40489..b5c191e29a 100644 --- a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs +++ b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs @@ -674,7 +674,7 @@ impl Client for GrpcClient { blks: Vec, ) -> anyhow::Result<()> { let req = page_api::GetPageRequest { - request_id: req_id, + request_id: req_id.into(), request_class: page_api::GetPageClass::Normal, read_lsn: page_api::ReadLsn { request_lsn: req_lsn, @@ -694,7 +694,7 @@ impl Client for GrpcClient { "unexpected status code: {}", resp.status_code, ); -
Ok((resp.request_id, resp.page_images)) + Ok((resp.request_id.id, resp.page_images)) } } @@ -740,7 +740,7 @@ impl Client for RichGrpcClient { blks: Vec, ) -> anyhow::Result<()> { let req = page_api::GetPageRequest { - request_id: req_id, + request_id: req_id.into(), request_class: page_api::GetPageClass::Normal, read_lsn: page_api::ReadLsn { request_lsn: req_lsn, @@ -761,6 +761,6 @@ impl Client for RichGrpcClient { async fn recv_get_page(&mut self) -> anyhow::Result<(u64, Vec)> { let resp = self.requests.next().await.unwrap()?; - Ok((resp.request_id, resp.page_images)) + Ok((resp.request_id.id, resp.page_images)) } } diff --git a/pageserver/src/page_service.rs b/pageserver/src/page_service.rs index ebb1addcdb..b2f6cd465d 100644 --- a/pageserver/src/page_service.rs +++ b/pageserver/src/page_service.rs @@ -3338,9 +3338,12 @@ impl GrpcPageServiceHandler { } /// Generates a PagestreamRequest header from a ReadLsn and request ID. - fn make_hdr(read_lsn: page_api::ReadLsn, req_id: u64) -> PagestreamRequest { + fn make_hdr( + read_lsn: page_api::ReadLsn, + req_id: Option, + ) -> PagestreamRequest { PagestreamRequest { - reqid: req_id, + reqid: req_id.map(|r| r.id).unwrap_or_default(), request_lsn: read_lsn.request_lsn, not_modified_since: read_lsn .not_modified_since_lsn @@ -3450,7 +3453,7 @@ impl GrpcPageServiceHandler { batch.push(BatchedGetPageRequest { req: PagestreamGetPageRequest { - hdr: Self::make_hdr(req.read_lsn, req.request_id), + hdr: Self::make_hdr(req.read_lsn, Some(req.request_id)), rel: req.rel, blkno, }, @@ -3528,7 +3531,7 @@ impl proto::PageService for GrpcPageServiceHandler { span_record!(rel=%req.rel, lsn=%req.read_lsn); let req = PagestreamExistsRequest { - hdr: Self::make_hdr(req.read_lsn, 0), + hdr: Self::make_hdr(req.read_lsn, None), rel: req.rel, }; @@ -3678,7 +3681,7 @@ impl proto::PageService for GrpcPageServiceHandler { span_record!(db_oid=%req.db_oid, lsn=%req.read_lsn); let req = PagestreamDbSizeRequest { - hdr: Self::make_hdr(req.read_lsn, 
0), + hdr: Self::make_hdr(req.read_lsn, None), dbnode: req.db_oid, }; @@ -3728,7 +3731,7 @@ impl proto::PageService for GrpcPageServiceHandler { .await? .downgrade(); while let Some(req) = reqs.message().await? { - let req_id = req.request_id; + let req_id = req.request_id.map(page_api::RequestID::from).unwrap_or_default(); let result = Self::get_page(&ctx, &timeline, req, io_concurrency.clone()) .instrument(span.clone()) // propagate request span .await; @@ -3767,7 +3770,7 @@ impl proto::PageService for GrpcPageServiceHandler { span_record!(rel=%req.rel, lsn=%req.read_lsn); let req = PagestreamNblocksRequest { - hdr: Self::make_hdr(req.read_lsn, 0), + hdr: Self::make_hdr(req.read_lsn, None), rel: req.rel, }; @@ -3800,7 +3803,7 @@ impl proto::PageService for GrpcPageServiceHandler { span_record!(kind=%req.kind, segno=%req.segno, lsn=%req.read_lsn); let req = PagestreamGetSlruSegmentRequest { - hdr: Self::make_hdr(req.read_lsn, 0), + hdr: Self::make_hdr(req.read_lsn, None), kind: req.kind as u8, segno: req.segno, }; From b91f821e8bae4ed8635f2c9380f304fc575eed91 Mon Sep 17 00:00:00 2001 From: "Alex Chi Z." <4198311+skyzh@users.noreply.github.com> Date: Thu, 10 Jul 2025 17:49:52 -0400 Subject: [PATCH 63/65] fix(libpagestore): update the default stripe size (#12557) ## Problem Part of LKB-379 The pageserver connstrings are updated in the postmaster and then there's a hook to propagate it to the shared memory of all backends. However, the shard stripe size isn't propagated the same way. This would cause problems during shard splits: * the compute has active reads/writes * shard split happens and the cplane applies the new config (pageserver connstring + stripe size) * pageserver connstring will be updated immediately once the postmaster receives the SIGHUP, and it will be copied over to the shared memory of all other backends. * stripe size is a normal GUC and we don't have special handling around that, so if any active backend has ongoing txns the value won't be applied.
* now it's possible for backends to issue requests based on the wrong stripe size; what's worse, if a request gets cached in the prefetch buffer, it will get stuck forever. ## Summary of changes Change the default shard stripe size from 32768 to 2048 blocks to make sure it aligns with the current default in storcon. Signed-off-by: Alex Chi Z --- pgxn/neon/libpagestore.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pgxn/neon/libpagestore.c b/pgxn/neon/libpagestore.c index 3b6c4247c3..05ba6da663 100644 --- a/pgxn/neon/libpagestore.c +++ b/pgxn/neon/libpagestore.c @@ -1410,7 +1410,7 @@ pg_init_libpagestore(void) "sharding stripe size", NULL, &stripe_size, - 32768, 1, INT_MAX, + 2048, 1, INT_MAX, PGC_SIGHUP, GUC_UNIT_BLOCKS, NULL, NULL, NULL); From 8aa9540a05cfab2cf870b309665d78c837310acb Mon Sep 17 00:00:00 2001 From: Erik Grinaker Date: Fri, 11 Jul 2025 00:35:14 +0200 Subject: [PATCH 64/65] pageserver/page_api: include block number and rel in gRPC `GetPageResponse` (#12542) ## Problem With gRPC `GetPageRequest` batches, we'll have non-trivial fragmentation/reassembly logic in several places of the stack (concurrent reads, shard splits, LFC hits, etc). If we included the block numbers with the pages in `GetPageResponse` we could have better verification and observability that the final responses are correct. Touches #11735. Requires #12480. ## Summary of changes Add a `Page` struct with `block_number` for `GetPageResponse`, along with the `RelTag` for completeness, and verify them in the rich gRPC client.
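The verification added to the rich gRPC client can be sketched in isolation like this (a minimal standalone sketch with a simplified `Page` stand-in and a hypothetical `verify_pages` helper, not the real `page_api` types):

```rust
// Simplified stand-in for the new `page_api::Page` message: each page now
// carries the block number it was materialized for.
#[derive(Debug)]
pub struct Page {
    pub block_number: u32,
    pub image: Vec<u8>,
}

/// Checks that a shard's response echoes exactly the requested block numbers,
/// in request order. `Iterator::eq` also compares lengths, so missing or extra
/// pages fail the same check as wrong blocks.
pub fn verify_pages(requested: &[u32], pages: &[Page]) -> Result<(), String> {
    if !requested
        .iter()
        .copied()
        .eq(pages.iter().map(|p| p.block_number))
    {
        return Err(format!(
            "wrong pages: expected {:?} got {:?}",
            requested,
            pages.iter().map(|p| p.block_number).collect::<Vec<_>>()
        ));
    }
    Ok(())
}

fn main() {
    let pages = vec![
        Page { block_number: 3, image: vec![0; 8192] },
        Page { block_number: 7, image: vec![0; 8192] },
    ];
    assert!(verify_pages(&[3, 7], &pages).is_ok());
    assert!(verify_pages(&[7, 3], &pages).is_err()); // wrong order
    assert!(verify_pages(&[3], &pages).is_err()); // extra page in response
}
```

Before this change the client could only count pages; with block numbers in the response, reordered or misrouted pages are detectable too.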
--- pageserver/client_grpc/src/client.rs | 34 +++- pageserver/client_grpc/src/split.rs | 160 +++++++++++------- pageserver/page_api/proto/page_service.proto | 19 ++- pageserver/page_api/src/client.rs | 3 +- pageserver/page_api/src/model.rs | 59 +++++-- .../pagebench/src/cmd/getpage_latest_lsn.rs | 10 +- pageserver/src/page_service.rs | 8 +- 7 files changed, 201 insertions(+), 92 deletions(-) diff --git a/pageserver/client_grpc/src/client.rs b/pageserver/client_grpc/src/client.rs index 393f89819a..7049fbdb96 100644 --- a/pageserver/client_grpc/src/client.rs +++ b/pageserver/client_grpc/src/client.rs @@ -218,7 +218,7 @@ impl PageserverClient { ) -> tonic::Result { // Fast path: request is for a single shard. if let Some(shard_id) = - GetPageSplitter::is_single_shard(&req, shards.count, shards.stripe_size) + GetPageSplitter::for_single_shard(&req, shards.count, shards.stripe_size) { return Self::get_page_with_shard(req, shards.get(shard_id)?).await; } @@ -238,7 +238,7 @@ impl PageserverClient { splitter.add_response(shard_id, shard_response)?; } - splitter.assemble_response() + splitter.get_response() } /// Fetches pages on the given shard. Does not retry internally. @@ -246,9 +246,8 @@ impl PageserverClient { req: page_api::GetPageRequest, shard: &Shard, ) -> tonic::Result { - let expected = req.block_numbers.len(); let stream = shard.stream(req.request_class.is_bulk()).await; - let resp = stream.send(req).await?; + let resp = stream.send(req.clone()).await?; // Convert per-request errors into a tonic::Status. if resp.status_code != page_api::GetPageStatusCode::Ok { @@ -258,11 +257,27 @@ impl PageserverClient { )); } - // Check that we received the expected number of pages. - let actual = resp.page_images.len(); - if expected != actual { + // Check that we received the expected pages. 
+ if req.rel != resp.rel { return Err(tonic::Status::internal(format!( - "expected {expected} pages, got {actual}", + "shard {} returned wrong relation, expected {} got {}", + shard.id, req.rel, resp.rel + ))); + } + if !req + .block_numbers + .iter() + .copied() + .eq(resp.pages.iter().map(|p| p.block_number)) + { + return Err(tonic::Status::internal(format!( + "shard {} returned wrong pages, expected {:?} got {:?}", + shard.id, + req.block_numbers, + resp.pages + .iter() + .map(|page| page.block_number) + .collect::<Vec<_>>() ))); } @@ -435,6 +450,8 @@ impl Shards { /// * Bulk client pool: unbounded. /// * Bulk stream pool: MAX_BULK_STREAMS and MAX_BULK_STREAM_QUEUE_DEPTH. struct Shard { + /// The shard ID. + id: ShardIndex, /// Unary gRPC client pool. client_pool: Arc, /// GetPage stream pool. @@ -500,6 +517,7 @@ impl Shard { ); Ok(Self { + id: shard_id, client_pool, stream_pool, bulk_stream_pool, diff --git a/pageserver/client_grpc/src/split.rs b/pageserver/client_grpc/src/split.rs index 57c9299b96..b7539b900c 100644 --- a/pageserver/client_grpc/src/split.rs +++ b/pageserver/client_grpc/src/split.rs @@ -5,27 +5,24 @@ use bytes::Bytes; use pageserver_api::key::rel_block_to_key; use pageserver_api::shard::{ShardStripeSize, key_to_shard_number}; use pageserver_page_api as page_api; -use utils::shard::{ShardCount, ShardIndex}; +use utils::shard::{ShardCount, ShardIndex, ShardNumber}; /// Splits GetPageRequests that straddle shard boundaries and assembles the responses. /// TODO: add tests for this. pub struct GetPageSplitter { - /// The original request ID. Used for all shard requests. - request_id: page_api::RequestID, /// Split requests by shard index. requests: HashMap<ShardIndex, page_api::GetPageRequest>, - /// Maps the offset in `GetPageRequest::block_numbers` to the owning shard. Used to assemble - /// the response pages in the same order as the original request. + /// The response being assembled. Preallocated with empty pages, to be filled in.
+ response: page_api::GetPageResponse, + /// Maps the offset in `request.block_numbers` and `response.pages` to the owning shard. Used + /// to assemble the response pages in the same order as the original request. block_shards: Vec<ShardIndex>, - /// Page responses by shard index. Will be assembled into a single response. - responses: HashMap<ShardIndex, Vec<Bytes>>, } impl GetPageSplitter { /// Checks if the given request only touches a single shard, and returns the shard ID. This is /// the common case, so we check first in order to avoid unnecessary allocations and overhead. - /// The caller must ensure that the request has at least one block number, or this will panic. - pub fn is_single_shard( + pub fn for_single_shard( req: &page_api::GetPageRequest, count: ShardCount, stripe_size: ShardStripeSize, @@ -35,8 +32,12 @@ impl GetPageSplitter { return Some(ShardIndex::unsharded()); } - // Find the base shard index for the first page, and compare with the rest. - let key = rel_block_to_key(req.rel, *req.block_numbers.first().expect("no pages")); + // Find the first page's shard, for comparison. If there are no pages, just return the first + // shard (caller likely checked already, otherwise the server will reject it). + let Some(&first_page) = req.block_numbers.first() else { + return Some(ShardIndex::new(ShardNumber(0), count)); + }; + let key = rel_block_to_key(req.rel, first_page); let shard_number = key_to_shard_number(count, stripe_size, &key); req.block_numbers @@ -57,19 +58,19 @@ ) -> Self { // The caller should make sure we don't split requests unnecessarily. debug_assert!( - Self::is_single_shard(&req, count, stripe_size).is_none(), + Self::for_single_shard(&req, count, stripe_size).is_none(), "unnecessary request split" ); // Split the requests by shard index.
let mut requests = HashMap::with_capacity(2); // common case let mut block_shards = Vec::with_capacity(req.block_numbers.len()); - for blkno in req.block_numbers { + for &blkno in &req.block_numbers { let key = rel_block_to_key(req.rel, blkno); let shard_number = key_to_shard_number(count, stripe_size, &key); let shard_id = ShardIndex::new(shard_number, count); - let shard_req = requests + requests .entry(shard_id) .or_insert_with(|| page_api::GetPageRequest { request_id: req.request_id, @@ -77,20 +78,39 @@ rel: req.rel, read_lsn: req.read_lsn, block_numbers: Vec::new(), - }); - shard_req.block_numbers.push(blkno); + }) + .block_numbers + .push(blkno); block_shards.push(shard_id); } - Self { + // Construct a response to be populated by shard responses. Preallocate empty page slots + // with the expected block numbers. + let response = page_api::GetPageResponse { request_id: req.request_id, - responses: HashMap::with_capacity(requests.len()), + status_code: page_api::GetPageStatusCode::Ok, + reason: None, + rel: req.rel, + pages: req + .block_numbers + .into_iter() + .map(|block_number| { + page_api::Page { + block_number, + image: Bytes::new(), // empty page slot to be filled in + } + }) + .collect(), + }; + + Self { requests, + response, block_shards, } } - /// Drains the per-shard requests, moving them out of the hashmap to avoid extra allocations. + /// Drains the per-shard requests, moving them out of the splitter to avoid extra allocations. pub fn drain_requests( &mut self, ) -> impl Iterator<Item = (ShardIndex, page_api::GetPageRequest)> + '_ { @@ -108,72 +128,82 @@ // The caller should already have converted status codes into tonic::Status.
if response.status_code != page_api::GetPageStatusCode::Ok { return Err(tonic::Status::internal(format!( - "unexpected non-OK response for shard {shard_id}: {:?}", - response.status_code + "unexpected non-OK response for shard {shard_id}: {} {}", + response.status_code, + response.reason.unwrap_or_default() ))); } - // The stream pool ensures the response matches the request ID. - if response.request_id != self.request_id { + if response.request_id != self.response.request_id { return Err(tonic::Status::internal(format!( "response ID mismatch for shard {shard_id}: expected {}, got {}", - self.request_id, response.request_id + self.response.request_id, response.request_id ))); } - // We only dispatch one request per shard. - if self.responses.contains_key(&shard_id) { + // Place the shard response pages into the assembled response, in request order. + let mut pages = response.pages.into_iter(); + + for (i, &s) in self.block_shards.iter().enumerate() { + if shard_id != s { + continue; + } + + let Some(slot) = self.response.pages.get_mut(i) else { + return Err(tonic::Status::internal(format!( + "no block_shards slot {i} for shard {shard_id}" + ))); + }; + let Some(page) = pages.next() else { + return Err(tonic::Status::internal(format!( + "missing page {} in shard {shard_id} response", + slot.block_number + ))); + }; + if page.block_number != slot.block_number { + return Err(tonic::Status::internal(format!( + "shard {shard_id} returned wrong page at index {i}, expected {} got {}", + slot.block_number, page.block_number + ))); + } + if !slot.image.is_empty() { + return Err(tonic::Status::internal(format!( + "shard {shard_id} returned duplicate page {} at index {i}", + slot.block_number + ))); + } + + *slot = page; + } + + // Make sure we've consumed all pages from the shard response. 
+ if let Some(extra_page) = pages.next() { return Err(tonic::Status::internal(format!( - "duplicate response for shard {shard_id}" + "shard {shard_id} returned extra page: {}", + extra_page.block_number ))); } - // Add the response data to the map. - self.responses.insert(shard_id, response.page_images); Ok(()) } - /// Assembles the shard responses into a single response. Responses must be present for all - /// relevant shards, and the total number of pages must match the original request. + /// Fetches the final, assembled response. #[allow(clippy::result_large_err)] - pub fn assemble_response(self) -> tonic::Result<page_api::GetPageResponse> { - let mut response = page_api::GetPageResponse { - request_id: self.request_id, - status_code: page_api::GetPageStatusCode::Ok, - reason: None, - page_images: Vec::with_capacity(self.block_shards.len()), - }; - - // Set up per-shard page iterators we can pull from. - let mut shard_responses = HashMap::with_capacity(self.responses.len()); - for (shard_id, responses) in self.responses { - shard_responses.insert(shard_id, responses.into_iter()); - } - - // Reassemble the responses in the same order as the original request. - for shard_id in &self.block_shards { - let page = shard_responses - .get_mut(shard_id) - .ok_or_else(|| { - tonic::Status::internal(format!("missing response for shard {shard_id}")) - })? - .next() - .ok_or_else(|| { - tonic::Status::internal(format!("missing page from shard {shard_id}")) - })?; - response.page_images.push(page); - } - - // Make sure there are no additional pages. - for (shard_id, mut pages) in shard_responses { - if pages.next().is_some() { + pub fn get_response(self) -> tonic::Result<page_api::GetPageResponse> { + // Check that the response is complete.
+ for (i, page) in self.response.pages.iter().enumerate() { + if page.image.is_empty() { return Err(tonic::Status::internal(format!( - "extra pages returned from shard {shard_id}" + "missing page {} for shard {}", + page.block_number, + self.block_shards + .get(i) + .map(|s| s.to_string()) + .unwrap_or_else(|| "?".to_string()) ))); } } - Ok(response) + Ok(self.response) } } diff --git a/pageserver/page_api/proto/page_service.proto b/pageserver/page_api/proto/page_service.proto index b1f266d910..d113a04a42 100644 --- a/pageserver/page_api/proto/page_service.proto +++ b/pageserver/page_api/proto/page_service.proto @@ -208,12 +208,25 @@ enum GetPageClass { message GetPageResponse { // The original request's ID. RequestID request_id = 1; - // The response status code. + // The response status code. If not OK, the rel and page fields will be empty. GetPageStatusCode status_code = 2; // A string describing the status, if any. string reason = 3; - // The 8KB page images, in the same order as the request. Empty if status_code != OK. - repeated bytes page_image = 4; + // The relation that the pages belong to. + RelTag rel = 4; + // The page(s), in the same order as the request. + repeated Page page = 5; +} + +// A page. +// +// TODO: it would be slightly more efficient (but less convenient) to have separate arrays of block +// numbers and images, but given the 8KB page size it's probably negligible. Benchmark it anyway. +message Page { + // The page number. + uint32 block_number = 1; + // The materialized page image, as an 8KB byte vector. + bytes image = 2; } // A GetPageResponse status code. 
diff --git a/pageserver/page_api/src/client.rs b/pageserver/page_api/src/client.rs index 6523d00d3d..f70d0e7b28 100644 --- a/pageserver/page_api/src/client.rs +++ b/pageserver/page_api/src/client.rs @@ -1,4 +1,5 @@ use anyhow::Context as _; +use futures::future::ready; use futures::{Stream, StreamExt as _, TryStreamExt as _}; use tokio::io::AsyncRead; use tokio_util::io::StreamReader; @@ -110,7 +111,7 @@ impl Client { ) -> tonic::Result<impl Stream<Item = tonic::Result<GetPageResponse>> + Send + 'static> { let reqs = reqs.map(proto::GetPageRequest::from); let resps = self.inner.get_pages(reqs).await?.into_inner(); - Ok(resps.map_ok(GetPageResponse::from)) + Ok(resps.and_then(|resp| ready(GetPageResponse::try_from(resp).map_err(|err| err.into())))) } /// Returns the size of a relation, as # of blocks. diff --git a/pageserver/page_api/src/model.rs b/pageserver/page_api/src/model.rs index 4db8237ad8..a9dd154285 100644 --- a/pageserver/page_api/src/model.rs +++ b/pageserver/page_api/src/model.rs @@ -502,22 +502,30 @@ impl From for i32 { pub struct GetPageResponse { /// The original request's ID. pub request_id: RequestID, - /// The response status code. + /// The response status code. If not OK, the `rel` and `pages` fields will be empty. pub status_code: GetPageStatusCode, /// A string describing the status, if any. pub reason: Option<String>, - /// The 8KB page images, in the same order as the request. Empty if status != OK. - pub page_images: Vec<Bytes>, + /// The relation that the pages belong to. + pub rel: RelTag, + // The page(s), in the same order as the request. + pub pages: Vec<Page>, } -impl From<proto::GetPageResponse> for GetPageResponse { - fn from(pb: proto::GetPageResponse) -> Self { - Self { - request_id: pb.request_id.unwrap_or_default().into(), +impl TryFrom<proto::GetPageResponse> for GetPageResponse { + type Error = ProtocolError; + + fn try_from(pb: proto::GetPageResponse) -> Result<Self, Self::Error> { + Ok(Self { + request_id: pb + .request_id + .ok_or(ProtocolError::Missing("request_id"))?
+ .into(), status_code: pb.status_code.into(), reason: Some(pb.reason).filter(|r| !r.is_empty()), - page_images: pb.page_image, - } + rel: pb.rel.ok_or(ProtocolError::Missing("rel"))?.try_into()?, + pages: pb.page.into_iter().map(Page::from).collect(), + }) } } @@ -527,7 +535,8 @@ impl From<GetPageResponse> for proto::GetPageResponse { request_id: Some(response.request_id.into()), status_code: response.status_code.into(), reason: response.reason.unwrap_or_default(), - page_image: response.page_images, + rel: Some(response.rel.into()), + page: response.pages.into_iter().map(proto::Page::from).collect(), } } } @@ -560,11 +569,39 @@ impl GetPageResponse { request_id, status_code, reason: Some(status.message().to_string()), - page_images: Vec::new(), + rel: RelTag::default(), + pages: Vec::new(), }) } } +// A page. +#[derive(Clone, Debug)] +pub struct Page { + /// The page number. + pub block_number: u32, + /// The materialized page image, as an 8KB byte vector. + pub image: Bytes, +} + +impl From<proto::Page> for Page { + fn from(pb: proto::Page) -> Self { + Self { + block_number: pb.block_number, + image: pb.image, + } + } +} + +impl From<Page> for proto::Page { + fn from(page: Page) -> Self { + Self { + block_number: page.block_number, + image: page.image, + } + } +} + /// A GetPage response status code. /// /// These are effectively equivalent to gRPC statuses.
However, we use a bidirectional stream diff --git a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs index b5c191e29a..30b30d36f6 100644 --- a/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs +++ b/pageserver/pagebench/src/cmd/getpage_latest_lsn.rs @@ -694,7 +694,10 @@ impl Client for GrpcClient { "unexpected status code: {}", resp.status_code, ); - Ok((resp.request_id.id, resp.page_images)) + Ok(( + resp.request_id.id, + resp.pages.into_iter().map(|p| p.image).collect(), + )) } } @@ -761,6 +764,9 @@ impl Client for RichGrpcClient { async fn recv_get_page(&mut self) -> anyhow::Result<(u64, Vec<Bytes>)> { let resp = self.requests.next().await.unwrap()?; - Ok((resp.request_id.id, resp.page_images)) + Ok(( + resp.request_id.id, + resp.pages.into_iter().map(|p| p.image).collect(), + )) } } diff --git a/pageserver/src/page_service.rs b/pageserver/src/page_service.rs index b2f6cd465d..1fc7e4eac7 100644 --- a/pageserver/src/page_service.rs +++ b/pageserver/src/page_service.rs @@ -3483,12 +3483,16 @@ impl GrpcPageServiceHandler { request_id: req.request_id, status_code: page_api::GetPageStatusCode::Ok, reason: None, - page_images: Vec::with_capacity(results.len()), + rel: req.rel, + pages: Vec::with_capacity(results.len()), }; for result in results { match result { - Ok((PagestreamBeMessage::GetPage(r), _, _)) => resp.page_images.push(r.page), + Ok((PagestreamBeMessage::GetPage(r), _, _)) => resp.pages.push(page_api::Page { + block_number: r.req.blkno, + image: r.page, + }), Ok((resp, _, _)) => { return Err(tonic::Status::internal(format!( "unexpected response: {resp:?}" From cec0543b5141f24d928d891a49d8832c70161311 Mon Sep 17 00:00:00 2001 From: Tristan Partin Date: Thu, 10 Jul 2025 17:58:54 -0500 Subject: [PATCH 65/65] Add background to compute migration 0002-alter_roles.sql (#11708) On December 8th, 2023, an engineering escalation (INC-110) was opened after it was found that BYPASSRLS was being applied to all roles.
PR that introduced the issue: https://github.com/neondatabase/neon/pull/5657 Subsequent commit on main: https://github.com/neondatabase/neon/commit/ad99fa5f0393e2679e5323df653c508ffa0ac072 NOBYPASSRLS and INHERIT are the defaults for a Postgres role, but because it isn't easy to know if a Postgres cluster is affected by the issue, we need to keep the migration around for a long time, if not indefinitely, so any cluster can be fixed. Branching is the gift that keeps on giving... Signed-off-by: Tristan Partin Signed-off-by: Tristan Partin --- compute_tools/src/migrations/0002-alter_roles.sql | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/compute_tools/src/migrations/0002-alter_roles.sql b/compute_tools/src/migrations/0002-alter_roles.sql index 6cb49f873f..8fc371eb8f 100644 --- a/compute_tools/src/migrations/0002-alter_roles.sql +++ b/compute_tools/src/migrations/0002-alter_roles.sql @@ -1,3 +1,16 @@ +-- On December 8th, 2023, an engineering escalation (INC-110) was opened after +-- it was found that BYPASSRLS was being applied to all roles. +-- +-- PR that introduced the issue: https://github.com/neondatabase/neon/pull/5657 +-- Subsequent commit on main: https://github.com/neondatabase/neon/commit/ad99fa5f0393e2679e5323df653c508ffa0ac072 +-- +-- NOBYPASSRLS and INHERIT are the defaults for a Postgres role, but because it +-- isn't easy to know if a Postgres cluster is affected by the issue, we need to +-- keep the migration around for a long time, if not indefinitely, so any +-- cluster can be fixed. +-- +-- Branching is the gift that keeps on giving... + DO $$ DECLARE role_name text;