Compare commits

...

49 Commits

Author SHA1 Message Date
Christian Schwarz
0f400096b1 lints 2025-07-11 18:14:51 +00:00
Christian Schwarz
9baacfa406 cargo fmt 2025-07-11 18:12:37 +00:00
Christian Schwarz
904a63dff5 push down Arc<Mutex> 2025-07-11 18:12:21 +00:00
Christian Schwarz
67dbe63275 avoid the callback 2025-07-11 17:58:07 +00:00
Christian Schwarz
8662463b06 less lock contention 2025-07-11 17:55:13 +00:00
Christian Schwarz
78ad89b4d5 some abstraction for notifiers 2025-07-11 16:51:40 +00:00
Christian Schwarz
9c63b7c39c switch to std::sync::Mutex 2025-07-11 16:30:07 +00:00
Christian Schwarz
69918a041c storcon was not passing headers 2025-07-11 16:13:55 +00:00
Christian Schwarz
3306c5045b failpoint_support::sleep_millis_async was wrong, use sync failpoints 2025-07-11 15:55:49 +00:00
Christian Schwarz
137328f304 initial set of fixups 2025-07-11 14:07:33 +00:00
Christian Schwarz
19b0a79968 cargo fmt 2025-07-11 15:45:42 +02:00
Christian Schwarz
5d50c3c086 compile fix 2025-07-11 15:45:09 +02:00
Christian Schwarz
10955dfb52 clippy 2025-07-11 15:44:14 +02:00
Christian Schwarz
a3e05b5abf everything compiles 2025-07-11 15:39:12 +02:00
Christian Schwarz
2156d02658 reexport either 2025-07-11 15:19:48 +02:00
Christian Schwarz
9e923bcc9b remove unused stuff 2025-07-11 15:16:50 +02:00
Christian Schwarz
3a3062d236 great progress although that Pin<Box<dyn Future>> hurts perf, let's revisit it 2025-07-11 15:11:00 +02:00
Christian Schwarz
01cd326153 pageserver crate: install neon_failpoint as fail + cargo fmt 2025-07-11 14:39:41 +02:00
Christian Schwarz
a1095efd85 macros fixup 2025-07-11 14:28:40 +02:00
Christian Schwarz
75d4ccb05c support context failpoint in client and mgmt API 2025-07-11 14:26:23 +02:00
Christian Schwarz
415cdff336 audit for failpoint usage, implement missing ops 2025-07-11 14:00:23 +02:00
Christian Schwarz
27dc11f5cc apply to the _with_context() macros and update docs 2025-07-11 13:11:17 +02:00
Christian Schwarz
ac5279e600 fix macros more 2025-07-11 13:03:22 +02:00
Christian Schwarz
9aeee51bf3 safekeeper compiles now 2025-07-11 12:53:33 +02:00
Christian Schwarz
fc3f55d236 fix failpoint macros 2025-07-11 12:46:33 +02:00
Christian Schwarz
8a04a62ecd remove fail crate, put neon_failpoint in place 2025-07-11 12:37:09 +02:00
Christian Schwarz
4adc3bdd3a fixups 2025-07-11 12:29:39 +02:00
Christian Schwarz
f29b9737cc implement failpoint lib replacement 2025-07-11 11:59:01 +02:00
Tristan Partin
cec0543b51 Add background to compute migration 0002-alter_roles.sql (#11708)
On December 8th, 2023, an engineering escalation (INC-110) was opened
after it was found that BYPASSRLS was being applied to all roles.

PR that introduced the issue:
https://github.com/neondatabase/neon/pull/5657
Subsequent commit on main:
ad99fa5f03

NOBYPASSRLS and INHERIT are the defaults for a Postgres role, but
because it isn't easy to know if a Postgres cluster is affected by the
issue, we need to keep the migration around for a long time, if not
indefinitely, so any cluster can be fixed.

Branching is the gift that keeps on giving...

Signed-off-by: Tristan Partin <tristan.partin@databricks.com>

2025-07-10 22:58:54 +00:00
Erik Grinaker
8aa9540a05 pageserver/page_api: include block number and rel in gRPC GetPageResponse (#12542)
## Problem

With gRPC `GetPageRequest` batches, we'll have non-trivial
fragmentation/reassembly logic in several places of the stack
(concurrent reads, shard splits, LFC hits, etc). If we included the
block numbers with the pages in `GetPageResponse` we could have better
verification and observability that the final responses are correct.

Touches #11735.
Requires #12480.

## Summary of changes

Add a `Page` struct with `block_number` for `GetPageResponse`, along with the `RelTag` for completeness, and verify them in the rich gRPC client.
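As a rough sketch of the resulting response shape (a sketch only; apart from `block_number` and `RelTag`, the names here are assumptions, not the actual `page_api` definitions):

```rust
use bytes::Bytes;

/// Simplified stand-in for pageserver's RelTag (identifies a relation fork).
pub struct RelTag {
    pub spcnode: u32, // tablespace OID
    pub dbnode: u32,  // database OID
    pub relnode: u32, // relation OID
    pub forknum: u8,  // fork number (main, FSM, VM, ...)
}

/// A page in a GetPageResponse, tagged with its origin so that the rich
/// client can verify reassembled batches end-to-end.
pub struct Page {
    pub rel: RelTag,       // relation the page belongs to
    pub block_number: u32, // block number within the relation
    pub image: Bytes,      // raw page image
}
```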
2025-07-10 22:35:14 +00:00
Alex Chi Z.
b91f821e8b fix(libpagestore): update the default stripe size (#12557)
## Problem

Part of LKB-379

The pageserver connstrings are updated in the postmaster, and there is a hook to propagate them to the shared memory of all backends. However, the shard stripe size isn't propagated. This would cause problems during shard splits:
shard splits:

* the compute has active reads/writes
* shard split happens and the cplane applies the new config (pageserver
connstring + stripe size)
* the pageserver connstring will be updated immediately once the postmaster receives the SIGHUP, and it will be copied over to the shared memory of all other backends.
* stripe size is a normal GUC and we don't have special handling around
that, so if any active backend has ongoing txns the value won't be
applied.
* now it's possible for backends to issue requests based on the wrong
stripe size; what's worse, if a request gets cached in the prefetch
buffer, it will get stuck forever.

## Summary of changes

Update the default stripe size to make sure it aligns with the current default in storcon.

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-07-10 21:49:52 +00:00
Erik Grinaker
44ea17b7b2 pageserver/page_api: add attempt to GetPage request ID (#12536)
## Problem

`GetPageRequest::request_id` is supposed to be a unique ID for a
request. It's not, because we may retry the request using the same ID.
This causes assertion failures and confusion.

Touches #11735.
Requires #12480.

## Summary of changes

Extend the request ID with a retry attempt, and handle it in the gRPC
client and server.
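A minimal sketch of the idea (illustrative names, not the actual `page_api` types): the wire-level ID becomes a (request, attempt) pair, so a retry is distinguishable from the original send while remaining correlatable with the logical request.

```rust
/// A GetPage request ID extended with a retry attempt.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct RequestId {
    pub id: u64,      // unique per logical request
    pub attempt: u32, // 0 on first send, incremented on every retry
}

impl RequestId {
    /// Derive the ID for the next retry of the same logical request.
    pub fn retry(self) -> Self {
        Self {
            attempt: self.attempt + 1,
            ..self
        }
    }
}
```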
2025-07-10 20:39:42 +00:00
Tristan Partin
1b7339b53e PG: add max_wal_rate (#12470)
## Problem
One PG tenant may write too fast and overwhelm the PS, leaving the other tenants sharing the same PSs with very little bandwidth.

In one experiment, two tenants shared the same PSs: one tenant ran a large ingestion that delivered hundreds of MB/s while the other only got < 10 MB/s.

## Summary of changes
Rate limit how fast PG can generate WAL. The default is -1. We may scale the default value with the CPU count; we still need to run some experiments to verify.
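The mechanism itself lives in Postgres C code; purely as an illustration of the rate-limiting idea (all names here are hypothetical, and callers would skip throttling entirely when the knob is -1), a token-bucket throttle looks like this:

```rust
use std::time::{Duration, Instant};

/// Hypothetical token bucket: WAL bytes consume tokens, the bucket refills
/// at the configured rate, and a deficit translates into a sleep. Assumes
/// rate_bytes_per_sec > 0.
struct WalRateLimiter {
    rate_bytes_per_sec: f64, // the max_wal_rate-style knob
    tokens: f64,
    last_refill: Instant,
}

impl WalRateLimiter {
    /// Account for `wal_bytes` and return how long the writer should sleep.
    fn throttle(&mut self, wal_bytes: f64) -> Duration {
        let now = Instant::now();
        self.tokens += self.rate_bytes_per_sec * now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = self.tokens.min(self.rate_bytes_per_sec); // cap burst at ~1 second
        self.last_refill = now;
        self.tokens -= wal_bytes;
        if self.tokens >= 0.0 {
            Duration::ZERO
        } else {
            // Sleep until the deficit would be refilled.
            Duration::from_secs_f64(-self.tokens / self.rate_bytes_per_sec)
        }
    }
}
```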

## How is this tested?
CI.

PGBench. No limit first. Then set to 1 MB/s and you can see the tps
drop. Then reverted the change and tps increased again.

pgbench -i -s 10 -p 55432 -h 127.0.0.1 -U cloud_admin -d postgres
pgbench postgres -c 10 -j 10 -T 6000000 -P 1 -b tpcb-like -h 127.0.0.1 -U cloud_admin -p 55432

progress: 33.0 s, 986.0 tps, lat 10.142 ms stddev 3.856
progress: 34.0 s, 973.0 tps, lat 10.299 ms stddev 3.857
progress: 35.0 s, 1004.0 tps, lat 9.939 ms stddev 3.604
progress: 36.0 s, 984.0 tps, lat 10.183 ms stddev 3.713
progress: 37.0 s, 998.0 tps, lat 10.004 ms stddev 3.668
progress: 38.0 s, 648.9 tps, lat 12.947 ms stddev 24.970
progress: 39.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 40.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 41.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 42.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 43.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 44.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 45.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 46.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 47.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 48.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 49.0 s, 347.3 tps, lat 321.560 ms stddev 1805.633
progress: 50.0 s, 346.8 tps, lat 9.898 ms stddev 3.809
progress: 51.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 52.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 53.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 54.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 55.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 56.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 57.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 58.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 59.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 60.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 61.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 62.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 63.0 s, 494.5 tps, lat 276.504 ms stddev 1853.689
progress: 64.0 s, 488.0 tps, lat 20.530 ms stddev 71.981
progress: 65.0 s, 407.8 tps, lat 9.502 ms stddev 3.329
progress: 66.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 67.0 s, 0.0 tps, lat 0.000 ms stddev 0.000
progress: 68.0 s, 504.5 tps, lat 71.627 ms stddev 397.733
progress: 69.0 s, 371.0 tps, lat 24.898 ms stddev 29.007
progress: 70.0 s, 541.0 tps, lat 19.684 ms stddev 24.094
progress: 71.0 s, 342.0 tps, lat 29.542 ms stddev 54.935

Co-authored-by: Haoyu Huang <haoyu.huang@databricks.com>
2025-07-10 20:34:11 +00:00
Mikhail
3593fe195a split TerminationPending into two values, keeping ComputeStatus stateless (#12506)
After https://github.com/neondatabase/neon/pull/12240 we observed issues in our Go code, as `ComputeStatus` is not stateless (the `TerminationPending` variant carries a payload) and thus doesn't deserialize as a plain string.

```
could not check compute activity: json: cannot unmarshal object into Go struct field
ComputeState.status of type computeclient.ComputeStatus
```

- Fix this by splitting the status into two values (see the sketch below).
- Update the compute OpenAPI spec to reflect the changes to `/terminate` in the previous PR.
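For illustration, a minimal sketch of the serde behavior involved (stand-in types, not the actual definitions): an enum variant with a payload serializes as a JSON object, while unit variants serialize as plain strings that the Go client can unmarshal.

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "snake_case")]
enum StatusBefore {
    // Serializes as {"termination_pending":{"mode":"fast"}} -- an object.
    TerminationPending { mode: String },
}

#[derive(Serialize)]
#[serde(rename_all = "snake_case")]
enum StatusAfter {
    // Serialize as "termination_pending_fast" / "termination_pending_immediate".
    TerminationPendingFast,
    TerminationPendingImmediate,
}
```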
2025-07-10 19:28:10 +00:00
Mikhail
c5aaf1ae21 Qualify call to neon extension in compute_ctl's prewarming (#12554)
https://github.com/neondatabase/cloud/issues/19011
Calls without the `neon.` qualification failed on staging. Also fix local tests to work with qualified calls.
2025-07-10 18:37:54 +00:00
Alex Chi Z.
13b5e7b26f fix(compute_ctl): reload config before applying spec (#12551)
## Problem

If we have a catalog update AND a pageserver migration batched in a single spec, we will not be able to apply the spec (running the SQL), because the compute is not attached to the right pageserver and cannot read anything until it picks up the latest pageserver connstring. This is not an issue for now because cplane always schedules shard splits / pageserver migrations with `skip_pg_catalog_updates` (I suppose).

Context:
https://databricks.slack.com/archives/C09254R641L/p1752163559259399?thread_ts=1752160163.141149&cid=C09254R641L

With this fix, backpressure will likely not be able to affect
reconfigurations.

## Summary of changes

Do `pg_reload_conf` before we apply specs in SQL.

---------

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-07-10 18:02:54 +00:00
Erik Grinaker
dcdfe80bf0 pagebench: add support for rich gRPC client (#12477)
## Problem

We need to benchmark the rich gRPC client
`client_grpc::PageserverClient` against the basic, no-frills
`page_api::Client` to determine how much overhead it adds.

Touches #11735.
Requires #12476.

## Summary of changes

Add a `pagebench --rich-client` parameter to use
`client_grpc::PageserverClient`. Also adds a compression parameter to
the client.
2025-07-10 17:30:09 +00:00
Alexander Bayandin
8630d37f5e test_runner: manually reuse ports in PortDistributor (#12423)
## Problem

Sometimes we run out of free ports in `PortDistributor`. This particularly affects failed tests, which we rerun automatically up to 3 times (using up to 3x more ports).

## Summary of changes
- Cycle over the range of ports to reuse ports freed by previous tests (see the sketch below)

Ref: LKB-62
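A rough sketch of the approach (the real `PortDistributor` lives in the Python `test_runner`; this Rust rendering and its names are illustrative only):

```rust
/// Illustrative sketch: cycle through a port range, reusing ports freed by
/// earlier tests instead of failing once the range end is reached.
struct PortDistributor {
    range: std::ops::Range<u16>,
    next: u16,
}

impl PortDistributor {
    fn new(range: std::ops::Range<u16>) -> Self {
        Self { next: range.start, range }
    }

    /// Try each port in the range at most once, wrapping around.
    fn get(&mut self, is_free: impl Fn(u16) -> bool) -> Option<u16> {
        for _ in 0..self.range.len() {
            let port = self.next;
            self.next = if port + 1 >= self.range.end {
                self.range.start
            } else {
                port + 1
            };
            if is_free(port) {
                return Some(port);
            }
        }
        None // every port in the range is currently in use
    }
}
```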
2025-07-10 15:53:38 +00:00
Erik Grinaker
2fc77c836b pageserver/client_grpc: add shard map updates (#12480)
## Problem

The communicator gRPC client must support changing the shard map on
splits.

Touches #11735.
Requires #12476.

## Summary of changes

* Wrap the shard set in an `ArcSwap` to allow swapping it out (see the sketch below).
* Add a new `ShardSpec` parameter struct to pass validated shard info to
the client.
* Add `update_shards()` to change the shard set. In-flight requests are
allowed to complete using the old shards.
* Restructure `get_page` to use a stable view of the shard map, and
retry errors at the top (pre-split) level to pick up shard map changes.
* Also mark `tonic::Status::Internal` as non-retryable, so that we can use it for client-side invariant checks without continually retrying these.
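A minimal sketch of the swap pattern (assuming the `arc-swap` crate that this PR adds; `ShardMap` and the method bodies are stand-ins):

```rust
use std::sync::Arc;
use arc_swap::ArcSwap;

struct ShardMap; // stand-in: shard count, stripe size, per-shard channels, ...

struct PageserverClient {
    shards: ArcSwap<ShardMap>,
}

impl PageserverClient {
    /// Swap in a new shard set. In-flight requests keep their own
    /// Arc<ShardMap> clone, so they complete against the old shards.
    fn update_shards(&self, new: ShardMap) {
        self.shards.store(Arc::new(new));
    }

    fn get_page(&self) {
        // Take a stable view of the shard map for the whole request;
        // retrying at this level picks up shard map changes after a split.
        let view: Arc<ShardMap> = self.shards.load_full();
        let _ = view; // ... route the request using `view` ...
    }
}
```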
2025-07-10 15:46:39 +00:00
HaoyuHuang
2c6b327be6 A few PS changes (#12540)
# TLDR
All changes are no-ops except some metrics.

## Summary of changes I
### Pageserver
Added a new global counter metric
`pageserver_pagestream_handler_results_total` that categorizes
pagestream request results according to their outcomes:
1. Success
2. Internal errors
3. Other errors

Internal errors include:
1. Page reconstruction error: This probably indicates a pageserver
bug/corruption
2. LSN timeout error: Could indicate overload or bugs with PS's ability
to reach other components
3. Misrouted request error: Indicates bugs in the Storage Controller/HCC

Other errors include transient errors that are expected during normal
operation or errors indicating bugs with other parts of the system
(e.g., malformed requests, errors due to cancelled operations during PS
shutdown, etc.)    


## Summary of changes II
This PR adds a pageserver endpoint, and its counterpart in the storage controller, to list the visible size of all tenant shards. This will be a prerequisite of the tenant rebalance command.


## Problem III
We need a way to download WAL
segments/layerfiles from S3 and replay WAL records. We cannot access
production S3 from our laptops directly, and we also can't transfer any
user data out of production systems for GDPR compliance, so we need
solutions.

## Summary of changes III

This PR adds a couple of tools to support the debugging
workflow in production:
1. A new `pagectl download-remote-object` command that can be used to
download remote storage objects assuming the correct access is set up.

## Summary of changes IV
This PR adds a command to list all visible delta and image layers from index_part. This is useful for debugging compaction issues, as index_part often contains a lot of covered layers due to PITR.

---------

Co-authored-by: William Huang <william.huang@databricks.com>
Co-authored-by: Chen Luo <chen.luo@databricks.com>
Co-authored-by: Vlad Lazar <vlad@neon.tech>
2025-07-10 14:39:38 +00:00
Alex Chi Z.
be5bbaecad fix(storcon): correctly handle 404 error in lsn lease (#12537)
## Problem

close LKB-253

## Summary of changes

A 404 for timeline requests can happen when the tenant is intended to be on a pageserver but is not attached yet. This patch adds handling for the lease request. In the future, we should extend this handling to more operations.

---------

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-07-10 14:28:58 +00:00
Arpad Müller
d33b3c7457 Print viability via custom printing impl (#12544)
As per
https://github.com/neondatabase/neon/pull/12485#issuecomment-3056525882,
we don't want to print the viability error via a `Debug` impl, as that prints the backtrace. SafekeeperInfo doesn't have a `Display` impl, so fall back to `Debug` for the `Ok` case; it gives single-line output, so `Debug` is okay there.

Follow up of https://github.com/neondatabase/neon/pull/12485
2025-07-10 14:03:20 +00:00
Vlad Lazar
ffeede085e libs: move metric collection for pageserver and safekeeper in a background task (#12525)
## Problem

Safekeeper and pageserver metrics collection might time out. We've seen
this in both hadron and neon.

## Summary of changes

This PR moves metrics collection in PS/SK to the background so that we always get some metrics, even if there are delays. Reducing metrics collection time is left to future work.
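As a hedged sketch of the pattern (names are illustrative; the actual code is in the `utils` metrics collector shown in the diff below): a background task periodically refreshes a cached snapshot, and the scrape handler serves the last snapshot instead of gathering inline.

```rust
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

struct Snapshot {
    collected_at: Instant, // used to report staleness
    text: String,          // encoded metrics exposition
}

/// Sketch: refresh the snapshot in the background so that a scrape never
/// blocks on a slow collection; handlers just read the cached snapshot.
fn spawn_collector(cache: Arc<Mutex<Snapshot>>) {
    tokio::spawn(async move {
        loop {
            let text = gather_and_encode(); // stand-in for metrics::gather()
            *cache.lock().unwrap() = Snapshot {
                collected_at: Instant::now(),
                text,
            };
            tokio::time::sleep(Duration::from_secs(60)).await;
        }
    });
}

fn gather_and_encode() -> String {
    String::new() // placeholder
}
```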

---------

Co-authored-by: Chen Luo <chen.luo@databricks.com>
2025-07-10 11:58:22 +00:00
Mikhail
bdca5b500b Fix test_lfc_prewarm: reduce number of prewarms, sleep before LFC offloading (#12515)
Fixes:
- Sleep before LFC offloading in `test_lfc_prewarm[autoprewarm]` to
ensure offloaded LFC is the one exported after all writes finish
- Reduce number of prewarms and increase timeout in
`test_lfc_prewarm_under_workload` as debug builds were failing due to
timeout.

Additional changes:
- Remove `check_pinned_entries`:
https://github.com/neondatabase/neon/pull/12447#discussion_r2185946210
- Fix LFC error metrics description:
https://github.com/neondatabase/neon/pull/12486#discussion_r2190763107
2025-07-10 11:11:53 +00:00
Erik Grinaker
f4b03ddd7b pageserver/client_grpc: reap idle pool resources (#12476)
## Problem

The gRPC client pools don't reap idle resources.

Touches #11735.
Requires #12475.

## Summary of changes

Reap idle pool resources (channels/clients/streams) after 3 minutes of
inactivity.

Also restructure the `StreamPool` to use a mutex rather than atomics for
synchronization, for simplicity. This will be optimized later.
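A rough sketch of the reaping rule (the pool types and names here are stand-ins, not the actual `client_grpc` code):

```rust
use std::time::{Duration, Instant};

const IDLE_TIMEOUT: Duration = Duration::from_secs(3 * 60);

struct Pooled<T> {
    resource: T,
    last_used: Instant, // refreshed every time the resource is checked out
}

/// Drop channels/clients/streams that have sat idle longer than the timeout.
fn reap_idle<T>(pool: &mut Vec<Pooled<T>>) {
    let now = Instant::now();
    pool.retain(|p| now.duration_since(p.last_used) < IDLE_TIMEOUT);
}
```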
2025-07-10 10:18:37 +00:00
Vlad Lazar
08b19f001c pageserver: optionally force image layer creation on timeout (#12529)
This PR introduces an `image_creation_timeout` to pageservers so that we can force image creation after a certain period. This is set to 1 day on dev/staging for now, and will roll out to production 1-2 weeks later.

The majority of the PR is boilerplate code to add the new knob. The specific changes are:
1. During L0 compaction, check if we should force a compaction if min(LSN) of all delta layers < the force_image_creation LSN (see the sketch below).
2. During image creation, check if we should force a compaction if the image's LSN < the force_image_creation LSN and there are newer deltas with overlapping key ranges.
3. Also tweak the image creation check interval to make sure we honor image_creation_timeout.
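A hedged sketch of condition (1) (the type and names are stand-ins for illustration, not the actual pageserver code):

```rust
/// Stand-in for the pageserver's LSN type.
type Lsn = u64;

/// Force image creation when the oldest delta layer predates the cutoff
/// LSN derived from image_creation_timeout; otherwise defer to the usual
/// image_creation_threshold logic.
fn should_force_image_creation(
    delta_layers_min_lsn: Lsn,
    force_image_creation_lsn: Option<Lsn>,
) -> bool {
    match force_image_creation_lsn {
        Some(cutoff) => delta_layers_min_lsn < cutoff,
        None => false, // knob not set: never force
    }
}
```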

Vlad's note: This should be a no-op. I added an extra PS config for the large timeline threshold to enable this.

---------

Co-authored-by: Chen Luo <chen.luo@databricks.com>
2025-07-10 10:07:21 +00:00
Dimitri Fontaine
1a45b2ec90 Review security model for executing Event Trigger code. (#12463)
When a function is owned by a superuser (bootstrap user or otherwise),
we consider it safe to run it. Only a superuser could have installed it,
typically from CREATE EXTENSION script: we trust the code to execute.

## Problem

This is intended to solve running the pg_graphql Event Triggers graphql_watch_ddl and graphql_watch_drop, which execute the SECURITY DEFINER function graphql.increment_schema_version().

## Summary of changes

Allow executing an Event Trigger function that is owned by a superuser and has the SECURITY DEFINER property. The Event Trigger code runs with superuser privileges, and we consider that fine.

---------

Co-authored-by: Tristan Partin <tristan.partin@databricks.com>
2025-07-10 08:06:33 +00:00
Tristan Partin
13e38a58a1 Grant pg_signal_backend to neon_superuser (#12533)
Allow neon_superuser to cancel backends of non-neon_superuser roles, excluding Postgres superusers.

Signed-off-by: Tristan Partin <tristan.partin@databricks.com>
Co-authored-by: Vikas Jain <vikas.jain@databricks.com>
2025-07-09 21:35:39 +00:00
Christian Schwarz
2edd59aefb impr(compaction): unify checking of CompactionError for cancellation reason (#12392)
There are a couple of places that call `CompactionError::is_cancel` but don't check the `::Other` variant, via downcasting, for the root cause being cancellation. The only place that does is `log_compaction_error`. It's sad we have to do it, but until we get around to cleaning up all the culprits, a step forward is to unify the behavior so that all places that inspect a `CompactionError` for a cancellation cause follow the same behavior.

Thus, this PR ...
- moves the downcasting checks against the `::Other` variant from `log_compaction_error` into `is_cancel()` (see the sketch below) and
- enforces via the type system that `.is_cancel()` is used to check whether a CompactionError is due to cancellation (matching on `CompactionError::ShuttingDown` will cause a compile-time error).

I don't think there's a _serious_ case right now where matching instead of using `is_cancel` causes problems. The worst case I could find is the circuit breaker and `compaction_failed`, which don't really matter if we're shutting down the timeline anyway. But it's unaesthetic and might cause log/alert noise down the line, so this PR fixes that at least.
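A minimal sketch of the unified check (the enum shape follows the description above, but the downcast target and details are illustrative assumptions):

```rust
use anyhow::Error;

#[derive(Debug)]
struct CancelledError; // stand-in for the concrete cancellation error types

impl std::fmt::Display for CancelledError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str("operation cancelled")
    }
}

impl std::error::Error for CancelledError {}

enum CompactionError {
    ShuttingDown,
    Other(Error),
}

impl CompactionError {
    /// Single source of truth for "is this error really a cancellation?",
    /// including cancellations buried inside the Other variant's chain.
    fn is_cancel(&self) -> bool {
        match self {
            CompactionError::ShuttingDown => true,
            CompactionError::Other(err) => err
                .chain()
                .any(|cause| cause.downcast_ref::<CancelledError>().is_some()),
        }
    }
}
```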

Refs
- https://databricks.atlassian.net/browse/LKB-182
- slack conversation about this PR:
https://databricks.slack.com/archives/C09254R641L/p1751284317955159
2025-07-09 21:15:44 +00:00
105 changed files with 4589 additions and 749 deletions

Cargo.lock generated

@@ -1330,7 +1330,6 @@ dependencies = [
"chrono",
"clap",
"compute_api",
"fail",
"flate2",
"futures",
"hostname-validator",
@@ -1339,6 +1338,7 @@ dependencies = [
"itertools 0.10.5",
"jsonwebtoken",
"metrics",
"neon_failpoint",
"nix 0.30.1",
"notify",
"num_cpus",
@@ -2891,13 +2891,13 @@ dependencies = [
"arc-swap",
"bytes",
"camino",
"fail",
"futures",
"hyper 0.14.30",
"itertools 0.10.5",
"jemalloc_pprof",
"jsonwebtoken",
"metrics",
"neon_failpoint",
"once_cell",
"pprof",
"regex",
@@ -3852,6 +3852,23 @@ dependencies = [
"workspace_hack",
]
[[package]]
name = "neon_failpoint"
version = "0.1.0"
dependencies = [
"anyhow",
"either",
"once_cell",
"parking_lot 0.12.1",
"rand 0.8.5",
"regex",
"serde",
"tokio",
"tokio-util",
"tracing",
"tracing-subscriber",
]
[[package]]
name = "never-say-never"
version = "6.6.666"
@@ -4294,6 +4311,7 @@ dependencies = [
"humantime-serde",
"pageserver_api",
"pageserver_client",
"pageserver_client_grpc",
"pageserver_page_api",
"rand 0.8.5",
"reqwest",
@@ -4323,6 +4341,7 @@ dependencies = [
"pageserver_api",
"postgres_ffi",
"remote_storage",
"serde",
"serde_json",
"svg_fmt",
"thiserror 1.0.69",
@@ -4355,7 +4374,6 @@ dependencies = [
"either",
"enum-map",
"enumset",
"fail",
"futures",
"hashlink",
"hex",
@@ -4370,6 +4388,7 @@ dependencies = [
"jsonwebtoken",
"md5",
"metrics",
"neon_failpoint",
"nix 0.30.1",
"num-traits",
"num_cpus",
@@ -4499,6 +4518,7 @@ name = "pageserver_client_grpc"
version = "0.1.0"
dependencies = [
"anyhow",
"arc-swap",
"bytes",
"compute_api",
"futures",
@@ -4506,6 +4526,7 @@ dependencies = [
"pageserver_page_api",
"tokio",
"tokio-stream",
"tokio-util",
"tonic 0.13.1",
"tracing",
"utils",
@@ -6187,7 +6208,6 @@ dependencies = [
"criterion",
"desim",
"env_logger",
"fail",
"futures",
"hex",
"http 1.1.0",
@@ -6197,6 +6217,7 @@ dependencies = [
"itertools 0.10.5",
"jsonwebtoken",
"metrics",
"neon_failpoint",
"once_cell",
"pageserver_api",
"parking_lot 0.12.1",
@@ -6883,7 +6904,6 @@ dependencies = [
"diesel",
"diesel-async",
"diesel_migrations",
"fail",
"futures",
"governor",
"hex",
@@ -6896,6 +6916,7 @@ dependencies = [
"lasso",
"measured",
"metrics",
"neon_failpoint",
"once_cell",
"pageserver_api",
"pageserver_client",
@@ -8160,7 +8181,7 @@ dependencies = [
"const_format",
"criterion",
"diatomic-waker",
"fail",
"either",
"futures",
"git-version",
"hex",
@@ -8168,6 +8189,7 @@ dependencies = [
"humantime",
"jsonwebtoken",
"metrics",
"neon_failpoint",
"nix 0.30.1",
"once_cell",
"pem",


@@ -21,6 +21,7 @@ members = [
"workspace_hack",
"libs/compute_api",
"libs/http-utils",
"libs/neon_failpoint",
"libs/pageserver_api",
"libs/postgres_ffi",
"libs/postgres_ffi_types",
@@ -97,7 +98,6 @@ diatomic-waker = { version = "0.2.3" }
either = "1.8"
enum-map = "2.4.2"
enumset = "1.0.12"
fail = "0.5.0"
fallible-iterator = "0.2"
framed-websockets = { version = "0.1.0", git = "https://github.com/neondatabase/framed-websockets" }
futures = "0.3"
@@ -258,10 +258,12 @@ desim = { version = "0.1", path = "./libs/desim" }
endpoint_storage = { version = "0.0.1", path = "./endpoint_storage/" }
http-utils = { version = "0.1", path = "./libs/http-utils/" }
metrics = { version = "0.1", path = "./libs/metrics/" }
neon_failpoint = { version = "0.1", path = "./libs/neon_failpoint/" }
neon-shmem = { version = "0.1", path = "./libs/neon-shmem/" }
pageserver = { path = "./pageserver" }
pageserver_api = { version = "0.1", path = "./libs/pageserver_api/" }
pageserver_client = { path = "./pageserver/client" }
pageserver_client_grpc = { path = "./pageserver/client_grpc" }
pageserver_compaction = { version = "0.1", path = "./pageserver/compaction/" }
pageserver_page_api = { path = "./pageserver/page_api" }
postgres_backend = { version = "0.1", path = "./libs/postgres_backend/" }


@@ -7,7 +7,7 @@ license.workspace = true
[features]
default = []
# Enables test specific features.
testing = ["fail/failpoints"]
testing = ["neon_failpoint/testing"]
[dependencies]
async-compression.workspace = true
@@ -23,7 +23,7 @@ camino.workspace = true
chrono.workspace = true
cfg-if.workspace = true
clap.workspace = true
fail.workspace = true
neon_failpoint.workspace = true
flate2.workspace = true
futures.workspace = true
http.workspace = true


@@ -46,11 +46,14 @@ stateDiagram-v2
Configuration --> Failed : Failed to configure the compute
Configuration --> Running : Compute has been configured
Empty --> Init : Compute spec is immediately available
Empty --> TerminationPending : Requested termination
Empty --> TerminationPendingFast : Requested termination
Empty --> TerminationPendingImmediate : Requested termination
Init --> Failed : Failed to start Postgres
Init --> Running : Started Postgres
Running --> TerminationPending : Requested termination
TerminationPending --> Terminated : Terminated compute
Running --> TerminationPendingFast : Requested termination
Running --> TerminationPendingImmediate : Requested termination
TerminationPendingFast --> Terminated : Terminated compute with 30s delay for cplane to inspect status
TerminationPendingImmediate --> Terminated : Terminated compute immediately
Failed --> [*] : Compute exited
Terminated --> [*] : Compute exited
```


@@ -154,7 +154,7 @@ impl Cli {
fn main() -> Result<()> {
let cli = Cli::parse();
let scenario = failpoint_support::init();
failpoint_support::init().unwrap();
// For historical reasons, the main thread that processes the config and launches postgres
// is synchronous, but we always have this tokio runtime available and we "enter" it so
@@ -201,8 +201,6 @@ fn main() -> Result<()> {
let exit_code = compute_node.run()?;
scenario.teardown();
deinit_and_exit(exit_code);
}


@@ -956,14 +956,20 @@ impl ComputeNode {
None
};
let mut delay_exit = false;
let mut state = self.state.lock().unwrap();
state.terminate_flush_lsn = lsn;
if let ComputeStatus::TerminationPending { mode } = state.status {
let delay_exit = state.status == ComputeStatus::TerminationPendingFast;
if state.status == ComputeStatus::TerminationPendingFast
|| state.status == ComputeStatus::TerminationPendingImmediate
{
info!(
"Changing compute status from {} to {}",
state.status,
ComputeStatus::Terminated
);
state.status = ComputeStatus::Terminated;
self.state_changed.notify_all();
// we were asked to terminate gracefully, don't exit to avoid restart
delay_exit = mode == compute_api::responses::TerminateMode::Fast
}
drop(state);
@@ -1805,6 +1811,8 @@ impl ComputeNode {
tls_config,
)?;
self.pg_reload_conf()?;
if !spec.skip_pg_catalog_updates {
let max_concurrent_connections = spec.reconfigure_concurrency;
// Temporarily reset max_cluster_size in config
@@ -1824,10 +1832,9 @@ impl ComputeNode {
Ok(())
})?;
self.pg_reload_conf()?;
}
self.pg_reload_conf()?;
let unknown_op = "unknown".to_string();
let op_id = spec.operation_uuid.as_ref().unwrap_or(&unknown_op);
info!(
@@ -1900,7 +1907,8 @@ impl ComputeNode {
// exit loop
ComputeStatus::Failed
| ComputeStatus::TerminationPending { .. }
| ComputeStatus::TerminationPendingFast
| ComputeStatus::TerminationPendingImmediate
| ComputeStatus::Terminated => break 'cert_update,
// wait


@@ -70,7 +70,7 @@ impl ComputeNode {
}
};
let row = match client
.query_one("select * from get_prewarm_info()", &[])
.query_one("select * from neon.get_prewarm_info()", &[])
.await
{
Ok(row) => row,
@@ -146,7 +146,7 @@ impl ComputeNode {
ComputeNode::get_maintenance_client(&self.tokio_conn_conf)
.await
.context("connecting to postgres")?
.query_one("select prewarm_local_cache($1)", &[&uncompressed])
.query_one("select neon.prewarm_local_cache($1)", &[&uncompressed])
.await
.context("loading LFC state into postgres")
.map(|_| ())
@@ -196,7 +196,7 @@ impl ComputeNode {
ComputeNode::get_maintenance_client(&self.tokio_conn_conf)
.await
.context("connecting to postgres")?
.query_one("select get_local_cache_state()", &[])
.query_one("select neon.get_local_cache_state()", &[])
.await
.context("querying LFC state")?
.try_get::<usize, &[u8]>(0)


@@ -371,9 +371,28 @@ paths:
summary: Terminate Postgres and wait for it to exit
description: ""
operationId: terminate
parameters:
- name: mode
in: query
description: "Terminate mode: fast (wait 30s before returning) and immediate"
required: false
schema:
type: string
enum: ["fast", "immediate"]
default: fast
responses:
200:
description: Result
content:
application/json:
schema:
$ref: "#/components/schemas/TerminateResponse"
201:
description: Result if compute is already terminated
content:
application/json:
schema:
$ref: "#/components/schemas/TerminateResponse"
412:
description: "wrong state"
content:
@@ -530,11 +549,14 @@ components:
type: string
enum:
- empty
- init
- failed
- running
- configuration_pending
- init
- running
- configuration
- failed
- termination_pending_fast
- termination_pending_immediate
- terminated
example: running
ExtensionInstallRequest:
@@ -660,6 +682,17 @@ components:
description: Role name.
example: "neon"
TerminateResponse:
type: object
required:
- lsn
properties:
lsn:
type: string
nullable: true
description: "last WAL flush LSN"
example: "0/028F10D8"
SetRoleGrantsResponse:
type: object
required:


@@ -1,8 +1,9 @@
use axum::response::{IntoResponse, Response};
use http::StatusCode;
use neon_failpoint::{configure_failpoint, configure_failpoint_with_context};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use tracing::info;
use utils::failpoint_support::apply_failpoint;
pub type ConfigureFailpointsRequest = Vec<FailpointConfig>;
@@ -11,10 +12,16 @@ pub type ConfigureFailpointsRequest = Vec<FailpointConfig>;
pub struct FailpointConfig {
/// Name of the fail point
pub name: String,
/// List of actions to take, using the format described in `fail::cfg`
/// List of actions to take, using the format described in neon_failpoint
///
/// We also support `actions = "exit"` to cause the fail point to immediately exit.
/// We support actions: "pause", "sleep(N)", "return", "return(value)", "exit", "off", "panic(message)"
/// Plus probability-based actions: "N%return(value)", "N%M*return(value)", "N%action", "N%M*action"
pub actions: String,
/// Optional context matching rules for conditional failpoints
/// Each key-value pair specifies a context key and a regex pattern to match against
/// All context matchers must match for the failpoint to trigger
#[serde(default, skip_serializing_if = "Option::is_none")]
pub context_matchers: Option<HashMap<String, String>>,
}
use crate::http::JsonResponse;
@@ -24,7 +31,7 @@ use crate::http::extract::Json;
pub(in crate::http) async fn configure_failpoints(
failpoints: Json<ConfigureFailpointsRequest>,
) -> Response {
if !fail::has_failpoints() {
if !neon_failpoint::has_failpoints() {
return JsonResponse::error(
StatusCode::PRECONDITION_FAILED,
"Cannot manage failpoints because neon was compiled without failpoints support",
@@ -32,16 +39,21 @@ pub(in crate::http) async fn configure_failpoints(
}
for fp in &*failpoints {
info!("cfg failpoint: {} {}", fp.name, fp.actions);
info!(
"cfg failpoint: {} {} (context: {:?})",
fp.name, fp.actions, fp.context_matchers
);
// We recognize one extra "action" that's not natively recognized
// by the failpoints crate: exit, to immediately kill the process
let cfg_result = apply_failpoint(&fp.name, &fp.actions);
let cfg_result = if let Some(context_matchers) = fp.context_matchers.clone() {
configure_failpoint_with_context(&fp.name, &fp.actions, context_matchers)
} else {
configure_failpoint(&fp.name, &fp.actions)
};
if let Err(e) = cfg_result {
return JsonResponse::error(
StatusCode::BAD_REQUEST,
format!("failed to configure failpoints: {e}"),
format!("failed to configure failpoint '{}': {e}", fp.name),
);
}
}


@@ -3,7 +3,7 @@ use crate::http::JsonResponse;
use axum::extract::State;
use axum::response::Response;
use axum_extra::extract::OptionalQuery;
use compute_api::responses::{ComputeStatus, TerminateResponse};
use compute_api::responses::{ComputeStatus, TerminateMode, TerminateResponse};
use http::StatusCode;
use serde::Deserialize;
use std::sync::Arc;
@@ -12,7 +12,7 @@ use tracing::info;
#[derive(Deserialize, Default)]
pub struct TerminateQuery {
mode: compute_api::responses::TerminateMode,
mode: TerminateMode,
}
/// Terminate the compute.
@@ -24,16 +24,16 @@ pub(in crate::http) async fn terminate(
{
let mut state = compute.state.lock().unwrap();
if state.status == ComputeStatus::Terminated {
return JsonResponse::success(StatusCode::CREATED, state.terminate_flush_lsn);
let response = TerminateResponse {
lsn: state.terminate_flush_lsn,
};
return JsonResponse::success(StatusCode::CREATED, response);
}
if !matches!(state.status, ComputeStatus::Empty | ComputeStatus::Running) {
return JsonResponse::invalid_status(state.status);
}
state.set_status(
ComputeStatus::TerminationPending { mode },
&compute.state_changed,
);
state.set_status(mode.into(), &compute.state_changed);
}
forward_termination_signal(false);


@@ -108,7 +108,7 @@ pub(crate) static LFC_PREWARMS: Lazy<IntCounter> = Lazy::new(|| {
pub(crate) static LFC_PREWARM_ERRORS: Lazy<IntCounter> = Lazy::new(|| {
register_int_counter!(
"compute_ctl_lfc_prewarm_errors_total",
"Total number of LFC prewarms errors requested by compute_ctl or autoprewarm option",
"Total number of LFC prewarm errors",
)
.expect("failed to define a metric")
});
@@ -124,7 +124,7 @@ pub(crate) static LFC_OFFLOADS: Lazy<IntCounter> = Lazy::new(|| {
pub(crate) static LFC_OFFLOAD_ERRORS: Lazy<IntCounter> = Lazy::new(|| {
register_int_counter!(
"compute_ctl_lfc_offload_errors_total",
"Total number of LFC offload errors requested by compute_ctl or lfc_offload_period_seconds option",
"Total number of LFC offload errors",
)
.expect("failed to define a metric")
});


@@ -1,5 +1,5 @@
use anyhow::{Context, Result};
use fail::fail_point;
use neon_failpoint::fail_point;
use tokio_postgres::{Client, Transaction};
use tracing::{error, info};
@@ -40,13 +40,14 @@ impl<'m> MigrationRunner<'m> {
// middle of applying a series of migrations fails in an expected
// manner
if cfg!(feature = "testing") {
let fail = (|| {
fail_point!("compute-migration", |fail_migration_id| {
let fail = async {
fail_point!("compute-migration", |fail_migration_id: Option<String>| {
migration_id == fail_migration_id.unwrap().parse::<i64>().unwrap()
});
false
})();
}
.await;
if fail {
return Err(anyhow::anyhow!(format!(


@@ -1,3 +1,16 @@
-- On December 8th, 2023, an engineering escalation (INC-110) was opened after
-- it was found that BYPASSRLS was being applied to all roles.
--
-- PR that introduced the issue: https://github.com/neondatabase/neon/pull/5657
-- Subsequent commit on main: https://github.com/neondatabase/neon/commit/ad99fa5f0393e2679e5323df653c508ffa0ac072
--
-- NOBYPASSRLS and INHERIT are the defaults for a Postgres role, but because it
-- isn't easy to know if a Postgres cluster is affected by the issue, we need to
-- keep the migration around for a long time, if not indefinitely, so any
-- cluster can be fixed.
--
-- Branching is the gift that keeps on giving...
DO $$
DECLARE
role_name text;


@@ -0,0 +1 @@
GRANT pg_signal_backend TO neon_superuser WITH ADMIN OPTION;


@@ -0,0 +1,23 @@
DO $$
DECLARE
signal_backend record;
BEGIN
SELECT pg_has_role('neon_superuser', 'pg_signal_backend', 'member') AS member,
admin_option AS admin
INTO signal_backend
FROM pg_auth_members
WHERE roleid = 'pg_signal_backend'::regrole
AND member = 'neon_superuser'::regrole;
IF signal_backend IS NULL THEN
RAISE EXCEPTION 'no entry in pg_auth_members for neon_superuser and pg_signal_backend';
END IF;
IF signal_backend.member IS NULL OR NOT signal_backend.member THEN
RAISE EXCEPTION 'neon_superuser is not a member of pg_signal_backend';
END IF;
IF signal_backend.admin IS NULL OR NOT signal_backend.admin THEN
RAISE EXCEPTION 'neon_superuser cannot grant pg_signal_backend';
END IF;
END $$;


@@ -84,7 +84,8 @@ impl ComputeMonitor {
if matches!(
compute_status,
ComputeStatus::Terminated
| ComputeStatus::TerminationPending { .. }
| ComputeStatus::TerminationPendingFast
| ComputeStatus::TerminationPendingImmediate
| ComputeStatus::Failed
) {
info!(


@@ -197,6 +197,7 @@ pub async fn handle_migrations(client: &mut Client) -> Result<()> {
include_str!(
"./migrations/0011-grant_pg_show_replication_origin_status_to_neon_superuser.sql"
),
include_str!("./migrations/0012-grant_pg_signal_backend_to_neon_superuser.sql"),
];
MigrationRunner::new(client, &migrations)


@@ -922,7 +922,8 @@ impl Endpoint {
ComputeStatus::Empty
| ComputeStatus::ConfigurationPending
| ComputeStatus::Configuration
| ComputeStatus::TerminationPending { .. }
| ComputeStatus::TerminationPendingFast
| ComputeStatus::TerminationPendingImmediate
| ComputeStatus::Terminated => {
bail!("unexpected compute status: {:?}", state.status)
}


@@ -452,6 +452,12 @@ impl PageServerNode {
.map(|x| x.parse::<usize>())
.transpose()
.context("Failed to parse 'image_creation_threshold' as non zero integer")?,
// HADRON
image_layer_force_creation_period: settings
.remove("image_layer_force_creation_period")
.map(humantime::parse_duration)
.transpose()
.context("Failed to parse 'image_layer_force_creation_period' as duration")?,
image_layer_creation_check_threshold: settings
.remove("image_layer_creation_check_threshold")
.map(|x| x.parse::<u8>())


@@ -121,6 +121,15 @@ pub enum TerminateMode {
Immediate,
}
impl From<TerminateMode> for ComputeStatus {
fn from(mode: TerminateMode) -> Self {
match mode {
TerminateMode::Fast => ComputeStatus::TerminationPendingFast,
TerminateMode::Immediate => ComputeStatus::TerminationPendingImmediate,
}
}
}
#[derive(Serialize, Clone, Copy, Debug, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum ComputeStatus {
@@ -141,7 +150,9 @@ pub enum ComputeStatus {
// control-plane to terminate it.
Failed,
// Termination requested
TerminationPending { mode: TerminateMode },
TerminationPendingFast,
// Termination requested, without waiting 30s before returning from /terminate
TerminationPendingImmediate,
// Terminated Postgres
Terminated,
}
@@ -160,7 +171,10 @@ impl Display for ComputeStatus {
ComputeStatus::Running => f.write_str("running"),
ComputeStatus::Configuration => f.write_str("configuration"),
ComputeStatus::Failed => f.write_str("failed"),
ComputeStatus::TerminationPending { .. } => f.write_str("termination-pending"),
ComputeStatus::TerminationPendingFast => f.write_str("termination-pending-fast"),
ComputeStatus::TerminationPendingImmediate => {
f.write_str("termination-pending-immediate")
}
ComputeStatus::Terminated => f.write_str("terminated"),
}
}


@@ -9,7 +9,7 @@ anyhow.workspace = true
arc-swap.workspace = true
bytes.workspace = true
camino.workspace = true
fail.workspace = true
neon_failpoint.workspace = true
futures.workspace = true
hyper0.workspace = true
itertools.workspace = true


@@ -20,6 +20,7 @@ use tokio_stream::wrappers::ReceiverStream;
use tokio_util::io::ReaderStream;
use tracing::{Instrument, debug, info, info_span, warn};
use utils::auth::{AuthError, Claims, SwappableJwtAuth};
use utils::metrics_collector::{METRICS_COLLECTOR, METRICS_STALE_MILLIS};
use crate::error::{ApiError, api_error_handler, route_error_handler};
use crate::request::{get_query_param, parse_query_param};
@@ -250,9 +251,28 @@ impl std::io::Write for ChannelWriter {
}
}
pub async fn prometheus_metrics_handler(_req: Request<Body>) -> Result<Response<Body>, ApiError> {
pub async fn prometheus_metrics_handler(
req: Request<Body>,
force_metric_collection_on_scrape: bool,
) -> Result<Response<Body>, ApiError> {
SERVE_METRICS_COUNT.inc();
// HADRON
let requested_use_latest = parse_query_param(&req, "use_latest")?;
let use_latest = match requested_use_latest {
None => force_metric_collection_on_scrape,
Some(true) => true,
Some(false) => {
if force_metric_collection_on_scrape {
// We don't cache in this case
true
} else {
false
}
}
};
let started_at = std::time::Instant::now();
let (tx, rx) = mpsc::channel(1);
@@ -277,12 +297,18 @@ pub async fn prometheus_metrics_handler(_req: Request<Body>) -> Result<Response<
let _span = span.entered();
let metrics = metrics::gather();
// HADRON
let collected = if use_latest {
// Skip caching the results if we always force metric collection on scrape.
METRICS_COLLECTOR.run_once(!force_metric_collection_on_scrape)
} else {
METRICS_COLLECTOR.last_collected()
};
let gathered_at = std::time::Instant::now();
let res = encoder
.encode(&metrics, &mut writer)
.encode(&collected.metrics, &mut writer)
.and_then(|_| writer.flush().map_err(|e| e.into()));
// this instant is not when we finally got the full response sent, sending is done by hyper
@@ -295,6 +321,10 @@ pub async fn prometheus_metrics_handler(_req: Request<Body>) -> Result<Response<
let encoded_in = encoded_at - gathered_at - writer.wait_time();
let total = encoded_at - started_at;
// HADRON
let staleness_ms = (encoded_at - collected.collected_at).as_millis();
METRICS_STALE_MILLIS.set(staleness_ms as i64);
match res {
Ok(()) => {
tracing::info!(
@@ -303,6 +333,7 @@ pub async fn prometheus_metrics_handler(_req: Request<Body>) -> Result<Response<
spawning_ms = spawned_in.as_millis(),
collection_ms = collected_in.as_millis(),
encoding_ms = encoded_in.as_millis(),
staleness_ms = staleness_ms,
"responded /metrics"
);
}


@@ -1,7 +1,8 @@
use hyper::{Body, Request, Response, StatusCode};
use neon_failpoint::{configure_failpoint, configure_failpoint_with_context, has_failpoints};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use tokio_util::sync::CancellationToken;
use utils::failpoint_support::apply_failpoint;
use crate::error::ApiError;
use crate::json::{json_request, json_response};
@@ -13,10 +14,16 @@ pub type ConfigureFailpointsRequest = Vec<FailpointConfig>;
pub struct FailpointConfig {
/// Name of the fail point
pub name: String,
/// List of actions to take, using the format described in `fail::cfg`
/// List of actions to take, using the format described in neon_failpoint
///
/// We also support `actions = "exit"` to cause the fail point to immediately exit.
/// We support actions: "pause", "sleep(N)", "return", "return(value)", "exit", "off", "panic(message)"
/// Plus probability-based actions: "N%return(value)", "N%M*return(value)", "N%action", "N%M*action"
pub actions: String,
/// Optional context matching rules for conditional failpoints
/// Each key-value pair specifies a context key and a regex pattern to match against
/// All context matchers must match for the failpoint to trigger
#[serde(default, skip_serializing_if = "Option::is_none")]
pub context_matchers: Option<HashMap<String, String>>,
}
/// Configure failpoints through http.
@@ -24,7 +31,7 @@ pub async fn failpoints_handler(
mut request: Request<Body>,
_cancel: CancellationToken,
) -> Result<Response<Body>, ApiError> {
if !fail::has_failpoints() {
if !has_failpoints() {
return Err(ApiError::BadRequest(anyhow::anyhow!(
"Cannot manage failpoints because neon was compiled without failpoints support"
)));
@@ -32,15 +39,24 @@ pub async fn failpoints_handler(
let failpoints: ConfigureFailpointsRequest = json_request(&mut request).await?;
for fp in failpoints {
tracing::info!("cfg failpoint: {} {}", fp.name, fp.actions);
tracing::info!(
"cfg failpoint: {} {} (context: {:?})",
fp.name,
fp.actions,
fp.context_matchers
);
// We recognize one extra "action" that's not natively recognized
// by the failpoints crate: exit, to immediately kill the process
let cfg_result = apply_failpoint(&fp.name, &fp.actions);
let cfg_result = if let Some(context_matchers) = fp.context_matchers {
configure_failpoint_with_context(&fp.name, &fp.actions, context_matchers)
} else {
configure_failpoint(&fp.name, &fp.actions)
};
if let Err(err_msg) = cfg_result {
if let Err(err) = cfg_result {
return Err(ApiError::BadRequest(anyhow::anyhow!(
"Failed to configure failpoints: {err_msg}"
"Failed to configure failpoint '{}': {}",
fp.name,
err
)));
}
}


@@ -0,0 +1,27 @@
[package]
name = "neon_failpoint"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = { workspace = true, features = ["time", "sync", "rt-multi-thread"] }
tokio-util = { workspace = true }
tracing = { workspace = true }
serde = { workspace = true, features = ["derive"] }
anyhow = { workspace = true }
regex = { workspace = true }
once_cell = { workspace = true }
parking_lot = { workspace = true }
rand = { workspace = true }
either = { workspace = true }
[dev-dependencies]
tracing-subscriber = { workspace = true, features = ["fmt"] }
[features]
default = []
testing = []
[[example]]
name = "context_demo"
required-features = ["testing"]


@@ -0,0 +1,460 @@
# Neon Failpoint Library
A modern, async-first failpoint library for Neon, replacing the `fail` crate with enhanced functionality.
## Features
- **Async-first**: All failpoint operations are async and don't require `spawn_blocking`
- **Context matching**: Failpoints can be configured to trigger only when specific context conditions are met
- **Regex support**: Context values can be matched using regular expressions
- **Cancellation support**: All operations support cancellation tokens
- **Dynamic reconfiguration**: Paused and sleeping tasks automatically resume when failpoint configurations change
- **Backward compatibility**: Drop-in replacement for existing `fail` crate usage
## Supported Actions
- `off` - Disable the failpoint
- `pause` - Pause indefinitely until disabled, reconfigured, or cancelled
- `sleep(N)` - Sleep for N milliseconds (can be interrupted by reconfiguration)
- `return` - Return early (empty value)
- `return(value)` - Return early with a specific value
- `exit` - Exit the process immediately
- `panic(message)` - Panic the process with a custom message
- `N%return(value)` - Return with a specific value N% of the time (probability-based)
- `N%M*return(value)` - Return with a specific value N% of the time, maximum M times
- `N%action` - Execute any action N% of the time (probability-based)
- `N%M*action` - Execute any action N% of the time, maximum M times
## Probability-Based Actions
The library supports probability-based failpoints that trigger only a percentage of the time:
```rust
// 50% chance to return a value
configure_failpoint("random_failure", "50%return(error)").unwrap();
// 10% chance to sleep, maximum 3 times
configure_failpoint("occasional_delay", "10%3*sleep(1000)").unwrap();
// 25% chance to panic
configure_failpoint("rare_panic", "25%panic(critical error)").unwrap();
```
The probability system uses a counter to track how many times a probability-based action has been triggered, allowing for precise control over test scenarios.
## Dynamic Behavior
When a failpoint is reconfigured while tasks are waiting on it:
- **Paused tasks** will immediately resume and continue normal execution
- **Sleeping tasks** will wake up early and continue normal execution
- **Removed failpoints** will cause all waiting tasks to resume normally
The new configuration only applies to future hits of the failpoint, not to tasks that are already waiting. This allows for flexible testing scenarios where you can pause execution, inspect state, and then resume execution dynamically.
## Example: Dynamic Reconfiguration
```rust
use neon_failpoint::{configure_failpoint, failpoint, FailpointResult};
use tokio::time::Duration;
// Start a task that will hit a failpoint
let task = tokio::spawn(async {
println!("About to hit failpoint");
match failpoint("test_pause", None).await {
FailpointResult::Return(value) => println!("Returned: {}", value),
FailpointResult::Continue => println!("Continued normally"),
FailpointResult::Cancelled => println!("Cancelled"),
}
});
// Configure the failpoint to pause
configure_failpoint("test_pause", "pause").unwrap();
// Let the task hit the failpoint and pause
tokio::time::sleep(Duration::from_millis(10)).await;
// Change the failpoint configuration - this will wake up the paused task
// The task will resume and continue normally (not apply the new config)
configure_failpoint("test_pause", "return(not_applied)").unwrap();
// The task will complete with Continue, not Return
let result = task.await.unwrap();
```
## Basic Usage
```rust
use neon_failpoint::{configure_failpoint, failpoint, FailpointResult};
// Configure a failpoint
configure_failpoint("my_failpoint", "return(42)").unwrap();
// Use the failpoint
match failpoint("my_failpoint", None).await {
FailpointResult::Return(value) => {
println!("Failpoint returned: {}", value);
return value.parse().unwrap_or_default();
}
FailpointResult::Continue => {
// Continue normal execution
}
FailpointResult::Cancelled => {
// Handle cancellation
}
}
```
## Context-Based Failpoint Configuration
Context allows you to create **conditional failpoints** that only trigger when specific runtime conditions are met. This is particularly useful for testing scenarios where you want to inject failures only for specific tenants, operations, or other contextual conditions.
### Configuring Context-Based Failpoints
Use `configure_failpoint_with_context()` to set up failpoints with context matching:
```rust
use neon_failpoint::configure_failpoint_with_context;
use std::collections::HashMap;
let mut context_matchers = HashMap::new();
context_matchers.insert("tenant_id".to_string(), "test_.*".to_string());
context_matchers.insert("operation".to_string(), "backup".to_string());
configure_failpoint_with_context(
"backup_operation", // failpoint name
"return(simulated_failure)", // action to take
context_matchers // context matching rules
).unwrap();
```
### Context Matching Rules
The context matching system works as follows:
1. **Key-Value Matching**: Each entry in `context_matchers` specifies a key that must exist in the runtime context
2. **Regex Support**: Values in `context_matchers` are treated as regular expressions first
3. **Fallback to Exact Match**: If the regex compilation fails, it falls back to exact string matching
4. **ALL Must Match**: All context matchers must match for the failpoint to trigger
### Runtime Context Usage
When code hits a failpoint, it provides context using a `HashMap<String, String>`:
```rust
use neon_failpoint::{failpoint, FailpointResult};
use std::collections::HashMap;
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "test_123".to_string());
context.insert("operation".to_string(), "backup".to_string());
context.insert("user_id".to_string(), "user_456".to_string());
match failpoint("backup_operation", Some(&context)) {
either::Either::Left(result) => {
match result {
FailpointResult::Return(value) => {
// This will only trigger if ALL context matchers match
println!("Backup failed: {}", value);
}
FailpointResult::Continue => {
// Continue with normal backup operation
}
FailpointResult::Cancelled => {}
}
}
either::Either::Right(future) => {
match future.await {
FailpointResult::Return(value) => {
// This will only trigger if ALL context matchers match
println!("Backup failed: {}", value);
}
FailpointResult::Continue => {
// Continue with normal backup operation
}
FailpointResult::Cancelled => {}
}
}
}
```
### Context Matching Examples
#### Regex Matching
```rust
// Configure to match test tenants only
let mut matchers = HashMap::new();
matchers.insert("tenant_id".to_string(), "test_.*".to_string());
configure_failpoint_with_context("test_failpoint", "pause", matchers).unwrap();
// This will match
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "test_123".to_string());
// This will NOT match
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "prod_123".to_string());
```
#### Multiple Conditions
```rust
// Must match BOTH tenant pattern AND operation
let mut matchers = HashMap::new();
matchers.insert("tenant_id".to_string(), "test_.*".to_string());
matchers.insert("operation".to_string(), "backup".to_string());
configure_failpoint_with_context("backup_test", "return(failed)", matchers).unwrap();
// This will match (both conditions met)
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "test_123".to_string());
context.insert("operation".to_string(), "backup".to_string());
// This will NOT match (missing operation)
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "test_123".to_string());
context.insert("operation".to_string(), "restore".to_string());
```
#### Exact String Matching
```rust
// If regex compilation fails, falls back to exact match
let mut matchers = HashMap::new();
matchers.insert("env".to_string(), "staging".to_string());
configure_failpoint_with_context("env_specific", "sleep(1000)", matchers).unwrap();
// This will match
let mut context = HashMap::new();
context.insert("env".to_string(), "staging".to_string());
// This will NOT match
let mut context = HashMap::new();
context.insert("env".to_string(), "production".to_string());
```
### Benefits of Context-Based Failpoints
1. **Selective Testing**: Only inject failures for specific tenants, environments, or operations
2. **Production Safety**: Avoid accidentally triggering failpoints in production by using context filters
3. **Complex Scenarios**: Test interactions between different components with targeted failures
4. **Debugging**: Isolate issues to specific contexts without affecting the entire system
### Context vs. Non-Context Failpoints
- **Without context**: `configure_failpoint("name", "action")` - triggers for ALL hits
- **With context**: `configure_failpoint_with_context("name", "action", matchers)` - triggers only when context matches
## Context-Specific Failpoints
```rust
use neon_failpoint::{configure_failpoint_with_context, failpoint};
use std::collections::HashMap;
// Configure a failpoint that only triggers for specific tenants
let mut context_matchers = HashMap::new();
context_matchers.insert("tenant_id".to_string(), "test_.*".to_string());
context_matchers.insert("operation".to_string(), "backup".to_string());
configure_failpoint_with_context(
"backup_operation",
"return(simulated_failure)",
context_matchers
).unwrap();
// Use with context
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "test_123".to_string());
context.insert("operation".to_string(), "backup".to_string());
match failpoint("backup_operation", Some(&context)) {
either::Either::Left(result) => {
match result {
FailpointResult::Return(value) => {
// This will trigger for tenant_id matching "test_.*"
println!("Backup failed: {}", value);
}
FailpointResult::Continue => {
// Continue with backup
}
FailpointResult::Cancelled => {}
}
}
either::Either::Right(future) => {
match future.await {
FailpointResult::Return(value) => {
// This will trigger for tenant_id matching "test_.*"
println!("Backup failed: {}", value);
}
FailpointResult::Continue => {
// Continue with backup
}
FailpointResult::Cancelled => {}
}
}
}
```
## Macros
The library provides convenient macros for common patterns:
### `fail_point!` - Basic Failpoint Macro
The `fail_point!` macro has three variants:
1. **Simple failpoint** - `fail_point!(name)`
- Just checks the failpoint and continues or returns early (no value)
- Panics if the failpoint is configured with `return(value)` since no closure is provided
2. **Failpoint with return handler** - `fail_point!(name, closure)`
- Provides a closure to handle return values from the failpoint
- The closure receives `Option<String>` and should return the appropriate value
3. **Conditional failpoint** - `fail_point!(name, condition, closure)`
- Only checks the failpoint if the condition is true
- Provides a closure to handle return values (receives `&str`)
```rust
use neon_failpoint::fail_point;
// Simple failpoint - just continue or return early
fail_point!("my_failpoint");
// Failpoint with return value handling
fail_point!("my_failpoint", |value: Option<String>| {
match value {
Some(v) => {
println!("Got value: {}", v);
return Ok(v.parse().unwrap_or_default());
}
None => return Ok(42), // Default return value
}
});
// Conditional failpoint - only check if condition is met
let should_fail = some_condition();
fail_point!("conditional_failpoint", should_fail, |value: &str| {
println!("Conditional failpoint triggered with: {}", value);
return Err(anyhow::anyhow!("Simulated failure"));
});
```
### `fail_point_with_context!` - Context-Aware Failpoint Macro
The `fail_point_with_context!` macro has three variants that mirror `fail_point!` but include context:
1. **Simple with context** - `fail_point_with_context!(name, context)`
2. **With context and return handler** - `fail_point_with_context!(name, context, closure)`
3. **Conditional with context** - `fail_point_with_context!(name, context, condition, closure)`
```rust
use neon_failpoint::fail_point_with_context;
use std::collections::HashMap;

let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "test_123".to_string());
context.insert("operation".to_string(), "backup".to_string());

// Simple context failpoint
fail_point_with_context!("backup_failpoint", &context);

// Context failpoint with return handler
fail_point_with_context!("backup_failpoint", &context, |value: Option<String>| {
    match value {
        Some(v) => Err(anyhow::anyhow!("Backup failed: {v}")),
        None => Err(anyhow::anyhow!("Backup failed")),
    }
});

// Conditional context failpoint
let is_test_tenant = tenant_id.starts_with("test_");
fail_point_with_context!("backup_failpoint", &context, is_test_tenant, |_value: Option<String>| {
    // Only triggers for test tenants
    Err(anyhow::anyhow!("Test tenant backup failure"))
});
```
### Other Utility Macros
```rust
use neon_failpoint::{pausable_failpoint, sleep_millis_async};
use std::collections::HashMap;
use tokio_util::sync::CancellationToken;

// Pausable failpoint with cancellation. The macro awaits internally and
// evaluates to a Result, so it is not awaited itself.
let cancel_token = CancellationToken::new();
if let Err(()) = pausable_failpoint!("pause_here", &cancel_token) {
    println!("Failpoint was cancelled");
}

// Sleep failpoint (also awaits internally)
sleep_millis_async!("sleep_here", &cancel_token);

// Context maps are plain HashMaps - no helper macro needed
let mut context = HashMap::new();
context.insert("key1".to_string(), "value1".to_string());
context.insert("key2".to_string(), "value2".to_string());
```
### Argument Reference
- **`name`**: String literal - the name of the failpoint
- **`context`**: Expression that evaluates to `&HashMap<String, String>` - context for matching
- **`condition`**: Boolean expression - only check failpoint if true
- **`closure`**: Closure that handles return values; every variant of `fail_point!` and `fail_point_with_context!` (including the conditional forms) passes it the `Option<String>` payload from the configured `return(...)` action
- **`cancel`**: `&CancellationToken` - for cancellation support
## Migration from `fail` crate
The library provides a compatibility layer in `libs/utils/src/failpoint_support.rs`. Most existing code should work without changes, but you can migrate to the new async APIs for better performance:
### Before (with `fail` crate):
```rust
use utils::failpoint_support::pausable_failpoint;
// This used spawn_blocking internally; the macro awaits internally,
// so the invocation itself is not awaited
pausable_failpoint!("my_failpoint", &cancel_token)?;
```
### After (with `neon_failpoint`):
```rust
use neon_failpoint::{failpoint_with_cancellation, FailpointResult};

// This is fully async: collapse the Either, then match on the result
let result = match failpoint_with_cancellation("my_failpoint", None, &cancel_token) {
    either::Either::Left(result) => result,
    either::Either::Right(future) => future.await,
};
match result {
    FailpointResult::Continue => {}
    FailpointResult::Cancelled => return Err(()),
    FailpointResult::Return(_) => {}
}
```
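Configuration calls migrate the same way. The compatibility layer re-exports `configure_failpoint` under the old `apply_failpoint` name, so either spelling works; a sketch with a hypothetical failpoint name:
```rust
// Before (fail crate):
// fail::cfg("before_upload", "return(42)").unwrap();

// After (neon_failpoint):
neon_failpoint::configure_failpoint("before_upload", "return(42)").unwrap();
```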
## Environment Variable Support
Failpoints can be configured via the `FAILPOINTS` environment variable:
```bash
FAILPOINTS="failpoint1=return(42);failpoint2=sleep(1000);failpoint3=exit"
```
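Entries are `name=actions` pairs separated by `;`, the same format the old `fail` crate accepted. At startup, `init()` parses the variable and applies each entry, roughly as in this sketch (the `exit` action gets special handling, and `configure_failpoint`'s exact error type is an assumption here):
```rust
use neon_failpoint::configure_failpoint;

// Roughly what init() does with FAILPOINTS="a=return(42);b=sleep(1000)"
if let Ok(spec) = std::env::var("FAILPOINTS") {
    for entry in spec.split(';') {
        let (name, actions) = entry
            .split_once('=')
            .expect("failpoint entry must look like `name=actions`");
        configure_failpoint(name, actions).expect("failed to apply failpoint");
    }
}
```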
## Testing
The library includes comprehensive tests and examples. Run them with:
```bash
cargo test --features testing
cargo run --example context_demo --features testing
```
## HTTP Configuration
The library integrates with the existing HTTP failpoint configuration API. Send POST requests to `/v1/failpoints` with:
```json
[
    {
        "name": "my_failpoint",
        "actions": "return(42)"
    }
]
```
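For example, with curl (host and port are placeholders for the management endpoint, and the `off` action is assumed to still clear a failpoint as in the original `fail` crate):
```bash
# Configure a failpoint
curl -X POST http://localhost:9898/v1/failpoints \
    -H 'Content-Type: application/json' \
    -d '[{"name": "my_failpoint", "actions": "return(42)"}]'

# Clear it again
curl -X POST http://localhost:9898/v1/failpoints \
    -H 'Content-Type: application/json' \
    -d '[{"name": "my_failpoint", "actions": "off"}]'
```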

View File

@@ -0,0 +1,82 @@
use neon_failpoint::{configure_failpoint_with_context, failpoint, FailpointResult};
use std::collections::HashMap;
#[tokio::main]
async fn main() {
// Initialize tracing for better output
tracing_subscriber::fmt::init();
// Set up a context-specific failpoint
let mut context_matchers = HashMap::new();
context_matchers.insert("tenant_id".to_string(), "test_.*".to_string());
context_matchers.insert("operation".to_string(), "backup".to_string());
configure_failpoint_with_context(
"backup_operation",
"return(simulated_failure)",
context_matchers,
)
.unwrap();
// Test with matching context
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "test_123".to_string());
context.insert("operation".to_string(), "backup".to_string());
println!("Testing with matching context...");
match failpoint("backup_operation", Some(&context)) {
either::Either::Left(result) => match result {
FailpointResult::Return(value) => {
println!("Failpoint triggered with value: {value:?}");
}
FailpointResult::Continue => {
println!("Failpoint not triggered");
}
FailpointResult::Cancelled => {
println!("Failpoint cancelled");
}
},
either::Either::Right(future) => match future.await {
FailpointResult::Return(value) => {
println!("Failpoint triggered with value: {value:?}");
}
FailpointResult::Continue => {
println!("Failpoint not triggered");
}
FailpointResult::Cancelled => {
println!("Failpoint cancelled");
}
},
}
// Test with non-matching context
let mut context = HashMap::new();
context.insert("tenant_id".to_string(), "prod_456".to_string());
context.insert("operation".to_string(), "backup".to_string());
println!("Testing with non-matching context...");
match failpoint("backup_operation", Some(&context)) {
either::Either::Left(result) => match result {
FailpointResult::Return(value) => {
println!("Failpoint triggered with value: {value:?}");
}
FailpointResult::Continue => {
println!("Failpoint not triggered (expected)");
}
FailpointResult::Cancelled => {
println!("Failpoint cancelled");
}
},
either::Either::Right(future) => match future.await {
FailpointResult::Return(value) => {
println!("Failpoint triggered with value: {value:?}");
}
FailpointResult::Continue => {
println!("Failpoint not triggered (expected)");
}
FailpointResult::Cancelled => {
println!("Failpoint cancelled");
}
},
}
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,356 @@
//! Macros for convenient failpoint usage
/// Simple failpoint macro - async version that doesn't require spawn_blocking
#[macro_export]
macro_rules! fail_point {
($name:literal) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, None) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(_) => {
panic!("failpoint was configured with return(X) but Rust code does not pass a closure to map X to a return value");
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(future) => {
match future.await {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(_) => {
panic!("failpoint was configured with return(X) but Rust code does not pass a closure to map X to a return value");
},
$crate::FailpointResult::Cancelled => {},
}
},
}
}
}};
($name:literal, $closure:expr) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, None) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(future) => {
match future.await {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
}
}
}};
($name:literal, $condition:expr, $closure:expr) => {{
if cfg!(feature = "testing") {
if $condition {
match $crate::failpoint($name, None) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(future) => {
match future.await {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
}
}
}
}};
}
/// Simple failpoint macro - sync version that panics if async action is triggered
#[macro_export]
macro_rules! fail_point_sync {
($name:literal) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, None) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(_) => {
panic!("failpoint was configured with return(X) but Rust code does not pass a closure to map X to a return value");
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(_) => {
panic!("failpoint '{}' triggered an async action (sleep/pause) but fail_point_sync! was used. Use fail_point! instead.", $name);
},
}
}
}};
($name:literal, $closure:expr) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, None) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(_) => {
panic!("failpoint '{}' triggered an async action (sleep/pause) but fail_point_sync! was used. Use fail_point! instead.", $name);
},
}
}
}};
($name:literal, $condition:expr, $closure:expr) => {{
if cfg!(feature = "testing") {
if $condition {
match $crate::failpoint($name, None) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(_) => {
panic!("failpoint '{}' triggered an async action (sleep/pause) but fail_point_sync! was used. Use fail_point! instead.", $name);
},
}
}
}
}};
}
/// Failpoint macro with context support
#[macro_export]
macro_rules! fail_point_with_context {
($name:literal, $context:expr) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, Some($context)) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(_) => {
panic!("failpoint was configured with return(X) but Rust code does not pass a closure to map X to a return value");
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(future) => {
match future.await {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(_) => {
panic!("failpoint was configured with return(X) but Rust code does not pass a closure to map X to a return value");
},
$crate::FailpointResult::Cancelled => {},
}
},
}
}
}};
($name:literal, $context:expr, $closure:expr) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, Some($context)) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(future) => {
match future.await {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
}
}
}};
($name:literal, $context:expr, $condition:expr, $closure:expr) => {{
if cfg!(feature = "testing") {
if $condition {
match $crate::failpoint($name, Some($context)) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(future) => {
match future.await {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
}
}
}
}};
}
/// Failpoint macro with context support - sync version
#[macro_export]
macro_rules! fail_point_with_context_sync {
($name:literal, $context:expr) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, Some($context)) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(_) => {
panic!("failpoint was configured with return(X) but Rust code does not pass a closure to map X to a return value");
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(_) => {
panic!("failpoint '{}' triggered an async action (sleep/pause) but fail_point_with_context_sync! was used. Use fail_point_with_context! instead.", $name);
},
}
}
}};
($name:literal, $context:expr, $closure:expr) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, Some($context)) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(_) => {
panic!("failpoint '{}' triggered an async action (sleep/pause) but fail_point_with_context_sync! was used. Use fail_point_with_context! instead.", $name);
},
}
}
}};
($name:literal, $context:expr, $condition:expr, $closure:expr) => {{
if cfg!(feature = "testing") {
if $condition {
match $crate::failpoint($name, Some($context)) {
$crate::either::Either::Left(result) => {
match result {
$crate::FailpointResult::Continue => {},
$crate::FailpointResult::Return(value) => {
let closure = $closure;
return closure(value);
},
$crate::FailpointResult::Cancelled => {},
}
},
$crate::either::Either::Right(_) => {
panic!("failpoint '{}' triggered an async action (sleep/pause) but fail_point_with_context_sync! was used. Use fail_point_with_context! instead.", $name);
},
}
}
}
}};
}
/// Pausable failpoint macro - equivalent to the old pausable_failpoint
#[macro_export]
macro_rules! pausable_failpoint {
($name:literal) => {{
if cfg!(feature = "testing") {
let cancel = ::tokio_util::sync::CancellationToken::new();
let _ = $crate::pausable_failpoint!($name, &cancel);
}
}};
($name:literal, $cancel:expr) => {{
if cfg!(feature = "testing") {
::tracing::info!("at failpoint {}", $name); // tests rely on this
match $crate::failpoint_with_cancellation($name, None, $cancel) {
$crate::either::Either::Left(result) => match result {
$crate::FailpointResult::Continue => Ok(()),
$crate::FailpointResult::Return(_) => Ok(()),
$crate::FailpointResult::Cancelled => Err(()),
},
$crate::either::Either::Right(future) => match future.await {
$crate::FailpointResult::Continue => Ok(()),
$crate::FailpointResult::Return(_) => Ok(()),
$crate::FailpointResult::Cancelled => Err(()),
},
}
} else {
Ok(())
}
}};
}
/// Sleep failpoint macro - for async sleep operations
#[macro_export]
macro_rules! sleep_millis_async {
($name:literal) => {{
if cfg!(feature = "testing") {
match $crate::failpoint($name, None) {
$crate::either::Either::Left(_) => {}
$crate::either::Either::Right(future) => {
future.await;
}
}
}
}};
($name:literal, $cancel:expr) => {{
if cfg!(feature = "testing") {
match $crate::failpoint_with_cancellation($name, None, $cancel) {
$crate::either::Either::Left(_) => {}
$crate::either::Either::Right(future) => {
future.await;
}
}
}
}};
}
// Re-export for convenience
pub use fail_point;
pub use fail_point_sync;
pub use fail_point_with_context;
pub use fail_point_with_context_sync;
pub use pausable_failpoint;
pub use sleep_millis_async;

View File

@@ -272,6 +272,9 @@ pub struct ConfigToml {
pub timeline_import_config: TimelineImportConfig,
#[serde(skip_serializing_if = "Option::is_none")]
pub basebackup_cache_config: Option<BasebackupCacheConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
pub image_layer_generation_large_timeline_threshold: Option<u64>,
pub force_metric_collection_on_scrape: bool,
}
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize, serde::Deserialize)]
@@ -561,6 +564,11 @@ pub struct TenantConfigToml {
pub gc_period: Duration,
// Delta layer churn threshold to create L1 image layers.
pub image_creation_threshold: usize,
// HADRON
// When the timeout is reached, PageServer will (1) force compact any remaining L0 deltas and
// (2) create image layers if there are any L1 deltas.
#[serde(with = "humantime_serde")]
pub image_layer_force_creation_period: Option<Duration>,
// Determines how much history is retained, to allow
// branching and read replicas at an older point in time.
// The unit is time.
@@ -823,6 +831,8 @@ impl Default for ConfigToml {
},
basebackup_cache_config: None,
posthog_config: None,
image_layer_generation_large_timeline_threshold: Some(2 * 1024 * 1024 * 1024),
force_metric_collection_on_scrape: true,
}
}
}
@@ -916,6 +926,7 @@ impl Default for TenantConfigToml {
gc_period: humantime::parse_duration(DEFAULT_GC_PERIOD)
.expect("cannot parse default gc period"),
image_creation_threshold: DEFAULT_IMAGE_CREATION_THRESHOLD,
image_layer_force_creation_period: None,
pitr_interval: humantime::parse_duration(DEFAULT_PITR_INTERVAL)
.expect("cannot parse default PITR interval"),
walreceiver_connect_timeout: humantime::parse_duration(

View File

@@ -597,6 +597,9 @@ pub struct TenantConfigPatch {
pub gc_period: FieldPatch<String>,
#[serde(skip_serializing_if = "FieldPatch::is_noop")]
pub image_creation_threshold: FieldPatch<usize>,
// HADRON
#[serde(skip_serializing_if = "FieldPatch::is_noop")]
pub image_layer_force_creation_period: FieldPatch<String>,
#[serde(skip_serializing_if = "FieldPatch::is_noop")]
pub pitr_interval: FieldPatch<String>,
#[serde(skip_serializing_if = "FieldPatch::is_noop")]
@@ -700,6 +703,11 @@ pub struct TenantConfig {
#[serde(skip_serializing_if = "Option::is_none")]
pub image_creation_threshold: Option<usize>,
// HADRON
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(with = "humantime_serde")]
pub image_layer_force_creation_period: Option<Duration>,
#[serde(skip_serializing_if = "Option::is_none")]
#[serde(with = "humantime_serde")]
pub pitr_interval: Option<Duration>,
@@ -798,6 +806,7 @@ impl TenantConfig {
mut gc_horizon,
mut gc_period,
mut image_creation_threshold,
mut image_layer_force_creation_period,
mut pitr_interval,
mut walreceiver_connect_timeout,
mut lagging_wal_timeout,
@@ -861,6 +870,11 @@ impl TenantConfig {
patch
.image_creation_threshold
.apply(&mut image_creation_threshold);
// HADRON
patch
.image_layer_force_creation_period
.map(|v| humantime::parse_duration(&v))?
.apply(&mut image_layer_force_creation_period);
patch
.pitr_interval
.map(|v| humantime::parse_duration(&v))?
@@ -942,6 +956,7 @@ impl TenantConfig {
gc_horizon,
gc_period,
image_creation_threshold,
image_layer_force_creation_period,
pitr_interval,
walreceiver_connect_timeout,
lagging_wal_timeout,
@@ -1016,6 +1031,9 @@ impl TenantConfig {
image_creation_threshold: self
.image_creation_threshold
.unwrap_or(global_conf.image_creation_threshold),
image_layer_force_creation_period: self
.image_layer_force_creation_period
.or(global_conf.image_layer_force_creation_period),
pitr_interval: self.pitr_interval.unwrap_or(global_conf.pitr_interval),
walreceiver_connect_timeout: self
.walreceiver_connect_timeout

View File

@@ -9,7 +9,7 @@ default = ["rename_noreplace"]
rename_noreplace = []
# Enables test-only APIs, including failpoints. In particular, enables the `fail_point!` macro,
# which adds some runtime cost to run tests on outage conditions
testing = ["fail/failpoints"]
testing = ["neon_failpoint/testing"]
[dependencies]
arc-swap.workspace = true
@@ -21,10 +21,11 @@ bytes.workspace = true
camino.workspace = true
chrono.workspace = true
diatomic-waker.workspace = true
either.workspace = true
git-version.workspace = true
hex = { workspace = true, features = ["serde"] }
humantime.workspace = true
fail.workspace = true
neon_failpoint.workspace = true
futures = { workspace = true }
jsonwebtoken.workspace = true
nix = { workspace = true, features = ["ioctl"] }

View File

@@ -1,59 +1,22 @@
//! Failpoint support code shared between pageserver and safekeepers.
//!
//! This module provides a compatibility layer over the new neon_failpoint crate.
pub use neon_failpoint::{configure_failpoint as apply_failpoint, has_failpoints, init};
use tokio_util::sync::CancellationToken;
/// Declare a failpoint that can use the `pause` failpoint action.
/// We don't want to block the executor thread, hence, spawn_blocking + await.
///
/// Optionally pass a cancellation token, and this failpoint will drop out of
/// its pause when the cancellation token fires. This is useful for testing
/// cases where we would like to block something, but test its clean shutdown behavior.
/// The macro evaluates to a Result in that case, where Ok(()) is the case
/// where the failpoint was not paused, and Err() is the case where cancellation
/// token fired while evaluating the failpoint.
///
/// Remember to unpause the failpoint in the test; until that happens, one of the
/// limited number of spawn_blocking thread pool threads is leaked.
/// Merely forwards to `neon_failpoint::pausable_failpoint`
#[macro_export]
macro_rules! pausable_failpoint {
($name:literal) => {{
if cfg!(feature = "testing") {
let cancel = ::tokio_util::sync::CancellationToken::new();
let _ = $crate::pausable_failpoint!($name, &cancel);
}
}};
($name:literal, $cancel:expr) => {{
if cfg!(feature = "testing") {
let failpoint_fut = ::tokio::task::spawn_blocking({
let current = ::tracing::Span::current();
move || {
let _entered = current.entered();
::tracing::info!("at failpoint {}", $name);
::fail::fail_point!($name);
}
});
let cancel_fut = async move {
$cancel.cancelled().await;
};
::tokio::select! {
res = failpoint_fut => {
res.expect("spawn_blocking");
// continue with execution
Ok(())
},
_ = cancel_fut => {
Err(())
}
}
} else {
Ok(())
}
}};
($name:literal) => {
::neon_failpoint::pausable_failpoint!($name)
};
($name:literal, $cancel:expr) => {
::neon_failpoint::pausable_failpoint!($name, $cancel)
};
}
pub use pausable_failpoint;
/// use with fail::cfg("$name", "return(2000)")
/// DEPRECATED! - use with fail::cfg("$name", "return(2000)")
///
/// The effect is similar to a "sleep(2000)" action, i.e. we sleep for the
/// specified time (in milliseconds). The main difference is that we use async
@@ -69,7 +32,7 @@ macro_rules! __failpoint_sleep_millis_async {
// If the failpoint is used with a "return" action, set should_sleep to the
// returned value (as string). Otherwise it's set to None.
let should_sleep = (|| {
::fail::fail_point!($name, |x| x);
::neon_failpoint::fail_point_sync!($name, |x| x);
::std::option::Option::None
})();
@@ -82,7 +45,7 @@ macro_rules! __failpoint_sleep_millis_async {
// If the failpoint is used with a "return" action, set should_sleep to the
// returned value (as string). Otherwise it's set to None.
let should_sleep = (|| {
::fail::fail_point!($name, |x| x);
::neon_failpoint::fail_point_sync!($name, |x| x);
::std::option::Option::None
})();
@@ -126,60 +89,3 @@ pub async fn failpoint_sleep_cancellable_helper(
tokio::time::timeout(d, cancel.cancelled()).await.ok();
tracing::info!("failpoint {:?}: sleep done", name);
}
/// Initialize the configured failpoints
///
/// You must call this function before any concurrent threads do operations.
pub fn init() -> fail::FailScenario<'static> {
// The failpoints lib provides support for parsing the `FAILPOINTS` env var.
// We want non-default behavior for `exit`, though, so, we handle it separately.
//
// Format for FAILPOINTS is "name=actions" separated by ";".
let actions = std::env::var("FAILPOINTS");
if actions.is_ok() {
// SAFETY: this function should be called before any threads start and access env vars concurrently
unsafe {
std::env::remove_var("FAILPOINTS");
}
} else {
// let the library handle non-utf8, or nothing for not present
}
let scenario = fail::FailScenario::setup();
if let Ok(val) = actions {
val.split(';')
.enumerate()
.map(|(i, s)| s.split_once('=').ok_or((i, s)))
.for_each(|res| {
let (name, actions) = match res {
Ok(t) => t,
Err((i, s)) => {
panic!(
"startup failpoints: missing action on the {}th failpoint; try `{s}=return`",
i + 1,
);
}
};
if let Err(e) = apply_failpoint(name, actions) {
panic!("startup failpoints: failed to apply failpoint {name}={actions}: {e}");
}
});
}
scenario
}
pub fn apply_failpoint(name: &str, actions: &str) -> Result<(), String> {
if actions == "exit" {
fail::cfg_callback(name, exit_failpoint)
} else {
fail::cfg(name, actions)
}
}
#[inline(never)]
fn exit_failpoint() {
tracing::info!("Exit requested by failpoint");
std::process::exit(1);
}

View File

@@ -99,6 +99,8 @@ pub mod elapsed_accum;
#[cfg(target_os = "linux")]
pub mod linux_socket_ioctl;
pub mod metrics_collector;
// Re-export used in macro. Avoids adding git-version as dep in target crates.
#[doc(hidden)]
pub use git_version;

View File

@@ -0,0 +1,75 @@
use std::{
sync::{Arc, RwLock},
time::{Duration, Instant},
};
use metrics::{IntGauge, proto::MetricFamily, register_int_gauge};
use once_cell::sync::Lazy;
pub static METRICS_STALE_MILLIS: Lazy<IntGauge> = Lazy::new(|| {
register_int_gauge!(
"metrics_metrics_stale_milliseconds",
"The current metrics stale time in milliseconds"
)
.expect("failed to define a metric")
});
#[derive(Debug)]
pub struct CollectedMetrics {
pub metrics: Vec<MetricFamily>,
pub collected_at: Instant,
}
impl CollectedMetrics {
fn new(metrics: Vec<MetricFamily>) -> Self {
Self {
metrics,
collected_at: Instant::now(),
}
}
}
#[derive(Debug)]
pub struct MetricsCollector {
last_collected: RwLock<Arc<CollectedMetrics>>,
}
impl MetricsCollector {
pub fn new() -> Self {
Self {
last_collected: RwLock::new(Arc::new(CollectedMetrics::new(vec![]))),
}
}
#[tracing::instrument(name = "metrics_collector", skip_all)]
pub fn run_once(&self, cache_metrics: bool) -> Arc<CollectedMetrics> {
let started = Instant::now();
let metrics = metrics::gather();
let collected = Arc::new(CollectedMetrics::new(metrics));
if cache_metrics {
let mut guard = self.last_collected.write().unwrap();
*guard = collected.clone();
}
tracing::info!(
"Collected {} metric families in {} ms",
collected.metrics.len(),
started.elapsed().as_millis()
);
collected
}
pub fn last_collected(&self) -> Arc<CollectedMetrics> {
self.last_collected.read().unwrap().clone()
}
}
impl Default for MetricsCollector {
fn default() -> Self {
Self::new()
}
}
// Interval for metrics collection. Currently hard-coded to be the same as the metrics scrape interval from the obs agent
pub static METRICS_COLLECTION_INTERVAL: Duration = Duration::from_secs(30);
pub static METRICS_COLLECTOR: Lazy<MetricsCollector> = Lazy::new(MetricsCollector::default);

View File

@@ -428,6 +428,12 @@ pub fn empty_shmem() -> crate::bindings::WalproposerShmemState {
shard_number: 0,
};
let empty_wal_rate_limiter = crate::bindings::WalRateLimiter {
should_limit: crate::bindings::pg_atomic_uint32 { value: 0 },
sent_bytes: 0,
last_recorded_time_us: 0,
};
crate::bindings::WalproposerShmemState {
propEpochStartLsn: crate::bindings::pg_atomic_uint64 { value: 0 },
donor_name: [0; 64],
@@ -441,6 +447,7 @@ pub fn empty_shmem() -> crate::bindings::WalproposerShmemState {
num_shards: 0,
replica_promote: false,
min_ps_feedback: empty_feedback,
wal_rate_limiter: empty_wal_rate_limiter,
}
}

View File

@@ -8,7 +8,7 @@ license.workspace = true
default = []
# Enables test-only APIs, including failpoints. In particular, enables the `fail_point!` macro,
# which adds some runtime cost to run tests on outage conditions
testing = ["fail/failpoints", "pageserver_api/testing", "wal_decoder/testing", "pageserver_client/testing"]
testing = ["neon_failpoint/testing", "pageserver_api/testing", "wal_decoder/testing", "pageserver_client/testing"]
fuzz-read-path = ["testing"]
@@ -33,7 +33,7 @@ crc32c.workspace = true
either.workspace = true
enum-map.workspace = true
enumset = { workspace = true, features = ["serde"]}
fail.workspace = true
neon_failpoint.workspace = true
futures.workspace = true
hashlink.workspace = true
hex.workspace = true

View File

@@ -1,4 +1,4 @@
use std::collections::HashMap;
use std::collections::{BTreeMap, HashMap};
use std::error::Error as _;
use std::time::Duration;
@@ -251,6 +251,70 @@ impl Client {
Ok(())
}
pub async fn tenant_timeline_compact(
&self,
tenant_shard_id: TenantShardId,
timeline_id: TimelineId,
force_image_layer_creation: bool,
must_force_image_layer_creation: bool,
scheduled: bool,
wait_until_done: bool,
) -> Result<()> {
let mut path = reqwest::Url::parse(&format!(
"{}/v1/tenant/{tenant_shard_id}/timeline/{timeline_id}/compact",
self.mgmt_api_endpoint
))
.expect("Cannot build URL");
if force_image_layer_creation {
path.query_pairs_mut()
.append_pair("force_image_layer_creation", "true");
}
if must_force_image_layer_creation {
path.query_pairs_mut()
.append_pair("must_force_image_layer_creation", "true");
}
if scheduled {
path.query_pairs_mut().append_pair("scheduled", "true");
}
if wait_until_done {
path.query_pairs_mut()
.append_pair("wait_until_scheduled_compaction_done", "true");
path.query_pairs_mut()
.append_pair("wait_until_uploaded", "true");
}
self.request(Method::PUT, path, ()).await?;
Ok(())
}
/* BEGIN_HADRON */
pub async fn tenant_timeline_describe(
&self,
tenant_shard_id: &TenantShardId,
timeline_id: &TimelineId,
) -> Result<TimelineInfo> {
let mut path = reqwest::Url::parse(&format!(
"{}/v1/tenant/{tenant_shard_id}/timeline/{timeline_id}",
self.mgmt_api_endpoint
))
.expect("Cannot build URL");
path.query_pairs_mut()
.append_pair("include-image-consistent-lsn", "true");
let response: reqwest::Response = self.request(Method::GET, path, ()).await?;
let body = response.json().await.map_err(Error::ReceiveBody)?;
Ok(body)
}
pub async fn list_tenant_visible_size(&self) -> Result<BTreeMap<TenantShardId, u64>> {
let uri = format!("{}/v1/list_tenant_visible_size", self.mgmt_api_endpoint);
let resp = self.get(&uri).await?;
resp.json().await.map_err(Error::ReceiveBody)
}
/* END_HADRON */
pub async fn tenant_scan_remote_storage(
&self,
tenant_id: TenantId,

View File

@@ -4,8 +4,12 @@ version = "0.1.0"
edition.workspace = true
license.workspace = true
[features]
testing = ["pageserver_api/testing"]
[dependencies]
anyhow.workspace = true
arc-swap.workspace = true
bytes.workspace = true
compute_api.workspace = true
futures.workspace = true
@@ -13,6 +17,7 @@ pageserver_api.workspace = true
pageserver_page_api.workspace = true
tokio.workspace = true
tokio-stream.workspace = true
tokio-util.workspace = true
tonic.workspace = true
tracing.workspace = true
utils.workspace = true

View File

@@ -3,8 +3,10 @@ use std::num::NonZero;
use std::sync::Arc;
use anyhow::anyhow;
use arc_swap::ArcSwap;
use futures::stream::FuturesUnordered;
use futures::{FutureExt as _, StreamExt as _};
use tonic::codec::CompressionEncoding;
use tracing::instrument;
use crate::pool::{ChannelPool, ClientGuard, ClientPool, StreamGuard, StreamPool};
@@ -55,28 +57,85 @@ const MAX_BULK_STREAM_QUEUE_DEPTH: NonZero<usize> = NonZero::new(4).unwrap();
/// TODO: this client does not support base backups or LSN leases, as these are only used by
/// compute_ctl. Consider adding this, but LSN leases need concurrent requests on all shards.
pub struct PageserverClient {
// TODO: support swapping out the shard map, e.g. via an ArcSwap.
shards: Shards,
/// The tenant ID.
tenant_id: TenantId,
/// The timeline ID.
timeline_id: TimelineId,
/// The JWT auth token for this tenant, if any.
auth_token: Option<String>,
/// The compression to use, if any.
compression: Option<CompressionEncoding>,
/// The shards for this tenant.
shards: ArcSwap<Shards>,
/// The retry configuration.
retry: Retry,
}
impl PageserverClient {
/// Creates a new Pageserver client for a given tenant and timeline. Uses the Pageservers given
/// in the shard map, which must be complete and must use gRPC URLs.
/// in the shard spec, which must be complete and must use gRPC URLs.
pub fn new(
tenant_id: TenantId,
timeline_id: TimelineId,
shard_map: HashMap<ShardIndex, String>,
stripe_size: ShardStripeSize,
shard_spec: ShardSpec,
auth_token: Option<String>,
compression: Option<CompressionEncoding>,
) -> anyhow::Result<Self> {
let shards = Shards::new(tenant_id, timeline_id, shard_map, stripe_size, auth_token)?;
let shards = Shards::new(
tenant_id,
timeline_id,
shard_spec,
auth_token.clone(),
compression,
)?;
Ok(Self {
shards,
tenant_id,
timeline_id,
auth_token,
compression,
shards: ArcSwap::new(Arc::new(shards)),
retry: Retry,
})
}
/// Updates the shards from the given shard spec. In-flight requests will complete using the
/// existing shards, but may retry with the new shards if they fail.
///
/// TODO: verify that in-flight requests are allowed to complete, and that the old pools are
/// properly spun down and dropped afterwards.
pub fn update_shards(&self, shard_spec: ShardSpec) -> anyhow::Result<()> {
// Validate the shard spec. We should really use `ArcSwap::rcu` for this, to avoid races
// with concurrent updates, but that involves creating a new `Shards` on every attempt,
// which spins up a bunch of Tokio tasks and such. These should already be checked elsewhere
// in the stack, and if they're violated then we already have problems elsewhere, so a
// best-effort but possibly-racy check is okay here.
let old = self.shards.load_full();
if shard_spec.count < old.count {
return Err(anyhow!(
"can't reduce shard count from {} to {}",
old.count,
shard_spec.count
));
}
if !old.count.is_unsharded() && shard_spec.stripe_size != old.stripe_size {
return Err(anyhow!(
"can't change stripe size from {} to {}",
old.stripe_size,
shard_spec.stripe_size
));
}
let shards = Shards::new(
self.tenant_id,
self.timeline_id,
shard_spec,
self.auth_token.clone(),
self.compression,
)?;
self.shards.store(Arc::new(shards));
Ok(())
}
/// Returns whether a relation exists.
#[instrument(skip_all, fields(rel=%req.rel, lsn=%req.read_lsn))]
pub async fn check_rel_exists(
@@ -84,9 +143,9 @@ impl PageserverClient {
req: page_api::CheckRelExistsRequest,
) -> tonic::Result<page_api::CheckRelExistsResponse> {
self.retry
.with(async || {
.with(async |_| {
// Relation metadata is only available on shard 0.
let mut client = self.shards.get_zero().client().await?;
let mut client = self.shards.load_full().get_zero().client().await?;
client.check_rel_exists(req).await
})
.await
@@ -99,16 +158,17 @@ impl PageserverClient {
req: page_api::GetDbSizeRequest,
) -> tonic::Result<page_api::GetDbSizeResponse> {
self.retry
.with(async || {
.with(async |_| {
// Relation metadata is only available on shard 0.
let mut client = self.shards.get_zero().client().await?;
let mut client = self.shards.load_full().get_zero().client().await?;
client.get_db_size(req).await
})
.await
}
/// Fetches pages. The `request_id` must be unique across all in-flight requests. Automatically
/// splits requests that straddle shard boundaries, and assembles the responses.
/// Fetches pages. The `request_id` must be unique across all in-flight requests, and the
/// `attempt` must be 0 (incremented on retry). Automatically splits requests that straddle
/// shard boundaries, and assembles the responses.
///
/// Unlike `page_api::Client`, this automatically converts `status_code` into `tonic::Status`
/// errors. All responses will have `GetPageStatusCode::Ok`.
@@ -128,72 +188,96 @@ impl PageserverClient {
if req.block_numbers.is_empty() {
return Err(tonic::Status::invalid_argument("no block number"));
}
// The request attempt must be 0. The client will increment it internally.
if req.request_id.attempt != 0 {
return Err(tonic::Status::invalid_argument("request attempt must be 0"));
}
// The shards may change while we're fetching pages. We execute the request using a stable
// view of the shards (especially important for requests that span shards), but retry the
// top-level (pre-split) request to pick up shard changes. This can lead to unnecessary
// retries and re-splits in some cases where requests span shards, but these are expected to
// be rare.
//
// TODO: the gRPC server and client doesn't yet properly support shard splits. Revisit this
// once we figure out how to handle these.
self.retry
.with(async |attempt| {
let mut req = req.clone();
req.request_id.attempt = attempt as u32;
Self::get_page_with_shards(req, &self.shards.load_full()).await
})
.await
}
/// Fetches pages using the given shards. This uses a stable view of the shards, regardless of
/// concurrent shard updates. Does not retry internally, but is retried by `get_page()`.
async fn get_page_with_shards(
req: page_api::GetPageRequest,
shards: &Shards,
) -> tonic::Result<page_api::GetPageResponse> {
// Fast path: request is for a single shard.
if let Some(shard_id) =
GetPageSplitter::is_single_shard(&req, self.shards.count, self.shards.stripe_size)
GetPageSplitter::for_single_shard(&req, shards.count, shards.stripe_size)
{
return self.get_page_for_shard(shard_id, req).await;
return Self::get_page_with_shard(req, shards.get(shard_id)?).await;
}
// Request spans multiple shards. Split it, dispatch concurrent per-shard requests, and
// reassemble the responses.
//
// TODO: when we support shard map updates, we need to detect when it changes and re-split
// the request on errors.
let mut splitter = GetPageSplitter::split(req, self.shards.count, self.shards.stripe_size);
let mut splitter = GetPageSplitter::split(req, shards.count, shards.stripe_size);
let mut shard_requests: FuturesUnordered<_> = splitter
.drain_requests()
.map(|(shard_id, shard_req)| {
// NB: each request will retry internally.
self.get_page_for_shard(shard_id, shard_req)
.map(move |result| result.map(|resp| (shard_id, resp)))
})
.collect();
let mut shard_requests = FuturesUnordered::new();
for (shard_id, shard_req) in splitter.drain_requests() {
let future = Self::get_page_with_shard(shard_req, shards.get(shard_id)?)
.map(move |result| result.map(|resp| (shard_id, resp)));
shard_requests.push(future);
}
while let Some((shard_id, shard_response)) = shard_requests.next().await.transpose()? {
splitter.add_response(shard_id, shard_response)?;
}
splitter.assemble_response()
splitter.get_response()
}
/// Fetches pages that belong to the given shard.
#[instrument(skip_all, fields(shard = %shard_id))]
async fn get_page_for_shard(
&self,
shard_id: ShardIndex,
/// Fetches pages on the given shard. Does not retry internally.
async fn get_page_with_shard(
req: page_api::GetPageRequest,
shard: &Shard,
) -> tonic::Result<page_api::GetPageResponse> {
let resp = self
.retry
.with(async || {
let stream = self
.shards
.get(shard_id)?
.stream(req.request_class.is_bulk())
.await;
let resp = stream.send(req.clone()).await?;
let stream = shard.stream(req.request_class.is_bulk()).await;
let resp = stream.send(req.clone()).await?;
// Convert per-request errors into a tonic::Status.
if resp.status_code != page_api::GetPageStatusCode::Ok {
return Err(tonic::Status::new(
resp.status_code.into(),
resp.reason.unwrap_or_else(|| String::from("unknown error")),
));
}
// Convert per-request errors into a tonic::Status.
if resp.status_code != page_api::GetPageStatusCode::Ok {
return Err(tonic::Status::new(
resp.status_code.into(),
resp.reason.unwrap_or_else(|| String::from("unknown error")),
));
}
Ok(resp)
})
.await?;
// Make sure we got the right number of pages.
// NB: check outside of the retry loop, since we don't want to retry this.
let (expected, actual) = (req.block_numbers.len(), resp.page_images.len());
if expected != actual {
// Check that we received the expected pages.
if req.rel != resp.rel {
return Err(tonic::Status::internal(format!(
"expected {expected} pages for shard {shard_id}, got {actual}",
"shard {} returned wrong relation, expected {} got {}",
shard.id, req.rel, resp.rel
)));
}
if !req
.block_numbers
.iter()
.copied()
.eq(resp.pages.iter().map(|p| p.block_number))
{
return Err(tonic::Status::internal(format!(
"shard {} returned wrong pages, expected {:?} got {:?}",
shard.id,
req.block_numbers,
resp.pages
.iter()
.map(|page| page.block_number)
.collect::<Vec<_>>()
)));
}
@@ -207,9 +291,9 @@ impl PageserverClient {
req: page_api::GetRelSizeRequest,
) -> tonic::Result<page_api::GetRelSizeResponse> {
self.retry
.with(async || {
.with(async |_| {
// Relation metadata is only available on shard 0.
let mut client = self.shards.get_zero().client().await?;
let mut client = self.shards.load_full().get_zero().client().await?;
client.get_rel_size(req).await
})
.await
@@ -222,50 +306,53 @@ impl PageserverClient {
req: page_api::GetSlruSegmentRequest,
) -> tonic::Result<page_api::GetSlruSegmentResponse> {
self.retry
.with(async || {
.with(async |_| {
// SLRU segments are only available on shard 0.
let mut client = self.shards.get_zero().client().await?;
let mut client = self.shards.load_full().get_zero().client().await?;
client.get_slru_segment(req).await
})
.await
}
}
/// Tracks the tenant's shards.
struct Shards {
/// Shard specification for a PageserverClient.
pub struct ShardSpec {
/// Maps shard indices to gRPC URLs.
///
/// INVARIANT: every shard 0..count is present, and shard 0 is always present.
/// INVARIANT: every URL is valid and uses grpc:// scheme.
urls: HashMap<ShardIndex, String>,
/// The shard count.
///
/// NB: this is 0 for unsharded tenants, following `ShardIndex::unsharded()` convention.
count: ShardCount,
/// The stripe size. Only used for sharded tenants.
/// The stripe size for these shards.
stripe_size: ShardStripeSize,
/// Shards by shard index.
///
/// NB: unsharded tenants use count 0, like `ShardIndex::unsharded()`.
///
/// INVARIANT: every shard 0..count is present.
/// INVARIANT: shard 0 is always present.
map: HashMap<ShardIndex, Shard>,
}
impl Shards {
/// Creates a new set of shards based on a shard map.
fn new(
tenant_id: TenantId,
timeline_id: TimelineId,
shard_map: HashMap<ShardIndex, String>,
stripe_size: ShardStripeSize,
auth_token: Option<String>,
impl ShardSpec {
/// Creates a new shard spec with the given URLs and stripe size. All shards must be given.
/// The stripe size may be omitted for unsharded tenants.
pub fn new(
urls: HashMap<ShardIndex, String>,
stripe_size: Option<ShardStripeSize>,
) -> anyhow::Result<Self> {
let count = match shard_map.len() {
// Compute the shard count.
let count = match urls.len() {
0 => return Err(anyhow!("no shards provided")),
1 => ShardCount::new(0), // NB: unsharded tenants use 0, like `ShardIndex::unsharded()`
n if n > u8::MAX as usize => return Err(anyhow!("too many shards: {n}")),
n => ShardCount::new(n as u8),
};
let mut map = HashMap::new();
for (shard_id, url) in shard_map {
// Determine the stripe size. It doesn't matter for unsharded tenants.
if stripe_size.is_none() && !count.is_unsharded() {
return Err(anyhow!("stripe size must be given for sharded tenants"));
}
let stripe_size = stripe_size.unwrap_or_default();
// Validate the shard spec.
for (shard_id, url) in &urls {
// The shard index must match the computed shard count, even for unsharded tenants.
if shard_id.shard_count != count {
return Err(anyhow!("invalid shard index {shard_id}, expected {count}"));
@@ -276,21 +363,72 @@ impl Shards {
}
// The above conditions guarantee that we have all shards 0..count: len() matches count,
// shard number < count, and numbers are unique (via hashmap).
let shard = Shard::new(url, tenant_id, timeline_id, shard_id, auth_token.clone())?;
map.insert(shard_id, shard);
// Validate the URL.
if PageserverProtocol::from_connstring(url)? != PageserverProtocol::Grpc {
return Err(anyhow!("invalid shard URL {url}: must use gRPC"));
}
}
Ok(Self {
urls,
count,
stripe_size,
map,
})
}
}
/// Tracks the tenant's shards.
struct Shards {
/// Shards by shard index.
///
/// INVARIANT: every shard 0..count is present.
/// INVARIANT: shard 0 is always present.
by_index: HashMap<ShardIndex, Shard>,
/// The shard count.
///
/// NB: this is 0 for unsharded tenants, following `ShardIndex::unsharded()` convention.
count: ShardCount,
/// The stripe size. Only used for sharded tenants.
stripe_size: ShardStripeSize,
}
impl Shards {
/// Creates a new set of shards based on a shard spec.
fn new(
tenant_id: TenantId,
timeline_id: TimelineId,
shard_spec: ShardSpec,
auth_token: Option<String>,
compression: Option<CompressionEncoding>,
) -> anyhow::Result<Self> {
// NB: the shard spec has already been validated when constructed.
let mut shards = HashMap::with_capacity(shard_spec.urls.len());
for (shard_id, url) in shard_spec.urls {
shards.insert(
shard_id,
Shard::new(
url,
tenant_id,
timeline_id,
shard_id,
auth_token.clone(),
compression,
)?,
);
}
Ok(Self {
by_index: shards,
count: shard_spec.count,
stripe_size: shard_spec.stripe_size,
})
}
/// Looks up the given shard.
#[allow(clippy::result_large_err)] // TODO: check perf impact
fn get(&self, shard_id: ShardIndex) -> tonic::Result<&Shard> {
self.map
self.by_index
.get(&shard_id)
.ok_or_else(|| tonic::Status::not_found(format!("unknown shard {shard_id}")))
}
@@ -312,6 +450,8 @@ impl Shards {
/// * Bulk client pool: unbounded.
/// * Bulk stream pool: MAX_BULK_STREAMS and MAX_BULK_STREAM_QUEUE_DEPTH.
struct Shard {
/// The shard ID.
id: ShardIndex,
/// Unary gRPC client pool.
client_pool: Arc<ClientPool>,
/// GetPage stream pool.
@@ -328,12 +468,8 @@ impl Shard {
timeline_id: TimelineId,
shard_id: ShardIndex,
auth_token: Option<String>,
compression: Option<CompressionEncoding>,
) -> anyhow::Result<Self> {
// Sanity-check that the URL uses gRPC.
if PageserverProtocol::from_connstring(&url)? != PageserverProtocol::Grpc {
return Err(anyhow!("invalid shard URL {url}: must use gRPC"));
}
// Common channel pool for unary and stream requests. Bounded by client/stream pools.
let channel_pool = ChannelPool::new(url.clone(), MAX_CLIENTS_PER_CHANNEL)?;
@@ -344,6 +480,7 @@ impl Shard {
timeline_id,
shard_id,
auth_token.clone(),
compression,
Some(MAX_UNARY_CLIENTS),
);
@@ -356,6 +493,7 @@ impl Shard {
timeline_id,
shard_id,
auth_token.clone(),
compression,
None, // unbounded, limited by stream pool
),
Some(MAX_STREAMS),
@@ -371,6 +509,7 @@ impl Shard {
timeline_id,
shard_id,
auth_token,
compression,
None, // unbounded, limited by stream pool
),
Some(MAX_BULK_STREAMS),
@@ -378,6 +517,7 @@ impl Shard {
);
Ok(Self {
id: shard_id,
client_pool,
stream_pool,
bulk_stream_pool,

View File

@@ -3,4 +3,4 @@ mod pool;
mod retry;
mod split;
pub use client::PageserverClient;
pub use client::{PageserverClient, ShardSpec};

View File

@@ -34,10 +34,13 @@ use std::num::NonZero;
use std::ops::{Deref, DerefMut};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex, Weak};
use std::time::{Duration, Instant};
use futures::StreamExt as _;
use tokio::sync::mpsc::{Receiver, Sender};
use tokio::sync::{OwnedSemaphorePermit, Semaphore, mpsc, oneshot};
use tokio_util::sync::CancellationToken;
use tonic::codec::CompressionEncoding;
use tonic::transport::{Channel, Endpoint};
use tracing::{error, warn};
@@ -45,6 +48,25 @@ use pageserver_page_api as page_api;
use utils::id::{TenantId, TimelineId};
use utils::shard::ShardIndex;
/// Reap channels/clients/streams that have been idle for this long.
///
/// TODO: this is per-pool. For nested pools, it can take up to 3x as long for a TCP connection to
/// be reaped. First, we must wait for an idle stream to be reaped, which marks its client as idle.
/// Then, we must wait for the idle client to be reaped, which marks its channel as idle. Then, we
/// must wait for the idle channel to be reaped. Is that a problem? Maybe not, we just have to
/// account for it when setting the reap threshold. Alternatively, we can immediately reap empty
/// channels, and/or stream pool clients.
const REAP_IDLE_THRESHOLD: Duration = match cfg!(any(test, feature = "testing")) {
false => Duration::from_secs(180),
true => Duration::from_secs(1), // exercise reaping in tests
};
/// Reap idle resources with this interval.
const REAP_IDLE_INTERVAL: Duration = match cfg!(any(test, feature = "testing")) {
false => Duration::from_secs(10),
true => Duration::from_secs(1), // exercise reaping in tests
};
/// A gRPC channel pool, for a single Pageserver. A channel is shared by many clients (via HTTP/2
/// stream multiplexing), up to `clients_per_channel` -- a new channel will be spun up beyond this.
/// The pool does not limit the number of channels, and instead relies on `ClientPool` or
@@ -52,7 +74,6 @@ use utils::shard::ShardIndex;
///
/// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads.
///
/// TODO: reap idle channels.
/// TODO: consider prewarming a set of channels, to avoid initial connection latency.
/// TODO: consider adding a circuit breaker for errors and fail fast.
pub struct ChannelPool {
@@ -62,6 +83,8 @@ pub struct ChannelPool {
max_clients_per_channel: NonZero<usize>,
/// Open channels.
channels: Mutex<BTreeMap<ChannelID, ChannelEntry>>,
/// Reaps idle channels.
idle_reaper: Reaper,
/// Channel ID generator.
next_channel_id: AtomicUsize,
}
@@ -73,6 +96,9 @@ struct ChannelEntry {
channel: Channel,
/// Number of clients using this channel.
clients: usize,
/// The channel has been idle (no clients) since this time. None if channel is in use.
/// INVARIANT: Some if clients == 0, otherwise None.
idle_since: Option<Instant>,
}
impl ChannelPool {
@@ -82,12 +108,15 @@ impl ChannelPool {
E: TryInto<Endpoint> + Send + Sync + 'static,
<E as TryInto<Endpoint>>::Error: std::error::Error + Send + Sync,
{
Ok(Arc::new(Self {
let pool = Arc::new(Self {
endpoint: endpoint.try_into()?,
max_clients_per_channel,
channels: Mutex::default(),
idle_reaper: Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL),
next_channel_id: AtomicUsize::default(),
}))
});
pool.idle_reaper.spawn(&pool);
Ok(pool)
}
/// Acquires a gRPC channel for a client. Multiple clients may acquire the same channel.
@@ -116,8 +145,14 @@ impl ChannelPool {
entry.clients <= self.max_clients_per_channel.get(),
"channel overflow"
);
assert_eq!(
entry.idle_since.is_some(),
entry.clients == 0,
"incorrect channel idle state"
);
if entry.clients < self.max_clients_per_channel.get() {
entry.clients += 1;
entry.idle_since = None;
return ChannelGuard {
pool: Arc::downgrade(self),
id,
@@ -134,6 +169,7 @@ impl ChannelPool {
let entry = ChannelEntry {
channel: channel.clone(),
clients: 1, // account for the guard below
idle_since: None,
};
channels.insert(id, entry);
@@ -145,6 +181,20 @@ impl ChannelPool {
}
}
impl Reapable for ChannelPool {
/// Reaps channels that have been idle since before the cutoff.
fn reap_idle(&self, cutoff: Instant) {
self.channels.lock().unwrap().retain(|_, entry| {
let Some(idle_since) = entry.idle_since else {
assert_ne!(entry.clients, 0, "empty channel not marked idle");
return true;
};
assert_eq!(entry.clients, 0, "idle channel has clients");
idle_since >= cutoff
})
}
}
/// Tracks a channel acquired from the pool. The owned inner channel can be obtained with `take()`,
/// since the gRPC client requires an owned `Channel`.
pub struct ChannelGuard {
@@ -167,10 +217,15 @@ impl Drop for ChannelGuard {
let Some(pool) = self.pool.upgrade() else {
return; // pool was dropped
};
let mut channels = pool.channels.lock().unwrap();
let entry = channels.get_mut(&self.id).expect("unknown channel");
assert!(entry.idle_since.is_none(), "active channel marked idle");
assert!(entry.clients > 0, "channel underflow");
entry.clients -= 1;
if entry.clients == 0 {
entry.idle_since = Some(Instant::now()); // mark channel as idle
}
}
}
@@ -179,8 +234,6 @@ impl Drop for ChannelGuard {
/// number of concurrent clients to `max_clients` via semaphore.
///
/// The pool is always wrapped in an outer `Arc`, to allow long-lived guards across tasks/threads.
///
/// TODO: reap idle clients.
pub struct ClientPool {
/// Tenant ID.
tenant_id: TenantId,
@@ -190,6 +243,8 @@ pub struct ClientPool {
shard_id: ShardIndex,
/// Authentication token, if any.
auth_token: Option<String>,
/// Compression to use.
compression: Option<CompressionEncoding>,
/// Channel pool to acquire channels from.
channel_pool: Arc<ChannelPool>,
/// Limits the max number of concurrent clients for this pool. None if the pool is unbounded.
@@ -201,6 +256,8 @@ pub struct ClientPool {
/// lower-ordered channels. This allows us to free up and reap higher-numbered channels as idle
/// clients are reaped.
idle: Mutex<BTreeMap<ClientID, ClientEntry>>,
/// Reaps idle clients.
idle_reaper: Reaper,
/// Unique client ID generator.
next_client_id: AtomicUsize,
}
@@ -212,6 +269,9 @@ struct ClientEntry {
client: page_api::Client,
/// The channel guard for the channel used by the client.
channel_guard: ChannelGuard,
/// The client has been idle since this time. All clients in `ClientPool::idle` are idle by
/// definition, so this is the time when it was added back to the pool.
idle_since: Instant,
}
impl ClientPool {
@@ -224,18 +284,23 @@ impl ClientPool {
timeline_id: TimelineId,
shard_id: ShardIndex,
auth_token: Option<String>,
compression: Option<CompressionEncoding>,
max_clients: Option<NonZero<usize>>,
) -> Arc<Self> {
Arc::new(Self {
let pool = Arc::new(Self {
tenant_id,
timeline_id,
shard_id,
auth_token,
compression,
channel_pool,
idle: Mutex::default(),
idle_reaper: Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL),
limiter: max_clients.map(|max| Arc::new(Semaphore::new(max.get()))),
next_client_id: AtomicUsize::default(),
})
});
pool.idle_reaper.spawn(&pool);
pool
}
/// Gets a client from the pool, or creates a new one if necessary. Connections are established
@@ -271,7 +336,7 @@ impl ClientPool {
self.timeline_id,
self.shard_id,
self.auth_token.clone(),
None,
self.compression,
)?;
Ok(ClientGuard {
@@ -287,6 +352,16 @@ impl ClientPool {
}
}
impl Reapable for ClientPool {
/// Reaps clients that have been idle since before the cutoff.
fn reap_idle(&self, cutoff: Instant) {
self.idle
.lock()
.unwrap()
.retain(|_, entry| entry.idle_since >= cutoff)
}
}
/// A client acquired from the pool. The inner client can be accessed via Deref. The client is
/// returned to the pool when dropped.
pub struct ClientGuard {
@@ -317,9 +392,11 @@ impl Drop for ClientGuard {
let Some(pool) = self.pool.upgrade() else {
return; // pool was dropped
};
let entry = ClientEntry {
client: self.client.take().expect("dropped once"),
channel_guard: self.channel_guard.take().expect("dropped once"),
idle_since: Instant::now(),
};
pool.idle.lock().unwrap().insert(self.id, entry);
@@ -334,7 +411,6 @@ impl Drop for ClientGuard {
/// a single request and await the response. Internally, requests are multiplexed across streams and
/// channels. This allows proper queue depth enforcement and response routing.
///
/// TODO: reap idle streams.
/// TODO: consider making this generic over request and response types; not currently needed.
pub struct StreamPool {
/// The client pool to acquire clients from. Must be unbounded.
@@ -344,7 +420,7 @@ pub struct StreamPool {
/// Incoming requests will be sent over an existing stream with available capacity. If all
/// streams are full, a new one is spun up and added to the pool (up to `max_streams`). Each
/// stream has an associated Tokio task that processes requests and responses.
streams: Arc<Mutex<HashMap<StreamID, StreamEntry>>>,
streams: Mutex<HashMap<StreamID, StreamEntry>>,
/// The max number of concurrent streams, or None if unbounded.
max_streams: Option<NonZero<usize>>,
/// The max number of concurrent requests per stream.
@@ -352,6 +428,8 @@ pub struct StreamPool {
/// Limits the max number of concurrent requests, given by `max_streams * max_queue_depth`.
/// None if the pool is unbounded.
limiter: Option<Arc<Semaphore>>,
/// Reaps idle streams.
idle_reaper: Reaper,
/// Stream ID generator.
next_stream_id: AtomicUsize,
}
@@ -364,9 +442,11 @@ type ResponseSender = oneshot::Sender<tonic::Result<page_api::GetPageResponse>>;
struct StreamEntry {
/// Sends caller requests to the stream task. The stream task exits when this is dropped.
sender: RequestSender,
/// Number of in-flight requests on this stream. This is an atomic to allow decrementing it on
/// completion without acquiring the `StreamPool::streams` lock.
queue_depth: Arc<AtomicUsize>,
/// Number of in-flight requests on this stream.
queue_depth: usize,
/// The time when this stream went idle (queue_depth == 0).
/// INVARIANT: Some if queue_depth == 0, otherwise None.
idle_since: Option<Instant>,
}
impl StreamPool {
@@ -383,16 +463,19 @@ impl StreamPool {
max_queue_depth: NonZero<usize>,
) -> Arc<Self> {
assert!(client_pool.limiter.is_none(), "bounded client pool");
Arc::new(Self {
let pool = Arc::new(Self {
client_pool,
streams: Arc::default(),
streams: Mutex::default(),
limiter: max_streams.map(|max_streams| {
Arc::new(Semaphore::new(max_streams.get() * max_queue_depth.get()))
}),
max_streams,
max_queue_depth,
idle_reaper: Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL),
next_stream_id: AtomicUsize::default(),
})
});
pool.idle_reaper.spawn(&pool);
pool
}
/// Acquires an available stream from the pool, or spins up a new stream async if all streams
@@ -412,8 +495,8 @@ impl StreamPool {
/// * Allow concurrent clients to join onto streams while they're spun up.
/// * Allow spinning up multiple streams concurrently, but don't overshoot limits.
///
/// For now, we just do something simple and functional, but very inefficient (linear scan).
pub async fn get(&self) -> StreamGuard {
/// For now, we just do something simple but inefficient (linear scan under mutex).
pub async fn get(self: &Arc<Self>) -> StreamGuard {
// Acquire a permit if the pool is bounded.
let mut permit = None;
if let Some(limiter) = self.limiter.clone() {
@@ -422,23 +505,23 @@ impl StreamPool {
let mut streams = self.streams.lock().unwrap();
// Look for a pooled stream with available capacity.
for entry in streams.values() {
for (&id, entry) in streams.iter_mut() {
assert!(
entry.queue_depth.load(Ordering::Relaxed) <= self.max_queue_depth.get(),
entry.queue_depth <= self.max_queue_depth.get(),
"stream queue overflow"
);
if entry
.queue_depth
.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |queue_depth| {
// Increment the queue depth via compare-and-swap.
// TODO: review ordering.
(queue_depth < self.max_queue_depth.get()).then_some(queue_depth + 1)
})
.is_ok()
{
assert_eq!(
entry.idle_since.is_some(),
entry.queue_depth == 0,
"incorrect stream idle state"
);
if entry.queue_depth < self.max_queue_depth.get() {
entry.queue_depth += 1;
entry.idle_since = None;
return StreamGuard {
pool: Arc::downgrade(self),
id,
sender: entry.sender.clone(),
queue_depth: entry.queue_depth.clone(),
permit,
};
}
@@ -448,11 +531,11 @@ impl StreamPool {
// return the guard, while spinning up the stream task async. This allows other callers to
// join onto this stream and also create additional streams concurrently if this fills up.
let id = self.next_stream_id.fetch_add(1, Ordering::Relaxed);
let queue_depth = Arc::new(AtomicUsize::new(1)); // reserve quota for this caller
let (req_tx, req_rx) = mpsc::channel(self.max_queue_depth.get());
let entry = StreamEntry {
sender: req_tx.clone(),
queue_depth: queue_depth.clone(),
queue_depth: 1, // reserve quota for this caller
idle_since: None,
};
streams.insert(id, entry);
@@ -461,20 +544,23 @@ impl StreamPool {
};
let client_pool = self.client_pool.clone();
let streams = self.streams.clone();
let pool = Arc::downgrade(self);
tokio::spawn(async move {
if let Err(err) = Self::run_stream(client_pool, req_rx).await {
error!("stream failed: {err}");
}
// Remove stream from pool on exit.
let entry = streams.lock().unwrap().remove(&id);
assert!(entry.is_some(), "unknown stream ID: {id}");
// Remove stream from pool on exit. Weak reference to avoid holding the pool alive.
if let Some(pool) = pool.upgrade() {
let entry = pool.streams.lock().unwrap().remove(&id);
assert!(entry.is_some(), "unknown stream ID: {id}");
}
});
StreamGuard {
pool: Arc::downgrade(self),
id,
sender: req_tx,
queue_depth,
permit,
}
}
@@ -505,6 +591,10 @@ impl StreamPool {
// Track caller response channels by request ID. If the task returns early, these response
// channels will be dropped and the waiting callers will receive an error.
//
// NB: this will leak entries if the server never responds to a request (by request ID).
// That shouldn't happen, and if it does, the request will typically keep holding its
// queue depth quota anyway and block further use. But we could consider reaping closed
// channels after some time.
let mut callers = HashMap::new();
// Process requests and responses.
@@ -552,11 +642,26 @@ impl StreamPool {
}
}
impl Reapable for StreamPool {
/// Reaps streams that have been idle since before the cutoff.
fn reap_idle(&self, cutoff: Instant) {
self.streams.lock().unwrap().retain(|_, entry| {
let Some(idle_since) = entry.idle_since else {
assert_ne!(entry.queue_depth, 0, "empty stream not marked idle");
return true;
};
assert_eq!(entry.queue_depth, 0, "idle stream has requests");
idle_since >= cutoff
});
}
}
/// A pooled stream reference. Can be used to send a single request, so that queue depth is
/// properly enforced. Queue depth is already reserved and will be released on drop.
pub struct StreamGuard {
pool: Weak<StreamPool>,
id: StreamID,
sender: RequestSender,
queue_depth: Arc<AtomicUsize>,
permit: Option<OwnedSemaphorePermit>, // None if pool is unbounded
}
@@ -588,11 +693,87 @@ impl StreamGuard {
impl Drop for StreamGuard {
fn drop(&mut self) {
let Some(pool) = self.pool.upgrade() else {
return; // pool was dropped
};
// Release the queue depth reservation on drop. This can prematurely decrement it if dropped
// before the response is received, which we tolerate for now (but see the TODO below).
let prev_queue_depth = self.queue_depth.fetch_sub(1, Ordering::SeqCst);
assert!(prev_queue_depth > 0, "stream queue underflow");
//
// TODO: actually, it's probably not okay. Queue depth release should be moved into the
// stream task, such that it continues to account for the queue depth slot until the server
// responds. Otherwise, if a slow request times out and keeps blocking the stream, the
// server will keep waiting on it and we can pile on subsequent requests (including the
// timeout retry) in the same stream and get blocked. But we may also want to avoid blocking
// requests on e.g. LSN waits and layer downloads, instead returning early to free up the
// stream. Or just scale out streams with a queue depth of 1 to sidestep all head-of-line
// blocking. TBD.
let mut streams = pool.streams.lock().unwrap();
let entry = streams.get_mut(&self.id).expect("unknown stream");
assert!(entry.idle_since.is_none(), "active stream marked idle");
assert!(entry.queue_depth > 0, "stream queue underflow");
entry.queue_depth -= 1;
if entry.queue_depth == 0 {
entry.idle_since = Some(Instant::now()); // mark stream as idle
}
_ = self.permit; // returned on drop, referenced for visibility
}
}
/// Periodically reaps idle resources from a pool.
struct Reaper {
/// The task check interval.
interval: Duration,
/// The threshold for reaping idle resources.
threshold: Duration,
/// Cancels the reaper task. Cancelled when the reaper is dropped.
cancel: CancellationToken,
}
impl Reaper {
/// Creates a new reaper.
pub fn new(threshold: Duration, interval: Duration) -> Self {
Self {
cancel: CancellationToken::new(),
threshold,
interval,
}
}
/// Spawns a task that periodically reaps idle resources from the given pool. The task is
/// cancelled when the reaper is dropped.
pub fn spawn(&self, pool: &Arc<impl Reapable>) {
// NB: hold a weak pool reference, otherwise the task will prevent dropping the pool.
let pool = Arc::downgrade(pool);
let cancel = self.cancel.clone();
let (interval, threshold) = (self.interval, self.threshold);
tokio::spawn(async move {
loop {
tokio::select! {
_ = tokio::time::sleep(interval) => {
let Some(pool) = pool.upgrade() else {
return; // pool was dropped
};
pool.reap_idle(Instant::now() - threshold);
}
_ = cancel.cancelled() => return,
}
}
});
}
}
impl Drop for Reaper {
fn drop(&mut self) {
self.cancel.cancel(); // cancel reaper task
}
}
/// A reapable resource pool.
trait Reapable: Send + Sync + 'static {
/// Reaps resources that have been idle since before the given cutoff.
fn reap_idle(&self, cutoff: Instant);
}
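To illustrate the contract, a minimal hypothetical implementor (a toy pool of idle-since timestamps) and, in comments, how it would be wired up with the reaper:

```rust
use std::sync::Mutex;
use std::time::Instant;

/// A toy pool that just tracks when each pooled entry went idle.
struct ToyPool {
    idle: Mutex<Vec<Instant>>,
}

impl Reapable for ToyPool {
    fn reap_idle(&self, cutoff: Instant) {
        // Keep only entries that went idle at or after the cutoff.
        self.idle.lock().unwrap().retain(|&since| since >= cutoff);
    }
}

// Hypothetical wiring, mirroring StreamPool::new() above:
//
//   let pool = Arc::new(ToyPool { idle: Mutex::default() });
//   let reaper = Reaper::new(REAP_IDLE_THRESHOLD, REAP_IDLE_INTERVAL);
//   reaper.spawn(&pool); // exits when `reaper` is dropped or `pool` is gone
```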

View File

@@ -23,14 +23,14 @@ impl Retry {
/// If true, log successful requests. For debugging.
const LOG_SUCCESS: bool = false;
/// Runs the given async closure with timeouts and retries (exponential backoff). Logs errors,
/// using the current tracing span for context.
/// Runs the given async closure with timeouts and retries (exponential backoff), passing the
/// attempt number starting at 0. Logs errors, using the current tracing span for context.
///
/// Only certain gRPC status codes are retried, see [`Self::should_retry`]. For default
/// timeouts, see [`Self::REQUEST_TIMEOUT`] and [`Self::TOTAL_TIMEOUT`].
pub async fn with<T, F, O>(&self, mut f: F) -> tonic::Result<T>
where
F: FnMut() -> O,
F: FnMut(usize) -> O, // takes attempt number, starting at 0
O: Future<Output = tonic::Result<T>>,
{
let started = Instant::now();
@@ -47,7 +47,7 @@ impl Retry {
}
let request_started = Instant::now();
tokio::time::timeout(Self::REQUEST_TIMEOUT, f())
tokio::time::timeout(Self::REQUEST_TIMEOUT, f(retries))
.await
.map_err(|_| {
tonic::Status::deadline_exceeded(format!(
@@ -131,7 +131,6 @@ impl Retry {
tonic::Code::Aborted => true,
tonic::Code::Cancelled => true,
tonic::Code::DeadlineExceeded => true, // maybe transient slowness
tonic::Code::Internal => true, // maybe transient failure?
tonic::Code::ResourceExhausted => true,
tonic::Code::Unavailable => true,
@@ -139,6 +138,10 @@ impl Retry {
tonic::Code::AlreadyExists => false,
tonic::Code::DataLoss => false,
tonic::Code::FailedPrecondition => false,
// NB: don't retry Internal. It is intended for serious errors such as invariant
// violations, and is also used for client-side invariant checks that would otherwise
// result in retry loops.
tonic::Code::Internal => false,
tonic::Code::InvalidArgument => false,
tonic::Code::NotFound => false,
tonic::Code::OutOfRange => false,
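A hedged usage sketch of the new closure signature, with a hypothetical `flaky_op` standing in for a real gRPC call; the attempt number is what callers such as the gRPC client can stamp into `RequestID::attempt` on retries:

```rust
/// A hypothetical idempotent operation that fails on its first two attempts.
async fn flaky_op(attempt: usize) -> tonic::Result<u64> {
    if attempt < 2 {
        return Err(tonic::Status::unavailable("transient failure"));
    }
    Ok(42)
}

async fn example(retry: &Retry) -> tonic::Result<u64> {
    // The closure receives the attempt number (0, 1, 2, ...) on each try.
    retry.with(|attempt| flaky_op(attempt)).await
}
```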

View File

@@ -5,27 +5,24 @@ use bytes::Bytes;
use pageserver_api::key::rel_block_to_key;
use pageserver_api::shard::{ShardStripeSize, key_to_shard_number};
use pageserver_page_api as page_api;
use utils::shard::{ShardCount, ShardIndex};
use utils::shard::{ShardCount, ShardIndex, ShardNumber};
/// Splits GetPageRequests that straddle shard boundaries and assembles the responses.
/// TODO: add tests for this.
pub struct GetPageSplitter {
/// The original request ID. Used for all shard requests.
request_id: page_api::RequestID,
/// Split requests by shard index.
requests: HashMap<ShardIndex, page_api::GetPageRequest>,
/// Maps the offset in `GetPageRequest::block_numbers` to the owning shard. Used to assemble
/// the response pages in the same order as the original request.
/// The response being assembled. Preallocated with empty pages, to be filled in.
response: page_api::GetPageResponse,
/// Maps the offset in `request.block_numbers` and `response.pages` to the owning shard. Used
/// to assemble the response pages in the same order as the original request.
block_shards: Vec<ShardIndex>,
/// Page responses by shard index. Will be assembled into a single response.
responses: HashMap<ShardIndex, Vec<Bytes>>,
}
impl GetPageSplitter {
/// Checks if the given request only touches a single shard, and returns the shard ID. This is
/// the common case, so we check first in order to avoid unnecessary allocations and overhead.
/// The caller must ensure that the request has at least one block number, or this will panic.
pub fn is_single_shard(
pub fn for_single_shard(
req: &page_api::GetPageRequest,
count: ShardCount,
stripe_size: ShardStripeSize,
@@ -35,8 +32,12 @@ impl GetPageSplitter {
return Some(ShardIndex::unsharded());
}
// Find the base shard index for the first page, and compare with the rest.
let key = rel_block_to_key(req.rel, *req.block_numbers.first().expect("no pages"));
// Find the first page's shard, for comparison. If there are no pages, just return the first
// shard (the caller likely checked this already; otherwise the server will reject it).
let Some(&first_page) = req.block_numbers.first() else {
return Some(ShardIndex::new(ShardNumber(0), count));
};
let key = rel_block_to_key(req.rel, first_page);
let shard_number = key_to_shard_number(count, stripe_size, &key);
req.block_numbers
@@ -57,19 +58,19 @@ impl GetPageSplitter {
) -> Self {
// The caller should make sure we don't split requests unnecessarily.
debug_assert!(
Self::is_single_shard(&req, count, stripe_size).is_none(),
Self::for_single_shard(&req, count, stripe_size).is_none(),
"unnecessary request split"
);
// Split the requests by shard index.
let mut requests = HashMap::with_capacity(2); // common case
let mut block_shards = Vec::with_capacity(req.block_numbers.len());
for blkno in req.block_numbers {
for &blkno in &req.block_numbers {
let key = rel_block_to_key(req.rel, blkno);
let shard_number = key_to_shard_number(count, stripe_size, &key);
let shard_id = ShardIndex::new(shard_number, count);
let shard_req = requests
requests
.entry(shard_id)
.or_insert_with(|| page_api::GetPageRequest {
request_id: req.request_id,
@@ -77,27 +78,47 @@ impl GetPageSplitter {
rel: req.rel,
read_lsn: req.read_lsn,
block_numbers: Vec::new(),
});
shard_req.block_numbers.push(blkno);
})
.block_numbers
.push(blkno);
block_shards.push(shard_id);
}
Self {
// Construct a response to be populated by shard responses. Preallocate empty page slots
// with the expected block numbers.
let response = page_api::GetPageResponse {
request_id: req.request_id,
responses: HashMap::with_capacity(requests.len()),
status_code: page_api::GetPageStatusCode::Ok,
reason: None,
rel: req.rel,
pages: req
.block_numbers
.into_iter()
.map(|block_number| {
page_api::Page {
block_number,
image: Bytes::new(), // empty page slot to be filled in
}
})
.collect(),
};
Self {
requests,
response,
block_shards,
}
}
/// Drains the per-shard requests, moving them out of the hashmap to avoid extra allocations.
/// Drains the per-shard requests, moving them out of the splitter to avoid extra allocations.
pub fn drain_requests(
&mut self,
) -> impl Iterator<Item = (ShardIndex, page_api::GetPageRequest)> {
self.requests.drain()
}
/// Adds a response from the given shard.
/// Adds a response from the given shard. The response must match the request ID and have an OK
/// status code. A response must not already exist for the given shard ID.
#[allow(clippy::result_large_err)]
pub fn add_response(
&mut self,
@@ -105,68 +126,84 @@ impl GetPageSplitter {
response: page_api::GetPageResponse,
) -> tonic::Result<()> {
// The caller should already have converted status codes into tonic::Status.
assert_eq!(response.status_code, page_api::GetPageStatusCode::Ok);
// Make sure the response matches the request ID.
if response.request_id != self.request_id {
if response.status_code != page_api::GetPageStatusCode::Ok {
return Err(tonic::Status::internal(format!(
"response ID {} does not match request ID {}",
response.request_id, self.request_id
"unexpected non-OK response for shard {shard_id}: {} {}",
response.status_code,
response.reason.unwrap_or_default()
)));
}
// Add the response data to the map.
let old = self.responses.insert(shard_id, response.page_images);
if old.is_some() {
if response.request_id != self.response.request_id {
return Err(tonic::Status::internal(format!(
"duplicate response for shard {shard_id}",
"response ID mismatch for shard {shard_id}: expected {}, got {}",
self.response.request_id, response.request_id
)));
}
// Place the shard response pages into the assembled response, in request order.
let mut pages = response.pages.into_iter();
for (i, &s) in self.block_shards.iter().enumerate() {
if shard_id != s {
continue;
}
let Some(slot) = self.response.pages.get_mut(i) else {
return Err(tonic::Status::internal(format!(
"no block_shards slot {i} for shard {shard_id}"
)));
};
let Some(page) = pages.next() else {
return Err(tonic::Status::internal(format!(
"missing page {} in shard {shard_id} response",
slot.block_number
)));
};
if page.block_number != slot.block_number {
return Err(tonic::Status::internal(format!(
"shard {shard_id} returned wrong page at index {i}, expected {} got {}",
slot.block_number, page.block_number
)));
}
if !slot.image.is_empty() {
return Err(tonic::Status::internal(format!(
"shard {shard_id} returned duplicate page {} at index {i}",
slot.block_number
)));
}
*slot = page;
}
// Make sure we've consumed all pages from the shard response.
if let Some(extra_page) = pages.next() {
return Err(tonic::Status::internal(format!(
"shard {shard_id} returned extra page: {}",
extra_page.block_number
)));
}
Ok(())
}
/// Assembles the shard responses into a single response. Responses must be present for all
/// relevant shards, and the total number of pages must match the original request.
/// Returns the final, assembled response, erroring if any page slot was never filled in.
#[allow(clippy::result_large_err)]
pub fn assemble_response(self) -> tonic::Result<page_api::GetPageResponse> {
let mut response = page_api::GetPageResponse {
request_id: self.request_id,
status_code: page_api::GetPageStatusCode::Ok,
reason: None,
page_images: Vec::with_capacity(self.block_shards.len()),
};
// Set up per-shard page iterators we can pull from.
let mut shard_responses = HashMap::with_capacity(self.responses.len());
for (shard_id, responses) in self.responses {
shard_responses.insert(shard_id, responses.into_iter());
}
// Reassemble the responses in the same order as the original request.
for shard_id in &self.block_shards {
let page = shard_responses
.get_mut(shard_id)
.ok_or_else(|| {
tonic::Status::internal(format!("missing response for shard {shard_id}"))
})?
.next()
.ok_or_else(|| {
tonic::Status::internal(format!("missing page from shard {shard_id}"))
})?;
response.page_images.push(page);
}
// Make sure there are no additional pages.
for (shard_id, mut pages) in shard_responses {
if pages.next().is_some() {
pub fn get_response(self) -> tonic::Result<page_api::GetPageResponse> {
// Check that the response is complete.
for (i, page) in self.response.pages.iter().enumerate() {
if page.image.is_empty() {
return Err(tonic::Status::internal(format!(
"extra pages returned from shard {shard_id}"
"missing page {} for shard {}",
page.block_number,
self.block_shards
.get(i)
.map(|s| s.to_string())
.unwrap_or_else(|| "?".to_string())
)));
}
}
Ok(response)
Ok(self.response)
}
}
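Taken together, the splitter's intended call sequence is roughly the following. This is a hedged sketch: `split` stands in for the constructor whose name is truncated in the hunk above, `send_to_shard` is a hypothetical helper, and the sequential fan-out is for clarity only (a real caller would issue the per-shard requests concurrently).

```rust
// Fast path: all pages on one shard, no splitting needed.
if let Some(shard_id) = GetPageSplitter::for_single_shard(&req, count, stripe_size) {
    return send_to_shard(shard_id, req).await;
}

// Slow path: split by shard, fan out, reassemble in request order.
let mut splitter = GetPageSplitter::split(req, count, stripe_size); // constructor name assumed
let shard_reqs: Vec<_> = splitter.drain_requests().collect(); // end the &mut borrow
for (shard_id, shard_req) in shard_reqs {
    let shard_resp = send_to_shard(shard_id, shard_req).await?;
    splitter.add_response(shard_id, shard_resp)?;
}
let resp = splitter.get_response()?; // fails if any page slot was left empty
```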

View File

@@ -17,6 +17,7 @@ pageserver = { path = ".." }
pageserver_api.workspace = true
remote_storage = { path = "../../libs/remote_storage" }
postgres_ffi.workspace = true
serde.workspace = true
thiserror.workspace = true
tokio.workspace = true
tokio-util.workspace = true

View File

@@ -0,0 +1,85 @@
use camino::Utf8PathBuf;
use clap::Parser;
use tokio_util::sync::CancellationToken;
/// Download a specific object from remote storage to a local file.
///
/// The remote storage configuration is supplied via the `REMOTE_STORAGE_CONFIG` environment
/// variable, in the same TOML format that the pageserver itself understands. This allows the
/// command to work with any cloud supported by the `remote_storage` crate (currently AWS S3,
/// Azure Blob Storage and local files), as long as the credentials are available via the
/// standard environment variables expected by the underlying SDKs.
///
/// Examples for setting the environment variable:
///
/// ```bash
/// # AWS S3 (region can also be provided via AWS_REGION)
/// export REMOTE_STORAGE_CONFIG='remote_storage = { bucket_name = "my-bucket", bucket_region = "us-east-2" }'
///
/// # Azure Blob Storage (account key picked up from AZURE_STORAGE_ACCOUNT_KEY)
/// export REMOTE_STORAGE_CONFIG='remote_storage = { container = "my-container", account = "my-account" }'
/// ```
#[derive(Parser)]
pub(crate) struct DownloadRemoteObjectCmd {
/// Key / path of the object to download (relative to the remote storage prefix).
///
/// Examples:
/// "wal/3aa8f.../00000001000000000000000A"
/// "pageserver/v1/tenants/<tenant_id>/timelines/<timeline_id>/layer_12345"
pub remote_path: String,
/// Path of the local file to create. Existing file will be overwritten.
///
/// Examples:
/// "./segment"
/// "/tmp/layer_12345.parquet"
pub output_file: Utf8PathBuf,
}
pub(crate) async fn main(cmd: &DownloadRemoteObjectCmd) -> anyhow::Result<()> {
use remote_storage::{DownloadOpts, GenericRemoteStorage, RemotePath, RemoteStorageConfig};
// Fetch remote storage configuration from the environment
let config_str = std::env::var("REMOTE_STORAGE_CONFIG").map_err(|_| {
anyhow::anyhow!(
"'REMOTE_STORAGE_CONFIG' environment variable must be set to a valid remote storage TOML config"
)
})?;
let config = RemoteStorageConfig::from_toml_str(&config_str)?;
// Initialise remote storage client
let storage = GenericRemoteStorage::from_config(&config).await?;
// RemotePath must be relative; leading slashes confuse the parser.
let remote_path_str = cmd.remote_path.trim_start_matches('/');
let remote_path = RemotePath::from_string(remote_path_str)?;
let cancel = CancellationToken::new();
println!(
"Downloading '{remote_path}' from remote storage bucket {:?} ...",
config.storage.bucket_name()
);
// Start the actual download
let download = storage
.download(&remote_path, &DownloadOpts::default(), &cancel)
.await?;
// Stream to file
let mut reader = tokio_util::io::StreamReader::new(download.download_stream);
let tmp_path = cmd.output_file.with_extension("tmp");
let mut file = tokio::fs::File::create(&tmp_path).await?;
tokio::io::copy(&mut reader, &mut file).await?;
file.sync_all().await?;
// Atomically move into place
tokio::fs::rename(&tmp_path, &cmd.output_file).await?;
println!(
"Downloaded to '{}'. Last modified: {:?}, etag: {}",
cmd.output_file, download.last_modified, download.etag
);
Ok(())
}

View File

@@ -1,14 +1,16 @@
use std::str::FromStr;
use anyhow::Context;
use anyhow::{Context, Ok};
use camino::Utf8PathBuf;
use pageserver::tenant::{
IndexPart,
layer_map::{LayerMap, SearchResult},
remote_timeline_client::remote_layer_path,
storage_layer::{PersistentLayerDesc, ReadableLayerWeak},
remote_timeline_client::{index::LayerFileMetadata, remote_layer_path},
storage_layer::{LayerName, LayerVisibilityHint, PersistentLayerDesc, ReadableLayerWeak},
};
use pageserver_api::key::Key;
use serde::Serialize;
use std::collections::BTreeMap;
use utils::{
id::{TenantId, TimelineId},
lsn::Lsn,
@@ -33,6 +35,31 @@ pub(crate) enum IndexPartCmd {
#[arg(long)]
lsn: String,
},
/// List all visible delta and image layers at the latest LSN.
ListVisibleLayers {
#[arg(long)]
path: Utf8PathBuf,
},
}
fn create_layer_map_from_index_part(
index_part: &IndexPart,
tenant_shard_id: TenantShardId,
timeline_id: TimelineId,
) -> LayerMap {
let mut layer_map = LayerMap::default();
{
let mut updates = layer_map.batch_update();
for (key, value) in index_part.layer_metadata.iter() {
updates.insert_historic(PersistentLayerDesc::from_filename(
tenant_shard_id,
timeline_id,
key.clone(),
value.file_size,
));
}
}
layer_map
}
async fn search_layers(
@@ -49,18 +76,7 @@ async fn search_layers(
let bytes = tokio::fs::read(path).await?;
IndexPart::from_json_bytes(&bytes).unwrap()
};
let mut layer_map = LayerMap::default();
{
let mut updates = layer_map.batch_update();
for (key, value) in index_json.layer_metadata.iter() {
updates.insert_historic(PersistentLayerDesc::from_filename(
tenant_shard_id,
timeline_id,
key.clone(),
value.file_size,
));
}
}
let layer_map = create_layer_map_from_index_part(&index_json, tenant_shard_id, timeline_id);
let key = Key::from_hex(key)?;
let lsn = Lsn::from_str(lsn).unwrap();
@@ -98,6 +114,69 @@ async fn search_layers(
Ok(())
}
#[derive(Debug, Clone, Serialize)]
struct VisibleLayers {
pub total_images: u64,
pub total_image_bytes: u64,
pub total_deltas: u64,
pub total_delta_bytes: u64,
pub layer_metadata: BTreeMap<LayerName, LayerFileMetadata>,
}
impl VisibleLayers {
pub fn new() -> Self {
Self {
layer_metadata: BTreeMap::new(),
total_images: 0,
total_image_bytes: 0,
total_deltas: 0,
total_delta_bytes: 0,
}
}
pub fn add_layer(&mut self, name: LayerName, layer: LayerFileMetadata) {
match name {
LayerName::Image(_) => {
self.total_images += 1;
self.total_image_bytes += layer.file_size;
}
LayerName::Delta(_) => {
self.total_deltas += 1;
self.total_delta_bytes += layer.file_size;
}
}
self.layer_metadata.insert(name, layer);
}
}
async fn list_visible_layers(path: &Utf8PathBuf) -> anyhow::Result<()> {
let tenant_id = TenantId::generate();
let tenant_shard_id = TenantShardId::unsharded(tenant_id);
let timeline_id = TimelineId::generate();
let bytes = tokio::fs::read(path).await.context("read file")?;
let index_part = IndexPart::from_json_bytes(&bytes).context("deserialize")?;
let layer_map = create_layer_map_from_index_part(&index_part, tenant_shard_id, timeline_id);
let mut visible_layers = VisibleLayers::new();
let (layers, _key_space) = layer_map.get_visibility(Vec::new());
for (layer, visibility) in layers {
if visibility == LayerVisibilityHint::Visible {
visible_layers.add_layer(
layer.layer_name(),
index_part
.layer_metadata
.get(&layer.layer_name())
.unwrap()
.clone(),
);
}
}
let output = serde_json::to_string_pretty(&visible_layers).context("serialize output")?;
println!("{output}");
Ok(())
}
pub(crate) async fn main(cmd: &IndexPartCmd) -> anyhow::Result<()> {
match cmd {
IndexPartCmd::Dump { path } => {
@@ -114,5 +193,6 @@ pub(crate) async fn main(cmd: &IndexPartCmd) -> anyhow::Result<()> {
key,
lsn,
} => search_layers(tenant_id, timeline_id, path, key, lsn).await,
IndexPartCmd::ListVisibleLayers { path } => list_visible_layers(path).await,
}
}

View File

@@ -4,6 +4,7 @@
//!
//! A separate `metadata` subcommand allows printing and updating the pageserver's metadata file.
mod download_remote_object;
mod draw_timeline_dir;
mod index_part;
mod key;
@@ -16,6 +17,7 @@ use std::time::{Duration, SystemTime};
use camino::{Utf8Path, Utf8PathBuf};
use clap::{Parser, Subcommand};
use download_remote_object::DownloadRemoteObjectCmd;
use index_part::IndexPartCmd;
use layers::LayerCmd;
use page_trace::PageTraceCmd;
@@ -63,6 +65,7 @@ enum Commands {
/// Debug print a hex key found from logs
Key(key::DescribeKeyCommand),
PageTrace(PageTraceCmd),
DownloadRemoteObject(DownloadRemoteObjectCmd),
}
/// Read and update pageserver metadata file
@@ -185,6 +188,9 @@ async fn main() -> anyhow::Result<()> {
}
Commands::Key(dkc) => dkc.execute(),
Commands::PageTrace(cmd) => page_trace::main(&cmd)?,
Commands::DownloadRemoteObject(cmd) => {
download_remote_object::main(&cmd).await?;
}
};
Ok(())
}

View File

@@ -153,7 +153,7 @@ message GetDbSizeResponse {
message GetPageRequest {
// A request ID. Will be included in the response. Should be unique for
// in-flight requests on the stream.
uint64 request_id = 1;
RequestID request_id = 1;
// The request class.
GetPageClass request_class = 2;
// The LSN to read at.
@@ -177,6 +177,14 @@ message GetPageRequest {
repeated uint32 block_number = 5;
}
// A Request ID. Should be unique for in-flight requests on a stream. Included in the response.
message RequestID {
// The base request ID.
uint64 id = 1;
// The request attempt. Starts at 0, incremented on each retry.
uint32 attempt = 2;
}
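On the Rust side (see the `page_api` types later in this diff), the intent is that a retry keeps the base `id` and bumps `attempt`, so the pair stays unique among in-flight requests on a stream. A minimal sketch:

```rust
let first = RequestID::new(7); // id 7, attempt 0
let retry = RequestID { id: first.id, attempt: first.attempt + 1 };
assert_eq!(retry.to_string(), "7.1"); // Display renders "{id}.{attempt}"
```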
// A GetPageRequest class. Primarily intended for observability, but may also be
// used for prioritization in the future.
enum GetPageClass {
@@ -199,13 +207,26 @@ enum GetPageClass {
// the entire batch is ready, so no one can make use of the individual pages.
message GetPageResponse {
// The original request's ID.
uint64 request_id = 1;
// The response status code.
RequestID request_id = 1;
// The response status code. If not OK, the rel and page fields will be empty.
GetPageStatusCode status_code = 2;
// A string describing the status, if any.
string reason = 3;
// The 8KB page images, in the same order as the request. Empty if status_code != OK.
repeated bytes page_image = 4;
// The relation that the pages belong to.
RelTag rel = 4;
// The page(s), in the same order as the request.
repeated Page page = 5;
}
// A page.
//
// TODO: it would be slightly more efficient (but less convenient) to have separate arrays of block
// numbers and images, but given the 8KB page size it's probably negligible. Benchmark it anyway.
message Page {
// The page number.
uint32 block_number = 1;
// The materialized page image, as an 8KB byte vector.
bytes image = 2;
}
// A GetPageResponse status code.

View File

@@ -1,4 +1,5 @@
use anyhow::Context as _;
use futures::future::ready;
use futures::{Stream, StreamExt as _, TryStreamExt as _};
use tokio::io::AsyncRead;
use tokio_util::io::StreamReader;
@@ -110,7 +111,7 @@ impl Client {
) -> tonic::Result<impl Stream<Item = tonic::Result<GetPageResponse>> + Send + 'static> {
let reqs = reqs.map(proto::GetPageRequest::from);
let resps = self.inner.get_pages(reqs).await?.into_inner();
Ok(resps.map_ok(GetPageResponse::from))
Ok(resps.and_then(|resp| ready(GetPageResponse::try_from(resp).map_err(|err| err.into()))))
}
/// Returns the size of a relation, as # of blocks.

View File

@@ -356,7 +356,10 @@ impl TryFrom<proto::GetPageRequest> for GetPageRequest {
return Err(ProtocolError::Missing("block_number"));
}
Ok(Self {
request_id: pb.request_id,
request_id: pb
.request_id
.ok_or(ProtocolError::Missing("request_id"))?
.into(),
request_class: pb.request_class.into(),
read_lsn: pb
.read_lsn
@@ -371,7 +374,7 @@ impl TryFrom<proto::GetPageRequest> for GetPageRequest {
impl From<GetPageRequest> for proto::GetPageRequest {
fn from(request: GetPageRequest) -> Self {
Self {
request_id: request.request_id,
request_id: Some(request.request_id.into()),
request_class: request.request_class.into(),
read_lsn: Some(request.read_lsn.into()),
rel: Some(request.rel.into()),
@@ -380,8 +383,51 @@ impl From<GetPageRequest> for proto::GetPageRequest {
}
}
/// A GetPage request ID.
pub type RequestID = u64;
/// A GetPage request ID and retry attempt. Should be unique for in-flight requests on a stream.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub struct RequestID {
/// The base request ID.
pub id: u64,
/// The request attempt. Starts at 0, incremented on each retry.
pub attempt: u32,
}
impl RequestID {
/// Creates a new RequestID with the given ID and an initial attempt of 0.
pub fn new(id: u64) -> Self {
Self { id, attempt: 0 }
}
}
impl Display for RequestID {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}.{}", self.id, self.attempt)
}
}
impl From<proto::RequestId> for RequestID {
fn from(pb: proto::RequestId) -> Self {
Self {
id: pb.id,
attempt: pb.attempt,
}
}
}
impl From<u64> for RequestID {
fn from(id: u64) -> Self {
Self::new(id)
}
}
impl From<RequestID> for proto::RequestId {
fn from(request_id: RequestID) -> Self {
Self {
id: request_id.id,
attempt: request_id.attempt,
}
}
}
/// A GetPage request class.
#[derive(Clone, Copy, Debug, strum_macros::Display)]
@@ -456,32 +502,41 @@ impl From<GetPageClass> for i32 {
pub struct GetPageResponse {
/// The original request's ID.
pub request_id: RequestID,
/// The response status code.
/// The response status code. If not OK, the `rel` and `pages` fields will be empty.
pub status_code: GetPageStatusCode,
/// A string describing the status, if any.
pub reason: Option<String>,
/// The 8KB page images, in the same order as the request. Empty if status != OK.
pub page_images: Vec<Bytes>,
/// The relation that the pages belong to.
pub rel: RelTag,
/// The page(s), in the same order as the request.
pub pages: Vec<Page>,
}
impl From<proto::GetPageResponse> for GetPageResponse {
fn from(pb: proto::GetPageResponse) -> Self {
Self {
request_id: pb.request_id,
impl TryFrom<proto::GetPageResponse> for GetPageResponse {
type Error = ProtocolError;
fn try_from(pb: proto::GetPageResponse) -> Result<Self, ProtocolError> {
Ok(Self {
request_id: pb
.request_id
.ok_or(ProtocolError::Missing("request_id"))?
.into(),
status_code: pb.status_code.into(),
reason: Some(pb.reason).filter(|r| !r.is_empty()),
page_images: pb.page_image,
}
rel: pb.rel.ok_or(ProtocolError::Missing("rel"))?.try_into()?,
pages: pb.page.into_iter().map(Page::from).collect(),
})
}
}
impl From<GetPageResponse> for proto::GetPageResponse {
fn from(response: GetPageResponse) -> Self {
Self {
request_id: response.request_id,
request_id: Some(response.request_id.into()),
status_code: response.status_code.into(),
reason: response.reason.unwrap_or_default(),
page_image: response.page_images,
rel: Some(response.rel.into()),
page: response.pages.into_iter().map(proto::Page::from).collect(),
}
}
}
@@ -514,11 +569,39 @@ impl GetPageResponse {
request_id,
status_code,
reason: Some(status.message().to_string()),
page_images: Vec::new(),
rel: RelTag::default(),
pages: Vec::new(),
})
}
}
/// A page.
#[derive(Clone, Debug)]
pub struct Page {
/// The page number.
pub block_number: u32,
/// The materialized page image, as an 8KB byte vector.
pub image: Bytes,
}
impl From<proto::Page> for Page {
fn from(pb: proto::Page) -> Self {
Self {
block_number: pb.block_number,
image: pb.image,
}
}
}
impl From<Page> for proto::Page {
fn from(page: Page) -> Self {
Self {
block_number: page.block_number,
image: page.image,
}
}
}
/// A GetPage response status code.
///
/// These are effectively equivalent to gRPC statuses. However, we use a bidirectional stream

View File

@@ -27,8 +27,9 @@ tokio-util.workspace = true
tonic.workspace = true
url.workspace = true
pageserver_client.workspace = true
pageserver_api.workspace = true
pageserver_client.workspace = true
pageserver_client_grpc.workspace = true
pageserver_page_api.workspace = true
utils = { path = "../../libs/utils/" }
workspace_hack = { version = "0.1", path = "../../workspace_hack" }

View File

@@ -10,12 +10,14 @@ use anyhow::Context;
use async_trait::async_trait;
use bytes::Bytes;
use camino::Utf8PathBuf;
use futures::stream::FuturesUnordered;
use futures::{Stream, StreamExt as _};
use pageserver_api::key::Key;
use pageserver_api::keyspace::KeySpaceAccum;
use pageserver_api::pagestream_api::{PagestreamGetPageRequest, PagestreamRequest};
use pageserver_api::reltag::RelTag;
use pageserver_api::shard::TenantShardId;
use pageserver_client_grpc::{self as client_grpc, ShardSpec};
use pageserver_page_api as page_api;
use rand::prelude::*;
use tokio::task::JoinSet;
@@ -37,6 +39,10 @@ pub(crate) struct Args {
/// Pageserver connection string. Supports postgresql:// and grpc:// protocols.
#[clap(long, default_value = "postgres://postgres@localhost:64000")]
page_service_connstring: String,
/// Use the rich gRPC Pageserver client `client_grpc::PageserverClient`, rather than the basic
/// no-frills `page_api::Client`. Only valid with grpc:// connstrings.
#[clap(long)]
rich_client: bool,
#[clap(long)]
pageserver_jwt: Option<String>,
#[clap(long, default_value = "1")]
@@ -332,6 +338,7 @@ async fn main_impl(
let client: Box<dyn Client> = match scheme.as_str() {
"postgresql" | "postgres" => {
assert!(!args.compression, "libpq does not support compression");
assert!(!args.rich_client, "rich client requires grpc://");
Box::new(
LibpqClient::new(&args.page_service_connstring, worker_id.timeline)
.await
@@ -339,6 +346,16 @@ async fn main_impl(
)
}
"grpc" if args.rich_client => Box::new(
RichGrpcClient::new(
&args.page_service_connstring,
worker_id.timeline,
args.compression,
)
.await
.unwrap(),
),
"grpc" => Box::new(
GrpcClient::new(
&args.page_service_connstring,
@@ -657,7 +674,7 @@ impl Client for GrpcClient {
blks: Vec<u32>,
) -> anyhow::Result<()> {
let req = page_api::GetPageRequest {
request_id: req_id,
request_id: req_id.into(),
request_class: page_api::GetPageClass::Normal,
read_lsn: page_api::ReadLsn {
request_lsn: req_lsn,
@@ -677,6 +694,79 @@ impl Client for GrpcClient {
"unexpected status code: {}",
resp.status_code,
);
Ok((resp.request_id, resp.page_images))
Ok((
resp.request_id.id,
resp.pages.into_iter().map(|p| p.image).collect(),
))
}
}
/// A rich gRPC Pageserver client.
struct RichGrpcClient {
inner: Arc<client_grpc::PageserverClient>,
requests: FuturesUnordered<
Pin<Box<dyn Future<Output = anyhow::Result<page_api::GetPageResponse>> + Send>>,
>,
}
impl RichGrpcClient {
async fn new(
connstring: &str,
ttid: TenantTimelineId,
compression: bool,
) -> anyhow::Result<Self> {
let inner = Arc::new(client_grpc::PageserverClient::new(
ttid.tenant_id,
ttid.timeline_id,
ShardSpec::new(
[(ShardIndex::unsharded(), connstring.to_string())].into(),
None,
)?,
None,
compression.then_some(tonic::codec::CompressionEncoding::Zstd),
)?);
Ok(Self {
inner,
requests: FuturesUnordered::new(),
})
}
}
#[async_trait]
impl Client for RichGrpcClient {
async fn send_get_page(
&mut self,
req_id: u64,
req_lsn: Lsn,
mod_lsn: Lsn,
rel: RelTag,
blks: Vec<u32>,
) -> anyhow::Result<()> {
let req = page_api::GetPageRequest {
request_id: req_id.into(),
request_class: page_api::GetPageClass::Normal,
read_lsn: page_api::ReadLsn {
request_lsn: req_lsn,
not_modified_since_lsn: Some(mod_lsn),
},
rel,
block_numbers: blks,
};
let inner = self.inner.clone();
self.requests.push(Box::pin(async move {
inner
.get_page(req)
.await
.map_err(|err| anyhow::anyhow!("{err}"))
}));
Ok(())
}
async fn recv_get_page(&mut self) -> anyhow::Result<(u64, Vec<Bytes>)> {
let resp = self.requests.next().await.unwrap()?;
Ok((
resp.request_id.id,
resp.pages.into_iter().map(|p| p.image).collect(),
))
}
}

View File

@@ -17,6 +17,7 @@ use anyhow::{Context, anyhow};
use async_compression::tokio::write::GzipEncoder;
use bytes::{BufMut, Bytes, BytesMut};
use fail::fail_point;
use neon_failpoint as fail;
use pageserver_api::key::{Key, rel_block_to_key};
use pageserver_api::reltag::{RelTag, SlruKind};
use postgres_ffi::pg_constants::{PG_HBA, PGDATA_SPECIAL_FILES};

View File

@@ -29,8 +29,8 @@ use pageserver::task_mgr::{
};
use pageserver::tenant::{TenantSharedResources, mgr, secondary};
use pageserver::{
CancellableTask, ConsumptionMetricsTasks, HttpEndpointListener, HttpsEndpointListener, http,
page_cache, page_service, task_mgr, virtual_file,
CancellableTask, ConsumptionMetricsTasks, HttpEndpointListener, HttpsEndpointListener,
MetricsCollectionTask, http, page_cache, page_service, task_mgr, virtual_file,
};
use postgres_backend::AuthType;
use remote_storage::GenericRemoteStorage;
@@ -41,6 +41,7 @@ use tracing_utils::OtelGuard;
use utils::auth::{JwtAuth, SwappableJwtAuth};
use utils::crashsafe::syncfs;
use utils::logging::TracingErrorLayerEnablement;
use utils::metrics_collector::{METRICS_COLLECTION_INTERVAL, METRICS_COLLECTOR};
use utils::sentry_init::init_sentry;
use utils::{failpoint_support, logging, project_build_tag, project_git_version, tcp_listener};
@@ -67,7 +68,7 @@ const FEATURES: &[&str] = &[
fn version() -> String {
format!(
"{GIT_VERSION} failpoints: {}, features: {:?}",
fail::has_failpoints(),
neon_failpoint::has_failpoints(),
FEATURES,
)
}
@@ -83,7 +84,7 @@ fn main() -> anyhow::Result<()> {
}
// Initialize failpoint support
let scenario = failpoint_support::init();
failpoint_support::init().unwrap();
let workdir = arg_matches
.get_one::<String>("workdir")
@@ -220,7 +221,6 @@ fn main() -> anyhow::Result<()> {
start_pageserver(launch_ts, conf, ignored, otel_guard).context("Failed to start pageserver")?;
scenario.teardown();
Ok(())
}
@@ -365,16 +365,9 @@ fn start_pageserver(
// If any failpoints were set from FAILPOINTS environment variable,
// print them to the log for debugging purposes
let failpoints = fail::list();
if !failpoints.is_empty() {
info!(
"started with failpoints: {}",
failpoints
.iter()
.map(|(name, actions)| format!("{name}={actions}"))
.collect::<Vec<String>>()
.join(";")
)
let failpoints = neon_failpoint::list();
for (name, actions) in failpoints {
info!("starting with failpoint: {name} {actions}");
}
// Create and lock PID file. This ensures that there cannot be more than one
@@ -763,6 +756,41 @@ fn start_pageserver(
(http_task, https_task)
};
/* BEGIN_HADRON */
let metrics_collection_task = {
let cancel = shutdown_pageserver.child_token();
let task = crate::BACKGROUND_RUNTIME.spawn({
let cancel = cancel.clone();
let background_jobs_barrier = background_jobs_barrier.clone();
async move {
if conf.force_metric_collection_on_scrape {
return;
}
// first wait until background jobs are cleared to launch.
tokio::select! {
_ = cancel.cancelled() => { return; },
_ = background_jobs_barrier.wait() => {}
};
let mut interval = tokio::time::interval(METRICS_COLLECTION_INTERVAL);
loop {
tokio::select! {
_ = cancel.cancelled() => {
tracing::info!("cancelled metrics collection task, exiting...");
break;
},
_ = interval.tick() => {}
}
tokio::task::spawn_blocking(|| {
METRICS_COLLECTOR.run_once(true);
});
}
}
});
MetricsCollectionTask(CancellableTask { task, cancel })
};
/* END_HADRON */
let consumption_metrics_tasks = {
let cancel = shutdown_pageserver.child_token();
let task = crate::BACKGROUND_RUNTIME.spawn({
@@ -844,6 +872,7 @@ fn start_pageserver(
https_endpoint_listener,
page_service,
page_service_grpc,
metrics_collection_task,
consumption_metrics_tasks,
disk_usage_eviction_task,
&tenant_manager,

View File

@@ -252,6 +252,14 @@ pub struct PageServerConf {
pub timeline_import_config: pageserver_api::config::TimelineImportConfig,
pub basebackup_cache_config: Option<pageserver_api::config::BasebackupCacheConfig>,
/// Defines what is a big tenant for the purpose of image layer generation.
/// See Timeline::should_check_if_image_layers_required
pub image_layer_generation_large_timeline_threshold: Option<u64>,
/// Controls whether to collect all metrics on each scrape or to return potentially stale
/// results.
pub force_metric_collection_on_scrape: bool,
}
/// Token for authentication to safekeepers
@@ -432,6 +440,8 @@ impl PageServerConf {
posthog_config,
timeline_import_config,
basebackup_cache_config,
image_layer_generation_large_timeline_threshold,
force_metric_collection_on_scrape,
} = config_toml;
let mut conf = PageServerConf {
@@ -490,6 +500,8 @@ impl PageServerConf {
dev_mode,
timeline_import_config,
basebackup_cache_config,
image_layer_generation_large_timeline_threshold,
force_metric_collection_on_scrape,
// ------------------------------------------------------------
// fields that require additional validation or custom handling

View File

@@ -6,6 +6,8 @@ use camino::{Utf8Path, Utf8PathBuf};
use super::{NewMetricsRoot, NewRawMetric, RawMetric};
use crate::consumption_metrics::NewMetricsRefRoot;
use neon_failpoint as fail;
pub(super) fn read_metrics_from_serde_value(
json_value: serde_json::Value,
) -> anyhow::Result<Vec<NewRawMetric>> {
@@ -129,7 +131,7 @@ pub(super) async fn flush_metrics_to_disk(
tempfile.flush()?;
tempfile.as_file().sync_all()?;
fail::fail_point!("before-persist-last-metrics-collected");
fail::fail_point_sync!("before-persist-last-metrics-collected");
drop(tempfile.persist(&*path).map_err(|e| e.error)?);

View File

@@ -8,6 +8,7 @@
use std::time::Duration;
use neon_failpoint as fail;
use remote_storage::{GenericRemoteStorage, RemotePath, TimeoutOrCancel};
use tokio_util::sync::CancellationToken;
use tracing::{info, warn};

View File

@@ -2,7 +2,9 @@
//! Management HTTP API
//!
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};
use std::collections::BTreeMap;
use std::collections::BinaryHeap;
use std::collections::HashMap;
use std::str::FromStr;
use std::sync::Arc;
use std::time::Duration;
@@ -26,6 +28,7 @@ use http_utils::{RequestExt, RouterBuilder};
use humantime::format_rfc3339;
use hyper::{Body, Request, Response, StatusCode, Uri, header};
use metrics::launch_timestamp::LaunchTimestamp;
use neon_failpoint as fail;
use pageserver_api::models::virtual_file::IoMode;
use pageserver_api::models::{
DetachBehavior, DownloadRemoteLayersTaskSpawnRequest, IngestAuxFilesRequest,
@@ -79,8 +82,8 @@ use crate::tenant::storage_layer::{IoConcurrency, LayerAccessStatsReset, LayerNa
use crate::tenant::timeline::layer_manager::LayerManagerLockHolder;
use crate::tenant::timeline::offload::{OffloadError, offload_timeline};
use crate::tenant::timeline::{
CompactFlags, CompactOptions, CompactRequest, CompactionError, MarkInvisibleRequest, Timeline,
WaitLsnTimeout, WaitLsnWaiter, import_pgdata,
CompactFlags, CompactOptions, CompactRequest, MarkInvisibleRequest, Timeline, WaitLsnTimeout,
WaitLsnWaiter, import_pgdata,
};
use crate::tenant::{
GetTimelineError, LogicalSizeCalculationCause, OffloadedTimeline, PageReconstructError,
@@ -2500,9 +2503,10 @@ async fn timeline_checkpoint_handler(
.compact(&cancel, flags, &ctx)
.await
.map_err(|e|
match e {
CompactionError::ShuttingDown => ApiError::ShuttingDown,
CompactionError::Other(e) => ApiError::InternalServerError(e),
if e.is_cancel() {
ApiError::ShuttingDown
} else {
ApiError::InternalServerError(e.into_anyhow())
}
)?;
}
@@ -3213,6 +3217,30 @@ async fn get_utilization(
.map_err(ApiError::InternalServerError)
}
/// HADRON
async fn list_tenant_visible_size_handler(
request: Request<Body>,
_cancel: CancellationToken,
) -> Result<Response<Body>, ApiError> {
check_permission(&request, None)?;
let state = get_state(&request);
let mut map = BTreeMap::new();
for (tenant_shard_id, slot) in state.tenant_manager.list() {
match slot {
TenantSlot::Attached(tenant) => {
let visible_size = tenant.get_visible_size();
map.insert(tenant_shard_id, visible_size);
}
TenantSlot::Secondary(_) | TenantSlot::InProgress(_) => {
continue;
}
}
}
json_response(StatusCode::OK, map)
}
async fn list_aux_files(
mut request: Request<Body>,
_cancel: CancellationToken,
@@ -3937,13 +3965,18 @@ pub fn make_router(
.expect("construct launch timestamp header middleware"),
);
let force_metric_collection_on_scrape = state.conf.force_metric_collection_on_scrape;
let prometheus_metrics_handler_wrapper =
move |req| prometheus_metrics_handler(req, force_metric_collection_on_scrape);
Ok(router
.data(state)
.get("/metrics", |r| request_span(r, prometheus_metrics_handler))
.get("/metrics", move |r| request_span(r, prometheus_metrics_handler_wrapper))
.get("/profile/cpu", |r| request_span(r, profile_cpu_handler))
.get("/profile/heap", |r| request_span(r, profile_heap_handler))
.get("/v1/status", |r| api_handler(r, status_handler))
.put("/v1/failpoints", |r| {
.post("/v1/failpoints", |r| {
testing_api_handler("manage failpoints", r, failpoints_handler)
})
.post("/v1/reload_auth_validation_keys", |r| {
@@ -4145,6 +4178,7 @@ pub fn make_router(
.put("/v1/io_engine", |r| api_handler(r, put_io_engine_handler))
.put("/v1/io_mode", |r| api_handler(r, put_io_mode_handler))
.get("/v1/utilization", |r| api_handler(r, get_utilization))
.get("/v1/list_tenant_visible_size", |r| api_handler(r, list_tenant_visible_size_handler))
.post(
"/v1/tenant/:tenant_shard_id/timeline/:timeline_id/ingest_aux_files",
|r| testing_api_handler("ingest_aux_files", r, ingest_aux_files),

View File

@@ -73,6 +73,9 @@ pub struct HttpEndpointListener(pub CancellableTask);
pub struct HttpsEndpointListener(pub CancellableTask);
pub struct ConsumptionMetricsTasks(pub CancellableTask);
pub struct DiskUsageEvictionTask(pub CancellableTask);
// HADRON
pub struct MetricsCollectionTask(pub CancellableTask);
impl CancellableTask {
pub async fn shutdown(self) {
self.cancel.cancel();
@@ -87,6 +90,7 @@ pub async fn shutdown_pageserver(
https_listener: Option<HttpsEndpointListener>,
page_service: page_service::Listener,
grpc_task: Option<CancellableTask>,
metrics_collection_task: MetricsCollectionTask,
consumption_metrics_worker: ConsumptionMetricsTasks,
disk_usage_eviction_task: Option<DiskUsageEvictionTask>,
tenant_manager: &TenantManager,
@@ -211,6 +215,14 @@ pub async fn shutdown_pageserver(
// Best effort to persist any outstanding deletions, to avoid leaking objects
deletion_queue.shutdown(Duration::from_secs(5)).await;
// HADRON
timed(
metrics_collection_task.0.shutdown(),
"shutdown metrics collections metrics",
Duration::from_secs(1),
)
.await;
timed(
consumption_metrics_worker.0.shutdown(),
"shutdown consumption metrics",

View File

@@ -2847,6 +2847,24 @@ pub(crate) static MISROUTED_PAGESTREAM_REQUESTS: Lazy<IntCounter> = Lazy::new(||
.expect("failed to define a metric")
});
// Global counter for PageStream request results by outcome. Outcomes are divided into 3 categories:
// - success
// - internal_error: errors that indicate bugs in the storage cluster (e.g. page reconstruction errors, misrouted requests, LSN timeout errors)
// - other_error: transient error conditions that are expected in normal operation or indicate bugs with other parts of the system (e.g. error due to pageserver shutdown, malformed requests etc.)
pub(crate) static PAGESTREAM_HANDLER_RESULTS_TOTAL: Lazy<IntCounterVec> = Lazy::new(|| {
register_int_counter_vec!(
"pageserver_pagestream_handler_results_total",
"Number of pageserver pagestream handler results by outcome (success, internal_error, other_error)",
&["outcome"]
)
.expect("failed to define a metric")
});
// Constants for pageserver_pagestream_handler_results_total's outcome labels
pub(crate) const PAGESTREAM_HANDLER_OUTCOME_SUCCESS: &str = "success";
pub(crate) const PAGESTREAM_HANDLER_OUTCOME_INTERNAL_ERROR: &str = "internal_error";
pub(crate) const PAGESTREAM_HANDLER_OUTCOME_OTHER_ERROR: &str = "other_error";
// Metrics collected on WAL redo operations
//
// We collect the time spent in actual WAL redo ('redo'), and time waiting

View File

@@ -19,6 +19,7 @@ use futures::future::BoxFuture;
use futures::{FutureExt, Stream};
use itertools::Itertools;
use jsonwebtoken::TokenData;
use neon_failpoint as fail;
use once_cell::sync::OnceCell;
use pageserver_api::config::{
GetVectoredConcurrentIo, PageServicePipeliningConfig, PageServicePipeliningConfigPipelined,
@@ -70,7 +71,7 @@ use crate::context::{
};
use crate::metrics::{
self, COMPUTE_COMMANDS_COUNTERS, ComputeCommandKind, GetPageBatchBreakReason, LIVE_CONNECTIONS,
MISROUTED_PAGESTREAM_REQUESTS, SmgrOpTimer, TimelineMetrics,
MISROUTED_PAGESTREAM_REQUESTS, PAGESTREAM_HANDLER_RESULTS_TOTAL, SmgrOpTimer, TimelineMetrics,
};
use crate::pgdatadir_mapping::{LsnRange, Version};
use crate::span::{
@@ -335,18 +336,21 @@ async fn page_service_conn_main(
let default_timeout_ms = 10 * 60 * 1000; // 10 minutes by default
let socket_timeout_ms = (|| {
fail::fail_point!("simulated-bad-compute-connection", |avg_timeout_ms| {
// Exponential distribution for simulating
// poor network conditions, expect about avg_timeout_ms to be around 15
// in tests
if let Some(avg_timeout_ms) = avg_timeout_ms {
let avg = avg_timeout_ms.parse::<i64>().unwrap() as f32;
let u = rand::random::<f32>();
((1.0 - u).ln() / (-avg)) as u64
} else {
default_timeout_ms
fail::fail_point_sync!(
"simulated-bad-compute-connection",
|avg_timeout_ms: Option<String>| {
// Exponential distribution for simulating
// poor network conditions, expect about avg_timeout_ms to be around 15
// in tests
if let Some(avg_timeout_ms) = avg_timeout_ms {
let avg = avg_timeout_ms.parse::<i64>().unwrap() as f32;
let u = rand::random::<f32>();
((1.0 - u).ln() / (-avg)) as u64
} else {
default_timeout_ms
}
}
});
);
default_timeout_ms
})();
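For reference, the failpoint body above is inverse-transform sampling: for U ~ Uniform(0, 1), X = -ln(1 - U) / rate is exponentially distributed with the given rate (mean 1 / rate). A standalone sketch of the same computation, with the configured value acting as the rate:

```rust
/// Samples an exponentially distributed delay (in ms) via inverse-transform
/// sampling: for U ~ Uniform(0, 1), -ln(1 - U) / rate is Exponential(rate),
/// with mean 1 / rate. `rand::random::<f32>()` yields U in [0, 1), so the
/// logarithm's argument stays in (0, 1] and the result is non-negative.
fn sample_exponential_ms(rate: f32) -> u64 {
    let u = rand::random::<f32>();
    (-(1.0 - u).ln() / rate) as u64
}
```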
@@ -1441,20 +1445,57 @@ impl PageServerHandler {
let (response_msg, ctx) = match handler_result {
Err(e) => match &e.err {
PageStreamError::Shutdown => {
// BEGIN HADRON
PAGESTREAM_HANDLER_RESULTS_TOTAL
.with_label_values(&[metrics::PAGESTREAM_HANDLER_OUTCOME_OTHER_ERROR])
.inc();
// END HADRON
// If we fail to fulfil a request during shutdown, which may be _because_ of
// shutdown, then do not send the error to the client. Instead just drop the
// connection.
span.in_scope(|| info!("dropping connection due to shutdown"));
return Err(QueryError::Shutdown);
}
PageStreamError::Reconnect(reason) => {
span.in_scope(|| info!("handler requested reconnect: {reason}"));
PageStreamError::Reconnect(_reason) => {
span.in_scope(|| {
// BEGIN HADRON
// We can get here because the compute node is pointing at the wrong pageserver.
// We already have a metric tracking this, so we suppress the log to reduce
// spam; given the volume it can generate, the message adds little.
// info!("handler requested reconnect: {reason}")
// END HADRON
});
// BEGIN HADRON
PAGESTREAM_HANDLER_RESULTS_TOTAL
.with_label_values(&[
metrics::PAGESTREAM_HANDLER_OUTCOME_INTERNAL_ERROR,
])
.inc();
// END HADRON
return Err(QueryError::Reconnect);
}
PageStreamError::Read(_)
| PageStreamError::LsnTimeout(_)
| PageStreamError::NotFound(_)
| PageStreamError::BadRequest(_) => {
// BEGIN HADRON
if let PageStreamError::Read(_) | PageStreamError::LsnTimeout(_) = &e.err {
PAGESTREAM_HANDLER_RESULTS_TOTAL
.with_label_values(&[
metrics::PAGESTREAM_HANDLER_OUTCOME_INTERNAL_ERROR,
])
.inc();
} else {
PAGESTREAM_HANDLER_RESULTS_TOTAL
.with_label_values(&[
metrics::PAGESTREAM_HANDLER_OUTCOME_OTHER_ERROR,
])
.inc();
}
// END HADRON
// print the all details to the log with {:#}, but for the client the
// error message is enough. Do not log if shutting down, as the anyhow::Error
// here includes cancellation which is not an error.
@@ -1472,7 +1513,15 @@ impl PageServerHandler {
)
}
},
Ok((response_msg, _op_timer_already_observed, ctx)) => (response_msg, Some(ctx)),
Ok((response_msg, _op_timer_already_observed, ctx)) => {
// BEGIN HADRON
PAGESTREAM_HANDLER_RESULTS_TOTAL
.with_label_values(&[metrics::PAGESTREAM_HANDLER_OUTCOME_SUCCESS])
.inc();
// END HADRON
(response_msg, Some(ctx))
}
};
let ctx = ctx.map(|req_ctx| {
@@ -2998,7 +3047,7 @@ where
_pgb: &mut PostgresBackend<IO>,
sm: &FeStartupPacket,
) -> Result<(), QueryError> {
fail::fail_point!("ps::connection-start::startup-packet");
fail::fail_point_sync!("ps::connection-start::startup-packet");
if let FeStartupPacket::StartupMessage { params, .. } = sm {
if let Some(app_name) = params.get("application_name") {
@@ -3293,9 +3342,12 @@ impl GrpcPageServiceHandler {
}
/// Generates a PagestreamRequest header from a ReadLsn and request ID.
fn make_hdr(read_lsn: page_api::ReadLsn, req_id: u64) -> PagestreamRequest {
fn make_hdr(
read_lsn: page_api::ReadLsn,
req_id: Option<page_api::RequestID>,
) -> PagestreamRequest {
PagestreamRequest {
reqid: req_id,
reqid: req_id.map(|r| r.id).unwrap_or_default(),
request_lsn: read_lsn.request_lsn,
not_modified_since: read_lsn
.not_modified_since_lsn
@@ -3405,7 +3457,7 @@ impl GrpcPageServiceHandler {
batch.push(BatchedGetPageRequest {
req: PagestreamGetPageRequest {
hdr: Self::make_hdr(req.read_lsn, req.request_id),
hdr: Self::make_hdr(req.read_lsn, Some(req.request_id)),
rel: req.rel,
blkno,
},
@@ -3435,12 +3487,16 @@ impl GrpcPageServiceHandler {
request_id: req.request_id,
status_code: page_api::GetPageStatusCode::Ok,
reason: None,
page_images: Vec::with_capacity(results.len()),
rel: req.rel,
pages: Vec::with_capacity(results.len()),
};
for result in results {
match result {
Ok((PagestreamBeMessage::GetPage(r), _, _)) => resp.page_images.push(r.page),
Ok((PagestreamBeMessage::GetPage(r), _, _)) => resp.pages.push(page_api::Page {
block_number: r.req.blkno,
image: r.page,
}),
Ok((resp, _, _)) => {
return Err(tonic::Status::internal(format!(
"unexpected response: {resp:?}"
@@ -3483,7 +3539,7 @@ impl proto::PageService for GrpcPageServiceHandler {
span_record!(rel=%req.rel, lsn=%req.read_lsn);
let req = PagestreamExistsRequest {
hdr: Self::make_hdr(req.read_lsn, 0),
hdr: Self::make_hdr(req.read_lsn, None),
rel: req.rel,
};
@@ -3633,7 +3689,7 @@ impl proto::PageService for GrpcPageServiceHandler {
span_record!(db_oid=%req.db_oid, lsn=%req.read_lsn);
let req = PagestreamDbSizeRequest {
hdr: Self::make_hdr(req.read_lsn, 0),
hdr: Self::make_hdr(req.read_lsn, None),
dbnode: req.db_oid,
};
@@ -3683,7 +3739,7 @@ impl proto::PageService for GrpcPageServiceHandler {
.await?
.downgrade();
while let Some(req) = reqs.message().await? {
let req_id = req.request_id;
let req_id = req.request_id.map(page_api::RequestID::from).unwrap_or_default();
let result = Self::get_page(&ctx, &timeline, req, io_concurrency.clone())
.instrument(span.clone()) // propagate request span
.await;
@@ -3722,7 +3778,7 @@ impl proto::PageService for GrpcPageServiceHandler {
span_record!(rel=%req.rel, lsn=%req.read_lsn);
let req = PagestreamNblocksRequest {
hdr: Self::make_hdr(req.read_lsn, 0),
hdr: Self::make_hdr(req.read_lsn, None),
rel: req.rel,
};
@@ -3755,7 +3811,7 @@ impl proto::PageService for GrpcPageServiceHandler {
span_record!(kind=%req.kind, segno=%req.segno, lsn=%req.read_lsn);
let req = PagestreamGetSlruSegmentRequest {
hdr: Self::make_hdr(req.read_lsn, 0),
hdr: Self::make_hdr(req.read_lsn, None),
kind: req.kind as u8,
segno: req.segno,
};

View File

@@ -14,6 +14,7 @@ use crate::{PERF_TRACE_TARGET, ensure_walingest};
use anyhow::Context;
use bytes::{Buf, Bytes, BytesMut};
use enum_map::Enum;
use neon_failpoint as fail;
use pageserver_api::key::{
AUX_FILES_KEY, CHECKPOINT_KEY, CONTROLFILE_KEY, CompactKey, DBDIR_KEY, Key, RelDirExists,
TWOPHASEDIR_KEY, dbdir_key_range, rel_block_to_key, rel_dir_to_key, rel_key_range,

View File

@@ -30,6 +30,7 @@ use enumset::EnumSet;
use futures::StreamExt;
use futures::stream::FuturesUnordered;
use itertools::Itertools as _;
use neon_failpoint as fail;
use once_cell::sync::Lazy;
pub use pageserver_api::models::TenantState;
use pageserver_api::models::{self, RelSizeMigration};
@@ -3291,7 +3292,7 @@ impl TenantShard {
// Ignore this, we likely raced with unarchival.
OffloadError::NotArchived => Ok(()),
OffloadError::AlreadyInProgress => Ok(()),
OffloadError::Cancelled => Err(CompactionError::ShuttingDown),
OffloadError::Cancelled => Err(CompactionError::new_cancelled()),
// don't break the anyhow chain
OffloadError::Other(err) => Err(CompactionError::Other(err)),
})?;
@@ -3321,16 +3322,13 @@ impl TenantShard {
/// Trips the compaction circuit breaker if appropriate.
pub(crate) fn maybe_trip_compaction_breaker(&self, err: &CompactionError) {
match err {
err if err.is_cancel() => {}
CompactionError::ShuttingDown => (),
CompactionError::Other(err) => {
self.compaction_circuit_breaker
.lock()
.unwrap()
.fail(&CIRCUIT_BREAKERS_BROKEN, err);
}
if err.is_cancel() {
return;
}
self.compaction_circuit_breaker
.lock()
.unwrap()
.fail(&CIRCUIT_BREAKERS_BROKEN, err);
}
/// Cancel scheduled compaction tasks
@@ -4174,6 +4172,15 @@ impl TenantShard {
.unwrap_or(self.conf.default_tenant_conf.image_creation_threshold)
}
// HADRON
pub fn get_image_creation_timeout(&self) -> Option<Duration> {
let tenant_conf = self.tenant_conf.load().tenant_conf.clone();
tenant_conf.image_layer_force_creation_period.or(self
.conf
.default_tenant_conf
.image_layer_force_creation_period)
}
pub fn get_pitr_interval(&self) -> Duration {
let tenant_conf = self.tenant_conf.load().tenant_conf.clone();
tenant_conf
@@ -5713,6 +5720,16 @@ impl TenantShard {
.unwrap_or(0)
}
/// HADRON
/// Return the visible size of all timelines in this tenant.
pub(crate) fn get_visible_size(&self) -> u64 {
let timelines = self.timelines.lock().unwrap();
timelines
.values()
.map(|t| t.metrics.visible_physical_size_gauge.get())
.sum()
}
/// Builds a new tenant manifest, and uploads it if it differs from the last-known tenant
/// manifest in `Self::remote_tenant_manifest`.
///
@@ -9555,7 +9572,7 @@ mod tests {
writer.finish_write(Lsn(0x30));
drop(writer);
fail::cfg(
neon_failpoint::configure_failpoint(
"flush-layer-before-update-remote-consistent-lsn",
"return()",
)

View File

@@ -12,6 +12,7 @@ use anyhow::Context;
use camino::{Utf8DirEntry, Utf8Path, Utf8PathBuf};
use futures::StreamExt;
use itertools::Itertools;
use neon_failpoint as fail;
use pageserver_api::key::Key;
use pageserver_api::models::{DetachBehavior, LocationConfigMode};
use pageserver_api::shard::{

View File

@@ -194,6 +194,7 @@ pub(crate) use download::{
};
use index::GcCompactionState;
pub(crate) use index::LayerFileMetadata;
use neon_failpoint as fail;
use pageserver_api::models::{RelSizeMigration, TimelineArchivalState, TimelineVisibilityState};
use pageserver_api::shard::{ShardIndex, TenantShardId};
use regex::Regex;

View File

@@ -11,6 +11,7 @@ use std::time::SystemTime;
use anyhow::{Context, anyhow};
use camino::{Utf8Path, Utf8PathBuf};
use neon_failpoint as fail;
use pageserver_api::shard::TenantShardId;
use remote_storage::{
DownloadError, DownloadKind, DownloadOpts, GenericRemoteStorage, ListingMode, RemotePath,

View File

@@ -8,6 +8,7 @@ use anyhow::{Context, bail};
use bytes::Bytes;
use camino::Utf8Path;
use fail::fail_point;
use neon_failpoint as fail;
use pageserver_api::shard::TenantShardId;
use remote_storage::{GenericRemoteStorage, RemotePath, TimeTravelError};
use tokio::fs::{self, File};

View File

@@ -225,7 +225,7 @@ impl fmt::Display for ImageLayerName {
/// storage and object names in remote storage consist of the LayerName plus some extra qualifiers
/// that uniquely identify the physical incarnation of a layer (see [crate::tenant::remote_timeline_client::remote_layer_path])
/// and [`crate::tenant::storage_layer::layer::local_layer_path`])
#[derive(Debug, PartialEq, Eq, Hash, Clone)]
#[derive(Debug, PartialEq, Eq, Hash, Clone, Ord, PartialOrd)]
pub enum LayerName {
Image(ImageLayerName),
Delta(DeltaLayerName),

View File

@@ -17,17 +17,14 @@ use tracing::*;
use utils::backoff::exponential_backoff_duration;
use utils::completion::Barrier;
use utils::pausable_failpoint;
use utils::sync::gate::GateError;
use crate::context::{DownloadBehavior, RequestContext};
use crate::metrics::{self, BackgroundLoopSemaphoreMetricsRecorder, TENANT_TASK_EVENTS};
use crate::task_mgr::{self, BACKGROUND_RUNTIME, TOKIO_WORKER_THREADS, TaskKind};
use crate::tenant::blob_io::WriteBlobError;
use crate::tenant::throttle::Stats;
use crate::tenant::timeline::CompactionError;
use crate::tenant::timeline::compaction::CompactionOutcome;
use crate::tenant::{TenantShard, TenantState};
use crate::virtual_file::owned_buffers_io::write::FlushTaskError;
/// Semaphore limiting concurrent background tasks (across all tenants).
///
@@ -310,45 +307,12 @@ pub(crate) fn log_compaction_error(
task_cancelled: bool,
degrade_to_warning: bool,
) {
use CompactionError::*;
let is_cancel = err.is_cancel();
use crate::tenant::PageReconstructError;
use crate::tenant::upload_queue::NotInitialized;
let level = match err {
e if e.is_cancel() => return,
ShuttingDown => return,
_ if task_cancelled => Level::INFO,
Other(err) => {
let root_cause = err.root_cause();
let upload_queue = root_cause
.downcast_ref::<NotInitialized>()
.is_some_and(|e| e.is_stopping());
let timeline = root_cause
.downcast_ref::<PageReconstructError>()
.is_some_and(|e| e.is_cancel());
let buffered_writer_flush_task_canelled = root_cause
.downcast_ref::<FlushTaskError>()
.is_some_and(|e| e.is_cancel());
let write_blob_cancelled = root_cause
.downcast_ref::<WriteBlobError>()
.is_some_and(|e| e.is_cancel());
let gate_closed = root_cause
.downcast_ref::<GateError>()
.is_some_and(|e| e.is_cancel());
let is_stopping = upload_queue
|| timeline
|| buffered_writer_flush_task_canelled
|| write_blob_cancelled
|| gate_closed;
if is_stopping {
Level::INFO
} else {
Level::ERROR
}
}
let level = if is_cancel || task_cancelled {
Level::INFO
} else {
Level::ERROR
};
if let Some((error_count, sleep_duration)) = retry_info {

View File

@@ -39,6 +39,7 @@ use layer_manager::{
LayerManagerLockHolder, LayerManagerReadGuard, LayerManagerWriteGuard, LockedLayerManager,
Shutdown,
};
use neon_failpoint as fail;
use once_cell::sync::Lazy;
use pageserver_api::config::tenant_conf_defaults::DEFAULT_PITR_INTERVAL;
@@ -351,6 +352,13 @@ pub struct Timeline {
last_image_layer_creation_check_at: AtomicLsn,
last_image_layer_creation_check_instant: std::sync::Mutex<Option<Instant>>,
// HADRON
/// If a key range has writes with LSN > force_image_creation_lsn, then we should force image layer creation
/// on this key range.
force_image_creation_lsn: AtomicLsn,
/// The last time instant when force_image_creation_lsn is computed.
force_image_creation_lsn_computed_at: std::sync::Mutex<Option<Instant>>,
/// Current logical size of the "datadir", at the last LSN.
current_logical_size: LogicalSize,
@@ -1002,7 +1010,7 @@ impl From<WaitLsnError> for tonic::Status {
impl From<CreateImageLayersError> for CompactionError {
fn from(e: CreateImageLayersError) -> Self {
match e {
CreateImageLayersError::Cancelled => CompactionError::ShuttingDown,
CreateImageLayersError::Cancelled => CompactionError::new_cancelled(),
CreateImageLayersError::Other(e) => {
CompactionError::Other(e.context("create image layers"))
}
@@ -2117,12 +2125,7 @@ impl Timeline {
match &result {
Ok(_) => self.compaction_failed.store(false, AtomicOrdering::Relaxed),
Err(e) if e.is_cancel() => {}
Err(CompactionError::ShuttingDown) => {
// Covered by the `Err(e) if e.is_cancel()` branch.
}
Err(CompactionError::Other(_)) => {
self.compaction_failed.store(true, AtomicOrdering::Relaxed)
}
Err(_) => self.compaction_failed.store(true, AtomicOrdering::Relaxed),
};
result
@@ -2851,6 +2854,18 @@ impl Timeline {
.unwrap_or(self.conf.default_tenant_conf.image_creation_threshold)
}
// HADRON
fn get_image_creation_timeout(&self) -> Option<Duration> {
let tenant_conf = self.tenant_conf.load();
tenant_conf
.tenant_conf
.image_layer_force_creation_period
.or(self
.conf
.default_tenant_conf
.image_layer_force_creation_period)
}
fn get_compaction_algorithm_settings(&self) -> CompactionAlgorithmSettings {
let tenant_conf = &self.tenant_conf.load();
tenant_conf
@@ -3120,7 +3135,9 @@ impl Timeline {
repartition_threshold: 0,
last_image_layer_creation_check_at: AtomicLsn::new(0),
last_image_layer_creation_check_instant: Mutex::new(None),
// HADRON
force_image_creation_lsn: AtomicLsn::new(0),
force_image_creation_lsn_computed_at: std::sync::Mutex::new(None),
last_received_wal: Mutex::new(None),
rel_size_latest_cache: RwLock::new(HashMap::new()),
rel_size_snapshot_cache: Mutex::new(LruCache::new(relsize_snapshot_cache_capacity)),
@@ -5041,6 +5058,7 @@ impl Timeline {
.create_image_layers(
&partitions,
self.initdb_lsn,
None,
ImageLayerCreationMode::Initial,
ctx,
LastImageLayerCreationStatus::Initial,
@@ -5166,7 +5184,9 @@ impl Timeline {
*self.applied_gc_cutoff_lsn.read(),
);
fail_point!("checkpoint-before-saving-metadata", |x| bail!(
neon_failpoint::fail_point_sync!("checkpoint-before-saving-metadata", |x: Option<
String,
>| bail!(
"{}",
x.unwrap()
));
@@ -5312,14 +5332,19 @@ impl Timeline {
}
// Is it time to create a new image layer for the given partition? True if we want to generate.
async fn time_for_new_image_layer(&self, partition: &KeySpace, lsn: Lsn) -> bool {
async fn time_for_new_image_layer(
&self,
partition: &KeySpace,
lsn: Lsn,
force_image_creation_lsn: Option<Lsn>,
) -> bool {
let threshold = self.get_image_creation_threshold();
let guard = self.layers.read(LayerManagerLockHolder::Compaction).await;
let Ok(layers) = guard.layer_map() else {
return false;
};
let mut min_image_lsn: Lsn = Lsn::MAX;
let mut max_deltas = 0;
for part_range in &partition.ranges {
let image_coverage = layers.image_coverage(part_range, lsn);
@@ -5354,9 +5379,22 @@ impl Timeline {
return true;
}
}
min_image_lsn = min(min_image_lsn, img_lsn);
}
}
// HADRON
if min_image_lsn < force_image_creation_lsn.unwrap_or(Lsn(0)) && max_deltas > 0 {
info!(
"forcing image creation for partitioned range {}-{}. Min image LSN: {}, force image creation LSN: {}",
partition.ranges[0].start,
partition.ranges[0].end,
min_image_lsn,
force_image_creation_lsn.unwrap()
);
return true;
}
debug!(
max_deltas,
"none of the partitioned ranges had >= {threshold} deltas"
@@ -5582,7 +5620,7 @@ impl Timeline {
/// suffer from the lack of image layers
/// 2. For small tenants (that can mostly fit in RAM), we use a much longer interval
fn should_check_if_image_layers_required(self: &Arc<Timeline>, lsn: Lsn) -> bool {
const LARGE_TENANT_THRESHOLD: u64 = 2 * 1024 * 1024 * 1024;
let large_timeline_threshold = self.conf.image_layer_generation_large_timeline_threshold;
let last_checks_at = self.last_image_layer_creation_check_at.load();
let distance = lsn
@@ -5596,12 +5634,12 @@ impl Timeline {
let mut time_based_decision = false;
let mut last_check_instant = self.last_image_layer_creation_check_instant.lock().unwrap();
if let CurrentLogicalSize::Exact(logical_size) = self.current_logical_size.current_size() {
let check_required_after = if Into::<u64>::into(&logical_size) >= LARGE_TENANT_THRESHOLD
{
self.get_checkpoint_timeout()
} else {
Duration::from_secs(3600 * 48)
};
let check_required_after =
if Some(Into::<u64>::into(&logical_size)) >= large_timeline_threshold {
self.get_checkpoint_timeout()
} else {
Duration::from_secs(3600 * 48)
};
time_based_decision = match *last_check_instant {
Some(last_check) => {
@@ -5629,10 +5667,12 @@ impl Timeline {
/// true = we have generated all image layers, false = we preempt the process for L0 compaction.
///
/// `partition_mode` is only for logging purposes and is not used anywhere in this function.
#[allow(clippy::too_many_arguments)]
async fn create_image_layers(
self: &Arc<Timeline>,
partitioning: &KeyPartitioning,
lsn: Lsn,
force_image_creation_lsn: Option<Lsn>,
mode: ImageLayerCreationMode,
ctx: &RequestContext,
last_status: LastImageLayerCreationStatus,
@@ -5736,7 +5776,11 @@ impl Timeline {
} else if let ImageLayerCreationMode::Try = mode {
// check_for_image_layers = false -> skip
// check_for_image_layers = true -> check time_for_new_image_layer -> skip/generate
if !check_for_image_layers || !self.time_for_new_image_layer(partition, lsn).await {
if !check_for_image_layers
|| !self
.time_for_new_image_layer(partition, lsn, force_image_creation_lsn)
.await
{
start = img_range.end;
continue;
}
@@ -6057,26 +6101,88 @@ impl Drop for Timeline {
}
}
/// Top-level failure to compact.
#[derive(Debug, thiserror::Error)]
pub(crate) enum CompactionError {
#[error("The timeline or pageserver is shutting down")]
ShuttingDown,
#[error(transparent)]
Other(anyhow::Error),
}
pub(crate) use compaction_error::CompactionError;
/// In a private mod to enforce that [`CompactionError::is_cancel`] is used
/// instead of `match`ing on [`CompactionError::ShuttingDown`].
mod compaction_error {
use utils::sync::gate::GateError;
impl CompactionError {
/// Errors that can be ignored, i.e., cancel and shutdown.
pub fn is_cancel(&self) -> bool {
matches!(self, Self::ShuttingDown)
use crate::{
pgdatadir_mapping::CollectKeySpaceError,
tenant::{PageReconstructError, blob_io::WriteBlobError, upload_queue::NotInitialized},
virtual_file::owned_buffers_io::write::FlushTaskError,
};
/// Top-level failure to compact. Use [`Self::is_cancel`].
#[derive(Debug, thiserror::Error)]
pub(crate) enum CompactionError {
/// Use [`Self::is_cancel`] instead of checking for this variant.
#[error("The timeline or pageserver is shutting down")]
#[allow(private_interfaces)]
ShuttingDown(ForbidMatching), // private ForbidMatching enforces use of [`Self::is_cancel`].
#[error(transparent)]
Other(anyhow::Error),
}
pub fn from_collect_keyspace(err: CollectKeySpaceError) -> Self {
if err.is_cancel() {
Self::ShuttingDown
} else {
Self::Other(err.into_anyhow())
#[derive(Debug)]
struct ForbidMatching;
impl CompactionError {
pub fn new_cancelled() -> Self {
Self::ShuttingDown(ForbidMatching)
}
/// Errors that can be ignored, i.e., cancel and shutdown.
pub fn is_cancel(&self) -> bool {
let other = match self {
CompactionError::ShuttingDown(_) => return true,
CompactionError::Other(other) => other,
};
// The write path of compaction in particular often lacks differentiated
// handling of errors stemming from cancellation vs. other errors.
// So, if requested, we also check the ::Other variant by downcasting.
// The list below has been found empirically from flaky tests and production logs.
// The process is simple: on ::Other(), compaction will print the enclosed
// anyhow::Error in debug mode, i.e., with backtrace. That backtrace contains the
// line where the write path / compaction code does undifferentiated error handling
// from a non-anyhow type to an anyhow type. Add the type to the list of downcasts
// below, following the same is_cancel() pattern.
let root_cause = other.root_cause();
let upload_queue = root_cause
.downcast_ref::<NotInitialized>()
.is_some_and(|e| e.is_stopping());
let timeline = root_cause
.downcast_ref::<PageReconstructError>()
.is_some_and(|e| e.is_cancel());
let buffered_writer_flush_task_cancelled = root_cause
.downcast_ref::<FlushTaskError>()
.is_some_and(|e| e.is_cancel());
let write_blob_cancelled = root_cause
.downcast_ref::<WriteBlobError>()
.is_some_and(|e| e.is_cancel());
let gate_closed = root_cause
.downcast_ref::<GateError>()
.is_some_and(|e| e.is_cancel());
upload_queue
|| timeline
|| buffered_writer_flush_task_cancelled
|| write_blob_cancelled
|| gate_closed
}
pub fn into_anyhow(self) -> anyhow::Error {
match self {
CompactionError::ShuttingDown(ForbidMatching) => anyhow::Error::new(self),
CompactionError::Other(e) => e,
}
}
pub fn from_collect_keyspace(err: CollectKeySpaceError) -> Self {
if err.is_cancel() {
Self::new_cancelled()
} else {
Self::Other(err.into_anyhow())
}
}
}
}
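
is_cancel() above relies on anyhow's root_cause() to recover typed cancellation errors that were erased into anyhow::Error along the compaction write path. A minimal standalone sketch of that downcast pattern, with CancelledError as a hypothetical stand-in for types like FlushTaskError or GateError:

use anyhow::anyhow;

#[derive(Debug, thiserror::Error)]
#[error("operation cancelled")]
struct CancelledError; // hypothetical stand-in for FlushTaskError, GateError, etc.

fn is_cancel(err: &anyhow::Error) -> bool {
    // root_cause() walks the source() chain to the innermost error, which is
    // where the typed cancellation error lives after repeated .context() calls.
    err.root_cause().downcast_ref::<CancelledError>().is_some()
}

fn main() {
    let err = anyhow!(CancelledError)
        .context("flush layer")
        .context("compact level 0");
    assert!(is_cancel(&err));
}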
@@ -6088,7 +6194,7 @@ impl From<super::upload_queue::NotInitialized> for CompactionError {
CompactionError::Other(anyhow::anyhow!(value))
}
super::upload_queue::NotInitialized::ShuttingDown
| super::upload_queue::NotInitialized::Stopped => CompactionError::ShuttingDown,
| super::upload_queue::NotInitialized::Stopped => CompactionError::new_cancelled(),
}
}
}
@@ -6098,7 +6204,7 @@ impl From<super::storage_layer::layer::DownloadError> for CompactionError {
match e {
super::storage_layer::layer::DownloadError::TimelineShutdown
| super::storage_layer::layer::DownloadError::DownloadCancelled => {
CompactionError::ShuttingDown
CompactionError::new_cancelled()
}
super::storage_layer::layer::DownloadError::ContextAndConfigReallyDeniesDownloads
| super::storage_layer::layer::DownloadError::DownloadRequired
@@ -6117,14 +6223,14 @@ impl From<super::storage_layer::layer::DownloadError> for CompactionError {
impl From<layer_manager::Shutdown> for CompactionError {
fn from(_: layer_manager::Shutdown) -> Self {
CompactionError::ShuttingDown
CompactionError::new_cancelled()
}
}
impl From<super::storage_layer::errors::PutError> for CompactionError {
fn from(e: super::storage_layer::errors::PutError) -> Self {
if e.is_cancel() {
CompactionError::ShuttingDown
CompactionError::new_cancelled()
} else {
CompactionError::Other(e.into_anyhow())
}
@@ -6223,7 +6329,7 @@ impl Timeline {
let mut guard = tokio::select! {
guard = self.layers.write(LayerManagerLockHolder::Compaction) => guard,
_ = self.cancel.cancelled() => {
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
};

View File

@@ -4,10 +4,11 @@
//!
//! The old legacy algorithm is implemented directly in `timeline.rs`.
use std::cmp::min;
use std::collections::{BinaryHeap, HashMap, HashSet, VecDeque};
use std::ops::{Deref, Range};
use std::sync::Arc;
use std::time::{Duration, Instant};
use std::time::{Duration, Instant, SystemTime};
use super::layer_manager::LayerManagerLockHolder;
use super::{
@@ -25,6 +26,7 @@ use enumset::EnumSet;
use fail::fail_point;
use futures::FutureExt;
use itertools::Itertools;
use neon_failpoint as fail;
use once_cell::sync::Lazy;
use pageserver_api::config::tenant_conf_defaults::DEFAULT_CHECKPOINT_DISTANCE;
use pageserver_api::key::{KEY_SIZE, Key};
@@ -33,6 +35,7 @@ use pageserver_api::models::{CompactInfoResponse, CompactKeyRange};
use pageserver_api::shard::{ShardCount, ShardIdentity, TenantShardId};
use pageserver_compaction::helpers::{fully_contains, overlaps_with};
use pageserver_compaction::interface::*;
use postgres_ffi::to_pg_timestamp;
use serde::Serialize;
use tokio::sync::{OwnedSemaphorePermit, Semaphore};
use tokio_util::sync::CancellationToken;
@@ -45,6 +48,7 @@ use wal_decoder::models::value::Value;
use crate::context::{AccessStatsBehavior, RequestContext, RequestContextBuilder};
use crate::page_cache;
use crate::pgdatadir_mapping::LsnForTimestamp;
use crate::statvfs::Statvfs;
use crate::tenant::checks::check_valid_layermap;
use crate::tenant::gc_block::GcBlock;
@@ -572,8 +576,8 @@ impl GcCompactionQueue {
}
match res {
Ok(res) => Ok(res),
Err(CompactionError::ShuttingDown) => Err(CompactionError::ShuttingDown),
Err(CompactionError::Other(_)) => {
Err(e) if e.is_cancel() => Err(e),
Err(_) => {
// There are some cases where traditional gc might collect some layer
// files causing gc-compaction cannot read the full history of the key.
// This needs to be resolved in the long-term by improving the compaction
@@ -1260,13 +1264,19 @@ impl Timeline {
// Is the timeline being deleted?
if self.is_stopping() {
trace!("Dropping out of compaction on timeline shutdown");
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
let target_file_size = self.get_checkpoint_distance();
// Define partitioning schema if needed
// HADRON
let force_image_creation_lsn = self
.get_or_compute_force_image_creation_lsn(cancel, ctx)
.await
.map_err(CompactionError::Other)?;
// 1. L0 Compact
let l0_outcome = {
let timer = self.metrics.compact_time_histo.start_timer();
@@ -1274,6 +1284,7 @@ impl Timeline {
.compact_level0(
target_file_size,
options.flags.contains(CompactFlags::ForceL0Compaction),
force_image_creation_lsn,
ctx,
)
.await?;
@@ -1376,6 +1387,7 @@ impl Timeline {
.create_image_layers(
&partitioning,
lsn,
force_image_creation_lsn,
mode,
&image_ctx,
self.last_image_layer_creation_status
@@ -1472,6 +1484,63 @@ impl Timeline {
Ok(CompactionOutcome::Done)
}
/* BEGIN_HADRON */
// Get the force image creation LSN. Compute it if the last computed LSN is too old.
async fn get_or_compute_force_image_creation_lsn(
self: &Arc<Self>,
cancel: &CancellationToken,
ctx: &RequestContext,
) -> anyhow::Result<Option<Lsn>> {
const FORCE_IMAGE_CREATION_LSN_COMPUTE_INTERVAL: Duration = Duration::from_secs(10 * 60); // 10 minutes
let image_layer_force_creation_period = self.get_image_creation_timeout();
if image_layer_force_creation_period.is_none() {
return Ok(None);
}
let image_layer_force_creation_period = image_layer_force_creation_period.unwrap();
let force_image_creation_lsn_computed_at =
*self.force_image_creation_lsn_computed_at.lock().unwrap();
if force_image_creation_lsn_computed_at.is_none()
|| force_image_creation_lsn_computed_at.unwrap().elapsed()
> FORCE_IMAGE_CREATION_LSN_COMPUTE_INTERVAL
{
let now: SystemTime = SystemTime::now();
let timestamp = now
.checked_sub(image_layer_force_creation_period)
.ok_or_else(|| {
anyhow::anyhow!(
"image creation timeout is too large: {image_layer_force_creation_period:?}"
)
})?;
let timestamp = to_pg_timestamp(timestamp);
let force_image_creation_lsn = match self
.find_lsn_for_timestamp(timestamp, cancel, ctx)
.await?
{
LsnForTimestamp::Present(lsn) | LsnForTimestamp::Future(lsn) => lsn,
_ => {
let gc_lsn = *self.get_applied_gc_cutoff_lsn();
tracing::info!(
"no LSN found for timestamp {timestamp:?}, using latest GC cutoff LSN {}",
gc_lsn
);
gc_lsn
}
};
self.force_image_creation_lsn
.store(force_image_creation_lsn);
*self.force_image_creation_lsn_computed_at.lock().unwrap() = Some(Instant::now());
tracing::info!(
"computed force image creation LSN: {}",
force_image_creation_lsn
);
Ok(Some(force_image_creation_lsn))
} else {
Ok(Some(self.force_image_creation_lsn.load()))
}
}
/* END_HADRON */
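
get_or_compute_force_image_creation_lsn memoizes an expensive timestamp-to-LSN lookup, recomputing it at most once per FORCE_IMAGE_CREATION_LSN_COMPUTE_INTERVAL. The caching shape in isolation, as a sketch (the generic Cached type is illustrative, not part of the codebase):

use std::sync::Mutex;
use std::time::{Duration, Instant};

const RECOMPUTE_INTERVAL: Duration = Duration::from_secs(10 * 60);

/// Caches a value and recomputes it only after RECOMPUTE_INTERVAL has elapsed.
struct Cached<T: Copy> {
    inner: Mutex<Option<(T, Instant)>>,
}

impl<T: Copy> Cached<T> {
    fn get_or_compute(&self, compute: impl FnOnce() -> T) -> T {
        let mut guard = self.inner.lock().unwrap();
        match *guard {
            // Fresh enough: return the cached value.
            Some((value, computed_at)) if computed_at.elapsed() <= RECOMPUTE_INTERVAL => value,
            // Stale or missing: recompute and remember when we did.
            _ => {
                let value = compute();
                *guard = Some((value, Instant::now()));
                value
            }
        }
    }
}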
/// Check for layers that are elegible to be rewritten:
/// - Shard splitting: After a shard split, ancestor layers beyond pitr_interval, so that
/// we don't indefinitely retain keys in this shard that aren't needed.
@@ -1624,7 +1693,7 @@ impl Timeline {
for (i, layer) in layers_to_rewrite.into_iter().enumerate() {
if self.cancel.is_cancelled() {
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
info!(layer=%layer, "rewriting layer after shard split: {}/{}", i, total);
@@ -1722,7 +1791,7 @@ impl Timeline {
Ok(()) => {},
Err(WaitCompletionError::NotInitialized(ni)) => return Err(CompactionError::from(ni)),
Err(WaitCompletionError::UploadQueueShutDownOrStopped) => {
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
},
// Don't wait if there's L0 compaction to do. We don't need to update the outcome
@@ -1801,6 +1870,7 @@ impl Timeline {
self: &Arc<Self>,
target_file_size: u64,
force_compaction_ignore_threshold: bool,
force_compaction_lsn: Option<Lsn>,
ctx: &RequestContext,
) -> Result<CompactionOutcome, CompactionError> {
let CompactLevel0Phase1Result {
@@ -1821,6 +1891,7 @@ impl Timeline {
stats,
target_file_size,
force_compaction_ignore_threshold,
force_compaction_lsn,
&ctx,
)
.instrument(phase1_span)
@@ -1843,6 +1914,7 @@ impl Timeline {
mut stats: CompactLevel0Phase1StatsBuilder,
target_file_size: u64,
force_compaction_ignore_threshold: bool,
force_compaction_lsn: Option<Lsn>,
ctx: &RequestContext,
) -> Result<CompactLevel0Phase1Result, CompactionError> {
let begin = tokio::time::Instant::now();
@@ -1872,11 +1944,28 @@ impl Timeline {
return Ok(CompactLevel0Phase1Result::default());
}
} else {
debug!(
level0_deltas = level0_deltas.len(),
threshold, "too few deltas to compact"
);
return Ok(CompactLevel0Phase1Result::default());
// HADRON
let min_lsn = level0_deltas
.iter()
.map(|a| a.get_lsn_range().start)
.reduce(min);
if force_compaction_lsn.is_some()
&& min_lsn.is_some()
&& min_lsn.unwrap() < force_compaction_lsn.unwrap()
{
info!(
"forcing L0 compaction of {} L0 deltas. Min lsn: {}, force compaction lsn: {}",
level0_deltas.len(),
min_lsn.unwrap(),
force_compaction_lsn.unwrap()
);
} else {
debug!(
level0_deltas = level0_deltas.len(),
threshold, "too few deltas to compact"
);
return Ok(CompactLevel0Phase1Result::default());
}
}
}
@@ -1985,7 +2074,7 @@ impl Timeline {
let mut all_keys = Vec::new();
for l in deltas_to_compact.iter() {
if self.cancel.is_cancelled() {
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
let delta = l.get_as_delta(ctx).await.map_err(CompactionError::Other)?;
let keys = delta
@@ -2078,7 +2167,7 @@ impl Timeline {
stats.read_lock_held_compute_holes_micros = stats.read_lock_held_key_sort_micros.till_now();
if self.cancel.is_cancelled() {
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
stats.read_lock_drop_micros = stats.read_lock_held_compute_holes_micros.till_now();
@@ -2186,7 +2275,7 @@ impl Timeline {
// avoid hitting the cancellation token on every key. in benches, we end up
// shuffling an order of million keys per layer, this means we'll check it
// around tens of times per layer.
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
let same_key = prev_key == Some(key);
@@ -2271,7 +2360,7 @@ impl Timeline {
if writer.is_none() {
if self.cancel.is_cancelled() {
// to be somewhat responsive to cancellation, check for each new layer
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
// Create writer if not initialized yet
writer = Some(
@@ -2527,7 +2616,7 @@ impl Timeline {
// Is the timeline being deleted?
if self.is_stopping() {
trace!("Dropping out of compaction on timeline shutdown");
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
let (dense_ks, _sparse_ks) = self
@@ -3189,7 +3278,7 @@ impl Timeline {
let gc_lock = async {
tokio::select! {
guard = self.gc_lock.lock() => Ok(guard),
_ = cancel.cancelled() => Err(CompactionError::ShuttingDown),
_ = cancel.cancelled() => Err(CompactionError::new_cancelled()),
}
};
@@ -3462,7 +3551,7 @@ impl Timeline {
}
total_layer_size += layer.layer_desc().file_size;
if cancel.is_cancelled() {
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
let should_yield = yield_for_l0
&& self
@@ -3609,7 +3698,7 @@ impl Timeline {
}
if cancel.is_cancelled() {
return Err(CompactionError::ShuttingDown);
return Err(CompactionError::new_cancelled());
}
let should_yield = yield_for_l0

View File

@@ -2,6 +2,7 @@ use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use anyhow::Context;
use neon_failpoint as fail;
use pageserver_api::models::TimelineState;
use pageserver_api::shard::TenantShardId;
use remote_storage::DownloadError;

View File

@@ -4,6 +4,7 @@ use std::sync::Arc;
use anyhow::Context;
use bytes::Bytes;
use http_utils::error::ApiError;
use neon_failpoint as fail;
use pageserver_api::key::Key;
use pageserver_api::keyspace::KeySpace;
use pageserver_api::models::DetachBehavior;
@@ -1113,7 +1114,7 @@ pub(super) async fn detach_and_reparent(
// others will fail as if those timelines had been stopped for whatever reason.
#[cfg(feature = "testing")]
let failpoint_sem = || -> Option<Arc<Semaphore>> {
fail::fail_point!("timeline-detach-ancestor::allow_one_reparented", |_| Some(
fail::fail_point_sync!("timeline-detach-ancestor::allow_one_reparented", |_| Some(
Arc::new(Semaphore::new(1))
));
None

View File

@@ -5,6 +5,7 @@ use std::sync::Arc;
use anyhow::Context;
use camino::Utf8PathBuf;
use neon_failpoint as fail;
use tracing::{error, info, info_span};
use utils::fs_ext;
use utils::id::TimelineId;

View File

@@ -11,6 +11,7 @@ use bytes::BytesMut;
use chrono::{NaiveDateTime, Utc};
use fail::fail_point;
use futures::StreamExt;
use neon_failpoint as fail;
use postgres_backend::is_expected_io_error;
use postgres_connection::PgConnectionConfig;
use postgres_ffi::WAL_SEGMENT_SIZE;

View File

@@ -1410,7 +1410,7 @@ pg_init_libpagestore(void)
"sharding stripe size",
NULL,
&stripe_size,
32768, 1, INT_MAX,
2048, 1, INT_MAX,
PGC_SIGHUP,
GUC_UNIT_BLOCKS,
NULL, NULL, NULL);

View File

@@ -953,7 +953,9 @@ neon_fmgr_hook(FmgrHookEventType event, FmgrInfo *flinfo, Datum *private)
/*
* Fire Event Trigger if both function owner and current user are
* superuser, or none of them are.
* superuser. Allow executing an Event Trigger function that belongs to a
* superuser when connected as a non-superuser, even when the function is
* SECURITY DEFINER.
*/
else if (event == FHET_START
/* still enable it to pass pg_regress tests */
@@ -976,32 +978,7 @@ neon_fmgr_hook(FmgrHookEventType event, FmgrInfo *flinfo, Datum *private)
function_is_owned_by_super = superuser_arg(function_owner);
/*
* 1. Refuse to run SECURITY DEFINER function that belongs to a
* superuser when the current user is not a superuser itself.
*/
if (!role_is_super
&& function_is_owned_by_super
&& function_is_secdef)
{
char *func_name = get_func_name(flinfo->fn_oid);
ereport(WARNING,
(errmsg("Skipping Event Trigger"),
errdetail("Event Trigger function \"%s\" is owned by \"%s\" "
"and is SECURITY DEFINER",
func_name,
GetUserNameFromId(function_owner, false))));
/*
* we can't skip execution directly inside the fmgr_hook so
* instead we change the event trigger function to a noop
* function.
*/
force_noop(flinfo);
}
/*
* 2. Refuse to run functions that belongs to a non-superuser when the
* Refuse to run functions that belong to a non-superuser when the
* current user is a superuser.
*
* We could run a SECURITY DEFINER user-function here and be safe with
@@ -1009,7 +986,7 @@ neon_fmgr_hook(FmgrHookEventType event, FmgrInfo *flinfo, Datum *private)
* infrastructure maintenance operations, where we prefer to skip
* running user-defined code.
*/
else if (role_is_super && !function_is_owned_by_super)
if (role_is_super && !function_is_owned_by_super)
{
char *func_name = get_func_name(flinfo->fn_oid);

View File

@@ -376,6 +376,18 @@ typedef struct PageserverFeedback
uint32 shard_number;
} PageserverFeedback;
/* BEGIN_HADRON */
typedef struct WalRateLimiter
{
/* If the value is 1, PG backends will hit backpressure. */
pg_atomic_uint32 should_limit;
/* The number of bytes sent in the current second. */
uint64 sent_bytes;
/* The last recorded time in microsecond. */
TimestampTz last_recorded_time_us;
} WalRateLimiter;
/* END_HADRON */
typedef struct WalproposerShmemState
{
pg_atomic_uint64 propEpochStartLsn;
@@ -395,6 +407,11 @@ typedef struct WalproposerShmemState
/* aggregated feedback with min LSNs across shards */
PageserverFeedback min_ps_feedback;
/* BEGIN_HADRON */
/* The WAL rate limiter */
WalRateLimiter wal_rate_limiter;
/* END_HADRON */
} WalproposerShmemState;
/*

View File

@@ -66,6 +66,9 @@ int wal_acceptor_reconnect_timeout = 1000;
int wal_acceptor_connection_timeout = 10000;
int safekeeper_proto_version = 3;
char *safekeeper_conninfo_options = "";
/* BEGIN_HADRON */
int databricks_max_wal_mb_per_second = -1;
/* END_HADRON */
/* Set to true in the walproposer bgw. */
static bool am_walproposer;
@@ -252,6 +255,18 @@ nwp_register_gucs(void)
PGC_POSTMASTER,
0,
NULL, NULL, NULL);
/* BEGIN_HADRON */
DefineCustomIntVariable(
"databricks.max_wal_mb_per_second",
"The maximum WAL MB per second allowed. If breached, sending WAL hit the backpressure. Setting to -1 disables the limit.",
NULL,
&databricks_max_wal_mb_per_second,
-1, -1, INT_MAX,
PGC_SUSET,
GUC_UNIT_MB,
NULL, NULL, NULL);
/* END_HADRON */
}
@@ -393,6 +408,7 @@ assign_neon_safekeepers(const char *newval, void *extra)
static uint64
backpressure_lag_impl(void)
{
struct WalproposerShmemState* state = NULL;
if (max_replication_apply_lag > 0 || max_replication_flush_lag > 0 || max_replication_write_lag > 0)
{
XLogRecPtr writePtr;
@@ -426,6 +442,18 @@ backpressure_lag_impl(void)
return (myFlushLsn - applyPtr - max_replication_apply_lag * MB);
}
}
/* BEGIN_HADRON */
if (databricks_max_wal_mb_per_second == -1) {
return 0;
}
state = GetWalpropShmemState();
if (state != NULL && pg_atomic_read_u32(&state->wal_rate_limiter.should_limit) == 1)
{
return 1;
}
/* END_HADRON */
return 0;
}
@@ -472,6 +500,9 @@ WalproposerShmemInit(void)
pg_atomic_init_u64(&walprop_shared->mineLastElectedTerm, 0);
pg_atomic_init_u64(&walprop_shared->backpressureThrottlingTime, 0);
pg_atomic_init_u64(&walprop_shared->currentClusterSize, 0);
/* BEGIN_HADRON */
pg_atomic_init_u32(&walprop_shared->wal_rate_limiter.should_limit, 0);
/* END_HADRON */
}
LWLockRelease(AddinShmemInitLock);
@@ -487,6 +518,9 @@ WalproposerShmemInit_SyncSafekeeper(void)
pg_atomic_init_u64(&walprop_shared->propEpochStartLsn, 0);
pg_atomic_init_u64(&walprop_shared->mineLastElectedTerm, 0);
pg_atomic_init_u64(&walprop_shared->backpressureThrottlingTime, 0);
/* BEGIN_HADRON */
pg_atomic_init_u32(&walprop_shared->wal_rate_limiter.should_limit, 0);
/* END_HADRON */
}
#define BACK_PRESSURE_DELAY 10000L // 0.01 sec
@@ -521,7 +555,6 @@ backpressure_throttling_impl(void)
if (lag == 0)
return retry;
old_status = get_ps_display(&len);
new_status = (char *) palloc(len + 64 + 1);
memcpy(new_status, old_status, len);
@@ -1458,6 +1491,8 @@ XLogBroadcastWalProposer(WalProposer *wp)
{
XLogRecPtr startptr;
XLogRecPtr endptr;
struct WalproposerShmemState *state = NULL;
TimestampTz now = 0;
/* Start from the last sent position */
startptr = sentPtr;
@@ -1502,13 +1537,36 @@ XLogBroadcastWalProposer(WalProposer *wp)
* that arbitrary LSN is eventually reported as written, flushed and
* applied, so that it can measure the elapsed time.
*/
LagTrackerWrite(endptr, GetCurrentTimestamp());
now = GetCurrentTimestamp();
LagTrackerWrite(endptr, now);
/* Do we have any work to do? */
Assert(startptr <= endptr);
if (endptr <= startptr)
return;
/* BEGIN_HADRON */
state = GetWalpropShmemState();
if (databricks_max_wal_mb_per_second != -1 && state != NULL)
{
uint64 max_wal_bytes = (uint64) databricks_max_wal_mb_per_second * 1024 * 1024;
struct WalRateLimiter *limiter = &state->wal_rate_limiter;
if (now - limiter->last_recorded_time_us > USECS_PER_SEC)
{
/* Reset the rate limiter */
limiter->last_recorded_time_us = now;
limiter->sent_bytes = 0;
pg_atomic_exchange_u32(&limiter->should_limit, 0);
}
limiter->sent_bytes += (endptr - startptr);
if (limiter->sent_bytes > max_wal_bytes)
{
pg_atomic_exchange_u32(&limiter->should_limit, 1);
}
}
/* END_HADRON */
WalProposerBroadcast(wp, startptr, endptr);
sentPtr = endptr;
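
The limiter is a fixed one-second window: the byte counter resets when the window rolls over, and should_limit stays raised for the rest of any window that exceeds the budget. The same scheme as a small Rust sketch (field and function names are stand-ins for the C code above):

use std::time::{Duration, Instant};

/// Fixed-window limiter mirroring the walproposer's WAL rate limiting.
struct WalRateLimiter {
    window_start: Instant,
    sent_bytes: u64,
    max_bytes_per_window: u64,
    should_limit: bool, // backends check this and apply backpressure
}

impl WalRateLimiter {
    fn record_send(&mut self, bytes: u64) {
        let now = Instant::now();
        if now.duration_since(self.window_start) > Duration::from_secs(1) {
            // New one-second window: reset the counter and lift backpressure.
            self.window_start = now;
            self.sent_bytes = 0;
            self.should_limit = false;
        }
        self.sent_bytes += bytes;
        if self.sent_bytes > self.max_bytes_per_window {
            // Over budget: throttle until the next window resets the flag.
            self.should_limit = true;
        }
    }
}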

View File

@@ -6,9 +6,7 @@ license.workspace = true
[features]
default = []
# Enables test-only APIs, including failpoints. In particular, enables the `fail_point!` macro,
# which adds some runtime cost to run tests on outage conditions
testing = ["fail/failpoints"]
testing = ["neon_failpoint/testing"]
benchmarking = []
[dependencies]
@@ -21,7 +19,7 @@ camino-tempfile.workspace = true
chrono.workspace = true
clap = { workspace = true, features = ["derive"] }
crc32c.workspace = true
fail.workspace = true
neon_failpoint.workspace = true
hex.workspace = true
humantime.workspace = true
http.workspace = true

View File

@@ -37,6 +37,7 @@ use tracing::*;
use utils::auth::{JwtAuth, Scope, SwappableJwtAuth};
use utils::id::NodeId;
use utils::logging::{self, LogFormat, SecretString};
use utils::metrics_collector::{METRICS_COLLECTION_INTERVAL, METRICS_COLLECTOR};
use utils::sentry_init::init_sentry;
use utils::{pid_file, project_build_tag, project_git_version, tcp_listener};
@@ -64,7 +65,7 @@ const FEATURES: &[&str] = &[
fn version() -> String {
format!(
"{GIT_VERSION} failpoints: {}, features: {:?}",
fail::has_failpoints(),
neon_failpoint::has_failpoints(),
FEATURES,
)
}
@@ -243,6 +244,11 @@ struct Args {
#[arg(long)]
enable_tls_wal_service_api: bool,
/// Controls whether to collect all metrics on each scrape or to return potentially stale
/// results.
#[arg(long, default_value_t = true)]
force_metric_collection_on_scrape: bool,
/// Run in development mode (disables security checks)
#[arg(long, help = "Run in development mode (disables security checks)")]
dev: bool,
@@ -428,6 +434,7 @@ async fn main() -> anyhow::Result<()> {
ssl_ca_certs,
use_https_safekeeper_api: args.use_https_safekeeper_api,
enable_tls_wal_service_api: args.enable_tls_wal_service_api,
force_metric_collection_on_scrape: args.force_metric_collection_on_scrape,
});
// initialize sentry if SENTRY_DSN is provided
@@ -640,6 +647,26 @@ async fn start_safekeeper(conf: Arc<SafeKeeperConf>) -> Result<()> {
.map(|res| ("broker main".to_owned(), res));
tasks_handles.push(Box::pin(broker_task_handle));
/* BEGIN_HADRON */
if conf.force_metric_collection_on_scrape {
let metrics_handle = current_thread_rt
.as_ref()
.unwrap_or_else(|| BACKGROUND_RUNTIME.handle())
.spawn(async move {
let mut interval: tokio::time::Interval =
tokio::time::interval(METRICS_COLLECTION_INTERVAL);
loop {
interval.tick().await;
tokio::task::spawn_blocking(|| {
METRICS_COLLECTOR.run_once(true);
});
}
})
.map(|res| ("broker main".to_owned(), res));
tasks_handles.push(Box::pin(metrics_handle));
}
/* END_HADRON */
set_build_info_metric(GIT_VERSION, BUILD_TAG);
// TODO: update tokio-stream, convert to real async Stream with

View File

@@ -699,6 +699,11 @@ pub fn make_router(
}))
}
let force_metric_collection_on_scrape = conf.force_metric_collection_on_scrape;
let prometheus_metrics_handler_wrapper =
move |req| prometheus_metrics_handler(req, force_metric_collection_on_scrape);
// NB: on any changes do not forget to update the OpenAPI spec
// located nearby (/safekeeper/src/http/openapi_spec.yaml).
let auth = conf.http_auth.clone();
@@ -706,11 +711,13 @@ pub fn make_router(
.data(conf)
.data(global_timelines)
.data(auth)
.get("/metrics", |r| request_span(r, prometheus_metrics_handler))
.get("/metrics", move |r| {
request_span(r, prometheus_metrics_handler_wrapper)
})
.get("/profile/cpu", |r| request_span(r, profile_cpu_handler))
.get("/profile/heap", |r| request_span(r, profile_heap_handler))
.get("/v1/status", |r| request_span(r, status_handler))
.put("/v1/failpoints", |r| {
.post("/v1/failpoints", |r| {
request_span(r, move |r| async {
check_permission(&r, None)?;
let cancel = CancellationToken::new();

View File

@@ -134,6 +134,7 @@ pub struct SafeKeeperConf {
pub ssl_ca_certs: Vec<Pem>,
pub use_https_safekeeper_api: bool,
pub enable_tls_wal_service_api: bool,
pub force_metric_collection_on_scrape: bool,
}
impl SafeKeeperConf {
@@ -183,6 +184,7 @@ impl SafeKeeperConf {
ssl_ca_certs: Vec::new(),
use_https_safekeeper_api: false,
enable_tls_wal_service_api: false,
force_metric_collection_on_scrape: true,
}
}
}

View File

@@ -59,6 +59,15 @@ pub static FLUSH_WAL_SECONDS: Lazy<Histogram> = Lazy::new(|| {
.expect("Failed to register safekeeper_flush_wal_seconds histogram")
});
/* BEGIN_HADRON */
// Counter of all ProposerAcceptorMessage requests received
pub static PROPOSER_ACCEPTOR_MESSAGES_TOTAL: Lazy<IntCounterVec> = Lazy::new(|| {
register_int_counter_vec!(
"safekeeper_proposer_acceptor_messages_total",
"Total number of ProposerAcceptorMessage requests received by the Safekeeper.",
&["outcome"]
)
.expect("Failed to register safekeeper_proposer_acceptor_messages_total counter")
});
pub static WAL_DISK_IO_ERRORS: Lazy<IntCounter> = Lazy::new(|| {
register_int_counter!(
"safekeeper_wal_disk_io_errors",

View File

@@ -24,7 +24,7 @@ use utils::id::{NodeId, TenantId, TimelineId};
use utils::lsn::Lsn;
use utils::pageserver_feedback::PageserverFeedback;
use crate::metrics::MISC_OPERATION_SECONDS;
use crate::metrics::{MISC_OPERATION_SECONDS, PROPOSER_ACCEPTOR_MESSAGES_TOTAL};
use crate::state::TimelineState;
use crate::{control_file, wal_storage};
@@ -938,7 +938,7 @@ where
&mut self,
msg: &ProposerAcceptorMessage,
) -> Result<Option<AcceptorProposerMessage>> {
match msg {
let res = match msg {
ProposerAcceptorMessage::Greeting(msg) => self.handle_greeting(msg).await,
ProposerAcceptorMessage::VoteRequest(msg) => self.handle_vote_request(msg).await,
ProposerAcceptorMessage::Elected(msg) => self.handle_elected(msg).await,
@@ -949,7 +949,20 @@ where
self.handle_append_request(msg, false).await
}
ProposerAcceptorMessage::FlushWAL => self.handle_flush().await,
}
};
// BEGIN HADRON
match &res {
Ok(_) => PROPOSER_ACCEPTOR_MESSAGES_TOTAL
.with_label_values(&["success"])
.inc(),
Err(_) => PROPOSER_ACCEPTOR_MESSAGES_TOTAL
.with_label_values(&["error"])
.inc(),
};
res
// END HADRON
}
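
The res wrapper above increments a success/error-labeled counter for every message outcome. The generic shape of that pattern as a standalone sketch, using the same prometheus macros the codebase already uses (metric name here is illustrative):

use once_cell::sync::Lazy;
use prometheus::{IntCounterVec, register_int_counter_vec};

static MESSAGES_TOTAL: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!(
        "example_messages_total",
        "Messages processed, labeled by outcome.",
        &["outcome"]
    )
    .expect("failed to register counter")
});

/// Increment the counter with an "outcome" label derived from the Result.
fn observe_outcome<T, E>(res: &Result<T, E>) {
    let outcome = if res.is_ok() { "success" } else { "error" };
    MESSAGES_TOTAL.with_label_values(&[outcome]).inc();
}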
/// Handle initial message from proposer: check its sanity and send my

View File

@@ -872,14 +872,15 @@ impl<IO: AsyncRead + AsyncWrite + Unpin> WalSender<'_, IO> {
async fn wait_wal(&mut self) -> Result<(), CopyStreamHandlerEnd> {
loop {
self.end_pos = self.end_watch.get();
let have_something_to_send = (|| {
fail::fail_point!(
let have_something_to_send = async {
neon_failpoint::fail_point!(
"sk-pause-send",
self.appname.as_deref() != Some("pageserver"),
|_| { false }
);
self.end_pos > self.start_pos
})();
}
.await;
if have_something_to_send {
trace!("got end_pos {:?}, streaming", self.end_pos);
@@ -931,14 +932,15 @@ impl<IO: AsyncRead + AsyncWrite + Unpin> WalSender<'_, IO> {
/// - Err in case of error -- only if 1) term changed while fetching in recovery
/// mode 2) watch channel closed, which must never happen.
async fn wait_for_lsn(&mut self) -> anyhow::Result<Option<Lsn>> {
let fp = (|| {
fail::fail_point!(
let fp = async {
neon_failpoint::fail_point!(
"sk-pause-send",
self.appname.as_deref() != Some("pageserver"),
|_| { true }
);
false
})();
}
.await;
if fp {
tokio::time::sleep(POLL_STATE_TIMEOUT).await;
return Ok(None);

View File

@@ -166,7 +166,7 @@ fn hadron_determine_offloader(mgr: &Manager, state: &StateSnapshot) -> (Option<N
let backup_lag = state.commit_lsn.checked_sub(state.backup_lsn);
if backup_lag.is_none() {
info!("Backup lag is None. Skipping re-election.");
debug!("Backup lag is None. Skipping re-election.");
return (offloader, election_dbg_str);
}
@@ -657,7 +657,7 @@ pub async fn delete_timeline(
pausable_failpoint!("sk-delete-timeline-remote-pause");
fail::fail_point!("sk-delete-timeline-remote", |_| {
neon_failpoint::fail_point!("sk-delete-timeline-remote", |_| {
Err(anyhow::anyhow!("failpoint: sk-delete-timeline-remote"))
});

View File

@@ -301,7 +301,7 @@ impl PhysicalStorage {
format!("Failed to open tmp wal file {:?}", &tmp_path)
})?;
fail::fail_point!("sk-zero-segment", |_| {
neon_failpoint::fail_point!("sk-zero-segment", |_| {
info!("sk-zero-segment failpoint hit");
Err(anyhow::anyhow!("failpoint: sk-zero-segment"))
});

View File

@@ -190,6 +190,7 @@ pub fn run_server(os: NodeOs, disk: Arc<SafekeeperDisk>) -> Result<()> {
ssl_ca_certs: Vec::new(),
use_https_safekeeper_api: false,
enable_tls_wal_service_api: false,
force_metric_collection_on_scrape: true,
};
let mut global = GlobalMap::new(disk, conf.clone())?;

View File

@@ -22,7 +22,7 @@ clap.workspace = true
clashmap.workspace = true
compute_api.workspace = true
cron.workspace = true
fail.workspace = true
neon_failpoint.workspace = true
futures.workspace = true
governor.workspace = true
hex.workspace = true

View File

@@ -25,6 +25,7 @@ use futures::stream::FuturesUnordered;
use http_utils::error::ApiError;
use hyper::Uri;
use itertools::Itertools;
use neon_failpoint as fail;
use pageserver_api::config::PostHogConfig;
use pageserver_api::controller_api::{
AvailabilityZone, MetadataHealthRecord, MetadataHealthUpdateRequest, NodeAvailability,
@@ -1984,11 +1985,14 @@ impl Service {
});
// Check that there are enough safekeepers configured that we can create new timelines
let test_sk_res = this.safekeepers_for_new_timeline().await;
let test_sk_res_str = match this.safekeepers_for_new_timeline().await {
Ok(v) => format!("Ok({v:?})"),
Err(v) => format!("Err({v:})"),
};
tracing::info!(
timeline_safekeeper_count = config.timeline_safekeeper_count,
timelines_onto_safekeepers = config.timelines_onto_safekeepers,
"viability test result (test timeline creation on safekeepers): {test_sk_res:?}",
"viability test result (test timeline creation on safekeepers): {test_sk_res_str}",
);
Ok(this)
@@ -4758,6 +4762,7 @@ impl Service {
)
.await;
let mut retry_if_not_attached = false;
let targets = {
let locked = self.inner.read().unwrap();
let mut targets = Vec::new();
@@ -4774,6 +4779,24 @@ impl Service {
.expect("Pageservers may not be deleted while referenced");
targets.push((*tenant_shard_id, node.clone()));
if let Some(location) = shard.observed.locations.get(node_id) {
if let Some(ref conf) = location.conf {
if conf.mode != LocationConfigMode::AttachedSingle
&& conf.mode != LocationConfigMode::AttachedMulti
{
// If the shard is attached as secondary, we need to retry if 404.
retry_if_not_attached = true;
}
// If the shard is attached as primary, we should succeed.
} else {
// Location conf is not available yet, retry if 404.
retry_if_not_attached = true;
}
} else {
// The shard is not attached to the intended pageserver yet, retry if 404.
retry_if_not_attached = true;
}
}
}
targets
@@ -4804,6 +4827,18 @@ impl Service {
valid_until = Some(lease.valid_until);
}
}
Err(mgmt_api::Error::ApiError(StatusCode::NOT_FOUND, _))
if retry_if_not_attached =>
{
// This is expected if the attach is not finished yet. Return 503 so that the client can retry.
return Err(ApiError::ResourceUnavailable(
format!(
"Timeline is not attached to the pageserver {} yet, please retry",
node.get_id()
)
.into(),
));
}
Err(e) => {
return Err(passthrough_api_error(&node, e));
}
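
The retry_if_not_attached flag above boils down to one question per observed location: is the shard known to be attached to that pageserver as primary? A sketch of that decision in isolation (the Mode enum is an illustrative stand-in for LocationConfigMode):

#[derive(Clone, Copy)]
enum Mode {
    AttachedSingle,
    AttachedMulti,
    Secondary,
}

/// A 404 from the pageserver is retryable unless the shard was observed
/// attached there as primary (single or multi); None means no location
/// conf was observed yet, which is also retryable.
fn retry_on_not_found(observed_mode: Option<Mode>) -> bool {
    !matches!(observed_mode, Some(Mode::AttachedSingle | Mode::AttachedMulti))
}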
@@ -5992,7 +6027,7 @@ impl Service {
tenant_id: TenantId,
split_req: TenantShardSplitRequest,
) -> Result<ShardSplitAction, ApiError> {
fail::fail_point!("shard-split-validation", |_| Err(ApiError::BadRequest(
fail::fail_point_sync!("shard-split-validation", |_| Err(ApiError::BadRequest(
anyhow::anyhow!("failpoint")
)));

View File

@@ -159,16 +159,59 @@ class EndpointHttpClient(requests.Session):
res.raise_for_status()
return res.json()
def configure_failpoints(self, *args: tuple[str, str]) -> None:
body: list[dict[str, str]] = []
def configure_failpoints(
self, *args: tuple[str, str] | list[dict[str, str | dict[str, str]]]
) -> None:
"""Configure failpoints for testing purposes.
for fp in args:
body.append(
{
"name": fp[0],
"action": fp[1],
Args:
*args: Can be one of:
- Variable number of (name, actions) tuples
- Single list of dicts with keys: name, actions, and optionally context_matchers
Examples:
# Basic failpoints
client.configure_failpoints(("test_fp", "return(error)"))
client.configure_failpoints(("fp1", "return"), ("fp2", "sleep(1000)"))
# Probability-based failpoint
client.configure_failpoints(("test_fp", "50%return(error)"))
# Context-based failpoint
client.configure_failpoints([{
"name": "test_fp",
"actions": "return(error)",
"context_matchers": {"tenant_id": ".*test.*"}
}])
"""
request_body: list[dict[str, Any]] = []
if (
len(args) == 1
and isinstance(args[0], list)
and args[0]
and isinstance(args[0][0], dict)
):
# Handle list of dicts (context-based failpoints)
failpoint_configs = args[0]
for config in failpoint_configs:
server_config: dict[str, Any] = {
"name": config["name"],
"actions": config["actions"],
}
)
if "context_matchers" in config:
server_config["context_matchers"] = config["context_matchers"]
request_body.append(server_config)
else:
# Handle tuples (basic failpoints)
for fp in args:
request_body.append(
{
"name": fp[0],
"actions": fp[1],
}
)
res = self.post(f"http://localhost:{self.internal_port}/failpoints", json=body)
res = self.post(f"http://localhost:{self.internal_port}/failpoints", json=request_body)
res.raise_for_status()

View File

@@ -1795,6 +1795,33 @@ def neon_env_builder(
record_property("preserve_database_files", builder.preserve_database_files)
@pytest.fixture(scope="function")
def neon_env_builder_local(
neon_env_builder: NeonEnvBuilder,
test_output_dir: Path,
pg_distrib_dir: Path,
) -> NeonEnvBuilder:
"""
Fixture to create a Neon environment for a test with its own pg_install copy.
This allows the test to edit the list of available extensions in the
local instance of Postgres used for the test, and install extensions via
downloading them when a remote extension is tested, for instance, or
copying files around for local extension testing.
"""
test_local_pginstall = test_output_dir / "pg_install"
log.info(f"copy {pg_distrib_dir} to {test_local_pginstall}")
# We can't copy only the version that we are currently testing because other
# binaries like the storage controller need specific Postgres versions.
shutil.copytree(pg_distrib_dir, test_local_pginstall)
neon_env_builder.pg_distrib_dir = test_local_pginstall
log.info(f"local neon_env_builder.pg_distrib_dir: {neon_env_builder.pg_distrib_dir}")
return neon_env_builder
@dataclass
class PageserverPort:
pg: int
@@ -2587,22 +2614,68 @@ class NeonStorageController(MetricsGetter, LogUtils):
)
return res.json()
def configure_failpoints(self, config_strings: tuple[str, str] | list[tuple[str, str]]):
if isinstance(config_strings, tuple):
pairs = [config_strings]
else:
pairs = config_strings
def configure_failpoints(
self,
config_strings: tuple[str, str]
| list[tuple[str, str]]
| list[dict[str, str | dict[str, str]]],
):
"""
Configure failpoints for testing purposes.
log.info(f"Requesting config failpoints: {repr(pairs)}")
Args:
config_strings: Can be one of:
- Single tuple of (name, actions)
- List of tuples [(name, actions), ...]
- List of dicts with keys: name, actions, and optionally context_matchers
Examples:
# Basic failpoint
client.configure_failpoints(("test_fp", "return(error)"))
# Multiple basic failpoints
client.configure_failpoints([("fp1", "return"), ("fp2", "sleep(1000)")])
# Probability-based failpoint
client.configure_failpoints(("test_fp", "50%return(error)"))
# Context-based failpoint
client.configure_failpoints([{
"name": "test_fp",
"actions": "return(error)",
"context_matchers": {"tenant_id": ".*test.*"}
}])
"""
# Handle single tuple case
if isinstance(config_strings, tuple):
config_strings = [config_strings]
# Convert to server format
body: list[dict[str, str | dict[str, str]]] = []
for config in config_strings:
if isinstance(config, tuple):
# Simple (name, actions) tuple
body.append({"name": config[0], "actions": config[1]})
elif isinstance(config, dict):
# Dict with name, actions, and optional context_matchers
server_config: dict[str, str | dict[str, str]] = {
"name": config["name"],
"actions": config["actions"],
}
if "context_matchers" in config:
server_config["context_matchers"] = config["context_matchers"]
body.append(server_config)
else:
raise ValueError(f"Invalid config format: {config}")
res = self.request(
"PUT",
f"{self.api}/debug/v1/failpoints",
json=[{"name": name, "actions": actions} for name, actions in pairs],
f"{self.api_root()}/debug/v1/failpoints",
json=body,
headers=self.headers(TokenScope.ADMIN),
)
log.info(f"Got failpoints request response code {res.status_code}")
res.raise_for_status()
if res.status_code != 200:
self.raise_api_exception(res)
def get_tenants_placement(self) -> defaultdict[str, dict[str, Any]]:
"""

View File

@@ -309,30 +309,76 @@ class PageserverHttpClient(requests.Session, MetricsGetter):
def check_status(self):
self.get(f"http://localhost:{self.port}/v1/status").raise_for_status()
def configure_failpoints(self, config_strings: tuple[str, str] | list[tuple[str, str]]):
def configure_failpoints(
self,
config_strings: tuple[str, str]
| list[tuple[str, str]]
| list[dict[str, str | dict[str, str]]],
):
"""
Configure failpoints for testing purposes.
Args:
config_strings: Can be one of:
- Single tuple of (name, actions)
- List of tuples [(name, actions), ...]
- List of dicts with keys: name, actions, and optionally context_matchers
Examples:
# Basic failpoint
client.configure_failpoints(("test_fp", "return(error)"))
# Multiple basic failpoints
client.configure_failpoints([("fp1", "return"), ("fp2", "sleep(1000)")])
# Probability-based failpoint
client.configure_failpoints(("test_fp", "50%return(error)"))
# Context-based failpoint
client.configure_failpoints([{
"name": "test_fp",
"actions": "return(error)",
"context_matchers": {"tenant_id": ".*test.*"}
}])
"""
self.is_testing_enabled_or_skip()
# Handle single tuple case
if isinstance(config_strings, tuple):
pairs = [config_strings]
else:
pairs = config_strings
config_strings = [config_strings]
log.info(f"Requesting config failpoints: {repr(pairs)}")
# Convert to server format
body: list[dict[str, str | dict[str, str]]] = []
for config in config_strings:
if isinstance(config, tuple):
# Simple (name, actions) tuple
body.append({"name": config[0], "actions": config[1]})
elif isinstance(config, dict):
# Dict with name, actions, and optional context_matchers
server_config = {"name": config["name"], "actions": config["actions"]}
if "context_matchers" in config:
server_config["context_matchers"] = config["context_matchers"]
body.append(server_config)
else:
raise ValueError(f"Invalid config format: {config}")
res = self.put(
f"http://localhost:{self.port}/v1/failpoints",
json=[{"name": name, "actions": actions} for name, actions in pairs],
)
log.info(f"Got failpoints request response code {res.status_code}")
self.verbose_error(res)
res_json = res.json()
assert res_json is None
return res_json
res = self.post(f"{self.base_url}/v1/failpoints", json=body)
if res.status_code != 200:
raise PageserverApiException(
f"Failed to configure failpoints: {res.text}", res.status_code
)
def reload_auth_validation_keys(self):
res = self.post(f"http://localhost:{self.port}/v1/reload_auth_validation_keys")
self.verbose_error(res)
def list_tenant_visible_size(self) -> dict[TenantShardId, int]:
res = self.get(f"http://localhost:{self.port}/v1/list_tenant_visible_size")
self.verbose_error(res)
res_json = res.json()
assert isinstance(res_json, dict)
return res_json
def tenant_list(self) -> list[dict[Any, Any]]:
res = self.get(f"http://localhost:{self.port}/v1/tenant")
self.verbose_error(res)
@@ -1002,7 +1048,7 @@ class PageserverHttpClient(requests.Session, MetricsGetter):
def get_metrics_str(self) -> str:
"""You probably want to use get_metrics() instead."""
res = self.get(f"http://localhost:{self.port}/metrics")
res = self.get(f"http://localhost:{self.port}/metrics?use_latest=true")
self.verbose_error(res)
return res.text

View File

@@ -3,6 +3,7 @@ from __future__ import annotations
import re
import socket
from contextlib import closing
from itertools import cycle
from fixtures.log_helper import log
@@ -34,15 +35,23 @@ def can_bind(host: str, port: int) -> bool:
class PortDistributor:
def __init__(self, base_port: int, port_number: int):
self.iterator = iter(range(base_port, base_port + port_number))
self.base_port = base_port
self.port_number = port_number
self.cycle = cycle(range(base_port, base_port + port_number))
self.port_map: dict[int, int] = {}
def get_port(self) -> int:
for port in self.iterator:
checked = 0
for port in self.cycle:
if can_bind("localhost", port):
return port
elif checked < self.port_number:
checked += 1
else:
break
raise RuntimeError(
"port range configured for test is exhausted, consider enlarging the range"
f"port range ({self.base_port}..{self.base_port + self.port_number}) configured for test is exhausted, consider enlarging the range"
)
def replace_with_new_port(self, value: int | str) -> int | str:

View File

@@ -8,7 +8,6 @@ import pytest
import requests
from fixtures.common_types import Lsn, TenantId, TenantTimelineId, TimelineId
from fixtures.log_helper import log
from fixtures.metrics import Metrics, MetricsGetter, parse_metrics
from fixtures.utils import EnhancedJSONEncoder, wait_until
@@ -143,7 +142,7 @@ class SafekeeperHttpClient(requests.Session, MetricsGetter):
def get_metrics_str(self) -> str:
"""You probably want to use get_metrics() instead."""
request_result = self.get(f"http://localhost:{self.port}/metrics")
request_result = self.get(f"http://localhost:{self.port}/metrics?use_latest=true")
request_result.raise_for_status()
return request_result.text
@@ -155,25 +154,62 @@ class SafekeeperHttpClient(requests.Session, MetricsGetter):
         if not self.is_testing_enabled:
             pytest.skip("safekeeper was built without 'testing' feature")

-    def configure_failpoints(self, config_strings: tuple[str, str] | list[tuple[str, str]]):
+    def configure_failpoints(
+        self,
+        config_strings: tuple[str, str]
+        | list[tuple[str, str]]
+        | list[dict[str, str | dict[str, str]]],
+    ):
+        """
+        Configure failpoints for testing purposes.
+
+        Args:
+            config_strings: Can be one of:
+                - Single tuple of (name, actions)
+                - List of tuples [(name, actions), ...]
+                - List of dicts with keys: name, actions, and optionally context_matchers
+
+        Examples:
+            # Basic failpoint
+            client.configure_failpoints(("test_fp", "return(error)"))
+
+            # Multiple basic failpoints
+            client.configure_failpoints([("fp1", "return"), ("fp2", "sleep(1000)")])
+
+            # Probability-based failpoint
+            client.configure_failpoints(("test_fp", "50%return(error)"))
+
+            # Context-based failpoint
+            client.configure_failpoints([{
+                "name": "test_fp",
+                "actions": "return(error)",
+                "context_matchers": {"tenant_id": ".*test.*"}
+            }])
+        """
         self.is_testing_enabled_or_skip()

+        # Handle single tuple case
         if isinstance(config_strings, tuple):
-            pairs = [config_strings]
-        else:
-            pairs = config_strings
+            config_strings = [config_strings]

-        log.info(f"Requesting config failpoints: {repr(pairs)}")
+        # Convert to server format
+        body: list[dict[str, str | dict[str, str]]] = []
+        for config in config_strings:
+            if isinstance(config, tuple):
+                # Simple (name, actions) tuple
+                body.append({"name": config[0], "actions": config[1]})
+            elif isinstance(config, dict):
+                # Dict with name, actions, and optional context_matchers
+                server_config = {"name": config["name"], "actions": config["actions"]}
+                if "context_matchers" in config:
+                    server_config["context_matchers"] = config["context_matchers"]
+                body.append(server_config)
+            else:
+                raise ValueError(f"Invalid config format: {config}")

-        res = self.put(
-            f"http://localhost:{self.port}/v1/failpoints",
-            json=[{"name": name, "actions": actions} for name, actions in pairs],
-        )
-        log.info(f"Got failpoints request response code {res.status_code}")
-        res.raise_for_status()
-        res_json = res.json()
-        assert res_json is None
-        return res_json
+        res = self.post(f"http://localhost:{self.port}/v1/failpoints", json=body)
+        if res.status_code != 200:
+            raise RuntimeError(f"Failed to configure failpoints: {res.text}")

     def tenant_delete_force(self, tenant_id: TenantId) -> dict[Any, Any]:
         res = self.delete(f"http://localhost:{self.port}/v1/tenant/{tenant_id}")
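Assuming the usual NeonEnv test fixtures, a test could drive the new dict form end to end roughly as follows; the failpoint name and the matcher pattern here are made up for illustration:

def test_failpoint_with_context(neon_env_builder: NeonEnvBuilder):
    env = neon_env_builder.init_start()
    sk_http = env.safekeepers[0].http_client()

    # Only requests whose context carries a matching tenant_id hit the failpoint.
    sk_http.configure_failpoints([
        {
            "name": "some-failpoint",
            "actions": "return(error)",
            "context_matchers": {"tenant_id": f"{env.initial_tenant}"},
        }
    ])

    # ... exercise the code path that crosses the failpoint ...

    # Disabling the failpoint again uses the plain tuple form.
    sk_http.configure_failpoints(("some-failpoint", "off"))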


@@ -0,0 +1,32 @@
\echo Use "CREATE EXTENSION test_event_trigger_extension" to load this file. \quit

CREATE SCHEMA event_trigger;

create sequence if not exists event_trigger.seq_schema_version as int cycle;

create or replace function event_trigger.increment_schema_version()
returns event_trigger
security definer
language plpgsql
as $$
begin
    perform pg_catalog.nextval('event_trigger.seq_schema_version');
end;
$$;

create or replace function event_trigger.get_schema_version()
returns int
security definer
language sql
as $$
    select last_value from event_trigger.seq_schema_version;
$$;

-- On DDL event, increment the schema version number
create event trigger event_trigger_watch_ddl
    on ddl_command_end
    execute procedure event_trigger.increment_schema_version();

create event trigger event_trigger_watch_drop
    on sql_drop
    execute procedure event_trigger.increment_schema_version();
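A test could exercise this extension roughly as below; this is a sketch assuming the standard NeonEnv fixtures and that the extension is available in the compute image, not the repository's actual test:

def test_event_trigger_extension(neon_simple_env: NeonEnv):
    env = neon_simple_env
    endpoint = env.endpoints.create_start("main")

    endpoint.safe_psql("CREATE EXTENSION test_event_trigger_extension")

    # Any DDL should bump the version via the SECURITY DEFINER trigger
    # function, even when the DDL is executed by a non-superuser.
    before = endpoint.safe_psql("SELECT event_trigger.get_schema_version()")[0][0]
    endpoint.safe_psql("CREATE TABLE t (id int)")
    after = endpoint.safe_psql("SELECT event_trigger.get_schema_version()")[0][0]
    assert after > before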


@@ -0,0 +1,8 @@
default_version = '1.0'
comment = 'Test extension with Event Trigger'
# make sure the extension objects are owned by the bootstrap user
# to check that the SECURITY DEFINER event trigger function is still
# called during non-superuser DDL events.
superuser = true
trusted = true


@@ -165,6 +165,7 @@ def test_fully_custom_config(positive_env: NeonEnv):
         "gc_horizon": 23 * (1024 * 1024),
         "gc_period": "2h 13m",
         "image_creation_threshold": 7,
+        "image_layer_force_creation_period": "1m",
         "pitr_interval": "1m",
         "lagging_wal_timeout": "23m",
         "lazy_slru_download": True,


@@ -944,3 +944,78 @@ def test_image_layer_compression(neon_env_builder: NeonEnvBuilder, enabled: bool
            f"SELECT count(*) FROM foo WHERE id={v} and val=repeat('abcde{v:0>3}', 500)"
        )
        assert res[0][0] == 1


# BEGIN_HADRON
def get_layer_map(env, tenant_shard_id, timeline_id, ps_id):
    client = env.pageservers[ps_id].http_client()
    layer_map = client.layer_map_info(tenant_shard_id, timeline_id)
    image_layer_count = 0
    delta_layer_count = 0
    for layer in layer_map.historic_layers:
        if layer.kind == "Image":
            image_layer_count += 1
        elif layer.kind == "Delta":
            delta_layer_count += 1
    return image_layer_count, delta_layer_count


def test_image_creation_timeout(neon_env_builder: NeonEnvBuilder):
    """
    Tests that the pageserver can force the creation of new image layers
    when the image creation timeout is enabled.
    """
    # Use large knobs to disable L0 compaction/image creation except for the
    # forced image creation.
    tenant_conf = {
        "compaction_threshold": "100",
        "image_creation_threshold": "100",
        "image_layer_creation_check_threshold": "1",
        "checkpoint_distance": 10 * 1024,
        "checkpoint_timeout": "1s",
        "image_layer_force_creation_period": "1s",
        # The LSN for forced image layer creations is calculated once every
        # 10 minutes. Hence, drive compaction manually so that the test
        # doesn't compute it at the wrong time.
        "compaction_period": "0s",
    }
    # Consider every tenant large to run the image layer generation check more eagerly.
    neon_env_builder.pageserver_config_override = (
        "image_layer_generation_large_timeline_threshold=0"
    )
    neon_env_builder.num_pageservers = 1
    neon_env_builder.num_safekeepers = 1
    env = neon_env_builder.init_start(initial_tenant_conf=tenant_conf)
    tenant_id = env.initial_tenant
    timeline_id = env.initial_timeline

    endpoint = env.endpoints.create_start("main")
    endpoint.safe_psql("CREATE TABLE foo (id INTEGER PRIMARY KEY, val text)")

    # Generate some rows.
    for v in range(10):
        endpoint.safe_psql(f"INSERT INTO foo (id, val) VALUES ({v}, repeat('abcde{v:0>3}', 500))")

    # Sleep a bit so that the inserts are considered when calculating the
    # forced image layer creation LSN.
    time.sleep(2)

    def check_force_image_creation():
        ps_http = env.pageserver.http_client()
        ps_http.timeline_compact(tenant_id, timeline_id)
        image, delta = get_layer_map(env, tenant_id, timeline_id, 0)
        log.info(f"images: {image}, deltas: {delta}")
        assert image > 0

        env.pageserver.assert_log_contains("forcing L0 compaction of")
        env.pageserver.assert_log_contains("forcing image creation for partitioned range")

    wait_until(check_force_image_creation)

    endpoint.stop_and_destroy()
    env.pageserver.allowed_errors.append(
        ".*created delta file of size.*larger than double of target.*"
    )
# END_HADRON
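The check_force_image_creation closure above is retried by wait_until until it stops raising. If the fixtures.utils helper is not at hand, a minimal equivalent retry loop looks roughly like this (the helper name, timeout, and polling interval are arbitrary choices, not the fixture's actual signature):

import time


def wait_until_sketch(check, timeout: float = 20.0, interval: float = 0.5):
    # Re-run `check` until it stops raising or the timeout elapses.
    deadline = time.monotonic() + timeout
    last_exc: Exception | None = None
    while time.monotonic() < deadline:
        try:
            return check()
        except Exception as e:
            last_exc = e
            time.sleep(interval)
    raise AssertionError(f"timed out waiting for condition: {last_exc}")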

Some files were not shown because too many files have changed in this diff.