Compare commits


281 Commits

Author SHA1 Message Date
Arpad Müller
7b860b837c Merge pull request #7154 from neondatabase/rc/2024-03-18
Release 2024-03-18
2024-03-19 12:07:14 +01:00
Christian Schwarz
41fc96e20f fixup(#7160 / tokio_epoll_uring_ext): double-panic caused by info! in thread-local's drop() (#7164)
Manual testing of the changes in #7160 revealed that, if the
thread-local destructor ever runs (it apparently doesn't in our test
suite runs, otherwise #7160 would not have auto-merged), we can
encounter an `abort()` due to a double-panic in the tracing code.

This github comment here contains the stack trace:
https://github.com/neondatabase/neon/pull/7160#issuecomment-2003778176

This PR reverts #7160 and uses an atomic counter to identify the
thread-local in log messages, instead of the memory address of the
thread-local, which may be re-used.
2024-03-18 16:28:17 +01:00
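The fix can be sketched with a std-only example (names like `ThreadLocalState` and `NEXT_ID` are illustrative, not the actual code): a process-wide atomic counter hands out IDs that are never re-used, unlike a thread-local's memory address, which the allocator may recycle.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Process-wide counter; IDs are handed out once and never re-used,
// unlike the address of a thread-local, which may be recycled.
static NEXT_ID: AtomicU64 = AtomicU64::new(0);

struct ThreadLocalState {
    id: u64,
}

thread_local! {
    static STATE: ThreadLocalState = ThreadLocalState {
        id: NEXT_ID.fetch_add(1, Ordering::Relaxed),
    };
}

// Spawn a thread and report the unique ID its thread-local received.
fn id_of_new_thread() -> u64 {
    std::thread::spawn(|| STATE.with(|s| s.id)).join().unwrap()
}

fn main() {
    let a = id_of_new_thread();
    let b = id_of_new_thread();
    // Two distinct threads always observe distinct IDs, even though
    // their thread-locals may live at the same (re-used) address.
    assert_ne!(a, b);
    println!("ids: {a} {b}");
}
```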
Christian Schwarz
fb2b1ce57b fixup(#7141 / tokio_epoll_uring_ext): high frequency log message
The PR #7141 added the log message

```
ThreadLocalState is being dropped and id might be re-used in the future
```

which was supposed to be emitted when the thread-local is destroyed.
Instead, it was emitted on _each_ call to `thread_local_system()`,
i.e., on each tokio-epoll-uring operation.
2024-03-18 13:01:17 +01:00
Joonas Koivunen
464717451b build: make procfs linux only dependency (#7156)
the dependency refuses to build on macOS, so builds on `main` are broken
right now, including the `release` PR.
2024-03-18 09:32:49 +00:00
Joonas Koivunen
c6ed86d3d0 Merge pull request #7081 from neondatabase/rc/2024-03-11
Release 2024-03-11
2024-03-11 14:41:39 +02:00
Roman Zaynetdinov
f0a9017008 Export db size, deadlocks and changed row metrics (#7050)
## Problem

We want to report metrics for the oldest user database.
2024-03-11 11:55:06 +00:00
Christian Schwarz
bb7949ba00 Merge pull request #6993 from neondatabase/rc/2024-03-04
Release 2024-03-04
2024-03-04 13:08:44 +01:00
Arthur Petukhovsky
1df0f69664 Merge pull request #6973 from neondatabase/rc/2024-02-29-manual
Release 2024-02-29
2024-02-29 17:26:33 +00:00
Vlad Lazar
970066a914 libs: fix expired token in auth decode test (#6963)
The test token expired earlier today (1709200879). I regenerated the
token, but without an expiration date this time.
2024-02-29 17:23:25 +00:00
Arthur Petukhovsky
1ebd3897c0 Merge pull request #6956 from neondatabase/rc/2024-02-28
Release 2024-02-28
2024-02-29 16:39:52 +00:00
Arthur Petukhovsky
6460beffcd Merge pull request #6901 from neondatabase/rc/2024-02-26
Release 2024-02-26
2024-02-26 17:08:19 +00:00
John Spray
6f7f8958db pageserver: only write out legacy tenant config if no generation (#6891)
## Problem

Previously we always wrote out both legacy and modern tenant config
files. The legacy write enabled rollbacks, but we are long past the
point where that is needed.

We still need the legacy format for situations where someone is running
tenants without generations (that will be yanked as well eventually),
but we can avoid writing it out at all if we do have a generation number
set. We implicitly also avoid writing the legacy config if our mode is
Secondary (secondary mode is newer than generations).

## Summary of changes

- Make writing legacy tenant config conditional on there being no
generation number set.
2024-02-26 10:25:25 +00:00
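The condition described above can be sketched as a tiny predicate; `Mode` and the function name here are hypothetical stand-ins, not the pageserver's actual types.

```rust
// Hypothetical sketch of the decision: write the legacy tenant config
// only when no generation number is set, and never for secondary mode
// (secondary mode is newer than generations).
#[derive(PartialEq)]
enum Mode {
    Attached,
    Secondary,
}

fn should_write_legacy_config(generation: Option<u32>, mode: Mode) -> bool {
    generation.is_none() && mode != Mode::Secondary
}

fn main() {
    assert!(should_write_legacy_config(None, Mode::Attached));
    assert!(!should_write_legacy_config(Some(7), Mode::Attached));
    assert!(!should_write_legacy_config(None, Mode::Secondary));
    println!("ok");
}
```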
Christian Schwarz
936a00e077 pageserver: remove two obsolete/unused per-timeline metrics (#6893)
over-compensating the addition of a new per-timeline metric in
https://github.com/neondatabase/neon/pull/6834

part of https://github.com/neondatabase/neon/issues/6737
2024-02-26 09:16:24 +00:00
Nikita Kalyanov
96a4e8de66 Add /terminate API (#6745) (#6853)
this is to speed up suspends, see
https://github.com/neondatabase/cloud/issues/10284


Cherry-pick to release branch to build new compute images
2024-02-22 11:51:19 +02:00
Arseny Sher
01180666b0 Merge pull request #6803 from neondatabase/releases/2024-02-19
Release 2024-02-19
2024-02-19 16:38:35 +04:00
Conrad Ludgate
6c94269c32 Merge pull request #6758 from neondatabase/release-proxy-2024-02-14
2024-02-14 Proxy Release
2024-02-15 09:45:08 +00:00
Anna Khanova
edc691647d Proxy: remove fail fast logic to connect to compute (#6759)
## Problem

Flaky tests

## Summary of changes

Remove failfast logic
2024-02-15 07:42:12 +00:00
Conrad Ludgate
855d7b4781 hold cancel session (#6750)
## Problem

In a recent refactor, we accidentally dropped the cancel session early

## Summary of changes

Hold the cancel session during proxy passthrough
2024-02-14 14:57:22 +00:00
Anna Khanova
c49c9707ce Proxy: send cancel notifications to all instances (#6719)
## Problem

If a cancel request ends up on the wrong proxy instance, it doesn't take
effect.

## Summary of changes

Send redis notifications to all proxy pods about the cancel request.

Related issue: https://github.com/neondatabase/neon/issues/5839,
https://github.com/neondatabase/cloud/issues/10262
2024-02-14 14:57:22 +00:00
Anna Khanova
2227540a0d Proxy refactor auth+connect (#6708)
## Problem

Not really a problem, just refactoring.

## Summary of changes

Separate authenticate from wake compute.

Do not call wake compute a second time if we managed to connect to
postgres, or if the compute info was not served from the cache.
2024-02-14 14:57:22 +00:00
Conrad Ludgate
f1347f2417 proxy: add more http logging (#6726)
## Problem

It is hard to see where time is spent during the HTTP flow.

## Summary of changes

Add a lot more logging for query state, and add a conn_id field to the
sql-over-http span.
2024-02-14 14:57:22 +00:00
Conrad Ludgate
30b295b017 proxy: some more parquet data (#6711)
## Summary of changes

add auth_method and database to the parquet logs
2024-02-14 14:57:22 +00:00
Anna Khanova
1cef395266 Proxy: copy bidirectional fork (#6720)
## Problem

`tokio::io::copy_bidirectional` doesn't close the connection once one of
the sides closes it. It's not really suitable for the postgres protocol.

## Summary of changes

Fork `copy_bidirectional` and initiate a shutdown for both connections.

---------

Co-authored-by: Conrad Ludgate <conradludgate@gmail.com>
2024-02-14 14:57:22 +00:00
John Spray
78d160f76d Merge pull request #6721 from neondatabase/releases/2024-02-12
Release 2024-02-12
2024-02-12 09:35:30 +00:00
Vlad Lazar
b9238059d6 Merge pull request #6617 from neondatabase/releases/2024-02-05
Release 2024-02-05
2024-02-05 12:50:38 +00:00
Arpad Müller
d0cb4b88c8 Don't preserve temp files on creation errors of delta layers (#6612)
There is currently no cleanup done after a delta layer creation error,
so partially written files can accumulate. The problem gets worse as the
operation is retried and more of them accumulate on the disk. Therefore,
delete them from disk (if something has been written to disk).
2024-02-05 09:58:18 +00:00
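The cleanup pattern can be sketched with std only; `write_layer_file` and the closure shape are hypothetical names, not the pageserver's actual API. If the creation step fails partway, the partial file is removed instead of being left to accumulate.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sketch: run a write step, and if it fails, remove whatever was
// already written. The original error is still what gets reported.
fn write_layer_file(
    path: &Path,
    write: impl FnOnce(&Path) -> io::Result<()>,
) -> io::Result<()> {
    let res = write(path);
    if res.is_err() {
        // Best-effort cleanup of the partial file.
        let _ = fs::remove_file(path);
    }
    res
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("layer.tmp-example");
    // Simulate a creation error after some bytes were written.
    let res = write_layer_file(&path, |p| {
        fs::write(p, b"partial data")?;
        Err(io::Error::new(io::ErrorKind::Other, "creation failed"))
    });
    assert!(res.is_err());
    assert!(!path.exists()); // the partial file was cleaned up
    println!("ok");
    Ok(())
}
```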
John Spray
1ec3e39d4e Merge pull request #6504 from neondatabase/releases/2024-01-29
Release 2024-01-29
2024-01-29 10:05:01 +00:00
John Spray
a1a74eef2c Merge pull request #6420 from neondatabase/releases/2024-01-22
Release 2024-01-22
2024-01-22 17:24:11 +00:00
John Spray
90e689adda pageserver: mark tenant broken when cancelling attach (#6430)
## Problem

When a tenant is in Attaching state, and waiting for the
`concurrent_tenant_warmup` semaphore, it also listens for the tenant
cancellation token. When that token fires, Tenant::attach drops out.
Meanwhile, Tenant::set_stopping waits forever for the tenant to exit
Attaching state.

Fixes: https://github.com/neondatabase/neon/issues/6423

## Summary of changes

- In the absence of a valid state for the tenant, it is set to Broken in
this path. A more elegant solution will require more refactoring, beyond
this minimal fix.

(cherry picked from commit 93572a3e99)
2024-01-22 16:20:57 +00:00
Christian Schwarz
f0b2d4b053 fixup(#6037): actually fix the issue, #6388 failed to do so (#6429)
Before this patch, the select! still returned immediately if `futs` was
empty. Must have tested a stale build in my manual testing of #6388.

(cherry picked from commit 15c0df4de7)
2024-01-22 15:23:12 +00:00
Anna Khanova
299d9474c9 Proxy: fix gc (#6426)
## Problem

Gc currently doesn't work properly.

## Summary of changes

Change statement on running gc.
2024-01-22 14:39:09 +01:00
Conrad Ludgate
7234208b36 bump shlex (#6421)
## Problem

https://rustsec.org/advisories/RUSTSEC-2024-0006

## Summary of changes

`cargo update -p shlex`

(cherry picked from commit 5559b16953)
2024-01-22 09:49:33 +00:00
Christian Schwarz
93450f11f5 Merge pull request #6354 from neondatabase/releases/2024-01-15
Release 2024-01-15

NB: the previous release PR https://github.com/neondatabase/neon/pull/6286 was accidentally merged by merge-by-squash instead of merge-by-merge-commit.
See https://github.com/neondatabase/neon/pull/6354#issuecomment-1891706321 for more context.
2024-01-15 14:30:25 +01:00
Christian Schwarz
2f0f9edf33 Merge remote-tracking branch 'origin/release' into releases/2024-01-15 2024-01-15 09:36:42 +00:00
Christian Schwarz
d424f2b7c8 empty commit so we can produce a merge commit 2024-01-15 09:36:22 +00:00
Christian Schwarz
21315e80bc Merge branch 'releases/2024-01-08--not-squashed' into releases/2024-01-15 2024-01-15 09:31:07 +00:00
vipvap
483b66d383 Merge branch 'release' into releases/2024-01-08 (not-squashed merge of #6286)
Release PR https://github.com/neondatabase/neon/pull/6286 got
accidentally merged-by-squash instead of merge-by-merge-commit.

This commit shows how things would look if 6286 had been
merged-by-squash.

```
git reset --hard 9f1327772
git merge --no-ff 5c0264b591
```

Co-authored-by: Christian Schwarz <christian@neon.tech>
2024-01-15 09:28:08 +00:00
vipvap
aa72a22661 Release 2024-01-08 (#6286)
Release 2024-01-08
2024-01-08 09:26:27 +00:00
Shany Pozin
5c0264b591 Merge branch 'release' into releases/2024-01-08 2024-01-08 09:34:06 +02:00
Arseny Sher
9f13277729 Merge pull request #6242 from neondatabase/releases/2024-01-02
Release 2024-01-02
2024-01-02 12:04:43 +04:00
Arseny Sher
54aa319805 Don't split WAL record across two XLogData's when sending from safekeepers.
As protocol demands. Not following this makes standby complain about corrupted
WAL in various ways.

https://neondb.slack.com/archives/C05L7D1JAUS/p1703774799114719
closes https://github.com/neondatabase/cloud/issues/9057
2024-01-02 10:54:00 +04:00
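The framing rule can be sketched with an illustrative (not the safekeeper's actual) chunker: group record lengths into XLogData payloads of at most `max_send` bytes without tearing any record across two messages; an oversized record gets a message of its own.

```rust
// Hypothetical sketch of boundary-aligned chunking: each inner Vec is
// one XLogData payload, holding only whole records.
fn chunk_records(record_lens: &[usize], max_send: usize) -> Vec<Vec<usize>> {
    let mut messages: Vec<Vec<usize>> = Vec::new();
    let mut current: Vec<usize> = Vec::new();
    let mut current_len = 0;
    for &len in record_lens {
        // Flush the current message if adding this record would tear
        // it across a message boundary.
        if current_len + len > max_send && !current.is_empty() {
            messages.push(std::mem::take(&mut current));
            current_len = 0;
        }
        current.push(len);
        current_len += len;
    }
    if !current.is_empty() {
        messages.push(current);
    }
    messages
}

fn main() {
    // Three 3-byte records with a 5-byte limit: no record is split.
    assert_eq!(chunk_records(&[3, 3, 3], 5), vec![vec![3], vec![3], vec![3]]);
    // A record larger than the limit travels alone rather than torn.
    assert_eq!(chunk_records(&[2, 10, 2], 5), vec![vec![2], vec![10], vec![2]]);
    println!("ok");
}
```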
Arseny Sher
4a227484bf Add large insertion and slow WAL sending to test_hot_standby.
To exercise MAX_SEND_SIZE sending from safekeeper; we've had a bug with WAL
records torn across several XLogData messages. Add failpoint to safekeeper to
slow down sending. Also check for corrupted WAL complaints in standby log.

Make the test a bit simpler in passing, e.g. we don't need explicit commits as
autocommit is enabled by default.

https://neondb.slack.com/archives/C05L7D1JAUS/p1703774799114719
https://github.com/neondatabase/cloud/issues/9057
2024-01-02 10:54:00 +04:00
Arseny Sher
2f83f85291 Add failpoint support to safekeeper.
Just a copy paste from pageserver.
2024-01-02 10:54:00 +04:00
Arseny Sher
d6cfcb0d93 Move failpoint support code to utils.
To enable them in safekeeper as well.
2024-01-02 10:54:00 +04:00
Arseny Sher
392843ad2a Fix safekeeper START_REPLICATION (term=n).
It was giving WAL only up to commit_lsn instead of flush_lsn, so recovery of
uncommitted WAL since cdb08f03 hung. Add a test for this.
2024-01-02 10:54:00 +04:00
Arseny Sher
bd4dae8f4a compute_ctl: kill postgres and sync-safekeepers on exit.
Otherwise they are left orphaned when compute_ctl is terminated with a
signal. It was invisible most of the time because normally neon_local or k8s
kills postgres directly and then compute_ctl finishes gracefully. However, in
some tests compute_ctl gets stuck waiting for sync-safekeepers which
intentionally never ends because safekeepers are offline, and we want to stop
compute_ctl without leaving orphans behind.

This is a rather rough approach which doesn't wait for child termination. A
better way would be to convert compute_ctl to async which would make waiting
easy.
2024-01-02 10:54:00 +04:00
Shany Pozin
b05fe53cfd Merge pull request #6240 from neondatabase/releases/2024-01-01
Release 2024-01-01
2024-01-01 11:07:30 +02:00
Christian Schwarz
c13a2f0df1 Merge pull request #6192 from neondatabase/releases/2023-12-19
Release 2023-12-19

We need to do a config change that requires restarting the pageservers.
Slip in two metrics-related commits that didn't make this week's regular release.
2023-12-19 14:52:47 +01:00
Christian Schwarz
39be366fc5 higher resolution histograms for getpage@lsn (#6177)
part of https://github.com/neondatabase/cloud/issues/7811
2023-12-19 13:46:59 +00:00
Christian Schwarz
6eda0a3158 [PRE-MERGE] fix metric pageserver_initial_logical_size_start_calculation
(This is a pre-merge cherry-pick of https://github.com/neondatabase/neon/pull/6191)

It wasn't being incremented.

Fixup of

    commit 1c88824ed0
    Author: Christian Schwarz <christian@neon.tech>
    Date:   Fri Dec 1 12:52:59 2023 +0100

        initial logical size calculation: add a bunch of metrics (#5995)
2023-12-19 13:46:55 +00:00
Shany Pozin
306c7a1813 Merge pull request #6173 from neondatabase/sasha_release_bypassrls_replication
Grant BYPASSRLS and REPLICATION explicitly to neon_superuser roles
2023-12-18 22:16:36 +02:00
Sasha Krassovsky
80be423a58 Grant BYPASSRLS and REPLICATION explicitly to neon_superuser roles 2023-12-18 10:22:36 -08:00
Shany Pozin
5dcfef82f2 Merge pull request #6163 from neondatabase/releases/2023-12-18
Release 2023-12-18-2
2023-12-18 15:34:17 +02:00
Christian Schwarz
e67b8f69c0 [PRE-MERGE] pageserver: Reduce tracing overhead in timeline::get #6115
Pre-merge `git merge --squash` of
https://github.com/neondatabase/neon/pull/6115

Lowering the tracing level in get_value_reconstruct_data and
get_or_maybe_download from info to debug reduces the overhead
of span creation in non-debug environments.
2023-12-18 13:39:48 +01:00
Shany Pozin
e546872ab4 Merge pull request #6158 from neondatabase/releases/2023-12-18
Release 2023-12-18
2023-12-18 14:24:34 +02:00
John Spray
322ea1cf7c pageserver: on-demand activation cleanups (#6157)
## Problem

#6112 added some logs and metrics: clean these up a bit:
- Avoid counting startup completions for tenants launched after startup
- exclude no-op cases from timing histograms 
- remove a rogue log messages
2023-12-18 11:14:19 +00:00
Vadim Kharitonov
3633742de9 Merge pull request #6121 from neondatabase/releases/2023-12-13
Release 2023-12-13
2023-12-13 12:39:43 +01:00
Joonas Koivunen
079d3a37ba Merge remote-tracking branch 'origin/release' into releases/2023-12-13
This handles the conflict introduced by the hotfix.
2023-12-13 10:07:19 +00:00
Vadim Kharitonov
a46e77b476 Merge pull request #6090 from neondatabase/releases/2023-12-11
Release 2023-12-11
2023-12-12 12:10:35 +01:00
Tristan Partin
a92702b01e Add submodule paths as safe directories as a precaution
The check-codestyle-rust-arm job requires this for some reason, so let's
just add them everywhere we do this workaround.
2023-12-11 22:00:35 +00:00
Tristan Partin
8ff3253f20 Fix git ownership issue in check-codestyle-rust-arm
We have this workaround for other jobs. Looks like this one was
forgotten about.
2023-12-11 22:00:35 +00:00
Joonas Koivunen
04b82c92a7 fix: accidental return Ok (#6106)
An error indicating request cancellation OR timeline shutdown was deemed
a reason to exit the background worker that calculates synthetic size.
Fix it so the error is only considered for avoiding the logging of such
errors.

This conflicted on tenant_shard_id having already replaced tenant_id on
`main`.
2023-12-11 21:41:36 +00:00
Vadim Kharitonov
e5bf423e68 Merge branch 'release' into releases/2023-12-11 2023-12-11 11:55:48 +01:00
Vadim Kharitonov
60af392e45 Merge pull request #6057 from neondatabase/vk/patch_timescale_for_production
Revert timescaledb for pg14 and pg15 (#6056)
2023-12-06 16:21:16 +01:00
Vadim Kharitonov
661fc41e71 Revert timescaledb for pg14 and pg15 (#6056)
```
could not start the compute node: compute is in state "failed": db error: ERROR: could not access file "$libdir/timescaledb-2.10.1": No such file or directory Caused by: ERROR: could not access file "$libdir/timescaledb-2.10.1": No such file or directory
```
2023-12-06 16:14:07 +01:00
Shany Pozin
702c488f32 Merge pull request #6022 from neondatabase/releases/2023-12-04
Release 2023-12-04
2023-12-05 17:03:28 +02:00
Sasha Krassovsky
45c5122754 Remove trusted from wal2json 2023-12-04 12:36:19 -08:00
Shany Pozin
558394f710 fix merge 2023-12-04 11:41:27 +02:00
Shany Pozin
73b0898608 Merge branch 'release' into releases/2023-12-04 2023-12-04 11:36:26 +02:00
Joonas Koivunen
e65be4c2dc Merge pull request #6013 from neondatabase/releases/2023-12-01-hotfix
fix: use create_new instead of create for mutex file
2023-12-01 15:35:56 +02:00
Joonas Koivunen
40087b8164 fix: use create_new instead of create for mutex file 2023-12-01 12:54:49 +00:00
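The difference matters for a mutex/lock file, and can be shown with a std-only sketch: `create(true)` happily opens an existing file and defeats mutual exclusion, while `create_new(true)` fails with `AlreadyExists` when another holder got there first.

```rust
use std::fs::OpenOptions;
use std::io::ErrorKind;

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("mutex-file-example");
    let _ = std::fs::remove_file(&path); // start clean

    // First holder creates the mutex file.
    OpenOptions::new().write(true).create_new(true).open(&path)?;

    // A second `create_new` must fail: that failure *is* the mutual
    // exclusion. Plain `create(true)` would silently succeed here.
    let second = OpenOptions::new().write(true).create_new(true).open(&path);
    assert_eq!(second.unwrap_err().kind(), ErrorKind::AlreadyExists);

    std::fs::remove_file(&path)?;
    println!("ok");
    Ok(())
}
```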
Shany Pozin
c762b59483 Merge pull request #5986 from neondatabase/Release-11-30-hotfix
Notify safekeeper readiness with systemd.
2023-11-30 10:01:05 +02:00
Arseny Sher
5d71601ca9 Notify safekeeper readiness with systemd.
To avoid downtime during deploy, as in busy regions initial load can currently
take ~30s.
2023-11-30 08:23:31 +03:00
Shany Pozin
a113c3e433 Merge pull request #5945 from neondatabase/release-2023-11-28-hotfix
Release 2023 11 28 hotfix
2023-11-28 08:14:59 +02:00
Anastasia Lubennikova
e81fc598f4 Update neon extension relocatable for existing installations (#5943) 2023-11-28 00:12:39 +00:00
Anastasia Lubennikova
48b845fa76 Make neon extension relocatable to allow SET SCHEMA (#5942) 2023-11-28 00:12:32 +00:00
Shany Pozin
27096858dc Merge pull request #5922 from neondatabase/releases/2023-11-27
Release 2023-11-27
2023-11-27 09:58:51 +02:00
Shany Pozin
4430d0ae7d Merge pull request #5876 from neondatabase/releases/2023-11-17
Release 2023-11-17
2023-11-20 09:11:58 +02:00
Joonas Koivunen
6e183aa0de Merge branch 'main' into releases/2023-11-17 2023-11-19 15:25:47 +00:00
Vadim Kharitonov
fd6d0b7635 Merge branch 'release' into releases/2023-11-17 2023-11-17 10:51:45 +01:00
Vadim Kharitonov
3710c32aae Merge pull request #5778 from neondatabase/releases/2023-11-03
Release 2023-11-03
2023-11-03 16:06:58 +01:00
Vadim Kharitonov
be83bee49d Merge branch 'release' into releases/2023-11-03 2023-11-03 11:18:15 +01:00
Alexander Bayandin
cf28e5922a Merge pull request #5685 from neondatabase/releases/2023-10-26
Release 2023-10-26
2023-10-27 10:42:12 +01:00
Em Sharnoff
7d384d6953 Bump vm-builder v0.18.2 -> v0.18.4 (#5666)
Only applicable change was neondatabase/autoscaling#584, setting
pgbouncer auth_dbname=postgres in order to keep superuser connections
from preventing database drops.
2023-10-26 20:15:45 +01:00
Em Sharnoff
4b3b37b912 Bump vm-builder v0.18.1 -> v0.18.2 (#5646)
Only applicable change was neondatabase/autoscaling#571, removing the
postgres_exporter flags `--auto-discover-databases` and
`--exclude-databases=...`
2023-10-26 20:15:29 +01:00
Shany Pozin
1d8d200f4d Merge pull request #5668 from neondatabase/sp/aux_files_cherry_pick
Cherry pick: Ignore missed AUX_FILES_KEY when generating image layer (#5660)
2023-10-26 10:08:16 +03:00
Konstantin Knizhnik
0d80d6ce18 Ignore missed AUX_FILES_KEY when generating image layer (#5660)
## Problem

Logical replication requires the new AUX_FILES_KEY, which is absent in
existing databases.
We do not have a function to check whether a key exists in our KV storage,
so I have to handle the error in the `list_aux_files` method.
But this key is also included in the keyspace range and accessed by the
`create_image_layer` method.

## Summary of changes

Check if AUX_FILES_KEY exists before including it in the keyspace.

---------

Co-authored-by: Konstantin Knizhnik <knizhnik@neon.tech>
Co-authored-by: Shany Pozin <shany@neon.tech>
Co-authored-by: Arpad Müller <arpad-m@users.noreply.github.com>
2023-10-26 09:30:28 +03:00
Shany Pozin
f653ee039f Merge pull request #5638 from neondatabase/releases/2023-10-24
Release 2023-10-24
2023-10-24 12:10:52 +03:00
Em Sharnoff
e614a95853 Merge pull request #5610 from neondatabase/sharnoff/rc-2023-10-20-vm-monitor-fixes
Release 2023-10-20: vm-monitor memory.high throttling fixes
2023-10-20 00:11:06 -07:00
Em Sharnoff
850db4cc13 vm-monitor: Deny not fail downscale if no memory stats yet (#5606)
Fixes an issue we observed on staging that happens when the
autoscaler-agent attempts to immediately downscale the VM after binding,
which is typical for pooled computes.

The issue was occurring because the autoscaler-agent was requesting
downscaling before the vm-monitor had gathered sufficient cgroup memory
stats to be confident in approving it. When the vm-monitor returned an
internal error instead of denying downscaling, the autoscaler-agent
retried the connection and immediately hit the same issue (in part
because cgroup stats are collected per-connection, rather than
globally).
2023-10-19 21:56:55 -07:00
Em Sharnoff
8a316b1277 vm-monitor: Log full error on message handling failure (#5604)
There's currently an issue with the vm-monitor on staging that's not
really feasible to debug because the current display impl gives no
context to the errors (just says "failed to downscale").

Logging the full error should help.

For communications with the autoscaler-agent, it's ok to only provide
the outermost cause, because we can cross-reference with the VM logs.
At some point in the future, we may want to change that.
2023-10-19 21:56:50 -07:00
Em Sharnoff
4d13bae449 vm-monitor: Switch from memory.high to polling memory.stat (#5524)
tl;dr it's really hard to avoid throttling from memory.high, and it
counts tmpfs & page cache usage, so it's also hard to make sense of.

In the interest of fixing things quickly with something that should be
*good enough*, this PR switches to instead periodically fetch memory
statistics from the cgroup's memory.stat and use that data to determine
if and when we should upscale.

This PR fixes #5444, which has a lot more detail on the difficulties
we've hit with memory.high. This PR also supersedes #5488.
2023-10-19 21:56:36 -07:00
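The polling approach reduces to parsing a key/value text file: cgroup v2 `memory.stat` consists of `key value` lines. A hedged sketch (field choice and threshold here are illustrative, not vm-monitor's actual logic):

```rust
// Parse one field from a cgroup v2 memory.stat blob, which consists
// of `key value` lines, e.g. "anon 4096". Illustrative only.
fn memory_stat_field(stat: &str, key: &str) -> Option<u64> {
    for line in stat.lines() {
        let mut parts = line.split_whitespace();
        if parts.next() == Some(key) {
            return parts.next()?.parse().ok();
        }
    }
    None
}

fn main() {
    let stat = "anon 1048576\nfile 8192\nkernel_stack 0";
    assert_eq!(memory_stat_field(stat, "anon"), Some(1_048_576));
    assert_eq!(memory_stat_field(stat, "file"), Some(8192));
    assert_eq!(memory_stat_field(stat, "missing"), None);
    // A hypothetical upscale decision: anon usage above a threshold.
    let should_upscale = memory_stat_field(stat, "anon").unwrap() > 512 * 1024;
    assert!(should_upscale);
    println!("ok");
}
```

Unlike memory.high accounting, the fields here let the monitor separate anonymous memory from tmpfs and page cache when deciding whether to upscale.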
Vadim Kharitonov
49377abd98 Merge pull request #5577 from neondatabase/releases/2023-10-17
Release 2023-10-17
2023-10-17 12:21:20 +02:00
Christian Schwarz
a6b2f4e54e limit imitate accesses concurrency, using same semaphore as compactions (#5578)
Before this PR, when we restarted pageserver, we'd see a rush of
`$number_of_tenants` concurrent eviction tasks starting to do imitate
accesses building up in the period of `[init_order allows activations,
$random_access_delay + EvictionPolicyLayerAccessThreshold::period]`.

We simply cannot handle that degree of concurrent IO.

We already solved the problem for compactions by adding a semaphore.
So, this PR shares that semaphore for use by evictions.

Part of https://github.com/neondatabase/neon/issues/5479

Which is again part of https://github.com/neondatabase/neon/issues/4743

Risks / Changes In System Behavior
==================================

* we don't do evictions as timely as we currently do
* we log a bunch of warnings about eviction taking too long
* imitate accesses and compactions compete for the same concurrency
limit, so they'll slow each other down through this shared semaphore

Changes
=======

- Move the `CONCURRENT_COMPACTIONS` semaphore into `tasks.rs`
- Rename it to `CONCURRENT_BACKGROUND_TASKS`
- Use it also for the eviction imitate accesses:
    - Imitate accesses are both per-TIMELINE and per-TENANT
    - The per-TENANT is done through coalescing all the per-TIMELINE
      tasks via a tokio mutex `eviction_task_tenant_state`.
    - We acquire the CONCURRENT_BACKGROUND_TASKS permit early, at the
      beginning of the eviction iteration, much before the imitate
      accesses start (and they may not even start at all in the given
      iteration, as they happen only every $threshold).
    - Acquiring early is **sub-optimal** because when the per-timeline
      tasks coalesce on the `eviction_task_tenant_state` mutex,
      they are already holding a CONCURRENT_BACKGROUND_TASKS permit.
    - It's also unfair because tenants with many timelines win
      the CONCURRENT_BACKGROUND_TASKS more often.
    - I don't think there's another way though, without refactoring
      more of the imitate accesses logic, e.g, making it all per-tenant.
- Add metrics for queue depth behind the semaphore.
I found these very useful to understand what work is queued in the
system.

    - The metrics are tagged by the new `BackgroundLoopKind`.
    - On a blank slate, I would have used `TaskKind`, but we already had
      pre-existing labels whose names didn't map exactly to task kind.
      Also the task kind is kind of a lower-level detail, so, I think
it's fine to have a separate enum to identify background work kinds.

Future Work
===========

I guess I could move the eviction tasks from a ticker to "sleep for
$period".
The benefit would be that the semaphore automatically "smears" the
eviction task scheduling over time, so, we only have the rush on restart
but a smeared-out rush afterward.

The downside is that this perverts the meaning of "$period", as we'd
actually not run the eviction at a fixed period. It also means that the
"took too long" warning & metric become meaningless.

Then again, that is already the case for the compaction and gc tasks,
which do sleep for `$period` instead of using a ticker.

(cherry picked from commit 9256788273)
2023-10-17 12:16:26 +02:00
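The shared-semaphore idea can be sketched with a std-only counting semaphore (tokio's `Semaphore` plays this role in the real code): every background task, whether compaction or eviction imitation, must hold a permit, which caps concurrency no matter how many tenants rush in at startup.

```rust
use std::sync::{Arc, Condvar, Mutex};

// Minimal counting semaphore: a std-only stand-in for the
// tokio::sync::Semaphore used by the real code.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let sem = Arc::new(Semaphore::new(2)); // CONCURRENT_BACKGROUND_TASKS
    let running = Arc::new(Mutex::new((0usize, 0usize))); // (current, max)
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let (sem, running) = (sem.clone(), running.clone());
            std::thread::spawn(move || {
                sem.acquire(); // permit gates the whole "iteration"
                {
                    let mut r = running.lock().unwrap();
                    r.0 += 1;
                    r.1 = r.1.max(r.0);
                }
                std::thread::sleep(std::time::Duration::from_millis(10));
                running.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // However many tasks were spawned, at most 2 ran at once.
    assert!(running.lock().unwrap().1 <= 2);
    println!("ok");
}
```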
Shany Pozin
face60d50b Merge pull request #5526 from neondatabase/releases/2023-10-11
Release 2023-10-11
2023-10-11 11:16:39 +03:00
Shany Pozin
9768aa27f2 Merge pull request #5516 from neondatabase/releases/2023-10-10
Release 2023-10-10
2023-10-10 14:16:47 +03:00
Shany Pozin
96b2e575e1 Merge pull request #5445 from neondatabase/releases/2023-10-03
Release 2023-10-03
2023-10-04 13:53:37 +03:00
Alexander Bayandin
7222777784 Update checksums for pg_jsonschema & pg_graphql (#5455)
## Problem

Folks have re-tagged releases for `pg_jsonschema` and `pg_graphql` (to
increase timeouts on their CI). For us these are no-op changes,
but unfortunately this will cause our builds to fail due to a checksum
mismatch (this might not strike right away because of the build cache).
- 8ba7c7be9d
- aa7509370a

## Summary of changes
- `pg_jsonschema` update checksum
- `pg_graphql` update checksum
2023-10-03 18:44:30 +01:00
Em Sharnoff
5469fdede0 Merge pull request #5422 from neondatabase/sharnoff/rc-2023-09-28-fix-restart-on-postmaster-SIGKILL
Release 2023-09-28: Fix (lack of) restart on neonvm postmaster SIGKILL
2023-09-28 10:48:51 -07:00
MMeent
72aa6b9fdd Fix neon_zeroextend's WAL logging (#5387)
When you log more than a few blocks, you need to reserve the space in
advance. We didn't do that, so we got errors. Now we do that, and
shouldn't get errors.
2023-09-28 09:37:28 -07:00
Em Sharnoff
ae0634b7be Bump vm-builder v0.17.11 -> v0.17.12 (#5407)
Only relevant change is neondatabase/autoscaling#534 - refer there for
more details.
2023-09-28 09:28:04 -07:00
Shany Pozin
70711f32fa Merge pull request #5375 from neondatabase/releases/2023-09-26
Release 2023-09-26
2023-09-26 15:19:45 +03:00
Vadim Kharitonov
52a88af0aa Merge pull request #5336 from neondatabase/releases/2023-09-19
Release 2023-09-19
2023-09-19 11:16:43 +02:00
Alexander Bayandin
b7a43bf817 Merge branch 'release' into releases/2023-09-19 2023-09-19 09:07:20 +01:00
Alexander Bayandin
dce91b33a4 Merge pull request #5318 from neondatabase/releases/2023-09-15-1
Postgres 14/15: Use previous extensions versions
2023-09-15 16:30:44 +01:00
Alexander Bayandin
23ee4f3050 Revert plv8 only 2023-09-15 15:45:23 +01:00
Alexander Bayandin
46857e8282 Postgres 14/15: Use previous extensions versions 2023-09-15 15:27:00 +01:00
Alexander Bayandin
368ab0ce54 Merge pull request #5313 from neondatabase/releases/2023-09-15
Release 2023-09-15
2023-09-15 10:39:56 +01:00
Konstantin Knizhnik
a5987eebfd References to old and new blocks were mixed in xlog_heap_update handler (#5312)
## Problem

See https://neondb.slack.com/archives/C05L7D1JAUS/p1694614585955029

https://www.notion.so/neondatabase/Duplicate-key-issue-651627ce843c45188fbdcb2d30fd2178

## Summary of changes

Swap old/new block references

---------

Co-authored-by: Konstantin Knizhnik <knizhnik@neon.tech>
Co-authored-by: Heikki Linnakangas <heikki@neon.tech>
2023-09-15 10:11:41 +01:00
Alexander Bayandin
6686ede30f Update checksum for pg_hint_plan (#5309)
## Problem

The checksum for `pg_hint_plan` doesn't match:
```
sha256sum: WARNING: 1 computed checksum did NOT match
```

Ref
https://github.com/neondatabase/neon/actions/runs/6185715461/job/16793609251?pr=5307

It seems that the release was retagged yesterday:
https://github.com/ossc-db/pg_hint_plan/releases/tag/REL16_1_6_0

I don't see any malicious changes from 15_1.5.1:
https://github.com/ossc-db/pg_hint_plan/compare/REL15_1_5_1...REL16_1_6_0,
so it should be ok to update.

## Summary of changes
- Update checksum for `pg_hint_plan` 16_1.6.0
2023-09-15 09:54:42 +01:00
Em Sharnoff
373c7057cc vm-monitor: Fix cgroup throttling (#5303)
I believe this (not actual IO problems) is the cause of the "disk speed
issue" that we've had for VMs recently. See e.g.:

1. https://neondb.slack.com/archives/C03H1K0PGKH/p1694287808046179?thread_ts=1694271790.580099&cid=C03H1K0PGKH
2. https://neondb.slack.com/archives/C03H1K0PGKH/p1694511932560659

The vm-informant (and now, the vm-monitor, its replacement) is supposed
to gradually increase the `neon-postgres` cgroup's memory.high value,
because otherwise the kernel will throttle all the processes in the
cgroup.

This PR fixes a bug with the vm-monitor's implementation of this
behavior.

---

Other references, for the vm-informant's implementation:

- Original issue: neondatabase/autoscaling#44
- Original PR: neondatabase/autoscaling#223
2023-09-15 09:54:42 +01:00
Shany Pozin
7d6ec16166 Merge pull request #5296 from neondatabase/releases/2023-09-13
Release 2023-09-13
2023-09-13 13:49:14 +03:00
Shany Pozin
0e6fdc8a58 Merge pull request #5283 from neondatabase/releases/2023-09-12
Release 2023-09-12
2023-09-12 14:56:47 +03:00
Christian Schwarz
521438a5c6 fix deadlock around TENANTS (#5285)
The sequence that can lead to a deadlock:

1. DELETE request gets all the way to `tenant.shutdown(progress,
false).await.is_err() ` , while holding TENANTS.read()
2. POST request for tenant creation comes in, calls `tenant_map_insert`,
it does `let mut guard = TENANTS.write().await;`
3. Something that `tenant.shutdown()` needs to wait for needs a
`TENANTS.read().await`.
The only case identified in exhaustive manual scanning of the code base
is this one:
Imitate size access does `get_tenant().await`, which does
`TENANTS.read().await` under the hood.

In the above case (1) waits for (3), (3)'s read-lock request is queued
behind (2)'s write-lock, and (2) waits for (1).
Deadlock.

I made a reproducer/proof-that-above-hypothesis-holds in
https://github.com/neondatabase/neon/pull/5281 , but, it's not ready for
merge yet and we want the fix _now_.

fixes https://github.com/neondatabase/neon/issues/5284
2023-09-12 14:13:13 +03:00
Vadim Kharitonov
07d7874bc8 Merge pull request #5202 from neondatabase/releases/2023-09-05
Release 2023-09-05
2023-09-05 12:16:06 +02:00
Anastasia Lubennikova
1804111a02 Merge pull request #5161 from neondatabase/rc-2023-08-31
Release 2023-08-31
2023-08-31 16:53:17 +03:00
Arthur Petukhovsky
cd0178efed Merge pull request #5150 from neondatabase/release-sk-fix-active-timeline
Release 2023-08-30
2023-08-30 11:43:39 +02:00
Shany Pozin
333574be57 Merge pull request #5133 from neondatabase/releases/2023-08-29
Release 2023-08-29
2023-08-29 14:02:58 +03:00
Alexander Bayandin
79a799a143 Merge branch 'release' into releases/2023-08-29 2023-08-29 11:17:57 +01:00
Conrad Ludgate
9da06af6c9 Merge pull request #5113 from neondatabase/release-http-connection-fix
Release 2023-08-25
2023-08-25 17:21:35 +01:00
Conrad Ludgate
ce1753d036 proxy: dont return connection pending (#5107)
## Problem

We were returning Pending when a connection had a notice/notification
(introduced recently in #5020). When returning pending, the runtime
assumes you will call `cx.waker().wake()` in order to continue
processing.

We weren't doing that, so the connection task would get stuck

## Summary of changes

Don't return pending. Loop instead
2023-08-25 16:42:30 +01:00
Alek Westover
67db8432b4 Fix cargo deny errors (#5068)
## Problem
cargo deny lint broken

Links to the CVEs:

[rustsec.org/advisories/RUSTSEC-2023-0052](https://rustsec.org/advisories/RUSTSEC-2023-0052)

[rustsec.org/advisories/RUSTSEC-2023-0053](https://rustsec.org/advisories/RUSTSEC-2023-0053)
One is fixed; the other one isn't, so we allow it (for now) to unbreak
CI. Later we'll try to get rid of webpki in favour of the rustls
fork.

## Summary of changes
```
+ignore = ["RUSTSEC-2023-0052"]
```
2023-08-25 16:42:30 +01:00
Vadim Kharitonov
4e2e44e524 Enable neon-pool-opt-in (#5062) 2023-08-22 09:06:14 +01:00
Vadim Kharitonov
ed786104f3 Merge pull request #5060 from neondatabase/releases/2023-08-22
Release 2023-08-22
2023-08-22 09:41:02 +02:00
Stas Kelvich
84b74f2bd1 Merge pull request #4997 from neondatabase/sk/proxy-release-23-07-15
Fix lint
2023-08-15 18:54:20 +03:00
Arthur Petukhovsky
fec2ad6283 Fix lint 2023-08-15 18:49:02 +03:00
Stas Kelvich
98eebd4682 Merge pull request #4996 from neondatabase/sk/proxy_release
Disable neon-pool-opt-in
2023-08-15 18:37:50 +03:00
Arthur Petukhovsky
2f74287c9b Disable neon-pool-opt-in 2023-08-15 18:34:17 +03:00
Shany Pozin
aee1bf95e3 Merge pull request #4990 from neondatabase/releases/2023-08-15
Release 2023-08-15
2023-08-15 15:34:38 +03:00
Shany Pozin
b9de9d75ff Merge branch 'release' into releases/2023-08-15 2023-08-15 14:35:00 +03:00
Stas Kelvich
7943b709e6 Merge pull request #4940 from neondatabase/sk/release-23-05-25-proxy-fixup
Release: proxy retry fixup
2023-08-09 13:53:19 +03:00
Conrad Ludgate
d7d066d493 proxy: delay auth on retry (#4929)
## Problem

When an endpoint is shutting down, it can take a few seconds. Currently
when starting a new compute, this causes an "endpoint is in transition"
error. We need to add delays before retrying to ensure that we allow
time for the endpoint to shutdown properly.

## Summary of changes

Adds a delay before retrying in auth. connect_to_compute already has
this delay
2023-08-09 12:54:24 +03:00
Felix Prasanna
e78ac22107 release fix: revert vm builder bump from 0.13.1 -> 0.15.0-alpha1 (#4932)
This reverts commit 682dfb3a31.

hotfix for a CLI arg issue in the monitor
2023-08-08 21:08:46 +03:00
Vadim Kharitonov
76a8f2bb44 Merge pull request #4923 from neondatabase/releases/2023-08-08
Release 2023-08-08
2023-08-08 11:44:38 +02:00
Vadim Kharitonov
8d59a8581f Merge branch 'release' into releases/2023-08-08 2023-08-08 10:54:34 +02:00
Vadim Kharitonov
b1ddd01289 Define NEON_SMGR to make it possible for extensions to use Neon SMG API (#4889)
Co-authored-by: Konstantin Knizhnik <knizhnik@garret.ru>
Co-authored-by: Konstantin Knizhnik <knizhnik@neon.tech>
2023-08-03 16:28:31 +03:00
Alexander Bayandin
6eae4fc9aa Release 2023-08-02: update pg_embedding (#4877)
Cherry-picking ca4d71a954 from `main` into
the `release`

Co-authored-by: Vadim Kharitonov <vadim2404@users.noreply.github.com>
2023-08-03 08:48:09 +02:00
Christian Schwarz
765455bca2 Merge pull request #4861 from neondatabase/releases/2023-08-01--2-fix-pipeline
ci: fix upload-postgres-extensions-to-s3 job
2023-08-01 13:22:07 +02:00
Christian Schwarz
4204960942 ci: fix upload-postgres-extensions-to-s3 job
commit

	commit 5f8fd640bf
	Author: Alek Westover <alek.westover@gmail.com>
	Date:   Wed Jul 26 08:24:03 2023 -0400

	    Upload Test Remote Extensions (#4792)

switched to using the release tag instead of `latest`, but the
`promote-images` job only uploads `latest` to the prod ECR.

The switch to using the release tag was good in principle, but we're
reverting that part to make the release pipeline work.

Note that a proper fix should abandon use of `:latest` tag
at all: currently, if a `main` pipeline runs concurrently
with a `release` pipeline, the `release` pipeline may end
up using the `main` pipeline's images.
2023-08-01 12:01:45 +02:00
Christian Schwarz
67345d66ea Merge pull request #4858 from neondatabase/releases/2023-08-01
Release 2023-08-01
2023-08-01 10:44:01 +02:00
Shany Pozin
2266ee5971 Merge pull request #4803 from neondatabase/releases/2023-07-25
Release 2023-07-25
2023-07-25 14:21:07 +03:00
Shany Pozin
b58445d855 Merge pull request #4746 from neondatabase/releases/2023-07-18
Release 2023-07-18
2023-07-18 14:45:39 +03:00
Conrad Ludgate
36050e7f3d Merge branch 'release' into releases/2023-07-18 2023-07-18 12:00:09 +01:00
Alexander Bayandin
33360ed96d Merge pull request #4705 from neondatabase/release-2023-07-12
Release 2023-07-12 (only proxy)
2023-07-12 19:44:36 +01:00
Conrad Ludgate
39a28d1108 proxy wake_compute loop (#4675)
## Problem

If we fail to wake up the compute node, a subsequent connect attempt
will definitely fail. However, kubernetes won't fail the connection
immediately; instead it hangs until we time out (10s).

## Summary of changes

Refactor the loop to allow fast retries of compute_wake and to skip a
connect attempt.
2023-07-12 18:40:11 +01:00
Conrad Ludgate
efa6aa134f allow repeated IO errors from compute node (#4624)
## Problem

#4598 compute nodes are not accessible some time after wake up due to
kubernetes DNS not being fully propagated.

## Summary of changes

Update connect retry mechanism to support handling IO errors and
sleeping for 100ms

## Checklist before requesting a review

- [x] I have performed a self-review of my code.
- [ ] If it is a core feature, I have added thorough tests.
- [ ] Do we need to implement analytics? if so did you add the relevant
metrics to the dashboard?
- [ ] If this PR requires public announcement, mark it with
/release-notes label and add several sentences in this section.
2023-07-12 18:40:06 +01:00
Alexander Bayandin
2c724e56e2 Merge pull request #4646 from neondatabase/releases/2023-07-06-hotfix
Release 2023-07-06 (add pg_embedding extension only)
2023-07-06 12:19:52 +01:00
Alexander Bayandin
feff887c6f Compile pg_embedding extension (#4634)
```
CREATE EXTENSION embedding;
CREATE TABLE t (val real[]);
INSERT INTO t (val) VALUES ('{0,0,0}'), ('{1,2,3}'), ('{1,1,1}'), (NULL);
CREATE INDEX ON t USING hnsw (val) WITH (maxelements = 10, dims=3, m=3);
INSERT INTO t (val) VALUES (array[1,2,4]);

SELECT * FROM t ORDER BY val <-> array[3,3,3];
   val   
---------
 {1,2,3}
 {1,2,4}
 {1,1,1}
 {0,0,0}
 
(5 rows)
```
2023-07-06 09:39:41 +01:00
Vadim Kharitonov
353d915fcf Merge pull request #4633 from neondatabase/releases/2023-07-05
Release 2023-07-05
2023-07-05 15:10:47 +02:00
Vadim Kharitonov
2e38098cbc Merge branch 'release' into releases/2023-07-05 2023-07-05 12:41:48 +02:00
Vadim Kharitonov
a6fe5ea1ac Merge pull request #4571 from neondatabase/releases/2023-06-27
Release 2023-06-27
2023-06-27 12:55:33 +02:00
Vadim Kharitonov
05b0aed0c1 Merge branch 'release' into releases/2023-06-27 2023-06-27 12:22:12 +02:00
Alex Chi Z
cd1705357d Merge pull request #4561 from neondatabase/releases/2023-06-23-hotfix
Release 2023-06-23 (pageserver-only)
2023-06-23 15:38:50 -04:00
Christian Schwarz
6bc7561290 don't use MGMT_REQUEST_RUNTIME for consumption metrics synthetic size worker
The consumption metrics synthetic size worker does logical size calculation.
Logical size calculation currently does synchronous disk IO.
This blocks the MGMT_REQUEST_RUNTIME's executor threads, starving other futures.

While there's work underway to move the synchronous disk IO into spawn_blocking,
the quickfix here is to use the BACKGROUND_RUNTIME instead of MGMT_REQUEST_RUNTIME.

Actually it's not just a quickfix. We simply shouldn't be blocking MGMT_REQUEST_RUNTIME
executor threads on CPU or sync disk IO.
That work isn't done yet, as many of the mgmt tasks still _do_ disk IO.
But it's not as intensive as the logical size calculations that we're fixing here.

While we're at it, fix disk-usage-based eviction in a similar way.
It wasn't the culprit here, according to prod logs, but it can theoretically be
a little CPU-intensive.

More context, including graphs from Prod:
https://neondb.slack.com/archives/C03F5SM1N02/p1687541681336949

(cherry picked from commit d6e35222ea)
2023-06-23 20:54:07 +02:00
Christian Schwarz
fbd3ac14b5 Merge pull request #4544 from neondatabase/releases/2023-06-21-hotfix
Release 2023-06-21 (fixup for post-merge failed 2023-06-20)
2023-06-21 16:54:34 +03:00
Christian Schwarz
e437787c8f cargo update -p openssl (#4542)
To unblock release
https://github.com/neondatabase/neon/pull/4536#issuecomment-1600678054

Context: https://rustsec.org/advisories/RUSTSEC-2023-0044
2023-06-21 15:52:56 +03:00
Christian Schwarz
3460dbf90b Merge pull request #4536 from neondatabase/releases/2023-06-20
Release 2023-06-20 (actually 2023-06-21)
2023-06-21 14:19:14 +03:00
Vadim Kharitonov
6b89d99677 Merge pull request #4521 from neondatabase/release_2023-06-15
Release 2023 06 15
2023-06-15 17:40:01 +02:00
Vadim Kharitonov
6cc8ea86e4 Merge branch 'main' into release_2023-06-15 2023-06-15 16:50:44 +02:00
Shany Pozin
e62a492d6f Merge pull request #4486 from neondatabase/releases/2023-06-13
Release 2023-06-13
2023-06-13 15:21:35 +03:00
Alexey Kondratov
a475cdf642 [compute_ctl] Fix logging if catalog updates are skipped (#4480)
Otherwise, it wasn't clear from the log when Postgres had started up
completely in the case where catalog updates were skipped.

Follow-up for 4936ab6
2023-06-13 13:37:24 +02:00
Stas Kelvich
7002c79a47 Merge pull request #4447 from neondatabase/release_proxy_08-06-2023
Release proxy 08 06 2023
2023-06-08 21:02:54 +03:00
Vadim Kharitonov
ee6cf357b4 Merge pull request #4427 from neondatabase/releases/2023-06-06
Release 2023-06-06
2023-06-06 14:42:21 +02:00
Vadim Kharitonov
e5c2086b5f Merge branch 'release' into releases/2023-06-06 2023-06-06 12:33:56 +02:00
Shany Pozin
5f1208296a Merge pull request #4395 from neondatabase/releases/2023-06-01
Release 2023-06-01
2023-06-01 10:58:00 +03:00
Stas Kelvich
88e8e473cd Merge pull request #4345 from neondatabase/release-23-05-25-proxy
Release 23-05-25, take 3
2023-05-25 19:40:43 +03:00
Stas Kelvich
b0a77844f6 Add SQL-over-HTTP endpoint to Proxy
This commit introduces an SQL-over-HTTP endpoint in the proxy, with a JSON
response structure resembling that of the node-postgres driver. This method,
using HTTP POST, achieves smaller amortized latencies in edge setups due to
fewer round trips and an enhanced open connection reuse by the v8 engine.

This update involves several intricacies:
1. SQL injection protection: We employed the extended query protocol, modifying
   the rust-postgres driver to send queries in one roundtrip using a text
   protocol rather than binary, bypassing potential issues like those identified
   in https://github.com/sfackler/rust-postgres/issues/1030.

2. Postgres type compatibility: As not all postgres types have binary
   representations (e.g., acl's in pg_class), we adjusted rust-postgres to
   respond with text protocol, simplifying serialization and fixing queries with
   text-only types in response.

3. Data type conversion: Considering JSON supports fewer data types than
   Postgres, we perform conversions where possible, passing all other types as
   strings. Key conversions include:
   - postgres int2, int4, float4, float8 -> json number (NaN and Inf remain
     text)
   - postgres bool, null, text -> json bool, null, string
   - postgres array -> json array
   - postgres json and jsonb -> json object

4. Alignment with node-postgres: To facilitate integration with js libraries,
   we've matched the response structure of node-postgres, returning command tags
   and column oids. Command tag capturing was added to the rust-postgres
   functionality as part of this change.
2023-05-25 17:59:17 +03:00
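Point 3's conversion rules can be illustrated with a toy mapper from a postgres value (type name plus its text-protocol form) to a JSON fragment; the function name and signature are assumptions for illustration, not the proxy's API.

```rust
// Toy mapper for the commit's data-type conversion rules:
// numbers become JSON numbers (NaN/Inf stay text), bool/json pass
// through, and everything else is emitted as a JSON string.
fn pg_to_json(pg_type: &str, text: &str) -> String {
    match pg_type {
        "int2" | "int4" | "float4" | "float8" => match text.parse::<f64>() {
            Ok(n) if n.is_finite() => text.to_string(),
            _ => format!("{:?}", text), // "NaN", "Infinity" remain strings
        },
        "bool" => text.to_string(),           // "true" / "false"
        "json" | "jsonb" => text.to_string(), // already JSON
        _ => format!("{:?}", text),           // everything else as a string
    }
}

fn main() {
    assert_eq!(pg_to_json("int4", "42"), "42");
    assert_eq!(pg_to_json("float8", "NaN"), "\"NaN\"");
    assert_eq!(pg_to_json("bool", "true"), "true");
    assert_eq!(pg_to_json("text", "hello"), "\"hello\"");
    println!("ok");
}
```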
Vadim Kharitonov
1baf464307 Merge pull request #4309 from neondatabase/releases/2023-05-23
Release 2023-05-23
2023-05-24 11:56:54 +02:00
Alexander Bayandin
e9b8e81cea Merge branch 'release' into releases/2023-05-23 2023-05-23 12:54:08 +01:00
Alexander Bayandin
85d6194aa4 Fix regress-tests job for Postgres 15 on release branch (#4254)
## Problem

Compatibility tests don't support Postgres 15 yet, but we're still
trying to upload compatibility snapshot (which we do not collect).

Ref
https://github.com/neondatabase/neon/actions/runs/4991394158/jobs/8940369368#step:4:38129

## Summary of changes

Add `pg_version` parameter to `run-python-test-set` actions and do not
upload compatibility snapshot for Postgres 15
2023-05-16 17:19:12 +01:00
Vadim Kharitonov
333a7a68ef Merge pull request #4245 from neondatabase/releases/2023-05-16
Release 2023-05-16
2023-05-16 13:38:40 +02:00
Vadim Kharitonov
6aa4e41bee Merge branch 'release' into releases/2023-05-16 2023-05-16 12:48:23 +02:00
Joonas Koivunen
840183e51f try: higher page_service timeouts to isolate an issue 2023-05-11 16:24:53 +03:00
Shany Pozin
cbccc94b03 Merge pull request #4184 from neondatabase/releases/2023-05-09
Release 2023-05-09
2023-05-09 15:30:36 +03:00
Stas Kelvich
fce227df22 Merge pull request #4163 from neondatabase/main
Release 23-05-05
2023-05-05 15:56:23 +03:00
Stas Kelvich
bd787e800f Merge pull request #4133 from neondatabase/main
Release 23-04-01
2023-05-01 18:52:46 +03:00
Shany Pozin
4a7704b4a3 Merge pull request #4131 from neondatabase/sp/hotfix_adding_sks_us_west
Hotfix: Adding 4 new pageservers and two sets of safekeepers to us west 2
2023-05-01 15:17:38 +03:00
Shany Pozin
ff1119da66 Add 2 new sets of safekeepers to us-west2 2023-05-01 14:35:31 +03:00
Shany Pozin
4c3ba1627b Add 4 new Pageservers for retool launch 2023-05-01 14:34:38 +03:00
Vadim Kharitonov
1407174fb2 Merge pull request #4110 from neondatabase/vk/release_2023-04-28
Release 2023 04 28
2023-04-28 17:43:16 +02:00
Vadim Kharitonov
ec9dcb1889 Merge branch 'release' into vk/release_2023-04-28 2023-04-28 16:32:26 +02:00
Joonas Koivunen
d11d781afc revert: "Add check for duplicates of generated image layers" (#4104)
This reverts commit 732acc5.

Reverted PR: #3869

As noted in PR #4094, we do in fact try to insert duplicates into the
layer map, if L0->L1 compaction is interrupted. We do not have a proper
fix for that right now, and we are in a hurry to make a release to
production, so revert the changes related to this to the state that we
have in production currently. We know that we have a bug here, but
better to live with the bug that we've had in production for a long
time, than rush a fix to production without testing it in staging first.

Cc: #4094, #4088
2023-04-28 16:31:35 +02:00
Anastasia Lubennikova
4e44565b71 Merge pull request #4000 from neondatabase/releases/2023-04-11
Release 2023-04-11
2023-04-11 17:47:41 +03:00
Stas Kelvich
4ed51ad33b Add more proxy cnames 2023-04-11 15:59:35 +03:00
Arseny Sher
1c1ebe5537 Merge pull request #3946 from neondatabase/releases/2023-04-04
Release 2023-04-04
2023-04-04 14:38:40 +04:00
Christian Schwarz
c19cb7f386 Merge pull request #3935 from neondatabase/releases/2023-04-03
Release 2023-04-03
2023-04-03 16:19:49 +02:00
Vadim Kharitonov
4b97d31b16 Merge pull request #3896 from neondatabase/releases/2023-03-28
Release 2023-03-28
2023-03-28 17:58:06 +04:00
Shany Pozin
923ade3dd7 Merge pull request #3855 from neondatabase/releases/2023-03-21
Release 2023-03-21
2023-03-21 13:12:32 +02:00
Arseny Sher
b04e711975 Merge pull request #3825 from neondatabase/release-2023-03-15
Release 2023.03.15
2023-03-15 15:38:00 +03:00
Arseny Sher
afd0a6b39a Forward framed read buf contents to compute before proxy pass.
Otherwise they get lost. Normally buffer is empty before proxy pass, but this is
not the case with the pipeline mode of our npm driver; fixes the connection
hangup introduced by b80fe41af3 for it.

fixes https://github.com/neondatabase/neon/issues/3822
2023-03-15 15:36:06 +04:00
Lassi Pölönen
99752286d8 Use RollingUpdate strategy also for legacy proxy (#3814)
## Describe your changes
We have previously changed the neon-proxy to use RollingUpdate. This
should be enabled in the legacy proxy too in order to avoid breaking
connections for the clients and to allow, for example, backups to run even
during deployment. (https://github.com/neondatabase/neon/pull/3683)

## Issue ticket number and link
https://github.com/neondatabase/neon/issues/3333
2023-03-15 15:35:51 +04:00
Arseny Sher
15df93363c Merge pull request #3804 from neondatabase/release-2023-03-13
Release 2023.03.13
2023-03-13 20:25:40 +03:00
Vadim Kharitonov
bc0ab741af Merge pull request #3758 from neondatabase/releases/2023-03-07
Release 2023-03-07
2023-03-07 12:38:47 +01:00
Christian Schwarz
51d9dfeaa3 Merge pull request #3743 from neondatabase/releases/2023-03-03
Release 2023-03-03
2023-03-03 19:20:21 +01:00
Shany Pozin
f63cb18155 Merge pull request #3713 from neondatabase/releases/2023-02-28
Release 2023-02-28
2023-02-28 12:52:24 +02:00
Arseny Sher
0de603d88e Merge pull request #3707 from neondatabase/release-2023-02-24
Release 2023-02-24

Hotfix for UNLOGGED tables. Contains #3706
Also contains rebase on 14.7 and 15.2 #3581
2023-02-25 00:32:11 +04:00
Heikki Linnakangas
240913912a Fix UNLOGGED tables.
Instead of trying to create missing files on the way, send init fork contents as
main fork from pageserver during basebackup. Add test for that. Call
put_rel_drop for init forks; previously they weren't removed. Bump
vendor/postgres to revert previous approach on Postgres side.

Co-authored-by: Arseny Sher <sher-ars@yandex.ru>

ref https://github.com/neondatabase/postgres/pull/264
ref https://github.com/neondatabase/postgres/pull/259
ref https://github.com/neondatabase/neon/issues/1222
2023-02-24 23:54:53 +04:00
MMeent
91a4ea0de2 Update vendored PostgreSQL versions to 14.7 and 15.2 (#3581)
## Describe your changes
Rebase vendored PostgreSQL onto 14.7 and 15.2

## Issue ticket number and link

#3579

## Checklist before requesting a review
- [x] I have performed a self-review of my code.
- [x] If it is a core feature, I have added thorough tests.
- [ ] Do we need to implement analytics? if so did you add the relevant
metrics to the dashboard?
- [x] If this PR requires public announcement, mark it with
/release-notes label and add several sentences in this section.
    ```
The version of PostgreSQL that we use is updated to 14.7 for PostgreSQL
14 and 15.2 for PostgreSQL 15.
    ```
2023-02-24 23:54:42 +04:00
Arseny Sher
8608704f49 Merge pull request #3691 from neondatabase/release-2023-02-23
Release 2023-02-23

Hotfix for the unlogged tables with indexes issue.

neondatabase/postgres#259
neondatabase/postgres#262
2023-02-23 13:39:33 +04:00
Arseny Sher
efef68ce99 Bump vendor/postgres to include hotfix for unlogged tables with indexes.
https://github.com/neondatabase/postgres/pull/259
https://github.com/neondatabase/postgres/pull/262
2023-02-23 08:49:43 +04:00
Joonas Koivunen
8daefd24da Merge pull request #3679 from neondatabase/releases/2023-02-22
Releases/2023-02-22
2023-02-22 15:56:55 +02:00
Arthur Petukhovsky
46cc8b7982 Remove safekeeper-1.ap-southeast-1.aws.neon.tech (#3671)
We migrated all timelines to
`safekeeper-3.ap-southeast-1.aws.neon.tech`, now old instance can be
removed.
2023-02-22 15:07:57 +02:00
Sergey Melnikov
38cd90dd0c Add -v to ansible invocations (#3670)
To get more debug output on failures
2023-02-22 15:07:57 +02:00
Joonas Koivunen
a51b269f15 fix: hold permit until GetObject eof (#3663)
previously we applied the ratelimiting only up to receiving the headers
from s3, or somewhere near it. the commit adds an adapter which carries
the permit until the AsyncRead has been disposed.

fixes #3662.
2023-02-22 15:07:57 +02:00
Joonas Koivunen
43bf6d0a0f calculate_logical_size: no longer use spawn_blocking (#3664)
Calculation of logical size is now async because of layer downloads, so
we shouldn't use spawn_blocking for it. Use of `spawn_blocking`
exhausted resources which are needed by `tokio::io::copy` when copying
from a stream to a file, which led to a deadlock.

Fixes: #3657
2023-02-22 15:07:57 +02:00
Joonas Koivunen
15273a9b66 chore: ignore all compaction inactive tenant errors (#3665)
these are happening in tests because of #3655 but they sure took some
time to appear.

makes the `Compaction failed, retrying in 2s: Cannot run compaction
iteration on inactive tenant` into a globally allowed error, because it
has been seen failing on different test cases.
2023-02-22 15:07:57 +02:00
Joonas Koivunen
78aca668d0 fix: log download failed error (#3661)
Fixes #3659
2023-02-22 15:07:57 +02:00
Vadim Kharitonov
acbf4148ea Merge pull request #3656 from neondatabase/releases/2023-02-21
Release 2023-02-21
2023-02-21 16:03:48 +01:00
Vadim Kharitonov
6508540561 Merge branch 'release' into releases/2023-02-21 2023-02-21 15:31:16 +01:00
Arthur Petukhovsky
a41b5244a8 Add new safekeeper to ap-southeast-1 prod (#3645) (#3646)
To trigger deployment of #3645 to production.
2023-02-20 15:22:49 +00:00
Shany Pozin
2b3189be95 Merge pull request #3600 from neondatabase/releases/2023-02-14
Release 2023-02-14
2023-02-15 13:31:30 +02:00
Vadim Kharitonov
248563c595 Merge pull request #3553 from neondatabase/releases/2023-02-07
Release 2023-02-07
2023-02-07 14:07:44 +01:00
Vadim Kharitonov
14cd6ca933 Merge branch 'release' into releases/2023-02-07 2023-02-07 12:11:56 +01:00
Vadim Kharitonov
eb36403e71 Release 2023 01 31 (#3497)
Co-authored-by: Kirill Bulatov <kirill@neon.tech>
Co-authored-by: Heikki Linnakangas <heikki@neon.tech>
Co-authored-by: Anastasia Lubennikova <anastasia@neon.tech>
Co-authored-by: bojanserafimov <bojan.serafimov7@gmail.com>
Co-authored-by: Christian Schwarz <christian@neon.tech>
Co-authored-by: Alexey Kondratov <kondratov.aleksey@gmail.com>
Co-authored-by: Joonas Koivunen <joonas@neon.tech>
Co-authored-by: Konstantin Knizhnik <knizhnik@garret.ru>
Co-authored-by: Shany Pozin <shany@neon.tech>
Co-authored-by: Sergey Melnikov <sergey@neon.tech>
Co-authored-by: Dmitry Rodionov <dmitry@neon.tech>
Co-authored-by: Rory de Zoete <33318916+zoete@users.noreply.github.com>
Co-authored-by: Rory de Zoete <rdezoete@Rorys-Mac-Studio.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>
Co-authored-by: Lassi Pölönen <lassi.polonen@iki.fi>
2023-01-31 15:06:35 +02:00
Anastasia Lubennikova
3c6f779698 Merge pull request #3411 from neondatabase/release_2023_01_23
Fix Release 2023 01 23
2023-01-23 20:10:03 +02:00
Joonas Koivunen
f67f0c1c11 More tenant size fixes (#3410)
Small changes, but hopefully this will help with the panic detected in
staging, for which we cannot get the debugging information right now
(end-of-branch before branch-point).
2023-01-23 17:46:13 +02:00
Shany Pozin
edb02d3299 Adding pageserver3 to staging (#3403) 2023-01-23 17:46:13 +02:00
Konstantin Knizhnik
664a69e65b Fix slru_segment_key_range function: segno was assigned to incorrect Key field (#3354) 2023-01-23 17:46:13 +02:00
Anastasia Lubennikova
478322ebf9 Fix tenant size orphans (#3377)
Before, only the timelines which had passed the `gc_horizon` were
processed, which failed with orphans at the tree_sort phase. Example
input is in the added `test_branched_empty_timeline_size` test case.

The PR changes iteration to happen through all timelines, and in
addition to that, any learned branch points will be calculated as they
would have been in the original implementation if the ancestor branch had
been over the `gc_horizon`.

This also changes how tenants where all timelines are below `gc_horizon`
are handled. Previously tenant_size 0 was returned, but now they will
have approximately `initdb_lsn` worth of tenant_size.

The PR also adds several new tenant size tests that describe various corner
cases of branching structure and `gc_horizon` setting.
They are currently disabled to not consume time during CI.

Co-authored-by: Joonas Koivunen <joonas@neon.tech>
Co-authored-by: Anastasia Lubennikova <anastasia@neon.tech>
2023-01-23 17:46:13 +02:00
Joonas Koivunen
802f174072 fix: dont stop pageserver if we fail to calculate synthetic size 2023-01-23 17:46:13 +02:00
Alexey Kondratov
47f9890bae [compute_ctl] Make role deletion spec processing idempotent (#3380)
Previously, we were trying to re-assign owned objects of an already
deleted role. This was causing a crash loop in the case when compute
was restarted with a spec that includes a delta operation for role
deletion. To avoid such cases, check that the role is still present before
calling `reassign_owned_objects`.

Resolves neondatabase/cloud#3553
2023-01-23 17:46:13 +02:00
Christian Schwarz
262265daad Revert "Use actual temporary dir for pageserver unit tests"
This reverts commit 826e89b9ce.

The problem with that commit was that it deletes the TempDir while
there are still EphemeralFile instances open.

At first I thought this could be fixed by simply adding

  Handle::current().block_on(task_mgr::shutdown(None, Some(tenant_id), None))

to TenantHarness::drop, but it turned out to be insufficient.

So, reverting the commit until we find a proper solution.

refs https://github.com/neondatabase/neon/issues/3385
2023-01-23 17:46:13 +02:00
bojanserafimov
300da5b872 Improve layer map docstrings (#3382) 2023-01-23 17:46:13 +02:00
Heikki Linnakangas
7b22b5c433 Switch to 'tracing' for logging, restructure code to make use of spans.
Refactors Compute::prepare_and_run. It's split into subroutines
differently, to make it easier to attach tracing spans to the
different stages. The high-level logic for waiting for Postgres to
exit is moved to the caller.

Replace 'env_logger' with 'tracing', and add `#instrument` directives
to different stages of the startup process. This is a fairly
mechanical change, except for the changes in 'spec.rs'. 'spec.rs'
contained some complicated formatting, where parts of log messages
were printed directly to stdout with `print`s. That was a bit messed
up because the log normally goes to stderr, but those lines were
printed to stdout. In our docker images, stderr and stdout both go to
the same place so you wouldn't notice, but I don't think it was
intentional.

This changes the log format to the default
'tracing_subscriber::format' format. It's different from the Postgres
log format, however, and because both compute_tools and Postgres print
to the same log, it's now a mix of two different formats.  I'm not
sure how the Grafana log parsing pipeline can handle that. If it's a
problem, we can build custom formatter to change the compute_tools log
format to be the same as Postgres's, like it was before this commit,
or we can change the Postgres log format to match tracing_formatter's,
or we can start printing compute_tool's log output to a different
destination than Postgres
2023-01-23 17:46:12 +02:00
Kirill Bulatov
ffca97bc1e Enable logs in unit tests 2023-01-23 17:46:12 +02:00
Kirill Bulatov
cb356f3259 Use actual temporary dir for pageserver unit tests 2023-01-23 17:46:12 +02:00
Vadim Kharitonov
c85374295f Change SENTRY_ENVIRONMENT from "development" to "staging" 2023-01-23 17:46:12 +02:00
Anastasia Lubennikova
4992160677 Fix metric_collection_endpoint for prod.
It was incorrectly set to staging url
2023-01-23 17:46:12 +02:00
Heikki Linnakangas
bd535b3371 If an error happens while checking for core dumps, don't panic.
If we panic, we skip the 30s wait in 'main', and don't give the
console a chance to observe the error, which is not nice.

Spotted by @ololobus at
https://github.com/neondatabase/neon/pull/3352#discussion_r1072806981
2023-01-23 17:46:12 +02:00
Kirill Bulatov
d90c5a03af Add more io::Error context when fail to operate on a path (#3254)
I have a test failure that shows 

```
Caused by:
    0: Failed to reconstruct a page image:
    1: Directory not empty (os error 39)
```

but does not really show where exactly that happens.

https://neon-github-public-dev.s3.amazonaws.com/reports/pr-3227/release/3823785365/index.html#categories/c0057473fc9ec8fb70876fd29a171ce8/7088dab272f2c7b7/?attachment=60fe6ed2add4d82d

The PR aims to add more context in debugging that issue.
2023-01-23 17:46:12 +02:00
Anastasia Lubennikova
2d02cc9079 Merge pull request #3365 from neondatabase/main
Release 2023-01-17
2023-01-17 16:41:34 +02:00
Christian Schwarz
49ad94b99f Merge pull request #3301 from neondatabase/release-2023-01-10
Release 2023-01-10
2023-01-10 16:42:26 +01:00
Christian Schwarz
948a217398 Merge commit '95bf19b85a06b27a7fc3118dee03d48648efab15' into release-2023-01-10
Conflicts:
        .github/helm-values/neon-stress.proxy-scram.yaml
        .github/helm-values/neon-stress.proxy.yaml
        .github/helm-values/staging.proxy-scram.yaml
        .github/helm-values/staging.proxy.yaml
        All of the above were deleted in `main` after we hotfixed them
        in `release`. Deleting them here
        storage_broker/src/bin/storage_broker.rs
        Hotfix toned down logging, but `main` has since implemented
        a proper fix. Taken `main`'s side, see
        https://neondb.slack.com/archives/C033RQ5SPDH/p1673354385387479?thread_ts=1673354306.474729&cid=C033RQ5SPDH

closes https://github.com/neondatabase/neon/issues/3287
2023-01-10 15:40:14 +01:00
Dmitry Rodionov
125381eae7 Merge pull request #3236 from neondatabase/dkr/retrofit-sk4-sk4-change
Move zenith-1-sk-3 to zenith-1-sk-4 (#3164)
2022-12-30 14:13:50 +03:00
Arthur Petukhovsky
cd01bbc715 Move zenith-1-sk-3 to zenith-1-sk-4 (#3164) 2022-12-30 12:32:52 +02:00
Dmitry Rodionov
d8b5e3b88d Merge pull request #3229 from neondatabase/dkr/add-pageserver-for-release
add pageserver to new region see https://github.com/neondatabase/aws/pull/116

decrease log volume for pageserver
2022-12-30 12:34:04 +03:00
Dmitry Rodionov
06d25f2186 switch to debug from info to produce less noise 2022-12-29 17:48:47 +02:00
Dmitry Rodionov
f759b561f3 add pageserver to new region see https://github.com/neondatabase/aws/pull/116 2022-12-29 17:17:35 +02:00
Sergey Melnikov
ece0555600 Push proxy metrics to Victoria Metrics (#3106) 2022-12-16 14:44:49 +02:00
Joonas Koivunen
73ea0a0b01 fix(remote_storage): use cached credentials (#3128)
IMDSv2 has limits, and if we query it on every s3 interaction we are
going to go over those limits. Changes the s3_bucket client
configuration to use:
- ChainCredentialsProvider to handle env variables or imds usage
- LazyCachingCredentialsProvider to actually cache any credentials

Related: https://github.com/awslabs/aws-sdk-rust/issues/629
Possibly related: https://github.com/neondatabase/neon/issues/3118
2022-12-16 14:44:49 +02:00
Arseny Sher
d8f6d6fd6f Merge pull request #3126 from neondatabase/broker-lb-release
Deploy broker with L4 LB in new env.
2022-12-16 01:25:28 +03:00
Arseny Sher
d24de169a7 Deploy broker with L4 LB in new env.
Seems to be fixing issue with missing keepalives.
2022-12-16 01:45:32 +04:00
Arseny Sher
0816168296 Hotfix: terminate subscription if channel is full.
Might help as a hotfix, but we need to understand the root cause better.
2022-12-15 12:23:56 +03:00
Dmitry Rodionov
277b44d57a Merge pull request #3102 from neondatabase/main
Hotfix. See commits for details
2022-12-14 19:38:43 +03:00
MMeent
68c2c3880e Merge pull request #3038 from neondatabase/main
Release 22-12-14
2022-12-14 14:35:47 +01:00
Arthur Petukhovsky
49da498f65 Merge pull request #2833 from neondatabase/main
Release 2022-11-16
2022-11-17 08:44:10 +01:00
Stas Kelvich
2c76ba3dd7 Merge pull request #2718 from neondatabase/main-rc-22-10-28
Release 22-10-28
2022-10-28 20:33:56 +03:00
Arseny Sher
dbe3dc69ad Merge branch 'main' into main-rc-22-10-28
Release 22-10-28.
2022-10-28 19:10:11 +04:00
Arseny Sher
8e5bb3ed49 Enable etcd compaction in neon_local. 2022-10-27 12:53:20 +03:00
Stas Kelvich
ab0be7b8da Avoid debian-testing packages in compute Dockerfiles
plv8 can only be built with a fairly new gold linker version. We used to install
it via binutils packages from testing, but that also updates libc and causes
trouble in the resulting image, as different extensions end up built against
different libc versions. We could either use libc from debian-testing everywhere
or refrain from using testing packages and install the necessary programs manually.
This patch uses the latter approach: gold for plv8 and cmake for h3 are
installed manually.

In passing, declare h3_postgis as a safe extension (a previous omission).
2022-10-27 12:53:20 +03:00
bojanserafimov
b4c55f5d24 Move pagestream api to libs/pageserver_api (#2698) 2022-10-27 12:53:20 +03:00
mikecaat
ede70d833c Add a docker-compose example file (#1943) (#2666)
Co-authored-by: Masahiro Ikeda <masahiro.ikeda.us@hco.ntt.co.jp>
2022-10-27 12:53:20 +03:00
Sergey Melnikov
70c3d18bb0 Do not release to new staging proxies on release (#2685) 2022-10-27 12:53:20 +03:00
bojanserafimov
7a491f52c4 Add draw_timeline binary (#2688) 2022-10-27 12:53:20 +03:00
Alexander Bayandin
323c4ecb4f Add data format backward compatibility tests (#2626) 2022-10-27 12:53:20 +03:00
Anastasia Lubennikova
3d2466607e Merge pull request #2692 from neondatabase/main-rc
Release 2022-10-25
2022-10-25 18:18:58 +03:00
Anastasia Lubennikova
ed478b39f4 Merge branch 'release' into main-rc 2022-10-25 17:06:33 +03:00
Stas Kelvich
91585a558d Merge pull request #2678 from neondatabase/stas/hotfix_schema
Hotfix to disable grant create on public schema
2022-10-22 02:54:31 +03:00
Stas Kelvich
93467eae1f Hotfix to disable grant create on public schema
`GRANT CREATE ON SCHEMA public` fails if there is no schema `public`.
Disable it in release for now and make a better fix later (it is
needed for v15 support).
2022-10-22 02:26:28 +03:00
Stas Kelvich
f3aac81d19 Merge pull request #2668 from neondatabase/main
Release 2022-10-21
2022-10-21 15:21:42 +03:00
Stas Kelvich
979ad60c19 Merge pull request #2581 from neondatabase/main
Release 2022-10-07
2022-10-07 16:50:55 +03:00
Stas Kelvich
9316cb1b1f Merge pull request #2573 from neondatabase/main
Release 2022-10-06
2022-10-07 11:07:06 +03:00
Anastasia Lubennikova
e7939a527a Merge pull request #2377 from neondatabase/main
Release 2022-09-01
2022-09-01 20:20:44 +03:00
Arthur Petukhovsky
36d26665e1 Merge pull request #2299 from neondatabase/main
* Check for entire range during sasl validation (#2281)

* Gen2 GH runner (#2128)

* Re-add rustup override

* Try s3 bucket

* Set git version

* Use v4 cache key to prevent problems

* Switch to v5 for key

* Add second rustup fix

* Rebase

* Add kaniko steps

* Fix typo and set compress level

* Disable global run default

* Specify shell for step

* Change approach with kaniko

* Try less verbose shell spec

* Add submodule pull

* Add promote step

* Adjust dependency chain

* Try default swap again

* Use env

* Don't override aws key

* Make kaniko build conditional

* Specify runs on

* Try without dependency link

* Try soft fail

* Use image with git

* Try passing to next step

* Fix duplicate

* Try other approach

* Try other approach

* Fix typo

* Try other syntax

* Set env

* Adjust setup

* Try step 1

* Add link

* Try global env

* Fix mistake

* Debug

* Try other syntax

* Try other approach

* Change order

* Move output one step down

* Put output up one level

* Try other syntax

* Skip build

* Try output

* Re-enable build

* Try other syntax

* Skip middle step

* Update check

* Try first step of dockerhub push

* Update needs dependency

* Try explicit dir

* Add missing package

* Try other approach

* Try other approach

* Specify region

* Use with

* Try other approach

* Add debug

* Try other approach

* Set region

* Follow AWS example

* Try github approach

* Skip Qemu

* Try stdin

* Missing steps

* Add missing close

* Add echo debug

* Try v2 endpoint

* Use v1 endpoint

* Try without quotes

* Revert

* Try crane

* Add debug

* Split steps

* Fix duplicate

* Add shell step

* Conform to options

* Add verbose flag

* Try single step

* Try workaround

* First request fails hunch

* Try bullseye image

* Try other approach

* Adjust verbose level

* Try previous step

* Add more debug

* Remove debug step

* Remove rogue indent

* Try with larger image

* Add build tag step

* Update workflow for testing

* Add tag step for test

* Remove unused

* Update dependency chain

* Add ownership fix

* Use matrix for promote

* Force update

* Force build

* Remove unused

* Add new image

* Add missing argument

* Update dockerfile copy

* Update Dockerfile

* Update clone

* Update dockerfile

* Go to correct folder

* Use correct format

* Update dockerfile

* Remove cd

* Debug find where we are

* Add debug on first step

* Changedir to postgres

* Set workdir

* Use v1 approach

* Use other dependency

* Try other approach

* Try other approach

* Update dockerfile

* Update approach

* Update dockerfile

* Update approach

* Update dockerfile

* Update dockerfile

* Add workspace hack

* Update Dockerfile

* Update Dockerfile

* Update Dockerfile

* Change last step

* Cleanup pull in prep for review

* Force build images

* Add condition for latest tagging

* Use pinned version

* Try without name value

* Remove more names

* Shorten names

* Add kaniko comments

* Pin kaniko

* Pin crane and ecr helper

* Up one level

* Switch to pinned tag for rust image

* Force update for test

Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@b04468bf-cdf4-41eb-9c94-aff4ca55e4bf.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@Rorys-Mac-Studio.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@4795e9ee-4f32-401f-85f3-f316263b62b8.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@2f8bc4e5-4ec2-4ea2-adb1-65d863c4a558.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@27565b2b-72d5-4742-9898-a26c9033e6f9.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@ecc96c26-c6c4-4664-be6e-34f7c3f89a3c.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@7caff3a5-bf03-4202-bd0e-f1a93c86bdae.fritz.box>

* Add missing step output, revert one deploy step (#2285)

* Add missing step output, revert one deploy step

* Conform to syntax

* Update approach

* Add missing value

* Add missing needs

Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>

* Error for fatal not git repo (#2286)

Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>

* Use main, not branch for ref check (#2288)

* Use main, not branch for ref check

* Add more debug

* Count main, not head

* Try new approach

* Conform to syntax

* Update approach

* Get full history

* Skip checkout

* Cleanup debug

* Remove more debug

Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>

* Fix docker zombie process issue (#2289)

* Fix docker zombie process issue

* Init everywhere

Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>

* Fix 1.63 clippy lints (#2282)

* split out timeline metrics, track layer map loading and size calculation

* reset rust cache for clippy run to avoid an ICE

additionally remove trailing whitespaces

* Rename pg_control_ffi.h to bindgen_deps.h, for clarity.

The pg_control_ffi.h name implies that it only includes stuff related to
pg_control.h. That's mostly true currently, but really the point of the
file is to include everything that we need to generate Rust definitions
from.

* Make local mypy behave like CI mypy (#2291)

* Fix flaky pageserver restarts in tests (#2261)

* Remove extra type aliases (#2280)

* Update cachepot endpoint (#2290)

* Update cachepot endpoint

* Update dockerfile & remove env

* Update image building process

* Cannot use metadata endpoint for this

* Update workflow

* Conform to kaniko syntax

* Update syntax

* Update approach

* Update dockerfiles

* Force update

* Update dockerfiles

* Update dockerfile

* Cleanup dockerfiles

* Update s3 test location

* Revert s3 experiment

* Add more debug

* Specify aws region

* Remove debug, add prefix

* Remove one more debug

Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>

* workflows/benchmarking: increase timeout (#2294)

* Rework `init` in pageserver CLI  (#2272)

* Do not create initial tenant and timeline (adjust Python tests for that)
* Rework config handling during init, add --update-config to manage local config updates

* Fix: Always build images (#2296)

* Always build images

* Remove unused

Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>

* Move auto-generated 'bindings' to a separate inner module.

Re-export only things that are used by other modules.

In the future, I'm imagining that we run bindgen twice, for Postgres
v14 and v15. The two sets of bindings would go into separate
'bindings_v14' and 'bindings_v15' modules.

Rearrange postgres_ffi modules.

Move function, to avoid Postgres version dependency in timelines.rs
Move function to generate a logical-message WAL record to postgres_ffi.

* fix cargo test

* Fix walreceiver and safekeeper bugs (#2295)

- There was an issue with zero commit_lsn `reason: LaggingWal { current_commit_lsn: 0/0, new_commit_lsn: 1/6FD90D38, threshold: 10485760 } }`. The problem was in `send_wal.rs`, where we initialized `end_pos = Lsn(0)` and in some cases sent it to the pageserver.
- IDENTIFY_SYSTEM previously returned `flush_lsn` as a physical end of WAL. Now it returns `flush_lsn` (as it was) to walproposer and `commit_lsn` to everyone else including pageserver.
- There was an issue with backoff where connection was cancelled right after initialization: `connected!` -> `safekeeper_handle_db: Connection cancelled` -> `Backoff: waiting 3 seconds`. The problem was in sleeping before establishing the connection. This is fixed by reworking retry logic.
- There was an issue with getting `NoKeepAlives` reason in a loop. The issue is probably the same as the previous.
- There was an issue with filtering safekeepers based on retry attempts, which could filter out some safekeepers indefinitely. This is fixed by using a retry cooldown duration instead of retry attempts.
- Some `send_wal.rs` connections failed with errors without context. This is fixed by adding the timeline to safekeeper errors.
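For reference, the LSNs quoted in these messages (`0/0`, `1/6FD90D38`) are 64-bit WAL positions printed Postgres-style as two hex halves; a minimal sketch of the mapping:

```rust
// Postgres-style LSN: a u64 WAL position, printed as two hex halves "high/low".
fn format_lsn(lsn: u64) -> String {
    format!("{:X}/{:X}", lsn >> 32, lsn as u32)
}

fn parse_lsn(s: &str) -> Option<u64> {
    let (hi, lo) = s.split_once('/')?;
    Some((u64::from_str_radix(hi, 16).ok()? << 32) | u64::from_str_radix(lo, 16).ok()?)
}

fn main() {
    // The zero LSN from the bug above, and the real-looking one from the log line.
    assert_eq!(format_lsn(0), "0/0");
    let lsn = parse_lsn("1/6FD90D38").unwrap();
    assert_eq!(lsn, 0x1_6FD9_0D38);
    assert_eq!(format_lsn(lsn), "1/6FD90D38");
}
```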

New retry logic works like this:
- Every candidate has a `next_retry_at` timestamp and is not considered for connection until that moment
- When walreceiver connection is closed, we update `next_retry_at` using exponential backoff, increasing the cooldown on every disconnect.
- When `last_record_lsn` was advanced using the WAL from the safekeeper, we reset the retry cooldown and exponential backoff, allowing walreceiver to reconnect to the same safekeeper instantly.
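The retry scheme above can be sketched roughly like this (the durations and field names are illustrative, not the actual walreceiver code): each candidate safekeeper carries a `next_retry_at`, disconnects grow the cooldown exponentially, and progress on `last_record_lsn` resets it so an instant reconnect is allowed.

```rust
use std::time::{Duration, Instant};

const BASE: Duration = Duration::from_secs(1);
const MAX: Duration = Duration::from_secs(60);

// Per-safekeeper retry state: skipped until `next_retry_at` passes.
struct RetryState {
    next_retry_at: Option<Instant>,
    attempt: u32,
}

impl RetryState {
    fn new() -> Self {
        Self { next_retry_at: None, attempt: 0 }
    }

    fn is_candidate(&self, now: Instant) -> bool {
        self.next_retry_at.map_or(true, |t| now >= t)
    }

    // Called when the walreceiver connection closes: exponential backoff, capped.
    fn on_disconnect(&mut self, now: Instant) {
        let cooldown = BASE.checked_mul(1 << self.attempt.min(6)).unwrap().min(MAX);
        self.next_retry_at = Some(now + cooldown);
        self.attempt += 1;
    }

    // Called when WAL from this safekeeper advanced last_record_lsn.
    fn on_progress(&mut self) {
        self.next_retry_at = None;
        self.attempt = 0;
    }
}

fn main() {
    let now = Instant::now();
    let mut s = RetryState::new();
    assert!(s.is_candidate(now));
    s.on_disconnect(now); // 1s cooldown
    s.on_disconnect(now); // 2s cooldown
    assert!(!s.is_candidate(now));
    assert!(s.is_candidate(now + Duration::from_secs(3)));
    s.on_progress(); // WAL advanced: eligible to reconnect instantly
    assert!(s.is_candidate(now));
}
```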

* on safekeeper registration pass availability zone param (#2292)

Co-authored-by: Kirill Bulatov <kirill@neon.tech>
Co-authored-by: Rory de Zoete <33318916+zoete@users.noreply.github.com>
Co-authored-by: Rory de Zoete <rdezoete@RorysMacStudio.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@b04468bf-cdf4-41eb-9c94-aff4ca55e4bf.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@Rorys-Mac-Studio.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@4795e9ee-4f32-401f-85f3-f316263b62b8.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@2f8bc4e5-4ec2-4ea2-adb1-65d863c4a558.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@27565b2b-72d5-4742-9898-a26c9033e6f9.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@ecc96c26-c6c4-4664-be6e-34f7c3f89a3c.fritz.box>
Co-authored-by: Rory de Zoete <rdezoete@7caff3a5-bf03-4202-bd0e-f1a93c86bdae.fritz.box>
Co-authored-by: Dmitry Rodionov <dmitry@neon.tech>
Co-authored-by: Heikki Linnakangas <heikki@neon.tech>
Co-authored-by: bojanserafimov <bojan.serafimov7@gmail.com>
Co-authored-by: Alexander Bayandin <alexander@neon.tech>
Co-authored-by: Anastasia Lubennikova <anastasia@neon.tech>
Co-authored-by: Anton Galitsyn <agalitsyn@users.noreply.github.com>
2022-08-18 15:32:33 +03:00
Arthur Petukhovsky
873347f977 Merge pull request #2275 from neondatabase/main
* github/workflows: Fix git dubious ownership (#2223)

* Move relation size cache from WalIngest to DatadirTimeline (#2094)

* Move relation size cache to layered timeline

* Fix obtaining current LSN for relation size cache

* Resolve merge conflicts

* Resolve merge conflicts

* Restore 'lsn' field in DatadirModification

* adjust DatadirModification lsn in ingest_record

* Fix formatting

* Pass lsn to get_relsize

* Fix merge conflict

* Update pageserver/src/pgdatadir_mapping.rs

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>

* Update pageserver/src/pgdatadir_mapping.rs

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>

* refactor: replace lazy-static with once-cell (#2195)

- Replacing all the occurrences of lazy-static with `once-cell::sync::Lazy`
- fixes #1147

Signed-off-by: Ankur Srivastava <best.ankur@gmail.com>
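The idiom being migrated to looks like the following; this sketch uses std's `OnceLock` (the standard-library descendant of the same `once_cell` pattern) so it needs no external crates, and the setting names are made up for illustration:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// With lazy_static! this would be `lazy_static! { static ref DEFAULTS: ... }`;
// with once_cell it is `static DEFAULTS: Lazy<...> = Lazy::new(|| ...)`.
// std's OnceLock expresses the same thing behind an accessor function:
fn defaults() -> &'static HashMap<&'static str, u32> {
    static DEFAULTS: OnceLock<HashMap<&'static str, u32>> = OnceLock::new();
    DEFAULTS.get_or_init(|| {
        let mut m = HashMap::new();
        m.insert("checkpoint_distance", 256); // hypothetical values
        m.insert("compaction_threshold", 10);
        m
    })
}

fn main() {
    // Initialized exactly once, on first access, thread-safely.
    assert_eq!(defaults()["compaction_threshold"], 10);
    assert!(std::ptr::eq(defaults(), defaults()));
}
```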

* Add more buckets to pageserver latency metrics (#2225)

* ignore record property warning to fix benchmarks

* increase statement timeout

* use event so it fires only if workload thread successfully finished

* remove debug log

* increase timeout to pass test with real s3

* avoid duplicate parameter, increase timeout

* Major migration script (#2073)

This script can be used to migrate a tenant across breaking storage versions, or (in the future) to upgrade Postgres versions. See the comment at the top for an overview.

Co-authored-by: Anastasia Lubennikova <anastasia@neon.tech>

* Fix etcd typos

* Fix links to safekeeper protocol docs. (#2188)

safekeeper/README_PROTO.md was moved to docs/safekeeper-protocol.md in
commit 0b14fdb078, as part of reorganizing the docs into 'mdbook' format.

Fixes issue #1475. Thanks to @banks for spotting the outdated references.

In addition to fixing the above issue, this patch also fixes other broken links as a result of 0b14fdb078. See https://github.com/neondatabase/neon/pull/2188#pullrequestreview-1055918480.

Co-authored-by: Heikki Linnakangas <heikki@neon.tech>
Co-authored-by: Thang Pham <thang@neon.tech>

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* support node id and remote storage params in docker_entrypoint.sh

* Safe truncate (#2218)

* Move relation size cache to layered timeline

* Fix obtaining current LSN for relation size cache

* Resolve merge conflicts

* Resolve merge conflicts

* Restore 'lsn' field in DatadirModification

* adjust DatadirModification lsn in ingest_record

* Fix formatting

* Pass lsn to get_relsize

* Fix merge conflict

* Update pageserver/src/pgdatadir_mapping.rs

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>

* Update pageserver/src/pgdatadir_mapping.rs

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>

* Check if relation exists before trying to truncate it

refer #1932

* Add test reproducing FSM truncate problem

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>

* Fix exponential backoff values

* Update back `vendor/postgres` back; it was changed accidentally. (#2251)

Commit 4227cfc96e accidentally reverted vendor/postgres to an older
version. Update it back.

* Add pageserver checkpoint_timeout option.

To flush the in-memory layer eventually when no new data arrives, which helps
safekeepers suspend activity (stop pushing to the broker). The default of 10m
should be OK.
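As a config fragment this might look like the following (key name taken from the commit message; the exact pageserver.toml syntax may differ by version):

```toml
# Flush the open in-memory layer if no new WAL arrives for this long,
# letting safekeepers stop pushing to the broker for idle timelines.
checkpoint_timeout = "10m"
```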

* Share exponential backoff code and fix logic for delete task failure (#2252)

* Fix bug when import large (>1GB) relations (#2172)

Resolves #2097 

- use timeline modification's `lsn` and timeline's `last_record_lsn` to determine the corresponding LSN to query data in `DatadirModification::get`
- update `test_import_from_pageserver`. Split the test into 2 variants: `small` and `multisegment`. 
  + `small` is the old test
  + `multisegment` is to simulate #2097 by using a larger number of inserted rows to create multiple segment files of a relation. `multisegment` is configured to only run with a `release` build
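For context, Postgres splits a relation into 1 GiB segment files, so with the default 8 KiB page size a segment holds 131072 blocks; the `multisegment` variant inserts enough rows to cross that boundary. A quick sketch of the mapping:

```rust
// Postgres defaults: 8 KiB pages, 1 GiB segment files.
const BLCKSZ: u64 = 8192;
const RELSEG_SIZE: u64 = (1 << 30) / BLCKSZ; // 131072 blocks per segment

// Which segment file a block lives in, and its offset within that segment.
fn segment_of(blockno: u64) -> (u64, u64) {
    (blockno / RELSEG_SIZE, blockno % RELSEG_SIZE)
}

fn main() {
    assert_eq!(segment_of(0), (0, 0));
    // The last block of the first 1 GiB segment...
    assert_eq!(segment_of(131071), (0, 131071));
    // ...and the first block that spills into ".1", the second segment file.
    assert_eq!(segment_of(131072), (1, 0));
}
```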

* Fix timeline physical size flaky tests (#2244)

Resolves #2212.

- use `wait_for_last_flush_lsn` in `test_timeline_physical_size_*` tests

## Context
Need to wait for the pageserver to catch up with the compute's last flush LSN because during the timeline physical size API call, it's possible that there are running `LayerFlushThread` threads. These threads flush new layers into disk and hence update the physical size. This results in a mismatch between the physical size reported by the API and the actual physical size on disk.

### Note
The `LayerFlushThread` threads are processed **concurrently**, so it's possible that the above error still persists even with this patch. However, making the tests wait to finish processing all the WALs (not flushing) before calculating the physical size should help reduce the "flakiness" significantly
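The fix boils down to a bounded poll before asserting; a generic sketch of such a wait helper (names hypothetical — the real test-suite helper is `wait_for_last_flush_lsn`):

```rust
use std::time::{Duration, Instant};

// Poll `cond` until it returns true or `timeout` elapses; this is the shape of
// wait_for_last_flush_lsn-style helpers in the test suite.
fn wait_until(timeout: Duration, mut cond: impl FnMut() -> bool) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if cond() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        std::thread::sleep(Duration::from_millis(10));
    }
}

fn main() {
    // Simulated "pageserver catches up after a few polls".
    let mut polls = 0;
    let ok = wait_until(Duration::from_secs(1), || {
        polls += 1;
        polls >= 3 // e.g. received LSN >= compute's last flush LSN
    });
    assert!(ok);
    assert_eq!(polls, 3);
}
```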

* postgres_ffi/waldecoder: validate more header fields

* postgres_ffi/waldecoder: remove unused startlsn

* postgres_ffi/waldecoder: introduce explicit `enum State`

Previously it was emulated with a combination of nullable fields.
This change should make the logic more readable.
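A hypothetical illustration of the nullable-fields-to-enum refactor (variant names invented here, not the actual waldecoder code): the valid combinations that were implicit in several `Option` fields become explicit states.

```rust
// Before: the decoder tracked progress with several Option fields whose
// valid combinations were implicit. An enum makes the states explicit.
enum State {
    // Waiting for the start of the next record.
    WaitingForRecord,
    // Mid-record: how many bytes of the record body are still expected.
    ReassemblingRecord { remaining: usize },
}

struct Decoder {
    state: State,
}

impl Decoder {
    // Feed `bytes` of input; returns Some(()) when a complete record is assembled.
    fn feed(&mut self, bytes: usize) -> Option<()> {
        match self.state {
            State::WaitingForRecord => {
                // Pretend the header says the record is 100 bytes long.
                self.state = State::ReassemblingRecord { remaining: 100 - bytes.min(100) };
                None
            }
            State::ReassemblingRecord { remaining } => {
                if bytes >= remaining {
                    self.state = State::WaitingForRecord;
                    Some(()) // a complete record
                } else {
                    self.state = State::ReassemblingRecord { remaining: remaining - bytes };
                    None
                }
            }
        }
    }
}

fn main() {
    let mut d = Decoder { state: State::WaitingForRecord };
    assert!(d.feed(40).is_none()); // header seen, 60 bytes still missing
    assert!(d.feed(30).is_none()); // still reassembling
    assert!(d.feed(30).is_some()); // record complete
}
```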

* disable `test_import_from_pageserver_multisegment` (#2258)

This test is failing consistently on `main`. It's better to disable it temporarily to avoid blocking others' PRs while the root cause of the failure is investigated.

See: #2255, #2256

* get_binaries uses DOCKER_TAG taken from docker image build step (#2260)

* [proxy] Rework wire format of the password hack and some errors (#2236)

The new format has a few benefits: it's shorter, simpler and
human-readable as well. We don't use base64 anymore, since
url encoding got us covered.

We also show a better error in case we couldn't parse the
payload; the users should know it's all about passing the
correct project name.

* test_runner/pg_clients: collect docker logs (#2259)

* get_binaries script fix (#2263)

* get_binaries uses DOCKER_TAG taken from docker image build step

* remove docker tag discovery at all and fix get_binaries for version variable

* Better storage sync logs (#2268)

* Find end of WAL on safekeepers using WalStreamDecoder.

We could make it inside wal_storage.rs, but taking into account that
 - wal_storage.rs reading is async
 - we don't need s3 here
 - error handling is different; error during decoding is normal
I decided to put it separately.

Test
cargo test test_find_end_of_wal_last_crossing_segment
prepared earlier by @yeputons passes now.

Fixes https://github.com/neondatabase/neon/issues/544
      https://github.com/neondatabase/cloud/issues/2004
Supersedes https://github.com/neondatabase/neon/pull/2066

* Improve walreceiver logic (#2253)

This patch makes walreceiver logic more complicated, but it should work better in most cases. Added `test_wal_lagging` to test scenarios where alive safekeepers can lag behind other alive safekeepers.

- There was a bug which looks like `etcd_info.timeline.commit_lsn > Some(self.local_timeline.get_last_record_lsn())` filtered all safekeepers in some strange cases. I removed this filter, it should probably help with #2237
- Now walreceiver_connection reports status, including commit_lsn. This allows keeping safekeeper connection even when etcd is down.
- Safekeeper connection now fails if pageserver doesn't receive safekeeper messages for some time. Usually safekeeper sends messages at least once per second.
- `LaggingWal` check now uses `commit_lsn` directly from safekeeper. This fixes the issue with often reconnects, when compute generates WAL really fast.
- `NoWalTimeout` is rewritten to trigger only when we know about new WAL and the connected safekeeper doesn't stream any WAL. This allows setting a small `lagging_wal_timeout`, because it will trigger only when we observe that the connected safekeeper is stuck.

* increase timeout in wait_for_upload to avoid spurious failures when testing with real s3

* Bump vendor/postgres to include XLP_FIRST_IS_CONTRECORD fix. (#2274)

* Set up a workflow to run pgbench against captest (#2077)

Signed-off-by: Ankur Srivastava <best.ankur@gmail.com>
Co-authored-by: Alexander Bayandin <alexander@neon.tech>
Co-authored-by: Konstantin Knizhnik <knizhnik@garret.ru>
Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>
Co-authored-by: Ankur Srivastava <ansrivas@users.noreply.github.com>
Co-authored-by: bojanserafimov <bojan.serafimov7@gmail.com>
Co-authored-by: Dmitry Rodionov <dmitry@neon.tech>
Co-authored-by: Anastasia Lubennikova <anastasia@neon.tech>
Co-authored-by: Kirill Bulatov <kirill@neon.tech>
Co-authored-by: Heikki Linnakangas <heikki@neon.tech>
Co-authored-by: Thang Pham <thang@neon.tech>
Co-authored-by: Stas Kelvich <stas.kelvich@gmail.com>
Co-authored-by: Arseny Sher <sher-ars@yandex.ru>
Co-authored-by: Egor Suvorov <egor@neon.tech>
Co-authored-by: Andrey Taranik <andrey@cicd.team>
Co-authored-by: Dmitry Ivanov <ivadmi5@gmail.com>
2022-08-15 21:30:45 +03:00
Arthur Petukhovsky
e814ac16f9 Merge pull request #2219 from neondatabase/main
Release 2022-08-04
2022-08-04 20:06:34 +03:00
Heikki Linnakangas
ad3055d386 Merge pull request #2203 from neondatabase/release-uuid-ossp
Deploy new storage and compute version to production

Release 2022-08-02
2022-08-02 15:08:14 +03:00
Heikki Linnakangas
94e03eb452 Merge remote-tracking branch 'origin/main' into 'release'
Release 2022-08-01
2022-08-02 12:43:49 +03:00
Sergey Melnikov
380f26ef79 Merge pull request #2170 from neondatabase/main (Release 2022-07-28)
Release 2022-07-28
2022-07-28 14:16:52 +03:00
Arthur Petukhovsky
3c5b7f59d7 Merge pull request #2119 from neondatabase/main
Release 2022-07-19
2022-07-19 11:58:48 +03:00
Arthur Petukhovsky
fee89f80b5 Merge pull request #2115 from neondatabase/main-2022-07-18
Release 2022-07-18
2022-07-18 19:21:11 +03:00
Arthur Petukhovsky
41cce8eaf1 Merge remote-tracking branch 'origin/release' into main-2022-07-18 2022-07-18 18:21:20 +03:00
Alexey Kondratov
f88fe0218d Merge pull request #1842 from neondatabase/release-deploy-hotfix
[HOTFIX] Release deploy fix

This PR uses the branch neondatabase/postgres#171 and several required commits from main to use only locally built compute-tools. This should allow us to roll out the safekeeper sync issue fix in prod
2022-06-01 11:04:30 +03:00
Alexey Kondratov
cc856eca85 Install missing openssl packages in the Github Actions workflow 2022-05-31 21:31:31 +02:00
Alexey Kondratov
cf350c6002 Use :local compute-tools tag to build compute-node image 2022-05-31 21:31:16 +02:00
Arseny Sher
0ce6b6a0a3 Merge pull request #1836 from neondatabase/release-hotfix-basebackup-lsn-page-boundary
Bump vendor/postgres to hotfix basebackup LSN comparison.
2022-05-31 16:54:03 +04:00
Arseny Sher
73f247d537 Bump vendor/postgres to hotfix basebackup LSN comparison. 2022-05-31 16:00:50 +04:00
Andrey Taranik
960be82183 Merge pull request #1792 from neondatabase/main
Release 2022-05-25 (second)
2022-05-25 16:37:57 +03:00
Andrey Taranik
806e5a6c19 Merge pull request #1787 from neondatabase/main
Release 2022-05-25
2022-05-25 13:34:11 +03:00
Alexey Kondratov
8d5df07cce Merge pull request #1385 from zenithdb/main
Release main 2022-03-22
2022-03-22 05:04:34 -05:00
Andrey Taranik
df7a9d1407 release fix 2022-03-16 (#1375) 2022-03-17 00:43:28 +03:00
163 changed files with 3067 additions and 7020 deletions


@@ -1121,16 +1121,10 @@ jobs:
           run: |
             if [[ "$GITHUB_REF_NAME" == "main" ]]; then
               gh workflow --repo neondatabase/aws run deploy-dev.yml --ref main -f branch=main -f dockerTag=${{needs.tag.outputs.build-tag}} -f deployPreprodRegion=false
-            elif [[ "$GITHUB_REF_NAME" == "release" ]]; then
-              gh workflow --repo neondatabase/aws run deploy-dev.yml --ref main \
-                -f deployPgSniRouter=false \
-                -f deployProxy=false \
-                -f deployStorage=true \
-                -f deployStorageBroker=true \
-                -f branch=main \
-                -f dockerTag=${{needs.tag.outputs.build-tag}} \
-                -f deployPreprodRegion=true
+              # TODO: move deployPreprodRegion to release (`"$GITHUB_REF_NAME" == "release"` block), once Staging support different compute tag prefixes for different regions
+              gh workflow --repo neondatabase/aws run deploy-dev.yml --ref main -f branch=main -f dockerTag=${{needs.tag.outputs.build-tag}} -f deployPreprodRegion=true
             elif [[ "$GITHUB_REF_NAME" == "release" ]]; then
               gh workflow --repo neondatabase/aws run deploy-prod.yml --ref main \
                 -f deployPgSniRouter=false \
                 -f deployProxy=false \
@@ -1139,15 +1133,6 @@ jobs:
                 -f branch=main \
                 -f dockerTag=${{needs.tag.outputs.build-tag}}
             elif [[ "$GITHUB_REF_NAME" == "release-proxy" ]]; then
-              gh workflow --repo neondatabase/aws run deploy-dev.yml --ref main \
-                -f deployPgSniRouter=true \
-                -f deployProxy=true \
-                -f deployStorage=false \
-                -f deployStorageBroker=false \
-                -f branch=main \
-                -f dockerTag=${{needs.tag.outputs.build-tag}} \
-                -f deployPreprodRegion=true
               gh workflow --repo neondatabase/aws run deploy-proxy-prod.yml --ref main \
                 -f deployPgSniRouter=true \
                 -f deployProxy=true \


@@ -1,13 +1,12 @@
 /compute_tools/ @neondatabase/control-plane @neondatabase/compute
 /control_plane/attachment_service @neondatabase/storage
 /libs/pageserver_api/ @neondatabase/storage
-/libs/postgres_ffi/ @neondatabase/compute @neondatabase/safekeepers
+/libs/postgres_ffi/ @neondatabase/compute
 /libs/remote_storage/ @neondatabase/storage
 /libs/safekeeper_api/ @neondatabase/safekeepers
 /libs/vm_monitor/ @neondatabase/autoscaling
 /pageserver/ @neondatabase/storage
 /pgxn/ @neondatabase/compute
-/pgxn/neon/ @neondatabase/compute @neondatabase/safekeepers
 /proxy/ @neondatabase/proxy
 /safekeeper/ @neondatabase/safekeepers
 /vendor/ @neondatabase/compute

Cargo.lock generated

@@ -276,7 +276,7 @@ version = "0.1.0"
 dependencies = [
  "anyhow",
  "aws-config",
- "bytes",
+ "aws-sdk-secretsmanager",
  "camino",
  "clap",
  "control_plane",
@@ -288,8 +288,6 @@ dependencies = [
  "hex",
  "humantime",
  "hyper",
- "lasso",
- "measured",
  "metrics",
  "once_cell",
  "pageserver_api",
@@ -297,7 +295,6 @@ dependencies = [
  "postgres_connection",
  "r2d2",
  "reqwest",
- "routerify",
  "serde",
  "serde_json",
  "thiserror",
@@ -346,9 +343,9 @@ dependencies = [
 [[package]]
 name = "aws-credential-types"
-version = "1.1.8"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fa8587ae17c8e967e4b05a62d495be2fb7701bec52a97f7acfe8a29f938384c8"
+checksum = "33cc49dcdd31c8b6e79850a179af4c367669150c7ac0135f176c61bec81a70f7"
 dependencies = [
  "aws-smithy-async",
  "aws-smithy-runtime-api",
@@ -358,9 +355,9 @@ dependencies = [
 [[package]]
 name = "aws-runtime"
-version = "1.1.8"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b13dc54b4b49f8288532334bba8f87386a40571c47c37b1304979b556dc613c8"
+checksum = "eb031bff99877c26c28895766f7bb8484a05e24547e370768d6cc9db514662aa"
 dependencies = [
  "aws-credential-types",
  "aws-sigv4",
@@ -380,29 +377,6 @@ dependencies = [
  "uuid",
 ]
-[[package]]
-name = "aws-sdk-iam"
-version = "1.17.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b8ae76026bfb1b80a6aed0bb400c1139cd9c0563e26bce1986cd021c6a968c7b"
-dependencies = [
- "aws-credential-types",
- "aws-runtime",
- "aws-smithy-async",
- "aws-smithy-http",
- "aws-smithy-json",
- "aws-smithy-query",
- "aws-smithy-runtime",
- "aws-smithy-runtime-api",
- "aws-smithy-types",
- "aws-smithy-xml",
- "aws-types",
- "http 0.2.9",
- "once_cell",
- "regex-lite",
- "tracing",
-]
 [[package]]
 name = "aws-sdk-s3"
 version = "1.14.0"
@@ -432,6 +406,29 @@ dependencies = [
  "url",
 ]
+[[package]]
+name = "aws-sdk-secretsmanager"
+version = "1.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0a0b64e61e7d632d9df90a2e0f32630c68c24960cab1d27d848718180af883d3"
+dependencies = [
+ "aws-credential-types",
+ "aws-runtime",
+ "aws-smithy-async",
+ "aws-smithy-http",
+ "aws-smithy-json",
+ "aws-smithy-runtime",
+ "aws-smithy-runtime-api",
+ "aws-smithy-types",
+ "aws-types",
+ "bytes",
+ "fastrand 2.0.0",
+ "http 0.2.9",
+ "once_cell",
+ "regex-lite",
+ "tracing",
+]
 [[package]]
 name = "aws-sdk-sso"
 version = "1.12.0"
@@ -501,9 +498,9 @@ dependencies = [
 [[package]]
 name = "aws-sigv4"
-version = "1.2.0"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "11d6f29688a4be9895c0ba8bef861ad0c0dac5c15e9618b9b7a6c233990fc263"
+checksum = "c371c6b0ac54d4605eb6f016624fb5c7c2925d315fdf600ac1bf21b19d5f1742"
 dependencies = [
  "aws-credential-types",
  "aws-smithy-eventstream",
@@ -516,7 +513,7 @@ dependencies = [
  "hex",
  "hmac",
  "http 0.2.9",
- "http 1.1.0",
+ "http 1.0.0",
  "once_cell",
  "p256",
  "percent-encoding",
@@ -530,9 +527,9 @@ dependencies = [
 [[package]]
 name = "aws-smithy-async"
-version = "1.1.8"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d26ea8fa03025b2face2b3038a63525a10891e3d8829901d502e5384a0d8cd46"
+checksum = "72ee2d09cce0ef3ae526679b522835d63e75fb427aca5413cd371e490d52dcc6"
 dependencies = [
  "futures-util",
  "pin-project-lite",
@@ -573,9 +570,9 @@ dependencies = [
 [[package]]
 name = "aws-smithy-http"
-version = "0.60.7"
+version = "0.60.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3f10fa66956f01540051b0aa7ad54574640f748f9839e843442d99b970d3aff9"
+checksum = "dab56aea3cd9e1101a0a999447fb346afb680ab1406cebc44b32346e25b4117d"
 dependencies = [
  "aws-smithy-eventstream",
  "aws-smithy-runtime-api",
@@ -594,18 +591,18 @@ dependencies = [
 [[package]]
 name = "aws-smithy-json"
-version = "0.60.7"
+version = "0.60.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4683df9469ef09468dad3473d129960119a0d3593617542b7d52086c8486f2d6"
+checksum = "fd3898ca6518f9215f62678870064398f00031912390efd03f1f6ef56d83aa8e"
 dependencies = [
  "aws-smithy-types",
 ]
 [[package]]
 name = "aws-smithy-query"
-version = "0.60.7"
+version = "0.60.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f2fbd61ceb3fe8a1cb7352e42689cec5335833cd9f94103a61e98f9bb61c64bb"
+checksum = "bda4b1dfc9810e35fba8a620e900522cd1bd4f9578c446e82f49d1ce41d2e9f9"
 dependencies = [
  "aws-smithy-types",
  "urlencoding",
@@ -613,9 +610,9 @@ dependencies = [
 [[package]]
 name = "aws-smithy-runtime"
-version = "1.1.8"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ec81002d883e5a7fd2bb063d6fb51c4999eb55d404f4fff3dd878bf4733b9f01"
+checksum = "fafdab38f40ad7816e7da5dec279400dd505160780083759f01441af1bbb10ea"
 dependencies = [
  "aws-smithy-async",
  "aws-smithy-http",
@@ -638,15 +635,14 @@ dependencies = [
 [[package]]
 name = "aws-smithy-runtime-api"
-version = "1.2.0"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9acb931e0adaf5132de878f1398d83f8677f90ba70f01f65ff87f6d7244be1c5"
+checksum = "c18276dd28852f34b3bf501f4f3719781f4999a51c7bff1a5c6dc8c4529adc29"
 dependencies = [
  "aws-smithy-async",
  "aws-smithy-types",
  "bytes",
  "http 0.2.9",
- "http 1.1.0",
  "pin-project-lite",
  "tokio",
  "tracing",
@@ -655,9 +651,9 @@ dependencies = [
 [[package]]
 name = "aws-smithy-types"
-version = "1.1.8"
+version = "1.1.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "abe14dceea1e70101d38fbf2a99e6a34159477c0fb95e68e05c66bd7ae4c3729" checksum = "bb3e134004170d3303718baa2a4eb4ca64ee0a1c0a7041dca31b38be0fb414f3"
dependencies = [ dependencies = [
"base64-simd", "base64-simd",
"bytes", "bytes",
@@ -678,18 +674,18 @@ dependencies = [
[[package]] [[package]]
name = "aws-smithy-xml" name = "aws-smithy-xml"
version = "0.60.7" version = "0.60.4"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "872c68cf019c0e4afc5de7753c4f7288ce4b71663212771bf5e4542eb9346ca9" checksum = "8604a11b25e9ecaf32f9aa56b9fe253c5e2f606a3477f0071e96d3155a5ed218"
dependencies = [ dependencies = [
"xmlparser", "xmlparser",
] ]
[[package]] [[package]]
name = "aws-types" name = "aws-types"
version = "1.1.8" version = "1.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0dbf2f3da841a8930f159163175cf6a3d16ddde517c1b0fba7aa776822800f40" checksum = "789bbe008e65636fe1b6dbbb374c40c8960d1232b96af5ff4aec349f9c4accf4"
dependencies = [ dependencies = [
"aws-credential-types", "aws-credential-types",
"aws-smithy-async", "aws-smithy-async",
@@ -2396,9 +2392,9 @@ dependencies = [
[[package]] [[package]]
name = "http" name = "http"
version = "1.1.0" version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21b9ddb458710bc376481b842f5da65cdf31522de232c1ca8146abce2a358258" checksum = "b32afd38673a8016f7c9ae69e5af41a58f81b1d31689040f2f1959594ce194ea"
dependencies = [ dependencies = [
"bytes", "bytes",
"fnv", "fnv",
@@ -2498,7 +2494,7 @@ dependencies = [
"hyper", "hyper",
"log", "log",
"rustls 0.21.9", "rustls 0.21.9",
"rustls-native-certs 0.6.2", "rustls-native-certs",
"tokio", "tokio",
"tokio-rustls 0.24.0", "tokio-rustls 0.24.0",
] ]
@@ -2884,35 +2880,6 @@ version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771" checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771"
[[package]]
name = "measured"
version = "0.0.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f246648d027839a34b420e27c7de1165ace96e19ef894985d0a6ff89a7840a9f"
dependencies = [
"bytes",
"hashbrown 0.14.0",
"itoa",
"lasso",
"measured-derive",
"memchr",
"parking_lot 0.12.1",
"rustc-hash",
"ryu",
]
[[package]]
name = "measured-derive"
version = "0.0.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "edaa5cc22d99d5d6d7d99c3b5b5f7e7f8034c22f1b5d62a1adecd2ed005d9b80"
dependencies = [
"heck",
"proc-macro2",
"quote",
"syn 2.0.52",
]
[[package]] [[package]]
name = "memchr" name = "memchr"
version = "2.6.4" version = "2.6.4"
@@ -3581,7 +3548,6 @@ dependencies = [
"strum_macros", "strum_macros",
"svg_fmt", "svg_fmt",
"sync_wrapper", "sync_wrapper",
"sysinfo",
"tenant_size_model", "tenant_size_model",
"thiserror", "thiserror",
"tokio", "tokio",
@@ -3935,7 +3901,7 @@ dependencies = [
[[package]] [[package]]
name = "postgres" name = "postgres"
version = "0.19.4" version = "0.19.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#20031d7a9ee1addeae6e0968e3899ae6bf01cee2" source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#988d0ddb4184c408fa7fc1bd0ecca7993c02978f"
dependencies = [ dependencies = [
"bytes", "bytes",
"fallible-iterator", "fallible-iterator",
@@ -3948,7 +3914,7 @@ dependencies = [
[[package]] [[package]]
name = "postgres-native-tls" name = "postgres-native-tls"
version = "0.5.0" version = "0.5.0"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#20031d7a9ee1addeae6e0968e3899ae6bf01cee2" source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#988d0ddb4184c408fa7fc1bd0ecca7993c02978f"
dependencies = [ dependencies = [
"native-tls", "native-tls",
"tokio", "tokio",
@@ -3959,7 +3925,7 @@ dependencies = [
[[package]] [[package]]
name = "postgres-protocol" name = "postgres-protocol"
version = "0.6.4" version = "0.6.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#20031d7a9ee1addeae6e0968e3899ae6bf01cee2" source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#988d0ddb4184c408fa7fc1bd0ecca7993c02978f"
dependencies = [ dependencies = [
"base64 0.20.0", "base64 0.20.0",
"byteorder", "byteorder",
@@ -3972,13 +3938,12 @@ dependencies = [
"rand 0.8.5", "rand 0.8.5",
"sha2", "sha2",
"stringprep", "stringprep",
"tokio",
] ]
[[package]] [[package]]
name = "postgres-types" name = "postgres-types"
version = "0.2.4" version = "0.2.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#20031d7a9ee1addeae6e0968e3899ae6bf01cee2" source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#988d0ddb4184c408fa7fc1bd0ecca7993c02978f"
dependencies = [ dependencies = [
"bytes", "bytes",
"fallible-iterator", "fallible-iterator",
@@ -4200,10 +4165,6 @@ version = "0.1.0"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"async-trait", "async-trait",
"aws-config",
"aws-sdk-iam",
"aws-sigv4",
"aws-types",
"base64 0.13.1", "base64 0.13.1",
"bstr", "bstr",
"bytes", "bytes",
@@ -4214,7 +4175,6 @@ dependencies = [
"consumption_metrics", "consumption_metrics",
"dashmap", "dashmap",
"env_logger", "env_logger",
"fallible-iterator",
"futures", "futures",
"git-version", "git-version",
"hashbrown 0.13.2", "hashbrown 0.13.2",
@@ -4222,7 +4182,6 @@ dependencies = [
"hex", "hex",
"hmac", "hmac",
"hostname", "hostname",
"http 1.1.0",
"humantime", "humantime",
"hyper", "hyper",
"hyper-tungstenite", "hyper-tungstenite",
@@ -4266,7 +4225,6 @@ dependencies = [
"smallvec", "smallvec",
"smol_str", "smol_str",
"socket2 0.5.5", "socket2 0.5.5",
"subtle",
"sync_wrapper", "sync_wrapper",
"task-local-extensions", "task-local-extensions",
"thiserror", "thiserror",
@@ -4438,9 +4396,9 @@ dependencies = [
[[package]] [[package]]
name = "redis" name = "redis"
version = "0.25.2" version = "0.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71d64e978fd98a0e6b105d066ba4889a7301fca65aeac850a877d8797343feeb" checksum = "c580d9cbbe1d1b479e8d67cf9daf6a62c957e6846048408b80b43ac3f6af84cd"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"bytes", "bytes",
@@ -4449,15 +4407,15 @@ dependencies = [
"itoa", "itoa",
"percent-encoding", "percent-encoding",
"pin-project-lite", "pin-project-lite",
"rustls 0.22.2", "rustls 0.21.9",
"rustls-native-certs 0.7.0", "rustls-native-certs",
"rustls-pemfile 2.1.1", "rustls-pemfile 1.0.2",
"rustls-pki-types", "rustls-webpki 0.101.7",
"ryu", "ryu",
"sha1_smol", "sha1_smol",
"socket2 0.5.5", "socket2 0.4.9",
"tokio", "tokio",
"tokio-rustls 0.25.0", "tokio-rustls 0.24.0",
"tokio-util", "tokio-util",
"url", "url",
] ]
@@ -4886,19 +4844,6 @@ dependencies = [
"security-framework", "security-framework",
] ]
[[package]]
name = "rustls-native-certs"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f1fb85efa936c42c6d5fc28d2629bb51e4b2f4b8a5211e297d599cc5a093792"
dependencies = [
"openssl-probe",
"rustls-pemfile 2.1.1",
"rustls-pki-types",
"schannel",
"security-framework",
]
[[package]] [[package]]
name = "rustls-pemfile" name = "rustls-pemfile"
version = "1.0.2" version = "1.0.2"
@@ -5401,23 +5346,13 @@ checksum = "ae1a47186c03a32177042e55dbc5fd5aee900b8e0069a8d70fba96a9375cd012"
[[package]] [[package]]
name = "sha2" name = "sha2"
version = "0.10.8" version = "0.10.6"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "793db75ad2bcafc3ffa7c68b215fee268f537982cd901d132f89c6343f3a3dc8" checksum = "82e6b795fe2e3b1e845bafcb27aa35405c4d47cdfc92af5fc8d3002f76cebdc0"
dependencies = [ dependencies = [
"cfg-if", "cfg-if",
"cpufeatures", "cpufeatures",
"digest", "digest",
"sha2-asm",
]
[[package]]
name = "sha2-asm"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f27ba7066011e3fb30d808b51affff34f0a66d3a03a58edd787c6e420e40e44e"
dependencies = [
"cc",
] ]
[[package]] [[package]]
@@ -6000,7 +5935,7 @@ dependencies = [
[[package]] [[package]]
name = "tokio-postgres" name = "tokio-postgres"
version = "0.7.7" version = "0.7.7"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#20031d7a9ee1addeae6e0968e3899ae6bf01cee2" source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#988d0ddb4184c408fa7fc1bd0ecca7993c02978f"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"byteorder", "byteorder",
@@ -6166,7 +6101,7 @@ dependencies = [
"percent-encoding", "percent-encoding",
"pin-project", "pin-project",
"prost", "prost",
"rustls-native-certs 0.6.2", "rustls-native-certs",
"rustls-pemfile 1.0.2", "rustls-pemfile 1.0.2",
"tokio", "tokio",
"tokio-rustls 0.24.0", "tokio-rustls 0.24.0",
@@ -6533,7 +6468,6 @@ version = "0.1.0"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"arc-swap", "arc-swap",
"async-compression",
"async-trait", "async-trait",
"bincode", "bincode",
"byteorder", "byteorder",
@@ -6572,14 +6506,12 @@ dependencies = [
"thiserror", "thiserror",
"tokio", "tokio",
"tokio-stream", "tokio-stream",
"tokio-tar",
"tokio-util", "tokio-util",
"tracing", "tracing",
"tracing-error", "tracing-error",
"tracing-subscriber", "tracing-subscriber",
"url", "url",
"uuid", "uuid",
"walkdir",
"workspace_hack", "workspace_hack",
] ]
@@ -7051,6 +6983,7 @@ dependencies = [
"aws-sigv4", "aws-sigv4",
"aws-smithy-async", "aws-smithy-async",
"aws-smithy-http", "aws-smithy-http",
"aws-smithy-runtime-api",
"aws-smithy-types", "aws-smithy-types",
"axum", "axum",
"base64 0.21.1", "base64 0.21.1",
@@ -7096,7 +7029,6 @@ dependencies = [
"scopeguard", "scopeguard",
"serde", "serde",
"serde_json", "serde_json",
"sha2",
"smallvec", "smallvec",
"subtle", "subtle",
"syn 1.0.109", "syn 1.0.109",


@@ -52,12 +52,10 @@ async-stream = "0.3"
async-trait = "0.1" async-trait = "0.1"
aws-config = { version = "1.1.4", default-features = false, features=["rustls"] } aws-config = { version = "1.1.4", default-features = false, features=["rustls"] }
aws-sdk-s3 = "1.14" aws-sdk-s3 = "1.14"
aws-sdk-iam = "1.15.0" aws-sdk-secretsmanager = { version = "1.14.0" }
aws-smithy-async = { version = "1.1.4", default-features = false, features=["rt-tokio"] } aws-smithy-async = { version = "1.1.4", default-features = false, features=["rt-tokio"] }
aws-smithy-types = "1.1.4" aws-smithy-types = "1.1.4"
aws-credential-types = "1.1.4" aws-credential-types = "1.1.4"
aws-sigv4 = { version = "1.2.0", features = ["sign-http"] }
aws-types = "1.1.7"
axum = { version = "0.6.20", features = ["ws"] } axum = { version = "0.6.20", features = ["ws"] }
base64 = "0.13.0" base64 = "0.13.0"
bincode = "1.3" bincode = "1.3"
@@ -78,7 +76,6 @@ either = "1.8"
enum-map = "2.4.2" enum-map = "2.4.2"
enumset = "1.0.12" enumset = "1.0.12"
fail = "0.5.0" fail = "0.5.0"
fallible-iterator = "0.2"
fs2 = "0.4.3" fs2 = "0.4.3"
futures = "0.3" futures = "0.3"
futures-core = "0.3" futures-core = "0.3"
@@ -91,7 +88,6 @@ hex = "0.4"
hex-literal = "0.4" hex-literal = "0.4"
hmac = "0.12.1" hmac = "0.12.1"
hostname = "0.3.1" hostname = "0.3.1"
http = {version = "1.1.0", features = ["std"]}
http-types = { version = "2", default-features = false } http-types = { version = "2", default-features = false }
humantime = "2.1" humantime = "2.1"
humantime-serde = "1.1.1" humantime-serde = "1.1.1"
@@ -105,7 +101,6 @@ lasso = "0.7"
leaky-bucket = "1.0.1" leaky-bucket = "1.0.1"
libc = "0.2" libc = "0.2"
md5 = "0.7.0" md5 = "0.7.0"
measured = { version = "0.0.13", features=["default", "lasso"] }
memoffset = "0.8" memoffset = "0.8"
native-tls = "0.2" native-tls = "0.2"
nix = { version = "0.27", features = ["fs", "process", "socket", "signal", "poll"] } nix = { version = "0.27", features = ["fs", "process", "socket", "signal", "poll"] }
@@ -125,7 +120,7 @@ procfs = "0.14"
prometheus = {version = "0.13", default_features=false, features = ["process"]} # removes protobuf dependency prometheus = {version = "0.13", default_features=false, features = ["process"]} # removes protobuf dependency
prost = "0.11" prost = "0.11"
rand = "0.8" rand = "0.8"
redis = { version = "0.25.2", features = ["tokio-rustls-comp", "keep-alive"] } redis = { version = "0.24.0", features = ["tokio-rustls-comp", "keep-alive"] }
regex = "1.10.2" regex = "1.10.2"
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls"] } reqwest = { version = "0.11", default-features = false, features = ["rustls-tls"] }
reqwest-tracing = { version = "0.4.7", features = ["opentelemetry_0_20"] } reqwest-tracing = { version = "0.4.7", features = ["opentelemetry_0_20"] }
@@ -153,7 +148,6 @@ smol_str = { version = "0.2.0", features = ["serde"] }
socket2 = "0.5" socket2 = "0.5"
strum = "0.24" strum = "0.24"
strum_macros = "0.24" strum_macros = "0.24"
"subtle" = "2.5.0"
svg_fmt = "0.4.1" svg_fmt = "0.4.1"
sync_wrapper = "0.1.2" sync_wrapper = "0.1.2"
tar = "0.4" tar = "0.4"


@@ -135,7 +135,7 @@ WORKDIR /home/nonroot
# Rust # Rust
# Please keep the version of llvm (installed above) in sync with rust llvm (`rustc --version --verbose | grep LLVM`) # Please keep the version of llvm (installed above) in sync with rust llvm (`rustc --version --verbose | grep LLVM`)
ENV RUSTC_VERSION=1.77.0 ENV RUSTC_VERSION=1.76.0
ENV RUSTUP_HOME="/home/nonroot/.rustup" ENV RUSTUP_HOME="/home/nonroot/.rustup"
ENV PATH="/home/nonroot/.cargo/bin:${PATH}" ENV PATH="/home/nonroot/.cargo/bin:${PATH}"
RUN curl -sSO https://static.rust-lang.org/rustup/dist/$(uname -m)-unknown-linux-gnu/rustup-init && whoami && \ RUN curl -sSO https://static.rust-lang.org/rustup/dist/$(uname -m)-unknown-linux-gnu/rustup-init && whoami && \
@@ -149,7 +149,7 @@ RUN curl -sSO https://static.rust-lang.org/rustup/dist/$(uname -m)-unknown-linux
cargo install --git https://github.com/paritytech/cachepot && \ cargo install --git https://github.com/paritytech/cachepot && \
cargo install rustfilt && \ cargo install rustfilt && \
cargo install cargo-hakari && \ cargo install cargo-hakari && \
cargo install cargo-deny --locked && \ cargo install cargo-deny && \
cargo install cargo-hack && \ cargo install cargo-hack && \
cargo install cargo-nextest && \ cargo install cargo-nextest && \
rm -rf /home/nonroot/.cargo/registry && \ rm -rf /home/nonroot/.cargo/registry && \


@@ -32,29 +32,6 @@ compute_ctl -D /var/db/postgres/compute \
-b /usr/local/bin/postgres -b /usr/local/bin/postgres
``` ```
## State Diagram
Computes can be in various states. Below is a diagram that details how a
compute moves between states.
```mermaid
%% https://mermaid.js.org/syntax/stateDiagram.html
stateDiagram-v2
[*] --> Empty : Compute spawned
Empty --> ConfigurationPending : Waiting for compute spec
ConfigurationPending --> Configuration : Received compute spec
Configuration --> Failed : Failed to configure the compute
Configuration --> Running : Compute has been configured
Empty --> Init : Compute spec is immediately available
Empty --> TerminationPending : Requested termination
Init --> Failed : Failed to start Postgres
Init --> Running : Started Postgres
Running --> TerminationPending : Requested termination
TerminationPending --> Terminated : Terminated compute
Failed --> [*] : Compute exited
Terminated --> [*] : Compute exited
```
## Tests ## Tests
Cargo formatter: Cargo formatter:


@@ -17,7 +17,6 @@ pub fn line_in_file(path: &Path, line: &str) -> Result<bool> {
.write(true) .write(true)
.create(true) .create(true)
.append(false) .append(false)
.truncate(false)
.open(path)?; .open(path)?;
let buf = io::BufReader::new(&file); let buf = io::BufReader::new(&file);
let mut count: usize = 0; let mut count: usize = 0;


@@ -16,7 +16,7 @@ testing = []
[dependencies] [dependencies]
anyhow.workspace = true anyhow.workspace = true
aws-config.workspace = true aws-config.workspace = true
bytes.workspace = true aws-sdk-secretsmanager.workspace = true
camino.workspace = true camino.workspace = true
clap.workspace = true clap.workspace = true
fail.workspace = true fail.workspace = true
@@ -25,20 +25,17 @@ git-version.workspace = true
hex.workspace = true hex.workspace = true
hyper.workspace = true hyper.workspace = true
humantime.workspace = true humantime.workspace = true
lasso.workspace = true
once_cell.workspace = true once_cell.workspace = true
pageserver_api.workspace = true pageserver_api.workspace = true
pageserver_client.workspace = true pageserver_client.workspace = true
postgres_connection.workspace = true postgres_connection.workspace = true
reqwest.workspace = true reqwest.workspace = true
routerify.workspace = true
serde.workspace = true serde.workspace = true
serde_json.workspace = true serde_json.workspace = true
thiserror.workspace = true thiserror.workspace = true
tokio.workspace = true tokio.workspace = true
tokio-util.workspace = true tokio-util.workspace = true
tracing.workspace = true tracing.workspace = true
measured.workspace = true
diesel = { version = "2.1.4", features = ["serde_json", "postgres", "r2d2"] } diesel = { version = "2.1.4", features = ["serde_json", "postgres", "r2d2"] }
diesel_migrations = { version = "2.1.0" } diesel_migrations = { version = "2.1.0" }


@@ -1,3 +0,0 @@
UPDATE tenant_shards set placement_policy='{"Double": 1}' where placement_policy='{"Attached": 1}';
UPDATE tenant_shards set placement_policy='"Single"' where placement_policy='{"Attached": 0}';


@@ -1,3 +0,0 @@
UPDATE tenant_shards set placement_policy='{"Attached": 1}' where placement_policy='{"Double": 1}';
UPDATE tenant_shards set placement_policy='{"Attached": 0}' where placement_policy='"Single"';


@@ -139,7 +139,7 @@ impl HeartbeaterTask {
.with_client_retries( .with_client_retries(
|client| async move { client.get_utilization().await }, |client| async move { client.get_utilization().await },
&jwt_token, &jwt_token,
3, 2,
3, 3,
Duration::from_secs(1), Duration::from_secs(1),
&cancel, &cancel,
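The hunk above changes the retry budget passed to `with_client_retries`. As a hypothetical illustration (the name `with_retries`, its signature, and its behavior are assumptions for this sketch, not the project's implementation): a minimal bounded-retry wrapper runs the operation up to `retries + 1` times and returns the last error if every attempt fails.

```rust
use std::time::Duration;

// Hypothetical sketch of a bounded retry helper. The real code also
// takes a JWT, a per-request timeout count, and a cancellation token;
// those are omitted here.
fn with_retries<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    retries: u32,
    _delay: Duration, // real code would sleep between attempts
) -> Result<T, E> {
    let mut last_err = None;
    for _attempt in 0..=retries {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("at least one attempt ran"))
}
```

With this shape, raising the argument from 2 to 3 allows one additional attempt before the caller sees the final error.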


@@ -1,11 +1,5 @@
use crate::metrics::{
HttpRequestLatencyLabelGroup, HttpRequestStatusLabelGroup, PageserverRequestLabelGroup,
METRICS_REGISTRY,
};
use crate::reconciler::ReconcileError; use crate::reconciler::ReconcileError;
use crate::service::{Service, STARTUP_RECONCILE_TIMEOUT}; use crate::service::{Service, STARTUP_RECONCILE_TIMEOUT};
use futures::Future;
use hyper::header::CONTENT_TYPE;
use hyper::{Body, Request, Response}; use hyper::{Body, Request, Response};
use hyper::{StatusCode, Uri}; use hyper::{StatusCode, Uri};
use pageserver_api::models::{ use pageserver_api::models::{
@@ -40,8 +34,6 @@ use pageserver_api::upcall_api::{ReAttachRequest, ValidateRequest};
use control_plane::storage_controller::{AttachHookRequest, InspectRequest}; use control_plane::storage_controller::{AttachHookRequest, InspectRequest};
use routerify::Middleware;
/// State available to HTTP request handlers /// State available to HTTP request handlers
#[derive(Clone)] #[derive(Clone)]
pub struct HttpState { pub struct HttpState {
@@ -321,7 +313,7 @@ async fn handle_tenant_timeline_passthrough(
tracing::info!("Proxying request for tenant {} ({})", tenant_id, path); tracing::info!("Proxying request for tenant {} ({})", tenant_id, path);
// Find the node that holds shard zero // Find the node that holds shard zero
let (node, tenant_shard_id) = service.tenant_shard0_node(tenant_id)?; let (base_url, tenant_shard_id) = service.tenant_shard0_baseurl(tenant_id)?;
// Callers will always pass an unsharded tenant ID. Before proxying, we must // Callers will always pass an unsharded tenant ID. Before proxying, we must
// rewrite this to a shard-aware shard zero ID. // rewrite this to a shard-aware shard zero ID.
@@ -330,39 +322,12 @@ async fn handle_tenant_timeline_passthrough(
let tenant_shard_str = format!("{}", tenant_shard_id); let tenant_shard_str = format!("{}", tenant_shard_id);
let path = path.replace(&tenant_str, &tenant_shard_str); let path = path.replace(&tenant_str, &tenant_shard_str);
let latency = &METRICS_REGISTRY let client = mgmt_api::Client::new(base_url, service.get_config().jwt_token.as_deref());
.metrics_group
.storage_controller_passthrough_request_latency;
// This is a bit awkward. We remove the param from the request
// and join the words by '_' to get a label for the request.
let just_path = path.replace(&tenant_shard_str, "");
let path_label = just_path
.split('/')
.filter(|token| !token.is_empty())
.collect::<Vec<_>>()
.join("_");
let labels = PageserverRequestLabelGroup {
pageserver_id: &node.get_id().to_string(),
path: &path_label,
method: crate::metrics::Method::Get,
};
let _timer = latency.start_timer(labels.clone());
let client = mgmt_api::Client::new(node.base_url(), service.get_config().jwt_token.as_deref());
let resp = client.get_raw(path).await.map_err(|_e| let resp = client.get_raw(path).await.map_err(|_e|
// FIXME: give ApiError a proper Unavailable variant. We return 503 here because // FIXME: give ApiError a proper Unavailable variant. We return 503 here because
// if we can't successfully send a request to the pageserver, we aren't available. // if we can't successfully send a request to the pageserver, we aren't available.
ApiError::ShuttingDown)?; ApiError::ShuttingDown)?;
if !resp.status().is_success() {
let error_counter = &METRICS_REGISTRY
.metrics_group
.storage_controller_passthrough_request_error;
error_counter.inc(labels);
}
// We have a reqwest::Response, would like an http::Response // We have a reqwest::Response, would like an http::Response
let mut builder = hyper::Response::builder() let mut builder = hyper::Response::builder()
.status(resp.status()) .status(resp.status())
@@ -388,16 +353,6 @@ async fn handle_tenant_locate(
json_response(StatusCode::OK, service.tenant_locate(tenant_id)?) json_response(StatusCode::OK, service.tenant_locate(tenant_id)?)
} }
async fn handle_tenant_describe(
service: Arc<Service>,
req: Request<Body>,
) -> Result<Response<Body>, ApiError> {
check_permissions(&req, Scope::Admin)?;
let tenant_id: TenantId = parse_request_param(&req, "tenant_id")?;
json_response(StatusCode::OK, service.tenant_describe(tenant_id)?)
}
async fn handle_node_register(mut req: Request<Body>) -> Result<Response<Body>, ApiError> { async fn handle_node_register(mut req: Request<Body>) -> Result<Response<Body>, ApiError> {
check_permissions(&req, Scope::Admin)?; check_permissions(&req, Scope::Admin)?;
@@ -533,11 +488,7 @@ impl From<ReconcileError> for ApiError {
/// Common wrapper for request handlers that call into Service and will operate on tenants: they must only /// Common wrapper for request handlers that call into Service and will operate on tenants: they must only
/// be allowed to run if Service has finished its initial reconciliation. /// be allowed to run if Service has finished its initial reconciliation.
async fn tenant_service_handler<R, H>( async fn tenant_service_handler<R, H>(request: Request<Body>, handler: H) -> R::Output
request: Request<Body>,
handler: H,
request_name: RequestName,
) -> R::Output
where where
R: std::future::Future<Output = Result<Response<Body>, ApiError>> + Send + 'static, R: std::future::Future<Output = Result<Response<Body>, ApiError>> + Send + 'static,
H: FnOnce(Arc<Service>, Request<Body>) -> R + Send + Sync + 'static, H: FnOnce(Arc<Service>, Request<Body>) -> R + Send + Sync + 'static,
@@ -557,10 +508,9 @@ where
)); ));
} }
named_request_span( request_span(
request, request,
|request| async move { handler(service, request).await }, |request| async move { handler(service, request).await },
request_name,
) )
.await .await
} }
@@ -571,98 +521,11 @@ fn check_permissions(request: &Request<Body>, required_scope: Scope) -> Result<(
}) })
} }
#[derive(Clone, Debug)]
struct RequestMeta {
method: hyper::http::Method,
at: Instant,
}
fn prologue_metrics_middleware<B: hyper::body::HttpBody + Send + Sync + 'static>(
) -> Middleware<B, ApiError> {
Middleware::pre(move |req| async move {
let meta = RequestMeta {
method: req.method().clone(),
at: Instant::now(),
};
req.set_context(meta);
Ok(req)
})
}
fn epilogue_metrics_middleware<B: hyper::body::HttpBody + Send + Sync + 'static>(
) -> Middleware<B, ApiError> {
Middleware::post_with_info(move |resp, req_info| async move {
let request_name = match req_info.context::<RequestName>() {
Some(name) => name,
None => {
return Ok(resp);
}
};
if let Some(meta) = req_info.context::<RequestMeta>() {
let status = &crate::metrics::METRICS_REGISTRY
.metrics_group
.storage_controller_http_request_status;
let latency = &crate::metrics::METRICS_REGISTRY
.metrics_group
.storage_controller_http_request_latency;
status.inc(HttpRequestStatusLabelGroup {
path: request_name.0,
method: meta.method.clone().into(),
status: crate::metrics::StatusCode(resp.status()),
});
latency.observe(
HttpRequestLatencyLabelGroup {
path: request_name.0,
method: meta.method.into(),
},
meta.at.elapsed().as_secs_f64(),
);
}
Ok(resp)
})
}
pub async fn measured_metrics_handler(_req: Request<Body>) -> Result<Response<Body>, ApiError> {
pub const TEXT_FORMAT: &str = "text/plain; version=0.0.4";
let payload = crate::metrics::METRICS_REGISTRY.encode();
let response = Response::builder()
.status(200)
.header(CONTENT_TYPE, TEXT_FORMAT)
.body(payload.into())
.unwrap();
Ok(response)
}
#[derive(Clone)]
struct RequestName(&'static str);
async fn named_request_span<R, H>(
request: Request<Body>,
handler: H,
name: RequestName,
) -> R::Output
where
R: Future<Output = Result<Response<Body>, ApiError>> + Send + 'static,
H: FnOnce(Request<Body>) -> R + Send + Sync + 'static,
{
request.set_context(name);
request_span(request, handler).await
}
pub fn make_router( pub fn make_router(
service: Arc<Service>, service: Arc<Service>,
auth: Option<Arc<SwappableJwtAuth>>, auth: Option<Arc<SwappableJwtAuth>>,
) -> RouterBuilder<hyper::Body, ApiError> { ) -> RouterBuilder<hyper::Body, ApiError> {
let mut router = endpoint::make_router() let mut router = endpoint::make_router();
.middleware(prologue_metrics_middleware())
.middleware(epilogue_metrics_middleware());
if auth.is_some() { if auth.is_some() {
router = router.middleware(auth_middleware(|request| { router = router.middleware(auth_middleware(|request| {
let state = get_state(request); let state = get_state(request);
@@ -671,166 +534,96 @@ pub fn make_router(
} else { } else {
state.auth.as_deref() state.auth.as_deref()
} }
})); }))
} }
router router
.data(Arc::new(HttpState::new(service, auth))) .data(Arc::new(HttpState::new(service, auth)))
.get("/metrics", |r| {
named_request_span(r, measured_metrics_handler, RequestName("metrics"))
})
// Non-prefixed generic endpoints (status, metrics) // Non-prefixed generic endpoints (status, metrics)
.get("/status", |r| { .get("/status", |r| request_span(r, handle_status))
named_request_span(r, handle_status, RequestName("status")) .get("/ready", |r| request_span(r, handle_ready))
})
.get("/ready", |r| {
named_request_span(r, handle_ready, RequestName("ready"))
})
// Upcalls for the pageserver: point the pageserver's `control_plane_api` config to this prefix // Upcalls for the pageserver: point the pageserver's `control_plane_api` config to this prefix
.post("/upcall/v1/re-attach", |r| { .post("/upcall/v1/re-attach", |r| {
named_request_span(r, handle_re_attach, RequestName("upcall_v1_reattach")) request_span(r, handle_re_attach)
})
.post("/upcall/v1/validate", |r| {
named_request_span(r, handle_validate, RequestName("upcall_v1_validate"))
}) })
.post("/upcall/v1/validate", |r| request_span(r, handle_validate))
// Test/dev/debug endpoints // Test/dev/debug endpoints
         .post("/debug/v1/attach-hook", |r| {
-            named_request_span(r, handle_attach_hook, RequestName("debug_v1_attach_hook"))
+            request_span(r, handle_attach_hook)
         })
-        .post("/debug/v1/inspect", |r| {
-            named_request_span(r, handle_inspect, RequestName("debug_v1_inspect"))
-        })
+        .post("/debug/v1/inspect", |r| request_span(r, handle_inspect))
         .post("/debug/v1/tenant/:tenant_id/drop", |r| {
-            named_request_span(r, handle_tenant_drop, RequestName("debug_v1_tenant_drop"))
+            request_span(r, handle_tenant_drop)
         })
         .post("/debug/v1/node/:node_id/drop", |r| {
-            named_request_span(r, handle_node_drop, RequestName("debug_v1_node_drop"))
+            request_span(r, handle_node_drop)
         })
-        .get("/debug/v1/tenant", |r| {
-            named_request_span(r, handle_tenants_dump, RequestName("debug_v1_tenant"))
-        })
-        .get("/debug/v1/tenant/:tenant_id/locate", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_locate,
-                RequestName("debug_v1_tenant_locate"),
-            )
-        })
+        .get("/debug/v1/tenant", |r| request_span(r, handle_tenants_dump))
         .get("/debug/v1/scheduler", |r| {
-            named_request_span(r, handle_scheduler_dump, RequestName("debug_v1_scheduler"))
+            request_span(r, handle_scheduler_dump)
         })
         .post("/debug/v1/consistency_check", |r| {
-            named_request_span(
-                r,
-                handle_consistency_check,
-                RequestName("debug_v1_consistency_check"),
-            )
+            request_span(r, handle_consistency_check)
         })
         .put("/debug/v1/failpoints", |r| {
             request_span(r, |r| failpoints_handler(r, CancellationToken::new()))
         })
+        .get("/control/v1/tenant/:tenant_id/locate", |r| {
+            tenant_service_handler(r, handle_tenant_locate)
+        })
         // Node operations
         .post("/control/v1/node", |r| {
-            named_request_span(r, handle_node_register, RequestName("control_v1_node"))
+            request_span(r, handle_node_register)
         })
-        .get("/control/v1/node", |r| {
-            named_request_span(r, handle_node_list, RequestName("control_v1_node"))
-        })
+        .get("/control/v1/node", |r| request_span(r, handle_node_list))
         .put("/control/v1/node/:node_id/config", |r| {
-            named_request_span(
-                r,
-                handle_node_configure,
-                RequestName("control_v1_node_config"),
-            )
+            request_span(r, handle_node_configure)
         })
         // Tenant Shard operations
         .put("/control/v1/tenant/:tenant_shard_id/migrate", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_shard_migrate,
-                RequestName("control_v1_tenant_migrate"),
-            )
+            tenant_service_handler(r, handle_tenant_shard_migrate)
         })
         .put("/control/v1/tenant/:tenant_id/shard_split", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_shard_split,
-                RequestName("control_v1_tenant_shard_split"),
-            )
-        })
-        .get("/control/v1/tenant/:tenant_id", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_describe,
-                RequestName("control_v1_tenant_describe"),
-            )
+            tenant_service_handler(r, handle_tenant_shard_split)
         })
         // Tenant operations
         // The ^/v1/ endpoints act as a "Virtual Pageserver", enabling shard-naive clients to call into
         // this service to manage tenants that actually consist of many tenant shards, as if they are a single entity.
         .post("/v1/tenant", |r| {
-            tenant_service_handler(r, handle_tenant_create, RequestName("v1_tenant"))
+            tenant_service_handler(r, handle_tenant_create)
         })
         .delete("/v1/tenant/:tenant_id", |r| {
-            tenant_service_handler(r, handle_tenant_delete, RequestName("v1_tenant"))
+            tenant_service_handler(r, handle_tenant_delete)
         })
         .put("/v1/tenant/config", |r| {
-            tenant_service_handler(r, handle_tenant_config_set, RequestName("v1_tenant_config"))
+            tenant_service_handler(r, handle_tenant_config_set)
         })
         .get("/v1/tenant/:tenant_id/config", |r| {
-            tenant_service_handler(r, handle_tenant_config_get, RequestName("v1_tenant_config"))
+            tenant_service_handler(r, handle_tenant_config_get)
         })
         .put("/v1/tenant/:tenant_shard_id/location_config", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_location_config,
-                RequestName("v1_tenant_location_config"),
-            )
+            tenant_service_handler(r, handle_tenant_location_config)
         })
         .put("/v1/tenant/:tenant_id/time_travel_remote_storage", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_time_travel_remote_storage,
-                RequestName("v1_tenant_time_travel_remote_storage"),
-            )
+            tenant_service_handler(r, handle_tenant_time_travel_remote_storage)
         })
         .post("/v1/tenant/:tenant_id/secondary/download", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_secondary_download,
-                RequestName("v1_tenant_secondary_download"),
-            )
+            tenant_service_handler(r, handle_tenant_secondary_download)
         })
         // Timeline operations
         .delete("/v1/tenant/:tenant_id/timeline/:timeline_id", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_timeline_delete,
-                RequestName("v1_tenant_timeline"),
-            )
+            tenant_service_handler(r, handle_tenant_timeline_delete)
         })
         .post("/v1/tenant/:tenant_id/timeline", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_timeline_create,
-                RequestName("v1_tenant_timeline"),
-            )
+            tenant_service_handler(r, handle_tenant_timeline_create)
         })
         // Tenant detail GET passthrough to shard zero
         .get("/v1/tenant/:tenant_id", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_timeline_passthrough,
-                RequestName("v1_tenant_passthrough"),
-            )
+            tenant_service_handler(r, handle_tenant_timeline_passthrough)
         })
         // Timeline GET passthrough to shard zero. Note that the `*` in the URL is a wildcard: any future
         // timeline GET APIs will be implicitly included.
         .get("/v1/tenant/:tenant_id/timeline*", |r| {
-            tenant_service_handler(
-                r,
-                handle_tenant_timeline_passthrough,
-                RequestName("v1_tenant_timeline_passthrough"),
-            )
+            tenant_service_handler(r, handle_tenant_timeline_passthrough)
         })
 }
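Both sides of this hunk register every route through a wrapper (`request_span`, `named_request_span`, or `tenant_service_handler`) instead of passing the handler directly, so the request name is fixed at registration time. A minimal sketch of that shape, with simplified stand-in types (the real wrappers attach tracing spans, auth checks, and metric labels):

```rust
// Sketch of the handler-wrapping pattern used by the router above. The
// `RequestName` label and the String-based handler signature are simplified
// stand-ins for the real storage-controller types.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct RequestName(pub &'static str);

pub fn named_request_span<R>(
    request: R,
    handler: impl Fn(R) -> String,
    name: RequestName,
) -> (RequestName, String) {
    // The real wrapper records `name` on a tracing span / metric label;
    // here it is simply returned alongside the handler's output.
    (name, handler(request))
}
```

Registering a route then reads `.post(path, |r| named_request_span(r, handler, RequestName("...")))`, so each endpoint carries a stable label regardless of path parameters.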

View File

@@ -8,7 +8,6 @@ pub mod http;
 mod id_lock_map;
 pub mod metrics;
 mod node;
-mod pageserver_client;
 pub mod persistence;
 mod reconciler;
 mod scheduler;

View File

@@ -3,6 +3,7 @@ use attachment_service::http::make_router;
 use attachment_service::metrics::preinitialize_metrics;
 use attachment_service::persistence::Persistence;
 use attachment_service::service::{Config, Service, MAX_UNAVAILABLE_INTERVAL_DEFAULT};
+use aws_config::{BehaviorVersion, Region};
 use camino::Utf8PathBuf;
 use clap::Parser;
 use diesel::Connection;
@@ -54,31 +55,11 @@ struct Cli {
     #[arg(long)]
     database_url: Option<String>,
 
-    /// Flag to enable dev mode, which permits running without auth
-    #[arg(long, default_value = "false")]
-    dev: bool,
-
     /// Grace period before marking unresponsive pageserver offline
     #[arg(long)]
     max_unavailable_interval: Option<humantime::Duration>,
 }
 
-enum StrictMode {
-    /// In strict mode, we will require that all secrets are loaded, i.e. security features
-    /// may not be implicitly turned off by omitting secrets in the environment.
-    Strict,
-    /// In dev mode, secrets are optional, and omitting a particular secret will implicitly
-    /// disable the auth related to it (e.g. no pageserver jwt key -> send unauthenticated
-    /// requests, no public key -> don't authenticate incoming requests).
-    Dev,
-}
-
-impl Default for StrictMode {
-    fn default() -> Self {
-        Self::Strict
-    }
-}
-
 /// Secrets may either be provided on the command line (for testing), or loaded from AWS SecretManager: this
 /// type encapsulates the logic to decide which and do the loading.
 struct Secrets {
@@ -89,6 +70,13 @@ struct Secrets {
 }
 
 impl Secrets {
+    const DATABASE_URL_SECRET: &'static str = "rds-neon-storage-controller-url";
+    const PAGESERVER_JWT_TOKEN_SECRET: &'static str =
+        "neon-storage-controller-pageserver-jwt-token";
+    const CONTROL_PLANE_JWT_TOKEN_SECRET: &'static str =
+        "neon-storage-controller-control-plane-jwt-token";
+    const PUBLIC_KEY_SECRET: &'static str = "neon-storage-controller-public-key";
+
     const DATABASE_URL_ENV: &'static str = "DATABASE_URL";
     const PAGESERVER_JWT_TOKEN_ENV: &'static str = "PAGESERVER_JWT_TOKEN";
     const CONTROL_PLANE_JWT_TOKEN_ENV: &'static str = "CONTROL_PLANE_JWT_TOKEN";
@@ -99,41 +87,111 @@ impl Secrets {
     /// - Environment variables if DATABASE_URL is set.
     /// - AWS Secrets Manager secrets
     async fn load(args: &Cli) -> anyhow::Result<Self> {
-        let Some(database_url) =
-            Self::load_secret(&args.database_url, Self::DATABASE_URL_ENV).await
-        else {
-            anyhow::bail!(
-                "Database URL is not set (set `--database-url`, or `DATABASE_URL` environment)"
-            )
-        };
-
-        let public_key = match Self::load_secret(&args.public_key, Self::PUBLIC_KEY_ENV).await {
-            Some(v) => Some(JwtAuth::from_key(v).context("Loading public key")?),
-            None => None,
-        };
-
-        let this = Self {
-            database_url,
-            public_key,
-            jwt_token: Self::load_secret(&args.jwt_token, Self::PAGESERVER_JWT_TOKEN_ENV).await,
-            control_plane_jwt_token: Self::load_secret(
-                &args.control_plane_jwt_token,
-                Self::CONTROL_PLANE_JWT_TOKEN_ENV,
-            )
-            .await,
-        };
-
-        Ok(this)
+        match &args.database_url {
+            Some(url) => Self::load_cli(url, args),
+            None => match std::env::var(Self::DATABASE_URL_ENV) {
+                Ok(database_url) => Self::load_env(database_url),
+                Err(_) => Self::load_aws_sm().await,
+            },
+        }
     }
 
-    async fn load_secret(cli: &Option<String>, env_name: &str) -> Option<String> {
-        if let Some(v) = cli {
-            Some(v.clone())
-        } else if let Ok(v) = std::env::var(env_name) {
-            Some(v)
-        } else {
-            None
-        }
-    }
+    fn load_env(database_url: String) -> anyhow::Result<Self> {
+        let public_key = match std::env::var(Self::PUBLIC_KEY_ENV) {
+            Ok(public_key) => Some(JwtAuth::from_key(public_key).context("Loading public key")?),
+            Err(_) => None,
+        };
+        Ok(Self {
+            database_url,
+            public_key,
+            jwt_token: std::env::var(Self::PAGESERVER_JWT_TOKEN_ENV).ok(),
+            control_plane_jwt_token: std::env::var(Self::CONTROL_PLANE_JWT_TOKEN_ENV).ok(),
+        })
+    }
+
+    async fn load_aws_sm() -> anyhow::Result<Self> {
+        let Ok(region) = std::env::var("AWS_REGION") else {
+            anyhow::bail!("AWS_REGION is not set, cannot load secrets automatically: either set this, or use CLI args to supply secrets");
+        };
+        let config = aws_config::defaults(BehaviorVersion::v2023_11_09())
+            .region(Region::new(region.clone()))
+            .load()
+            .await;
+
+        let asm = aws_sdk_secretsmanager::Client::new(&config);
+
+        let Some(database_url) = asm
+            .get_secret_value()
+            .secret_id(Self::DATABASE_URL_SECRET)
+            .send()
+            .await?
+            .secret_string()
+            .map(str::to_string)
+        else {
+            anyhow::bail!(
+                "Database URL secret not found at {region}/{}",
+                Self::DATABASE_URL_SECRET
+            )
+        };
+
+        let jwt_token = asm
+            .get_secret_value()
+            .secret_id(Self::PAGESERVER_JWT_TOKEN_SECRET)
+            .send()
+            .await?
+            .secret_string()
+            .map(str::to_string);
+        if jwt_token.is_none() {
+            tracing::warn!("No pageserver JWT token set: this will only work if authentication is disabled on the pageserver");
+        }
+
+        let control_plane_jwt_token = asm
+            .get_secret_value()
+            .secret_id(Self::CONTROL_PLANE_JWT_TOKEN_SECRET)
+            .send()
+            .await?
+            .secret_string()
+            .map(str::to_string);
+        if control_plane_jwt_token.is_none() {
+            tracing::warn!("No control plane JWT token set: this will only work if authentication is disabled on the pageserver");
+        }
+
+        let public_key = asm
+            .get_secret_value()
+            .secret_id(Self::PUBLIC_KEY_SECRET)
+            .send()
+            .await?
+            .secret_string()
+            .map(str::to_string);
+        let public_key = match public_key {
+            Some(key) => Some(JwtAuth::from_key(key)?),
+            None => {
+                tracing::warn!(
+                    "No public key set: incoming HTTP requests will not be authenticated"
+                );
+                None
+            }
+        };
+
+        Ok(Self {
+            database_url,
+            public_key,
+            jwt_token,
+            control_plane_jwt_token,
+        })
+    }
+
+    fn load_cli(database_url: &str, args: &Cli) -> anyhow::Result<Self> {
+        let public_key = match &args.public_key {
+            None => None,
+            Some(key) => Some(JwtAuth::from_key(key.clone()).context("Loading public key")?),
+        };
+        Ok(Self {
+            database_url: database_url.to_owned(),
+            public_key,
+            jwt_token: args.jwt_token.clone(),
+            control_plane_jwt_token: args.control_plane_jwt_token.clone(),
+        })
+    }
 }
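`Secrets::load` above resolves its configuration from a fixed order of sources: explicit CLI flags first, then environment variables, then AWS Secrets Manager. A minimal sketch of that precedence for a single secret (the function name and signature are illustrative, not the real API):

```rust
// Resolve one secret with the precedence used above: CLI flag first, then an
// environment-style value, then a fallback loader (AWS Secrets Manager in the
// real binary). Illustrative helper, not the actual storage-controller code.
fn resolve_secret(
    cli: Option<&str>,
    env: Option<&str>,
    fallback: impl FnOnce() -> Option<String>,
) -> Option<String> {
    cli.map(str::to_string)
        .or_else(|| env.map(str::to_string))
        .or_else(fallback)
}
```

Because `or_else` is lazy, the fallback loader only runs when neither of the cheaper sources provided a value, which is the property the real code relies on to avoid an AWS round trip in local setups.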
@@ -189,42 +247,8 @@ async fn async_main() -> anyhow::Result<()> {
         args.listen
     );
 
-    let strict_mode = if args.dev {
-        StrictMode::Dev
-    } else {
-        StrictMode::Strict
-    };
-
     let secrets = Secrets::load(&args).await?;
 
-    // Validate required secrets and arguments are provided in strict mode
-    match strict_mode {
-        StrictMode::Strict
-            if (secrets.public_key.is_none()
-                || secrets.jwt_token.is_none()
-                || secrets.control_plane_jwt_token.is_none()) =>
-        {
-            // Production systems should always have secrets configured: if public_key was not set
-            // then we would implicitly disable auth.
-            anyhow::bail!(
-                "Insecure config! One or more secrets is not set. This is only permitted in `--dev` mode"
-            );
-        }
-        StrictMode::Strict if args.compute_hook_url.is_none() => {
-            // Production systems should always have a compute hook set, to prevent falling
-            // back to trying to use neon_local.
-            anyhow::bail!(
-                "`--compute-hook-url` is not set: this is only permitted in `--dev` mode"
-            );
-        }
-        StrictMode::Strict => {
-            tracing::info!("Starting in strict mode: configuration is OK.")
-        }
-        StrictMode::Dev => {
-            tracing::warn!("Starting in dev mode: this may be an insecure configuration.")
-        }
-    }
-
     let config = Config {
         jwt_token: secrets.jwt_token,
         control_plane_jwt_token: secrets.control_plane_jwt_token,

View File

@@ -1,284 +1,32 @@
-//!
-//! This module provides metric definitions for the storage controller.
-//!
-//! All metrics are grouped in [`StorageControllerMetricGroup`]. [`StorageControllerMetrics`] holds
-//! the mentioned metrics and their encoder. It's globally available via the [`METRICS_REGISTRY`]
-//! constant.
-//!
-//! The rest of the code defines label group types and deals with converting outer types to labels.
-//!
-use bytes::Bytes;
-use measured::{
-    label::{LabelValue, StaticLabelSet},
-    FixedCardinalityLabel, MetricGroup,
-};
-use once_cell::sync::Lazy;
-use std::sync::Mutex;
-
-use crate::persistence::{DatabaseError, DatabaseOperation};
-
-pub(crate) static METRICS_REGISTRY: Lazy<StorageControllerMetrics> =
-    Lazy::new(StorageControllerMetrics::default);
-
-pub fn preinitialize_metrics() {
-    Lazy::force(&METRICS_REGISTRY);
-}
+use metrics::{register_int_counter, register_int_counter_vec, IntCounter, IntCounterVec};
+use once_cell::sync::Lazy;
+
+pub(crate) struct ReconcilerMetrics {
+    pub(crate) spawned: IntCounter,
+    pub(crate) complete: IntCounterVec,
+}
+
+impl ReconcilerMetrics {
+    // Labels used on [`Self::complete`]
+    pub(crate) const SUCCESS: &'static str = "ok";
+    pub(crate) const ERROR: &'static str = "error";
+    pub(crate) const CANCEL: &'static str = "cancel";
+}
+
+pub(crate) static RECONCILER: Lazy<ReconcilerMetrics> = Lazy::new(|| ReconcilerMetrics {
+    spawned: register_int_counter!(
+        "storage_controller_reconcile_spawn",
+        "Count of how many times we spawn a reconcile task",
+    )
+    .expect("failed to define a metric"),
+    complete: register_int_counter_vec!(
+        "storage_controller_reconcile_complete",
+        "Reconciler tasks completed, broken down by success/failure/cancelled",
+        &["status"],
+    )
+    .expect("failed to define a metric"),
+});
+
+pub fn preinitialize_metrics() {
+    Lazy::force(&RECONCILER);
+}
pub(crate) struct StorageControllerMetrics {
pub(crate) metrics_group: StorageControllerMetricGroup,
encoder: Mutex<measured::text::TextEncoder>,
}
#[derive(measured::MetricGroup)]
pub(crate) struct StorageControllerMetricGroup {
/// Count of how many times we spawn a reconcile task
pub(crate) storage_controller_reconcile_spawn: measured::Counter,
/// Reconciler tasks completed, broken down by success/failure/cancelled
pub(crate) storage_controller_reconcile_complete:
measured::CounterVec<ReconcileCompleteLabelGroupSet>,
/// HTTP request status counters for handled requests
pub(crate) storage_controller_http_request_status:
measured::CounterVec<HttpRequestStatusLabelGroupSet>,
/// HTTP request handler latency across all status codes
pub(crate) storage_controller_http_request_latency:
measured::HistogramVec<HttpRequestLatencyLabelGroupSet, 5>,
/// Count of HTTP requests to the pageserver that resulted in an error,
/// broken down by the pageserver node id, request name and method
pub(crate) storage_controller_pageserver_request_error:
measured::CounterVec<PageserverRequestLabelGroupSet>,
/// Latency of HTTP requests to the pageserver, broken down by pageserver
/// node id, request name and method. This include both successful and unsuccessful
/// requests.
pub(crate) storage_controller_pageserver_request_latency:
measured::HistogramVec<PageserverRequestLabelGroupSet, 5>,
/// Count of pass-through HTTP requests to the pageserver that resulted in an error,
/// broken down by the pageserver node id, request name and method
pub(crate) storage_controller_passthrough_request_error:
measured::CounterVec<PageserverRequestLabelGroupSet>,
/// Latency of pass-through HTTP requests to the pageserver, broken down by pageserver
/// node id, request name and method. This include both successful and unsuccessful
/// requests.
pub(crate) storage_controller_passthrough_request_latency:
measured::HistogramVec<PageserverRequestLabelGroupSet, 5>,
/// Count of errors in database queries, broken down by error type and operation.
pub(crate) storage_controller_database_query_error:
measured::CounterVec<DatabaseQueryErrorLabelGroupSet>,
/// Latency of database queries, broken down by operation.
pub(crate) storage_controller_database_query_latency:
measured::HistogramVec<DatabaseQueryLatencyLabelGroupSet, 5>,
}
impl StorageControllerMetrics {
pub(crate) fn encode(&self) -> Bytes {
let mut encoder = self.encoder.lock().unwrap();
self.metrics_group.collect_into(&mut *encoder);
encoder.finish()
}
}
impl Default for StorageControllerMetrics {
fn default() -> Self {
Self {
metrics_group: StorageControllerMetricGroup::new(),
encoder: Mutex::new(measured::text::TextEncoder::new()),
}
}
}
impl StorageControllerMetricGroup {
pub(crate) fn new() -> Self {
Self {
storage_controller_reconcile_spawn: measured::Counter::new(),
storage_controller_reconcile_complete: measured::CounterVec::new(
ReconcileCompleteLabelGroupSet {
status: StaticLabelSet::new(),
},
),
storage_controller_http_request_status: measured::CounterVec::new(
HttpRequestStatusLabelGroupSet {
path: lasso::ThreadedRodeo::new(),
method: StaticLabelSet::new(),
status: StaticLabelSet::new(),
},
),
storage_controller_http_request_latency: measured::HistogramVec::new(
measured::metric::histogram::Thresholds::exponential_buckets(0.1, 2.0),
),
storage_controller_pageserver_request_error: measured::CounterVec::new(
PageserverRequestLabelGroupSet {
pageserver_id: lasso::ThreadedRodeo::new(),
path: lasso::ThreadedRodeo::new(),
method: StaticLabelSet::new(),
},
),
storage_controller_pageserver_request_latency: measured::HistogramVec::new(
measured::metric::histogram::Thresholds::exponential_buckets(0.1, 2.0),
),
storage_controller_passthrough_request_error: measured::CounterVec::new(
PageserverRequestLabelGroupSet {
pageserver_id: lasso::ThreadedRodeo::new(),
path: lasso::ThreadedRodeo::new(),
method: StaticLabelSet::new(),
},
),
storage_controller_passthrough_request_latency: measured::HistogramVec::new(
measured::metric::histogram::Thresholds::exponential_buckets(0.1, 2.0),
),
storage_controller_database_query_error: measured::CounterVec::new(
DatabaseQueryErrorLabelGroupSet {
operation: StaticLabelSet::new(),
error_type: StaticLabelSet::new(),
},
),
storage_controller_database_query_latency: measured::HistogramVec::new(
measured::metric::histogram::Thresholds::exponential_buckets(0.1, 2.0),
),
}
}
}
#[derive(measured::LabelGroup)]
#[label(set = ReconcileCompleteLabelGroupSet)]
pub(crate) struct ReconcileCompleteLabelGroup {
pub(crate) status: ReconcileOutcome,
}
#[derive(measured::LabelGroup)]
#[label(set = HttpRequestStatusLabelGroupSet)]
pub(crate) struct HttpRequestStatusLabelGroup<'a> {
#[label(dynamic_with = lasso::ThreadedRodeo)]
pub(crate) path: &'a str,
pub(crate) method: Method,
pub(crate) status: StatusCode,
}
#[derive(measured::LabelGroup)]
#[label(set = HttpRequestLatencyLabelGroupSet)]
pub(crate) struct HttpRequestLatencyLabelGroup<'a> {
#[label(dynamic_with = lasso::ThreadedRodeo)]
pub(crate) path: &'a str,
pub(crate) method: Method,
}
impl Default for HttpRequestLatencyLabelGroupSet {
fn default() -> Self {
Self {
path: lasso::ThreadedRodeo::new(),
method: StaticLabelSet::new(),
}
}
}
#[derive(measured::LabelGroup, Clone)]
#[label(set = PageserverRequestLabelGroupSet)]
pub(crate) struct PageserverRequestLabelGroup<'a> {
#[label(dynamic_with = lasso::ThreadedRodeo)]
pub(crate) pageserver_id: &'a str,
#[label(dynamic_with = lasso::ThreadedRodeo)]
pub(crate) path: &'a str,
pub(crate) method: Method,
}
impl Default for PageserverRequestLabelGroupSet {
fn default() -> Self {
Self {
pageserver_id: lasso::ThreadedRodeo::new(),
path: lasso::ThreadedRodeo::new(),
method: StaticLabelSet::new(),
}
}
}
#[derive(measured::LabelGroup)]
#[label(set = DatabaseQueryErrorLabelGroupSet)]
pub(crate) struct DatabaseQueryErrorLabelGroup {
pub(crate) error_type: DatabaseErrorLabel,
pub(crate) operation: DatabaseOperation,
}
#[derive(measured::LabelGroup)]
#[label(set = DatabaseQueryLatencyLabelGroupSet)]
pub(crate) struct DatabaseQueryLatencyLabelGroup {
pub(crate) operation: DatabaseOperation,
}
#[derive(FixedCardinalityLabel)]
pub(crate) enum ReconcileOutcome {
#[label(rename = "ok")]
Success,
Error,
Cancel,
}
#[derive(FixedCardinalityLabel, Clone)]
pub(crate) enum Method {
Get,
Put,
Post,
Delete,
Other,
}
impl From<hyper::Method> for Method {
fn from(value: hyper::Method) -> Self {
if value == hyper::Method::GET {
Method::Get
} else if value == hyper::Method::PUT {
Method::Put
} else if value == hyper::Method::POST {
Method::Post
} else if value == hyper::Method::DELETE {
Method::Delete
} else {
Method::Other
}
}
}
pub(crate) struct StatusCode(pub(crate) hyper::http::StatusCode);
impl LabelValue for StatusCode {
fn visit<V: measured::label::LabelVisitor>(&self, v: V) -> V::Output {
v.write_int(self.0.as_u16() as u64)
}
}
impl FixedCardinalityLabel for StatusCode {
fn cardinality() -> usize {
(100..1000).len()
}
fn encode(&self) -> usize {
self.0.as_u16() as usize
}
fn decode(value: usize) -> Self {
Self(hyper::http::StatusCode::from_u16(u16::try_from(value).unwrap()).unwrap())
}
}
#[derive(FixedCardinalityLabel)]
pub(crate) enum DatabaseErrorLabel {
Query,
Connection,
ConnectionPool,
Logical,
}
impl DatabaseError {
pub(crate) fn error_label(&self) -> DatabaseErrorLabel {
match self {
Self::Query(_) => DatabaseErrorLabel::Query,
Self::Connection(_) => DatabaseErrorLabel::Connection,
Self::ConnectionPool(_) => DatabaseErrorLabel::ConnectionPool,
Self::Logical(_) => DatabaseErrorLabel::Logical,
}
}
}
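The `StatusCode` label above advertises a fixed cardinality of `(100..1000).len()` = 900, while its `encode` returns the raw `u16` value. A dense label set would instead map each status code to an index in `0..900`; this standalone sketch of such a round trip is illustrative and is not the `measured` trait implementation:

```rust
// Map an HTTP status code in 100..1000 onto a dense index in 0..900 and back,
// the invariant a fixed-cardinality metric label needs so each code owns one
// counter slot. Standalone helpers, not the `measured::FixedCardinalityLabel` API.
fn encode_status(code: u16) -> usize {
    assert!((100..1000).contains(&code), "status out of label range");
    usize::from(code - 100)
}

fn decode_status(index: usize) -> u16 {
    assert!(index < 900, "index out of label range");
    index as u16 + 100
}
```

The offset-by-100 keeps the index space contiguous, so a counter vector can be backed by a plain array of 900 slots.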

View File

@@ -12,9 +12,7 @@ use serde::Serialize;
 use tokio_util::sync::CancellationToken;
 use utils::{backoff, id::NodeId};
 
-use crate::{
-    pageserver_client::PageserverClient, persistence::NodePersistence, scheduler::MaySchedule,
-};
+use crate::{persistence::NodePersistence, scheduler::MaySchedule};
 
 /// Represents the in-memory description of a Node.
 ///
@@ -204,7 +202,7 @@ impl Node {
         cancel: &CancellationToken,
     ) -> Option<mgmt_api::Result<T>>
     where
-        O: FnMut(PageserverClient) -> F,
+        O: FnMut(mgmt_api::Client) -> F,
         F: std::future::Future<Output = mgmt_api::Result<T>>,
     {
         fn is_fatal(e: &mgmt_api::Error) -> bool {
@@ -226,12 +224,8 @@ impl Node {
             .build()
             .expect("Failed to construct HTTP client");
 
-        let client = PageserverClient::from_client(
-            self.get_id(),
-            http_client,
-            self.base_url(),
-            jwt.as_deref(),
-        );
+        let client =
+            mgmt_api::Client::from_client(http_client, self.base_url(), jwt.as_deref());
 
         let node_cancel_fut = self.cancel.cancelled();
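The `with_client_retries` helper shown above classifies `mgmt_api` errors with an inner `is_fatal` function, so the retry loop can give up immediately on errors that will never succeed. A sketch of that classification with a simplified stand-in error type (the real variants live in `mgmt_api::Error`, and the exact status handling here is an assumption, not the actual rules):

```rust
// Simplified stand-in for `mgmt_api::Error`: an HTTP status returned by the
// remote API, or a transport-level failure.
#[derive(Debug)]
enum ClientError {
    ApiError(u16),
    Transport,
}

// Retry loops should stop early on errors that cannot succeed later: in this
// sketch, most 4xx responses are treated as fatal, while 429 (rate limited)
// and transport errors remain retryable.
fn is_fatal(e: &ClientError) -> bool {
    match e {
        ClientError::ApiError(status) => (400..500).contains(status) && *status != 429,
        ClientError::Transport => false,
    }
}
```

Separating the classification into a small pure function keeps the backoff loop itself free of protocol details and easy to test.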

View File

@@ -1,203 +0,0 @@
use pageserver_api::{
models::{
LocationConfig, LocationConfigListResponse, PageserverUtilization, SecondaryProgress,
TenantShardSplitRequest, TenantShardSplitResponse, TimelineCreateRequest, TimelineInfo,
},
shard::TenantShardId,
};
use pageserver_client::mgmt_api::{Client, Result};
use reqwest::StatusCode;
use utils::id::{NodeId, TimelineId};
/// Thin wrapper around [`pageserver_client::mgmt_api::Client`]. It allows the storage
/// controller to collect metrics in a non-intrusive manner.
#[derive(Debug, Clone)]
pub(crate) struct PageserverClient {
inner: Client,
node_id_label: String,
}
macro_rules! measured_request {
($name:literal, $method:expr, $node_id: expr, $invoke:expr) => {{
let labels = crate::metrics::PageserverRequestLabelGroup {
pageserver_id: $node_id,
path: $name,
method: $method,
};
let latency = &crate::metrics::METRICS_REGISTRY
.metrics_group
.storage_controller_pageserver_request_latency;
let _timer_guard = latency.start_timer(labels.clone());
let res = $invoke;
if res.is_err() {
let error_counters = &crate::metrics::METRICS_REGISTRY
.metrics_group
.storage_controller_pageserver_request_error;
error_counters.inc(labels)
}
res
}};
}
impl PageserverClient {
pub(crate) fn new(node_id: NodeId, mgmt_api_endpoint: String, jwt: Option<&str>) -> Self {
Self {
inner: Client::from_client(reqwest::Client::new(), mgmt_api_endpoint, jwt),
node_id_label: node_id.0.to_string(),
}
}
pub(crate) fn from_client(
node_id: NodeId,
raw_client: reqwest::Client,
mgmt_api_endpoint: String,
jwt: Option<&str>,
) -> Self {
Self {
inner: Client::from_client(raw_client, mgmt_api_endpoint, jwt),
node_id_label: node_id.0.to_string(),
}
}
pub(crate) async fn tenant_delete(&self, tenant_shard_id: TenantShardId) -> Result<StatusCode> {
measured_request!(
"tenant",
crate::metrics::Method::Delete,
&self.node_id_label,
self.inner.tenant_delete(tenant_shard_id).await
)
}
pub(crate) async fn tenant_time_travel_remote_storage(
&self,
tenant_shard_id: TenantShardId,
timestamp: &str,
done_if_after: &str,
) -> Result<()> {
measured_request!(
"tenant_time_travel_remote_storage",
crate::metrics::Method::Put,
&self.node_id_label,
self.inner
.tenant_time_travel_remote_storage(tenant_shard_id, timestamp, done_if_after)
.await
)
}
pub(crate) async fn tenant_secondary_download(
&self,
tenant_id: TenantShardId,
wait: Option<std::time::Duration>,
) -> Result<(StatusCode, SecondaryProgress)> {
measured_request!(
"tenant_secondary_download",
crate::metrics::Method::Post,
&self.node_id_label,
self.inner.tenant_secondary_download(tenant_id, wait).await
)
}
pub(crate) async fn location_config(
&self,
tenant_shard_id: TenantShardId,
config: LocationConfig,
flush_ms: Option<std::time::Duration>,
lazy: bool,
) -> Result<()> {
measured_request!(
"location_config",
crate::metrics::Method::Put,
&self.node_id_label,
self.inner
.location_config(tenant_shard_id, config, flush_ms, lazy)
.await
)
}
pub(crate) async fn list_location_config(&self) -> Result<LocationConfigListResponse> {
measured_request!(
"location_configs",
crate::metrics::Method::Get,
&self.node_id_label,
self.inner.list_location_config().await
)
}
pub(crate) async fn get_location_config(
&self,
tenant_shard_id: TenantShardId,
) -> Result<Option<LocationConfig>> {
measured_request!(
"location_config",
crate::metrics::Method::Get,
&self.node_id_label,
self.inner.get_location_config(tenant_shard_id).await
)
}
pub(crate) async fn timeline_create(
&self,
tenant_shard_id: TenantShardId,
req: &TimelineCreateRequest,
) -> Result<TimelineInfo> {
measured_request!(
"timeline",
crate::metrics::Method::Post,
&self.node_id_label,
self.inner.timeline_create(tenant_shard_id, req).await
)
}
pub(crate) async fn timeline_delete(
&self,
tenant_shard_id: TenantShardId,
timeline_id: TimelineId,
) -> Result<StatusCode> {
measured_request!(
"timeline",
crate::metrics::Method::Delete,
&self.node_id_label,
self.inner
.timeline_delete(tenant_shard_id, timeline_id)
.await
)
}
pub(crate) async fn tenant_shard_split(
&self,
tenant_shard_id: TenantShardId,
req: TenantShardSplitRequest,
) -> Result<TenantShardSplitResponse> {
measured_request!(
"tenant_shard_split",
crate::metrics::Method::Put,
&self.node_id_label,
self.inner.tenant_shard_split(tenant_shard_id, req).await
)
}
pub(crate) async fn timeline_list(
&self,
tenant_shard_id: &TenantShardId,
) -> Result<Vec<TimelineInfo>> {
measured_request!(
"timelines",
crate::metrics::Method::Get,
&self.node_id_label,
self.inner.timeline_list(tenant_shard_id).await
)
}
pub(crate) async fn get_utilization(&self) -> Result<PageserverUtilization> {
measured_request!(
"utilization",
crate::metrics::Method::Get,
&self.node_id_label,
self.inner.get_utilization().await
)
}
}
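The `measured_request!` macro in this deleted file wraps every client call in a latency timer and bumps an error counter when the call fails, then passes the result through unchanged. A minimal sketch of that wrap-time-and-count pattern using only std atomics (the real code records into `measured` histograms and counter vectors keyed by pageserver id, path, and method):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Instant;

// Global counters standing in for the real latency histogram / error counter vec.
static CALLS: AtomicU64 = AtomicU64::new(0);
static ERRORS: AtomicU64 = AtomicU64::new(0);

// Run `op`, timing it and counting an error if it failed, then return the
// result unchanged: the same shape as `measured_request!` above.
fn measured_call<T, E>(op: impl FnOnce() -> Result<T, E>) -> Result<T, E> {
    let started = Instant::now();
    CALLS.fetch_add(1, Ordering::Relaxed);
    let res = op();
    if res.is_err() {
        ERRORS.fetch_add(1, Ordering::Relaxed);
    }
    // The real macro feeds this duration into a labelled histogram.
    let _elapsed = started.elapsed();
    res
}
```

Because the wrapper returns the inner `Result` untouched, callers keep their normal error handling; the instrumentation stays non-intrusive, which is exactly the property the wrapper type's doc comment claims.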

View File

@@ -19,9 +19,6 @@ use serde::{Deserialize, Serialize};
 use utils::generation::Generation;
 use utils::id::{NodeId, TenantId};
 
-use crate::metrics::{
-    DatabaseQueryErrorLabelGroup, DatabaseQueryLatencyLabelGroup, METRICS_REGISTRY,
-};
 use crate::node::Node;
 
 /// ## What do we store?
@@ -78,25 +75,6 @@ pub(crate) enum DatabaseError {
     Logical(String),
 }
 
-#[derive(measured::FixedCardinalityLabel, Clone)]
-pub(crate) enum DatabaseOperation {
-    InsertNode,
-    UpdateNode,
-    DeleteNode,
-    ListNodes,
-    BeginShardSplit,
-    CompleteShardSplit,
-    AbortShardSplit,
-    Detach,
-    ReAttach,
-    IncrementGeneration,
-    ListTenantShards,
-    InsertTenantShards,
-    UpdateTenantShard,
-    DeleteTenant,
-    UpdateTenantConfig,
-}
-
 #[must_use]
 pub(crate) enum AbortShardSplitStatus {
     /// We aborted the split in the database by reverting to the parent shards
@@ -137,34 +115,6 @@ impl Persistence {
         }
     }
 
-    /// Wraps `with_conn` in order to collect latency and error metrics
-    async fn with_measured_conn<F, R>(&self, op: DatabaseOperation, func: F) -> DatabaseResult<R>
-    where
-        F: Fn(&mut PgConnection) -> DatabaseResult<R> + Send + 'static,
-        R: Send + 'static,
-    {
-        let latency = &METRICS_REGISTRY
-            .metrics_group
-            .storage_controller_database_query_latency;
-        let _timer = latency.start_timer(DatabaseQueryLatencyLabelGroup {
-            operation: op.clone(),
-        });
-
-        let res = self.with_conn(func).await;
-
-        if let Err(err) = &res {
-            let error_counter = &METRICS_REGISTRY
-                .metrics_group
-                .storage_controller_database_query_error;
-            error_counter.inc(DatabaseQueryErrorLabelGroup {
-                error_type: err.error_label(),
-                operation: op,
-            })
-        }
-
-        res
-    }
-
     /// Call the provided function in a tokio blocking thread, with a Diesel database connection.
     async fn with_conn<F, R>(&self, func: F) -> DatabaseResult<R>
     where
@@ -180,27 +130,21 @@ impl Persistence {
     /// When a node is first registered, persist it before using it for anything
     pub(crate) async fn insert_node(&self, node: &Node) -> DatabaseResult<()> {
         let np = node.to_persistent();
-        self.with_measured_conn(
-            DatabaseOperation::InsertNode,
-            move |conn| -> DatabaseResult<()> {
-                diesel::insert_into(crate::schema::nodes::table)
-                    .values(&np)
-                    .execute(conn)?;
-                Ok(())
-            },
-        )
+        self.with_conn(move |conn| -> DatabaseResult<()> {
+            diesel::insert_into(crate::schema::nodes::table)
+                .values(&np)
+                .execute(conn)?;
+            Ok(())
+        })
         .await
     }
 
     /// At startup, populate the list of nodes which our shards may be placed on
     pub(crate) async fn list_nodes(&self) -> DatabaseResult<Vec<NodePersistence>> {
         let nodes: Vec<NodePersistence> = self
-            .with_measured_conn(
-                DatabaseOperation::ListNodes,
-                move |conn| -> DatabaseResult<_> {
-                    Ok(crate::schema::nodes::table.load::<NodePersistence>(conn)?)
-                },
-            )
+            .with_conn(move |conn| -> DatabaseResult<_> {
+                Ok(crate::schema::nodes::table.load::<NodePersistence>(conn)?)
+            })
             .await?;
 
         tracing::info!("list_nodes: loaded {} nodes", nodes.len());
@@ -215,7 +159,7 @@ impl Persistence {
     ) -> DatabaseResult<()> {
         use crate::schema::nodes::dsl::*;
         let updated = self
-            .with_measured_conn(DatabaseOperation::UpdateNode, move |conn| {
+            .with_conn(move |conn| {
                 let updated = diesel::update(nodes)
                     .filter(node_id.eq(input_node_id.0 as i64))
                     .set((scheduling_policy.eq(String::from(input_scheduling)),))
@@ -237,12 +181,9 @@ impl Persistence {
     /// be enriched at runtime with state discovered on pageservers.
     pub(crate) async fn list_tenant_shards(&self) -> DatabaseResult<Vec<TenantShardPersistence>> {
         let loaded = self
-            .with_measured_conn(
-                DatabaseOperation::ListTenantShards,
-                move |conn| -> DatabaseResult<_> {
-                    Ok(crate::schema::tenant_shards::table.load::<TenantShardPersistence>(conn)?)
-                },
-            )
+            .with_conn(move |conn| -> DatabaseResult<_> {
+                Ok(crate::schema::tenant_shards::table.load::<TenantShardPersistence>(conn)?)
+            })
             .await?;

         if loaded.is_empty() {
@@ -270,10 +211,15 @@ impl Persistence {
         let mut decoded = serde_json::from_slice::<JsonPersistence>(&bytes)
             .map_err(|e| DatabaseError::Logical(format!("Deserialization error: {e}")))?;

-        for shard in decoded.tenants.values_mut() {
-            if shard.placement_policy == "\"Single\"" {
-                // Backward compat for test data after PR https://github.com/neondatabase/neon/pull/7165
-                shard.placement_policy = "{\"Attached\":0}".to_string();
+        for (tenant_id, tenant) in &mut decoded.tenants {
+            // Backward compat: an old attachments.json from before PR #6251, replace
+            // empty strings with proper defaults.
+            if tenant.tenant_id.is_empty() {
+                tenant.tenant_id = tenant_id.to_string();
+                tenant.config = serde_json::to_string(&TenantConfig::default())
+                    .map_err(|e| DatabaseError::Logical(format!("Serialization error: {e}")))?;
+                tenant.placement_policy = serde_json::to_string(&PlacementPolicy::Single)
+                    .map_err(|e| DatabaseError::Logical(format!("Serialization error: {e}")))?;
             }
         }
@@ -319,20 +265,17 @@ impl Persistence {
         shards: Vec<TenantShardPersistence>,
     ) -> DatabaseResult<()> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(
-            DatabaseOperation::InsertTenantShards,
-            move |conn| -> DatabaseResult<()> {
-                conn.transaction(|conn| -> QueryResult<()> {
-                    for tenant in &shards {
-                        diesel::insert_into(tenant_shards)
-                            .values(tenant)
-                            .execute(conn)?;
-                    }
-                    Ok(())
-                })?;
-                Ok(())
-            },
-        )
+        self.with_conn(move |conn| -> DatabaseResult<()> {
+            conn.transaction(|conn| -> QueryResult<()> {
+                for tenant in &shards {
+                    diesel::insert_into(tenant_shards)
+                        .values(tenant)
+                        .execute(conn)?;
+                }
+                Ok(())
+            })?;
+            Ok(())
+        })
         .await
     }

@@ -340,31 +283,25 @@ impl Persistence {
     /// the tenant from memory on this server.
     pub(crate) async fn delete_tenant(&self, del_tenant_id: TenantId) -> DatabaseResult<()> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(
-            DatabaseOperation::DeleteTenant,
-            move |conn| -> DatabaseResult<()> {
-                diesel::delete(tenant_shards)
-                    .filter(tenant_id.eq(del_tenant_id.to_string()))
-                    .execute(conn)?;
-                Ok(())
-            },
-        )
+        self.with_conn(move |conn| -> DatabaseResult<()> {
+            diesel::delete(tenant_shards)
+                .filter(tenant_id.eq(del_tenant_id.to_string()))
+                .execute(conn)?;
+            Ok(())
+        })
         .await
     }

     pub(crate) async fn delete_node(&self, del_node_id: NodeId) -> DatabaseResult<()> {
         use crate::schema::nodes::dsl::*;
-        self.with_measured_conn(
-            DatabaseOperation::DeleteNode,
-            move |conn| -> DatabaseResult<()> {
-                diesel::delete(nodes)
-                    .filter(node_id.eq(del_node_id.0 as i64))
-                    .execute(conn)?;
-                Ok(())
-            },
-        )
+        self.with_conn(move |conn| -> DatabaseResult<()> {
+            diesel::delete(nodes)
+                .filter(node_id.eq(del_node_id.0 as i64))
+                .execute(conn)?;
+            Ok(())
+        })
         .await
     }
@@ -378,7 +315,7 @@ impl Persistence {
     ) -> DatabaseResult<HashMap<TenantShardId, Generation>> {
         use crate::schema::tenant_shards::dsl::*;
         let updated = self
-            .with_measured_conn(DatabaseOperation::ReAttach, move |conn| {
+            .with_conn(move |conn| {
                 let rows_updated = diesel::update(tenant_shards)
                     .filter(generation_pageserver.eq(node_id.0 as i64))
                     .set(generation.eq(generation + 1))
@@ -428,7 +365,7 @@ impl Persistence {
     ) -> anyhow::Result<Generation> {
         use crate::schema::tenant_shards::dsl::*;
         let updated = self
-            .with_measured_conn(DatabaseOperation::IncrementGeneration, move |conn| {
+            .with_conn(move |conn| {
                 let updated = diesel::update(tenant_shards)
                     .filter(tenant_id.eq(tenant_shard_id.tenant_id.to_string()))
                     .filter(shard_number.eq(tenant_shard_id.shard_number.0 as i32))
@@ -472,7 +409,7 @@ impl Persistence {
     ) -> DatabaseResult<()> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(DatabaseOperation::UpdateTenantShard, move |conn| {
+        self.with_conn(move |conn| {
             let query = diesel::update(tenant_shards)
                 .filter(tenant_id.eq(tenant_shard_id.tenant_id.to_string()))
                 .filter(shard_number.eq(tenant_shard_id.shard_number.0 as i32))
@@ -513,7 +450,7 @@ impl Persistence {
     ) -> DatabaseResult<()> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(DatabaseOperation::UpdateTenantConfig, move |conn| {
+        self.with_conn(move |conn| {
             diesel::update(tenant_shards)
                 .filter(tenant_id.eq(input_tenant_id.to_string()))
                 .set((config.eq(serde_json::to_string(&input_config).unwrap()),))
@@ -528,7 +465,7 @@ impl Persistence {
     pub(crate) async fn detach(&self, tenant_shard_id: TenantShardId) -> anyhow::Result<()> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(DatabaseOperation::Detach, move |conn| {
+        self.with_conn(move |conn| {
             let updated = diesel::update(tenant_shards)
                 .filter(tenant_id.eq(tenant_shard_id.tenant_id.to_string()))
                 .filter(shard_number.eq(tenant_shard_id.shard_number.0 as i32))
@@ -558,7 +495,7 @@ impl Persistence {
         parent_to_children: Vec<(TenantShardId, Vec<TenantShardPersistence>)>,
     ) -> DatabaseResult<()> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(DatabaseOperation::BeginShardSplit, move |conn| -> DatabaseResult<()> {
+        self.with_conn(move |conn| -> DatabaseResult<()> {
             conn.transaction(|conn| -> DatabaseResult<()> {
                 // Mark parent shards as splitting
@@ -622,29 +559,26 @@ impl Persistence {
         old_shard_count: ShardCount,
     ) -> DatabaseResult<()> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(
-            DatabaseOperation::CompleteShardSplit,
-            move |conn| -> DatabaseResult<()> {
-                conn.transaction(|conn| -> QueryResult<()> {
-                    // Drop parent shards
-                    diesel::delete(tenant_shards)
-                        .filter(tenant_id.eq(split_tenant_id.to_string()))
-                        .filter(shard_count.eq(old_shard_count.literal() as i32))
-                        .execute(conn)?;
-                    // Clear sharding flag
-                    let updated = diesel::update(tenant_shards)
-                        .filter(tenant_id.eq(split_tenant_id.to_string()))
-                        .set((splitting.eq(0),))
-                        .execute(conn)?;
-                    debug_assert!(updated > 0);
-                    Ok(())
-                })?;
-                Ok(())
-            },
-        )
+        self.with_conn(move |conn| -> DatabaseResult<()> {
+            conn.transaction(|conn| -> QueryResult<()> {
+                // Drop parent shards
+                diesel::delete(tenant_shards)
+                    .filter(tenant_id.eq(split_tenant_id.to_string()))
+                    .filter(shard_count.eq(old_shard_count.literal() as i32))
+                    .execute(conn)?;
+                // Clear sharding flag
+                let updated = diesel::update(tenant_shards)
+                    .filter(tenant_id.eq(split_tenant_id.to_string()))
+                    .set((splitting.eq(0),))
+                    .execute(conn)?;
+                debug_assert!(updated > 0);
+                Ok(())
+            })?;
+            Ok(())
+        })
         .await
     }

@@ -656,44 +590,40 @@ impl Persistence {
         new_shard_count: ShardCount,
     ) -> DatabaseResult<AbortShardSplitStatus> {
         use crate::schema::tenant_shards::dsl::*;
-        self.with_measured_conn(
-            DatabaseOperation::AbortShardSplit,
-            move |conn| -> DatabaseResult<AbortShardSplitStatus> {
-                let aborted =
-                    conn.transaction(|conn| -> DatabaseResult<AbortShardSplitStatus> {
-                        // Clear the splitting state on parent shards
-                        let updated = diesel::update(tenant_shards)
-                            .filter(tenant_id.eq(split_tenant_id.to_string()))
-                            .filter(shard_count.ne(new_shard_count.literal() as i32))
-                            .set((splitting.eq(0),))
-                            .execute(conn)?;
-                        // Parent shards are already gone: we cannot abort.
-                        if updated == 0 {
-                            return Ok(AbortShardSplitStatus::Complete);
-                        }
-                        // Sanity check: if parent shards were present, their cardinality should
-                        // be less than the number of child shards.
-                        if updated >= new_shard_count.count() as usize {
-                            return Err(DatabaseError::Logical(format!(
-                                "Unexpected parent shard count {updated} while aborting split to \
-                                count {new_shard_count:?} on tenant {split_tenant_id}"
-                            )));
-                        }
-                        // Erase child shards
-                        diesel::delete(tenant_shards)
-                            .filter(tenant_id.eq(split_tenant_id.to_string()))
-                            .filter(shard_count.eq(new_shard_count.literal() as i32))
-                            .execute(conn)?;
-                        Ok(AbortShardSplitStatus::Aborted)
-                    })?;
-                Ok(aborted)
-            },
-        )
+        self.with_conn(move |conn| -> DatabaseResult<AbortShardSplitStatus> {
+            let aborted = conn.transaction(|conn| -> DatabaseResult<AbortShardSplitStatus> {
+                // Clear the splitting state on parent shards
+                let updated = diesel::update(tenant_shards)
+                    .filter(tenant_id.eq(split_tenant_id.to_string()))
+                    .filter(shard_count.ne(new_shard_count.literal() as i32))
+                    .set((splitting.eq(0),))
+                    .execute(conn)?;
+                // Parent shards are already gone: we cannot abort.
+                if updated == 0 {
+                    return Ok(AbortShardSplitStatus::Complete);
+                }
+                // Sanity check: if parent shards were present, their cardinality should
+                // be less than the number of child shards.
+                if updated >= new_shard_count.count() as usize {
+                    return Err(DatabaseError::Logical(format!(
+                        "Unexpected parent shard count {updated} while aborting split to \
+                        count {new_shard_count:?} on tenant {split_tenant_id}"
+                    )));
+                }
+                // Erase child shards
+                diesel::delete(tenant_shards)
+                    .filter(tenant_id.eq(split_tenant_id.to_string()))
+                    .filter(shard_count.eq(new_shard_count.literal() as i32))
+                    .execute(conn)?;
+                Ok(AbortShardSplitStatus::Aborted)
+            })?;
+            Ok(aborted)
+        })
         .await
     }
 }
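The hunks above show that one side of this diff wraps every database call in a `with_measured_conn` helper, which records a per-operation latency timer and bumps a labelled error counter on failure, while the other side calls `with_conn` directly. A minimal, self-contained sketch of that wrapper pattern follows; `Metrics` and the synchronous `Result<_, String>` closure are stand-ins invented for illustration (the real code uses a metrics registry, label groups, and Diesel connections):

```rust
use std::time::Instant;

// Hypothetical stand-in for the operation label used in the diff.
#[derive(Clone, Copy, Debug)]
enum DatabaseOperation {
    InsertNode,
    ListNodes,
}

// Hypothetical stand-in for the metrics registry: just an error counter.
#[derive(Default)]
struct Metrics {
    query_errors: u64,
}

// Run a "query", time it, and count failures per operation.
fn with_measured_conn<R>(
    metrics: &mut Metrics,
    op: DatabaseOperation,
    func: impl FnOnce() -> Result<R, String>,
) -> Result<R, String> {
    let started = Instant::now();
    let res = func();
    let elapsed = started.elapsed(); // a real impl would observe this in a latency histogram
    if res.is_err() {
        metrics.query_errors += 1;
        eprintln!("query failed in {op:?} after {elapsed:?}");
    }
    res
}

fn main() {
    let mut metrics = Metrics::default();
    let ok = with_measured_conn(&mut metrics, DatabaseOperation::ListNodes, || Ok(3_u32));
    assert_eq!(ok, Ok(3));
    let err = with_measured_conn(&mut metrics, DatabaseOperation::InsertNode, || {
        Err::<u32, _>("connection lost".to_string())
    });
    assert!(err.is_err());
    assert_eq!(metrics.query_errors, 1);
}
```

The design point the diff illustrates: measurement lives in one wrapper, so each `with_measured_conn` call site only supplies an operation label plus the query closure.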

View File

@@ -1,4 +1,3 @@
-use crate::pageserver_client::PageserverClient;
 use crate::persistence::Persistence;
 use crate::service;
 use hyper::StatusCode;
@@ -118,15 +117,6 @@ impl Reconciler {
         flush_ms: Option<Duration>,
         lazy: bool,
     ) -> Result<(), ReconcileError> {
-        if !node.is_available() && config.mode == LocationConfigMode::Detached {
-            // Attempts to detach from offline nodes may be imitated without doing I/O: a node which is offline
-            // will get fully reconciled wrt the shard's intent state when it is reactivated, irrespective of
-            // what we put into `observed`, in [`crate::service::Service::node_activate_reconcile`]
-            tracing::info!("Node {node} is unavailable during detach: proceeding anyway, it will be detached on next activation");
-            self.observed.locations.remove(&node.get_id());
-            return Ok(());
-        }
         self.observed
             .locations
             .insert(node.get_id(), ObservedStateLocation { conf: None });
@@ -159,16 +149,9 @@ impl Reconciler {
         };
         tracing::info!("location_config({node}) complete: {:?}", config);

-        match config.mode {
-            LocationConfigMode::Detached => {
-                self.observed.locations.remove(&node.get_id());
-            }
-            _ => {
-                self.observed
-                    .locations
-                    .insert(node.get_id(), ObservedStateLocation { conf: Some(config) });
-            }
-        }
+        self.observed
+            .locations
+            .insert(node.get_id(), ObservedStateLocation { conf: Some(config) });

         Ok(())
     }
@@ -260,11 +243,8 @@ impl Reconciler {
         tenant_shard_id: TenantShardId,
         node: &Node,
     ) -> anyhow::Result<HashMap<TimelineId, Lsn>> {
-        let client = PageserverClient::new(
-            node.get_id(),
-            node.base_url(),
-            self.service_config.jwt_token.as_deref(),
-        );
+        let client =
+            mgmt_api::Client::new(node.base_url(), self.service_config.jwt_token.as_deref());

         let timelines = client.timeline_list(&tenant_shard_id).await?;
         Ok(timelines
@@ -495,7 +475,7 @@ impl Reconciler {
         }
     }

-        // Downgrade the origin to secondary. If the tenant's policy is PlacementPolicy::Attached(0), then
+        // Downgrade the origin to secondary. If the tenant's policy is PlacementPolicy::Single, then
         // this location will be deleted in the general case reconciliation that runs after this.
         let origin_secondary_conf = build_location_config(
             &self.shard,
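A recurring change in this file is swapping a bare `mgmt_api::Client` for a `PageserverClient` constructed with `node.get_id()` as an extra first argument. A plausible reading, sketched below with invented types (`InnerClient` stands in for `mgmt_api::Client`; no real API surface is reproduced), is a thin wrapper that carries the node id so every result can be labelled with which pageserver it came from:

```rust
// Hypothetical stand-in for mgmt_api::Client: holds connection details only.
#[allow(dead_code)]
struct InnerClient {
    base_url: String,
    jwt: Option<String>,
}

impl InnerClient {
    fn new(base_url: String, jwt: Option<&str>) -> Self {
        Self { base_url, jwt: jwt.map(str::to_owned) }
    }
    fn timeline_list(&self) -> Result<Vec<String>, String> {
        // The real client performs an HTTP request here.
        Ok(vec![])
    }
}

// Wrapper in the spirit of the diff: same calls, but errors name the node.
struct PageserverClient {
    node_id: u64,
    inner: InnerClient,
}

impl PageserverClient {
    fn new(node_id: u64, base_url: String, jwt: Option<&str>) -> Self {
        Self { node_id, inner: InnerClient::new(base_url, jwt) }
    }
    fn timeline_list(&self) -> Result<Vec<String>, String> {
        self.inner
            .timeline_list()
            .map_err(|e| format!("node {}: {e}", self.node_id))
    }
}

fn main() {
    let client = PageserverClient::new(1, "http://localhost:9898".to_string(), None);
    assert!(client.timeline_list().is_ok());
}
```

The benefit is purely at the call sites: callers keep the same method names while logs and errors gain a per-node label for free.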

View File

@@ -7,9 +7,7 @@ use std::{
     time::{Duration, Instant},
 };

-use crate::{
-    id_lock_map::IdLockMap, persistence::AbortShardSplitStatus, reconciler::ReconcileError,
-};
+use crate::{id_lock_map::IdLockMap, persistence::AbortShardSplitStatus};
 use anyhow::Context;
 use control_plane::storage_controller::{
     AttachHookRequest, AttachHookResponse, InspectRequest, InspectResponse,
@@ -20,14 +18,12 @@ use hyper::StatusCode;
 use pageserver_api::{
     controller_api::{
         NodeAvailability, NodeRegisterRequest, NodeSchedulingPolicy, PlacementPolicy,
-        TenantCreateResponse, TenantCreateResponseShard, TenantDescribeResponse,
-        TenantDescribeResponseShard, TenantLocateResponse, TenantShardMigrateRequest,
-        TenantShardMigrateResponse, UtilizationScore,
+        TenantCreateResponse, TenantCreateResponseShard, TenantLocateResponse,
+        TenantShardMigrateRequest, TenantShardMigrateResponse, UtilizationScore,
     },
     models::{SecondaryProgress, TenantConfigRequest},
 };
-use crate::pageserver_client::PageserverClient;
 use pageserver_api::{
     models::{
         self, LocationConfig, LocationConfigListResponse, LocationConfigMode,
@@ -204,30 +200,6 @@ enum TenantCreateOrUpdate {
     Update(Vec<ShardUpdate>),
 }

-struct ShardSplitParams {
-    old_shard_count: ShardCount,
-    new_shard_count: ShardCount,
-    new_stripe_size: Option<ShardStripeSize>,
-    targets: Vec<ShardSplitTarget>,
-    policy: PlacementPolicy,
-    config: TenantConfig,
-    shard_ident: ShardIdentity,
-}
-
-// When preparing for a shard split, we may either choose to proceed with the split,
-// or find that the work is already done and return NoOp.
-enum ShardSplitAction {
-    Split(ShardSplitParams),
-    NoOp(TenantShardSplitResponse),
-}
-
-// A parent shard which will be split
-struct ShardSplitTarget {
-    parent_id: TenantShardId,
-    node: Node,
-    child_ids: Vec<TenantShardId>,
-}
-
 /// When we tenant shard split operation fails, we may not be able to clean up immediately, because nodes
 /// might not be available. We therefore use a queue of abort operations processed in the background.
 struct TenantShardSplitAbort {
@@ -553,11 +525,7 @@ impl Service {
                 break;
             }

-            let client = PageserverClient::new(
-                node.get_id(),
-                node.base_url(),
-                self.config.jwt_token.as_deref(),
-            );
+            let client = mgmt_api::Client::new(node.base_url(), self.config.jwt_token.as_deref());

             match client
                 .location_config(
                     tenant_shard_id,
@@ -765,19 +733,7 @@ impl Service {
                     tenant.waiter.advance(result.sequence);
                 }
                 Err(e) => {
-                    match e {
-                        ReconcileError::Cancel => {
-                            tracing::info!("Reconciler was cancelled");
-                        }
-                        ReconcileError::Remote(mgmt_api::Error::Cancelled) => {
-                            // This might be due to the reconciler getting cancelled, or it might
-                            // be due to the `Node` being marked offline.
-                            tracing::info!("Reconciler cancelled during pageserver API call");
-                        }
-                        _ => {
-                            tracing::warn!("Reconcile error: {}", e);
-                        }
-                    }
+                    tracing::warn!("Reconcile error: {}", e);

                     // Ordering: populate last_error before advancing error_seq,
                     // so that waiters will see the correct error after waiting.
@@ -1101,7 +1057,7 @@ impl Service {
                 shard_stripe_size: 0,
                 generation: Some(0),
                 generation_pageserver: None,
-                placement_policy: serde_json::to_string(&PlacementPolicy::Attached(0)).unwrap(),
+                placement_policy: serde_json::to_string(&PlacementPolicy::Single).unwrap(),
                 config: serde_json::to_string(&TenantConfig::default()).unwrap(),
                 splitting: SplitState::default(),
             };
@@ -1128,7 +1084,7 @@ impl Service {
                 TenantState::new(
                     attach_req.tenant_shard_id,
                     ShardIdentity::unsharded(),
-                    PlacementPolicy::Attached(0),
+                    PlacementPolicy::Single,
                 ),
             );
             tracing::info!("Inserted shard {} in memory", attach_req.tenant_shard_id);
@@ -1157,7 +1113,7 @@ impl Service {
             self.persistence
                 .update_tenant_shard(
                     attach_req.tenant_shard_id,
-                    PlacementPolicy::Attached(0),
+                    PlacementPolicy::Single,
                     conf,
                     None,
                 )
@@ -1182,7 +1138,7 @@ impl Service {
         if let Some(new_generation) = new_generation {
             tenant_state.generation = Some(new_generation);
-            tenant_state.policy = PlacementPolicy::Attached(0);
+            tenant_state.policy = PlacementPolicy::Single;
         } else {
             // This is a detach notification. We must update placement policy to avoid re-attaching
             // during background scheduling/reconciliation, or during storage controller restart.
@@ -1394,8 +1350,7 @@ impl Service {
             incremented_generations.len()
         );

-        // Apply the updated generation to our in-memory state, and
-        // gather discover secondary locations.
+        // Apply the updated generation to our in-memory state
         let mut locked = self.inner.write().unwrap();
         let (nodes, tenants, scheduler) = locked.parts_mut();
@@ -1403,65 +1358,62 @@ impl Service {
             tenants: Vec::new(),
         };

-        // TODO: cancel/restart any running reconciliation for this tenant, it might be trying
-        // to call location_conf API with an old generation. Wait for cancellation to complete
-        // before responding to this request. Requires well implemented CancellationToken logic
-        // all the way to where we call location_conf. Even then, there can still be a location_conf
-        // request in flight over the network: TODO handle that by making location_conf API refuse
-        // to go backward in generations.
-
-        // Scan through all shards, applying updates for ones where we updated generation
-        // and identifying shards that intend to have a secondary location on this node.
-        for (tenant_shard_id, shard) in tenants {
-            if let Some(new_gen) = incremented_generations.get(tenant_shard_id) {
-                let new_gen = *new_gen;
-                response.tenants.push(ReAttachResponseTenant {
-                    id: *tenant_shard_id,
-                    gen: Some(new_gen.into().unwrap()),
-                    // A tenant is only put into multi or stale modes in the middle of a [`Reconciler::live_migrate`]
-                    // execution. If a pageserver is restarted during that process, then the reconcile pass will
-                    // fail, and start from scratch, so it doesn't make sense for us to try and preserve
-                    // the stale/multi states at this point.
-                    mode: LocationConfigMode::AttachedSingle,
-                });
-
-                shard.generation = std::cmp::max(shard.generation, Some(new_gen));
-                if let Some(observed) = shard.observed.locations.get_mut(&reattach_req.node_id) {
-                    // Why can we update `observed` even though we're not sure our response will be received
-                    // by the pageserver? Because the pageserver will not proceed with startup until
-                    // it has processed response: if it loses it, we'll see another request and increment
-                    // generation again, avoiding any uncertainty about dirtiness of tenant's state.
-                    if let Some(conf) = observed.conf.as_mut() {
-                        conf.generation = new_gen.into();
-                    }
-                } else {
-                    // This node has no observed state for the shard: perhaps it was offline
-                    // when the pageserver restarted. Insert a None, so that the Reconciler
-                    // will be prompted to learn the location's state before it makes changes.
-                    shard
-                        .observed
-                        .locations
-                        .insert(reattach_req.node_id, ObservedStateLocation { conf: None });
-                }
-            } else if shard.intent.get_secondary().contains(&reattach_req.node_id) {
-                // Ordering: pageserver will not accept /location_config requests until it has
-                // finished processing the response from re-attach. So we can update our in-memory state
-                // now, and be confident that we are not stamping on the result of some later location config.
-                // TODO: however, we are not strictly ordered wrt ReconcileResults queue,
-                // so we might update observed state here, and then get over-written by some racing
-                // ReconcileResult. The impact is low however, since we have set state on pageserver something
-                // that matches intent, so worst case if we race then we end up doing a spurious reconcile.
-                response.tenants.push(ReAttachResponseTenant {
-                    id: *tenant_shard_id,
-                    gen: None,
-                    mode: LocationConfigMode::Secondary,
-                });
-
-                // We must not update observed, because we have no guarantee that our
-                // response will be received by the pageserver. This could leave it
-                // falsely dirty, but the resulting reconcile should be idempotent.
-            }
-        }
+        for (tenant_shard_id, new_gen) in incremented_generations {
+            response.tenants.push(ReAttachResponseTenant {
+                id: tenant_shard_id,
+                gen: new_gen.into().unwrap(),
+            });
+
+            // Apply the new generation number to our in-memory state
+            let shard_state = tenants.get_mut(&tenant_shard_id);
+            let Some(shard_state) = shard_state else {
+                // Not fatal. This edge case requires a re-attach to happen
+                // between inserting a new tenant shard in to the database, and updating our in-memory
+                // state to know about the shard, _and_ that the state inserted to the database referenced
+                // a pageserver. Should never happen, but handle it rather than panicking, since it should
+                // be harmless.
+                tracing::error!(
+                    "Shard {} is in database for node {} but not in-memory state",
+                    tenant_shard_id,
+                    reattach_req.node_id
+                );
+                continue;
+            };
+
+            // If [`Persistence::re_attach`] selected this shard, it must have already
+            // had a generation set.
+            debug_assert!(shard_state.generation.is_some());
+            let Some(old_gen) = shard_state.generation else {
+                // Should never happen: would only return incremented generation
+                // for a tenant that already had a non-null generation.
+                return Err(ApiError::InternalServerError(anyhow::anyhow!(
+                    "Generation must be set while re-attaching"
+                )));
+            };
+            shard_state.generation = Some(std::cmp::max(old_gen, new_gen));
+            if let Some(observed) = shard_state
+                .observed
+                .locations
+                .get_mut(&reattach_req.node_id)
+            {
+                if let Some(conf) = observed.conf.as_mut() {
+                    conf.generation = new_gen.into();
+                }
+            } else {
+                // This node has no observed state for the shard: perhaps it was offline
+                // when the pageserver restarted. Insert a None, so that the Reconciler
+                // will be prompted to learn the location's state before it makes changes.
+                shard_state
+                    .observed
+                    .locations
+                    .insert(reattach_req.node_id, ObservedStateLocation { conf: None });
+            }
+
+            // TODO: cancel/restart any running reconciliation for this tenant, it might be trying
+            // to call location_conf API with an old generation. Wait for cancellation to complete
+            // before responding to this request. Requires well implemented CancellationToken logic
+            // all the way to where we call location_conf. Even then, there can still be a location_conf
+            // request in flight over the network: TODO handle that by making location_conf API refuse
+            // to go backward in generations.
+        }

         // We consider a node Active once we have composed a re-attach response, but we
@@ -1523,8 +1475,6 @@ impl Service {
         &self,
         create_req: TenantCreateRequest,
     ) -> Result<TenantCreateResponse, ApiError> {
-        let tenant_id = create_req.new_tenant_id.tenant_id;
-
         // Exclude any concurrent attempts to create/access the same tenant ID
         let _tenant_lock = self
             .tenant_op_locks
@@ -1533,12 +1483,7 @@ impl Service {
         let (response, waiters) = self.do_tenant_create(create_req).await?;

-        if let Err(e) = self.await_waiters(waiters, SHORT_RECONCILE_TIMEOUT).await {
-            // Avoid deadlock: reconcile may fail while notifying compute, if the cloud control plane refuses to
-            // accept compute notifications while it is in the process of creating. Reconciliation will
-            // be retried in the background.
-            tracing::warn!(%tenant_id, "Reconcile not done yet while creating tenant ({e})");
-        }
+        self.await_waiters(waiters, SHORT_RECONCILE_TIMEOUT).await?;

         Ok(response)
     }
@@ -1546,11 +1491,11 @@ impl Service {
         &self,
         create_req: TenantCreateRequest,
     ) -> Result<(TenantCreateResponse, Vec<ReconcilerWaiter>), ApiError> {
+        // As a default, single is convenient for tests that don't choose a policy.
         let placement_policy = create_req
             .placement_policy
             .clone()
-            // As a default, zero secondaries is convenient for tests that don't choose a policy.
-            .unwrap_or(PlacementPolicy::Attached(0));
+            .unwrap_or(PlacementPolicy::Single);

         // This service expects to handle sharding itself: it is an error to try and directly create
         // a particular shard here.
@@ -1617,25 +1562,13 @@ impl Service {
                 splitting: SplitState::default(),
             })
             .collect();

-        match self
-            .persistence
-            .insert_tenant_shards(persist_tenant_shards)
-            .await
-        {
-            Ok(_) => {}
-            Err(DatabaseError::Query(diesel::result::Error::DatabaseError(
-                DatabaseErrorKind::UniqueViolation,
-                _,
-            ))) => {
-                // Unique key violation: this is probably a retry. Because the shard count is part of the unique key,
-                // if we see a unique key violation it means that the creation request's shard count matches the previous
-                // creation's shard count.
-                tracing::info!("Tenant shards already present in database, proceeding with idempotent creation...");
-            }
-            // Any other database error is unexpected and a bug.
-            Err(e) => return Err(ApiError::InternalServerError(anyhow::anyhow!(e))),
-        };
+        self.persistence
+            .insert_tenant_shards(persist_tenant_shards)
+            .await
+            .map_err(|e| {
+                // TODO: distinguish primary key constraint (idempotent, OK), from other errors
+                ApiError::InternalServerError(anyhow::anyhow!(e))
+            })?;

         let (waiters, response_shards) = {
             let mut locked = self.inner.write().unwrap();
@@ -1772,11 +1705,11 @@ impl Service {
             | LocationConfigMode::AttachedSingle
             | LocationConfigMode::AttachedStale => {
                 if nodes.len() > 1 {
-                    PlacementPolicy::Attached(1)
+                    PlacementPolicy::Double(1)
                 } else {
                     // Convenience for dev/test: if we just have one pageserver, import
-                    // tenants into non-HA mode so that scheduling will succeed.
-                    PlacementPolicy::Attached(0)
+                    // tenants into Single mode so that scheduling will succeed.
+                    PlacementPolicy::Single
                 }
             }
         };
@@ -2125,11 +2058,8 @@ impl Service {
}) })
.collect::<Vec<_>>(); .collect::<Vec<_>>();
for tenant_shard_id in shard_ids { for tenant_shard_id in shard_ids {
let client = PageserverClient::new( let client =
node.get_id(), mgmt_api::Client::new(node.base_url(), self.config.jwt_token.as_deref());
node.base_url(),
self.config.jwt_token.as_deref(),
);
tracing::info!("Doing time travel recovery for shard {tenant_shard_id}",); tracing::info!("Doing time travel recovery for shard {tenant_shard_id}",);
@@ -2181,11 +2111,7 @@ impl Service {
// Issue concurrent requests to all shards' locations // Issue concurrent requests to all shards' locations
let mut futs = FuturesUnordered::new(); let mut futs = FuturesUnordered::new();
for (tenant_shard_id, node) in targets { for (tenant_shard_id, node) in targets {
let client = PageserverClient::new( let client = mgmt_api::Client::new(node.base_url(), self.config.jwt_token.as_deref());
node.get_id(),
node.base_url(),
self.config.jwt_token.as_deref(),
);
futs.push(async move { futs.push(async move {
let result = client let result = client
.tenant_secondary_download(tenant_shard_id, wait) .tenant_secondary_download(tenant_shard_id, wait)
@@ -2278,11 +2204,7 @@ impl Service {
         // Phase 1: delete on the pageservers
         let mut any_pending = false;
         for (tenant_shard_id, node) in targets {
-            let client = PageserverClient::new(
-                node.get_id(),
-                node.base_url(),
-                self.config.jwt_token.as_deref(),
-            );
+            let client = mgmt_api::Client::new(node.base_url(), self.config.jwt_token.as_deref());
             // TODO: this, like many other places, requires proper retry handling for 503, timeout: those should not
             // surface immediately as an error to our caller.
             let status = client.tenant_delete(tenant_shard_id).await.map_err(|e| {
@@ -2394,7 +2316,7 @@ impl Service {
                 tenant_shard_id,
                 create_req.new_timeline_id,
             );
-            let client = PageserverClient::new(node.get_id(), node.base_url(), jwt.as_deref());
+            let client = mgmt_api::Client::new(node.base_url(), jwt.as_deref());
             client
                 .timeline_create(tenant_shard_id, &create_req)
@@ -2518,7 +2440,7 @@ impl Service {
                 "Deleting timeline on shard {tenant_shard_id}/{timeline_id}, attached to node {node}",
             );
-            let client = PageserverClient::new(node.get_id(), node.base_url(), jwt.as_deref());
+            let client = mgmt_api::Client::new(node.base_url(), jwt.as_deref());
             client
                 .timeline_delete(tenant_shard_id, timeline_id)
                 .await
@@ -2559,11 +2481,11 @@ impl Service {
     }

     /// When you need to send an HTTP request to the pageserver that holds shard0 of a tenant, this
-    /// function looks up and returns node. If the tenant isn't found, returns Err(ApiError::NotFound)
-    pub(crate) fn tenant_shard0_node(
+    /// function looks it up and returns the url. If the tenant isn't found, returns Err(ApiError::NotFound)
+    pub(crate) fn tenant_shard0_baseurl(
         &self,
         tenant_id: TenantId,
-    ) -> Result<(Node, TenantShardId), ApiError> {
+    ) -> Result<(String, TenantShardId), ApiError> {
         let locked = self.inner.read().unwrap();
         let Some((tenant_shard_id, shard)) = locked
             .tenants
@@ -2595,7 +2517,7 @@ impl Service {
             )));
         };

-        Ok((node.clone(), *tenant_shard_id))
+        Ok((node.base_url(), *tenant_shard_id))
     }

     pub(crate) fn tenant_locate(
@@ -2605,6 +2527,9 @@ impl Service {
         let locked = self.inner.read().unwrap();
         tracing::info!("Locating shards for tenant {tenant_id}");

+        // Take a snapshot of pageservers
+        let pageservers = locked.nodes.clone();
+
         let mut result = Vec::new();
         let mut shard_params: Option<ShardParameters> = None;
@@ -2618,8 +2543,7 @@ impl Service {
                 "Cannot locate a tenant that is not attached"
             )))?;

-            let node = locked
-                .nodes
+            let node = pageservers
                 .get(&node_id)
                 .expect("Pageservers may not be deleted while referenced");
@@ -2667,47 +2591,6 @@ impl Service {
         })
     }

-    pub(crate) fn tenant_describe(
-        &self,
-        tenant_id: TenantId,
-    ) -> Result<TenantDescribeResponse, ApiError> {
-        let locked = self.inner.read().unwrap();
-
-        let mut shard_zero = None;
-        let mut shards = Vec::new();
-
-        for (tenant_shard_id, shard) in locked.tenants.range(TenantShardId::tenant_range(tenant_id))
-        {
-            if tenant_shard_id.is_zero() {
-                shard_zero = Some(shard);
-            }
-
-            let response_shard = TenantDescribeResponseShard {
-                tenant_shard_id: *tenant_shard_id,
-                node_attached: *shard.intent.get_attached(),
-                node_secondary: shard.intent.get_secondary().to_vec(),
-                last_error: shard.last_error.lock().unwrap().clone(),
-                is_reconciling: shard.reconciler.is_some(),
-                is_pending_compute_notification: shard.pending_compute_notification,
-                is_splitting: matches!(shard.splitting, SplitState::Splitting),
-            };
-            shards.push(response_shard);
-        }
-
-        let Some(shard_zero) = shard_zero else {
-            return Err(ApiError::NotFound(
-                anyhow::anyhow!("Tenant {tenant_id} not found").into(),
-            ));
-        };
-
-        Ok(TenantDescribeResponse {
-            shards,
-            stripe_size: shard_zero.shard.stripe_size,
-            policy: shard_zero.policy.clone(),
-            config: shard_zero.config.clone(),
-        })
-    }
-
     #[instrument(skip_all, fields(tenant_id=%op.tenant_id))]
     async fn abort_tenant_shard_split(
         &self,
@@ -2765,7 +2648,7 @@ impl Service {
         let detach_locations: Vec<(Node, TenantShardId)> = {
             let mut detach_locations = Vec::new();
             let mut locked = self.inner.write().unwrap();
-            let (nodes, tenants, scheduler) = locked.parts_mut();
+            let (nodes, tenants, _scheduler) = locked.parts_mut();

             for (tenant_shard_id, shard) in
                 tenants.range_mut(TenantShardId::tenant_range(op.tenant_id))
@@ -2798,13 +2681,6 @@ impl Service {
                 tracing::info!("Restoring parent shard {tenant_shard_id}");
                 shard.splitting = SplitState::Idle;
-                if let Err(e) = shard.schedule(scheduler) {
-                    // If this shard can't be scheduled now (perhaps due to offline nodes or
-                    // capacity issues), that must not prevent us rolling back a split. In this
-                    // case it should be eventually scheduled in the background.
-                    tracing::warn!("Failed to schedule {tenant_shard_id} during shard abort: {e}")
-                }
                 self.maybe_reconcile_shard(shard, nodes);
             }
@@ -2896,7 +2772,7 @@ impl Service {
                 .map(|(shard_id, _)| *shard_id)
                 .collect::<Vec<_>>();

-            let (nodes, tenants, scheduler) = locked.parts_mut();
+            let (_nodes, tenants, scheduler) = locked.parts_mut();
             for parent_id in parent_ids {
                 let child_ids = parent_id.split(new_shard_count);
@@ -2938,7 +2814,7 @@ impl Service {
                         generation,
                         &child_shard,
                         &config,
-                        matches!(policy, PlacementPolicy::Attached(n) if n > 0),
+                        matches!(policy, PlacementPolicy::Double(n) if n > 0),
                     )),
                 },
             );
@@ -2963,8 +2839,6 @@ impl Service {
                     // find a secondary (e.g. because cluster is overloaded).
                     tracing::warn!("Failed to schedule child shard {child}: {e}");
                 }
-                // In the background, attach secondary locations for the new shards
-                self.maybe_reconcile_shard(&mut child_state, nodes);

                 tenants.insert(child, child_state);
                 response.new_shards.push(child);
@@ -2987,23 +2861,17 @@ impl Service {
         let new_shard_count = ShardCount::new(split_req.new_shard_count);
         let new_stripe_size = split_req.new_stripe_size;

-        // Validate the request and construct parameters. This phase is fallible, but does not require
-        // rollback on errors, as it does no I/O and mutates no state.
-        let shard_split_params = match self.prepare_tenant_shard_split(tenant_id, split_req)? {
-            ShardSplitAction::NoOp(resp) => return Ok(resp),
-            ShardSplitAction::Split(params) => params,
-        };
-
-        // Execute this split: this phase mutates state and does remote I/O on pageservers. If it fails,
-        // we must roll back.
-        let r = self
-            .do_tenant_shard_split(tenant_id, shard_split_params)
-            .await;
+        let r = self.do_tenant_shard_split(tenant_id, split_req).await;

         match r {
             Ok(r) => Ok(r),
-            Err(ApiError::BadRequest(_)) => {
-                // A request validation error does not require rollback: we rejected it before we started making any changes: just
-                // return the error
-                r
-            }
             Err(e) => {
-                // Split might be part-done, we must do work to abort it.
+                // General case error handling: split might be part-done, we must do work to abort it.
                 tracing::warn!("Enqueuing background abort of split on {tenant_id}");
                 self.abort_tx
                     .send(TenantShardSplitAbort {
@@ -3019,18 +2887,25 @@ impl Service {
         }
     }

-    fn prepare_tenant_shard_split(
+    pub(crate) async fn do_tenant_shard_split(
         &self,
         tenant_id: TenantId,
         split_req: TenantShardSplitRequest,
-    ) -> Result<ShardSplitAction, ApiError> {
+    ) -> Result<TenantShardSplitResponse, ApiError> {
+        let mut policy = None;
+        let mut shard_ident = None;
+
+        // A parent shard which will be split
+        struct SplitTarget {
+            parent_id: TenantShardId,
+            node: Node,
+            child_ids: Vec<TenantShardId>,
+        }
+
         fail::fail_point!("shard-split-validation", |_| Err(ApiError::BadRequest(
             anyhow::anyhow!("failpoint")
         )));

-        let mut policy = None;
-        let mut config = None;
-        let mut shard_ident = None;
-
         // Validate input, and calculate which shards we will create
         let (old_shard_count, targets) =
             {
@@ -3086,9 +2961,6 @@ impl Service {
                     if shard_ident.is_none() {
                         shard_ident = Some(shard.shard);
                     }
-                    if config.is_none() {
-                        config = Some(shard.config.clone());
-                    }

                     if tenant_shard_id.shard_count.count() == split_req.new_shard_count {
                         tracing::info!(
@@ -3107,7 +2979,9 @@ impl Service {
                     .get(&node_id)
                     .expect("Pageservers may not be deleted while referenced");

-                targets.push(ShardSplitTarget {
+                // TODO: if any reconciliation is currently in progress for this shard, wait for it.
+
+                targets.push(SplitTarget {
                     parent_id: *tenant_shard_id,
                     node: node.clone(),
                     child_ids: tenant_shard_id
@@ -3117,9 +2991,9 @@ impl Service {
         if targets.is_empty() {
             if children_found.len() == split_req.new_shard_count as usize {
-                return Ok(ShardSplitAction::NoOp(TenantShardSplitResponse {
+                return Ok(TenantShardSplitResponse {
                     new_shards: children_found,
-                }));
+                });
             } else {
                 // No shards found to split, and no existing children found: the
                 // tenant doesn't exist at all.
@@ -3149,77 +3023,13 @@ impl Service {
             shard_ident.unwrap()
         };
         let policy = policy.unwrap();
-        let config = config.unwrap();
-
-        Ok(ShardSplitAction::Split(ShardSplitParams {
-            old_shard_count,
-            new_shard_count: ShardCount::new(split_req.new_shard_count),
-            new_stripe_size: split_req.new_stripe_size,
-            targets,
-            policy,
-            config,
-            shard_ident,
-        }))
-    }
-
-    async fn do_tenant_shard_split(
-        &self,
-        tenant_id: TenantId,
-        params: ShardSplitParams,
-    ) -> Result<TenantShardSplitResponse, ApiError> {
         // FIXME: we have dropped self.inner lock, and not yet written anything to the database: another
         // request could occur here, deleting or mutating the tenant. begin_shard_split checks that the
         // parent shards exist as expected, but it would be neater to do the above pre-checks within the
         // same database transaction rather than pre-check in-memory and then maybe-fail the database write.
         // (https://github.com/neondatabase/neon/issues/6676)
-        let ShardSplitParams {
-            old_shard_count,
-            new_shard_count,
-            new_stripe_size,
-            mut targets,
-            policy,
-            config,
-            shard_ident,
-        } = params;
-
-        // Drop any secondary locations: pageservers do not support splitting these, and in any case the
-        // end-state for a split tenant will usually be to have secondary locations on different nodes.
-        // The reconciliation calls in this block also implicitly cancel+barrier wrt any ongoing reconciliation
-        // at the time of split.
-        let waiters = {
-            let mut locked = self.inner.write().unwrap();
-            let mut waiters = Vec::new();
-            let (nodes, tenants, scheduler) = locked.parts_mut();
-            for target in &mut targets {
-                let Some(shard) = tenants.get_mut(&target.parent_id) else {
-                    // Paranoia check: this shouldn't happen: we have the oplock for this tenant ID.
-                    return Err(ApiError::InternalServerError(anyhow::anyhow!(
-                        "Shard {} not found",
-                        target.parent_id
-                    )));
-                };
-
-                if shard.intent.get_attached() != &Some(target.node.get_id()) {
-                    // Paranoia check: this shouldn't happen: we have the oplock for this tenant ID.
-                    return Err(ApiError::Conflict(format!(
-                        "Shard {} unexpectedly rescheduled during split",
-                        target.parent_id
-                    )));
-                }
-
-                // Irrespective of PlacementPolicy, clear secondary locations from intent
-                shard.intent.clear_secondary(scheduler);
-
-                // Run Reconciler to execute detach fo secondary locations.
-                if let Some(waiter) = self.maybe_reconcile_shard(shard, nodes) {
-                    waiters.push(waiter);
-                }
-            }
-            waiters
-        };
-        self.await_waiters(waiters, RECONCILE_TIMEOUT).await?;
-
         // Before creating any new child shards in memory or on the pageservers, persist them: this
         // enables us to ensure that we will always be able to clean up if something goes wrong. This also
         // acts as the protection against two concurrent attempts to split: one of them will get a database
@@ -3248,7 +3058,8 @@ impl Service {
                 generation: None,
                 generation_pageserver: Some(target.node.get_id().0 as i64),
                 placement_policy: serde_json::to_string(&policy).unwrap(),
-                config: serde_json::to_string(&config).unwrap(),
+                // TODO: get the config out of the map
+                config: serde_json::to_string(&TenantConfig::default()).unwrap(),
                 splitting: SplitState::Splitting,
             });
         }
@@ -3300,22 +3111,18 @@ impl Service {
         // N>1 shards into M shards -- initially we're usually splitting 1 shard into N).
         for target in &targets {
-            let ShardSplitTarget {
+            let SplitTarget {
                 parent_id,
                 node,
                 child_ids,
             } = target;
-            let client = PageserverClient::new(
-                node.get_id(),
-                node.base_url(),
-                self.config.jwt_token.as_deref(),
-            );
+            let client = mgmt_api::Client::new(node.base_url(), self.config.jwt_token.as_deref());
             let response = client
                 .tenant_shard_split(
                     *parent_id,
                     TenantShardSplitRequest {
-                        new_shard_count: new_shard_count.literal(),
-                        new_stripe_size,
+                        new_shard_count: split_req.new_shard_count,
+                        new_stripe_size: split_req.new_stripe_size,
                     },
                 )
                 .await
@@ -3364,8 +3171,11 @@ impl Service {
         ));

         // Replace all the shards we just split with their children: this phase is infallible.
-        let (response, child_locations) =
-            self.tenant_shard_split_commit_inmem(tenant_id, new_shard_count, new_stripe_size);
+        let (response, child_locations) = self.tenant_shard_split_commit_inmem(
+            tenant_id,
+            ShardCount::new(split_req.new_shard_count),
+            split_req.new_stripe_size,
+        );

         // Send compute notifications for all the new shards
         let mut failed_notifications = Vec::new();
@@ -3430,20 +3240,17 @@ impl Service {
             let old_attached = *shard.intent.get_attached();

             match shard.policy {
-                PlacementPolicy::Attached(n) => {
+                PlacementPolicy::Single => {
+                    shard.intent.clear_secondary(scheduler);
+                    shard.intent.set_attached(scheduler, Some(migrate_req.node_id));
+                }
+                PlacementPolicy::Double(_n) => {
                     // If our new attached node was a secondary, it no longer should be.
                     shard.intent.remove_secondary(scheduler, migrate_req.node_id);

                     // If we were already attached to something, demote that to a secondary
                     if let Some(old_attached) = old_attached {
-                        if n > 0 {
-                            // Remove other secondaries to make room for the location we'll demote
-                            while shard.intent.get_secondary().len() >= n {
-                                shard.intent.pop_secondary(scheduler);
-                            }
-                            shard.intent.push_secondary(scheduler, old_attached);
-                        }
+                        shard.intent.push_secondary(scheduler, old_attached);
                     }

                     shard.intent.set_attached(scheduler, Some(migrate_req.node_id));
@@ -3469,7 +3276,7 @@ impl Service {
         if let Some(waiter) = waiter {
             waiter.wait_timeout(RECONCILE_TIMEOUT).await?;
         } else {
-            tracing::info!("Migration is a no-op");
+            tracing::warn!("Migration is a no-op");
         }

         Ok(TenantShardMigrateResponse {})
@@ -3824,13 +3631,6 @@ impl Service {
                     observed_loc.conf = None;
                 }

-                if new_nodes.len() == 1 {
-                    // Special case for single-node cluster: there is no point trying to reschedule
-                    // any tenant shards: avoid doing so, in order to avoid spewing warnings about
-                    // failures to schedule them.
-                    continue;
-                }
-
                 if tenant_state.intent.demote_attached(node_id) {
                     tenant_state.sequence = tenant_state.sequence.next();
                     match tenant_state.schedule(scheduler) {

View File

@@ -4,10 +4,7 @@ use std::{
     time::Duration,
 };

-use crate::{
-    metrics::{self, ReconcileCompleteLabelGroup, ReconcileOutcome},
-    persistence::TenantShardPersistence,
-};
+use crate::{metrics, persistence::TenantShardPersistence};
 use pageserver_api::controller_api::PlacementPolicy;
 use pageserver_api::{
     models::{LocationConfig, LocationConfigMode, TenantConfig},
@@ -460,7 +457,22 @@ impl TenantState {
         // Add/remove nodes to fulfil policy
         use PlacementPolicy::*;
         match self.policy {
-            Attached(secondary_count) => {
+            Single => {
+                // Should have exactly one attached, and zero secondaries
+                if !self.intent.secondary.is_empty() {
+                    self.intent.clear_secondary(scheduler);
+                    modified = true;
+                }
+
+                let (modified_attached, _attached_node_id) = self.schedule_attached(scheduler)?;
+                modified |= modified_attached;
+
+                if !self.intent.secondary.is_empty() {
+                    self.intent.clear_secondary(scheduler);
+                    modified = true;
+                }
+            }
+            Double(secondary_count) => {
                 let retain_secondaries = if self.intent.attached.is_none()
                     && scheduler.node_preferred(&self.intent.secondary).is_some()
                 {
@@ -721,10 +733,7 @@ impl TenantState {
         let reconciler_span = tracing::info_span!(parent: None, "reconciler", seq=%reconcile_seq,
                                                     tenant_id=%reconciler.tenant_shard_id.tenant_id,
                                                     shard_id=%reconciler.tenant_shard_id.shard_slug());
-        metrics::METRICS_REGISTRY
-            .metrics_group
-            .storage_controller_reconcile_spawn
-            .inc();
+        metrics::RECONCILER.spawned.inc();
         let result_tx = result_tx.clone();
         let join_handle = tokio::task::spawn(
             async move {
@@ -742,12 +751,10 @@ impl TenantState {
                 // TODO: wrap all remote API operations in cancellation check
                 // as well.
                 if reconciler.cancel.is_cancelled() {
-                    metrics::METRICS_REGISTRY
-                        .metrics_group
-                        .storage_controller_reconcile_complete
-                        .inc(ReconcileCompleteLabelGroup {
-                            status: ReconcileOutcome::Cancel,
-                        });
+                    metrics::RECONCILER
+                        .complete
+                        .with_label_values(&[metrics::ReconcilerMetrics::CANCEL])
+                        .inc();
                     return;
                 }
@@ -762,18 +769,18 @@ impl TenantState {
                 }

                 // Update result counter
-                let outcome_label = match &result {
-                    Ok(_) => ReconcileOutcome::Success,
-                    Err(ReconcileError::Cancel) => ReconcileOutcome::Cancel,
-                    Err(_) => ReconcileOutcome::Error,
-                };
-
-                metrics::METRICS_REGISTRY
-                    .metrics_group
-                    .storage_controller_reconcile_complete
-                    .inc(ReconcileCompleteLabelGroup {
-                        status: outcome_label,
-                    });
+                match &result {
+                    Ok(_) => metrics::RECONCILER
+                        .complete
+                        .with_label_values(&[metrics::ReconcilerMetrics::SUCCESS]),
+                    Err(ReconcileError::Cancel) => metrics::RECONCILER
+                        .complete
+                        .with_label_values(&[metrics::ReconcilerMetrics::CANCEL]),
+                    Err(_) => metrics::RECONCILER
+                        .complete
+                        .with_label_values(&[metrics::ReconcilerMetrics::ERROR]),
+                }
+                .inc();

                 result_tx
                     .send(ReconcileResult {
@@ -888,7 +895,7 @@ pub(crate) mod tests {
         let mut scheduler = Scheduler::new(nodes.values());

-        let mut tenant_state = make_test_tenant_shard(PlacementPolicy::Attached(1));
+        let mut tenant_state = make_test_tenant_shard(PlacementPolicy::Double(1));
         tenant_state
             .schedule(&mut scheduler)
             .expect("we have enough nodes, scheduling should work");
@@ -936,7 +943,7 @@ pub(crate) mod tests {
         let nodes = make_test_nodes(3);
         let mut scheduler = Scheduler::new(nodes.values());

-        let mut tenant_state = make_test_tenant_shard(PlacementPolicy::Attached(1));
+        let mut tenant_state = make_test_tenant_shard(PlacementPolicy::Double(1));
         tenant_state.observed.locations.insert(
             NodeId(3),

View File

@@ -294,7 +294,7 @@ where
    //    is in state 'taken' but the thread that would unlock it is
    //    not there.
    // 2. A rust object that represented some external resource in the
-   //    parent now got implicitly copied by the fork, even though
+   //    parent now got implicitly copied by the the fork, even though
    //    the object's type is not `Copy`. The parent program may use
    //    non-copyability as way to enforce unique ownership of an
    //    external resource in the typesystem. The fork breaks that

View File

@@ -437,7 +437,7 @@ async fn handle_tenant(
             let placement_policy = match create_match.get_one::<String>("placement-policy") {
                 Some(s) if !s.is_empty() => serde_json::from_str::<PlacementPolicy>(s)?,
-                _ => PlacementPolicy::Attached(0),
+                _ => PlacementPolicy::Single,
             };
             let tenant_conf = PageServerNode::parse_config(tenant_conf)?;
@@ -523,6 +523,88 @@ async fn handle_tenant(
                 .with_context(|| format!("Tenant config failed for tenant with id {tenant_id}"))?;
             println!("tenant {tenant_id} successfully configured on the pageserver");
         }
+        Some(("migrate", matches)) => {
+            let tenant_shard_id = get_tenant_shard_id(matches, env)?;
+            let new_pageserver = get_pageserver(env, matches)?;
+            let new_pageserver_id = new_pageserver.conf.id;
+
+            let storage_controller = StorageController::from_env(env);
+            storage_controller
+                .tenant_migrate(tenant_shard_id, new_pageserver_id)
+                .await?;
+
+            println!("tenant {tenant_shard_id} migrated to {}", new_pageserver_id);
+        }
+        Some(("status", matches)) => {
+            let tenant_id = get_tenant_id(matches, env)?;
+
+            let mut shard_table = comfy_table::Table::new();
+            shard_table.set_header(["Shard", "Pageserver", "Physical Size"]);
+
+            let mut tenant_synthetic_size = None;
+
+            let storage_controller = StorageController::from_env(env);
+            for shard in storage_controller.tenant_locate(tenant_id).await?.shards {
+                let pageserver =
+                    PageServerNode::from_env(env, env.get_pageserver_conf(shard.node_id)?);
+
+                let size = pageserver
+                    .http_client
+                    .tenant_details(shard.shard_id)
+                    .await?
+                    .tenant_info
+                    .current_physical_size
+                    .unwrap();
+
+                shard_table.add_row([
+                    format!("{}", shard.shard_id.shard_slug()),
+                    format!("{}", shard.node_id.0),
+                    format!("{} MiB", size / (1024 * 1024)),
+                ]);
+
+                if shard.shard_id.is_zero() {
+                    tenant_synthetic_size =
+                        Some(pageserver.tenant_synthetic_size(shard.shard_id).await?);
+                }
+            }
+
+            let Some(synthetic_size) = tenant_synthetic_size else {
+                bail!("Shard 0 not found")
+            };
+
+            let mut tenant_table = comfy_table::Table::new();
+            tenant_table.add_row(["Tenant ID".to_string(), tenant_id.to_string()]);
+            tenant_table.add_row([
+                "Synthetic size".to_string(),
+                format!("{} MiB", synthetic_size.size.unwrap_or(0) / (1024 * 1024)),
+            ]);
+
+            println!("{tenant_table}");
+            println!("{shard_table}");
+        }
+        Some(("shard-split", matches)) => {
+            let tenant_id = get_tenant_id(matches, env)?;
+            let shard_count: u8 = matches.get_one::<u8>("shard-count").cloned().unwrap_or(0);
+            let shard_stripe_size: Option<ShardStripeSize> = matches
+                .get_one::<Option<ShardStripeSize>>("shard-stripe-size")
+                .cloned()
+                .unwrap();
+
+            let storage_controller = StorageController::from_env(env);
+            let result = storage_controller
+                .tenant_split(tenant_id, shard_count, shard_stripe_size)
+                .await?;
+            println!(
+                "Split tenant {} into shards {}",
+                tenant_id,
+                result
+                    .new_shards
+                    .iter()
+                    .map(|s| format!("{:?}", s))
+                    .collect::<Vec<_>>()
+                    .join(",")
+            );
+        }
         Some((sub_name, _)) => bail!("Unexpected tenant subcommand '{}'", sub_name),
         None => bail!("no tenant subcommand provided"),
@@ -1496,6 +1578,19 @@ fn cli() -> Command {
         .subcommand(Command::new("config")
             .arg(tenant_id_arg.clone())
             .arg(Arg::new("config").short('c').num_args(1).action(ArgAction::Append).required(false)))
+        .subcommand(Command::new("migrate")
+            .about("Migrate a tenant from one pageserver to another")
+            .arg(tenant_id_arg.clone())
+            .arg(pageserver_id_arg.clone()))
+        .subcommand(Command::new("status")
+            .about("Human readable summary of the tenant's shards and attachment locations")
+            .arg(tenant_id_arg.clone()))
+        .subcommand(Command::new("shard-split")
+            .about("Increase the number of shards in the tenant")
+            .arg(tenant_id_arg.clone())
+            .arg(Arg::new("shard-count").value_parser(value_parser!(u8)).long("shard-count").action(ArgAction::Set).help("Number of shards in the new tenant (default 1)"))
+            .arg(Arg::new("shard-stripe-size").value_parser(value_parser!(u32)).long("shard-stripe-size").action(ArgAction::Set).help("Sharding stripe size in pages"))
+        )
     )
     .subcommand(
         Command::new("pageserver")

View File

@@ -12,7 +12,7 @@
 //!
 //! The endpoint is managed by the `compute_ctl` binary. When an endpoint is
 //! started, we launch `compute_ctl` It synchronizes the safekeepers, downloads
-//! the basebackup from the pageserver to initialize the data directory, and
+//! the basebackup from the pageserver to initialize the the data directory, and
 //! finally launches the PostgreSQL process. It watches the PostgreSQL process
 //! until it exits.
 //!

View File

@@ -127,8 +127,8 @@ pub struct PageServerConf {
     pub pg_auth_type: AuthType,
     pub http_auth_type: AuthType,

-    pub(crate) virtual_file_io_engine: Option<String>,
-    pub(crate) get_vectored_impl: Option<String>,
+    pub(crate) virtual_file_io_engine: String,
+    pub(crate) get_vectored_impl: String,
 }

 impl Default for PageServerConf {
@@ -139,8 +139,9 @@ impl Default for PageServerConf {
             listen_http_addr: String::new(),
             pg_auth_type: AuthType::Trust,
             http_auth_type: AuthType::Trust,
-            virtual_file_io_engine: None,
-            get_vectored_impl: None,
+            // FIXME: use the ones exposed by pageserver crate
+            virtual_file_io_engine: "tokio-epoll-uring".to_owned(),
+            get_vectored_impl: "sequential".to_owned(),
         }
     }
 }

View File

@@ -101,16 +101,8 @@ impl PageServerNode {
         let pg_auth_type_param = format!("pg_auth_type='{}'", pg_auth_type);
         let listen_pg_addr_param = format!("listen_pg_addr='{}'", listen_pg_addr);

-        let virtual_file_io_engine = if let Some(virtual_file_io_engine) = virtual_file_io_engine {
-            format!("virtual_file_io_engine='{virtual_file_io_engine}'")
-        } else {
-            String::new()
-        };
-        let get_vectored_impl = if let Some(get_vectored_impl) = get_vectored_impl {
-            format!("get_vectored_impl='{get_vectored_impl}'")
-        } else {
-            String::new()
-        };
+        let virtual_file_io_engine = format!("virtual_file_io_engine='{virtual_file_io_engine}'");
+        let get_vectored_impl = format!("get_vectored_impl='{get_vectored_impl}'");

         let broker_endpoint_param = format!("broker_endpoint='{}'", self.env.broker.client_url());

View File

@@ -279,7 +279,6 @@ impl StorageController {
             &self.listen,
             "-p",
             self.path.as_ref(),
-            "--dev",
             "--database-url",
             &database_url,
             "--max-unavailable-interval",
@@ -476,7 +475,7 @@ impl StorageController {
     pub async fn tenant_locate(&self, tenant_id: TenantId) -> anyhow::Result<TenantLocateResponse> {
         self.dispatch::<(), _>(
             Method::GET,
-            format!("debug/v1/tenant/{tenant_id}/locate"),
+            format!("control/v1/tenant/{tenant_id}/locate"),
             None,
         )
         .await

View File

@@ -40,7 +40,7 @@ macro_rules! register_hll {
}}; }};
($N:literal, $NAME:expr, $HELP:expr $(,)?) => {{ ($N:literal, $NAME:expr, $HELP:expr $(,)?) => {{
$crate::register_hll!($N, $crate::opts!($NAME, $HELP)) $crate::register_hll!($N, $crate::opts!($NAME, $HELP), $LABELS_NAMES)
}}; }};
} }

View File

@@ -6,10 +6,7 @@ use std::str::FromStr;
 use serde::{Deserialize, Serialize};
 use utils::id::NodeId;

-use crate::{
-    models::{ShardParameters, TenantConfig},
-    shard::{ShardStripeSize, TenantShardId},
-};
+use crate::{models::ShardParameters, shard::TenantShardId};

 #[derive(Serialize, Deserialize)]
 pub struct TenantCreateResponseShard {
@@ -60,31 +57,6 @@ pub struct TenantLocateResponse {
     pub shard_params: ShardParameters,
 }

-#[derive(Serialize, Deserialize)]
-pub struct TenantDescribeResponse {
-    pub shards: Vec<TenantDescribeResponseShard>,
-    pub stripe_size: ShardStripeSize,
-    pub policy: PlacementPolicy,
-    pub config: TenantConfig,
-}
-
-#[derive(Serialize, Deserialize)]
-pub struct TenantDescribeResponseShard {
-    pub tenant_shard_id: TenantShardId,
-    pub node_attached: Option<NodeId>,
-    pub node_secondary: Vec<NodeId>,
-    pub last_error: String,
-    /// A task is currently running to reconcile this tenant's intent state with the state on pageservers
-    pub is_reconciling: bool,
-    /// This shard failed in sending a compute notification to the cloud control plane, and a retry is pending.
-    pub is_pending_compute_notification: bool,
-    /// A shard split is currently underway
-    pub is_splitting: bool,
-}
-
 /// Explicitly migrating a particular shard is a low level operation
 /// TODO: higher level "Reschedule tenant" operation where the request
 /// specifies some constraints, e.g. asking it to get off particular node(s)
@@ -209,8 +181,11 @@ impl From<NodeSchedulingPolicy> for String {
 /// to create secondary locations.
 #[derive(Clone, Serialize, Deserialize, Debug, PartialEq, Eq)]
 pub enum PlacementPolicy {
-    /// Normal live state: one attached pageserver and zero or more secondaries.
-    Attached(usize),
+    /// Cheapest way to attach a tenant: just one pageserver, no secondary
+    Single,
+    /// Production-ready way to attach a tenant: one attached pageserver and
+    /// some number of secondaries.
+    Double(usize),
     /// Create one secondary mode locations. This is useful when onboarding
     /// a tenant, or for an idle tenant that we might want to bring online quickly.
     Secondary,
@@ -232,14 +207,14 @@ mod test {
     /// Check stability of PlacementPolicy's serialization
     #[test]
     fn placement_policy_encoding() -> anyhow::Result<()> {
-        let v = PlacementPolicy::Attached(1);
+        let v = PlacementPolicy::Double(1);
         let encoded = serde_json::to_string(&v)?;
-        assert_eq!(encoded, "{\"Attached\":1}");
+        assert_eq!(encoded, "{\"Double\":1}");
         assert_eq!(serde_json::from_str::<PlacementPolicy>(&encoded)?, v);
-        let v = PlacementPolicy::Detached;
+        let v = PlacementPolicy::Single;
         let encoded = serde_json::to_string(&v)?;
-        assert_eq!(encoded, "\"Detached\"");
+        assert_eq!(encoded, "\"Single\"");
         assert_eq!(serde_json::from_str::<PlacementPolicy>(&encoded)?, v);
         Ok(())
     }

View File

@@ -6,9 +6,7 @@
 use serde::{Deserialize, Serialize};
 use utils::id::NodeId;

-use crate::{
-    controller_api::NodeRegisterRequest, models::LocationConfigMode, shard::TenantShardId,
-};
+use crate::{controller_api::NodeRegisterRequest, shard::TenantShardId};

 /// Upcall message sent by the pageserver to the configured `control_plane_api` on
 /// startup.
@@ -22,20 +20,12 @@ pub struct ReAttachRequest {
     pub register: Option<NodeRegisterRequest>,
 }

-fn default_mode() -> LocationConfigMode {
-    LocationConfigMode::AttachedSingle
-}
-
-#[derive(Serialize, Deserialize, Debug)]
+#[derive(Serialize, Deserialize)]
 pub struct ReAttachResponseTenant {
     pub id: TenantShardId,
-    /// Mandatory if LocationConfigMode is None or set to an Attached* mode
-    pub gen: Option<u32>,
-    /// Default value only for backward compat: this field should be set
-    #[serde(default = "default_mode")]
-    pub mode: LocationConfigMode,
+    pub gen: u32,
 }

 #[derive(Serialize, Deserialize)]
 pub struct ReAttachResponse {
     pub tenants: Vec<ReAttachResponseTenant>,

View File

@@ -1,6 +1,5 @@
 use anyhow::*;
 use clap::{value_parser, Arg, ArgMatches, Command};
-use postgres::Client;
 use std::{path::PathBuf, str::FromStr};
 use wal_craft::*;
@@ -9,8 +8,8 @@ fn main() -> Result<()> {
         .init();
     let arg_matches = cli().get_matches();
-    let wal_craft = |arg_matches: &ArgMatches, client: &mut Client| {
-        let intermediate_lsns = match arg_matches
+    let wal_craft = |arg_matches: &ArgMatches, client| {
+        let (intermediate_lsns, end_of_wal_lsn) = match arg_matches
             .get_one::<String>("type")
             .map(|s| s.as_str())
             .context("'type' is required")?
@@ -26,7 +25,6 @@ fn main() -> Result<()> {
             LastWalRecordCrossingSegment::NAME => LastWalRecordCrossingSegment::craft(client)?,
             a => panic!("Unknown --type argument: {a}"),
         };
-        let end_of_wal_lsn = client.pg_current_wal_insert_lsn()?;
         for lsn in intermediate_lsns {
             println!("intermediate_lsn = {lsn}");
         }

View File

@@ -5,6 +5,7 @@ use postgres::types::PgLsn;
 use postgres::Client;
 use postgres_ffi::{WAL_SEGMENT_SIZE, XLOG_BLCKSZ};
 use postgres_ffi::{XLOG_SIZE_OF_XLOG_RECORD, XLOG_SIZE_OF_XLOG_SHORT_PHD};
+use std::cmp::Ordering;
 use std::path::{Path, PathBuf};
 use std::process::Command;
 use std::time::{Duration, Instant};
@@ -231,52 +232,59 @@ pub fn ensure_server_config(client: &mut impl postgres::GenericClient) -> anyhow
 pub trait Crafter {
     const NAME: &'static str;

-    /// Generates WAL using the client `client`. Returns a vector of some valid
-    /// "interesting" intermediate LSNs which one may start reading from.
-    /// test_end_of_wal uses this to check various starting points.
-    ///
-    /// Note that postgres is generally keen about writing some WAL. While we
-    /// try to disable it (autovacuum, big wal_writer_delay, etc) it is always
-    /// possible, e.g. xl_running_xacts are dumped each 15s. So checks about
-    /// stable WAL end would be flaky unless postgres is shut down. For this
-    /// reason returning potential end of WAL here is pointless. Most of the
-    /// time this doesn't happen though, so it is reasonable to create needed
-    /// WAL structure and immediately kill postgres like test_end_of_wal does.
-    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<Vec<PgLsn>>;
+    /// Generates WAL using the client `client`. Returns a pair of:
+    /// * A vector of some valid "interesting" intermediate LSNs which one may start reading from.
+    ///   May include or exclude Lsn(0) and the end-of-wal.
+    /// * The expected end-of-wal LSN.
+    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<(Vec<PgLsn>, PgLsn)>;
 }

-/// Wraps some WAL craft function, providing current LSN to it before the
-/// insertion and flushing WAL afterwards. Also pushes initial LSN to the
-/// result.
 fn craft_internal<C: postgres::GenericClient>(
     client: &mut C,
-    f: impl Fn(&mut C, PgLsn) -> anyhow::Result<Vec<PgLsn>>,
-) -> anyhow::Result<Vec<PgLsn>> {
+    f: impl Fn(&mut C, PgLsn) -> anyhow::Result<(Vec<PgLsn>, Option<PgLsn>)>,
+) -> anyhow::Result<(Vec<PgLsn>, PgLsn)> {
     ensure_server_config(client)?;

     let initial_lsn = client.pg_current_wal_insert_lsn()?;
     info!("LSN initial = {}", initial_lsn);

-    let mut intermediate_lsns = f(client, initial_lsn)?;
+    let (mut intermediate_lsns, last_lsn) = f(client, initial_lsn)?;
+    let last_lsn = match last_lsn {
+        None => client.pg_current_wal_insert_lsn()?,
+        Some(last_lsn) => {
+            let insert_lsn = client.pg_current_wal_insert_lsn()?;
+            match last_lsn.cmp(&insert_lsn) {
+                Ordering::Less => bail!(
+                    "Some records were inserted after the crafted WAL: {} vs {}",
+                    last_lsn,
+                    insert_lsn
+                ),
+                Ordering::Equal => last_lsn,
+                Ordering::Greater => bail!("Reported LSN is greater than insert_lsn"),
+            }
+        }
+    };
     if !intermediate_lsns.starts_with(&[initial_lsn]) {
         intermediate_lsns.insert(0, initial_lsn);
     }

     // Some records may be not flushed, e.g. non-transactional logical messages.
-    //
-    // Note: this is broken if pg_current_wal_insert_lsn is at page boundary
-    // because pg_current_wal_insert_lsn skips page headers.
     client.execute("select neon_xlogflush(pg_current_wal_insert_lsn())", &[])?;
-    Ok(intermediate_lsns)
+    match last_lsn.cmp(&client.pg_current_wal_flush_lsn()?) {
+        Ordering::Less => bail!("Some records were flushed after the crafted WAL"),
+        Ordering::Equal => {}
+        Ordering::Greater => bail!("Reported LSN is greater than flush_lsn"),
+    }
+    Ok((intermediate_lsns, last_lsn))
 }

 pub struct Simple;
 impl Crafter for Simple {
     const NAME: &'static str = "simple";
-    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<Vec<PgLsn>> {
+    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<(Vec<PgLsn>, PgLsn)> {
         craft_internal(client, |client, _| {
             client.execute("CREATE table t(x int)", &[])?;
-            Ok(Vec::new())
+            Ok((Vec::new(), None))
         })
     }
 }
@@ -284,36 +292,29 @@ impl Crafter for Simple {
 pub struct LastWalRecordXlogSwitch;
 impl Crafter for LastWalRecordXlogSwitch {
     const NAME: &'static str = "last_wal_record_xlog_switch";
-    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<Vec<PgLsn>> {
-        // Do not use craft_internal because here we end up with flush_lsn exactly on
+    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<(Vec<PgLsn>, PgLsn)> {
+        // Do not use generate_internal because here we end up with flush_lsn exactly on
         // the segment boundary and insert_lsn after the initial page header, which is unusual.
         ensure_server_config(client)?;

         client.execute("CREATE table t(x int)", &[])?;
         let before_xlog_switch = client.pg_current_wal_insert_lsn()?;
-        // pg_switch_wal returns end of last record of the switched segment,
-        // i.e. end of SWITCH itself.
-        let xlog_switch_record_end: PgLsn = client.query_one("SELECT pg_switch_wal()", &[])?.get(0);
-        let before_xlog_switch_u64 = u64::from(before_xlog_switch);
-        let next_segment = PgLsn::from(
-            before_xlog_switch_u64 - (before_xlog_switch_u64 % WAL_SEGMENT_SIZE as u64)
-                + WAL_SEGMENT_SIZE as u64,
-        );
+        let after_xlog_switch: PgLsn = client.query_one("SELECT pg_switch_wal()", &[])?.get(0);
+        let next_segment = PgLsn::from(0x0200_0000);
         ensure!(
-            xlog_switch_record_end <= next_segment,
-            "XLOG_SWITCH record ended after the expected segment boundary: {} > {}",
-            xlog_switch_record_end,
+            after_xlog_switch <= next_segment,
+            "XLOG_SWITCH message ended after the expected segment boundary: {} > {}",
+            after_xlog_switch,
             next_segment
         );
-        Ok(vec![before_xlog_switch, xlog_switch_record_end])
+        Ok((vec![before_xlog_switch, after_xlog_switch], next_segment))
     }
 }

 pub struct LastWalRecordXlogSwitchEndsOnPageBoundary;
-/// Craft xlog SWITCH record ending at page boundary.
 impl Crafter for LastWalRecordXlogSwitchEndsOnPageBoundary {
     const NAME: &'static str = "last_wal_record_xlog_switch_ends_on_page_boundary";
-    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<Vec<PgLsn>> {
+    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<(Vec<PgLsn>, PgLsn)> {
         // Do not use generate_internal because here we end up with flush_lsn exactly on
         // the segment boundary and insert_lsn after the initial page header, which is unusual.
         ensure_server_config(client)?;
@@ -360,29 +361,28 @@ impl Crafter for LastWalRecordXlogSwitchEndsOnPageBoundary {
         // Emit the XLOG_SWITCH
         let before_xlog_switch = client.pg_current_wal_insert_lsn()?;
-        let xlog_switch_record_end: PgLsn = client.query_one("SELECT pg_switch_wal()", &[])?.get(0);
+        let after_xlog_switch: PgLsn = client.query_one("SELECT pg_switch_wal()", &[])?.get(0);
         let next_segment = PgLsn::from(0x0200_0000);
         ensure!(
-            xlog_switch_record_end < next_segment,
-            "XLOG_SWITCH record ended on or after the expected segment boundary: {} > {}",
-            xlog_switch_record_end,
+            after_xlog_switch < next_segment,
+            "XLOG_SWITCH message ended on or after the expected segment boundary: {} > {}",
+            after_xlog_switch,
             next_segment
         );
         ensure!(
-            u64::from(xlog_switch_record_end) as usize % XLOG_BLCKSZ == XLOG_SIZE_OF_XLOG_SHORT_PHD,
+            u64::from(after_xlog_switch) as usize % XLOG_BLCKSZ == XLOG_SIZE_OF_XLOG_SHORT_PHD,
             "XLOG_SWITCH message ended not on page boundary: {}, offset = {}",
-            xlog_switch_record_end,
-            u64::from(xlog_switch_record_end) as usize % XLOG_BLCKSZ
+            after_xlog_switch,
+            u64::from(after_xlog_switch) as usize % XLOG_BLCKSZ
         );
-        Ok(vec![before_xlog_switch, xlog_switch_record_end])
+        Ok((vec![before_xlog_switch, after_xlog_switch], next_segment))
     }
 }

-/// Write ~16MB logical message; it should cross WAL segment.
-fn craft_seg_size_logical_message(
+fn craft_single_logical_message(
     client: &mut impl postgres::GenericClient,
     transactional: bool,
-) -> anyhow::Result<Vec<PgLsn>> {
+) -> anyhow::Result<(Vec<PgLsn>, PgLsn)> {
     craft_internal(client, |client, initial_lsn| {
         ensure!(
             initial_lsn < PgLsn::from(0x0200_0000 - 1024 * 1024),
@@ -405,24 +405,34 @@ fn craft_seg_size_logical_message(
             "Logical message crossed two segments"
         );
-        Ok(vec![message_lsn])
+        if transactional {
+            // Transactional logical messages are part of a transaction, so the one above is
+            // followed by a small COMMIT record.
+            let after_message_lsn = client.pg_current_wal_insert_lsn()?;
+            ensure!(
+                message_lsn < after_message_lsn,
+                "No record found after the emitted message"
+            );
+            Ok((vec![message_lsn], Some(after_message_lsn)))
+        } else {
+            Ok((Vec::new(), Some(message_lsn)))
+        }
     })
 }

 pub struct WalRecordCrossingSegmentFollowedBySmallOne;
 impl Crafter for WalRecordCrossingSegmentFollowedBySmallOne {
     const NAME: &'static str = "wal_record_crossing_segment_followed_by_small_one";
-    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<Vec<PgLsn>> {
-        // Transactional message crossing WAL segment will be followed by small
-        // commit record.
-        craft_seg_size_logical_message(client, true)
+    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<(Vec<PgLsn>, PgLsn)> {
+        craft_single_logical_message(client, true)
     }
 }

 pub struct LastWalRecordCrossingSegment;
 impl Crafter for LastWalRecordCrossingSegment {
     const NAME: &'static str = "last_wal_record_crossing_segment";
-    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<Vec<PgLsn>> {
-        craft_seg_size_logical_message(client, false)
+    fn craft(client: &mut impl postgres::GenericClient) -> anyhow::Result<(Vec<PgLsn>, PgLsn)> {
+        craft_single_logical_message(client, false)
     }
 }

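The base-side `next_segment` computation removed in `LastWalRecordXlogSwitch` rounds the current insert position up to the next 16 MiB WAL segment boundary instead of hardcoding `0x0200_0000`. A minimal stdlib sketch of that arithmetic (the constant value matches Postgres' default segment size; the function name is illustrative):

```rust
// Round a WAL byte offset up to the next segment boundary, as in the
// diff's `next_segment` computation. 16 MiB segments assumed.
const WAL_SEGMENT_SIZE: u64 = 16 * 1024 * 1024;

fn next_segment_boundary(offset: u64) -> u64 {
    // Drop the intra-segment remainder, then advance one full segment.
    offset - (offset % WAL_SEGMENT_SIZE) + WAL_SEGMENT_SIZE
}

fn main() {
    // An offset inside the second segment rounds up to the start of the third.
    assert_eq!(next_segment_boundary(0x0100_0001), 0x0200_0000);
    // An offset exactly on a boundary still advances one whole segment,
    // matching the `<=` comparison used in the crafter.
    assert_eq!(next_segment_boundary(0x0200_0000), 0x0300_0000);
    println!("ok");
}
```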
View File

@@ -11,15 +11,13 @@ use utils::const_assert;
 use utils::lsn::Lsn;

 fn init_logging() {
-    let _ = env_logger::Builder::from_env(env_logger::Env::default().default_filter_or(format!(
-        "crate=info,postgres_ffi::{PG_MAJORVERSION}::xlog_utils=trace"
-    )))
+    let _ = env_logger::Builder::from_env(env_logger::Env::default().default_filter_or(
+        format!("crate=info,postgres_ffi::{PG_MAJORVERSION}::xlog_utils=trace"),
+    ))
     .is_test(true)
     .try_init();
 }

-/// Test that find_end_of_wal returns the same results as pg_dump on various
-/// WALs created by Crafter.
 fn test_end_of_wal<C: crate::Crafter>(test_name: &str) {
     use crate::*;
@@ -40,13 +38,13 @@ fn test_end_of_wal<C: crate::Crafter>(test_name: &str) {
     }
     cfg.initdb().unwrap();
     let srv = cfg.start_server().unwrap();
-    let intermediate_lsns = C::craft(&mut srv.connect_with_timeout().unwrap()).unwrap();
+    let (intermediate_lsns, expected_end_of_wal_partial) =
+        C::craft(&mut srv.connect_with_timeout().unwrap()).unwrap();
     let intermediate_lsns: Vec<Lsn> = intermediate_lsns
         .iter()
         .map(|&lsn| u64::from(lsn).into())
         .collect();
-    // Kill postgres. Note that it might have inserted to WAL something after
-    // 'craft' did its job.
+    let expected_end_of_wal: Lsn = u64::from(expected_end_of_wal_partial).into();
     srv.kill();

     // Check find_end_of_wal on the initial WAL
@@ -58,7 +56,7 @@ fn test_end_of_wal<C: crate::Crafter>(test_name: &str) {
         .filter(|fname| IsXLogFileName(fname))
         .max()
         .unwrap();
-    let expected_end_of_wal = find_pg_waldump_end_of_wal(&cfg, &last_segment);
+    check_pg_waldump_end_of_wal(&cfg, &last_segment, expected_end_of_wal);
     for start_lsn in intermediate_lsns
         .iter()
         .chain(std::iter::once(&expected_end_of_wal))
@@ -93,7 +91,11 @@ fn test_end_of_wal<C: crate::Crafter>(test_name: &str) {
     }
 }

-fn find_pg_waldump_end_of_wal(cfg: &crate::Conf, last_segment: &str) -> Lsn {
+fn check_pg_waldump_end_of_wal(
+    cfg: &crate::Conf,
+    last_segment: &str,
+    expected_end_of_wal: Lsn,
+) {
     // Get the actual end of WAL by pg_waldump
     let waldump_output = cfg
         .pg_waldump("000000010000000000000001", last_segment)
@@ -111,8 +113,11 @@ fn find_pg_waldump_end_of_wal(cfg: &crate::Conf, last_segment: &str) -> Lsn {
         }
     };
     let waldump_wal_end = Lsn::from_str(caps.get(1).unwrap().as_str()).unwrap();
-    info!("waldump erred on {}", waldump_wal_end);
-    waldump_wal_end
+    info!(
+        "waldump erred on {}, expected wal end at {}",
+        waldump_wal_end, expected_end_of_wal
+    );
+    assert_eq!(waldump_wal_end, expected_end_of_wal);
 }

 fn check_end_of_wal(
@@ -205,9 +210,9 @@ pub fn test_update_next_xid() {
 #[test]
 pub fn test_encode_logical_message() {
     let expected = [
-        64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 21, 0, 0, 170, 34, 166, 227, 255, 38,
-        0, 0, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 112, 114, 101, 102,
-        105, 120, 0, 109, 101, 115, 115, 97, 103, 101,
+        64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 21, 0, 0, 170, 34, 166, 227, 255,
+        38, 0, 0, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 112, 114,
+        101, 102, 105, 120, 0, 109, 101, 115, 115, 97, 103, 101,
     ];
     let actual = encode_logical_message("prefix", "message");
     assert_eq!(expected, actual[..]);

View File

@@ -198,7 +198,6 @@ impl LocalFs {
         fs::OpenOptions::new()
             .write(true)
             .create(true)
-            .truncate(true)
             .open(&temp_file_path)
             .await
             .with_context(|| {

View File

@@ -247,7 +247,7 @@ fn scenario_4() {
     //
     // This is in total 5000 + 1000 + 5000 + 1000 = 12000
     //
-    // (If we used the method from the previous scenario, and
+    // (If we used the the method from the previous scenario, and
     // kept only snapshot at the branch point, we'd need to keep
     // all the WAL between 10000-18000 on the main branch, so
     // the total size would be 5000 + 1000 + 8000 = 14000. The

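The `scenario_4` comment compares two retention strategies by summing retained bytes. Its arithmetic can be restated as a toy calculation (numbers taken verbatim from the comment):

```rust
// Retention arithmetic from scenario_4's comment.
fn main() {
    // Keeping a snapshot on each branch: 5000 + 1000 + 5000 + 1000.
    let snapshot_per_branch = 5000 + 1000 + 5000 + 1000;
    assert_eq!(snapshot_per_branch, 12000);

    // Keeping only the snapshot at the branch point means retaining all
    // the WAL between 10000-18000 on the main branch: 5000 + 1000 + 8000.
    let snapshot_at_branch_point = 5000 + 1000 + 8000;
    assert_eq!(snapshot_at_branch_point, 14000);

    // The per-branch strategy retains less in this scenario.
    assert!(snapshot_per_branch < snapshot_at_branch_point);
    println!("ok");
}
```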
View File

@@ -13,7 +13,6 @@ testing = ["fail/failpoints"]
 [dependencies]
 arc-swap.workspace = true
 sentry.workspace = true
-async-compression.workspace = true
 async-trait.workspace = true
 anyhow.workspace = true
 bincode.workspace = true
@@ -37,7 +36,6 @@ serde_json.workspace = true
 signal-hook.workspace = true
 thiserror.workspace = true
 tokio.workspace = true
-tokio-tar.workspace = true
 tokio-util.workspace = true
 tracing.workspace = true
 tracing-error.workspace = true
@@ -48,7 +46,6 @@ strum.workspace = true
 strum_macros.workspace = true
 url.workspace = true
 uuid.workspace = true
-walkdir.workspace = true
 pq_proto.workspace = true
 postgres_connection.workspace = true

View File

@@ -245,7 +245,7 @@ impl std::io::Write for ChannelWriter {
     }
 }

-pub async fn prometheus_metrics_handler(_req: Request<Body>) -> Result<Response<Body>, ApiError> {
+async fn prometheus_metrics_handler(_req: Request<Body>) -> Result<Response<Body>, ApiError> {
     SERVE_METRICS_COUNT.inc();

     let started_at = std::time::Instant::now();
@@ -367,6 +367,7 @@ pub fn make_router() -> RouterBuilder<hyper::Body, ApiError> {
         .middleware(Middleware::post_with_info(
             add_request_id_header_to_response,
         ))
+        .get("/metrics", |r| request_span(r, prometheus_metrics_handler))
         .err_handler(route_error_handler)
 }

View File

@@ -87,8 +87,6 @@ pub mod failpoint_support;
 pub mod yielding_loop;

-pub mod zstd;
-
 /// This is a shortcut to embed git sha into binaries and avoid copying the same build script to all packages
 ///
 /// we have several cases:

View File

@@ -63,7 +63,6 @@ impl UnwrittenLockFile {
 pub fn create_exclusive(lock_file_path: &Utf8Path) -> anyhow::Result<UnwrittenLockFile> {
     let lock_file = fs::OpenOptions::new()
         .create(true) // O_CREAT
-        .truncate(true)
         .write(true)
         .open(lock_file_path)
         .context("open lock file")?;

View File

@@ -29,10 +29,12 @@ pub struct PageserverFeedback {
     // Serialize with RFC3339 format.
     #[serde(with = "serde_systemtime")]
     pub replytime: SystemTime,
-    /// Used to track feedbacks from different shards. Always zero for unsharded tenants.
-    pub shard_number: u32,
 }

+// NOTE: Do not forget to increment this number when adding new fields to PageserverFeedback.
+// Do not remove previously available fields because this might be backwards incompatible.
+pub const PAGESERVER_FEEDBACK_FIELDS_NUMBER: u8 = 5;
+
 impl PageserverFeedback {
     pub fn empty() -> PageserverFeedback {
         PageserverFeedback {
@@ -41,7 +43,6 @@ impl PageserverFeedback {
             remote_consistent_lsn: Lsn::INVALID,
             disk_consistent_lsn: Lsn::INVALID,
             replytime: *PG_EPOCH,
-            shard_number: 0,
         }
     }
@@ -58,26 +59,17 @@ impl PageserverFeedback {
     //
     // TODO: change serialized fields names once all computes migrate to rename.
     pub fn serialize(&self, buf: &mut BytesMut) {
-        let buf_ptr = buf.len();
-        buf.put_u8(0); // # of keys, will be filled later
-        let mut nkeys = 0;
-
-        nkeys += 1;
+        buf.put_u8(PAGESERVER_FEEDBACK_FIELDS_NUMBER); // # of keys
         buf.put_slice(b"current_timeline_size\0");
         buf.put_i32(8);
         buf.put_u64(self.current_timeline_size);

-        nkeys += 1;
         buf.put_slice(b"ps_writelsn\0");
         buf.put_i32(8);
         buf.put_u64(self.last_received_lsn.0);

-        nkeys += 1;
         buf.put_slice(b"ps_flushlsn\0");
         buf.put_i32(8);
         buf.put_u64(self.disk_consistent_lsn.0);

-        nkeys += 1;
         buf.put_slice(b"ps_applylsn\0");
         buf.put_i32(8);
         buf.put_u64(self.remote_consistent_lsn.0);
@@ -88,19 +80,9 @@ impl PageserverFeedback {
             .expect("failed to serialize pg_replytime earlier than PG_EPOCH")
             .as_micros() as i64;

-        nkeys += 1;
         buf.put_slice(b"ps_replytime\0");
         buf.put_i32(8);
         buf.put_i64(timestamp);
-
-        if self.shard_number > 0 {
-            nkeys += 1;
-            buf.put_slice(b"shard_number\0");
-            buf.put_i32(4);
-            buf.put_u32(self.shard_number);
-        }
-
-        buf[buf_ptr] = nkeys;
     }

     // Deserialize PageserverFeedback message
@@ -143,8 +125,9 @@ impl PageserverFeedback {
                 }
                 b"shard_number" => {
                     let len = buf.get_i32();
-                    assert_eq!(len, 4);
-                    rf.shard_number = buf.get_u32();
+                    // TODO: this will be implemented in the next update,
+                    // for now, we just skip the value.
+                    buf.advance(len as usize);
                 }
                 _ => {
                     let len = buf.get_i32();
@@ -217,7 +200,10 @@ mod tests {
         rf.serialize(&mut data);

         // Add an extra field to the buffer and adjust number of keys
-        data[0] += 1;
+        if let Some(first) = data.first_mut() {
+            *first = PAGESERVER_FEEDBACK_FIELDS_NUMBER + 1;
+        }
         data.put_slice(b"new_field_one\0");
         data.put_i32(8);
         data.put_u64(42);

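Both sides of the `PageserverFeedback` hunk emit the same key/value framing: a leading count byte, then for each field a NUL-terminated name, a 4-byte length, and the value. The base side back-patches the count byte after writing a variable number of fields; the head side hardcodes it. A minimal stdlib sketch of the back-patching variant (helper and field names are illustrative, and `Vec<u8>` stands in for `BytesMut`):

```rust
// Append one `name\0 len value` entry and bump the running key count.
fn put_u64_field(buf: &mut Vec<u8>, name: &str, value: u64, nkeys: &mut u8) {
    *nkeys += 1;
    buf.extend_from_slice(name.as_bytes());
    buf.push(0); // NUL-terminated field name
    buf.extend_from_slice(&8i32.to_be_bytes()); // value length
    buf.extend_from_slice(&value.to_be_bytes());
}

fn serialize_feedback(timeline_size: u64, write_lsn: u64) -> Vec<u8> {
    let mut buf = Vec::new();
    let count_at = buf.len();
    buf.push(0); // number of keys, patched below
    let mut nkeys = 0;
    put_u64_field(&mut buf, "current_timeline_size", timeline_size, &mut nkeys);
    put_u64_field(&mut buf, "ps_writelsn", write_lsn, &mut nkeys);
    buf[count_at] = nkeys; // back-patch the count once all fields are written
    buf
}

fn main() {
    let buf = serialize_feedback(1024, 42);
    assert_eq!(buf[0], 2); // two fields were written
    println!("ok");
}
```

Back-patching lets optional fields (like the base side's conditional `shard_number`) change the count without a separate pre-pass.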
View File

@@ -245,7 +245,7 @@ impl<'a, T> Guard<'a, T> {
     ///
     /// The permit will be on a semaphore part of the new internal value, and any following
     /// [`OnceCell::get_or_init`] will wait on it to complete.
-    pub fn take_and_deinit(mut self) -> (T, InitPermit) {
+    pub fn take_and_deinit(&mut self) -> (T, InitPermit) {
         let mut swapped = Inner::default();
         let sem = swapped.init_semaphore.clone();
         // acquire and forget right away, moving the control over to InitPermit
@@ -543,7 +543,7 @@ mod tests {
         target.set(42, permit);

         let (_answer, permit) = {
-            let guard = target
+            let mut guard = target
                 .get_or_init(|permit| async { Ok::<_, Infallible>((11, permit)) })
                 .await
                 .unwrap();

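The `take_and_deinit` hunk changes the receiver between `mut self` and `&mut self`. The by-value form makes the guard unusable after deinitialization at compile time, rather than leaving a guard whose contents were taken. A toy sketch of that design choice (the types here are stand-ins, not the real `OnceCell` internals):

```rust
// Minimal stand-in for a guard over an initialized value.
struct Guard<T>(Option<T>);

impl<T> Guard<T> {
    // Taking `self` by value moves the guard into the call, so the caller
    // cannot accidentally use it again after the value is taken out.
    fn take_and_deinit(mut self) -> T {
        self.0.take().expect("guard always holds a value")
    }
}

fn main() {
    let guard = Guard(Some(42));
    let value = guard.take_and_deinit();
    // Calling `guard.take_and_deinit()` again would not compile: `guard` was moved.
    assert_eq!(value, 42);
    println!("ok");
}
```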
View File

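The VecMap hunk that follows adds or removes a `VecMapOrdering` knob on an append-only Vec-backed map: `Greater` requires strictly increasing keys, while `GreaterOrEqual` also accepts a key equal to the current maximum. A self-contained sketch of that behavior (simplified; not the real implementation):

```rust
// Append-only Vec-backed map with a configurable ordering constraint.
#[derive(Clone, Copy, PartialEq)]
enum VecMapOrdering {
    Greater,
    GreaterOrEqual,
}

struct VecMap<K, V> {
    data: Vec<(K, V)>,
    ordering: VecMapOrdering,
}

impl<K: Ord, V> VecMap<K, V> {
    fn new(ordering: VecMapOrdering) -> Self {
        VecMap { data: Vec::new(), ordering }
    }

    // Append fails if the key violates the configured ordering constraint.
    fn append(&mut self, key: K, value: V) -> Result<(), &'static str> {
        if let Some((last, _)) = self.data.last() {
            let ok = match self.ordering {
                VecMapOrdering::Greater => key > *last,
                VecMapOrdering::GreaterOrEqual => key >= *last,
            };
            if !ok {
                return Err("key violates ordering constraint");
            }
        }
        self.data.push((key, value));
        Ok(())
    }
}

fn main() {
    let mut strict = VecMap::new(VecMapOrdering::Greater);
    assert!(strict.append(1, "a").is_ok());
    assert!(strict.append(1, "b").is_err()); // equal key rejected

    let mut relaxed = VecMap::new(VecMapOrdering::GreaterOrEqual);
    assert!(relaxed.append(1, "a").is_ok());
    assert!(relaxed.append(1, "b").is_ok()); // equal key allowed
    println!("ok");
}
```

Keeping the data sorted is what makes the `binary_search_by_key` range lookups in the hunk valid.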
@@ -1,60 +1,27 @@
 use std::{alloc::Layout, cmp::Ordering, ops::RangeBounds};
 
-#[derive(Clone, Copy, Debug, PartialEq, Eq)]
-pub enum VecMapOrdering {
-    Greater,
-    GreaterOrEqual,
-}
-
 /// Ordered map datastructure implemented in a Vec.
 /// Append only - can only add keys that are larger than the
 /// current max key.
-/// Ordering can be adjusted using [`VecMapOrdering`]
-/// during `VecMap` construction.
 #[derive(Clone, Debug)]
-pub struct VecMap<K, V> {
-    data: Vec<(K, V)>,
-    ordering: VecMapOrdering,
-}
+pub struct VecMap<K, V>(Vec<(K, V)>);
 
 impl<K, V> Default for VecMap<K, V> {
     fn default() -> Self {
-        VecMap {
-            data: Default::default(),
-            ordering: VecMapOrdering::Greater,
-        }
+        VecMap(Default::default())
     }
 }
 
-#[derive(thiserror::Error, Debug)]
-pub enum VecMapError {
-    #[error("Key violates ordering constraint")]
-    InvalidKey,
-    #[error("Mismatched ordering constraints")]
-    ExtendOrderingError,
-}
+#[derive(Debug)]
+pub struct InvalidKey;
 
 impl<K: Ord, V> VecMap<K, V> {
-    pub fn new(ordering: VecMapOrdering) -> Self {
-        Self {
-            data: Vec::new(),
-            ordering,
-        }
-    }
-
-    pub fn with_capacity(capacity: usize, ordering: VecMapOrdering) -> Self {
-        Self {
-            data: Vec::with_capacity(capacity),
-            ordering,
-        }
-    }
-
     pub fn is_empty(&self) -> bool {
-        self.data.is_empty()
+        self.0.is_empty()
     }
 
     pub fn as_slice(&self) -> &[(K, V)] {
-        self.data.as_slice()
+        self.0.as_slice()
     }
 
     /// This function may panic if given a range where the lower bound is
@@ -62,7 +29,7 @@ impl<K: Ord, V> VecMap<K, V> {
     pub fn slice_range<R: RangeBounds<K>>(&self, range: R) -> &[(K, V)] {
         use std::ops::Bound::*;
 
-        let binary_search = |k: &K| self.data.binary_search_by_key(&k, extract_key);
+        let binary_search = |k: &K| self.0.binary_search_by_key(&k, extract_key);
 
         let start_idx = match range.start_bound() {
             Unbounded => 0,
@@ -74,7 +41,7 @@ impl<K: Ord, V> VecMap<K, V> {
         };
 
         let end_idx = match range.end_bound() {
-            Unbounded => self.data.len(),
+            Unbounded => self.0.len(),
             Included(k) => match binary_search(k) {
                 Ok(idx) => idx + 1,
                 Err(idx) => idx,
@@ -82,30 +49,34 @@ impl<K: Ord, V> VecMap<K, V> {
             Excluded(k) => binary_search(k).unwrap_or_else(std::convert::identity),
         };
 
-        &self.data[start_idx..end_idx]
+        &self.0[start_idx..end_idx]
     }
 
     /// Add a key value pair to the map.
-    /// If `key` is not respective of the `self` ordering the
-    /// pair will not be added and `InvalidKey` error will be returned.
-    pub fn append(&mut self, key: K, value: V) -> Result<usize, VecMapError> {
-        self.validate_key_order(&key)?;
+    /// If `key` is less than or equal to the current maximum key
+    /// the pair will not be added and InvalidKey error will be returned.
+    pub fn append(&mut self, key: K, value: V) -> Result<usize, InvalidKey> {
+        if let Some((last_key, _last_value)) = self.0.last() {
+            if &key <= last_key {
+                return Err(InvalidKey);
+            }
+        }
 
         let delta_size = self.instrument_vec_op(|vec| vec.push((key, value)));
         Ok(delta_size)
     }
 
     /// Update the maximum key value pair or add a new key value pair to the map.
-    /// If `key` is not respective of the `self` ordering no updates or additions
-    /// will occur and `InvalidKey` error will be returned.
+    /// If `key` is less than the current maximum key no updates or additions
+    /// will occur and InvalidKey error will be returned.
     pub fn append_or_update_last(
         &mut self,
         key: K,
         mut value: V,
-    ) -> Result<(Option<V>, usize), VecMapError> {
-        if let Some((last_key, last_value)) = self.data.last_mut() {
+    ) -> Result<(Option<V>, usize), InvalidKey> {
+        if let Some((last_key, last_value)) = self.0.last_mut() {
             match key.cmp(last_key) {
-                Ordering::Less => return Err(VecMapError::InvalidKey),
+                Ordering::Less => return Err(InvalidKey),
                 Ordering::Equal => {
                     std::mem::swap(last_value, &mut value);
                     const DELTA_SIZE: usize = 0;
@@ -129,67 +100,40 @@ impl<K: Ord, V> VecMap<K, V> {
         V: Clone,
     {
         let split_idx = self
-            .data
+            .0
             .binary_search_by_key(&cutoff, extract_key)
             .unwrap_or_else(std::convert::identity);
 
         (
-            VecMap {
-                data: self.data[..split_idx].to_vec(),
-                ordering: self.ordering,
-            },
-            VecMap {
-                data: self.data[split_idx..].to_vec(),
-                ordering: self.ordering,
-            },
+            VecMap(self.0[..split_idx].to_vec()),
+            VecMap(self.0[split_idx..].to_vec()),
         )
     }
 
     /// Move items from `other` to the end of `self`, leaving `other` empty.
-    /// If the `other` ordering is different from `self` ordering
-    /// `ExtendOrderingError` error will be returned.
-    /// If any keys in `other` is not respective of the ordering defined in
-    /// `self`, `InvalidKey` error will be returned and no mutation will occur.
-    pub fn extend(&mut self, other: &mut Self) -> Result<usize, VecMapError> {
-        if self.ordering != other.ordering {
-            return Err(VecMapError::ExtendOrderingError);
-        }
-
-        let other_first_opt = other.data.last().map(extract_key);
-        if let Some(other_first) = other_first_opt {
-            self.validate_key_order(other_first)?;
-        }
-
-        let delta_size = self.instrument_vec_op(|vec| vec.append(&mut other.data));
-        Ok(delta_size)
-    }
-
-    /// Validate the current last key in `self` and key being
-    /// inserted against the order defined in `self`.
-    fn validate_key_order(&self, key: &K) -> Result<(), VecMapError> {
-        if let Some(last_key) = self.data.last().map(extract_key) {
-            match (&self.ordering, &key.cmp(last_key)) {
-                (VecMapOrdering::Greater, Ordering::Less | Ordering::Equal) => {
-                    return Err(VecMapError::InvalidKey);
-                }
-                (VecMapOrdering::Greater, Ordering::Greater) => {}
-                (VecMapOrdering::GreaterOrEqual, Ordering::Less) => {
-                    return Err(VecMapError::InvalidKey);
-                }
-                (VecMapOrdering::GreaterOrEqual, Ordering::Equal | Ordering::Greater) => {}
-            }
-        }
-
-        Ok(())
-    }
+    /// If any keys in `other` is less than or equal to any key in `self`,
+    /// `InvalidKey` error will be returned and no mutation will occur.
+    pub fn extend(&mut self, other: &mut Self) -> Result<usize, InvalidKey> {
+        let self_last_opt = self.0.last().map(extract_key);
+        let other_first_opt = other.0.last().map(extract_key);
+
+        if let (Some(self_last), Some(other_first)) = (self_last_opt, other_first_opt) {
+            if self_last >= other_first {
+                return Err(InvalidKey);
+            }
+        }
+
+        let delta_size = self.instrument_vec_op(|vec| vec.append(&mut other.0));
+        Ok(delta_size)
+    }
 
     /// Instrument an operation on the underlying [`Vec`].
     /// Will panic if the operation decreases capacity.
     /// Returns the increase in memory usage caused by the op.
     fn instrument_vec_op(&mut self, op: impl FnOnce(&mut Vec<(K, V)>)) -> usize {
-        let old_cap = self.data.capacity();
-        op(&mut self.data);
-        let new_cap = self.data.capacity();
+        let old_cap = self.0.capacity();
+        op(&mut self.0);
+        let new_cap = self.0.capacity();
 
         match old_cap.cmp(&new_cap) {
             Ordering::Less => {
@@ -201,36 +145,6 @@ impl<K: Ord, V> VecMap<K, V> {
             Ordering::Greater => panic!("VecMap capacity shouldn't ever decrease"),
         }
     }
-
-    /// Similar to `from_iter` defined in `FromIter` trait except
-    /// that it accepts an [`VecMapOrdering`]
-    pub fn from_iter<I: IntoIterator<Item = (K, V)>>(iter: I, ordering: VecMapOrdering) -> Self {
-        let iter = iter.into_iter();
-        let initial_capacity = {
-            match iter.size_hint() {
-                (lower_bound, None) => lower_bound,
-                (_, Some(upper_bound)) => upper_bound,
-            }
-        };
-
-        let mut vec_map = VecMap::with_capacity(initial_capacity, ordering);
-        for (key, value) in iter {
-            vec_map
-                .append(key, value)
-                .expect("The passed collection needs to be sorted!");
-        }
-
-        vec_map
-    }
-}
-
-impl<K: Ord, V> IntoIterator for VecMap<K, V> {
-    type Item = (K, V);
-    type IntoIter = std::vec::IntoIter<(K, V)>;
-
-    fn into_iter(self) -> Self::IntoIter {
-        self.data.into_iter()
-    }
 }
 
 fn extract_key<K, V>(entry: &(K, V)) -> &K {
@@ -241,7 +155,7 @@ fn extract_key<K, V>(entry: &(K, V)) -> &K {
 mod tests {
     use std::{collections::BTreeMap, ops::Bound};
 
-    use super::{VecMap, VecMapOrdering};
+    use super::VecMap;
 
     #[test]
     fn unbounded_range() {
@@ -396,59 +310,5 @@ mod tests {
         left.extend(&mut one_map).unwrap_err();
         assert_eq!(left.as_slice(), &[(0, ()), (1, ())]);
         assert_eq!(one_map.as_slice(), &[(1, ())]);
-
-        let mut map_greater_or_equal = VecMap::new(VecMapOrdering::GreaterOrEqual);
-        map_greater_or_equal.append(2, ()).unwrap();
-        map_greater_or_equal.append(2, ()).unwrap();
-
-        left.extend(&mut map_greater_or_equal).unwrap_err();
-        assert_eq!(left.as_slice(), &[(0, ()), (1, ())]);
-        assert_eq!(map_greater_or_equal.as_slice(), &[(2, ()), (2, ())]);
-    }
-
-    #[test]
-    fn extend_with_ordering() {
-        let mut left = VecMap::new(VecMapOrdering::GreaterOrEqual);
-        left.append(0, ()).unwrap();
-        assert_eq!(left.as_slice(), &[(0, ())]);
-
-        let mut greater_right = VecMap::new(VecMapOrdering::Greater);
-        greater_right.append(0, ()).unwrap();
-        left.extend(&mut greater_right).unwrap_err();
-        assert_eq!(left.as_slice(), &[(0, ())]);
-
-        let mut greater_or_equal_right = VecMap::new(VecMapOrdering::GreaterOrEqual);
-        greater_or_equal_right.append(2, ()).unwrap();
-        greater_or_equal_right.append(2, ()).unwrap();
-        left.extend(&mut greater_or_equal_right).unwrap();
-        assert_eq!(left.as_slice(), &[(0, ()), (2, ()), (2, ())]);
-    }
-
-    #[test]
-    fn vec_map_from_sorted() {
-        let vec = vec![(1, ()), (2, ()), (3, ()), (6, ())];
-        let vec_map = VecMap::from_iter(vec, VecMapOrdering::Greater);
-        assert_eq!(vec_map.as_slice(), &[(1, ()), (2, ()), (3, ()), (6, ())]);
-
-        let vec = vec![(1, ()), (2, ()), (3, ()), (3, ()), (6, ()), (6, ())];
-        let vec_map = VecMap::from_iter(vec, VecMapOrdering::GreaterOrEqual);
-        assert_eq!(
-            vec_map.as_slice(),
-            &[(1, ()), (2, ()), (3, ()), (3, ()), (6, ()), (6, ())]
-        );
-    }
-
-    #[test]
-    #[should_panic]
-    fn vec_map_from_unsorted_greater() {
-        let vec = vec![(1, ()), (2, ()), (2, ()), (3, ()), (6, ())];
-        let _ = VecMap::from_iter(vec, VecMapOrdering::Greater);
-    }
-
-    #[test]
-    #[should_panic]
-    fn vec_map_from_unsorted_greater_or_equal() {
-        let vec = vec![(1, ()), (2, ()), (3, ()), (6, ()), (5, ())];
-        let _ = VecMap::from_iter(vec, VecMapOrdering::GreaterOrEqual);
     }
 }
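The core invariant of this file in either version is that `append` only accepts keys strictly greater than the current maximum, so the backing `Vec` stays sorted and binary search remains valid. A minimal standalone sketch of that rule (simplified: no capacity instrumentation, plain `Vec`-backed map; not the real `VecMap`):

```rust
// Standalone sketch of the append-only ordering rule enforced by `VecMap::append`.
#[derive(Debug, PartialEq)]
struct InvalidKey;

struct VecMap<K, V>(Vec<(K, V)>);

impl<K: Ord, V> VecMap<K, V> {
    fn new() -> Self {
        VecMap(Vec::new())
    }

    // Reject any key that is not strictly greater than the current maximum,
    // leaving the map unchanged on error.
    fn append(&mut self, key: K, value: V) -> Result<(), InvalidKey> {
        if let Some((last_key, _)) = self.0.last() {
            if &key <= last_key {
                return Err(InvalidKey);
            }
        }
        self.0.push((key, value));
        Ok(())
    }
}

fn main() {
    let mut m = VecMap::new();
    assert!(m.append(1, "a").is_ok());
    assert!(m.append(2, "b").is_ok());
    // Equal or smaller keys are rejected and the map is unchanged.
    assert_eq!(m.append(2, "c"), Err(InvalidKey));
    assert_eq!(m.0.len(), 2);
}
```

The `VecMapOrdering::GreaterOrEqual` variant in the longer version of the file relaxes the `<=` check to `<`, allowing duplicate keys while still keeping the slice sorted.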

View File

@@ -1,78 +0,0 @@
use std::io::SeekFrom;
use anyhow::{Context, Result};
use async_compression::{
tokio::{bufread::ZstdDecoder, write::ZstdEncoder},
zstd::CParameter,
Level,
};
use camino::Utf8Path;
use nix::NixPath;
use tokio::{
fs::{File, OpenOptions},
io::AsyncBufRead,
io::AsyncSeekExt,
io::AsyncWriteExt,
};
use tokio_tar::{Archive, Builder, HeaderMode};
use walkdir::WalkDir;
/// Creates a Zstandard tarball.
pub async fn create_zst_tarball(path: &Utf8Path, tarball: &Utf8Path) -> Result<(File, u64)> {
let file = OpenOptions::new()
.create(true)
.truncate(true)
.read(true)
.write(true)
.open(&tarball)
.await
.with_context(|| format!("tempfile creation {tarball}"))?;
let mut paths = Vec::new();
for entry in WalkDir::new(path) {
let entry = entry?;
let metadata = entry.metadata().expect("error getting dir entry metadata");
// Also allow directories so that we also get empty directories
if !(metadata.is_file() || metadata.is_dir()) {
continue;
}
let path = entry.into_path();
paths.push(path);
}
// Do a sort to get a more consistent listing
paths.sort_unstable();
let zstd = ZstdEncoder::with_quality_and_params(
file,
Level::Default,
&[CParameter::enable_long_distance_matching(true)],
);
let mut builder = Builder::new(zstd);
// Use reproducible header mode
builder.mode(HeaderMode::Deterministic);
for p in paths {
let rel_path = p.strip_prefix(path)?;
if rel_path.is_empty() {
// The top directory should not be compressed,
// the tar crate doesn't like that
continue;
}
builder.append_path_with_name(&p, rel_path).await?;
}
let mut zstd = builder.into_inner().await?;
zstd.shutdown().await?;
let mut compressed = zstd.into_inner();
let compressed_len = compressed.metadata().await?.len();
compressed.seek(SeekFrom::Start(0)).await?;
Ok((compressed, compressed_len))
}
/// Creates a Zstandard tarball.
pub async fn extract_zst_tarball(
path: &Utf8Path,
tarball: impl AsyncBufRead + Unpin,
) -> Result<()> {
let decoder = Box::pin(ZstdDecoder::new(tarball));
let mut archive = Archive::new(decoder);
archive.unpack(path).await?;
Ok(())
}
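Two details in `create_zst_tarball` above make the archive reproducible: the walked paths are sorted before archiving, and each entry is stored relative to the archive root, with the root itself skipped. A standalone sketch of just that path preparation, using `std::path` only and hypothetical in-memory inputs instead of a real filesystem walk:

```rust
use std::path::{Path, PathBuf};

// Sketch of the path preparation done before archiving: sort entries for a
// reproducible listing, make them relative to the archive root, and skip the
// root itself (the tar builder does not accept an empty path).
fn prepare_entries(root: &Path, mut paths: Vec<PathBuf>) -> Vec<PathBuf> {
    paths.sort_unstable();
    paths
        .into_iter()
        .filter_map(|p| {
            let rel = p.strip_prefix(root).ok()?.to_path_buf();
            if rel.as_os_str().is_empty() {
                None // the top directory itself is not archived
            } else {
                Some(rel)
            }
        })
        .collect()
}

fn main() {
    let root = Path::new("/data");
    let paths = vec![
        PathBuf::from("/data/b.txt"),
        PathBuf::from("/data"),
        PathBuf::from("/data/a/x.bin"),
    ];
    let entries = prepare_entries(root, paths);
    assert_eq!(
        entries,
        vec![PathBuf::from("a/x.bin"), PathBuf::from("b.txt")]
    );
}
```

Combined with the deterministic tar header mode, this means two runs over identical directory contents produce byte-identical compressed tarballs.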

View File

@@ -69,7 +69,7 @@ pub struct Config {
     /// should be removed once we have a better solution there.
     sys_buffer_bytes: u64,
 
-    /// Minimum fraction of total system memory reserved *before* the cgroup threshold; in
+    /// Minimum fraction of total system memory reserved *before* the the cgroup threshold; in
     /// other words, providing a ceiling for the highest value of the threshold by enforcing that
     /// there's at least `cgroup_min_overhead_fraction` of the total memory remaining beyond the
     /// threshold.

View File

@@ -324,11 +324,11 @@ extern "C" fn finish_sync_safekeepers(wp: *mut WalProposer, lsn: XLogRecPtr) {
     }
 }
 
-extern "C" fn process_safekeeper_feedback(wp: *mut WalProposer, sk: *mut Safekeeper) {
+extern "C" fn process_safekeeper_feedback(wp: *mut WalProposer) {
     unsafe {
         let callback_data = (*(*wp).config).callback_data;
         let api = callback_data as *mut Box<dyn ApiImpl>;
-        (*api).process_safekeeper_feedback(&mut (*wp), &mut (*sk));
+        (*api).process_safekeeper_feedback(&mut (*wp))
     }
 }
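The hunk above uses a common FFI pattern: a C-compatible function pointer receives an opaque `callback_data` pointer and forwards the call to a Rust trait object. A self-contained sketch of that dispatch, stripped of the `WalProposer` structs (the trait and struct names here are illustrative):

```rust
// Sketch of C-callback-to-trait-object dispatch: the opaque pointer is a
// `*mut Box<dyn ApiImpl>` created by `Box::into_raw`, and the extern "C"
// shim casts it back and invokes the trait method.
trait ApiImpl {
    fn process_safekeeper_feedback(&mut self) -> u32;
}

struct Impl {
    calls: u32,
}

impl ApiImpl for Impl {
    fn process_safekeeper_feedback(&mut self) -> u32 {
        self.calls += 1;
        self.calls
    }
}

extern "C" fn process_safekeeper_feedback(callback_data: *mut std::ffi::c_void) -> u32 {
    // SAFETY: the caller guarantees callback_data came from Box::into_raw
    // below and stays alive for the duration of the call.
    unsafe {
        let api = callback_data as *mut Box<dyn ApiImpl>;
        (*api).process_safekeeper_feedback()
    }
}

fn main() {
    // Double boxing: `Box<dyn ApiImpl>` is a fat pointer, so it is itself
    // boxed to obtain a thin pointer that fits in a `*mut c_void`.
    let boxed: Box<Box<dyn ApiImpl>> = Box::new(Box::new(Impl { calls: 0 }));
    let raw = Box::into_raw(boxed) as *mut std::ffi::c_void;
    assert_eq!(process_safekeeper_feedback(raw), 1);
    assert_eq!(process_safekeeper_feedback(raw), 2);
    // Reclaim ownership so the trait object is dropped.
    unsafe { drop(Box::from_raw(raw as *mut Box<dyn ApiImpl>)) };
}
```

Removing the `sk: *mut Safekeeper` parameter, as the diff does, only changes the shim's signature; the opaque-pointer dispatch itself is untouched.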

View File

@@ -142,7 +142,7 @@ pub trait ApiImpl {
         todo!()
     }
 
-    fn process_safekeeper_feedback(&mut self, _wp: &mut WalProposer, _sk: &mut Safekeeper) {
+    fn process_safekeeper_feedback(&mut self, _wp: &mut WalProposer) {
         todo!()
     }

View File

@@ -59,7 +59,6 @@ signal-hook.workspace = true
 smallvec = { workspace = true, features = ["write"] }
 svg_fmt.workspace = true
 sync_wrapper.workspace = true
-sysinfo.workspace = true
 tokio-tar.workspace = true
 thiserror.workspace = true
 tokio = { workspace = true, features = ["process", "sync", "fs", "rt", "io-util", "time"] }

View File

@@ -1,156 +1,160 @@
-//! Quantify a single walredo manager's throughput under N concurrent callers.
+//! Simple benchmarking around walredo.
 //!
-//! The benchmark implementation ([`bench_impl`]) is parametrized by
-//! - `redo_work` => [`Request::short_request`] or [`Request::medium_request`]
-//! - `n_redos` => number of times the benchmark shell execute the `redo_work`
-//! - `nclients` => number of clients (more on this shortly).
+//! Right now they hope to just set a baseline. Later we can try to expand into latency and
+//! throughput after figuring out the coordinated omission problems below.
 //!
-//! The benchmark impl sets up a multi-threaded tokio runtime with default parameters.
-//! It spawns `nclients` times [`client`] tokio tasks.
-//! Each task executes the `redo_work` `n_redos/nclients` times.
+//! There are two sets of inputs; `short` and `medium`. They were collected on postgres v14 by
+//! logging what happens when a sequential scan is requested on a small table, then picking out two
+//! suitable from logs.
 //!
-//! We exercise the following combinations:
-//! - `redo_work = short / medium`
-//! - `nclients = [1, 2, 4, 8, 16, 32, 64, 128]`
 //!
-//! We let `criterion` determine the `n_redos` using `iter_custom`.
-//! The idea is that for each `(redo_work, nclients)` combination,
-//! criterion will run the `bench_impl` multiple times with different `n_redos`.
-//! The `bench_impl` reports the aggregate wall clock time from the clients' perspective.
-//! Criterion will divide that by `n_redos` to compute the "time per iteration".
-//! In our case, "time per iteration" means "time per redo_work execution".
-//!
-//! NB: the way by which `iter_custom` determines the "number of iterations"
-//! is called sampling. Apparently the idea here is to detect outliers.
-//! We're not sure whether the current choice of sampling method makes sense.
-//! See https://bheisler.github.io/criterion.rs/book/user_guide/command_line_output.html#collecting-samples
-//!
-//! # Reference Numbers
-//!
-//! 2024-03-20 on i3en.3xlarge
-//!
-//! ```text
-//! short/1    time: [26.483 µs 26.614 µs 26.767 µs]
-//! short/2    time: [32.223 µs 32.465 µs 32.767 µs]
-//! short/4    time: [47.203 µs 47.583 µs 47.984 µs]
-//! short/8    time: [89.135 µs 89.612 µs 90.139 µs]
-//! short/16   time: [190.12 µs 191.52 µs 192.88 µs]
-//! short/32   time: [380.96 µs 382.63 µs 384.20 µs]
-//! short/64   time: [736.86 µs 741.07 µs 745.03 µs]
-//! short/128  time: [1.4106 ms 1.4206 ms 1.4294 ms]
-//! medium/1   time: [111.81 µs 112.25 µs 112.79 µs]
-//! medium/2   time: [158.26 µs 159.13 µs 160.21 µs]
-//! medium/4   time: [334.65 µs 337.14 µs 340.07 µs]
-//! medium/8   time: [675.32 µs 679.91 µs 685.25 µs]
-//! medium/16  time: [1.2929 ms 1.2996 ms 1.3067 ms]
-//! medium/32  time: [2.4295 ms 2.4461 ms 2.4623 ms]
-//! medium/64  time: [4.3973 ms 4.4458 ms 4.4875 ms]
-//! medium/128 time: [7.5955 ms 7.7847 ms 7.9481 ms]
-//! ```
+//! Reference data (git blame to see commit) on an i3en.3xlarge
+//! ```text
+//! short/short/1    time: [39.175 µs 39.348 µs 39.536 µs]
+//! short/short/2    time: [51.227 µs 51.487 µs 51.755 µs]
+//! short/short/4    time: [76.048 µs 76.362 µs 76.674 µs]
+//! short/short/8    time: [128.94 µs 129.82 µs 130.74 µs]
+//! short/short/16   time: [227.84 µs 229.00 µs 230.28 µs]
+//! short/short/32   time: [455.97 µs 457.81 µs 459.90 µs]
+//! short/short/64   time: [902.46 µs 904.84 µs 907.32 µs]
+//! short/short/128  time: [1.7416 ms 1.7487 ms 1.7561 ms]
+//! ```
+
+use std::sync::Arc;
 
 use bytes::{Buf, Bytes};
-use criterion::{BenchmarkId, Criterion};
-use pageserver::{config::PageServerConf, walrecord::NeonWalRecord, walredo::PostgresRedoManager};
-use pageserver_api::{key::Key, shard::TenantShardId};
-use std::{
-    sync::Arc,
-    time::{Duration, Instant},
-};
-use tokio::{sync::Barrier, task::JoinSet};
+use pageserver::{
+    config::PageServerConf, repository::Key, walrecord::NeonWalRecord, walredo::PostgresRedoManager,
+};
+use pageserver_api::shard::TenantShardId;
+use tokio::task::JoinSet;
 use utils::{id::TenantId, lsn::Lsn};
-fn bench(c: &mut Criterion) {
-    {
-        let nclients = [1, 2, 4, 8, 16, 32, 64, 128];
-        for nclients in nclients {
-            let mut group = c.benchmark_group("short");
-            group.bench_with_input(
-                BenchmarkId::from_parameter(nclients),
-                &nclients,
-                |b, nclients| {
-                    let redo_work = Arc::new(Request::short_input());
-                    b.iter_custom(|iters| bench_impl(Arc::clone(&redo_work), iters, *nclients));
-                },
-            );
-        }
-    }
-    {
-        let nclients = [1, 2, 4, 8, 16, 32, 64, 128];
-        for nclients in nclients {
-            let mut group = c.benchmark_group("medium");
-            group.bench_with_input(
-                BenchmarkId::from_parameter(nclients),
-                &nclients,
-                |b, nclients| {
-                    let redo_work = Arc::new(Request::medium_input());
-                    b.iter_custom(|iters| bench_impl(Arc::clone(&redo_work), iters, *nclients));
-                },
-            );
-        }
-    }
-}
-
-criterion::criterion_group!(benches, bench);
-criterion::criterion_main!(benches);
-
-// Returns the sum of each client's wall-clock time spent executing their share of the n_redos.
-fn bench_impl(redo_work: Arc<Request>, n_redos: u64, nclients: u64) -> Duration {
+use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
+
+fn redo_scenarios(c: &mut Criterion) {
+    // logging should be enabled when adding more inputs, since walredo will only report malformed
+    // input to the stderr.
+    // utils::logging::init(utils::logging::LogFormat::Plain).unwrap();
     let repo_dir = camino_tempfile::tempdir_in(env!("CARGO_TARGET_TMPDIR")).unwrap();
     let conf = PageServerConf::dummy_conf(repo_dir.path().to_path_buf());
     let conf = Box::leak(Box::new(conf));
     let tenant_shard_id = TenantShardId::unsharded(TenantId::generate());
+    let manager = PostgresRedoManager::new(conf, tenant_shard_id);
+    let manager = Arc::new(manager);
+
+    {
+        let rt = tokio::runtime::Builder::new_current_thread()
+            .enable_all()
+            .build()
+            .unwrap();
+        tracing::info!("executing first");
+        rt.block_on(short().execute(&manager)).unwrap();
+        tracing::info!("first executed");
+    }
+
+    let thread_counts = [1, 2, 4, 8, 16, 32, 64, 128];
+
+    let mut group = c.benchmark_group("short");
+    group.sampling_mode(criterion::SamplingMode::Flat);
+    for thread_count in thread_counts {
+        group.bench_with_input(
+            BenchmarkId::new("short", thread_count),
+            &thread_count,
+            |b, thread_count| {
+                add_multithreaded_walredo_requesters(b, *thread_count, &manager, short);
+            },
+        );
+    }
+    drop(group);
+
+    let mut group = c.benchmark_group("medium");
+    group.sampling_mode(criterion::SamplingMode::Flat);
+    for thread_count in thread_counts {
+        group.bench_with_input(
+            BenchmarkId::new("medium", thread_count),
+            &thread_count,
+            |b, thread_count| {
+                add_multithreaded_walredo_requesters(b, *thread_count, &manager, medium);
+            },
+        );
+    }
+    drop(group);
+}
+
+/// Sets up a multi-threaded tokio runtime with default worker thread count,
+/// then, spawn `requesters` tasks that repeatedly:
+/// - get input from `input_factor()`
+/// - call `manager.request_redo()` with their input
+///
+/// This stress-tests the scalability of a single walredo manager at high tokio-level concurrency.
+///
+/// Using tokio's default worker thread count means the results will differ on machines
+/// with different core countrs. We don't care about that, the performance will always
+/// be different on different hardware. To compare performance of different software versions,
+/// use the same hardware.
+fn add_multithreaded_walredo_requesters(
+    b: &mut criterion::Bencher,
+    nrequesters: usize,
+    manager: &Arc<PostgresRedoManager>,
+    input_factory: fn() -> Request,
+) {
+    assert_ne!(nrequesters, 0);
+
     let rt = tokio::runtime::Builder::new_multi_thread()
         .enable_all()
         .build()
         .unwrap();
 
-    let start = Arc::new(Barrier::new(nclients as usize));
-    let mut tasks = JoinSet::new();
-
-    let manager = PostgresRedoManager::new(conf, tenant_shard_id);
-    let manager = Arc::new(manager);
-
-    for _ in 0..nclients {
-        rt.block_on(async {
-            tasks.spawn(client(
-                Arc::clone(&manager),
-                Arc::clone(&start),
-                Arc::clone(&redo_work),
-                // divide the amount of work equally among the clients
-                n_redos / nclients,
-            ))
+    let barrier = Arc::new(tokio::sync::Barrier::new(nrequesters + 1));
+
+    let mut requesters = JoinSet::new();
+    for _ in 0..nrequesters {
+        let _entered = rt.enter();
+        let manager = manager.clone();
+        let barrier = barrier.clone();
+        requesters.spawn(async move {
+            loop {
+                let input = input_factory();
+                barrier.wait().await;
+                let page = input.execute(&manager).await.unwrap();
+                assert_eq!(page.remaining(), 8192);
+                barrier.wait().await;
+            }
         });
     }
-    rt.block_on(async move {
-        let mut total_wallclock_time = std::time::Duration::from_millis(0);
-        while let Some(res) = tasks.join_next().await {
-            total_wallclock_time += res.unwrap();
-        }
-        total_wallclock_time
-    })
+    let do_one_iteration = || {
+        rt.block_on(async {
+            barrier.wait().await;
+            // wait for work to complete
+            barrier.wait().await;
+        })
+    };
+
+    b.iter_batched(
+        || {
+            // warmup
+            do_one_iteration();
+        },
+        |()| {
+            // work loop
+            do_one_iteration();
+        },
+        criterion::BatchSize::PerIteration,
+    );
+
+    rt.block_on(requesters.shutdown());
 }
 
-async fn client(
-    mgr: Arc<PostgresRedoManager>,
-    start: Arc<Barrier>,
-    redo_work: Arc<Request>,
-    n_redos: u64,
-) -> Duration {
-    start.wait().await;
-    let start = Instant::now();
-    for _ in 0..n_redos {
-        let page = redo_work.execute(&mgr).await.unwrap();
-        assert_eq!(page.remaining(), 8192);
-        // The real pageserver will rarely if ever do 2 walredos in a row without
-        // yielding to the executor.
-        tokio::task::yield_now().await;
-    }
-    start.elapsed()
-}
+criterion_group!(benches, redo_scenarios);
+criterion_main!(benches);
 
 macro_rules! lsn {
     ($input:expr) => {{
@@ -162,46 +166,12 @@ macro_rules! lsn {
     }};
 }
 
-/// Simple wrapper around `WalRedoManager::request_redo`.
-///
-/// In benchmarks this is cloned around.
-#[derive(Clone)]
-struct Request {
-    key: Key,
-    lsn: Lsn,
-    base_img: Option<(Lsn, Bytes)>,
-    records: Vec<(Lsn, NeonWalRecord)>,
-    pg_version: u32,
-}
-
-impl Request {
-    async fn execute(&self, manager: &PostgresRedoManager) -> anyhow::Result<Bytes> {
-        let Request {
-            key,
-            lsn,
-            base_img,
-            records,
-            pg_version,
-        } = self;
-
-        // TODO: avoid these clones
-        manager
-            .request_redo(*key, *lsn, base_img.clone(), records.clone(), *pg_version)
-            .await
-    }
-
-    fn pg_record(will_init: bool, bytes: &'static [u8]) -> NeonWalRecord {
-        let rec = Bytes::from_static(bytes);
-        NeonWalRecord::Postgres { will_init, rec }
-    }
-
-    /// Short payload, 1132 bytes.
-    // pg_records are copypasted from log, where they are put with Debug impl of Bytes, which uses \0
-    // for null bytes.
-    #[allow(clippy::octal_escapes)]
-    pub fn short_input() -> Request {
-        let pg_record = Self::pg_record;
-        Request {
+/// Short payload, 1132 bytes.
+// pg_records are copypasted from log, where they are put with Debug impl of Bytes, which uses \0
+// for null bytes.
+#[allow(clippy::octal_escapes)]
+fn short() -> Request {
+    Request {
         key: Key {
             field1: 0,
             field2: 1663,
@@ -224,14 +194,13 @@ impl Request {
         ],
         pg_version: 14,
     }
-    }
 
     /// Medium sized payload, serializes as 26393 bytes.
     // see [`short`]
     #[allow(clippy::octal_escapes)]
-    pub fn medium_input() -> Request {
-        let pg_record = Self::pg_record;
-        Request {
+fn medium() -> Request {
+    Request {
         key: Key {
             field1: 0,
             field2: 1663,
@@ -473,5 +442,37 @@ impl Request {
         ],
         pg_version: 14,
     }
-    }
+}
+
+fn pg_record(will_init: bool, bytes: &'static [u8]) -> NeonWalRecord {
+    let rec = Bytes::from_static(bytes);
+    NeonWalRecord::Postgres { will_init, rec }
+}
+
+/// Simple wrapper around `WalRedoManager::request_redo`.
+///
+/// In benchmarks this is cloned around.
+#[derive(Clone)]
+struct Request {
+    key: Key,
+    lsn: Lsn,
+    base_img: Option<(Lsn, Bytes)>,
+    records: Vec<(Lsn, NeonWalRecord)>,
+    pg_version: u32,
+}
+
+impl Request {
+    async fn execute(self, manager: &PostgresRedoManager) -> anyhow::Result<Bytes> {
+        let Request {
+            key,
+            lsn,
+            base_img,
+            records,
+            pg_version,
+        } = self;
+
+        manager
+            .request_redo(key, lsn, base_img, records, pg_version)
+            .await
+    }
 }
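The coordination scheme in this benchmark, in both versions, is a barrier rendezvous: a driver and N workers meet at a barrier, each worker performs one unit of redo work, and everyone meets again so the driver can bound the measured interval. A self-contained sketch of the same scheme with OS threads and `std::sync::Barrier` instead of tokio tasks (the work itself is a stand-in for `request_redo`):

```rust
use std::sync::{Arc, Barrier};
use std::thread;
use std::time::{Duration, Instant};

// Sketch of barrier-coordinated measurement: N workers plus one driver
// rendezvous before and after the work, so the driver's clock covers exactly
// one unit of work per worker and nothing else.
fn run_iteration(nworkers: usize) -> Duration {
    let start_barrier = Arc::new(Barrier::new(nworkers + 1));
    let end_barrier = Arc::new(Barrier::new(nworkers + 1));

    let mut handles = Vec::new();
    for _ in 0..nworkers {
        let start = Arc::clone(&start_barrier);
        let end = Arc::clone(&end_barrier);
        handles.push(thread::spawn(move || {
            start.wait();
            // stand-in for one redo request producing an 8 KiB page
            let page = vec![0u8; 8192];
            assert_eq!(page.len(), 8192);
            end.wait();
        }));
    }

    start_barrier.wait(); // release all workers at once
    let t0 = Instant::now();
    end_barrier.wait(); // wait for every worker to finish its unit
    let elapsed = t0.elapsed();

    for h in handles {
        h.join().unwrap();
    }
    elapsed
}

fn main() {
    let elapsed = run_iteration(4);
    println!("one iteration with 4 workers took {elapsed:?}");
}
```

Releasing all workers at a single barrier is what makes the worker counts comparable: without it, slow spawns would let early workers finish before contention ever materializes.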

View File

@@ -600,37 +600,32 @@ fn start_pageserver(
             None,
             "consumption metrics collection",
             true,
-            {
-                let tenant_manager = tenant_manager.clone();
-                async move {
-                    // first wait until background jobs are cleared to launch.
-                    //
-                    // this is because we only process active tenants and timelines, and the
-                    // Timeline::get_current_logical_size will spawn the logical size calculation,
-                    // which will not be rate-limited.
-                    let cancel = task_mgr::shutdown_token();
-
-                    tokio::select! {
-                        _ = cancel.cancelled() => { return Ok(()); },
-                        _ = background_jobs_barrier.wait() => {}
-                    };
-
-                    pageserver::consumption_metrics::collect_metrics(
-                        tenant_manager,
-                        metric_collection_endpoint,
-                        &conf.metric_collection_bucket,
-                        conf.metric_collection_interval,
-                        conf.cached_metric_collection_interval,
-                        conf.synthetic_size_calculation_interval,
-                        conf.id,
-                        local_disk_storage,
-                        cancel,
-                        metrics_ctx,
-                    )
-                    .instrument(info_span!("metrics_collection"))
-                    .await?;
-                    Ok(())
-                }
+            async move {
+                // first wait until background jobs are cleared to launch.
+                //
+                // this is because we only process active tenants and timelines, and the
+                // Timeline::get_current_logical_size will spawn the logical size calculation,
+                // which will not be rate-limited.
+                let cancel = task_mgr::shutdown_token();
+
+                tokio::select! {
+                    _ = cancel.cancelled() => { return Ok(()); },
+                    _ = background_jobs_barrier.wait() => {}
+                };
+
+                pageserver::consumption_metrics::collect_metrics(
+                    metric_collection_endpoint,
+                    conf.metric_collection_interval,
+                    conf.cached_metric_collection_interval,
+                    conf.synthetic_size_calculation_interval,
+                    conf.id,
+                    local_disk_storage,
+                    cancel,
+                    metrics_ctx,
+                )
+                .instrument(info_span!("metrics_collection"))
+                .await?;
+                Ok(())
             },
         );
     }

View File

@@ -95,8 +95,6 @@ pub mod defaults {
     pub const DEFAULT_VALIDATE_VECTORED_GET: bool = true;
 
-    pub const DEFAULT_EPHEMERAL_BYTES_PER_MEMORY_KB: usize = 0;
-
     ///
     /// Default built-in configuration file.
     ///
@@ -158,8 +156,6 @@ pub mod defaults {
 #heatmap_upload_concurrency = {DEFAULT_HEATMAP_UPLOAD_CONCURRENCY}
 #secondary_download_concurrency = {DEFAULT_SECONDARY_DOWNLOAD_CONCURRENCY}
 
-#ephemeral_bytes_per_memory_kb = {DEFAULT_EPHEMERAL_BYTES_PER_MEMORY_KB}
-
 [remote_storage]
 
 "#
@@ -238,7 +234,6 @@ pub struct PageServerConf {
     // How often to send unchanged cached metrics to the metrics endpoint.
     pub cached_metric_collection_interval: Duration,
     pub metric_collection_endpoint: Option<Url>,
-    pub metric_collection_bucket: Option<RemoteStorageConfig>,
     pub synthetic_size_calculation_interval: Duration,
 
     pub disk_usage_based_eviction: Option<DiskUsageEvictionTaskConfig>,
@@ -283,13 +278,6 @@ pub struct PageServerConf {
     pub max_vectored_read_bytes: MaxVectoredReadBytes,
 
     pub validate_vectored_get: bool,
-
-    /// How many bytes of ephemeral layer content will we allow per kilobyte of RAM.  When this
-    /// is exceeded, we start proactively closing ephemeral layers to limit the total amount
-    /// of ephemeral data.
-    ///
-    /// Setting this to zero disables limits on total ephemeral layer size.
-    pub ephemeral_bytes_per_memory_kb: usize,
 }
 
 /// We do not want to store this in a PageServerConf because the latter may be logged
@@ -385,7 +373,6 @@ struct PageServerConfigBuilder {
     cached_metric_collection_interval: BuilderValue<Duration>,
     metric_collection_endpoint: BuilderValue<Option<Url>>,
     synthetic_size_calculation_interval: BuilderValue<Duration>,
-    metric_collection_bucket: BuilderValue<Option<RemoteStorageConfig>>,
 
     disk_usage_based_eviction: BuilderValue<Option<DiskUsageEvictionTaskConfig>>,
 
@@ -411,8 +398,6 @@ struct PageServerConfigBuilder {
     max_vectored_read_bytes: BuilderValue<MaxVectoredReadBytes>,
 
     validate_vectored_get: BuilderValue<bool>,
-
-    ephemeral_bytes_per_memory_kb: BuilderValue<usize>,
 }
 
 impl PageServerConfigBuilder {
@@ -470,8 +455,6 @@ impl PageServerConfigBuilder {
                 .expect("cannot parse default synthetic size calculation interval")),
             metric_collection_endpoint: Set(DEFAULT_METRIC_COLLECTION_ENDPOINT),
-
-            metric_collection_bucket: Set(None),
 
             disk_usage_based_eviction: Set(None),
 
             test_remote_failures: Set(0),
@@ -499,7 +482,6 @@ impl PageServerConfigBuilder {
                 NonZeroUsize::new(DEFAULT_MAX_VECTORED_READ_BYTES).unwrap(),
             )),
             validate_vectored_get: Set(DEFAULT_VALIDATE_VECTORED_GET),
-            ephemeral_bytes_per_memory_kb: Set(DEFAULT_EPHEMERAL_BYTES_PER_MEMORY_KB),
         }
     }
 }
@@ -604,13 +586,6 @@ impl PageServerConfigBuilder {
         self.metric_collection_endpoint = BuilderValue::Set(metric_collection_endpoint)
     }
 
-    pub fn metric_collection_bucket(
-        &mut self,
-        metric_collection_bucket: Option<RemoteStorageConfig>,
-    ) {
-        self.metric_collection_bucket = BuilderValue::Set(metric_collection_bucket)
-    }
-
     pub fn synthetic_size_calculation_interval(
         &mut self,
         synthetic_size_calculation_interval: Duration,
@@ -679,10 +654,6 @@ impl PageServerConfigBuilder {
         self.validate_vectored_get = BuilderValue::Set(value);
     }
 
-    pub fn get_ephemeral_bytes_per_memory_kb(&mut self, value: usize) {
self.ephemeral_bytes_per_memory_kb = BuilderValue::Set(value);
}
pub fn build(self) -> anyhow::Result<PageServerConf> { pub fn build(self) -> anyhow::Result<PageServerConf> {
let default = Self::default_values(); let default = Self::default_values();
@@ -723,7 +694,6 @@ impl PageServerConfigBuilder {
metric_collection_interval, metric_collection_interval,
cached_metric_collection_interval, cached_metric_collection_interval,
metric_collection_endpoint, metric_collection_endpoint,
metric_collection_bucket,
synthetic_size_calculation_interval, synthetic_size_calculation_interval,
disk_usage_based_eviction, disk_usage_based_eviction,
test_remote_failures, test_remote_failures,
@@ -738,7 +708,6 @@ impl PageServerConfigBuilder {
get_vectored_impl, get_vectored_impl,
max_vectored_read_bytes, max_vectored_read_bytes,
validate_vectored_get, validate_vectored_get,
ephemeral_bytes_per_memory_kb,
} }
CUSTOM LOGIC CUSTOM LOGIC
{ {
@@ -973,9 +942,6 @@ impl PageServerConf {
let endpoint = parse_toml_string(key, item)?.parse().context("failed to parse metric_collection_endpoint")?; let endpoint = parse_toml_string(key, item)?.parse().context("failed to parse metric_collection_endpoint")?;
builder.metric_collection_endpoint(Some(endpoint)); builder.metric_collection_endpoint(Some(endpoint));
}, },
"metric_collection_bucket" => {
builder.metric_collection_bucket(RemoteStorageConfig::from_toml(item)?)
}
"synthetic_size_calculation_interval" => "synthetic_size_calculation_interval" =>
builder.synthetic_size_calculation_interval(parse_toml_duration(key, item)?), builder.synthetic_size_calculation_interval(parse_toml_duration(key, item)?),
"test_remote_failures" => builder.test_remote_failures(parse_toml_u64(key, item)?), "test_remote_failures" => builder.test_remote_failures(parse_toml_u64(key, item)?),
@@ -1029,9 +995,6 @@ impl PageServerConf {
"validate_vectored_get" => { "validate_vectored_get" => {
builder.get_validate_vectored_get(parse_toml_bool("validate_vectored_get", item)?) builder.get_validate_vectored_get(parse_toml_bool("validate_vectored_get", item)?)
} }
"ephemeral_bytes_per_memory_kb" => {
builder.get_ephemeral_bytes_per_memory_kb(parse_toml_u64("ephemeral_bytes_per_memory_kb", item)? as usize)
}
_ => bail!("unrecognized pageserver option '{key}'"), _ => bail!("unrecognized pageserver option '{key}'"),
} }
} }
@@ -1094,7 +1057,6 @@ impl PageServerConf {
metric_collection_interval: Duration::from_secs(60), metric_collection_interval: Duration::from_secs(60),
cached_metric_collection_interval: Duration::from_secs(60 * 60), cached_metric_collection_interval: Duration::from_secs(60 * 60),
metric_collection_endpoint: defaults::DEFAULT_METRIC_COLLECTION_ENDPOINT, metric_collection_endpoint: defaults::DEFAULT_METRIC_COLLECTION_ENDPOINT,
metric_collection_bucket: None,
synthetic_size_calculation_interval: Duration::from_secs(60), synthetic_size_calculation_interval: Duration::from_secs(60),
disk_usage_based_eviction: None, disk_usage_based_eviction: None,
test_remote_failures: 0, test_remote_failures: 0,
@@ -1113,7 +1075,6 @@ impl PageServerConf {
.expect("Invalid default constant"), .expect("Invalid default constant"),
), ),
validate_vectored_get: defaults::DEFAULT_VALIDATE_VECTORED_GET, validate_vectored_get: defaults::DEFAULT_VALIDATE_VECTORED_GET,
ephemeral_bytes_per_memory_kb: defaults::DEFAULT_EPHEMERAL_BYTES_PER_MEMORY_KB,
} }
} }
} }
@@ -1328,7 +1289,6 @@ background_task_maximum_delay = '334 s'
defaults::DEFAULT_CACHED_METRIC_COLLECTION_INTERVAL defaults::DEFAULT_CACHED_METRIC_COLLECTION_INTERVAL
)?, )?,
metric_collection_endpoint: defaults::DEFAULT_METRIC_COLLECTION_ENDPOINT, metric_collection_endpoint: defaults::DEFAULT_METRIC_COLLECTION_ENDPOINT,
metric_collection_bucket: None,
synthetic_size_calculation_interval: humantime::parse_duration( synthetic_size_calculation_interval: humantime::parse_duration(
defaults::DEFAULT_SYNTHETIC_SIZE_CALCULATION_INTERVAL defaults::DEFAULT_SYNTHETIC_SIZE_CALCULATION_INTERVAL
)?, )?,
@@ -1351,7 +1311,6 @@ background_task_maximum_delay = '334 s'
.expect("Invalid default constant") .expect("Invalid default constant")
), ),
validate_vectored_get: defaults::DEFAULT_VALIDATE_VECTORED_GET, validate_vectored_get: defaults::DEFAULT_VALIDATE_VECTORED_GET,
ephemeral_bytes_per_memory_kb: defaults::DEFAULT_EPHEMERAL_BYTES_PER_MEMORY_KB
}, },
"Correct defaults should be used when no config values are provided" "Correct defaults should be used when no config values are provided"
); );
@@ -1404,7 +1363,6 @@ background_task_maximum_delay = '334 s'
metric_collection_interval: Duration::from_secs(222), metric_collection_interval: Duration::from_secs(222),
cached_metric_collection_interval: Duration::from_secs(22200), cached_metric_collection_interval: Duration::from_secs(22200),
metric_collection_endpoint: Some(Url::parse("http://localhost:80/metrics")?), metric_collection_endpoint: Some(Url::parse("http://localhost:80/metrics")?),
metric_collection_bucket: None,
synthetic_size_calculation_interval: Duration::from_secs(333), synthetic_size_calculation_interval: Duration::from_secs(333),
disk_usage_based_eviction: None, disk_usage_based_eviction: None,
test_remote_failures: 0, test_remote_failures: 0,
@@ -1423,7 +1381,6 @@ background_task_maximum_delay = '334 s'
.expect("Invalid default constant") .expect("Invalid default constant")
), ),
validate_vectored_get: defaults::DEFAULT_VALIDATE_VECTORED_GET, validate_vectored_get: defaults::DEFAULT_VALIDATE_VECTORED_GET,
ephemeral_bytes_per_memory_kb: defaults::DEFAULT_EPHEMERAL_BYTES_PER_MEMORY_KB
}, },
"Should be able to parse all basic config values correctly" "Should be able to parse all basic config values correctly"
); );

View File
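The config diff above revolves around the `BuilderValue` pattern: every builder field starts out unset, an explicit setter overrides it, and `build()` falls back to a default constant. A minimal std-only sketch of that pattern (the type and field names here are illustrative stand-ins, not the pageserver's actual definitions):

```rust
// Minimal sketch of the BuilderValue pattern from PageServerConfigBuilder:
// an explicit `Set` wins over the default applied at build time.

#[derive(Clone, Debug, PartialEq)]
enum BuilderValue<T> {
    Set(T),
    NotSet,
}

impl<T> BuilderValue<T> {
    // Prefer the explicitly set value, falling back to the given default.
    fn or_default(self, default: T) -> T {
        match self {
            BuilderValue::Set(v) => v,
            BuilderValue::NotSet => default,
        }
    }
}

struct ConfigBuilder {
    metric_collection_interval_secs: BuilderValue<u64>,
}

impl ConfigBuilder {
    fn new() -> Self {
        Self {
            metric_collection_interval_secs: BuilderValue::NotSet,
        }
    }

    fn metric_collection_interval_secs(&mut self, v: u64) {
        self.metric_collection_interval_secs = BuilderValue::Set(v);
    }

    fn build(self) -> u64 {
        // 60s default, mirroring the defaults module in the diff.
        self.metric_collection_interval_secs.or_default(60)
    }
}

fn main() {
    let defaulted = ConfigBuilder::new().build();
    assert_eq!(defaulted, 60);

    let mut b = ConfigBuilder::new();
    b.metric_collection_interval_secs(222);
    assert_eq!(b.build(), 222);
    println!("default={defaulted}");
}
```

This shape is why removing an option (as the diff does for `metric_collection_bucket` and `ephemeral_bytes_per_memory_kb`) touches the struct, the builder field, the default, the setter, the `build()` destructuring, and the TOML match arm all at once.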

@@ -3,13 +3,10 @@
 use crate::context::{DownloadBehavior, RequestContext};
 use crate::task_mgr::{self, TaskKind, BACKGROUND_RUNTIME};
 use crate::tenant::tasks::BackgroundLoopKind;
-use crate::tenant::{
-    mgr::TenantManager, LogicalSizeCalculationCause, PageReconstructError, Tenant,
-};
+use crate::tenant::{mgr, LogicalSizeCalculationCause, PageReconstructError, Tenant};
 use camino::Utf8PathBuf;
 use consumption_metrics::EventType;
 use pageserver_api::models::TenantState;
-use remote_storage::{GenericRemoteStorage, RemoteStorageConfig};
 use reqwest::Url;
 use std::collections::HashMap;
 use std::sync::Arc;
@@ -43,9 +40,7 @@ type Cache = HashMap<MetricsKey, (EventType, u64)>;
 /// Main thread that serves metrics collection
 #[allow(clippy::too_many_arguments)]
 pub async fn collect_metrics(
-    tenant_manager: Arc<TenantManager>,
     metric_collection_endpoint: &Url,
-    metric_collection_bucket: &Option<RemoteStorageConfig>,
     metric_collection_interval: Duration,
     _cached_metric_collection_interval: Duration,
     synthetic_size_calculation_interval: Duration,
@@ -70,19 +65,15 @@ pub async fn collect_metrics(
 None,
 "synthetic size calculation",
 false,
-{
-    let tenant_manager = tenant_manager.clone();
-    async move {
-        calculate_synthetic_size_worker(
-            tenant_manager,
-            synthetic_size_calculation_interval,
-            &cancel,
-            &worker_ctx,
-        )
-        .instrument(info_span!("synthetic_size_worker"))
-        .await?;
-        Ok(())
-    }
-},
+async move {
+    calculate_synthetic_size_worker(
+        synthetic_size_calculation_interval,
+        &cancel,
+        &worker_ctx,
+    )
+    .instrument(info_span!("synthetic_size_worker"))
+    .await?;
+    Ok(())
+},
 );
@@ -103,27 +94,13 @@ pub async fn collect_metrics(
 .build()
 .expect("Failed to create http client with timeout");
-let bucket_client = if let Some(bucket_config) = metric_collection_bucket {
-    match GenericRemoteStorage::from_config(bucket_config) {
-        Ok(client) => Some(client),
-        Err(e) => {
-            // Non-fatal error: if we were given an invalid config, we will proceed
-            // with sending metrics over the network, but not to S3.
-            tracing::warn!("Invalid configuration for metric_collection_bucket: {e}");
-            None
-        }
-    }
-} else {
-    None
-};
 let node_id = node_id.to_string();
 loop {
 let started_at = Instant::now();
 // these are point in time, with variable "now"
-let metrics = metrics::collect_all_metrics(&tenant_manager, &cached_metrics, &ctx).await;
+let metrics = metrics::collect_all_metrics(&cached_metrics, &ctx).await;
 let metrics = Arc::new(metrics);
@@ -141,18 +118,10 @@ pub async fn collect_metrics(
 tracing::error!("failed to persist metrics to {path:?}: {e:#}");
 }
 }
-if let Some(bucket_client) = &bucket_client {
-    let res =
-        upload::upload_metrics_bucket(bucket_client, &cancel, &node_id, &metrics).await;
-    if let Err(e) = res {
-        tracing::error!("failed to upload to S3: {e:#}");
-    }
-}
 };
 let upload = async {
-let res = upload::upload_metrics_http(
+let res = upload::upload_metrics(
 &client,
 metric_collection_endpoint,
 &cancel,
@@ -163,7 +132,7 @@ pub async fn collect_metrics(
 .await;
 if let Err(e) = res {
 // serialization error which should never happen
-tracing::error!("failed to upload via HTTP due to {e:#}");
+tracing::error!("failed to upload due to {e:#}");
 }
 };
@@ -278,7 +247,6 @@ async fn reschedule(
 /// Caclculate synthetic size for each active tenant
 async fn calculate_synthetic_size_worker(
-    tenant_manager: Arc<TenantManager>,
     synthetic_size_calculation_interval: Duration,
     cancel: &CancellationToken,
     ctx: &RequestContext,
@@ -291,7 +259,7 @@ async fn calculate_synthetic_size_worker(
 loop {
 let started_at = Instant::now();
-let tenants = match tenant_manager.list_tenants() {
+let tenants = match mgr::list_tenants().await {
 Ok(tenants) => tenants,
 Err(e) => {
 warn!("cannot get tenant list: {e:#}");
@@ -310,14 +278,10 @@ async fn calculate_synthetic_size_worker(
 continue;
 }
-let Ok(tenant) = tenant_manager.get_attached_tenant_shard(tenant_shard_id) else {
+let Ok(tenant) = mgr::get_tenant(tenant_shard_id, true) else {
 continue;
 };
-if !tenant.is_active() {
-    continue;
-}
 // there is never any reason to exit calculate_synthetic_size_worker following any
 // return value -- we don't need to care about shutdown because no tenant is found when
 // pageserver is shut down.
@@ -355,7 +319,9 @@ async fn calculate_and_log(tenant: &Tenant, cancel: &CancellationToken, ctx: &Re
 };
 // this error can be returned if timeline is shutting down, but it does not
-// mean the synthetic size worker should terminate.
+// mean the synthetic size worker should terminate. we do not need any checks
+// in this function because `mgr::get_tenant` will error out after shutdown has
+// progressed to shutting down tenants.
 let shutting_down = matches!(
 e.downcast_ref::<PageReconstructError>(),
 Some(PageReconstructError::Cancelled | PageReconstructError::AncestorStopping(_))

View File
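The `collect_metrics` loop above wakes every `metric_collection_interval` and exits promptly on cancellation. The real code is tokio-based with a `CancellationToken`; a std-only sketch of the same shape, using a channel send (or sender drop) as the cancellation signal, might look like this (all names here are illustrative):

```rust
// Sketch of a cancellable periodic worker loop, as in collect_metrics /
// calculate_synthetic_size_worker. std-only stand-in for tokio + CancellationToken.
use std::sync::mpsc;
use std::time::Duration;

fn run_loop(cancel: mpsc::Receiver<()>, interval: Duration, max_iters: usize) -> usize {
    let mut iterations = 0;
    loop {
        // ... collect and upload metrics here ...
        iterations += 1;
        if iterations >= max_iters {
            return iterations; // test-only exit so the sketch terminates
        }
        // Sleep until the next tick, waking early if cancelled
        // (a message arrives or the sender is dropped).
        match cancel.recv_timeout(interval) {
            Ok(()) | Err(mpsc::RecvTimeoutError::Disconnected) => return iterations,
            Err(mpsc::RecvTimeoutError::Timeout) => {} // next tick
        }
    }
}

fn main() {
    let (_tx, rx) = mpsc::channel::<()>();
    let n = run_loop(rx, Duration::from_millis(1), 3);
    assert_eq!(n, 3);
    println!("ran {n} iterations");
}
```

The important property, which the diff's loop shares, is that cancellation is checked during the sleep, so shutdown never has to wait out a full interval.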

@@ -1,4 +1,3 @@
-use crate::tenant::mgr::TenantManager;
 use crate::{context::RequestContext, tenant::timeline::logical_size::CurrentLogicalSize};
 use chrono::{DateTime, Utc};
 use consumption_metrics::EventType;
@@ -182,7 +181,6 @@ impl MetricsKey {
 }
 pub(super) async fn collect_all_metrics(
-    tenant_manager: &Arc<TenantManager>,
     cached_metrics: &Cache,
     ctx: &RequestContext,
 ) -> Vec<RawMetric> {
@@ -190,7 +188,7 @@ pub(super) async fn collect_all_metrics(
 let started_at = std::time::Instant::now();
-let tenants = match tenant_manager.list_tenants() {
+let tenants = match crate::tenant::mgr::list_tenants().await {
 Ok(tenants) => tenants,
 Err(err) => {
 tracing::error!("failed to list tenants: {:?}", err);
@@ -202,8 +200,7 @@ pub(super) async fn collect_all_metrics(
 if state != TenantState::Active || !id.is_zero() {
 None
 } else {
-tenant_manager
-    .get_attached_tenant_shard(id)
+crate::tenant::mgr::get_tenant(id, true)
 .ok()
 .map(|tenant| (id.tenant_id, tenant))
 }

View File

@@ -1,9 +1,4 @@
use std::time::SystemTime;
use chrono::{DateTime, Utc};
use consumption_metrics::{Event, EventChunk, IdempotencyKey, CHUNK_SIZE}; use consumption_metrics::{Event, EventChunk, IdempotencyKey, CHUNK_SIZE};
use remote_storage::{GenericRemoteStorage, RemotePath};
use tokio::io::AsyncWriteExt;
use tokio_util::sync::CancellationToken; use tokio_util::sync::CancellationToken;
use tracing::Instrument; use tracing::Instrument;
@@ -18,9 +13,8 @@ struct Ids {
pub(super) timeline_id: Option<TimelineId>, pub(super) timeline_id: Option<TimelineId>,
} }
/// Serialize and write metrics to an HTTP endpoint
#[tracing::instrument(skip_all, fields(metrics_total = %metrics.len()))] #[tracing::instrument(skip_all, fields(metrics_total = %metrics.len()))]
pub(super) async fn upload_metrics_http( pub(super) async fn upload_metrics(
client: &reqwest::Client, client: &reqwest::Client,
metric_collection_endpoint: &reqwest::Url, metric_collection_endpoint: &reqwest::Url,
cancel: &CancellationToken, cancel: &CancellationToken,
@@ -80,60 +74,6 @@ pub(super) async fn upload_metrics_http(
Ok(()) Ok(())
} }
/// Serialize and write metrics to a remote storage object
#[tracing::instrument(skip_all, fields(metrics_total = %metrics.len()))]
pub(super) async fn upload_metrics_bucket(
client: &GenericRemoteStorage,
cancel: &CancellationToken,
node_id: &str,
metrics: &[RawMetric],
) -> anyhow::Result<()> {
if metrics.is_empty() {
// Skip uploads if we have no metrics, so that readers don't have to handle the edge case
// of an empty object.
return Ok(());
}
// Compose object path
let datetime: DateTime<Utc> = SystemTime::now().into();
let ts_prefix = datetime.format("year=%Y/month=%m/day=%d/%H:%M:%SZ");
let path = RemotePath::from_string(&format!("{ts_prefix}_{node_id}.ndjson.gz"))?;
// Set up a gzip writer into a buffer
let mut compressed_bytes: Vec<u8> = Vec::new();
let compressed_writer = std::io::Cursor::new(&mut compressed_bytes);
let mut gzip_writer = async_compression::tokio::write::GzipEncoder::new(compressed_writer);
// Serialize and write into compressed buffer
let started_at = std::time::Instant::now();
for res in serialize_in_chunks(CHUNK_SIZE, metrics, node_id) {
let (_chunk, body) = res?;
gzip_writer.write_all(&body).await?;
}
gzip_writer.flush().await?;
gzip_writer.shutdown().await?;
let compressed_length = compressed_bytes.len();
// Write to remote storage
client
.upload_storage_object(
futures::stream::once(futures::future::ready(Ok(compressed_bytes.into()))),
compressed_length,
&path,
cancel,
)
.await?;
let elapsed = started_at.elapsed();
tracing::info!(
compressed_length,
elapsed_ms = elapsed.as_millis(),
"write metrics bucket at {path}",
);
Ok(())
}
// The return type is quite ugly, but we gain testability in isolation // The return type is quite ugly, but we gain testability in isolation
fn serialize_in_chunks<'a, F>( fn serialize_in_chunks<'a, F>(
chunk_size: usize, chunk_size: usize,

View File
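Both upload paths in the file above feed off `serialize_in_chunks`, which serializes the metrics `CHUNK_SIZE` events at a time so that each HTTP request or bucket write stays bounded. A hedged std-only sketch of that chunking idea, with a stand-in `Metric` type and a simplified JSON-ish encoding rather than the crate's real `RawMetric`/`EventChunk` machinery:

```rust
// Sketch of the chunked-serialization idea behind serialize_in_chunks:
// split the metrics into fixed-size chunks and encode each chunk as one
// newline-delimited body (the real code also attaches idempotency keys).

const CHUNK_SIZE: usize = 2; // illustrative; the crate defines its own constant

#[derive(Debug)]
struct Metric {
    name: &'static str,
    value: u64,
}

fn serialize_in_chunks(chunk_size: usize, metrics: &[Metric]) -> Vec<String> {
    metrics
        .chunks(chunk_size)
        .map(|chunk| {
            chunk
                .iter()
                .map(|m| format!("{{\"name\":\"{}\",\"value\":{}}}", m.name, m.value))
                .collect::<Vec<_>>()
                .join("\n")
        })
        .collect()
}

fn main() {
    let metrics = [
        Metric { name: "written_size", value: 10 },
        Metric { name: "synthetic_size", value: 20 },
        Metric { name: "resident_size", value: 30 },
    ];
    let bodies = serialize_in_chunks(CHUNK_SIZE, &metrics);
    // 3 metrics with chunk size 2 -> two request bodies
    assert_eq!(bodies.len(), 2);
    println!("{}", bodies[0]);
}
```

Producing the chunked bodies independently of the transport is what lets the deleted `upload_metrics_bucket` reuse the same serializer as the HTTP path, and it is also why the comment in the diff notes the function "gains testability in isolation".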

@@ -5,8 +5,7 @@ use pageserver_api::{
 controller_api::NodeRegisterRequest,
 shard::TenantShardId,
 upcall_api::{
-    ReAttachRequest, ReAttachResponse, ReAttachResponseTenant, ValidateRequest,
-    ValidateRequestTenant, ValidateResponse,
+    ReAttachRequest, ReAttachResponse, ValidateRequest, ValidateRequestTenant, ValidateResponse,
 },
 };
 use serde::{de::DeserializeOwned, Serialize};
@@ -38,9 +37,7 @@ pub trait ControlPlaneGenerationsApi {
 fn re_attach(
 &self,
 conf: &PageServerConf,
-) -> impl Future<
-    Output = Result<HashMap<TenantShardId, ReAttachResponseTenant>, RetryForeverError>,
-> + Send;
+) -> impl Future<Output = Result<HashMap<TenantShardId, Generation>, RetryForeverError>> + Send;
 fn validate(
 &self,
 tenants: Vec<(TenantShardId, Generation)>,
@@ -121,7 +118,7 @@ impl ControlPlaneGenerationsApi for ControlPlaneClient {
 async fn re_attach(
 &self,
 conf: &PageServerConf,
-) -> Result<HashMap<TenantShardId, ReAttachResponseTenant>, RetryForeverError> {
+) -> Result<HashMap<TenantShardId, Generation>, RetryForeverError> {
 let re_attach_path = self
 .base_url
 .join("re-attach")
@@ -184,7 +181,7 @@ impl ControlPlaneGenerationsApi for ControlPlaneClient {
 Ok(response
 .tenants
 .into_iter()
-.map(|rart| (rart.id, rart))
+.map(|t| (t.id, Generation::new(t.gen)))
 .collect::<HashMap<_, _>>())
 }

View File
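The change above narrows `re_attach`'s return type: one side keeps the whole per-tenant `ReAttachResponseTenant` record, the other reduces each record to just its generation number before building the map. A simplified std-only sketch of that reduction (the types here are stand-ins; the real `TenantShardId` and `Generation` live in `pageserver_api` and `utils`):

```rust
// Sketch of the re-attach response mapping: collect the control plane's
// per-tenant records into a HashMap keyed by tenant, keeping only the
// generation. Field and type names are simplified stand-ins.
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct ReAttachResponseTenant {
    id: u64, // stand-in for TenantShardId
    generation: u32,
}

#[derive(Debug, PartialEq)]
struct Generation(u32);

fn to_generations(tenants: Vec<ReAttachResponseTenant>) -> HashMap<u64, Generation> {
    tenants
        .into_iter()
        .map(|t| (t.id, Generation(t.generation)))
        .collect()
}

fn main() {
    let resp = vec![
        ReAttachResponseTenant { id: 1, generation: 5 },
        ReAttachResponseTenant { id: 2, generation: 7 },
    ];
    let map = to_generations(resp);
    assert_eq!(map.get(&1), Some(&Generation(5)));
    println!("{} tenants re-attached", map.len());
}
```

Returning only the generation keeps the trait signature minimal, at the cost of discarding any extra per-tenant fields the response may carry; the wider `ReAttachResponseTenant` map on the other side of the diff is the opposite trade.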

@@ -724,8 +724,8 @@ impl DeletionQueue {
 mod test {
 use camino::Utf8Path;
 use hex_literal::hex;
-use pageserver_api::{shard::ShardIndex, upcall_api::ReAttachResponseTenant};
-use std::{io::ErrorKind, time::Duration};
+use pageserver_api::shard::ShardIndex;
+use std::io::ErrorKind;
 use tracing::info;
 use remote_storage::{RemoteStorageConfig, RemoteStorageKind};
@@ -834,10 +834,9 @@ mod test {
 async fn re_attach(
 &self,
 _conf: &PageServerConf,
-) -> Result<HashMap<TenantShardId, ReAttachResponseTenant>, RetryForeverError> {
+) -> Result<HashMap<TenantShardId, Generation>, RetryForeverError> {
 unimplemented!()
 }
 async fn validate(
 &self,
 tenants: Vec<(TenantShardId, Generation)>,

View File

@@ -61,6 +61,7 @@ use crate::{
 metrics::disk_usage_based_eviction::METRICS,
 task_mgr::{self, TaskKind, BACKGROUND_RUNTIME},
 tenant::{
+    self,
     mgr::TenantManager,
     remote_timeline_client::LayerFileMetadata,
     secondary::SecondaryTenant,
@@ -813,8 +814,8 @@ async fn collect_eviction_candidates(
 const LOG_DURATION_THRESHOLD: std::time::Duration = std::time::Duration::from_secs(10);
 // get a snapshot of the list of tenants
-let tenants = tenant_manager
-    .list_tenants()
+let tenants = tenant::mgr::list_tenants()
+    .await
 .context("get list of tenants")?;
 // TODO: avoid listing every layer in every tenant: this loop can block the executor,
@@ -826,12 +827,8 @@ async fn collect_eviction_candidates(
 if cancel.is_cancelled() {
 return Ok(EvictionCandidates::Cancelled);
 }
-let tenant = match tenant_manager.get_attached_tenant_shard(tenant_id) {
-    Ok(tenant) if tenant.is_active() => tenant,
-    Ok(_) => {
-        debug!(tenant_id=%tenant_id.tenant_id, shard_id=%tenant_id.shard_slug(), "Tenant shard is not active");
-        continue;
-    }
+let tenant = match tenant::mgr::get_tenant(tenant_id, true) {
+    Ok(tenant) => tenant,
 Err(e) => {
 // this can happen if tenant has lifecycle transition after we fetched it
 debug!("failed to get tenant: {e:#}");

View File

@@ -1038,7 +1038,7 @@ paths:
 format: hex
 responses:
 "201":
-description: Timeline was created, or already existed with matching parameters
+description: TimelineInfo
 content:
 application/json:
 schema:
@@ -1068,17 +1068,11 @@ paths:
 schema:
 $ref: "#/components/schemas/Error"
 "409":
-description: Timeline already exists, with different parameters. Creation cannot proceed.
+description: Timeline already exists, creation skipped
 content:
 application/json:
 schema:
 $ref: "#/components/schemas/ConflictError"
-"429":
-    description: A creation request was sent for the same Timeline Id while a creation was already in progress. Back off and retry.
-    content:
-        application/json:
-            schema:
-                $ref: "#/components/schemas/Error"
 "500":
 description: Generic operation error
 content:

View File

@@ -36,7 +36,6 @@ use tokio_util::sync::CancellationToken;
 use tracing::*;
 use utils::auth::JwtAuth;
 use utils::failpoint_support::failpoints_handler;
-use utils::http::endpoint::prometheus_metrics_handler;
 use utils::http::endpoint::request_span;
 use utils::http::json::json_request_or_empty_body;
 use utils::http::request::{get_request_param, must_get_query_param, parse_query_param};
@@ -49,8 +48,8 @@ use crate::task_mgr::TaskKind;
 use crate::tenant::config::{LocationConf, TenantConfOpt};
 use crate::tenant::mgr::GetActiveTenantError;
 use crate::tenant::mgr::{
-    GetTenantError, TenantManager, TenantMapError, TenantMapInsertError, TenantSlotError,
-    TenantSlotUpsertError, TenantStateError,
+    GetTenantError, SetNewTenantConfigError, TenantManager, TenantMapError, TenantMapInsertError,
+    TenantSlotError, TenantSlotUpsertError, TenantStateError,
 };
 use crate::tenant::mgr::{TenantSlot, UpsertLocationError};
 use crate::tenant::remote_timeline_client;
@@ -249,11 +248,16 @@ impl From<GetTenantError> for ApiError {
 fn from(tse: GetTenantError) -> ApiError {
 match tse {
 GetTenantError::NotFound(tid) => ApiError::NotFound(anyhow!("tenant {}", tid).into()),
-GetTenantError::Broken(reason) => {
-    ApiError::InternalServerError(anyhow!("tenant is broken: {}", reason))
-}
 GetTenantError::NotActive(_) => {
 // Why is this not `ApiError::NotFound`?
 // Because we must be careful to never return 404 for a tenant if it does
 // in fact exist locally. If we did, the caller could draw the conclusion
 // that it can attach the tenant to another PS and we'd be in split-brain.
+//
+// (We can produce this variant only in `mgr::get_tenant(..., active=true)` calls).
 ApiError::ResourceUnavailable("Tenant not yet active".into())
 }
 GetTenantError::MapState(e) => ApiError::ResourceUnavailable(format!("{e}").into()),
@@ -264,9 +268,6 @@ impl From<GetTenantError> for ApiError {
 impl From<GetActiveTenantError> for ApiError {
 fn from(e: GetActiveTenantError) -> ApiError {
 match e {
-GetActiveTenantError::Broken(reason) => {
-    ApiError::InternalServerError(anyhow!("tenant is broken: {}", reason))
-}
 GetActiveTenantError::WillNotBecomeActive(_) => ApiError::Conflict(format!("{}", e)),
 GetActiveTenantError::Cancelled => ApiError::ShuttingDown,
 GetActiveTenantError::NotFound(gte) => gte.into(),
@@ -277,6 +278,19 @@ impl From<GetActiveTenantError> for ApiError {
 }
 }
+impl From<SetNewTenantConfigError> for ApiError {
+    fn from(e: SetNewTenantConfigError) -> ApiError {
+        match e {
+            SetNewTenantConfigError::GetTenant(tid) => {
+                ApiError::NotFound(anyhow!("tenant {}", tid).into())
+            }
+            e @ (SetNewTenantConfigError::Persist(_) | SetNewTenantConfigError::Other(_)) => {
+                ApiError::InternalServerError(anyhow::Error::new(e))
+            }
+        }
+    }
+}
 impl From<crate::tenant::DeleteTimelineError> for ApiError {
 fn from(value: crate::tenant::DeleteTimelineError) -> Self {
 use crate::tenant::DeleteTimelineError::*;
@@ -480,7 +494,7 @@ async fn timeline_create_handler(
 async {
 let tenant = state
 .tenant_manager
-.get_attached_tenant_shard(tenant_shard_id)?;
+.get_attached_tenant_shard(tenant_shard_id, false)?;
 tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?;
@@ -520,13 +534,10 @@ async fn timeline_create_handler(
 HttpErrorBody::from_msg("Tenant shutting down".to_string()),
 )
 }
-Err(e @ tenant::CreateTimelineError::Conflict) => {
-    json_response(StatusCode::CONFLICT, HttpErrorBody::from_msg(e.to_string()))
-}
-Err(e @ tenant::CreateTimelineError::AlreadyCreating) => json_response(
-    StatusCode::TOO_MANY_REQUESTS,
-    HttpErrorBody::from_msg(e.to_string()),
-),
+Err(
+    e @ tenant::CreateTimelineError::Conflict
+    | e @ tenant::CreateTimelineError::AlreadyCreating,
+) => json_response(StatusCode::CONFLICT, HttpErrorBody::from_msg(e.to_string())),
 Err(tenant::CreateTimelineError::AncestorLsn(err)) => json_response(
 StatusCode::NOT_ACCEPTABLE,
 HttpErrorBody::from_msg(format!("{err:#}")),
@@ -569,7 +580,7 @@ async fn timeline_list_handler(
 let response_data = async {
 let tenant = state
 .tenant_manager
-.get_attached_tenant_shard(tenant_shard_id)?;
+.get_attached_tenant_shard(tenant_shard_id, false)?;
 tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?;
@@ -607,7 +618,6 @@ async fn timeline_preserve_initdb_handler(
 let tenant_shard_id: TenantShardId = parse_request_param(&request, "tenant_shard_id")?;
 let timeline_id: TimelineId = parse_request_param(&request, "timeline_id")?;
 check_permission(&request, Some(tenant_shard_id.tenant_id))?;
-let state = get_state(&request);
 // Part of the process for disaster recovery from safekeeper-stored WAL:
 // If we don't recover into a new timeline but want to keep the timeline ID,
@@ -615,9 +625,7 @@ async fn timeline_preserve_initdb_handler(
 // location where timeline recreation cand find it.
 async {
-let tenant = state
-    .tenant_manager
-    .get_attached_tenant_shard(tenant_shard_id)?;
+let tenant = mgr::get_tenant(tenant_shard_id, false)?;
 let timeline = tenant
 .get_timeline(timeline_id, false)
@@ -659,7 +667,7 @@ async fn timeline_detail_handler(
 let timeline_info = async {
 let tenant = state
 .tenant_manager
-.get_attached_tenant_shard(tenant_shard_id)?;
+.get_attached_tenant_shard(tenant_shard_id, false)?;
 tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?;
@@ -846,7 +854,7 @@ async fn timeline_delete_handler(
 let tenant = state
 .tenant_manager
-.get_attached_tenant_shard(tenant_shard_id)
+.get_attached_tenant_shard(tenant_shard_id, false)
 .map_err(|e| {
 match e {
 // GetTenantError has a built-in conversion to ApiError, but in this context we don't
@@ -877,16 +885,14 @@ async fn tenant_detach_handler(
 let state = get_state(&request);
 let conf = state.conf;
-state
-    .tenant_manager
-    .detach_tenant(
-        conf,
-        tenant_shard_id,
-        detach_ignored.unwrap_or(false),
-        &state.deletion_queue_client,
-    )
-    .instrument(info_span!("tenant_detach", %tenant_id, shard_id=%tenant_shard_id.shard_slug()))
-    .await?;
+mgr::detach_tenant(
+    conf,
+    tenant_shard_id,
+    detach_ignored.unwrap_or(false),
+    &state.deletion_queue_client,
+)
+.instrument(info_span!("tenant_detach", %tenant_id, shard_id=%tenant_shard_id.shard_slug()))
+.await?;
 json_response(StatusCode::OK, ())
 }
@@ -964,11 +970,10 @@ async fn tenant_list_handler(
 _cancel: CancellationToken,
 ) -> Result<Response<Body>, ApiError> {
 check_permission(&request, None)?;
-let state = get_state(&request);
-let response_data = state
-    .tenant_manager
-    .list_tenants()
+let response_data = mgr::list_tenants()
+    .instrument(info_span!("tenant_list"))
+    .await
 .map_err(|_| {
 ApiError::ResourceUnavailable("Tenant map is initializing or shutting down".into())
 })?
@@ -991,12 +996,9 @@ async fn tenant_status(
 ) -> Result<Response<Body>, ApiError> {
let tenant_shard_id: TenantShardId = parse_request_param(&request, "tenant_shard_id")?; let tenant_shard_id: TenantShardId = parse_request_param(&request, "tenant_shard_id")?;
check_permission(&request, Some(tenant_shard_id.tenant_id))?; check_permission(&request, Some(tenant_shard_id.tenant_id))?;
let state = get_state(&request);
let tenant_info = async { let tenant_info = async {
let tenant = state let tenant = mgr::get_tenant(tenant_shard_id, false)?;
.tenant_manager
.get_attached_tenant_shard(tenant_shard_id)?;
// Calculate total physical size of all timelines // Calculate total physical size of all timelines
let mut current_physical_size = 0; let mut current_physical_size = 0;
@@ -1069,7 +1071,9 @@ async fn tenant_size_handler(
let inputs_only: Option<bool> = parse_query_param(&request, "inputs_only")?; let inputs_only: Option<bool> = parse_query_param(&request, "inputs_only")?;
let retention_period: Option<u64> = parse_query_param(&request, "retention_period")?; let retention_period: Option<u64> = parse_query_param(&request, "retention_period")?;
let headers = request.headers(); let headers = request.headers();
let state = get_state(&request);
let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Download);
let tenant = mgr::get_tenant(tenant_shard_id, true)?;
if !tenant_shard_id.is_zero() { if !tenant_shard_id.is_zero() {
return Err(ApiError::BadRequest(anyhow!( return Err(ApiError::BadRequest(anyhow!(
@@ -1077,12 +1081,6 @@ async fn tenant_size_handler(
))); )));
} }
let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Download);
let tenant = state
.tenant_manager
.get_attached_tenant_shard(tenant_shard_id)?;
tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?;
// this can be long operation // this can be long operation
let inputs = tenant let inputs = tenant
.gather_size_inputs( .gather_size_inputs(
@@ -1151,15 +1149,10 @@ async fn tenant_shard_split_handler(
let state = get_state(&request); let state = get_state(&request);
let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Warn); let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Warn);
let tenant = state
.tenant_manager
.get_attached_tenant_shard(tenant_shard_id)?;
tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?;
let new_shards = state let new_shards = state
.tenant_manager .tenant_manager
.shard_split( .shard_split(
tenant, tenant_shard_id,
ShardCount::new(req.new_shard_count), ShardCount::new(req.new_shard_count),
req.new_stripe_size, req.new_stripe_size,
&ctx, &ctx,
@@ -1377,11 +1370,8 @@ async fn get_tenant_config_handler(
) -> Result<Response<Body>, ApiError> { ) -> Result<Response<Body>, ApiError> {
let tenant_shard_id: TenantShardId = parse_request_param(&request, "tenant_shard_id")?; let tenant_shard_id: TenantShardId = parse_request_param(&request, "tenant_shard_id")?;
check_permission(&request, Some(tenant_shard_id.tenant_id))?; check_permission(&request, Some(tenant_shard_id.tenant_id))?;
let state = get_state(&request);
let tenant = state let tenant = mgr::get_tenant(tenant_shard_id, false)?;
.tenant_manager
.get_attached_tenant_shard(tenant_shard_id)?;
let response = HashMap::from([ let response = HashMap::from([
( (
@@ -1409,31 +1399,13 @@ async fn update_tenant_config_handler(
let tenant_id = request_data.tenant_id; let tenant_id = request_data.tenant_id;
check_permission(&request, Some(tenant_id))?; check_permission(&request, Some(tenant_id))?;
let new_tenant_conf = let tenant_conf =
TenantConfOpt::try_from(&request_data.config).map_err(ApiError::BadRequest)?; TenantConfOpt::try_from(&request_data.config).map_err(ApiError::BadRequest)?;
let state = get_state(&request); let state = get_state(&request);
mgr::set_new_tenant_config(state.conf, tenant_conf, tenant_id)
let tenant_shard_id = TenantShardId::unsharded(tenant_id); .instrument(info_span!("tenant_config", %tenant_id))
.await?;
let tenant = state
.tenant_manager
.get_attached_tenant_shard(tenant_shard_id)?;
tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?;
// This is a legacy API that only operates on attached tenants: the preferred
// API to use is the location_config/ endpoint, which lets the caller provide
// the full LocationConf.
let location_conf = LocationConf::attached_single(
new_tenant_conf.clone(),
tenant.get_generation(),
&ShardParameters::default(),
);
crate::tenant::Tenant::persist_tenant_config(state.conf, &tenant_shard_id, &location_conf)
.await
.map_err(ApiError::InternalServerError)?;
tenant.set_new_tenant_config(new_tenant_conf);
json_response(StatusCode::OK, ()) json_response(StatusCode::OK, ())
} }
@@ -1456,14 +1428,13 @@ async fn put_tenant_location_config_handler(
// The `Detached` state is special, it doesn't upsert a tenant, it removes // The `Detached` state is special, it doesn't upsert a tenant, it removes
// its local disk content and drops it from memory. // its local disk content and drops it from memory.
if let LocationConfigMode::Detached = request_data.config.mode { if let LocationConfigMode::Detached = request_data.config.mode {
if let Err(e) = state if let Err(e) =
.tenant_manager mgr::detach_tenant(conf, tenant_shard_id, true, &state.deletion_queue_client)
.detach_tenant(conf, tenant_shard_id, true, &state.deletion_queue_client) .instrument(info_span!("tenant_detach",
.instrument(info_span!("tenant_detach", tenant_id = %tenant_shard_id.tenant_id,
tenant_id = %tenant_shard_id.tenant_id, shard_id = %tenant_shard_id.shard_slug()
shard_id = %tenant_shard_id.shard_slug() ))
)) .await
.await
{ {
match e { match e {
TenantStateError::SlotError(TenantSlotError::NotFound(_)) => { TenantStateError::SlotError(TenantSlotError::NotFound(_)) => {
@@ -1657,12 +1628,10 @@ async fn handle_tenant_break(
) -> Result<Response<Body>, ApiError> { ) -> Result<Response<Body>, ApiError> {
let tenant_shard_id: TenantShardId = parse_request_param(&r, "tenant_shard_id")?; let tenant_shard_id: TenantShardId = parse_request_param(&r, "tenant_shard_id")?;
let state = get_state(&r); let tenant = crate::tenant::mgr::get_tenant(tenant_shard_id, true)
state .map_err(|_| ApiError::Conflict(String::from("no active tenant found")))?;
.tenant_manager
.get_attached_tenant_shard(tenant_shard_id)? tenant.set_broken("broken from test".to_owned()).await;
.set_broken("broken from test".to_owned())
.await;
json_response(StatusCode::OK, ()) json_response(StatusCode::OK, ())
} }
@@ -1679,7 +1648,8 @@ async fn timeline_gc_handler(
let gc_req: TimelineGcRequest = json_request(&mut request).await?; let gc_req: TimelineGcRequest = json_request(&mut request).await?;
let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Download); let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Download);
let wait_task_done = mgr::immediate_gc(tenant_shard_id, timeline_id, gc_req, cancel, &ctx)?; let wait_task_done =
mgr::immediate_gc(tenant_shard_id, timeline_id, gc_req, cancel, &ctx).await?;
let gc_result = wait_task_done let gc_result = wait_task_done
.await .await
.context("wait for gc task") .context("wait for gc task")
@@ -1906,7 +1876,7 @@ async fn active_timeline_of_active_tenant(
tenant_shard_id: TenantShardId, tenant_shard_id: TenantShardId,
timeline_id: TimelineId, timeline_id: TimelineId,
) -> Result<Arc<Timeline>, ApiError> { ) -> Result<Arc<Timeline>, ApiError> {
let tenant = tenant_manager.get_attached_tenant_shard(tenant_shard_id)?; let tenant = tenant_manager.get_attached_tenant_shard(tenant_shard_id, false)?;
tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?; tenant.wait_to_become_active(ACTIVE_TENANT_TIMEOUT).await?;
@@ -2292,7 +2262,6 @@ pub fn make_router(
Ok(router Ok(router
.data(state) .data(state)
.get("/metrics", |r| request_span(r, prometheus_metrics_handler))
.get("/v1/status", |r| api_handler(r, status_handler)) .get("/v1/status", |r| api_handler(r, status_handler))
.put("/v1/failpoints", |r| { .put("/v1/failpoints", |r| {
testing_api_handler("manage failpoints", r, failpoints_handler) testing_api_handler("manage failpoints", r, failpoints_handler)


@@ -2,20 +2,28 @@
 //! Import data and WAL from a PostgreSQL data directory and WAL segments into
 //! a neon Timeline.
 //!
+use std::io::SeekFrom;
 use std::path::{Path, PathBuf};
 use anyhow::{bail, ensure, Context, Result};
+use async_compression::tokio::bufread::ZstdDecoder;
+use async_compression::{tokio::write::ZstdEncoder, zstd::CParameter, Level};
 use bytes::Bytes;
 use camino::Utf8Path;
 use futures::StreamExt;
-use tokio::io::{AsyncRead, AsyncReadExt};
+use nix::NixPath;
+use tokio::fs::{File, OpenOptions};
+use tokio::io::{AsyncBufRead, AsyncRead, AsyncReadExt, AsyncSeekExt, AsyncWriteExt};
 use tokio_tar::Archive;
+use tokio_tar::Builder;
+use tokio_tar::HeaderMode;
 use tracing::*;
 use walkdir::WalkDir;
 use crate::context::RequestContext;
 use crate::metrics::WAL_INGEST;
 use crate::pgdatadir_mapping::*;
+use crate::tenant::remote_timeline_client::INITDB_PATH;
 use crate::tenant::Timeline;
 use crate::walingest::WalIngest;
 use crate::walrecord::DecodedWALRecord;
@@ -625,3 +633,65 @@ async fn read_all_bytes(reader: &mut (impl AsyncRead + Unpin)) -> Result<Bytes>
 reader.read_to_end(&mut buf).await?;
 Ok(Bytes::from(buf))
 }
+pub async fn create_tar_zst(pgdata_path: &Utf8Path, tmp_path: &Utf8Path) -> Result<(File, u64)> {
+let file = OpenOptions::new()
+.create(true)
+.truncate(true)
+.read(true)
+.write(true)
+.open(&tmp_path)
+.await
+.with_context(|| format!("tempfile creation {tmp_path}"))?;
+let mut paths = Vec::new();
+for entry in WalkDir::new(pgdata_path) {
+let entry = entry?;
+let metadata = entry.metadata().expect("error getting dir entry metadata");
+// Also allow directories so that we also get empty directories
+if !(metadata.is_file() || metadata.is_dir()) {
+continue;
+}
+let path = entry.into_path();
+paths.push(path);
+}
+// Do a sort to get a more consistent listing
+paths.sort_unstable();
+let zstd = ZstdEncoder::with_quality_and_params(
+file,
+Level::Default,
+&[CParameter::enable_long_distance_matching(true)],
+);
+let mut builder = Builder::new(zstd);
+// Use reproducible header mode
+builder.mode(HeaderMode::Deterministic);
+for path in paths {
+let rel_path = path.strip_prefix(pgdata_path)?;
+if rel_path.is_empty() {
+// The top directory should not be compressed,
+// the tar crate doesn't like that
+continue;
+}
+builder.append_path_with_name(&path, rel_path).await?;
+}
+let mut zstd = builder.into_inner().await?;
+zstd.shutdown().await?;
+let mut compressed = zstd.into_inner();
+let compressed_len = compressed.metadata().await?.len();
+const INITDB_TAR_ZST_WARN_LIMIT: u64 = 2 * 1024 * 1024;
+if compressed_len > INITDB_TAR_ZST_WARN_LIMIT {
+warn!("compressed {INITDB_PATH} size of {compressed_len} is above limit {INITDB_TAR_ZST_WARN_LIMIT}.");
+}
+compressed.seek(SeekFrom::Start(0)).await?;
+Ok((compressed, compressed_len))
+}
+pub async fn extract_tar_zst(
+pgdata_path: &Utf8Path,
+tar_zst: impl AsyncBufRead + Unpin,
+) -> Result<()> {
+let tar = Box::pin(ZstdDecoder::new(tar_zst));
+let mut archive = Archive::new(tar);
+archive.unpack(pgdata_path).await?;
+Ok(())
+}


@@ -435,7 +435,7 @@ pub(crate) static RESIDENT_PHYSICAL_SIZE_GLOBAL: Lazy<UIntGauge> = Lazy::new(||
 static REMOTE_PHYSICAL_SIZE: Lazy<UIntGaugeVec> = Lazy::new(|| {
 register_uint_gauge_vec!(
 "pageserver_remote_physical_size",
-"The size of the layer files present in the remote storage that are listed in the remote index_part.json.",
+"The size of the layer files present in the remote storage that are listed in the the remote index_part.json.",
 // Corollary: If any files are missing from the index part, they won't be included here.
 &["tenant_id", "shard_id", "timeline_id"]
 )
@@ -699,14 +699,6 @@ pub static STARTUP_IS_LOADING: Lazy<UIntGauge> = Lazy::new(|| {
 .expect("Failed to register pageserver_startup_is_loading")
 });
-pub(crate) static TIMELINE_EPHEMERAL_BYTES: Lazy<UIntGauge> = Lazy::new(|| {
-register_uint_gauge!(
-"pageserver_timeline_ephemeral_bytes",
-"Total number of bytes in ephemeral layers, summed for all timelines. Approximate, lazily updated."
-)
-.expect("Failed to register metric")
-});
 /// Metrics related to the lifecycle of a [`crate::tenant::Tenant`] object: things
 /// like how long it took to load.
 ///


@@ -760,7 +760,6 @@ impl PageServerHandler {
 let mut copyin_reader = pin!(StreamReader::new(self.copyin_stream(pgb, &tenant.cancel)));
 timeline
 .import_basebackup_from_tar(
-tenant.clone(),
 &mut copyin_reader,
 base_lsn,
 self.broker_client.clone(),


@@ -34,7 +34,6 @@ use strum::IntoEnumIterator;
 use tokio_util::sync::CancellationToken;
 use tracing::{debug, trace, warn};
 use utils::bin_ser::DeserializeError;
-use utils::vec_map::{VecMap, VecMapOrdering};
 use utils::{bin_ser::BeSer, lsn::Lsn};
 const MAX_AUX_FILE_DELTAS: usize = 1024;
@@ -1547,13 +1546,12 @@ impl<'a> DatadirModification<'a> {
 if !self.pending_updates.is_empty() {
 // The put_batch call below expects the inputs to be sorted by Lsn,
 // so we do that first.
-let lsn_ordered_batch: VecMap<Lsn, (Key, Value)> = VecMap::from_iter(
-self.pending_updates
-.drain()
-.map(|(key, vals)| vals.into_iter().map(move |(lsn, val)| (lsn, (key, val))))
-.kmerge_by(|lhs, rhs| lhs.0 < rhs.0),
-VecMapOrdering::GreaterOrEqual,
-);
+let lsn_ordered_batch: Vec<(Key, Lsn, Value)> = self
+.pending_updates
+.drain()
+.map(|(key, vals)| vals.into_iter().map(move |(lsn, val)| (key, lsn, val)))
+.kmerge_by(|lhs, rhs| lhs.1 .0 < rhs.1 .0)
+.collect();
 writer.put_batch(lsn_ordered_batch, ctx).await?;
 }


@@ -43,8 +43,6 @@ use utils::sync::gate::Gate;
 use utils::sync::gate::GateGuard;
 use utils::timeout::timeout_cancellable;
 use utils::timeout::TimeoutCancellableError;
-use utils::zstd::create_zst_tarball;
-use utils::zstd::extract_zst_tarball;
 use self::config::AttachedLocationConfig;
 use self::config::AttachmentMode;
@@ -202,13 +200,6 @@ pub(super) struct AttachedTenantConf {
 }
 impl AttachedTenantConf {
-fn new(tenant_conf: TenantConfOpt, location: AttachedLocationConfig) -> Self {
-Self {
-tenant_conf,
-location,
-}
-}
 fn try_from(location_conf: LocationConf) -> anyhow::Result<Self> {
 match &location_conf.mode {
 LocationMode::Attached(attach_conf) => Ok(Self {
@@ -685,20 +676,9 @@ impl Tenant {
 }
 // Ideally we should use Tenant::set_broken_no_wait, but it is not supposed to be used when tenant is in loading state.
-enum BrokenVerbosity {
-Error,
-Info
-}
 let make_broken =
-|t: &Tenant, err: anyhow::Error, verbosity: BrokenVerbosity| {
-match verbosity {
-BrokenVerbosity::Info => {
-info!("attach cancelled, setting tenant state to Broken: {err}");
-},
-BrokenVerbosity::Error => {
-error!("attach failed, setting tenant state to Broken: {err:?}");
-}
-}
+|t: &Tenant, err: anyhow::Error| {
+error!("attach failed, setting tenant state to Broken: {err:?}");
 t.state.send_modify(|state| {
 // The Stopping case is for when we have passed control on to DeleteTenantFlow:
 // if it errors, we will call make_broken when tenant is already in Stopping.
@@ -762,7 +742,7 @@ impl Tenant {
 // Make the tenant broken so that set_stopping will not hang waiting for it to leave
 // the Attaching state. This is an over-reaction (nothing really broke, the tenant is
 // just shutting down), but ensures progress.
-make_broken(&tenant_clone, anyhow::anyhow!("Shut down while Attaching"), BrokenVerbosity::Info);
+make_broken(&tenant_clone, anyhow::anyhow!("Shut down while Attaching"));
 return Ok(());
 },
 )
@@ -784,7 +764,7 @@ impl Tenant {
 match res {
 Ok(p) => Some(p),
 Err(e) => {
-make_broken(&tenant_clone, anyhow::anyhow!(e), BrokenVerbosity::Error);
+make_broken(&tenant_clone, anyhow::anyhow!(e));
 return Ok(());
 }
 }
@@ -808,7 +788,7 @@ impl Tenant {
 {
 Ok(should_resume_deletion) => should_resume_deletion,
 Err(err) => {
-make_broken(&tenant_clone, anyhow::anyhow!(err), BrokenVerbosity::Error);
+make_broken(&tenant_clone, anyhow::anyhow!(err));
 return Ok(());
 }
 }
@@ -838,7 +818,7 @@ impl Tenant {
 .await;
 if let Err(e) = deleted {
-make_broken(&tenant_clone, anyhow::anyhow!(e), BrokenVerbosity::Error);
+make_broken(&tenant_clone, anyhow::anyhow!(e));
 }
 return Ok(());
@@ -859,7 +839,7 @@ impl Tenant {
 tenant_clone.activate(broker_client, None, &ctx);
 }
 Err(e) => {
-make_broken(&tenant_clone, anyhow::anyhow!(e), BrokenVerbosity::Error);
+make_broken(&tenant_clone, anyhow::anyhow!(e));
 }
 }
@@ -1411,7 +1391,7 @@ impl Tenant {
 /// the same timeline ID already exists, returns CreateTimelineError::AlreadyExists.
 #[allow(clippy::too_many_arguments)]
 pub(crate) async fn create_timeline(
-self: &Arc<Tenant>,
+&self,
 new_timeline_id: TimelineId,
 ancestor_timeline_id: Option<TimelineId>,
 mut ancestor_start_lsn: Option<Lsn>,
@@ -1559,7 +1539,7 @@ impl Tenant {
 })?;
 }
-loaded_timeline.activate(self.clone(), broker_client, None, ctx);
+loaded_timeline.activate(broker_client, None, ctx);
 Ok(loaded_timeline)
 }
@@ -1731,12 +1711,7 @@ impl Tenant {
 let mut activated_timelines = 0;
 for timeline in timelines_to_activate {
-timeline.activate(
-self.clone(),
-broker_client.clone(),
-background_jobs_can_start,
-ctx,
-);
+timeline.activate(broker_client.clone(), background_jobs_can_start, ctx);
 activated_timelines += 1;
 }
@@ -2068,12 +2043,7 @@ impl Tenant {
 TenantState::Active { .. } => {
 return Ok(());
 }
-TenantState::Broken { reason, .. } => {
-// This is fatal, and reported distinctly from the general case of "will never be active" because
-// it's logically a 500 to external API users (broken is always a bug).
-return Err(GetActiveTenantError::Broken(reason));
-}
-TenantState::Stopping { .. } => {
+TenantState::Broken { .. } | TenantState::Stopping { .. } => {
 // There's no chance the tenant can transition back into ::Active
 return Err(GetActiveTenantError::WillNotBecomeActive(current_state));
 }
@@ -2151,7 +2121,7 @@ impl Tenant {
 // Shut down the timeline's remote client: this means that the indices we write
 // for child shards will not be invalidated by the parent shard deleting layers.
-tl_client.shutdown().await;
+tl_client.shutdown().await?;
 // Download methods can still be used after shutdown, as they don't flow through the remote client's
 // queue. In principal the RemoteTimelineClient could provide this without downloading it, but this
@@ -3072,13 +3042,8 @@ impl Tenant {
 }
 }
-let (pgdata_zstd, tar_zst_size) = create_zst_tarball(pgdata_path, &temp_path).await?;
-const INITDB_TAR_ZST_WARN_LIMIT: u64 = 2 * 1024 * 1024;
-if tar_zst_size > INITDB_TAR_ZST_WARN_LIMIT {
-warn!(
-"compressed {temp_path} size of {tar_zst_size} is above limit {INITDB_TAR_ZST_WARN_LIMIT}."
-);
-}
+let (pgdata_zstd, tar_zst_size) =
+import_datadir::create_tar_zst(pgdata_path, &temp_path).await?;
 pausable_failpoint!("before-initdb-upload");
@@ -3178,7 +3143,7 @@ impl Tenant {
 let buf_read =
 BufReader::with_capacity(remote_timeline_client::BUFFER_SIZE, initdb_tar_zst);
-extract_zst_tarball(&pgdata_path, buf_read)
+import_datadir::extract_tar_zst(&pgdata_path, buf_read)
 .await
 .context("extract initdb tar")?;
 } else {


@@ -196,17 +196,16 @@ impl LocationConf {
 /// For use when attaching/re-attaching: update the generation stored in this
 /// structure. If we were in a secondary state, promote to attached (possession
 /// of a fresh generation implies this).
-pub(crate) fn attach_in_generation(&mut self, mode: AttachmentMode, generation: Generation) {
+pub(crate) fn attach_in_generation(&mut self, generation: Generation) {
 match &mut self.mode {
 LocationMode::Attached(attach_conf) => {
 attach_conf.generation = generation;
-attach_conf.attach_mode = mode;
 }
 LocationMode::Secondary(_) => {
 // We are promoted to attached by the control plane's re-attach response
 self.mode = LocationMode::Attached(AttachedLocationConfig {
 generation,
-attach_mode: mode,
+attach_mode: AttachmentMode::Single,
 })
 }
 }


@@ -111,7 +111,6 @@ async fn create_local_delete_mark(
 let _ = std::fs::OpenOptions::new()
 .write(true)
 .create(true)
-.truncate(true)
 .open(&marker_path)
 .with_context(|| format!("could not create delete marker file {marker_path:?}"))?;


@@ -2,13 +2,13 @@
//! page server. //! page server.
use camino::{Utf8DirEntry, Utf8Path, Utf8PathBuf}; use camino::{Utf8DirEntry, Utf8Path, Utf8PathBuf};
use futures::stream::StreamExt;
use itertools::Itertools; use itertools::Itertools;
use pageserver_api::key::Key; use pageserver_api::key::Key;
use pageserver_api::models::LocationConfigMode; use pageserver_api::models::ShardParameters;
use pageserver_api::shard::{ use pageserver_api::shard::{
ShardCount, ShardIdentity, ShardNumber, ShardStripeSize, TenantShardId, ShardCount, ShardIdentity, ShardNumber, ShardStripeSize, TenantShardId,
}; };
use pageserver_api::upcall_api::ReAttachResponseTenant;
use rand::{distributions::Alphanumeric, Rng}; use rand::{distributions::Alphanumeric, Rng};
use std::borrow::Cow; use std::borrow::Cow;
use std::cmp::Ordering; use std::cmp::Ordering;
@@ -16,7 +16,6 @@ use std::collections::{BTreeMap, HashMap};
use std::ops::Deref; use std::ops::Deref;
use std::sync::Arc; use std::sync::Arc;
use std::time::{Duration, Instant}; use std::time::{Duration, Instant};
use sysinfo::SystemExt;
use tokio::fs; use tokio::fs;
use utils::timeout::{timeout_cancellable, TimeoutCancellableError}; use utils::timeout::{timeout_cancellable, TimeoutCancellableError};
@@ -40,10 +39,10 @@ use crate::metrics::{TENANT, TENANT_MANAGER as METRICS};
use crate::task_mgr::{self, TaskKind}; use crate::task_mgr::{self, TaskKind};
use crate::tenant::config::{ use crate::tenant::config::{
AttachedLocationConfig, AttachmentMode, LocationConf, LocationMode, SecondaryLocationConfig, AttachedLocationConfig, AttachmentMode, LocationConf, LocationMode, SecondaryLocationConfig,
TenantConfOpt,
}; };
use crate::tenant::delete::DeleteTenantFlow; use crate::tenant::delete::DeleteTenantFlow;
use crate::tenant::span::debug_assert_current_span_has_tenant_id; use crate::tenant::span::debug_assert_current_span_has_tenant_id;
use crate::tenant::storage_layer::inmemory_layer;
use crate::tenant::{AttachedTenantConf, SpawnMode, Tenant, TenantState}; use crate::tenant::{AttachedTenantConf, SpawnMode, Tenant, TenantState};
use crate::{InitializationOrder, IGNORED_TENANT_FILE_NAME, METADATA_FILE_NAME, TEMP_FILE_SUFFIX}; use crate::{InitializationOrder, IGNORED_TENANT_FILE_NAME, METADATA_FILE_NAME, TEMP_FILE_SUFFIX};
@@ -126,46 +125,6 @@ pub(crate) enum ShardSelector {
Page(Key), Page(Key),
} }
/// A convenience for use with the re_attach ControlPlaneClient function: rather
/// than the serializable struct, we build this enum that encapsulates
/// the invariant that attached tenants always have generations.
///
/// This represents the subset of a LocationConfig that we receive during re-attach.
pub(crate) enum TenantStartupMode {
Attached((AttachmentMode, Generation)),
Secondary,
}
impl TenantStartupMode {
/// Return the generation & mode that should be used when starting
/// this tenant.
///
/// If this returns None, the re-attach struct is in an invalid state and
/// should be ignored in the response.
fn from_reattach_tenant(rart: ReAttachResponseTenant) -> Option<Self> {
match (rart.mode, rart.gen) {
(LocationConfigMode::Detached, _) => None,
(LocationConfigMode::Secondary, _) => Some(Self::Secondary),
(LocationConfigMode::AttachedMulti, Some(g)) => {
Some(Self::Attached((AttachmentMode::Multi, Generation::new(g))))
}
(LocationConfigMode::AttachedSingle, Some(g)) => {
Some(Self::Attached((AttachmentMode::Single, Generation::new(g))))
}
(LocationConfigMode::AttachedStale, Some(g)) => {
Some(Self::Attached((AttachmentMode::Stale, Generation::new(g))))
}
_ => {
tracing::warn!(
"Received invalid re-attach state for tenant {}: {rart:?}",
rart.id
);
None
}
}
}
}
impl TenantsMap { impl TenantsMap {
/// Convenience function for typical usage, where we want to get a `Tenant` object, for /// Convenience function for typical usage, where we want to get a `Tenant` object, for
/// working with attached tenants. If the TenantId is in the map but in Secondary state, /// working with attached tenants. If the TenantId is in the map but in Secondary state,
@@ -312,7 +271,7 @@ pub struct TenantManager {
fn emergency_generations( fn emergency_generations(
tenant_confs: &HashMap<TenantShardId, anyhow::Result<LocationConf>>, tenant_confs: &HashMap<TenantShardId, anyhow::Result<LocationConf>>,
) -> HashMap<TenantShardId, TenantStartupMode> { ) -> HashMap<TenantShardId, Generation> {
tenant_confs tenant_confs
.iter() .iter()
.filter_map(|(tid, lc)| { .filter_map(|(tid, lc)| {
@@ -320,15 +279,12 @@ fn emergency_generations(
Ok(lc) => lc, Ok(lc) => lc,
Err(_) => return None, Err(_) => return None,
}; };
Some(( let gen = match &lc.mode {
*tid, LocationMode::Attached(alc) => Some(alc.generation),
match &lc.mode { LocationMode::Secondary(_) => None,
LocationMode::Attached(alc) => { };
TenantStartupMode::Attached((alc.attach_mode, alc.generation))
} gen.map(|g| (*tid, g))
LocationMode::Secondary(_) => TenantStartupMode::Secondary,
},
))
}) })
.collect() .collect()
} }
@@ -338,7 +294,7 @@ async fn init_load_generations(
     tenant_confs: &HashMap<TenantShardId, anyhow::Result<LocationConf>>,
     resources: &TenantSharedResources,
     cancel: &CancellationToken,
-) -> anyhow::Result<Option<HashMap<TenantShardId, TenantStartupMode>>> {
+) -> anyhow::Result<Option<HashMap<TenantShardId, Generation>>> {
     let generations = if conf.control_plane_emergency_mode {
         error!(
             "Emergency mode! Tenants will be attached unsafely using their last known generation"
@@ -348,12 +304,7 @@ async fn init_load_generations(
         info!("Calling control plane API to re-attach tenants");
         // If we are configured to use the control plane API, then it is the source of truth for what tenants to load.
         match client.re_attach(conf).await {
-            Ok(tenants) => tenants
-                .into_iter()
-                .flat_map(|(id, rart)| {
-                    TenantStartupMode::from_reattach_tenant(rart).map(|tsm| (id, tsm))
-                })
-                .collect(),
+            Ok(tenants) => tenants,
             Err(RetryForeverError::ShuttingDown) => {
                 anyhow::bail!("Shut down while waiting for control plane re-attach response")
             }
@@ -371,17 +322,9 @@ async fn init_load_generations(
     // Must only do this if remote storage is enabled, otherwise deletion queue
     // is not running and channel push will fail.
     if resources.remote_storage.is_some() {
-        let attached_tenants = generations
-            .iter()
-            .flat_map(|(id, start_mode)| {
-                match start_mode {
-                    TenantStartupMode::Attached((_mode, generation)) => Some(generation),
-                    TenantStartupMode::Secondary => None,
-                }
-                .map(|gen| (*id, *gen))
-            })
-            .collect();
-        resources.deletion_queue_client.recover(attached_tenants)?;
+        resources
+            .deletion_queue_client
+            .recover(generations.clone())?;
     }

     Ok(Some(generations))
@@ -544,23 +487,12 @@ pub async fn init_tenant_mgr(
     let ctx = RequestContext::todo_child(TaskKind::Startup, DownloadBehavior::Warn);

-    // Initialize dynamic limits that depend on system resources
-    let system_memory =
-        sysinfo::System::new_with_specifics(sysinfo::RefreshKind::new().with_memory())
-            .total_memory();
-    let max_ephemeral_layer_bytes =
-        conf.ephemeral_bytes_per_memory_kb as u64 * (system_memory / 1024);
-    tracing::info!("Initialized ephemeral layer size limit to {max_ephemeral_layer_bytes}, for {system_memory} bytes of memory");
-    inmemory_layer::GLOBAL_RESOURCES.max_dirty_bytes.store(
-        max_ephemeral_layer_bytes,
-        std::sync::atomic::Ordering::Relaxed,
-    );
-
     // Scan local filesystem for attached tenants
     let tenant_configs = init_load_tenant_configs(conf).await?;

-    // Determine which tenants are to be secondary or attached, and in which generation
-    let tenant_modes = init_load_generations(conf, &tenant_configs, &resources, &cancel).await?;
+    // Determine which tenants are to be attached
+    let tenant_generations =
+        init_load_generations(conf, &tenant_configs, &resources, &cancel).await?;

     tracing::info!(
         "Attaching {} tenants at startup, warming up {} at a time",
@@ -590,102 +522,97 @@ pub async fn init_tenant_mgr(
             }
         };

-        // FIXME: if we were attached, and get demoted to secondary on re-attach, we
-        // don't have a place to get a config.
-        // (https://github.com/neondatabase/neon/issues/5377)
-        const DEFAULT_SECONDARY_CONF: SecondaryLocationConfig =
-            SecondaryLocationConfig { warm: true };
-
-        // Update the location config according to the re-attach response
-        if let Some(tenant_modes) = &tenant_modes {
+        let generation = if let Some(generations) = &tenant_generations {
             // We have a generation map: treat it as the authority for whether
             // this tenant is really attached.
-            match tenant_modes.get(&tenant_shard_id) {
-                None => {
-                    info!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), "Detaching tenant, control plane omitted it in re-attach response");
-                    if let Err(e) = safe_remove_tenant_dir_all(&tenant_dir_path).await {
-                        error!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(),
-                            "Failed to remove detached tenant directory '{tenant_dir_path}': {e:?}",
-                        );
-                    }
-
-                    // We deleted local content: move on to next tenant, don't try and spawn this one.
-                    continue;
-                }
-                Some(TenantStartupMode::Secondary) => {
-                    if !matches!(location_conf.mode, LocationMode::Secondary(_)) {
-                        location_conf.mode = LocationMode::Secondary(DEFAULT_SECONDARY_CONF);
-                    }
-                }
-                Some(TenantStartupMode::Attached((attach_mode, generation))) => {
-                    let old_gen_higher = match &location_conf.mode {
-                        LocationMode::Attached(AttachedLocationConfig {
-                            generation: old_generation,
-                            attach_mode: _attach_mode,
-                        }) => {
-                            if old_generation > generation {
-                                Some(old_generation)
-                            } else {
-                                None
-                            }
-                        }
-                        _ => None,
-                    };
-                    if let Some(old_generation) = old_gen_higher {
+            if let Some(gen) = generations.get(&tenant_shard_id) {
+                if let LocationMode::Attached(attached) = &location_conf.mode {
+                    if attached.generation > *gen {
                         tracing::error!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(),
-                            "Control plane gave decreasing generation ({generation:?}) in re-attach response for tenant that was attached in generation {:?}, demoting to secondary",
-                            old_generation
+                            "Control plane gave decreasing generation ({gen:?}) in re-attach response for tenant that was attached in generation {:?}, demoting to secondary",
+                            attached.generation
                         );

                         // We cannot safely attach this tenant given a bogus generation number, but let's avoid throwing away
                         // local disk content: demote to secondary rather than detaching.
-                        location_conf.mode = LocationMode::Secondary(DEFAULT_SECONDARY_CONF);
-                    } else {
-                        location_conf.attach_in_generation(*attach_mode, *generation);
+                        tenants.insert(
+                            tenant_shard_id,
+                            TenantSlot::Secondary(SecondaryTenant::new(
+                                tenant_shard_id,
+                                location_conf.shard,
+                                location_conf.tenant_conf.clone(),
+                                &SecondaryLocationConfig { warm: false },
+                            )),
+                        );
                     }
                 }
+                *gen
+            } else {
+                match &location_conf.mode {
+                    LocationMode::Secondary(secondary_config) => {
+                        // We do not require the control plane's permission for secondary mode
+                        // tenants, because they do no remote writes and hence require no
+                        // generation number
+                        info!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), "Loaded tenant in secondary mode");
+                        tenants.insert(
+                            tenant_shard_id,
+                            TenantSlot::Secondary(SecondaryTenant::new(
+                                tenant_shard_id,
+                                location_conf.shard,
+                                location_conf.tenant_conf,
+                                secondary_config,
+                            )),
+                        );
+                    }
+                    LocationMode::Attached(_) => {
+                        // TODO: augment re-attach API to enable the control plane to
+                        // instruct us about secondary attachments. That way, instead of throwing
+                        // away local state, we can gracefully fall back to secondary here, if the control
+                        // plane tells us so.
+                        // (https://github.com/neondatabase/neon/issues/5377)
+                        info!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), "Detaching tenant, control plane omitted it in re-attach response");
+                        if let Err(e) = safe_remove_tenant_dir_all(&tenant_dir_path).await {
+                            error!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(),
+                                "Failed to remove detached tenant directory '{tenant_dir_path}': {e:?}",
+                            );
+                        }
+                    }
+                };
+
+                continue;
             }
         } else {
             // Legacy mode: no generation information, any tenant present
             // on local disk may activate
             info!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), "Starting tenant in legacy mode, no generation",);
+            Generation::none()
         };

         // Presence of a generation number implies attachment: attach the tenant
         // if it wasn't already, and apply the generation number.
+        location_conf.attach_in_generation(generation);
         Tenant::persist_tenant_config(conf, &tenant_shard_id, &location_conf).await?;

         let shard_identity = location_conf.shard;
-        let slot = match location_conf.mode {
-            LocationMode::Attached(attached_conf) => {
-                match tenant_spawn(
-                    conf,
-                    tenant_shard_id,
-                    &tenant_dir_path,
-                    resources.clone(),
-                    AttachedTenantConf::new(location_conf.tenant_conf, attached_conf),
-                    shard_identity,
-                    Some(init_order.clone()),
-                    &TENANTS,
-                    SpawnMode::Lazy,
-                    &ctx,
-                ) {
-                    Ok(tenant) => TenantSlot::Attached(tenant),
-                    Err(e) => {
-                        error!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), "Failed to start tenant: {e:#}");
-                        continue;
-                    }
-                }
-            }
-            LocationMode::Secondary(secondary_conf) => TenantSlot::Secondary(SecondaryTenant::new(
-                tenant_shard_id,
-                shard_identity,
-                location_conf.tenant_conf,
-                &secondary_conf,
-            )),
-        };
-
-        tenants.insert(tenant_shard_id, slot);
+        match tenant_spawn(
+            conf,
+            tenant_shard_id,
+            &tenant_dir_path,
+            resources.clone(),
+            AttachedTenantConf::try_from(location_conf)?,
+            shard_identity,
+            Some(init_order.clone()),
+            &TENANTS,
+            SpawnMode::Lazy,
+            &ctx,
+        ) {
+            Ok(tenant) => {
+                tenants.insert(tenant_shard_id, TenantSlot::Attached(tenant));
+            }
+            Err(e) => {
+                error!(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), "Failed to start tenant: {e:#}");
+            }
+        }
     }

     info!("Processed {} local tenants at startup", tenants.len());
@@ -706,7 +633,7 @@ pub async fn init_tenant_mgr(
 /// Wrapper for Tenant::spawn that checks invariants before running, and inserts
 /// a broken tenant in the map if Tenant::spawn fails.
 #[allow(clippy::too_many_arguments)]
-fn tenant_spawn(
+pub(crate) fn tenant_spawn(
     conf: &'static PageServerConf,
     tenant_shard_id: TenantShardId,
     tenant_path: &Utf8Path,
@@ -888,6 +815,50 @@ async fn shutdown_all_tenants0(tenants: &std::sync::RwLock<TenantsMap>) {
     // caller will log how long we took
 }

+#[derive(Debug, thiserror::Error)]
+pub(crate) enum SetNewTenantConfigError {
+    #[error(transparent)]
+    GetTenant(#[from] GetTenantError),
+    #[error(transparent)]
+    Persist(anyhow::Error),
+    #[error(transparent)]
+    Other(anyhow::Error),
+}
+
+pub(crate) async fn set_new_tenant_config(
+    conf: &'static PageServerConf,
+    new_tenant_conf: TenantConfOpt,
+    tenant_id: TenantId,
+) -> Result<(), SetNewTenantConfigError> {
+    // Legacy API: does not support sharding
+    let tenant_shard_id = TenantShardId::unsharded(tenant_id);
+    info!("configuring tenant {tenant_id}");
+    let tenant = get_tenant(tenant_shard_id, true)?;
+
+    if !tenant.tenant_shard_id().shard_count.is_unsharded() {
+        // Note that we use ShardParameters::default below.
+        return Err(SetNewTenantConfigError::Other(anyhow::anyhow!(
+            "This API may only be used on single-sharded tenants, use the /location_config API for sharded tenants"
+        )));
+    }
+
+    // This is a legacy API that only operates on attached tenants: the preferred
+    // API to use is the location_config/ endpoint, which lets the caller provide
+    // the full LocationConf.
+    let location_conf = LocationConf::attached_single(
+        new_tenant_conf.clone(),
+        tenant.generation,
+        &ShardParameters::default(),
+    );
+
+    Tenant::persist_tenant_config(conf, &tenant_shard_id, &location_conf)
+        .await
+        .map_err(SetNewTenantConfigError::Persist)?;
+    tenant.set_new_tenant_config(new_tenant_conf);
+    Ok(())
+}
+
 #[derive(thiserror::Error, Debug)]
 pub(crate) enum UpsertLocationError {
     #[error("Bad config request: {0}")]
@@ -913,21 +884,32 @@ impl TenantManager {
         self.conf
     }

-    /// Gets the attached tenant from the in-memory data, erroring if it's absent, in secondary mode, or currently
-    /// undergoing a state change (i.e. slot is InProgress).
-    ///
-    /// The return Tenant is not guaranteed to be active: check its status after obtaing it, or
-    /// use [`Tenant::wait_to_become_active`] before using it if you will do I/O on it.
+    /// Gets the attached tenant from the in-memory data, erroring if it's absent, in secondary mode, or is not fitting to the query.
+    /// `active_only = true` allows to query only tenants that are ready for operations, erroring on other kinds of tenants.
     pub(crate) fn get_attached_tenant_shard(
         &self,
         tenant_shard_id: TenantShardId,
+        active_only: bool,
     ) -> Result<Arc<Tenant>, GetTenantError> {
         let locked = self.tenants.read().unwrap();

         let peek_slot = tenant_map_peek_slot(&locked, &tenant_shard_id, TenantSlotPeekMode::Read)?;

         match peek_slot {
-            Some(TenantSlot::Attached(tenant)) => Ok(Arc::clone(tenant)),
+            Some(TenantSlot::Attached(tenant)) => match tenant.current_state() {
+                TenantState::Broken {
+                    reason,
+                    backtrace: _,
+                } if active_only => Err(GetTenantError::Broken(reason)),
+                TenantState::Active => Ok(Arc::clone(tenant)),
+                _ => {
+                    if active_only {
+                        Err(GetTenantError::NotActive(tenant_shard_id))
+                    } else {
+                        Ok(Arc::clone(tenant))
+                    }
+                }
+            },
             Some(TenantSlot::InProgress(_)) => Err(GetTenantError::NotActive(tenant_shard_id)),
             None | Some(TenantSlot::Secondary(_)) => {
                 Err(GetTenantError::NotFound(tenant_shard_id.tenant_id))
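The `active_only` branch restored on the right-hand side of this hunk is a small state-gating pattern: `Broken` and other non-`Active` states are errors only when the caller insists on an active tenant. A reduced sketch of the pattern with stand-in types (not the pageserver's real `TenantState`/`GetTenantError`):

```rust
// Stand-ins for the tenant state and error enums used in the diff.
#[derive(Clone, Debug, PartialEq)]
enum State {
    Active,
    Broken(String),
    Stopping,
}

#[derive(Debug, PartialEq)]
enum GetError {
    Broken(String),
    NotActive,
}

// Hand back the state only if it satisfies the caller's activity requirement.
fn gate(state: &State, active_only: bool) -> Result<State, GetError> {
    match state {
        // Broken is an error only when the caller asked for an active tenant;
        // the `if active_only` match guard expresses exactly that.
        State::Broken(reason) if active_only => Err(GetError::Broken(reason.clone())),
        State::Active => Ok(state.clone()),
        _ => {
            if active_only {
                Err(GetError::NotActive)
            } else {
                Ok(state.clone())
            }
        }
    }
}

fn main() {
    assert_eq!(gate(&State::Active, true), Ok(State::Active));
    assert_eq!(gate(&State::Stopping, true), Err(GetError::NotActive));
    assert_eq!(gate(&State::Stopping, false), Ok(State::Stopping));
}
```

The match guard (`State::Broken(..) if active_only`) keeps the "stuck" Broken case distinct from the retryable NotActive case, mirroring the distinction the error enum in this diff draws.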
@@ -1420,8 +1402,7 @@ impl TenantManager {
             .wait_to_become_active(activation_timeout)
             .await
             .map_err(|e| match e {
-                GetActiveTenantError::WillNotBecomeActive(_)
-                | GetActiveTenantError::Broken(_) => {
+                GetActiveTenantError::WillNotBecomeActive(_) => {
                     DeleteTenantError::InvalidState(tenant.current_state())
                 }
                 GetActiveTenantError::Cancelled => DeleteTenantError::Cancelled,
@@ -1448,30 +1429,29 @@ impl TenantManager {
         result
     }

-    #[instrument(skip_all, fields(tenant_id=%tenant.get_tenant_shard_id().tenant_id, shard_id=%tenant.get_tenant_shard_id().shard_slug(), new_shard_count=%new_shard_count.literal()))]
+    #[instrument(skip_all, fields(tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), new_shard_count=%new_shard_count.literal()))]
     pub(crate) async fn shard_split(
         &self,
-        tenant: Arc<Tenant>,
+        tenant_shard_id: TenantShardId,
         new_shard_count: ShardCount,
         new_stripe_size: Option<ShardStripeSize>,
         ctx: &RequestContext,
     ) -> anyhow::Result<Vec<TenantShardId>> {
-        let tenant_shard_id = *tenant.get_tenant_shard_id();
         let r = self
-            .do_shard_split(tenant, new_shard_count, new_stripe_size, ctx)
+            .do_shard_split(tenant_shard_id, new_shard_count, new_stripe_size, ctx)
             .await;
         if r.is_err() {
             // Shard splitting might have left the original shard in a partially shut down state (it
             // stops the shard's remote timeline client). Reset it to ensure we leave things in
             // a working state.
             if self.get(tenant_shard_id).is_some() {
-                tracing::warn!("Resetting after shard split failure");
+                tracing::warn!("Resetting {tenant_shard_id} after shard split failure");
                 if let Err(e) = self.reset_tenant(tenant_shard_id, false, ctx).await {
                     // Log this error because our return value will still be the original error, not this one. This is
                     // a severe error: if this happens, we might be leaving behind a tenant that is not fully functional
                     // (e.g. has uploads disabled). We can't do anything else: if reset fails then shutting the tenant down or
                     // setting it broken probably won't help either.
-                    tracing::error!("Failed to reset: {e}");
+                    tracing::error!("Failed to reset {tenant_shard_id}: {e}");
                 }
             }
         }
@@ -1481,12 +1461,12 @@ impl TenantManager {
     pub(crate) async fn do_shard_split(
         &self,
-        tenant: Arc<Tenant>,
+        tenant_shard_id: TenantShardId,
         new_shard_count: ShardCount,
         new_stripe_size: Option<ShardStripeSize>,
         ctx: &RequestContext,
     ) -> anyhow::Result<Vec<TenantShardId>> {
-        let tenant_shard_id = *tenant.get_tenant_shard_id();
+        let tenant = get_tenant(tenant_shard_id, true)?;

         // Validate the incoming request
         if new_shard_count.count() <= tenant_shard_id.shard_count.count() {
@@ -1532,6 +1512,7 @@ impl TenantManager {
             // If [`Tenant::split_prepare`] fails, we must reload the tenant, because it might
             // have been left in a partially-shut-down state.
             tracing::warn!("Failed to prepare for split: {e}, reloading Tenant before returning");
+            self.reset_tenant(tenant_shard_id, false, ctx).await?;
             return Err(e);
         }
@@ -1680,7 +1661,19 @@ impl TenantManager {
         let tmp_path = safe_rename_tenant_dir(&local_tenant_directory)
             .await
             .with_context(|| format!("local tenant directory {local_tenant_directory:?} rename"))?;
-        self.spawn_background_purge(tmp_path);
+        task_mgr::spawn(
+            task_mgr::BACKGROUND_RUNTIME.handle(),
+            TaskKind::MgmtRequest,
+            None,
+            None,
+            "tenant_files_delete",
+            false,
+            async move {
+                fs::remove_dir_all(tmp_path.as_path())
+                    .await
+                    .with_context(|| format!("tenant directory {:?} deletion", tmp_path))
+            },
+        );

         fail::fail_point!("shard-split-pre-finish", |_| Err(anyhow::anyhow!(
             "failpoint"
@@ -1715,9 +1708,9 @@ impl TenantManager {
                 .layers
                 .read()
                 .await
-                .likely_resident_layers()
-                .collect::<Vec<_>>();
+                .resident_layers()
+                .collect::<Vec<_>>()
+                .await;

             for layer in timeline_layers {
                 let relative_path = layer
                     .local_path()
@@ -1834,119 +1827,6 @@ impl TenantManager {
         shutdown_all_tenants0(self.tenants).await
     }

-    /// When we have moved a tenant's content to a temporary directory, we may delete it lazily in
-    /// the background, and thereby avoid blocking any API requests on this deletion completing.
-    fn spawn_background_purge(&self, tmp_path: Utf8PathBuf) {
-        // Although we are cleaning up the tenant, this task is not meant to be bound by the lifetime of the tenant in memory.
-        // After a tenant is detached, there are no more task_mgr tasks for that tenant_id.
-        let task_tenant_id = None;
-
-        task_mgr::spawn(
-            task_mgr::BACKGROUND_RUNTIME.handle(),
-            TaskKind::MgmtRequest,
-            task_tenant_id,
-            None,
-            "tenant_files_delete",
-            false,
-            async move {
-                fs::remove_dir_all(tmp_path.as_path())
-                    .await
-                    .with_context(|| format!("tenant directory {:?} deletion", tmp_path))
-            },
-        );
-    }
-
-    pub(crate) async fn detach_tenant(
-        &self,
-        conf: &'static PageServerConf,
-        tenant_shard_id: TenantShardId,
-        detach_ignored: bool,
-        deletion_queue_client: &DeletionQueueClient,
-    ) -> Result<(), TenantStateError> {
-        let tmp_path = self
-            .detach_tenant0(
-                conf,
-                &TENANTS,
-                tenant_shard_id,
-                detach_ignored,
-                deletion_queue_client,
-            )
-            .await?;
-        self.spawn_background_purge(tmp_path);
-
-        Ok(())
-    }
-
-    async fn detach_tenant0(
-        &self,
-        conf: &'static PageServerConf,
-        tenants: &std::sync::RwLock<TenantsMap>,
-        tenant_shard_id: TenantShardId,
-        detach_ignored: bool,
-        deletion_queue_client: &DeletionQueueClient,
-    ) -> Result<Utf8PathBuf, TenantStateError> {
-        let tenant_dir_rename_operation = |tenant_id_to_clean: TenantShardId| async move {
-            let local_tenant_directory = conf.tenant_path(&tenant_id_to_clean);
-            safe_rename_tenant_dir(&local_tenant_directory)
-                .await
-                .with_context(|| {
-                    format!("local tenant directory {local_tenant_directory:?} rename")
-                })
-        };
-
-        let removal_result = remove_tenant_from_memory(
-            tenants,
-            tenant_shard_id,
-            tenant_dir_rename_operation(tenant_shard_id),
-        )
-        .await;
-
-        // Flush pending deletions, so that they have a good chance of passing validation
-        // before this tenant is potentially re-attached elsewhere.
-        deletion_queue_client.flush_advisory();
-
-        // Ignored tenants are not present in memory and will bail the removal from memory operation.
-        // Before returning the error, check for ignored tenant removal case — we only need to clean its local files then.
-        if detach_ignored
-            && matches!(
-                removal_result,
-                Err(TenantStateError::SlotError(TenantSlotError::NotFound(_)))
-            )
-        {
-            let tenant_ignore_mark = conf.tenant_ignore_mark_file_path(&tenant_shard_id);
-            if tenant_ignore_mark.exists() {
-                info!("Detaching an ignored tenant");
-                let tmp_path = tenant_dir_rename_operation(tenant_shard_id)
-                    .await
-                    .with_context(|| {
-                        format!("Ignored tenant {tenant_shard_id} local directory rename")
-                    })?;
-                return Ok(tmp_path);
-            }
-        }
-
-        removal_result
-    }
-
-    pub(crate) fn list_tenants(
-        &self,
-    ) -> Result<Vec<(TenantShardId, TenantState, Generation)>, TenantMapListError> {
-        let tenants = TENANTS.read().unwrap();
-        let m = match &*tenants {
-            TenantsMap::Initializing => return Err(TenantMapListError::Initializing),
-            TenantsMap::Open(m) | TenantsMap::ShuttingDown(m) => m,
-        };
-        Ok(m.iter()
-            .filter_map(|(id, tenant)| match tenant {
-                TenantSlot::Attached(tenant) => {
-                    Some((*id, tenant.current_state(), tenant.generation()))
-                }
-                TenantSlot::Secondary(_) => None,
-                TenantSlot::InProgress(_) => None,
-            })
-            .collect())
-    }
 }

 #[derive(Debug, thiserror::Error)]
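The `spawn_background_purge` helper removed in the hunk above implements a common detach pattern: atomically rename the tenant directory to a temporary path, then delete it in the background so API requests don't block on the deletion. A minimal std-only sketch of that pattern, using `std::thread` in place of the pageserver's `task_mgr` runtime (all names here are illustrative, not the pageserver's API):

```rust
use std::fs;
use std::path::{Path, PathBuf};
use std::thread;

// Rename the directory to a temp path first: the rename is atomic, so a crash
// mid-purge leaves either the original dir or a clearly-temporary one.
fn rename_for_purge(dir: &Path) -> std::io::Result<PathBuf> {
    let tmp = dir.with_extension("___deleted");
    fs::rename(dir, &tmp)?;
    Ok(tmp)
}

// Delete the renamed directory lazily, off the caller's thread.
fn spawn_background_purge(tmp_path: PathBuf) -> thread::JoinHandle<std::io::Result<()>> {
    thread::spawn(move || fs::remove_dir_all(&tmp_path))
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("tenant_purge_demo");
    // Clean slate in case a previous run left either path behind.
    let _ = fs::remove_dir_all(&dir);
    let _ = fs::remove_dir_all(dir.with_extension("___deleted"));
    fs::create_dir_all(dir.join("timelines"))?;

    let tmp = rename_for_purge(&dir)?;
    assert!(!dir.exists()); // original path is gone immediately after the rename
    spawn_background_purge(tmp).join().unwrap()?;
    Ok(())
}
```

The caller observes the tenant directory disappear as soon as the rename returns; the potentially slow recursive delete happens off the request path, which is exactly what the `task_mgr::spawn` call in the diff achieves.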
@@ -1958,12 +1838,51 @@ pub(crate) enum GetTenantError {
     #[error("Tenant {0} is not active")]
     NotActive(TenantShardId),

+    /// Broken is logically a subset of NotActive, but a distinct error is useful as
+    /// NotActive is usually a retryable state for API purposes, whereas Broken
+    /// is a stuck error state
+    #[error("Tenant is broken: {0}")]
+    Broken(String),
+
     // Initializing or shutting down: cannot authoritatively say whether we have this tenant
     #[error("Tenant map is not available: {0}")]
     MapState(#[from] TenantMapError),
 }

+/// Gets the tenant from the in-memory data, erroring if it's absent or is not fitting to the query.
+/// `active_only = true` allows to query only tenants that are ready for operations, erroring on other kinds of tenants.
+///
+/// This method is cancel-safe.
+pub(crate) fn get_tenant(
+    tenant_shard_id: TenantShardId,
+    active_only: bool,
+) -> Result<Arc<Tenant>, GetTenantError> {
+    let locked = TENANTS.read().unwrap();
+
+    let peek_slot = tenant_map_peek_slot(&locked, &tenant_shard_id, TenantSlotPeekMode::Read)?;
+
+    match peek_slot {
+        Some(TenantSlot::Attached(tenant)) => match tenant.current_state() {
+            TenantState::Broken {
+                reason,
+                backtrace: _,
+            } if active_only => Err(GetTenantError::Broken(reason)),
+            TenantState::Active => Ok(Arc::clone(tenant)),
+            _ => {
+                if active_only {
+                    Err(GetTenantError::NotActive(tenant_shard_id))
+                } else {
+                    Ok(Arc::clone(tenant))
+                }
+            }
+        },
+        Some(TenantSlot::InProgress(_)) => Err(GetTenantError::NotActive(tenant_shard_id)),
+        None | Some(TenantSlot::Secondary(_)) => {
+            Err(GetTenantError::NotFound(tenant_shard_id.tenant_id))
+        }
+    }
+}
+
 #[derive(thiserror::Error, Debug)]
 pub(crate) enum GetActiveTenantError {
     /// We may time out either while TenantSlot is InProgress, or while the Tenant
@@ -1987,12 +1906,6 @@ pub(crate) enum GetActiveTenantError {
     /// Tenant exists, but is in a state that cannot become active (e.g. Stopping, Broken)
     #[error("will not become active. Current state: {0}")]
     WillNotBecomeActive(TenantState),
-
-    /// Broken is logically a subset of WillNotBecomeActive, but a distinct error is useful as
-    /// WillNotBecomeActive is a permitted error under some circumstances, whereas broken should
-    /// never happen.
-    #[error("Tenant is broken: {0}")]
-    Broken(String),
 }

 /// Get a [`Tenant`] in its active state. If the tenant_id is currently in [`TenantSlot::InProgress`]
@@ -2115,6 +2028,87 @@ pub(crate) enum TenantStateError {
     Other(#[from] anyhow::Error),
 }

+pub(crate) async fn detach_tenant(
+    conf: &'static PageServerConf,
+    tenant_shard_id: TenantShardId,
+    detach_ignored: bool,
+    deletion_queue_client: &DeletionQueueClient,
+) -> Result<(), TenantStateError> {
+    let tmp_path = detach_tenant0(
+        conf,
+        &TENANTS,
+        tenant_shard_id,
+        detach_ignored,
+        deletion_queue_client,
+    )
+    .await?;
+    // Although we are cleaning up the tenant, this task is not meant to be bound by the lifetime of the tenant in memory.
+    // After a tenant is detached, there are no more task_mgr tasks for that tenant_id.
+    let task_tenant_id = None;
+    task_mgr::spawn(
+        task_mgr::BACKGROUND_RUNTIME.handle(),
+        TaskKind::MgmtRequest,
+        task_tenant_id,
+        None,
+        "tenant_files_delete",
+        false,
+        async move {
+            fs::remove_dir_all(tmp_path.as_path())
+                .await
+                .with_context(|| format!("tenant directory {:?} deletion", tmp_path))
+        },
+    );
+    Ok(())
+}
+
+async fn detach_tenant0(
+    conf: &'static PageServerConf,
+    tenants: &std::sync::RwLock<TenantsMap>,
+    tenant_shard_id: TenantShardId,
+    detach_ignored: bool,
+    deletion_queue_client: &DeletionQueueClient,
+) -> Result<Utf8PathBuf, TenantStateError> {
+    let tenant_dir_rename_operation = |tenant_id_to_clean: TenantShardId| async move {
+        let local_tenant_directory = conf.tenant_path(&tenant_id_to_clean);
+        safe_rename_tenant_dir(&local_tenant_directory)
+            .await
+            .with_context(|| format!("local tenant directory {local_tenant_directory:?} rename"))
+    };
+
+    let removal_result = remove_tenant_from_memory(
+        tenants,
+        tenant_shard_id,
+        tenant_dir_rename_operation(tenant_shard_id),
+    )
+    .await;
+
+    // Flush pending deletions, so that they have a good chance of passing validation
+    // before this tenant is potentially re-attached elsewhere.
+    deletion_queue_client.flush_advisory();
+
+    // Ignored tenants are not present in memory and will bail the removal from memory operation.
+    // Before returning the error, check for ignored tenant removal case — we only need to clean its local files then.
+    if detach_ignored
+        && matches!(
+            removal_result,
+            Err(TenantStateError::SlotError(TenantSlotError::NotFound(_)))
+        )
+    {
+        let tenant_ignore_mark = conf.tenant_ignore_mark_file_path(&tenant_shard_id);
+        if tenant_ignore_mark.exists() {
+            info!("Detaching an ignored tenant");
+            let tmp_path = tenant_dir_rename_operation(tenant_shard_id)
+                .await
+                .with_context(|| {
+                    format!("Ignored tenant {tenant_shard_id} local directory rename")
+                })?;
+            return Ok(tmp_path);
+        }
+    }
+
+    removal_result
+}
+
 pub(crate) async fn load_tenant(
     conf: &'static PageServerConf,
     tenant_id: TenantId,
@@ -2148,7 +2142,7 @@ pub(crate) async fn load_tenant(
     let mut location_conf =
         Tenant::load_tenant_config(conf, &tenant_shard_id).map_err(TenantMapInsertError::Other)?;
-    location_conf.attach_in_generation(AttachmentMode::Single, generation);
+    location_conf.attach_in_generation(generation);

     Tenant::persist_tenant_config(conf, &tenant_shard_id, &location_conf).await?;
@@ -2212,6 +2206,27 @@ pub(crate) enum TenantMapListError {
     Initializing,
 }

+///
+/// Get list of tenants, for the mgmt API
+///
+pub(crate) async fn list_tenants(
+) -> Result<Vec<(TenantShardId, TenantState, Generation)>, TenantMapListError> {
+    let tenants = TENANTS.read().unwrap();
+    let m = match &*tenants {
+        TenantsMap::Initializing => return Err(TenantMapListError::Initializing),
+        TenantsMap::Open(m) | TenantsMap::ShuttingDown(m) => m,
+    };
+    Ok(m.iter()
+        .filter_map(|(id, tenant)| match tenant {
+            TenantSlot::Attached(tenant) => {
+                Some((*id, tenant.current_state(), tenant.generation()))
+            }
+            TenantSlot::Secondary(_) => None,
+            TenantSlot::InProgress(_) => None,
+        })
+        .collect())
+}
+
 #[derive(Debug, thiserror::Error)]
 pub(crate) enum TenantMapInsertError {
     #[error(transparent)]
@@ -2714,7 +2729,7 @@ use {
     utils::http::error::ApiError,
 };

-pub(crate) fn immediate_gc(
+pub(crate) async fn immediate_gc(
     tenant_shard_id: TenantShardId,
     timeline_id: TimelineId,
     gc_req: TimelineGcRequest,
@@ -2736,8 +2751,6 @@ pub(crate) fn immediate_gc(
     // Run in task_mgr to avoid race with tenant_detach operation
     let ctx = ctx.detached_child(TaskKind::GarbageCollector, DownloadBehavior::Download);
     let (task_done, wait_task_done) = tokio::sync::oneshot::channel();
-    let span = info_span!("manual_gc", tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), %timeline_id);
-
     // TODO: spawning is redundant now, need to hold the gate
     task_mgr::spawn(
         &tokio::runtime::Handle::current(),
@@ -2752,15 +2765,16 @@ pub(crate) fn immediate_gc(
             #[allow(unused_mut)]
             let mut result = tenant
                 .gc_iteration(Some(timeline_id), gc_horizon, pitr, &cancel, &ctx)
+                .instrument(info_span!("manual_gc", tenant_id=%tenant_shard_id.tenant_id, shard_id=%tenant_shard_id.shard_slug(), %timeline_id))
                 .await;

             // FIXME: `gc_iteration` can return an error for multiple reasons; we should handle it
             // better once the types support it.
             #[cfg(feature = "testing")]
             {
-                // we need to synchronize with drop completion for python tests without polling for
-                // log messages
                 if let Ok(result) = result.as_mut() {
+                    // why not futures unordered? it seems it needs very much the same task structure
+                    // but would only run on single task.
                     let mut js = tokio::task::JoinSet::new();
                     for layer in std::mem::take(&mut result.doomed_layers) {
                         js.spawn(layer.wait_drop());
@@ -2776,7 +2790,7 @@ pub(crate) fn immediate_gc(
if let Some(rtc) = rtc {
// layer drops schedule actions on remote timeline client to actually do the
-// deletions; don't care about the shutdown error, just exit fast
+// deletions; don't care just exit fast about the shutdown error
drop(rtc.wait_completion().await);
}
}
@@ -2787,7 +2801,6 @@ pub(crate) fn immediate_gc(
}
Ok(())
}
-.instrument(span)
);
// drop the guard until after we've spawned the task so that timeline shutdown will wait for the task


@@ -217,7 +217,7 @@ use crate::task_mgr::shutdown_token;
use crate::tenant::debug_assert_current_span_has_tenant_and_timeline_id;
use crate::tenant::remote_timeline_client::download::download_retry;
use crate::tenant::storage_layer::AsLayerDesc;
-use crate::tenant::upload_queue::{Delete, UploadQueueStoppedDeletable};
+use crate::tenant::upload_queue::Delete;
use crate::tenant::TIMELINES_SEGMENT_NAME;
use crate::{
config::PageServerConf,
@@ -266,6 +266,15 @@ pub enum MaybeDeletedIndexPart {
Deleted(IndexPart),
}
+/// Errors that can arise when calling [`RemoteTimelineClient::stop`].
+#[derive(Debug, thiserror::Error)]
+pub enum StopError {
+/// Returned if the upload queue was never initialized.
+/// See [`RemoteTimelineClient::init_upload_queue`] and [`RemoteTimelineClient::init_upload_queue_for_empty_remote`].
+#[error("queue is not initialized")]
+QueueUninitialized,
+}
#[derive(Debug, thiserror::Error)]
pub enum PersistIndexPartWithDeletedFlagError {
#[error("another task is already setting the deleted_flag, started at {0:?}")]
@@ -390,10 +399,15 @@ impl RemoteTimelineClient {
"bug: it is responsibility of the caller to provide index part from MaybeDeletedIndexPart::Deleted"
))?;
+{
+let mut upload_queue = self.upload_queue.lock().unwrap();
+upload_queue.initialize_with_current_remote_index_part(index_part)?;
+self.update_remote_physical_size_gauge(Some(index_part));
+}
+// also locks upload queue, without dropping the guard above it will be a deadlock
+self.stop().expect("initialized line above");
let mut upload_queue = self.upload_queue.lock().unwrap();
-upload_queue.initialize_with_current_remote_index_part(index_part)?;
-self.update_remote_physical_size_gauge(Some(index_part));
-self.stop_impl(&mut upload_queue);
upload_queue
.stopped_mut()
@@ -407,8 +421,7 @@ impl RemoteTimelineClient {
match &mut *self.upload_queue.lock().unwrap() {
UploadQueue::Uninitialized => None,
UploadQueue::Initialized(q) => q.get_last_remote_consistent_lsn_projected(),
-UploadQueue::Stopped(UploadQueueStopped::Uninitialized) => None,
-UploadQueue::Stopped(UploadQueueStopped::Deletable(q)) => q
+UploadQueue::Stopped(q) => q
.upload_queue_for_deletion
.get_last_remote_consistent_lsn_projected(),
}
@@ -418,8 +431,7 @@ impl RemoteTimelineClient {
match &mut *self.upload_queue.lock().unwrap() {
UploadQueue::Uninitialized => None,
UploadQueue::Initialized(q) => Some(q.get_last_remote_consistent_lsn_visible()),
-UploadQueue::Stopped(UploadQueueStopped::Uninitialized) => None,
-UploadQueue::Stopped(UploadQueueStopped::Deletable(q)) => Some(
+UploadQueue::Stopped(q) => Some(
q.upload_queue_for_deletion
.get_last_remote_consistent_lsn_visible(),
),
@@ -886,7 +898,7 @@ impl RemoteTimelineClient {
/// Wait for all previously scheduled operations to complete, and then stop.
///
/// Not cancellation safe
-pub(crate) async fn shutdown(self: &Arc<Self>) {
+pub(crate) async fn shutdown(self: &Arc<Self>) -> Result<(), StopError> {
// On cancellation the queue is left in ackward state of refusing new operations but
// proper stop is yet to be called. On cancel the original or some later task must call
// `stop` or `shutdown`.
@@ -897,12 +909,8 @@ impl RemoteTimelineClient {
let fut = {
let mut guard = self.upload_queue.lock().unwrap();
let upload_queue = match &mut *guard {
-UploadQueue::Stopped(_) => return,
-UploadQueue::Uninitialized => {
-// transition into Stopped state
-self.stop_impl(&mut guard);
-return;
-}
+UploadQueue::Stopped(_) => return Ok(()),
+UploadQueue::Uninitialized => return Err(StopError::QueueUninitialized),
UploadQueue::Initialized(ref mut init) => init,
};
@@ -934,7 +942,7 @@ impl RemoteTimelineClient {
}
}
-self.stop();
+self.stop()
}
/// Set the deleted_at field in the remote index file.
@@ -1316,7 +1324,12 @@ impl RemoteTimelineClient {
// upload finishes or times out soon enough.
if cancel.is_cancelled() {
info!("upload task cancelled by shutdown request");
-self.stop();
+match self.stop() {
+Ok(()) => {}
+Err(StopError::QueueUninitialized) => {
+unreachable!("we never launch an upload task if the queue is uninitialized, and once it is initialized, we never go back")
+}
+}
return;
}
@@ -1571,23 +1584,17 @@ impl RemoteTimelineClient {
/// In-progress operations will still be running after this function returns.
/// Use `task_mgr::shutdown_tasks(None, Some(self.tenant_id), Some(timeline_id))`
/// to wait for them to complete, after calling this function.
-pub(crate) fn stop(&self) {
+pub(crate) fn stop(&self) -> Result<(), StopError> {
// Whichever *task* for this RemoteTimelineClient grabs the mutex first will transition the queue
// into stopped state, thereby dropping all off the queued *ops* which haven't become *tasks* yet.
// The other *tasks* will come here and observe an already shut down queue and hence simply wrap up their business.
let mut guard = self.upload_queue.lock().unwrap();
-self.stop_impl(&mut guard);
-}
-fn stop_impl(&self, guard: &mut std::sync::MutexGuard<UploadQueue>) {
-match &mut **guard {
-UploadQueue::Uninitialized => {
-info!("UploadQueue is in state Uninitialized, nothing to do");
-**guard = UploadQueue::Stopped(UploadQueueStopped::Uninitialized);
-}
+match &mut *guard {
+UploadQueue::Uninitialized => Err(StopError::QueueUninitialized),
UploadQueue::Stopped(_) => {
// nothing to do
info!("another concurrent task already shut down the queue");
+Ok(())
}
UploadQueue::Initialized(initialized) => {
info!("shutting down upload queue");
@@ -1620,13 +1627,11 @@ impl RemoteTimelineClient {
};
let upload_queue = std::mem::replace(
-&mut **guard,
-UploadQueue::Stopped(UploadQueueStopped::Deletable(
-UploadQueueStoppedDeletable {
-upload_queue_for_deletion,
-deleted_at: SetDeletedFlagProgress::NotRunning,
-},
-)),
+&mut *guard,
+UploadQueue::Stopped(UploadQueueStopped {
+upload_queue_for_deletion,
+deleted_at: SetDeletedFlagProgress::NotRunning,
+}),
);
if let UploadQueue::Initialized(qi) = upload_queue {
qi
@@ -1655,6 +1660,10 @@ impl RemoteTimelineClient {
// which is exactly what we want to happen.
drop(op);
}
+// We're done.
+drop(guard);
+Ok(())
}
}
}
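The `stop`/`StopError` hunks above revolve around one idea: whichever task grabs the upload-queue mutex first transitions it to `Stopped`, dropping queued operations that never became tasks, and every later caller observes `Stopped` and does nothing. A minimal std-only sketch of that idempotent shutdown pattern (the `UploadQueue`/`Client` names here are simplified stand-ins, not the actual neon types):

```rust
use std::sync::Mutex;

// Simplified stand-in for the pageserver's upload queue states.
enum UploadQueue {
    Uninitialized,
    Initialized(Vec<&'static str>), // queued ops that have not become tasks yet
    Stopped,
}

struct Client {
    upload_queue: Mutex<UploadQueue>,
}

impl Client {
    /// First caller transitions to Stopped and drops queued ops;
    /// later callers observe Stopped and simply wrap up (idempotent).
    fn stop(&self) {
        let mut guard = self.upload_queue.lock().unwrap();
        // take the old state out, replacing it with Stopped
        let old = std::mem::replace(&mut *guard, UploadQueue::Stopped);
        if let UploadQueue::Initialized(ops) = old {
            drop(ops); // queued ops which never became tasks are dropped here
        }
    }

    fn is_stopped(&self) -> bool {
        matches!(*self.upload_queue.lock().unwrap(), UploadQueue::Stopped)
    }
}

fn main() {
    let c = Client {
        upload_queue: Mutex::new(UploadQueue::Initialized(vec!["upload_layer"])),
    };
    c.stop();
    c.stop(); // second call is a no-op
    assert!(c.is_stopped());
}
```

The real code additionally keeps an `upload_queue_for_deletion` alive inside the `Stopped` state so that timeline deletion can still schedule remote deletes after shutdown.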


@@ -15,7 +15,6 @@ use crate::{
tenant::{
config::SecondaryLocationConfig,
debug_assert_current_span_has_tenant_and_timeline_id,
-ephemeral_file::is_ephemeral_file,
remote_timeline_client::{
index::LayerFileMetadata, is_temp_download_file, FAILED_DOWNLOAD_WARN_THRESHOLD,
FAILED_REMOTE_OP_RETRIES,
@@ -535,11 +534,7 @@ impl<'a> TenantDownloader<'a> {
.await
.maybe_fatal_err(&context_msg)?;
-tracing::debug!(
-"Wrote local heatmap to {}, with {} timelines",
-heatmap_path,
-heatmap.timelines.len()
-);
+tracing::debug!("Wrote local heatmap to {}", heatmap_path);
// Clean up any local layers that aren't in the heatmap. We do this first for all timelines, on the general
// principle that deletions should be done before writes wherever possible, and so that we can use this
@@ -552,10 +547,6 @@ impl<'a> TenantDownloader<'a> {
// Download the layers in the heatmap
for timeline in heatmap.timelines {
if self.secondary_state.cancel.is_cancelled() {
-tracing::debug!(
-"Cancelled before downloading timeline {}",
-timeline.timeline_id
-);
return Ok(());
}
@@ -773,13 +764,10 @@ impl<'a> TenantDownloader<'a> {
}
};
-tracing::debug!(timeline_id=%timeline.timeline_id, "Downloading layers, {} in heatmap", timeline.layers.len());
// Download heatmap layers that are not present on local disk, or update their
// access time if they are already present.
for layer in timeline.layers {
if self.secondary_state.cancel.is_cancelled() {
-tracing::debug!("Cancelled -- dropping out of layer loop");
return Ok(());
}
@@ -962,10 +950,7 @@ async fn init_timeline_state(
// Secondary mode doesn't use local metadata files, but they might have been left behind by an attached tenant.
warn!(path=?dentry.path(), "found legacy metadata file, these should have been removed in load_tenant_config");
continue;
-} else if crate::is_temporary(&file_path)
-|| is_temp_download_file(&file_path)
-|| is_ephemeral_file(file_name)
-{
+} else if crate::is_temporary(&file_path) || is_temp_download_file(&file_path) {
// Temporary files are frequently left behind from restarting during downloads
tracing::info!("Cleaning up temporary file {file_path}");
if let Err(e) = tokio::fs::remove_file(&file_path)


@@ -9,7 +9,6 @@ use crate::{
metrics::SECONDARY_MODE,
tenant::{
config::AttachmentMode,
-mgr::GetTenantError,
mgr::TenantManager,
remote_timeline_client::remote_heatmap_path,
span::debug_assert_current_span_has_tenant_id,
@@ -293,11 +292,8 @@ impl JobGenerator<UploadPending, WriteInProgress, WriteComplete, UploadCommand>
"Starting heatmap write on command");
let tenant = self
.tenant_manager
-.get_attached_tenant_shard(*tenant_shard_id)
+.get_attached_tenant_shard(*tenant_shard_id, true)
.map_err(|e| anyhow::anyhow!(e))?;
-if !tenant.is_active() {
-return Err(GetTenantError::NotActive(*tenant_shard_id).into());
-}
Ok(UploadPending {
// Ignore our state for last digest: this forces an upload even if nothing has changed


@@ -300,7 +300,6 @@ where
let tenant_shard_id = job.get_tenant_shard_id();
let barrier = if let Some(barrier) = self.get_running(tenant_shard_id) {
-tracing::info!("Command already running, waiting for it");
barrier
} else {
let running = self.spawn_now(job);


@@ -3,7 +3,7 @@
pub mod delta_layer;
mod filename;
pub mod image_layer;
-pub(crate) mod inmemory_layer;
+mod inmemory_layer;
pub(crate) mod layer;
mod layer_desc;


@@ -23,12 +23,8 @@ use tracing::*;
use utils::{bin_ser::BeSer, id::TimelineId, lsn::Lsn, vec_map::VecMap};
// avoid binding to Write (conflicts with std::io::Write)
// while being able to use std::fmt::Write's methods
-use crate::metrics::TIMELINE_EPHEMERAL_BYTES;
-use std::cmp::Ordering;
use std::fmt::Write as _;
use std::ops::Range;
-use std::sync::atomic::Ordering as AtomicOrdering;
-use std::sync::atomic::{AtomicU64, AtomicUsize};
use tokio::sync::{RwLock, RwLockWriteGuard};
use super::{
@@ -74,8 +70,6 @@ pub struct InMemoryLayerInner {
/// Each serialized Value is preceded by a 'u32' length field.
/// PerSeg::page_versions map stores offsets into this file.
file: EphemeralFile,
-resource_units: GlobalResourceUnits,
}
impl std::fmt::Debug for InMemoryLayerInner {
@@ -84,121 +78,6 @@ impl std::fmt::Debug for InMemoryLayerInner {
}
}
/// State shared by all in-memory (ephemeral) layers. Updated infrequently during background ticks in Timeline,
/// to minimize contention.
///
/// This global state is used to implement behaviors that require a global view of the system, e.g.
/// rolling layers proactively to limit the total amount of dirty data.
pub(crate) struct GlobalResources {
// Limit on how high dirty_bytes may grow before we start freezing layers to reduce it.
// Zero means unlimited.
pub(crate) max_dirty_bytes: AtomicU64,
// How many bytes are in all EphemeralFile objects
dirty_bytes: AtomicU64,
// How many layers are contributing to dirty_bytes
dirty_layers: AtomicUsize,
}
// Per-timeline RAII struct for its contribution to [`GlobalResources`]
struct GlobalResourceUnits {
// How many dirty bytes have I added to the global dirty_bytes: this guard object is responsible
// for decrementing the global counter by this many bytes when dropped.
dirty_bytes: u64,
}
impl GlobalResourceUnits {
// Hint for the layer append path to update us when the layer size differs from the last
// call to update_size by this much. If we don't reach this threshold, we'll still get
// updated when the Timeline "ticks" in the background.
const MAX_SIZE_DRIFT: u64 = 10 * 1024 * 1024;
fn new() -> Self {
GLOBAL_RESOURCES
.dirty_layers
.fetch_add(1, AtomicOrdering::Relaxed);
Self { dirty_bytes: 0 }
}
/// Do not call this frequently: all timelines will write to these same global atomics,
/// so this is a relatively expensive operation. Wait at least a few seconds between calls.
///
/// Returns the effective layer size limit that should be applied, if any, to keep
/// the total number of dirty bytes below the configured maximum.
fn publish_size(&mut self, size: u64) -> Option<u64> {
let new_global_dirty_bytes = match size.cmp(&self.dirty_bytes) {
Ordering::Equal => GLOBAL_RESOURCES.dirty_bytes.load(AtomicOrdering::Relaxed),
Ordering::Greater => {
let delta = size - self.dirty_bytes;
let old = GLOBAL_RESOURCES
.dirty_bytes
.fetch_add(delta, AtomicOrdering::Relaxed);
old + delta
}
Ordering::Less => {
let delta = self.dirty_bytes - size;
let old = GLOBAL_RESOURCES
.dirty_bytes
.fetch_sub(delta, AtomicOrdering::Relaxed);
old - delta
}
};
// This is a sloppy update: concurrent updates to the counter will race, and the exact
// value of the metric might not be the exact latest value of GLOBAL_RESOURCES::dirty_bytes.
// That's okay: as long as the metric contains some recent value, it doesn't have to always
// be literally the last update.
TIMELINE_EPHEMERAL_BYTES.set(new_global_dirty_bytes);
self.dirty_bytes = size;
let max_dirty_bytes = GLOBAL_RESOURCES
.max_dirty_bytes
.load(AtomicOrdering::Relaxed);
if max_dirty_bytes > 0 && new_global_dirty_bytes > max_dirty_bytes {
// Set the layer file limit to the average layer size: this implies that all above-average
// sized layers will be elegible for freezing. They will be frozen in the order they
// next enter publish_size.
Some(
new_global_dirty_bytes
/ GLOBAL_RESOURCES.dirty_layers.load(AtomicOrdering::Relaxed) as u64,
)
} else {
None
}
}
// Call publish_size if the input size differs from last published size by more than
// the drift limit
fn maybe_publish_size(&mut self, size: u64) {
let publish = match size.cmp(&self.dirty_bytes) {
Ordering::Equal => false,
Ordering::Greater => size - self.dirty_bytes > Self::MAX_SIZE_DRIFT,
Ordering::Less => self.dirty_bytes - size > Self::MAX_SIZE_DRIFT,
};
if publish {
self.publish_size(size);
}
}
}
impl Drop for GlobalResourceUnits {
fn drop(&mut self) {
GLOBAL_RESOURCES
.dirty_layers
.fetch_sub(1, AtomicOrdering::Relaxed);
// Subtract our contribution to the global total dirty bytes
self.publish_size(0);
}
}
pub(crate) static GLOBAL_RESOURCES: GlobalResources = GlobalResources {
max_dirty_bytes: AtomicU64::new(0),
dirty_bytes: AtomicU64::new(0),
dirty_layers: AtomicUsize::new(0),
};
impl InMemoryLayer {
pub(crate) fn get_timeline_id(&self) -> TimelineId {
self.timeline_id
@@ -214,10 +93,6 @@ impl InMemoryLayer {
}
}
-pub(crate) fn try_len(&self) -> Option<u64> {
-self.inner.try_read().map(|i| i.file.len()).ok()
-}
pub(crate) fn assert_writable(&self) {
assert!(self.end_lsn.get().is_none());
}
@@ -453,7 +328,6 @@ impl InMemoryLayer {
inner: RwLock::new(InMemoryLayerInner {
index: HashMap::new(),
file,
-resource_units: GlobalResourceUnits::new(),
}),
})
}
@@ -504,18 +378,9 @@ impl InMemoryLayer {
warn!("Key {} at {} already exists", key, lsn);
}
-let size = locked_inner.file.len();
-locked_inner.resource_units.maybe_publish_size(size);
Ok(())
}
-pub(crate) async fn tick(&self) -> Option<u64> {
-let mut inner = self.inner.write().await;
-let size = inner.file.len();
-inner.resource_units.publish_size(size)
-}
pub(crate) async fn put_tombstones(&self, _key_ranges: &[(Range<Key>, Lsn)]) -> Result<()> {
// TODO: Currently, we just leak the storage for any deleted keys
Ok(())
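The `GlobalResources`/`GlobalResourceUnits` code in this diff implements a simple scheme: each in-memory layer holds an RAII guard that publishes its size delta into process-wide atomic counters, and the guard's `Drop` subtracts its contribution again. A minimal std-only sketch of that accounting pattern (names simplified; the real code also throttles updates via `MAX_SIZE_DRIFT` and exports the total as a metric):

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

// Process-wide totals, shared by all layers.
static DIRTY_BYTES: AtomicU64 = AtomicU64::new(0);
static DIRTY_LAYERS: AtomicUsize = AtomicUsize::new(0);

/// RAII guard tracking one layer's contribution to the global dirty-bytes total.
struct Units {
    dirty_bytes: u64,
}

impl Units {
    fn new() -> Self {
        DIRTY_LAYERS.fetch_add(1, Ordering::Relaxed);
        Units { dirty_bytes: 0 }
    }

    /// Publish this layer's current size; returns the new global total.
    /// Only the delta since the last publish is applied to the shared counter.
    fn publish_size(&mut self, size: u64) -> u64 {
        let new_total = if size >= self.dirty_bytes {
            let delta = size - self.dirty_bytes;
            DIRTY_BYTES.fetch_add(delta, Ordering::Relaxed) + delta
        } else {
            let delta = self.dirty_bytes - size;
            DIRTY_BYTES.fetch_sub(delta, Ordering::Relaxed) - delta
        };
        self.dirty_bytes = size;
        new_total
    }
}

impl Drop for Units {
    fn drop(&mut self) {
        self.publish_size(0); // subtract our remaining contribution
        DIRTY_LAYERS.fetch_sub(1, Ordering::Relaxed);
    }
}

fn main() {
    let mut a = Units::new();
    let mut b = Units::new();
    assert_eq!(a.publish_size(100), 100);
    assert_eq!(b.publish_size(50), 150);
    drop(a); // a's 100 bytes are subtracted on drop
    assert_eq!(DIRTY_BYTES.load(Ordering::Relaxed), 50);
}
```

Relaxed ordering suffices here because the counters are advisory: concurrent updates may race, but any recent value is good enough to decide whether to start freezing layers.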

File diff suppressed because it is too large


@@ -1,119 +0,0 @@
//! failpoints for unit tests, implying `#[cfg(test)]`.
//!
//! These are not accessible over http.
use super::*;
impl Layer {
/// Enable a failpoint from a unit test.
pub(super) fn enable_failpoint(&self, failpoint: Failpoint) {
self.0.failpoints.lock().unwrap().push(failpoint);
}
}
impl LayerInner {
/// Query if this failpoint is enabled, as in, arrive at a failpoint.
///
/// Calls to this method need to be `#[cfg(test)]` guarded.
pub(super) async fn failpoint(&self, kind: FailpointKind) -> Result<(), FailpointHit> {
let fut = {
let mut fps = self.failpoints.lock().unwrap();
// find the *last* failpoint for cases in which we need to use multiple for the same
// thing (two blocked evictions)
let fp = fps.iter_mut().rfind(|x| x.kind() == kind);
let Some(fp) = fp else {
return Ok(());
};
fp.hit()
};
fut.await
}
}
#[derive(Debug, PartialEq, Eq)]
pub(crate) enum FailpointKind {
/// Failpoint acts as an accurate cancelled by drop here; see the only site of use.
AfterDeterminingLayerNeedsNoDownload,
/// Failpoint for stalling eviction starting
WaitBeforeStartingEvicting,
/// Failpoint hit in the spawned task
WaitBeforeDownloading,
}
pub(crate) enum Failpoint {
AfterDeterminingLayerNeedsNoDownload,
WaitBeforeStartingEvicting(
Option<utils::completion::Completion>,
utils::completion::Barrier,
),
WaitBeforeDownloading(
Option<utils::completion::Completion>,
utils::completion::Barrier,
),
}
impl Failpoint {
fn kind(&self) -> FailpointKind {
match self {
Failpoint::AfterDeterminingLayerNeedsNoDownload => {
FailpointKind::AfterDeterminingLayerNeedsNoDownload
}
Failpoint::WaitBeforeStartingEvicting(..) => FailpointKind::WaitBeforeStartingEvicting,
Failpoint::WaitBeforeDownloading(..) => FailpointKind::WaitBeforeDownloading,
}
}
fn hit(&mut self) -> impl std::future::Future<Output = Result<(), FailpointHit>> + 'static {
use futures::future::FutureExt;
// use boxed futures to avoid Either hurdles
match self {
Failpoint::AfterDeterminingLayerNeedsNoDownload => {
let kind = self.kind();
async move { Err(FailpointHit(kind)) }.boxed()
}
Failpoint::WaitBeforeStartingEvicting(arrival, b)
| Failpoint::WaitBeforeDownloading(arrival, b) => {
// first one signals arrival
drop(arrival.take());
let b = b.clone();
async move {
tracing::trace!("waiting on a failpoint barrier");
b.wait().await;
tracing::trace!("done waiting on a failpoint barrier");
Ok(())
}
.boxed()
}
}
}
}
impl std::fmt::Display for FailpointKind {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
std::fmt::Debug::fmt(self, f)
}
}
#[derive(Debug)]
pub(crate) struct FailpointHit(FailpointKind);
impl std::fmt::Display for FailpointHit {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
std::fmt::Debug::fmt(self, f)
}
}
impl std::error::Error for FailpointHit {}
impl From<FailpointHit> for DownloadError {
fn from(value: FailpointHit) -> Self {
DownloadError::Failpoint(value.0)
}
}
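The `Failpoint::hit` implementation above uses an "arrival plus barrier" handshake: the first hit signals arrival exactly once (by dropping the `Completion`), then parks on a `Barrier` until the test releases it. A minimal std-only sketch of the same handshake using channels instead of the tokio/`utils::completion` primitives (the `Failpoint` struct here is a simplified stand-in):

```rust
use std::sync::mpsc;
use std::thread;

// arrival: one-shot signal that the failpoint was reached;
// release: the test side lets the blocked task continue.
struct Failpoint {
    arrival: Option<mpsc::Sender<()>>,
    release: mpsc::Receiver<()>,
}

impl Failpoint {
    fn hit(&mut self) {
        if let Some(tx) = self.arrival.take() {
            let _ = tx.send(()); // signal arrival exactly once
        }
        let _ = self.release.recv(); // park until the test releases us
    }
}

fn main() {
    let (arrive_tx, arrive_rx) = mpsc::channel();
    let (release_tx, release_rx) = mpsc::channel();
    let mut fp = Failpoint {
        arrival: Some(arrive_tx),
        release: release_rx,
    };

    let worker = thread::spawn(move || {
        fp.hit(); // blocks here until released
        "done"
    });

    arrive_rx.recv().unwrap(); // test: wait until worker reached the failpoint
    release_tx.send(()).unwrap(); // test: unblock it
    assert_eq!(worker.join().unwrap(), "done");
}
```

This lets a test deterministically advance a task to a known await point, assert on intermediate state, and then let it proceed, instead of polling for log messages.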


@@ -1,13 +1,14 @@
+use futures::StreamExt;
use pageserver_api::key::CONTROLFILE_KEY;
use tokio::task::JoinSet;
+use tracing::Instrument;
use utils::{
completion::{self, Completion},
id::TimelineId,
};
-use super::failpoints::{Failpoint, FailpointKind};
use super::*;
-use crate::context::DownloadBehavior;
+use crate::{context::DownloadBehavior, task_mgr::BACKGROUND_RUNTIME};
use crate::{task_mgr::TaskKind, tenant::harness::TenantHarness};
/// Used in tests to advance a future to wanted await point, and not futher.
@@ -20,7 +21,7 @@ const FOREVER: std::time::Duration = std::time::Duration::from_secs(ADVANCE.as_s
/// Demonstrate the API and resident -> evicted -> resident -> deleted transitions.
#[tokio::test]
async fn smoke_test() {
-let handle = tokio::runtime::Handle::current();
+let handle = BACKGROUND_RUNTIME.handle();
let h = TenantHarness::create("smoke_test").unwrap();
let span = h.span();
@@ -37,7 +38,7 @@ async fn smoke_test() {
let layer = {
let mut layers = {
let layers = timeline.layers.read().await;
-layers.likely_resident_layers().collect::<Vec<_>>()
+layers.resident_layers().collect::<Vec<_>>().await
};
assert_eq!(layers.len(), 1);
@@ -87,7 +88,7 @@ async fn smoke_test() {
//
// ZERO for timeout does not work reliably, so first take up all spawn_blocking slots to
// artificially slow it down.
-let helper = SpawnBlockingPoolHelper::consume_all_spawn_blocking_threads(&handle).await;
+let helper = SpawnBlockingPoolHelper::consume_all_spawn_blocking_threads(handle).await;
match layer
.evict_and_wait(std::time::Duration::ZERO)
@@ -98,7 +99,7 @@ async fn smoke_test() {
// expected, but note that the eviction is "still ongoing"
helper.release().await;
// exhaust spawn_blocking pool to ensure it is now complete
-SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(&handle)
+SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(handle)
.await;
}
other => unreachable!("{other:?}"),
@@ -107,7 +108,7 @@ async fn smoke_test() {
// only way to query if a layer is resident is to acquire a ResidentLayer instance.
// Layer::keep_resident never downloads, but it might initialize if the layer file is found
// downloaded locally.
-let none = layer.keep_resident().await;
+let none = layer.keep_resident().await.unwrap();
assert!(
none.is_none(),
"Expected none, because eviction removed the local file, found: {none:?}"
@@ -166,7 +167,6 @@ async fn smoke_test() {
rtc.wait_completion().await.unwrap();
assert_eq!(rtc.get_remote_physical_size(), 0);
-assert_eq!(0, LAYER_IMPL_METRICS.inits_cancelled.get())
}
/// This test demonstrates a previous hang when a eviction and deletion were requested at the same /// This test demonstrates a previous hang when a eviction and deletion were requested at the same
@@ -174,7 +174,7 @@ async fn smoke_test() {
#[tokio::test(start_paused = true)]
async fn evict_and_wait_on_wanted_deleted() {
// this is the runtime on which Layer spawns the blocking tasks on
-let handle = tokio::runtime::Handle::current();
+let handle = BACKGROUND_RUNTIME.handle();
let h = TenantHarness::create("evict_and_wait_on_wanted_deleted").unwrap();
utils::logging::replace_panic_hook_with_tracing_panic_hook().forget();
@@ -188,7 +188,7 @@ async fn evict_and_wait_on_wanted_deleted() {
let layer = {
let mut layers = {
let layers = timeline.layers.read().await;
-layers.likely_resident_layers().collect::<Vec<_>>()
+layers.resident_layers().collect::<Vec<_>>().await
};
assert_eq!(layers.len(), 1);
@@ -213,11 +213,11 @@ async fn evict_and_wait_on_wanted_deleted() {
drop(resident);
// make sure the eviction task gets to run
-SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(&handle).await;
+SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(handle).await;
let resident = layer.keep_resident().await;
assert!(
-resident.is_none(),
+matches!(resident, Ok(None)),
"keep_resident should not have re-initialized: {resident:?}"
);
@@ -235,332 +235,24 @@ async fn evict_and_wait_on_wanted_deleted() {
layers.finish_gc_timeline(&[layer]); layers.finish_gc_timeline(&[layer]);
} }
SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(&handle).await; SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(handle).await;
assert_eq!(1, LAYER_IMPL_METRICS.started_deletes.get()); assert_eq!(1, LAYER_IMPL_METRICS.started_deletes.get());
assert_eq!(1, LAYER_IMPL_METRICS.completed_deletes.get()); assert_eq!(1, LAYER_IMPL_METRICS.completed_deletes.get());
assert_eq!(1, LAYER_IMPL_METRICS.started_evictions.get()); assert_eq!(1, LAYER_IMPL_METRICS.started_evictions.get());
assert_eq!(1, LAYER_IMPL_METRICS.completed_evictions.get()); assert_eq!(1, LAYER_IMPL_METRICS.completed_evictions.get());
assert_eq!(0, LAYER_IMPL_METRICS.inits_cancelled.get())
} }
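The test above relies on `evict_and_wait` observing residency changes through a status channel: a completed eviction publishes "not resident", while a re-download publishes "resident" and fails the pending eviction with `EvictionError::Downloaded`. A std-only sketch of that idea (all names here are illustrative, not the pageserver's actual API):

```rust
use std::sync::mpsc;

#[derive(Debug, PartialEq)]
enum EvictionError {
    Downloaded,
}

// Waits for the next residency update: `false` means the layer went
// non-resident (the eviction won); `true` means a download re-initialized
// the layer, so the pending eviction loses.
fn wait_for_eviction(rx: &mpsc::Receiver<bool>) -> Result<(), EvictionError> {
    match rx.recv() {
        Ok(false) => Ok(()),
        _ => Err(EvictionError::Downloaded),
    }
}

// Builds the channel internally so both race outcomes are easy to exercise.
fn race(download_wins: bool) -> Result<(), EvictionError> {
    let (tx, rx) = mpsc::channel();
    // the publisher stands in for the layer's state machine
    tx.send(download_wins).unwrap();
    wait_for_eviction(&rx)
}

fn main() {
    assert_eq!(race(true), Err(EvictionError::Downloaded));
    assert_eq!(race(false), Ok(()));
}
```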
-/// This test ensures we are able to read the layer while the layer eviction has been
-/// started but not completed.
+/// This test ensures we are able to read the layer while the layer eviction has been
+/// started but not completed due to spawn_blocking pool being blocked.
#[test]
fn read_wins_pending_eviction() {
let rt = tokio::runtime::Builder::new_current_thread()
.max_blocking_threads(1)
.enable_all()
.start_paused(true)
.build()
.unwrap();
rt.block_on(async move {
// this is the runtime on which Layer spawns the blocking tasks on
let handle = tokio::runtime::Handle::current();
let h = TenantHarness::create("read_wins_pending_eviction").unwrap();
let (tenant, ctx) = h.load().await;
let span = h.span();
let download_span = span.in_scope(|| tracing::info_span!("downloading", timeline_id = 1));
let timeline = tenant
.create_test_timeline(TimelineId::generate(), Lsn(0x10), 14, &ctx)
.await
.unwrap();
let layer = {
let mut layers = {
let layers = timeline.layers.read().await;
layers.likely_resident_layers().collect::<Vec<_>>()
};
assert_eq!(layers.len(), 1);
layers.swap_remove(0)
};
// setup done
let resident = layer.keep_resident().await.unwrap();
let mut evict_and_wait = std::pin::pin!(layer.evict_and_wait(FOREVER));
// drive the future to await on the status channel
tokio::time::timeout(ADVANCE, &mut evict_and_wait)
.await
.expect_err("should had been a timeout since we are holding the layer resident");
assert_eq!(1, LAYER_IMPL_METRICS.started_evictions.get());
let (completion, barrier) = utils::completion::channel();
let (arrival, arrived_at_barrier) = utils::completion::channel();
layer.enable_failpoint(Failpoint::WaitBeforeStartingEvicting(
Some(arrival),
barrier,
));
// now the eviction cannot proceed because the threads are consumed while completion exists
drop(resident);
arrived_at_barrier.wait().await;
assert!(!layer.is_likely_resident());
// because no actual eviction happened, we get to just reinitialize the DownloadedLayer
layer
.0
.get_or_maybe_download(false, None)
.instrument(download_span)
.await
.expect("should had reinitialized without downloading");
assert!(layer.is_likely_resident());
// reinitialization notifies of new resident status, which should error out all evict_and_wait
let e = tokio::time::timeout(ADVANCE, &mut evict_and_wait)
.await
.expect("no timeout, because get_or_maybe_download re-initialized")
.expect_err("eviction should not have succeeded because re-initialized");
// works as intended: evictions lose to "downloads"
assert!(matches!(e, EvictionError::Downloaded), "{e:?}");
assert_eq!(0, LAYER_IMPL_METRICS.completed_evictions.get());
// this is not wrong: the eviction is technically still "on the way" as it's still queued
// because of a failpoint
assert_eq!(
0,
LAYER_IMPL_METRICS
.cancelled_evictions
.values()
.map(|ctr| ctr.get())
.sum::<u64>()
);
drop(completion);
tokio::time::sleep(ADVANCE).await;
SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads0(&handle, 1)
.await;
assert_eq!(0, LAYER_IMPL_METRICS.completed_evictions.get());
// now we finally can observe the original eviction failing
// it would had been possible to observe it earlier, but here it is guaranteed to have
// happened.
assert_eq!(
1,
LAYER_IMPL_METRICS
.cancelled_evictions
.values()
.map(|ctr| ctr.get())
.sum::<u64>()
);
assert_eq!(
1,
LAYER_IMPL_METRICS.cancelled_evictions[EvictionCancelled::AlreadyReinitialized].get()
);
assert_eq!(0, LAYER_IMPL_METRICS.inits_cancelled.get())
});
}
/// Use failpoint to delay an eviction starting to get a VersionCheckFailed.
#[test]
fn multiple_pending_evictions_in_order() {
let name = "multiple_pending_evictions_in_order";
let in_order = true;
multiple_pending_evictions_scenario(name, in_order);
}
/// Use failpoint to reorder later eviction before first to get a UnexpectedEvictedState.
#[test]
fn multiple_pending_evictions_out_of_order() {
let name = "multiple_pending_evictions_out_of_order";
let in_order = false;
multiple_pending_evictions_scenario(name, in_order);
}
fn multiple_pending_evictions_scenario(name: &'static str, in_order: bool) {
let rt = tokio::runtime::Builder::new_current_thread()
.max_blocking_threads(1)
.enable_all()
.start_paused(true)
.build()
.unwrap();
rt.block_on(async move {
// this is the runtime on which Layer spawns the blocking tasks on
let handle = tokio::runtime::Handle::current();
let h = TenantHarness::create(name).unwrap();
let (tenant, ctx) = h.load().await;
let span = h.span();
let download_span = span.in_scope(|| tracing::info_span!("downloading", timeline_id = 1));
let timeline = tenant
.create_test_timeline(TimelineId::generate(), Lsn(0x10), 14, &ctx)
.await
.unwrap();
let layer = {
let mut layers = {
let layers = timeline.layers.read().await;
layers.likely_resident_layers().collect::<Vec<_>>()
};
assert_eq!(layers.len(), 1);
layers.swap_remove(0)
};
// setup done
let resident = layer.keep_resident().await.unwrap();
let mut evict_and_wait = std::pin::pin!(layer.evict_and_wait(FOREVER));
// drive the future to await on the status channel
tokio::time::timeout(ADVANCE, &mut evict_and_wait)
.await
.expect_err("should had been a timeout since we are holding the layer resident");
assert_eq!(1, LAYER_IMPL_METRICS.started_evictions.get());
let (completion1, barrier) = utils::completion::channel();
let mut completion1 = Some(completion1);
let (arrival, arrived_at_barrier) = utils::completion::channel();
layer.enable_failpoint(Failpoint::WaitBeforeStartingEvicting(
Some(arrival),
barrier,
));
// now the eviction cannot proceed because we are simulating arbitrary long delay for the
// eviction task start.
drop(resident);
assert!(!layer.is_likely_resident());
arrived_at_barrier.wait().await;
// because no actual eviction happened, we get to just reinitialize the DownloadedLayer
layer
.0
.get_or_maybe_download(false, None)
.instrument(download_span)
.await
.expect("should had reinitialized without downloading");
assert!(layer.is_likely_resident());
// reinitialization notifies of new resident status, which should error out all evict_and_wait
let e = tokio::time::timeout(ADVANCE, &mut evict_and_wait)
.await
.expect("no timeout, because get_or_maybe_download re-initialized")
.expect_err("eviction should not have succeeded because re-initialized");
// works as intended: evictions lose to "downloads"
assert!(matches!(e, EvictionError::Downloaded), "{e:?}");
assert_eq!(0, LAYER_IMPL_METRICS.completed_evictions.get());
// this is not wrong: the eviction is technically still "on the way" as it's still queued
// because of a failpoint
assert_eq!(
0,
LAYER_IMPL_METRICS
.cancelled_evictions
.values()
.map(|ctr| ctr.get())
.sum::<u64>()
);
assert_eq!(0, LAYER_IMPL_METRICS.completed_evictions.get());
// configure another failpoint for the second eviction -- evictions are per initialization,
// so now that we've reinitialized the inner, we get to run two of them at the same time.
let (completion2, barrier) = utils::completion::channel();
let (arrival, arrived_at_barrier) = utils::completion::channel();
layer.enable_failpoint(Failpoint::WaitBeforeStartingEvicting(
Some(arrival),
barrier,
));
let mut second_eviction = std::pin::pin!(layer.evict_and_wait(FOREVER));
// advance to the wait on the queue
tokio::time::timeout(ADVANCE, &mut second_eviction)
.await
.expect_err("timeout because failpoint is blocking");
arrived_at_barrier.wait().await;
assert_eq!(2, LAYER_IMPL_METRICS.started_evictions.get());
let mut release_earlier_eviction = |expected_reason| {
assert_eq!(
0,
LAYER_IMPL_METRICS.cancelled_evictions[expected_reason].get(),
);
drop(completion1.take().unwrap());
let handle = &handle;
async move {
tokio::time::sleep(ADVANCE).await;
SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads0(
handle, 1,
)
.await;
assert_eq!(
1,
LAYER_IMPL_METRICS.cancelled_evictions[expected_reason].get(),
);
}
};
if in_order {
release_earlier_eviction(EvictionCancelled::VersionCheckFailed).await;
}
// release the later eviction which is for the current version
drop(completion2);
tokio::time::sleep(ADVANCE).await;
SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads0(&handle, 1)
.await;
if !in_order {
release_earlier_eviction(EvictionCancelled::UnexpectedEvictedState).await;
}
tokio::time::timeout(ADVANCE, &mut second_eviction)
.await
.expect("eviction goes through now that spawn_blocking is unclogged")
.expect("eviction should succeed, because version matches");
assert_eq!(1, LAYER_IMPL_METRICS.completed_evictions.get());
// ensure the cancelled are unchanged
assert_eq!(
1,
LAYER_IMPL_METRICS
.cancelled_evictions
.values()
.map(|ctr| ctr.get())
.sum::<u64>()
);
assert_eq!(0, LAYER_IMPL_METRICS.inits_cancelled.get())
});
}
-/// The test ensures with a failpoint that a pending eviction is not cancelled by what is currently
-/// a `Layer::keep_resident` call.
///
-/// This matters because cancelling the eviction would leave us in a state where the file is on
-/// disk but the layer internal state says it has not been initialized. Furthermore, it allows us to
-/// have non-repairing `Layer::is_likely_resident`.
+/// Here `Layer::keep_resident` is used to "simulate" reads, because it cannot download.
#[tokio::test(start_paused = true)]
-async fn cancelled_get_or_maybe_download_does_not_cancel_eviction() {
-    let handle = tokio::runtime::Handle::current();
-    let h =
-        TenantHarness::create("cancelled_get_or_maybe_download_does_not_cancel_eviction").unwrap();
+async fn residency_check_while_evict_and_wait_on_clogged_spawn_blocking() {
+    // this is the runtime on which Layer spawns the blocking tasks on
+    let handle = BACKGROUND_RUNTIME.handle();
+    let h = TenantHarness::create("residency_check_while_evict_and_wait_on_clogged_spawn_blocking")
+        .unwrap();
    let (tenant, ctx) = h.load().await;
    let timeline = tenant
@@ -571,7 +263,7 @@ async fn cancelled_get_or_maybe_download_does_not_cancel_eviction() {
    let layer = {
        let mut layers = {
            let layers = timeline.layers.read().await;
-           layers.likely_resident_layers().collect::<Vec<_>>()
+           layers.resident_layers().collect::<Vec<_>>().await
        };
        assert_eq!(layers.len(), 1);
@@ -579,154 +271,90 @@ async fn cancelled_get_or_maybe_download_does_not_cancel_eviction() {
        layers.swap_remove(0)
    };
-   // this failpoint will simulate the `get_or_maybe_download` becoming cancelled (by returning an
-   // Err) at the right time as in "during" the `LayerInner::needs_download`.
-   layer.enable_failpoint(Failpoint::AfterDeterminingLayerNeedsNoDownload);
-   let (completion, barrier) = utils::completion::channel();
+   // setup done
+   let resident = layer.keep_resident().await.unwrap();
let (arrival, arrived_at_barrier) = utils::completion::channel();
layer.enable_failpoint(Failpoint::WaitBeforeStartingEvicting(
Some(arrival),
barrier,
));
tokio::time::timeout(ADVANCE, layer.evict_and_wait(FOREVER))
.await
.expect_err("should had advanced to waiting on channel");
arrived_at_barrier.wait().await;
// simulate a cancelled read which is cancelled before it gets to re-initialize
let e = layer
.0
.get_or_maybe_download(false, None)
.await
.unwrap_err();
assert!(
matches!(
e,
DownloadError::Failpoint(FailpointKind::AfterDeterminingLayerNeedsNoDownload)
),
"{e:?}"
);
assert!(
layer.0.needs_download().await.unwrap().is_none(),
"file is still on disk"
);
// release the eviction task
drop(completion);
tokio::time::sleep(ADVANCE).await;
SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(&handle).await;
// failpoint is still enabled, but it is not hit
let e = layer
.0
.get_or_maybe_download(false, None)
.await
.unwrap_err();
assert!(matches!(e, DownloadError::DownloadRequired), "{e:?}");
// failpoint is not counted as cancellation either
assert_eq!(0, LAYER_IMPL_METRICS.inits_cancelled.get())
}
#[tokio::test(start_paused = true)]
async fn evict_and_wait_does_not_wait_for_download() {
// let handle = tokio::runtime::Handle::current();
let h = TenantHarness::create("evict_and_wait_does_not_wait_for_download").unwrap();
let (tenant, ctx) = h.load().await;
let span = h.span();
let download_span = span.in_scope(|| tracing::info_span!("downloading", timeline_id = 1));
let timeline = tenant
.create_test_timeline(TimelineId::generate(), Lsn(0x10), 14, &ctx)
.await
.unwrap();
let layer = {
let mut layers = {
let layers = timeline.layers.read().await;
layers.likely_resident_layers().collect::<Vec<_>>()
};
assert_eq!(layers.len(), 1);
layers.swap_remove(0)
};
// kind of forced setup: start an eviction but do not allow it progress until we are
// downloading
let (eviction_can_continue, barrier) = utils::completion::channel();
let (arrival, eviction_arrived) = utils::completion::channel();
layer.enable_failpoint(Failpoint::WaitBeforeStartingEvicting(
Some(arrival),
barrier,
));
    let mut evict_and_wait = std::pin::pin!(layer.evict_and_wait(FOREVER));
-   // use this once-awaited other_evict to synchronize with the eviction
-   let other_evict = layer.evict_and_wait(FOREVER);
-   tokio::time::timeout(ADVANCE, &mut evict_and_wait)
-       .await
-       .expect_err("should had advanced");
-   eviction_arrived.wait().await;
-   drop(eviction_can_continue);
-   other_evict.await.unwrap();
-   // now the layer is evicted, and the "evict_and_wait" is waiting on the receiver
-   assert!(!layer.is_likely_resident());
-   // following new evict_and_wait will fail until we've completed the download
-   let e = layer.evict_and_wait(FOREVER).await.unwrap_err();
-   assert!(matches!(e, EvictionError::NotFound), "{e:?}");
-   let (download_can_continue, barrier) = utils::completion::channel();
-   let (arrival, _download_arrived) = utils::completion::channel();
-   layer.enable_failpoint(Failpoint::WaitBeforeDownloading(Some(arrival), barrier));
-   let mut download = std::pin::pin!(layer
-       .0
-       .get_or_maybe_download(true, None)
-       .instrument(download_span));
-   assert!(
-       !layer.is_likely_resident(),
-       "during download layer is evicted"
-   );
-   tokio::time::timeout(ADVANCE, &mut download)
-       .await
-       .expect_err("should had timed out because of failpoint");
-   // now we finally get to continue, and because the latest state is downloading, we deduce that
-   // original eviction succeeded
-   evict_and_wait.await.unwrap();
-   // however a new evict_and_wait will fail
-   let e = layer.evict_and_wait(FOREVER).await.unwrap_err();
-   assert!(matches!(e, EvictionError::NotFound), "{e:?}");
-   assert!(!layer.is_likely_resident());
-   drop(download_can_continue);
-   download.await.expect("download should had succeeded");
-   assert!(layer.is_likely_resident());
-   // only now can we evict
-   layer.evict_and_wait(FOREVER).await.unwrap();
-}
-#[test]
-fn layer_size() {
-   assert_eq!(std::mem::size_of::<LayerAccessStats>(), 2040);
-   assert_eq!(std::mem::size_of::<PersistentLayerDesc>(), 104);
-   assert_eq!(std::mem::size_of::<LayerInner>(), 2328);
-   // it also has the utf8 path
+   // drive the future to await on the status channel
+   tokio::time::timeout(ADVANCE, &mut evict_and_wait)
+       .await
+       .expect_err("should had been a timeout since we are holding the layer resident");
+   assert_eq!(1, LAYER_IMPL_METRICS.started_evictions.get());
+   // clog up BACKGROUND_RUNTIME spawn_blocking
+   let helper = SpawnBlockingPoolHelper::consume_all_spawn_blocking_threads(handle).await;
+   // now the eviction cannot proceed because the threads are consumed while completion exists
+   drop(resident);
+   // because no actual eviction happened, we get to just reinitialize the DownloadedLayer
+   layer
+       .keep_resident()
+       .await
+       .expect("keep_resident should had reinitialized without downloading")
+       .expect("ResidentLayer");
+   // because the keep_resident check alters wanted evicted without sending a message, we will
+   // never get completed
+   let e = tokio::time::timeout(ADVANCE, &mut evict_and_wait)
+       .await
+       .expect("no timeout, because keep_resident re-initialized")
+       .expect_err("eviction should not have succeeded because re-initialized");
+   // works as intended: evictions lose to "downloads"
+   assert!(matches!(e, EvictionError::Downloaded), "{e:?}");
+   assert_eq!(0, LAYER_IMPL_METRICS.completed_evictions.get());
+   // this is not wrong: the eviction is technically still "on the way" as it's still queued
+   // because spawn_blocking is clogged up
+   assert_eq!(
+       0,
+       LAYER_IMPL_METRICS
+           .cancelled_evictions
+           .values()
+           .map(|ctr| ctr.get())
+           .sum::<u64>()
+   );
+   let mut second_eviction = std::pin::pin!(layer.evict_and_wait(FOREVER));
+   // advance to the wait on the queue
+   tokio::time::timeout(ADVANCE, &mut second_eviction)
+       .await
+       .expect_err("timeout because spawn_blocking is clogged");
+   // in this case we don't leak started evictions, but I think there is still a chance of that
+   // happening, because we could have upgrades race multiple evictions while only one of them
+   // happens?
+   assert_eq!(2, LAYER_IMPL_METRICS.started_evictions.get());
+   helper.release().await;
+   // the second_eviction gets to run here
+   //
+   // synchronize to be *strictly* after the second_eviction spawn_blocking run
+   SpawnBlockingPoolHelper::consume_and_release_all_of_spawn_blocking_threads(handle).await;
+   tokio::time::timeout(ADVANCE, &mut second_eviction)
+       .await
+       .expect("eviction goes through now that spawn_blocking is unclogged")
+       .expect("eviction should succeed, because version matches");
+   assert_eq!(1, LAYER_IMPL_METRICS.completed_evictions.get());
+   // now we finally can observe the original spawn_blocking failing
+   // it would had been possible to observe it earlier, but here it is guaranteed to have
+   // happened.
+   assert_eq!(
+       1,
+       LAYER_IMPL_METRICS
+           .cancelled_evictions
+           .values()
+           .map(|ctr| ctr.get())
+           .sum::<u64>()
+   );
}
struct SpawnBlockingPoolHelper {
@@ -743,41 +371,31 @@ impl SpawnBlockingPoolHelper {
    ///
    /// This should be no issue nowadays, because nextest runs each test in its own process.
    async fn consume_all_spawn_blocking_threads(handle: &tokio::runtime::Handle) -> Self {
-       let default_max_blocking_threads = 512;
-       Self::consume_all_spawn_blocking_threads0(handle, default_max_blocking_threads).await
-   }
-   async fn consume_all_spawn_blocking_threads0(
-       handle: &tokio::runtime::Handle,
-       threads: usize,
-   ) -> Self {
-       assert_ne!(threads, 0);
        let (completion, barrier) = completion::channel();
-       let (started, starts_completed) = completion::channel();
+       let (tx, mut rx) = tokio::sync::mpsc::channel(8);
+       let assumed_max_blocking_threads = 512;
        let mut blocking_tasks = JoinSet::new();
-       for _ in 0..threads {
+       for _ in 0..assumed_max_blocking_threads {
            let barrier = barrier.clone();
-           let started = started.clone();
+           let tx = tx.clone();
            blocking_tasks.spawn_blocking_on(
                move || {
-                   drop(started);
+                   tx.blocking_send(()).unwrap();
+                   drop(tx);
                    tokio::runtime::Handle::current().block_on(barrier.wait());
                },
                handle,
            );
        }
-       drop(started);
-       starts_completed.wait().await;
        drop(barrier);
-       tracing::trace!("consumed all threads");
+       for _ in 0..assumed_max_blocking_threads {
+           rx.recv().await.unwrap();
+       }
        SpawnBlockingPoolHelper {
            awaited_by_spawn_blocking_tasks: completion,
@@ -797,22 +415,13 @@ impl SpawnBlockingPoolHelper {
        while let Some(res) = blocking_tasks.join_next().await {
            res.expect("none of the tasks should had panicked");
        }
-       tracing::trace!("released all threads");
    }
    /// In the tests it is used as an easy way of making sure something scheduled on the target
    /// runtimes `spawn_blocking` has completed, because it must've been scheduled and completed
    /// before our tasks have a chance to schedule and complete.
    async fn consume_and_release_all_of_spawn_blocking_threads(handle: &tokio::runtime::Handle) {
-       Self::consume_and_release_all_of_spawn_blocking_threads0(handle, 512).await
-   }
-   async fn consume_and_release_all_of_spawn_blocking_threads0(
-       handle: &tokio::runtime::Handle,
-       threads: usize,
-   ) {
-       Self::consume_all_spawn_blocking_threads0(handle, threads)
+       Self::consume_all_spawn_blocking_threads(handle)
            .await
            .release()
            .await
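The helper above saturates the runtime's `spawn_blocking` pool and later releases it, as a synchronization primitive. The same confirm-start-then-barrier pattern can be sketched with plain OS threads (std-only; the function name and pool size are assumptions for illustration):

```rust
use std::sync::{mpsc, Arc, Barrier};
use std::thread;

// Occupy all `n` workers of a (simulated) blocking pool, confirm each one is
// actually running, then release them all at once and report how many joined.
fn saturate_and_release(n: usize) -> usize {
    // the n workers plus this thread rendezvous on the same barrier
    let release = Arc::new(Barrier::new(n + 1));
    let (started_tx, started_rx) = mpsc::channel();
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let release = Arc::clone(&release);
            let started_tx = started_tx.clone();
            thread::spawn(move || {
                // like the `(started, starts_completed)` pair: prove we run
                started_tx.send(()).unwrap();
                drop(started_tx);
                release.wait();
            })
        })
        .collect();
    // do not proceed until every worker has signalled that it started
    for _ in 0..n {
        started_rx.recv().unwrap();
    }
    // unblock everyone, mirroring `release()`
    release.wait();
    handles
        .into_iter()
        .map(|h| h.join())
        .filter(|r| r.is_ok())
        .count()
}

fn main() {
    assert_eq!(saturate_and_release(4), 4);
}
```

The start-confirmation step matters: without it, a worker that has been spawned but not yet scheduled would not actually occupy a pool slot when the caller proceeds.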
@@ -826,7 +435,7 @@ fn spawn_blocking_pool_helper_actually_works() {
    // because the amount is not configurable for our helper, expect the same amount as
    // BACKGROUND_RUNTIME using the tokio defaults would have.
    let rt = tokio::runtime::Builder::new_current_thread()
-       .max_blocking_threads(1)
+       .max_blocking_threads(512)
        .enable_all()
        .build()
        .unwrap();
@@ -836,8 +445,7 @@ fn spawn_blocking_pool_helper_actually_works() {
    rt.block_on(async move {
        // this will not return until all threads are spun up and actually executing the code
        // waiting on `consumed` to be `SpawnBlockingPoolHelper::release`'d.
-       let consumed =
-           SpawnBlockingPoolHelper::consume_all_spawn_blocking_threads0(handle, 1).await;
+       let consumed = SpawnBlockingPoolHelper::consume_all_spawn_blocking_threads(handle).await;
        println!("consumed");


@@ -13,13 +13,14 @@ use bytes::Bytes;
 use camino::Utf8Path;
 use enumset::EnumSet;
 use fail::fail_point;
+use futures::stream::StreamExt;
 use once_cell::sync::Lazy;
 use pageserver_api::{
     key::AUX_FILES_KEY,
     keyspace::KeySpaceAccum,
     models::{
         CompactionAlgorithm, DownloadRemoteLayersTaskInfo, DownloadRemoteLayersTaskSpawnRequest,
-        EvictionPolicy, InMemoryLayerInfo, LayerMapInfo, TimelineState,
+        EvictionPolicy, LayerMapInfo, TimelineState,
     },
     reltag::BlockNumber,
     shard::{ShardIdentity, TenantShardId},
@@ -36,7 +37,6 @@ use tracing::*;
 use utils::{
     bin_ser::BeSer,
     sync::gate::{Gate, GateGuard},
-    vec_map::VecMap,
 };
 use std::ops::{Deref, Range};
@@ -54,7 +54,6 @@ use std::{
     ops::ControlFlow,
 };
-use crate::deletion_queue::DeletionQueueClient;
 use crate::tenant::timeline::logical_size::CurrentLogicalSize;
 use crate::tenant::{
     layer_map::{LayerMap, SearchResult},
@@ -65,6 +64,7 @@ use crate::{
     disk_usage_eviction_task::DiskUsageEvictionInfo,
     pgdatadir_mapping::CollectKeySpaceError,
 };
+use crate::{deletion_queue::DeletionQueueClient, tenant::remote_timeline_client::StopError};
 use crate::{
     disk_usage_eviction_task::finite_f32,
     tenant::storage_layer::{
@@ -1142,79 +1142,6 @@ impl Timeline {
        self.flush_frozen_layers_and_wait().await
    }
/// If there is no writer, and conditions for rolling the latest layer are met, then freeze it.
///
/// This is for use in background housekeeping, to provide guarantees of layers closing eventually
/// even if there are no ongoing writes to drive that.
async fn maybe_freeze_ephemeral_layer(&self) {
let Ok(_write_guard) = self.write_lock.try_lock() else {
// If the write lock is held, there is an active wal receiver: rolling open layers
// is their responsibility while they hold this lock.
return;
};
let Ok(layers_guard) = self.layers.try_read() else {
// Don't block if the layer lock is busy
return;
};
let Some(open_layer) = &layers_guard.layer_map().open_layer else {
// No open layer, no work to do.
return;
};
let Some(current_size) = open_layer.try_len() else {
// Unexpected: since we hold the write guard, nobody else should be writing to this layer, so
// read lock to get size should always succeed.
tracing::warn!("Lock conflict while reading size of open layer");
return;
};
let current_lsn = self.get_last_record_lsn();
let checkpoint_distance_override = open_layer.tick().await;
if let Some(size_override) = checkpoint_distance_override {
if current_size > size_override {
// This is not harmful, but it only happens in relatively rare cases where
// time-based checkpoints are not happening fast enough to keep the amount of
// ephemeral data within configured limits. It's a sign of stress on the system.
tracing::info!("Early-rolling open layer at size {current_size} (limit {size_override}) due to dirty data pressure");
}
}
let checkpoint_distance =
checkpoint_distance_override.unwrap_or(self.get_checkpoint_distance());
if self.should_roll(
current_size,
current_size,
checkpoint_distance,
self.get_last_record_lsn(),
self.last_freeze_at.load(),
*self.last_freeze_ts.read().unwrap(),
) {
match open_layer.info() {
InMemoryLayerInfo::Frozen { lsn_start, lsn_end } => {
// We may reach this point if the layer was already frozen by not yet flushed: flushing
// happens asynchronously in the background.
tracing::debug!(
"Not freezing open layer, it's already frozen ({lsn_start}..{lsn_end})"
);
}
InMemoryLayerInfo::Open { .. } => {
// Upgrade to a write lock and freeze the layer
drop(layers_guard);
let mut layers_guard = self.layers.write().await;
layers_guard
.try_freeze_in_memory_layer(current_lsn, &self.last_freeze_at)
.await;
}
}
self.flush_frozen_layers();
}
}
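The `maybe_freeze_ephemeral_layer` body above follows a general rule for background housekeeping: take the write lock with `try_lock` and bail out rather than contend with an active writer. A minimal std-only sketch of that pattern (the function and state names here are hypothetical, not the pageserver's):

```rust
use std::sync::Mutex;

// Background housekeeping only proceeds when it can take the lock
// immediately; a held lock means the active writer owns the rolling duty.
// `work_done` stands in for the freeze bookkeeping.
fn maybe_do_housekeeping(write_lock: &Mutex<()>, work_done: &Mutex<u32>) -> bool {
    let Ok(_guard) = write_lock.try_lock() else {
        // an active WAL receiver holds the lock: skip, do not block
        return false;
    };
    *work_done.lock().unwrap() += 1;
    true
}

// Run once with the lock free, once while it is held elsewhere.
fn demo() -> (bool, bool) {
    let write_lock = Mutex::new(());
    let work_done = Mutex::new(0);
    let first = maybe_do_housekeeping(&write_lock, &work_done);
    let _held = write_lock.lock().unwrap();
    let second = maybe_do_housekeeping(&write_lock, &work_done);
    (first, second)
}

fn main() {
    assert_eq!(demo(), (true, false));
}
```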
    /// Outermost timeline compaction operation; downloads needed layers.
    pub(crate) async fn compact(
        self: &Arc<Self>,
@@ -1237,11 +1164,6 @@ impl Timeline {
            (guard, permit)
        };
-       // Prior to compaction, check if an open ephemeral layer should be closed: this provides
-       // background enforcement of checkpoint interval if there is no active WAL receiver, to avoid keeping
-       // an ephemeral layer open forever when idle.
-       self.maybe_freeze_ephemeral_layer().await;
        // this wait probably never needs any "long time spent" logging, because we already nag if
        // compaction task goes over its period (20s) which is quite often in production.
        let (_guard, _permit) = tokio::select! {
@@ -1274,7 +1196,6 @@ impl Timeline {
    pub(crate) fn activate(
        self: &Arc<Self>,
-       parent: Arc<crate::tenant::Tenant>,
        broker_client: BrokerClientChannel,
        background_jobs_can_start: Option<&completion::Barrier>,
        ctx: &RequestContext,
@@ -1285,7 +1206,7 @@ impl Timeline {
        }
        self.launch_wal_receiver(ctx, broker_client);
        self.set_state(TimelineState::Active);
-       self.launch_eviction_task(parent, background_jobs_can_start);
+       self.launch_eviction_task(background_jobs_can_start);
    }
    /// Graceful shutdown, may do a lot of I/O as we flush any open layers to disk and then
@@ -1320,7 +1241,11 @@ impl Timeline {
                // what is problematic is the shutting down of RemoteTimelineClient, because
                // obviously it does not make sense to stop while we wait for it, but what
                // about corner cases like s3 suddenly hanging up?
-               client.shutdown().await;
+               if let Err(e) = client.shutdown().await {
+                   // Non-fatal. Shutdown is infallible. Failures to flush just mean that
+                   // we have some extra WAL replay to do next time the timeline starts.
+                   warn!("failed to flush to remote storage: {e:#}");
+               }
            }
        }
        Err(e) => {
@@ -1357,7 +1282,12 @@ impl Timeline {
        // Shut down remote timeline client: this gracefully moves its metadata into its Stopping state in
        // case our caller wants to use that for a deletion
        if let Some(remote_client) = self.remote_client.as_ref() {
-           remote_client.stop();
+           match remote_client.stop() {
+               Ok(()) => {}
+               Err(StopError::QueueUninitialized) => {
+                   // Shutting down during initialization is legal
+               }
+           }
        }
        tracing::debug!("Waiting for tasks...");
@@ -1513,53 +1443,6 @@ impl Timeline {
        Err(EvictionError::Timeout) => Ok(Some(false)),
    }
}
fn should_roll(
&self,
layer_size: u64,
projected_layer_size: u64,
checkpoint_distance: u64,
projected_lsn: Lsn,
last_freeze_at: Lsn,
last_freeze_ts: Instant,
) -> bool {
let distance = projected_lsn.widening_sub(last_freeze_at);
// Rolling the open layer can be triggered by:
// 1. The distance from the last LSN we rolled at. This bounds the amount of WAL that
// the safekeepers need to store. For sharded tenants, we multiply by shard count to
// account for how writes are distributed across shards: we expect each node to consume
// 1/count of the LSN on average.
// 2. The size of the currently open layer.
// 3. The time since the last roll. It helps safekeepers to regard pageserver as caught
// up and suspend activity.
if distance >= checkpoint_distance as i128 * self.shard_identity.count.count() as i128 {
info!(
"Will roll layer at {} with layer size {} due to LSN distance ({})",
projected_lsn, layer_size, distance
);
true
} else if projected_layer_size >= checkpoint_distance {
info!(
"Will roll layer at {} with layer size {} due to layer size ({})",
projected_lsn, layer_size, projected_layer_size
);
true
} else if distance > 0 && last_freeze_ts.elapsed() >= self.get_checkpoint_timeout() {
info!(
"Will roll layer at {} with layer size {} due to time since last flush ({:?})",
projected_lsn,
layer_size,
last_freeze_ts.elapsed()
);
true
} else {
false
}
}
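The three roll triggers in `should_roll` above (LSN distance scaled by shard count, projected layer size, and time since the last freeze) are easy to express as a pure predicate. A simplified sketch, with assumed argument names and elapsed time passed in as seconds rather than an `Instant`:

```rust
// Decide whether to roll the open layer, per the three triggers described above.
fn should_roll(
    projected_layer_size: u64,
    checkpoint_distance: u64,
    shard_count: u64,
    lsn_distance: i128,
    secs_since_last_freeze: u64,
    checkpoint_timeout_secs: u64,
) -> bool {
    // 1. LSN distance bounds the WAL the safekeepers must retain; multiplied by
    //    shard count because each shard consumes ~1/count of the LSN on average.
    if lsn_distance >= checkpoint_distance as i128 * shard_count as i128 {
        true
    // 2. The open layer itself has grown past the checkpoint distance.
    } else if projected_layer_size >= checkpoint_distance {
        true
    // 3. Time-based roll (only if there was any progress), so safekeepers can
    //    regard the pageserver as caught up and suspend activity.
    } else {
        lsn_distance > 0 && secs_since_last_freeze >= checkpoint_timeout_secs
    }
}

fn main() {
    // size trigger fires regardless of distance
    assert!(should_roll(4096, 1024, 1, 0, 0, 600));
    // 8 shards multiply the allowed LSN distance, so 2048 is not enough
    assert!(!should_roll(0, 1024, 8, 2048, 0, 600));
    // time trigger needs some progress: distance 0 never rolls on time alone
    assert!(!should_roll(0, 1024, 1, 0, 900, 600));
    assert!(should_roll(0, 1024, 1, 1, 900, 600));
}
```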
}
/// Number of times we will compute partition within a checkpoint distance.
@@ -2559,7 +2442,7 @@ impl Timeline {
        let guard = self.layers.read().await;
-       let resident = guard.likely_resident_layers().map(|layer| {
+       let resident = guard.resident_layers().map(|layer| {
            let last_activity_ts = layer.access_stats().latest_activity_or_now();
            HeatMapLayer::new(
@@ -2569,7 +2452,7 @@ impl Timeline {
            )
        });
-       let layers = resident.collect();
+       let layers = resident.collect().await;
        Some(HeatMapTimeline::new(self.timeline_id, layers))
    }
@@ -2713,10 +2596,6 @@ impl Timeline {
// Get all the data needed to reconstruct the page version from this layer. // Get all the data needed to reconstruct the page version from this layer.
// But if we have an older cached page image, no need to go past that. // But if we have an older cached page image, no need to go past that.
let lsn_floor = max(cached_lsn + 1, start_lsn); let lsn_floor = max(cached_lsn + 1, start_lsn);
let open_layer = open_layer.clone();
drop(guard);
result = match open_layer result = match open_layer
.get_value_reconstruct_data( .get_value_reconstruct_data(
key, key,
@@ -2734,7 +2613,10 @@ impl Timeline {
traversal_path.push(( traversal_path.push((
result, result,
cont_lsn, cont_lsn,
Box::new(move || open_layer.traversal_id()), Box::new({
let open_layer = Arc::clone(open_layer);
move || open_layer.traversal_id()
}),
)); ));
continue 'outer; continue 'outer;
} }
@@ -2744,10 +2626,6 @@ impl Timeline {
if cont_lsn > start_lsn { if cont_lsn > start_lsn {
//info!("CHECKING for {} at {} on frozen layer {}", key, cont_lsn, frozen_layer.filename().display()); //info!("CHECKING for {} at {} on frozen layer {}", key, cont_lsn, frozen_layer.filename().display());
let lsn_floor = max(cached_lsn + 1, start_lsn); let lsn_floor = max(cached_lsn + 1, start_lsn);
let frozen_layer = frozen_layer.clone();
drop(guard);
result = match frozen_layer result = match frozen_layer
.get_value_reconstruct_data( .get_value_reconstruct_data(
key, key,
@@ -2765,7 +2643,10 @@ impl Timeline {
traversal_path.push(( traversal_path.push((
result, result,
cont_lsn, cont_lsn,
Box::new(move || frozen_layer.traversal_id()), Box::new({
let frozen_layer = Arc::clone(frozen_layer);
move || frozen_layer.traversal_id()
}),
)); ));
continue 'outer; continue 'outer;
} }
@@ -2773,8 +2654,6 @@ impl Timeline {
if let Some(SearchResult { lsn_floor, layer }) = layers.search(key, cont_lsn) { if let Some(SearchResult { lsn_floor, layer }) = layers.search(key, cont_lsn) {
let layer = guard.get_from_desc(&layer); let layer = guard.get_from_desc(&layer);
drop(guard);
// Get all the data needed to reconstruct the page version from this layer. // Get all the data needed to reconstruct the page version from this layer.
// But if we have an older cached page image, no need to go past that. // But if we have an older cached page image, no need to go past that.
let lsn_floor = max(cached_lsn + 1, lsn_floor); let lsn_floor = max(cached_lsn + 1, lsn_floor);
@@ -4423,7 +4302,7 @@ impl Timeline {
let mut max_layer_size: Option<u64> = None; let mut max_layer_size: Option<u64> = None;
let resident_layers = guard let resident_layers = guard
.likely_resident_layers() .resident_layers()
.map(|layer| { .map(|layer| {
let file_size = layer.layer_desc().file_size; let file_size = layer.layer_desc().file_size;
max_layer_size = max_layer_size.map_or(Some(file_size), |m| Some(m.max(file_size))); max_layer_size = max_layer_size.map_or(Some(file_size), |m| Some(m.max(file_size)));
@@ -4436,7 +4315,8 @@ impl Timeline {
relative_last_activity: finite_f32::FiniteF32::ZERO, relative_last_activity: finite_f32::FiniteF32::ZERO,
} }
}) })
.collect(); .collect()
.await;
DiskUsageEvictionInfo { DiskUsageEvictionInfo {
max_layer_size, max_layer_size,
@@ -4581,6 +4461,49 @@ impl<'a> TimelineWriter<'a> {
res res
} }
/// "Tick" the timeline writer: it will roll the open layer if required
/// and do nothing else.
pub(crate) async fn tick(&mut self) -> anyhow::Result<()> {
self.open_layer_if_present().await?;
let last_record_lsn = self.get_last_record_lsn();
let action = self.get_open_layer_action(last_record_lsn, 0);
if action == OpenLayerAction::Roll {
self.roll_layer(last_record_lsn).await?;
}
Ok(())
}
/// Populate the timeline writer state only if an in-memory layer
/// is already open.
async fn open_layer_if_present(&mut self) -> anyhow::Result<()> {
assert!(self.write_guard.is_none());
let open_layer = {
let guard = self.layers.read().await;
let layers = guard.layer_map();
match layers.open_layer {
Some(ref open_layer) => open_layer.clone(),
None => {
return Ok(());
}
}
};
let initial_size = open_layer.size().await?;
let last_freeze_at = self.last_freeze_at.load();
let last_freeze_ts = *self.last_freeze_ts.read().unwrap();
self.write_guard.replace(TimelineWriterState::new(
open_layer,
initial_size,
last_freeze_at,
last_freeze_ts,
));
Ok(())
}
async fn handle_open_layer_action( async fn handle_open_layer_action(
&mut self, &mut self,
at: Lsn, at: Lsn,
@@ -4652,29 +4575,59 @@ impl<'a> TimelineWriter<'a> {
return OpenLayerAction::None; return OpenLayerAction::None;
} }
if self.tl.should_roll( let distance = lsn.widening_sub(state.cached_last_freeze_at);
state.current_size, let proposed_open_layer_size = state.current_size + new_value_size;
state.current_size + new_value_size,
self.get_checkpoint_distance(), // Rolling the open layer can be triggered by:
lsn, // 1. The distance from the last LSN we rolled at. This bounds the amount of WAL that
state.cached_last_freeze_at, // the safekeepers need to store. For sharded tenants, we multiply by shard count to
state.cached_last_freeze_ts, // account for how writes are distributed across shards: we expect each node to consume
) { // 1/count of the LSN on average.
// 2. The size of the currently open layer.
// 3. The time since the last roll. It helps safekeepers to regard pageserver as caught
// up and suspend activity.
if distance
>= self.get_checkpoint_distance() as i128 * self.shard_identity.count.count() as i128
{
info!(
"Will roll layer at {} with layer size {} due to LSN distance ({})",
lsn, state.current_size, distance
);
OpenLayerAction::Roll
} else if proposed_open_layer_size >= self.get_checkpoint_distance() {
info!(
"Will roll layer at {} with layer size {} due to layer size ({})",
lsn, state.current_size, proposed_open_layer_size
);
OpenLayerAction::Roll
} else if distance > 0
&& state.cached_last_freeze_ts.elapsed() >= self.get_checkpoint_timeout()
{
info!(
"Will roll layer at {} with layer size {} due to time since last flush ({:?})",
lsn,
state.current_size,
state.cached_last_freeze_ts.elapsed()
);
OpenLayerAction::Roll OpenLayerAction::Roll
} else { } else {
OpenLayerAction::None OpenLayerAction::None
} }
} }
/// Put a batch of keys at the specified Lsns. /// Put a batch keys at the specified Lsns.
/// ///
/// The batch is sorted by Lsn (enforced by usage of [`utils::vec_map::VecMap`]. /// The batch should be sorted by Lsn such that it's safe
/// to roll the open layer mid batch.
pub(crate) async fn put_batch( pub(crate) async fn put_batch(
&mut self, &mut self,
batch: VecMap<Lsn, (Key, Value)>, batch: Vec<(Key, Lsn, Value)>,
ctx: &RequestContext, ctx: &RequestContext,
) -> anyhow::Result<()> { ) -> anyhow::Result<()> {
for (lsn, (key, val)) in batch { for (key, lsn, val) in batch {
self.put(key, lsn, &val, ctx).await? self.put(key, lsn, &val, ctx).await?
} }
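The new `put_batch` signature shifts a contract onto callers: the `Vec<(Key, Lsn, Value)>` must be sorted by Lsn so that a mid-batch layer roll never writes a newer Lsn into an older layer. A hypothetical caller-side sketch with stand-in types (the real `Key`/`Lsn`/`Value` live in the pageserver crates):

```rust
// Stand-in types for illustration only.
type Key = u64;
type Lsn = u64;
type Value = &'static str;

// Establish the ordering invariant put_batch relies on.
fn sort_batch(mut batch: Vec<(Key, Lsn, Value)>) -> Vec<(Key, Lsn, Value)> {
    // Stable sort: entries sharing an Lsn keep their relative order.
    batch.sort_by_key(|&(_, lsn, _)| lsn);
    batch
}

fn main() {
    let batch = vec![(1, 30, "c"), (2, 10, "a"), (1, 20, "b")];
    let sorted = sort_batch(batch);
    // Non-decreasing Lsns: safe to roll the open layer mid-batch.
    assert!(sorted.windows(2).all(|w| w[0].1 <= w[1].1));
    assert_eq!(sorted[0], (2, 10, "a"));
    println!("ok");
}
```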
@@ -4760,6 +4713,7 @@ mod tests {
.keep_resident() .keep_resident()
.await .await
.expect("no download => no downloading errors") .expect("no download => no downloading errors")
.expect("should have been resident")
.drop_eviction_guard(); .drop_eviction_guard();
let forever = std::time::Duration::from_secs(120); let forever = std::time::Duration::from_secs(120);
@@ -4770,7 +4724,7 @@ mod tests {
let (first, second) = tokio::join!(first, second); let (first, second) = tokio::join!(first, second);
let res = layer.keep_resident().await; let res = layer.keep_resident().await;
assert!(res.is_none(), "{res:?}"); assert!(matches!(res, Ok(None)), "{res:?}");
match (first, second) { match (first, second) {
(Ok(()), Ok(())) => { (Ok(()), Ok(())) => {
@@ -16,7 +16,9 @@ use crate::{
tenant::{ tenant::{
debug_assert_current_span_has_tenant_and_timeline_id, debug_assert_current_span_has_tenant_and_timeline_id,
metadata::TimelineMetadata, metadata::TimelineMetadata,
remote_timeline_client::{PersistIndexPartWithDeletedFlagError, RemoteTimelineClient}, remote_timeline_client::{
self, PersistIndexPartWithDeletedFlagError, RemoteTimelineClient,
},
CreateTimelineCause, DeleteTimelineError, Tenant, CreateTimelineCause, DeleteTimelineError, Tenant,
}, },
}; };
@@ -48,7 +50,19 @@ async fn stop_tasks(timeline: &Timeline) -> Result<(), DeleteTimelineError> {
// Prevent new uploads from starting. // Prevent new uploads from starting.
if let Some(remote_client) = timeline.remote_client.as_ref() { if let Some(remote_client) = timeline.remote_client.as_ref() {
remote_client.stop(); let res = remote_client.stop();
match res {
Ok(()) => {}
Err(e) => match e {
remote_timeline_client::StopError::QueueUninitialized => {
// This case shouldn't happen currently because the
// load and attach code bails out if _any_ of the timeline fails to fetch its IndexPart.
// That is, before we declare the Tenant as Active.
// But we only allow calls to delete_timeline on Active tenants.
return Err(DeleteTimelineError::Other(anyhow::anyhow!("upload queue is uninitialized, likely the timeline was in Broken state prior to this call because it failed to fetch IndexPart during load or attach, check the logs")));
}
},
}
} }
// Stop & wait for the remaining timeline tasks, including upload tasks. // Stop & wait for the remaining timeline tasks, including upload tasks.
@@ -51,7 +51,6 @@ pub struct EvictionTaskTenantState {
impl Timeline { impl Timeline {
pub(super) fn launch_eviction_task( pub(super) fn launch_eviction_task(
self: &Arc<Self>, self: &Arc<Self>,
parent: Arc<Tenant>,
background_tasks_can_start: Option<&completion::Barrier>, background_tasks_can_start: Option<&completion::Barrier>,
) { ) {
let self_clone = Arc::clone(self); let self_clone = Arc::clone(self);
@@ -73,14 +72,14 @@ impl Timeline {
_ = completion::Barrier::maybe_wait(background_tasks_can_start) => {} _ = completion::Barrier::maybe_wait(background_tasks_can_start) => {}
}; };
self_clone.eviction_task(parent, cancel).await; self_clone.eviction_task(cancel).await;
Ok(()) Ok(())
}, },
); );
} }
#[instrument(skip_all, fields(tenant_id = %self.tenant_shard_id.tenant_id, shard_id = %self.tenant_shard_id.shard_slug(), timeline_id = %self.timeline_id))] #[instrument(skip_all, fields(tenant_id = %self.tenant_shard_id.tenant_id, shard_id = %self.tenant_shard_id.shard_slug(), timeline_id = %self.timeline_id))]
async fn eviction_task(self: Arc<Self>, tenant: Arc<Tenant>, cancel: CancellationToken) { async fn eviction_task(self: Arc<Self>, cancel: CancellationToken) {
use crate::tenant::tasks::random_init_delay; use crate::tenant::tasks::random_init_delay;
// acquire the gate guard only once within a useful span // acquire the gate guard only once within a useful span
@@ -104,7 +103,7 @@ impl Timeline {
loop { loop {
let policy = self.get_eviction_policy(); let policy = self.get_eviction_policy();
let cf = self let cf = self
.eviction_iteration(&tenant, &policy, &cancel, &guard, &ctx) .eviction_iteration(&policy, &cancel, &guard, &ctx)
.await; .await;
match cf { match cf {
@@ -124,7 +123,6 @@ impl Timeline {
#[instrument(skip_all, fields(policy_kind = policy.discriminant_str()))] #[instrument(skip_all, fields(policy_kind = policy.discriminant_str()))]
async fn eviction_iteration( async fn eviction_iteration(
self: &Arc<Self>, self: &Arc<Self>,
tenant: &Tenant,
policy: &EvictionPolicy, policy: &EvictionPolicy,
cancel: &CancellationToken, cancel: &CancellationToken,
gate: &GateGuard, gate: &GateGuard,
@@ -139,7 +137,7 @@ impl Timeline {
} }
EvictionPolicy::LayerAccessThreshold(p) => { EvictionPolicy::LayerAccessThreshold(p) => {
match self match self
.eviction_iteration_threshold(tenant, p, cancel, gate, ctx) .eviction_iteration_threshold(p, cancel, gate, ctx)
.await .await
{ {
ControlFlow::Break(()) => return ControlFlow::Break(()), ControlFlow::Break(()) => return ControlFlow::Break(()),
@@ -148,11 +146,7 @@ impl Timeline {
(p.period, p.threshold) (p.period, p.threshold)
} }
EvictionPolicy::OnlyImitiate(p) => { EvictionPolicy::OnlyImitiate(p) => {
if self if self.imitiate_only(p, cancel, gate, ctx).await.is_break() {
.imitiate_only(tenant, p, cancel, gate, ctx)
.await
.is_break()
{
return ControlFlow::Break(()); return ControlFlow::Break(());
} }
(p.period, p.threshold) (p.period, p.threshold)
@@ -181,7 +175,6 @@ impl Timeline {
async fn eviction_iteration_threshold( async fn eviction_iteration_threshold(
self: &Arc<Self>, self: &Arc<Self>,
tenant: &Tenant,
p: &EvictionPolicyLayerAccessThreshold, p: &EvictionPolicyLayerAccessThreshold,
cancel: &CancellationToken, cancel: &CancellationToken,
gate: &GateGuard, gate: &GateGuard,
@@ -200,10 +193,7 @@ impl Timeline {
_ = self.cancel.cancelled() => return ControlFlow::Break(()), _ = self.cancel.cancelled() => return ControlFlow::Break(()),
}; };
match self match self.imitate_layer_accesses(p, cancel, gate, ctx).await {
.imitate_layer_accesses(tenant, p, cancel, gate, ctx)
.await
{
ControlFlow::Break(()) => return ControlFlow::Break(()), ControlFlow::Break(()) => return ControlFlow::Break(()),
ControlFlow::Continue(()) => (), ControlFlow::Continue(()) => (),
} }
@@ -235,18 +225,24 @@ impl Timeline {
{ {
let guard = self.layers.read().await; let guard = self.layers.read().await;
let layers = guard.layer_map(); let layers = guard.layer_map();
for layer in layers.iter_historic_layers() { for hist_layer in layers.iter_historic_layers() {
let layer = guard.get_from_desc(&layer); let hist_layer = guard.get_from_desc(&hist_layer);
// guard against eviction while we inspect it; it might be that eviction_task and // guard against eviction while we inspect it; it might be that eviction_task and
// disk_usage_eviction_task both select the same layers to be evicted, and // disk_usage_eviction_task both select the same layers to be evicted, and
// seemingly free up double the space. both succeeding is of no consequence. // seemingly free up double the space. both succeeding is of no consequence.
let guard = match hist_layer.keep_resident().await {
Ok(Some(l)) => l,
Ok(None) => continue,
Err(e) => {
// these should not happen, but we cannot make them statically impossible right
// now.
tracing::warn!(layer=%hist_layer, "failed to keep the layer resident: {e:#}");
continue;
}
};
if !layer.is_likely_resident() { let last_activity_ts = hist_layer.access_stats().latest_activity_or_now();
continue;
}
let last_activity_ts = layer.access_stats().latest_activity_or_now();
let no_activity_for = match now.duration_since(last_activity_ts) { let no_activity_for = match now.duration_since(last_activity_ts) {
Ok(d) => d, Ok(d) => d,
@@ -269,8 +265,9 @@ impl Timeline {
continue; continue;
} }
}; };
let layer = guard.drop_eviction_guard();
if no_activity_for > p.threshold { if no_activity_for > p.threshold {
// this could cause a lot of allocations in some cases
js.spawn(async move { js.spawn(async move {
layer layer
.evict_and_wait(std::time::Duration::from_secs(5)) .evict_and_wait(std::time::Duration::from_secs(5))
@@ -325,7 +322,6 @@ impl Timeline {
/// disk usage based eviction task. /// disk usage based eviction task.
async fn imitiate_only( async fn imitiate_only(
self: &Arc<Self>, self: &Arc<Self>,
tenant: &Tenant,
p: &EvictionPolicyLayerAccessThreshold, p: &EvictionPolicyLayerAccessThreshold,
cancel: &CancellationToken, cancel: &CancellationToken,
gate: &GateGuard, gate: &GateGuard,
@@ -342,8 +338,7 @@ impl Timeline {
_ = self.cancel.cancelled() => return ControlFlow::Break(()), _ = self.cancel.cancelled() => return ControlFlow::Break(()),
}; };
self.imitate_layer_accesses(tenant, p, cancel, gate, ctx) self.imitate_layer_accesses(p, cancel, gate, ctx).await
.await
} }
/// If we evict layers but keep cached values derived from those layers, then /// If we evict layers but keep cached values derived from those layers, then
@@ -373,7 +368,6 @@ impl Timeline {
#[instrument(skip_all)] #[instrument(skip_all)]
async fn imitate_layer_accesses( async fn imitate_layer_accesses(
&self, &self,
tenant: &Tenant,
p: &EvictionPolicyLayerAccessThreshold, p: &EvictionPolicyLayerAccessThreshold,
cancel: &CancellationToken, cancel: &CancellationToken,
gate: &GateGuard, gate: &GateGuard,
@@ -409,11 +403,17 @@ impl Timeline {
// Make one of the tenant's timelines draw the short straw and run the calculation. // Make one of the tenant's timelines draw the short straw and run the calculation.
// The others wait until the calculation is done so that they take into account the // The others wait until the calculation is done so that they take into account the
// imitated accesses that the winner made. // imitated accesses that the winner made.
let tenant = match crate::tenant::mgr::get_tenant(self.tenant_shard_id, true) {
Ok(t) => t,
Err(_) => {
return ControlFlow::Break(());
}
};
let mut state = tenant.eviction_task_tenant_state.lock().await; let mut state = tenant.eviction_task_tenant_state.lock().await;
match state.last_layer_access_imitation { match state.last_layer_access_imitation {
Some(ts) if ts.elapsed() < inter_imitate_period => { /* no need to run */ } Some(ts) if ts.elapsed() < inter_imitate_period => { /* no need to run */ }
_ => { _ => {
self.imitate_synthetic_size_calculation_worker(tenant, cancel, ctx) self.imitate_synthetic_size_calculation_worker(&tenant, cancel, ctx)
.await; .await;
state.last_layer_access_imitation = Some(tokio::time::Instant::now()); state.last_layer_access_imitation = Some(tokio::time::Instant::now());
} }
@@ -487,7 +487,7 @@ impl Timeline {
#[instrument(skip_all)] #[instrument(skip_all)]
async fn imitate_synthetic_size_calculation_worker( async fn imitate_synthetic_size_calculation_worker(
&self, &self,
tenant: &Tenant, tenant: &Arc<Tenant>,
cancel: &CancellationToken, cancel: &CancellationToken,
ctx: &RequestContext, ctx: &RequestContext,
) { ) {
@@ -1,4 +1,5 @@
use anyhow::{bail, ensure, Context, Result}; use anyhow::{bail, ensure, Context, Result};
use futures::StreamExt;
use pageserver_api::shard::TenantShardId; use pageserver_api::shard::TenantShardId;
use std::{collections::HashMap, sync::Arc}; use std::{collections::HashMap, sync::Arc};
use tracing::trace; use tracing::trace;
@@ -240,16 +241,29 @@ impl LayerManager {
layer.delete_on_drop(); layer.delete_on_drop();
} }
pub(crate) fn likely_resident_layers(&self) -> impl Iterator<Item = Layer> + '_ { pub(crate) fn resident_layers(&self) -> impl futures::stream::Stream<Item = Layer> + '_ {
// for small layer maps, we most likely have all resident, but for larger more are likely // for small layer maps, we most likely have all resident, but for larger more are likely
// to be evicted assuming lots of layers correlated with longer lifespan. // to be evicted assuming lots of layers correlated with longer lifespan.
self.layer_map().iter_historic_layers().filter_map(|desc| { let layers = self
self.layer_fmgr .layer_map()
.0 .iter_historic_layers()
.get(&desc.key()) .map(|desc| self.get_from_desc(&desc));
.filter(|l| l.is_likely_resident())
.cloned() let layers = futures::stream::iter(layers);
layers.filter_map(|layer| async move {
// TODO(#6028): this query does not really need to see the ResidentLayer
match layer.keep_resident().await {
Ok(Some(layer)) => Some(layer.drop_eviction_guard()),
Ok(None) => None,
Err(e) => {
// these should not happen, but we cannot make them statically impossible right
// now.
tracing::warn!(%layer, "failed to keep the layer resident: {e:#}");
None
}
}
}) })
} }
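The `resident_layers` stream above keeps only layers whose residency check comes back `Ok(Some(_))` and logs-and-skips everything else. The same shape with plain iterators and stand-in types (hypothetical; the real code is async and works with eviction guards, not integers):

```rust
// Keep only successfully-pinned layers, mirroring the filter_map in the diff.
fn filter_resident(results: Vec<Result<Option<u32>, String>>) -> Vec<u32> {
    results
        .into_iter()
        .filter_map(|r| match r {
            Ok(Some(layer)) => Some(layer),
            // Layer was evicted in the meantime: not an error, just skip it.
            Ok(None) => None,
            Err(e) => {
                // Should not happen, but cannot yet be made statically
                // impossible; warn and move on, as the diff does.
                eprintln!("failed to keep the layer resident: {e}");
                None
            }
        })
        .collect()
}

fn main() {
    let results = vec![Ok(Some(1)), Ok(None), Err("io".to_string()), Ok(Some(2))];
    assert_eq!(filter_resident(results), vec![1, 2]);
    println!("ok");
}
```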
@@ -86,7 +86,6 @@ impl<'t> UninitializedTimeline<'t> {
/// Prepares timeline data by loading it from the basebackup archive. /// Prepares timeline data by loading it from the basebackup archive.
pub(crate) async fn import_basebackup_from_tar( pub(crate) async fn import_basebackup_from_tar(
self, self,
tenant: Arc<Tenant>,
copyin_read: &mut (impl tokio::io::AsyncRead + Send + Sync + Unpin), copyin_read: &mut (impl tokio::io::AsyncRead + Send + Sync + Unpin),
base_lsn: Lsn, base_lsn: Lsn,
broker_client: storage_broker::BrokerClientChannel, broker_client: storage_broker::BrokerClientChannel,
@@ -115,7 +114,7 @@ impl<'t> UninitializedTimeline<'t> {
// All the data has been imported. Insert the Timeline into the tenant's timelines map // All the data has been imported. Insert the Timeline into the tenant's timelines map
let tl = self.finish_creation()?; let tl = self.finish_creation()?;
tl.activate(tenant, broker_client, None, ctx); tl.activate(broker_client, None, ctx);
Ok(tl) Ok(tl)
} }
@@ -33,9 +33,11 @@ use crate::tenant::timeline::walreceiver::connection_manager::{
use pageserver_api::shard::TenantShardId; use pageserver_api::shard::TenantShardId;
use std::future::Future; use std::future::Future;
use std::num::NonZeroU64; use std::num::NonZeroU64;
use std::ops::ControlFlow;
use std::sync::Arc; use std::sync::Arc;
use std::time::Duration; use std::time::Duration;
use storage_broker::BrokerClientChannel; use storage_broker::BrokerClientChannel;
use tokio::select;
use tokio::sync::watch; use tokio::sync::watch;
use tokio_util::sync::CancellationToken; use tokio_util::sync::CancellationToken;
use tracing::*; use tracing::*;
@@ -89,27 +91,31 @@ impl WalReceiver {
async move { async move {
debug_assert_current_span_has_tenant_and_timeline_id(); debug_assert_current_span_has_tenant_and_timeline_id();
debug!("WAL receiver manager started, connecting to broker"); debug!("WAL receiver manager started, connecting to broker");
let cancel = task_mgr::shutdown_token();
let mut connection_manager_state = ConnectionManagerState::new( let mut connection_manager_state = ConnectionManagerState::new(
timeline, timeline,
conf, conf,
); );
while !cancel.is_cancelled() { loop {
let loop_step_result = connection_manager_loop_step( select! {
&mut broker_client, _ = task_mgr::shutdown_watcher() => {
&mut connection_manager_state, trace!("WAL receiver shutdown requested, shutting down");
&walreceiver_ctx,
&cancel,
&loop_status,
).await;
match loop_step_result {
Ok(()) => continue,
Err(_cancelled) => {
trace!("Connection manager loop ended, shutting down");
break; break;
} },
loop_step_result = connection_manager_loop_step(
&mut broker_client,
&mut connection_manager_state,
&walreceiver_ctx,
&loop_status,
) => match loop_step_result {
ControlFlow::Continue(()) => continue,
ControlFlow::Break(()) => {
trace!("Connection manager loop ended, shutting down");
break;
}
},
} }
} }
connection_manager_state.shutdown().await; connection_manager_state.shutdown().await;
*loop_status.write().unwrap() = None; *loop_status.write().unwrap() = None;
Ok(()) Ok(())
@@ -191,9 +197,6 @@ impl<E: Clone> TaskHandle<E> {
} }
} }
/// # Cancel-Safety
///
/// Cancellation-safe.
async fn next_task_event(&mut self) -> TaskEvent<E> { async fn next_task_event(&mut self) -> TaskEvent<E> {
match self.events_receiver.changed().await { match self.events_receiver.changed().await {
Ok(()) => TaskEvent::Update((self.events_receiver.borrow()).clone()), Ok(()) => TaskEvent::Update((self.events_receiver.borrow()).clone()),
@@ -17,7 +17,7 @@ use crate::metrics::{
WALRECEIVER_ACTIVE_MANAGERS, WALRECEIVER_BROKER_UPDATES, WALRECEIVER_CANDIDATES_ADDED, WALRECEIVER_ACTIVE_MANAGERS, WALRECEIVER_BROKER_UPDATES, WALRECEIVER_CANDIDATES_ADDED,
WALRECEIVER_CANDIDATES_REMOVED, WALRECEIVER_SWITCHES, WALRECEIVER_CANDIDATES_REMOVED, WALRECEIVER_SWITCHES,
}; };
use crate::task_mgr::TaskKind; use crate::task_mgr::{shutdown_token, TaskKind};
use crate::tenant::{debug_assert_current_span_has_tenant_and_timeline_id, Timeline}; use crate::tenant::{debug_assert_current_span_has_tenant_and_timeline_id, Timeline};
use anyhow::Context; use anyhow::Context;
use chrono::{NaiveDateTime, Utc}; use chrono::{NaiveDateTime, Utc};
@@ -27,7 +27,7 @@ use storage_broker::proto::SafekeeperTimelineInfo;
use storage_broker::proto::SubscribeSafekeeperInfoRequest; use storage_broker::proto::SubscribeSafekeeperInfoRequest;
use storage_broker::proto::TenantTimelineId as ProtoTenantTimelineId; use storage_broker::proto::TenantTimelineId as ProtoTenantTimelineId;
use storage_broker::{BrokerClientChannel, Code, Streaming}; use storage_broker::{BrokerClientChannel, Code, Streaming};
use tokio_util::sync::CancellationToken; use tokio::select;
use tracing::*; use tracing::*;
use postgres_connection::PgConnectionConfig; use postgres_connection::PgConnectionConfig;
@@ -45,33 +45,27 @@ use super::{
TaskEvent, TaskHandle, TaskEvent, TaskHandle,
}; };
pub(crate) struct Cancelled;
/// Attempts to subscribe for timeline updates, pushed by safekeepers into the broker. /// Attempts to subscribe for timeline updates, pushed by safekeepers into the broker.
/// Based on the updates, decides whether to start, keep or stop a WAL receiver task. /// Based on the updates, decides whether to start, keep or stop a WAL receiver task.
/// If storage broker subscription is cancelled, exits. /// If storage broker subscription is cancelled, exits.
///
/// # Cancel-Safety
///
/// Not cancellation-safe. Use `cancel` token to request cancellation.
pub(super) async fn connection_manager_loop_step( pub(super) async fn connection_manager_loop_step(
broker_client: &mut BrokerClientChannel, broker_client: &mut BrokerClientChannel,
connection_manager_state: &mut ConnectionManagerState, connection_manager_state: &mut ConnectionManagerState,
ctx: &RequestContext, ctx: &RequestContext,
cancel: &CancellationToken,
manager_status: &std::sync::RwLock<Option<ConnectionManagerStatus>>, manager_status: &std::sync::RwLock<Option<ConnectionManagerStatus>>,
) -> Result<(), Cancelled> { ) -> ControlFlow<(), ()> {
match tokio::select! { match connection_manager_state
_ = cancel.cancelled() => { return Err(Cancelled); }, .timeline
st = connection_manager_state.timeline.wait_to_become_active(ctx) => { st } .wait_to_become_active(ctx)
} { .await
{
Ok(()) => {} Ok(()) => {}
Err(new_state) => { Err(new_state) => {
debug!( debug!(
?new_state, ?new_state,
"state changed, stopping wal connection manager loop" "state changed, stopping wal connection manager loop"
); );
return Err(Cancelled); return ControlFlow::Break(());
} }
} }
@@ -92,7 +86,7 @@ pub(super) async fn connection_manager_loop_step(
// Subscribe to the broker updates. Stream shares underlying TCP connection // Subscribe to the broker updates. Stream shares underlying TCP connection
// with other streams on this client (other connection managers). When // with other streams on this client (other connection managers). When
// object goes out of scope, stream finishes in drop() automatically. // object goes out of scope, stream finishes in drop() automatically.
let mut broker_subscription = subscribe_for_timeline_updates(broker_client, id, cancel).await?; let mut broker_subscription = subscribe_for_timeline_updates(broker_client, id).await;
debug!("Subscribed for broker timeline updates"); debug!("Subscribed for broker timeline updates");
loop { loop {
@@ -100,7 +94,6 @@ pub(super) async fn connection_manager_loop_step(
// These things are happening concurrently: // These things are happening concurrently:
// //
// - cancellation request
// - keep receiving WAL on the current connection // - keep receiving WAL on the current connection
// - if the shared state says we need to change connection, disconnect and return // - if the shared state says we need to change connection, disconnect and return
// - this runs in a separate task and we receive updates via a watch channel // - this runs in a separate task and we receive updates via a watch channel
@@ -108,11 +101,7 @@ pub(super) async fn connection_manager_loop_step(
// - receive updates from broker // - receive updates from broker
// - this might change the current desired connection // - this might change the current desired connection
// - timeline state changes to something that does not allow walreceiver to run concurrently // - timeline state changes to something that does not allow walreceiver to run concurrently
select! {
// NB: make sure each of the select expressions are cancellation-safe
// (no need for arms to be cancellation-safe).
tokio::select! {
_ = cancel.cancelled() => { return Err(Cancelled); }
Some(wal_connection_update) = async { Some(wal_connection_update) = async {
match connection_manager_state.wal_connection.as_mut() { match connection_manager_state.wal_connection.as_mut() {
Some(wal_connection) => Some(wal_connection.connection_task.next_task_event().await), Some(wal_connection) => Some(wal_connection.connection_task.next_task_event().await),
@@ -144,7 +133,7 @@ pub(super) async fn connection_manager_loop_step(
}, },
// Got a new update from the broker // Got a new update from the broker
broker_update = broker_subscription.message() /* TODO: review cancellation-safety */ => { broker_update = broker_subscription.message() => {
match broker_update { match broker_update {
Ok(Some(broker_update)) => connection_manager_state.register_timeline_update(broker_update), Ok(Some(broker_update)) => connection_manager_state.register_timeline_update(broker_update),
Err(status) => { Err(status) => {
@@ -158,17 +147,16 @@ pub(super) async fn connection_manager_loop_step(
warn!("broker subscription failed: {status}"); warn!("broker subscription failed: {status}");
} }
} }
return Ok(()); return ControlFlow::Continue(());
} }
Ok(None) => { Ok(None) => {
error!("broker subscription stream ended"); // can't happen error!("broker subscription stream ended"); // can't happen
return Ok(()); return ControlFlow::Continue(());
} }
} }
}, },
new_event = async { new_event = async {
// Reminder: this match arm needs to be cancellation-safe.
loop { loop {
if connection_manager_state.timeline.current_state() == TimelineState::Loading { if connection_manager_state.timeline.current_state() == TimelineState::Loading {
warn!("wal connection manager should only be launched after timeline has become active"); warn!("wal connection manager should only be launched after timeline has become active");
@@ -194,11 +182,11 @@ pub(super) async fn connection_manager_loop_step(
} }
 } => match new_event {
     ControlFlow::Continue(()) => {
-        return Ok(());
+        return ControlFlow::Continue(());
     }
     ControlFlow::Break(()) => {
         debug!("Timeline is no longer active, stopping wal connection manager loop");
-        return Err(Cancelled);
+        return ControlFlow::Break(());
     }
 },
@@ -230,15 +218,16 @@ pub(super) async fn connection_manager_loop_step(
 async fn subscribe_for_timeline_updates(
     broker_client: &mut BrokerClientChannel,
     id: TenantTimelineId,
-    cancel: &CancellationToken,
-) -> Result<Streaming<SafekeeperTimelineInfo>, Cancelled> {
+) -> Streaming<SafekeeperTimelineInfo> {
     let mut attempt = 0;
+    let cancel = shutdown_token();
     loop {
         exponential_backoff(
             attempt,
             DEFAULT_BASE_BACKOFF_SECONDS,
             DEFAULT_MAX_BACKOFF_SECONDS,
-            cancel,
+            &cancel,
         )
         .await;
         attempt += 1;
@@ -252,14 +241,9 @@ async fn subscribe_for_timeline_updates(
             subscription_key: Some(key),
         };
-        match {
-            tokio::select! {
-                r = broker_client.subscribe_safekeeper_info(request) => { r }
-                _ = cancel.cancelled() => { return Err(Cancelled); }
-            }
-        } {
+        match broker_client.subscribe_safekeeper_info(request).await {
             Ok(resp) => {
-                return Ok(resp.into_inner());
+                return resp.into_inner();
             }
             Err(e) => {
                 // Safekeeper nodes can stop pushing timeline updates to the broker, when no new writes happen and
@@ -502,10 +486,6 @@ impl ConnectionManagerState {
     /// Drops the current connection (if any) and updates retry timeout for the next
     /// connection attempt to the same safekeeper.
-    ///
-    /// # Cancel-Safety
-    ///
-    /// Not cancellation-safe.
     async fn drop_old_connection(&mut self, needs_shutdown: bool) {
         let wal_connection = match self.wal_connection.take() {
             Some(wal_connection) => wal_connection,
@@ -513,14 +493,7 @@ impl ConnectionManagerState {
         };
         if needs_shutdown {
-            wal_connection
-                .connection_task
-                .shutdown()
-                // This here is why this function isn't cancellation-safe.
-                // If we got cancelled here, then self.wal_connection is already None and we lose track of the task.
-                // Even if our caller diligently calls Self::shutdown(), it will find a self.wal_connection=None
-                // and thus be ineffective.
-                .await;
+            wal_connection.connection_task.shutdown().await;
         }
         let retry = self
@@ -865,9 +838,6 @@ impl ConnectionManagerState {
         }
     }
-    /// # Cancel-Safety
-    ///
-    /// Not cancellation-safe.
     pub(super) async fn shutdown(mut self) {
         if let Some(wal_connection) = self.wal_connection.take() {
             wal_connection.connection_task.shutdown().await;
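The "Not cancellation-safe" comments in the hunks above describe a concrete hazard: once the connection handle is `take()`n out of `self`, cancellation at the following `.await` loses track of the task for good. A minimal synchronous sketch of that pattern (all names invented for illustration, not Neon's actual types):

```rust
// Hypothetical miniature of drop_old_connection's hazard: `cancelled`
// stands in for the future being dropped at the `.await` between
// `take()` and `shutdown()` completing.
#[derive(Debug)]
pub struct Manager {
    pub wal_connection: Option<&'static str>,
}

impl Manager {
    pub fn drop_old_connection(&mut self, cancelled: bool) -> Option<&'static str> {
        // Handle is moved out of `self` *before* the (simulated) await point.
        let conn = self.wal_connection.take()?;
        if cancelled {
            // At this point `self.wal_connection` is already `None`, so a
            // later `shutdown()` finds nothing to stop: the task leaks.
            return None;
        }
        Some(conn) // shutdown completed normally
    }
}
```

A caller that diligently calls a `shutdown()` afterwards still sees `wal_connection == None`, which is exactly what the deleted comment warns about.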


@@ -389,6 +389,17 @@ pub(super) async fn handle_walreceiver_connection(
         }
     }
+    {
+        // This is a hack. It piggybacks on the keepalive messages sent by the
+        // safekeeper in order to enforce `checkpoint_timeout` on the currently
+        // open layer. This hack doesn't provide a bound on the total size of
+        // in-memory layers on a pageserver. See https://github.com/neondatabase/neon/issues/6916.
+        let mut writer = timeline.writer().await;
+        if let Err(err) = writer.tick().await {
+            warn!("Timeline writer tick failed: {err}");
+        }
+    }
     if let Some(last_lsn) = status_update {
         let timeline_remote_consistent_lsn = timeline
             .get_remote_consistent_lsn_visible()
@@ -437,7 +448,6 @@ pub(super) async fn handle_walreceiver_connection(
             disk_consistent_lsn,
             remote_consistent_lsn,
             replytime: ts,
-            shard_number: timeline.tenant_shard_id.shard_number.0 as u32,
         };
         debug!("neon_status_update {status_update:?}");
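The `writer.tick()` block in the hunk above enforces `checkpoint_timeout` on the open in-memory layer by piggybacking on keepalives. A hedged sketch of the kind of check such a tick can perform (the types and names here are illustrative assumptions, not the pageserver's actual API):

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch: on each safekeeper keepalive, decide whether the
// currently open in-memory layer has outlived `checkpoint_timeout` and
// should be rolled (frozen and flushed) even though no new WAL arrived.
pub struct OpenLayer {
    pub opened_at: Instant,
}

pub fn should_roll(layer: &OpenLayer, checkpoint_timeout: Duration, now: Instant) -> bool {
    now.duration_since(layer.opened_at) >= checkpoint_timeout
}
```

Note this time-based check, as the comment in the diff says, bounds the *age* of an open layer but not the *total size* of in-memory layers across a pageserver.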


@@ -121,16 +121,11 @@ pub(super) enum SetDeletedFlagProgress {
     Successful(NaiveDateTime),
 }
-pub(super) struct UploadQueueStoppedDeletable {
+pub(super) struct UploadQueueStopped {
     pub(super) upload_queue_for_deletion: UploadQueueInitialized,
     pub(super) deleted_at: SetDeletedFlagProgress,
 }
-pub(super) enum UploadQueueStopped {
-    Deletable(UploadQueueStoppedDeletable),
-    Uninitialized,
-}
 #[derive(thiserror::Error, Debug)]
 pub(crate) enum NotInitialized {
     #[error("queue is in state Uninitialized")]
@@ -254,15 +249,12 @@ impl UploadQueue {
         }
     }
-    pub(crate) fn stopped_mut(&mut self) -> anyhow::Result<&mut UploadQueueStoppedDeletable> {
+    pub(crate) fn stopped_mut(&mut self) -> anyhow::Result<&mut UploadQueueStopped> {
         match self {
             UploadQueue::Initialized(_) | UploadQueue::Uninitialized => {
                 anyhow::bail!("queue is in state {}", self.as_str())
             }
-            UploadQueue::Stopped(UploadQueueStopped::Uninitialized) => {
-                anyhow::bail!("queue is in state Stopped(Uninitialized)")
-            }
-            UploadQueue::Stopped(UploadQueueStopped::Deletable(deletable)) => Ok(deletable),
+            UploadQueue::Stopped(stopped) => Ok(stopped),
         }
     }
 }


@@ -782,7 +782,7 @@ where
         }
     }
     // NB: don't use `buf.is_empty()` here; it is from the
-    // `impl Deref for Slice { Target = [u8] }`; the &[u8]
+    // `impl Deref for Slice { Target = [u8] }`; the the &[u8]
     // returned by it only covers the initialized portion of `buf`.
     // Whereas we're interested in ensuring that we filled the entire
     // buffer that the user passed in.


@@ -109,8 +109,6 @@ impl WalIngest {
             self.checkpoint_modified = true;
         }
-        failpoint_support::sleep_millis_async!("wal-ingest-record-sleep");
         match decoded.xl_rmid {
             pg_constants::RM_HEAP_ID | pg_constants::RM_HEAP2_ID => {
                 // Heap AM records need some special handling, because they modify VM pages


@@ -312,7 +312,7 @@ pg_cluster_size(PG_FUNCTION_ARGS)
 {
     int64       size;
-    size = GetNeonCurrentClusterSize();
+    size = GetZenithCurrentClusterSize();
     if (size == 0)
         PG_RETURN_NULL();


@@ -26,8 +26,6 @@ extern void pg_init_libpagestore(void);
 extern void pg_init_walproposer(void);
 extern uint64 BackpressureThrottlingTime(void);
-extern void SetNeonCurrentClusterSize(uint64 size);
-extern uint64 GetNeonCurrentClusterSize(void);
 extern void replication_feedback_get_lsns(XLogRecPtr *writeLsn, XLogRecPtr *flushLsn, XLogRecPtr *applyLsn);
 extern void PGDLLEXPORT WalProposerSync(int argc, char *argv[]);


@@ -1831,7 +1831,7 @@ neon_extend(SMgrRelation reln, ForkNumber forkNum, BlockNumber blkno,
     reln->smgr_relpersistence == RELPERSISTENCE_PERMANENT &&
     !IsAutoVacuumWorkerProcess())
 {
-    uint64      current_size = GetNeonCurrentClusterSize();
+    uint64      current_size = GetZenithCurrentClusterSize();
     if (current_size >= ((uint64) max_cluster_size) * 1024 * 1024)
         ereport(ERROR,
@@ -1912,7 +1912,7 @@ neon_zeroextend(SMgrRelation reln, ForkNumber forkNum, BlockNumber blocknum,
     reln->smgr_relpersistence == RELPERSISTENCE_PERMANENT &&
     !IsAutoVacuumWorkerProcess())
 {
-    uint64      current_size = GetNeonCurrentClusterSize();
+    uint64      current_size = GetZenithCurrentClusterSize();
     if (current_size >= ((uint64) max_cluster_size) * 1024 * 1024)
         ereport(ERROR,


@@ -70,7 +70,7 @@ static bool SendAppendRequests(Safekeeper *sk);
 static bool RecvAppendResponses(Safekeeper *sk);
 static XLogRecPtr CalculateMinFlushLsn(WalProposer *wp);
 static XLogRecPtr GetAcknowledgedByQuorumWALPosition(WalProposer *wp);
-static void HandleSafekeeperResponse(WalProposer *wp, Safekeeper *sk);
+static void HandleSafekeeperResponse(WalProposer *wp);
 static bool AsyncRead(Safekeeper *sk, char **buf, int *buf_size);
 static bool AsyncReadMessage(Safekeeper *sk, AcceptorProposerMessage *anymsg);
 static bool BlockingWrite(Safekeeper *sk, void *msg, size_t msg_size, SafekeeperState success_state);
@@ -1405,6 +1405,7 @@ static bool
 RecvAppendResponses(Safekeeper *sk)
 {
     WalProposer *wp = sk->wp;
+    XLogRecPtr  newCommitLsn;
     bool        readAnything = false;
     while (true)
@@ -1424,8 +1425,6 @@ RecvAppendResponses(Safekeeper *sk)
                LSN_FORMAT_ARGS(sk->appendResponse.commitLsn),
                sk->host, sk->port);
-        readAnything = true;
         if (sk->appendResponse.term > wp->propTerm)
         {
             /*
@@ -1439,28 +1438,35 @@ RecvAppendResponses(Safekeeper *sk)
                     sk->appendResponse.term, wp->propTerm);
         }
-        HandleSafekeeperResponse(wp, sk);
+        readAnything = true;
     }
     if (!readAnything)
         return sk->state == SS_ACTIVE;
+    /* update commit_lsn */
+    newCommitLsn = GetAcknowledgedByQuorumWALPosition(wp);
+    /*
+     * Send the new value to all safekeepers.
+     */
+    if (newCommitLsn > wp->commitLsn)
+    {
+        wp->commitLsn = newCommitLsn;
+        BroadcastAppendRequest(wp);
+    }
+    HandleSafekeeperResponse(wp);
     return sk->state == SS_ACTIVE;
 }
-#define psfeedback_log(fmt, key, ...) \
-    wp_log(DEBUG2, "ParsePageserverFeedbackMessage: %s " fmt, key, __VA_ARGS__)
 /* Parse a PageserverFeedback message, or the PageserverFeedback part of an AppendResponse */
 static void
-ParsePageserverFeedbackMessage(WalProposer *wp, StringInfo reply_message, PageserverFeedback *ps_feedback)
+ParsePageserverFeedbackMessage(WalProposer *wp, StringInfo reply_message, PageserverFeedback *rf)
 {
     uint8       nkeys;
     int         i;
+    int32       len;
-    /* initialize the struct before parsing */
-    memset(ps_feedback, 0, sizeof(PageserverFeedback));
-    ps_feedback->present = true;
     /* get number of custom keys */
     nkeys = pq_getmsgbyte(reply_message);
@@ -1468,52 +1474,66 @@ ParsePageserverFeedbackMessage(WalProposer *wp, StringInfo reply_message, Pagese
     for (i = 0; i < nkeys; i++)
     {
         const char *key = pq_getmsgstring(reply_message);
-        unsigned int value_len = pq_getmsgint(reply_message, sizeof(int32));
         if (strcmp(key, "current_timeline_size") == 0)
         {
-            Assert(value_len == sizeof(int64));
-            ps_feedback->currentClusterSize = pq_getmsgint64(reply_message);
-            psfeedback_log(UINT64_FORMAT, key, ps_feedback->currentClusterSize);
+            pq_getmsgint(reply_message, sizeof(int32));
+            /* read value length */
+            rf->currentClusterSize = pq_getmsgint64(reply_message);
+            wp_log(DEBUG2, "ParsePageserverFeedbackMessage: current_timeline_size %lu",
+                   rf->currentClusterSize);
         }
         else if ((strcmp(key, "ps_writelsn") == 0) || (strcmp(key, "last_received_lsn") == 0))
         {
-            Assert(value_len == sizeof(int64));
-            ps_feedback->last_received_lsn = pq_getmsgint64(reply_message);
-            psfeedback_log("%X/%X", key, LSN_FORMAT_ARGS(ps_feedback->last_received_lsn));
+            pq_getmsgint(reply_message, sizeof(int32));
+            /* read value length */
+            rf->last_received_lsn = pq_getmsgint64(reply_message);
+            wp_log(DEBUG2, "ParsePageserverFeedbackMessage: last_received_lsn %X/%X",
+                   LSN_FORMAT_ARGS(rf->last_received_lsn));
         }
         else if ((strcmp(key, "ps_flushlsn") == 0) || (strcmp(key, "disk_consistent_lsn") == 0))
         {
-            Assert(value_len == sizeof(int64));
-            ps_feedback->disk_consistent_lsn = pq_getmsgint64(reply_message);
-            psfeedback_log("%X/%X", key, LSN_FORMAT_ARGS(ps_feedback->disk_consistent_lsn));
+            pq_getmsgint(reply_message, sizeof(int32));
+            /* read value length */
+            rf->disk_consistent_lsn = pq_getmsgint64(reply_message);
+            wp_log(DEBUG2, "ParsePageserverFeedbackMessage: disk_consistent_lsn %X/%X",
+                   LSN_FORMAT_ARGS(rf->disk_consistent_lsn));
         }
         else if ((strcmp(key, "ps_applylsn") == 0) || (strcmp(key, "remote_consistent_lsn") == 0))
         {
-            Assert(value_len == sizeof(int64));
-            ps_feedback->remote_consistent_lsn = pq_getmsgint64(reply_message);
-            psfeedback_log("%X/%X", key, LSN_FORMAT_ARGS(ps_feedback->remote_consistent_lsn));
+            pq_getmsgint(reply_message, sizeof(int32));
+            /* read value length */
+            rf->remote_consistent_lsn = pq_getmsgint64(reply_message);
+            wp_log(DEBUG2, "ParsePageserverFeedbackMessage: remote_consistent_lsn %X/%X",
+                   LSN_FORMAT_ARGS(rf->remote_consistent_lsn));
         }
         else if ((strcmp(key, "ps_replytime") == 0) || (strcmp(key, "replytime") == 0))
        {
-            Assert(value_len == sizeof(int64));
-            ps_feedback->replytime = pq_getmsgint64(reply_message);
-            psfeedback_log("%s", key, timestamptz_to_str(ps_feedback->replytime));
-        }
-        else if (strcmp(key, "shard_number") == 0)
-        {
-            Assert(value_len == sizeof(uint32));
-            ps_feedback->shard_number = pq_getmsgint(reply_message, sizeof(uint32));
-            psfeedback_log("%u", key, ps_feedback->shard_number);
+            pq_getmsgint(reply_message, sizeof(int32));
+            /* read value length */
+            rf->replytime = pq_getmsgint64(reply_message);
+            {
+                char       *replyTimeStr;
+                /* Copy because timestamptz_to_str returns a static buffer */
+                replyTimeStr = pstrdup(timestamptz_to_str(rf->replytime));
+                wp_log(DEBUG2, "ParsePageserverFeedbackMessage: replytime %lu reply_time: %s",
+                       rf->replytime, replyTimeStr);
+                pfree(replyTimeStr);
+            }
         }
         else
         {
+            len = pq_getmsgint(reply_message, sizeof(int32));
+            /* read value length */
             /*
              * Skip unknown keys to support backward compatibile protocol
              * changes
             */
-            wp_log(LOG, "ParsePageserverFeedbackMessage: unknown key: %s len %d", key, value_len);
-            pq_getmsgbytes(reply_message, value_len);
+            wp_log(LOG, "ParsePageserverFeedbackMessage: unknown key: %s len %d", key, len);
+            pq_getmsgbytes(reply_message, len);
         };
     }
 }
@@ -1610,30 +1630,12 @@ GetDonor(WalProposer *wp, XLogRecPtr *donor_lsn)
     return donor;
 }
-/*
- * Process AppendResponse message from safekeeper.
- */
 static void
-HandleSafekeeperResponse(WalProposer *wp, Safekeeper *sk)
+HandleSafekeeperResponse(WalProposer *wp)
 {
     XLogRecPtr  candidateTruncateLsn;
-    XLogRecPtr  newCommitLsn;
-    newCommitLsn = GetAcknowledgedByQuorumWALPosition(wp);
-    if (newCommitLsn > wp->commitLsn)
-    {
-        wp->commitLsn = newCommitLsn;
-        /* Send new value to all safekeepers. */
-        BroadcastAppendRequest(wp);
-    }
-    /*
-     * Unlock syncrep waiters, update ps_feedback, CheckGracefulShutdown().
-     * The last one will terminate the process if the shutdown is requested
-     * and WAL is committed by the quorum. BroadcastAppendRequest() should be
-     * called to notify safekeepers about the new commitLsn.
-     */
-    wp->api.process_safekeeper_feedback(wp, sk);
+    wp->api.process_safekeeper_feedback(wp);
     /*
      * Try to advance truncateLsn -- the last record flushed to all
@@ -1809,10 +1811,8 @@ AsyncReadMessage(Safekeeper *sk, AcceptorProposerMessage *anymsg)
             msg->hs.ts = pq_getmsgint64_le(&s);
             msg->hs.xmin.value = pq_getmsgint64_le(&s);
             msg->hs.catalog_xmin.value = pq_getmsgint64_le(&s);
-            if (s.len > s.cursor)
-                ParsePageserverFeedbackMessage(wp, &s, &msg->ps_feedback);
-            else
-                msg->ps_feedback.present = false;
+            if (buf_size > APPENDRESPONSE_FIXEDPART_SIZE)
+                ParsePageserverFeedbackMessage(wp, &s, &msg->rf);
             pq_getmsgend(&s);
             return true;
         }
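The `ParsePageserverFeedbackMessage` hunks above read a count byte followed by key/value pairs, where each pair is a NUL-terminated key, a length, and the value bytes, with unknown keys skipped for protocol compatibility. A simplified, hypothetical Rust sketch of that wire format (field names and the big-endian u32 length are illustrative assumptions):

```rust
// Sketch of a key/value feedback payload: one count byte, then per entry a
// NUL-terminated key, a u32 (big-endian) value length, and the value bytes.
// Returns None on any truncated or malformed input.
fn parse_feedback(buf: &[u8]) -> Option<Vec<(String, Vec<u8>)>> {
    let (&nkeys, mut rest) = buf.split_first()?;
    let mut out = Vec::new();
    for _ in 0..nkeys {
        // Key: bytes up to the NUL terminator.
        let nul = rest.iter().position(|&b| b == 0)?;
        let key = String::from_utf8(rest[..nul].to_vec()).ok()?;
        rest = &rest[nul + 1..];
        // Value length prefix, then the value itself.
        let len = u32::from_be_bytes(rest.get(..4)?.try_into().ok()?) as usize;
        rest = &rest[4..];
        let value = rest.get(..len)?.to_vec();
        rest = &rest[len..];
        // Unknown keys are kept here; a real parser may log and skip them.
        out.push((key, value));
    }
    Some(out)
}
```

The length prefix is what lets a reader skip keys it does not recognize, which is the backward-compatibility property the diff's `else` branch preserves.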


@@ -10,7 +10,6 @@
 #include "libpqwalproposer.h"
 #include "neon_walreader.h"
-#include "pagestore_client.h"
 #define SK_MAGIC 0xCafeCeefu
 #define SK_PROTOCOL_VERSION 2
@@ -270,8 +269,6 @@ typedef struct HotStandbyFeedback
 typedef struct PageserverFeedback
 {
-    /* true if AppendResponse contains this feedback */
-    bool        present;
     /* current size of the timeline on pageserver */
     uint64      currentClusterSize;
     /* standby_status_update fields that safekeeper received from pageserver */
@@ -279,22 +276,14 @@ typedef struct PageserverFeedback
     XLogRecPtr  disk_consistent_lsn;
     XLogRecPtr  remote_consistent_lsn;
     TimestampTz replytime;
-    uint32      shard_number;
 } PageserverFeedback;
 typedef struct WalproposerShmemState
 {
     slock_t     mutex;
+    PageserverFeedback feedback;
     term_t      mineLastElectedTerm;
     pg_atomic_uint64 backpressureThrottlingTime;
-    pg_atomic_uint64 currentClusterSize;
-    /* last feedback from each shard */
-    PageserverFeedback shard_ps_feedback[MAX_SHARDS];
-    int         num_shards;
-    /* aggregated feedback with min LSNs across shards */
-    PageserverFeedback min_ps_feedback;
 } WalproposerShmemState;
 /*
@@ -318,12 +307,12 @@ typedef struct AppendResponse
     /* Feedback received from pageserver includes standby_status_update fields */
     /* and custom neon feedback. */
     /* This part of the message is extensible. */
-    PageserverFeedback ps_feedback;
+    PageserverFeedback rf;
 } AppendResponse;
 /* PageserverFeedback is extensible part of the message that is parsed separately */
 /* Other fields are fixed part */
-#define APPENDRESPONSE_FIXEDPART_SIZE 56
+#define APPENDRESPONSE_FIXEDPART_SIZE offsetof(AppendResponse, rf)
 struct WalProposer;
 typedef struct WalProposer WalProposer;
@@ -571,11 +560,11 @@ typedef struct walproposer_api
     void        (*finish_sync_safekeepers) (WalProposer *wp, XLogRecPtr lsn);
     /*
-     * Called after every AppendResponse from the safekeeper. Used to propagate
+     * Called after every new message from the safekeeper. Used to propagate
      * backpressure feedback and to confirm WAL persistence (has been commited
      * on the quorum of safekeepers).
     */
-    void        (*process_safekeeper_feedback) (WalProposer *wp, Safekeeper *sk);
+    void        (*process_safekeeper_feedback) (WalProposer *wp);
     /*
      * Write a log message to the internal log processor. This is used only


@@ -63,6 +63,7 @@ char *wal_acceptors_list = "";
 int         wal_acceptor_reconnect_timeout = 1000;
 int         wal_acceptor_connection_timeout = 10000;
+static AppendResponse quorumFeedback;
 static WalproposerShmemState *walprop_shared;
 static WalProposerConfig walprop_config;
 static XLogRecPtr sentPtr = InvalidXLogRecPtr;
@@ -70,10 +71,6 @@ static const walproposer_api walprop_pg;
 static volatile sig_atomic_t got_SIGUSR2 = false;
 static bool reported_sigusr2 = false;
-static XLogRecPtr standby_flush_lsn = InvalidXLogRecPtr;
-static XLogRecPtr standby_apply_lsn = InvalidXLogRecPtr;
-static HotStandbyFeedback agg_hs_feedback;
 static void nwp_shmem_startup_hook(void);
 static void nwp_register_gucs(void);
 static void nwp_prepare_shmem(void);
@@ -282,7 +279,6 @@ WalproposerShmemInit(void)
     memset(walprop_shared, 0, WalproposerShmemSize());
     SpinLockInit(&walprop_shared->mutex);
     pg_atomic_init_u64(&walprop_shared->backpressureThrottlingTime, 0);
-    pg_atomic_init_u64(&walprop_shared->currentClusterSize, 0);
 }
 LWLockRelease(AddinShmemInitLock);
@@ -406,58 +402,21 @@ walprop_pg_get_shmem_state(WalProposer *wp)
     return walprop_shared;
 }
-/*
- * Record new ps_feedback in the array with shards and update min_feedback.
- */
-static PageserverFeedback
-record_pageserver_feedback(PageserverFeedback *ps_feedback)
+static void
+replication_feedback_set(PageserverFeedback *rf)
 {
-    PageserverFeedback min_feedback;
-    Assert(ps_feedback->present);
-    Assert(ps_feedback->shard_number < MAX_SHARDS);
     SpinLockAcquire(&walprop_shared->mutex);
-    /* Update the number of shards */
-    if (ps_feedback->shard_number + 1 > walprop_shared->num_shards)
-        walprop_shared->num_shards = ps_feedback->shard_number + 1;
-    /* Update the feedback */
-    memcpy(&walprop_shared->shard_ps_feedback[ps_feedback->shard_number], ps_feedback, sizeof(PageserverFeedback));
-    /* Calculate min LSNs */
-    memcpy(&min_feedback, ps_feedback, sizeof(PageserverFeedback));
-    for (int i = 0; i < walprop_shared->num_shards; i++)
-    {
-        PageserverFeedback *feedback = &walprop_shared->shard_ps_feedback[i];
-        if (feedback->present)
-        {
-            if (min_feedback.last_received_lsn == InvalidXLogRecPtr || feedback->last_received_lsn < min_feedback.last_received_lsn)
-                min_feedback.last_received_lsn = feedback->last_received_lsn;
-            if (min_feedback.disk_consistent_lsn == InvalidXLogRecPtr || feedback->disk_consistent_lsn < min_feedback.disk_consistent_lsn)
-                min_feedback.disk_consistent_lsn = feedback->disk_consistent_lsn;
-            if (min_feedback.remote_consistent_lsn == InvalidXLogRecPtr || feedback->remote_consistent_lsn < min_feedback.remote_consistent_lsn)
-                min_feedback.remote_consistent_lsn = feedback->remote_consistent_lsn;
-        }
-    }
-    /* Copy min_feedback back to shmem */
-    memcpy(&walprop_shared->min_ps_feedback, &min_feedback, sizeof(PageserverFeedback));
+    memcpy(&walprop_shared->feedback, rf, sizeof(PageserverFeedback));
     SpinLockRelease(&walprop_shared->mutex);
-    return min_feedback;
 }
 void
 replication_feedback_get_lsns(XLogRecPtr *writeLsn, XLogRecPtr *flushLsn, XLogRecPtr *applyLsn)
 {
     SpinLockAcquire(&walprop_shared->mutex);
-    *writeLsn = walprop_shared->min_ps_feedback.last_received_lsn;
-    *flushLsn = walprop_shared->min_ps_feedback.disk_consistent_lsn;
-    *applyLsn = walprop_shared->min_ps_feedback.remote_consistent_lsn;
+    *writeLsn = walprop_shared->feedback.last_received_lsn;
+    *flushLsn = walprop_shared->feedback.disk_consistent_lsn;
+    *applyLsn = walprop_shared->feedback.remote_consistent_lsn;
     SpinLockRelease(&walprop_shared->mutex);
 }
@@ -550,10 +509,9 @@ walprop_pg_init_standalone_sync_safekeepers(void)
 static void
 walprop_sigusr2(SIGNAL_ARGS)
 {
-    int         save_errno = errno;
     got_SIGUSR2 = true;
     SetLatch(MyLatch);
-    errno = save_errno;
 }
 static void
@@ -1911,6 +1869,39 @@ CheckGracefulShutdown(WalProposer *wp)
     }
 }
+/*
+ * Choose most advanced PageserverFeedback and set it to *rf.
+ */
+static void
+GetLatestNeonFeedback(PageserverFeedback *rf, WalProposer *wp)
+{
+    int         latest_safekeeper = 0;
+    XLogRecPtr  last_received_lsn = InvalidXLogRecPtr;
+    for (int i = 0; i < wp->n_safekeepers; i++)
+    {
+        if (wp->safekeeper[i].appendResponse.rf.last_received_lsn > last_received_lsn)
+        {
+            latest_safekeeper = i;
+            last_received_lsn = wp->safekeeper[i].appendResponse.rf.last_received_lsn;
+        }
+    }
+    rf->currentClusterSize = wp->safekeeper[latest_safekeeper].appendResponse.rf.currentClusterSize;
+    rf->last_received_lsn = wp->safekeeper[latest_safekeeper].appendResponse.rf.last_received_lsn;
+    rf->disk_consistent_lsn = wp->safekeeper[latest_safekeeper].appendResponse.rf.disk_consistent_lsn;
+    rf->remote_consistent_lsn = wp->safekeeper[latest_safekeeper].appendResponse.rf.remote_consistent_lsn;
+    rf->replytime = wp->safekeeper[latest_safekeeper].appendResponse.rf.replytime;
+    wpg_log(DEBUG2, "GetLatestNeonFeedback: currentClusterSize %lu,"
+            " last_received_lsn %X/%X, disk_consistent_lsn %X/%X, remote_consistent_lsn %X/%X, replytime %lu",
+            rf->currentClusterSize,
+            LSN_FORMAT_ARGS(rf->last_received_lsn),
+            LSN_FORMAT_ARGS(rf->disk_consistent_lsn),
+            LSN_FORMAT_ARGS(rf->remote_consistent_lsn),
+            rf->replytime);
+}
 /*
  * Combine hot standby feedbacks from all safekeepers.
  */
@@ -1958,38 +1949,26 @@ CombineHotStanbyFeedbacks(HotStandbyFeedback *hs, WalProposer *wp)
  * None of that is functional in sync-safekeepers.
  */
 static void
-walprop_pg_process_safekeeper_feedback(WalProposer *wp, Safekeeper *sk)
+walprop_pg_process_safekeeper_feedback(WalProposer *wp)
 {
     HotStandbyFeedback hsFeedback;
-    bool        needToAdvanceSlot = false;
+    XLogRecPtr  oldDiskConsistentLsn;
     if (wp->config->syncSafekeepers)
         return;
-    /* handle fresh ps_feedback */
-    if (sk->appendResponse.ps_feedback.present)
-    {
-        PageserverFeedback min_feedback = record_pageserver_feedback(&sk->appendResponse.ps_feedback);
-        /* Only one main shard sends non-zero currentClusterSize */
-        if (sk->appendResponse.ps_feedback.currentClusterSize > 0)
-            SetNeonCurrentClusterSize(sk->appendResponse.ps_feedback.currentClusterSize);
-        if (min_feedback.disk_consistent_lsn != standby_apply_lsn)
-        {
-            standby_apply_lsn = min_feedback.disk_consistent_lsn;
-            needToAdvanceSlot = true;
-        }
-    }
-    if (wp->commitLsn > standby_flush_lsn)
-    {
-        standby_flush_lsn = wp->commitLsn;
-        needToAdvanceSlot = true;
-    }
-    if (needToAdvanceSlot)
+    oldDiskConsistentLsn = quorumFeedback.rf.disk_consistent_lsn;
+    /* Get PageserverFeedback fields from the most advanced safekeeper */
+    GetLatestNeonFeedback(&quorumFeedback.rf, wp);
+    replication_feedback_set(&quorumFeedback.rf);
+    SetZenithCurrentClusterSize(quorumFeedback.rf.currentClusterSize);
+    if (wp->commitLsn > quorumFeedback.flushLsn || oldDiskConsistentLsn != quorumFeedback.rf.disk_consistent_lsn)
     {
+        if (wp->commitLsn > quorumFeedback.flushLsn)
+            quorumFeedback.flushLsn = wp->commitLsn;
         /*
          * Advance the replication slot to commitLsn. WAL before it is
          * hardened and will be fetched from one of safekeepers by
@@ -1998,23 +1977,23 @@ walprop_pg_process_safekeeper_feedback(WalProposer *wp, Safekeeper *sk)
          * Also wakes up syncrep waiters.
          */
         ProcessStandbyReply(
-            /* write_lsn - This is what durably stored in safekeepers quorum. */
-            standby_flush_lsn,
-            /* flush_lsn - This is what durably stored in safekeepers quorum. */
-            standby_flush_lsn,
+            /* write_lsn - This is what durably stored in WAL service. */
+            quorumFeedback.flushLsn,
+            /* flush_lsn - This is what durably stored in WAL service. */
+            quorumFeedback.flushLsn,
             /*
              * apply_lsn - This is what processed and durably saved at*
              * pageserver.
             */
-            standby_apply_lsn,
+            quorumFeedback.rf.disk_consistent_lsn,
             walprop_pg_get_current_timestamp(wp), false);
     }
     CombineHotStanbyFeedbacks(&hsFeedback, wp);
-    if (hsFeedback.ts != 0 && memcmp(&hsFeedback, &agg_hs_feedback, sizeof hsFeedback) != 0)
+    if (hsFeedback.ts != 0 && memcmp(&hsFeedback, &quorumFeedback.hs, sizeof hsFeedback) != 0)
     {
-        agg_hs_feedback = hsFeedback;
+        quorumFeedback.hs = hsFeedback;
         ProcessStandbyHSFeedback(hsFeedback.ts,
                                  XidFromFullTransactionId(hsFeedback.xmin),
                                  EpochFromFullTransactionId(hsFeedback.xmin),
@@ -2095,18 +2074,6 @@ GetLogRepRestartLSN(WalProposer *wp)
     return lrRestartLsn;
 }
-void SetNeonCurrentClusterSize(uint64 size)
-{
-    pg_atomic_write_u64(&walprop_shared->currentClusterSize, size);
-}
-uint64 GetNeonCurrentClusterSize(void)
-{
-    return pg_atomic_read_u64(&walprop_shared->currentClusterSize);
-}
-uint64 GetNeonCurrentClusterSize(void);
 static const walproposer_api walprop_pg = {
     .get_shmem_state = walprop_pg_get_shmem_state,
     .start_streaming = walprop_pg_start_streaming,


@@ -11,10 +11,6 @@ testing = []
 [dependencies]
 anyhow.workspace = true
 async-trait.workspace = true
-aws-config.workspace = true
-aws-sdk-iam.workspace = true
-aws-sigv4.workspace = true
-aws-types.workspace = true
 base64.workspace = true
 bstr.workspace = true
 bytes = { workspace = true, features = ["serde"] }
@@ -31,7 +27,6 @@ hashlink.workspace = true
 hex.workspace = true
 hmac.workspace = true
 hostname.workspace = true
-http.workspace = true
 humantime.workspace = true
 hyper-tungstenite.workspace = true
 hyper.workspace = true
@@ -64,11 +59,10 @@ rustls.workspace = true
 scopeguard.workspace = true
 serde.workspace = true
 serde_json.workspace = true
-sha2 = { workspace = true, features = ["asm"] }
+sha2.workspace = true
 smol_str.workspace = true
 smallvec.workspace = true
 socket2.workspace = true
-subtle.workspace = true
 sync_wrapper.workspace = true
 task-local-extensions.workspace = true
 thiserror.workspace = true
@@ -97,7 +91,6 @@ workspace_hack.workspace = true
 [dev-dependencies]
 camino-tempfile.workspace = true
-fallible-iterator.workspace = true
 rcgen.workspace = true
 rstest.workspace = true
 tokio-postgres-rustls.workspace = true


@@ -12,8 +12,6 @@ use crate::console::errors::GetAuthInfoError;
 use crate::console::provider::{CachedRoleSecret, ConsoleBackend};
 use crate::console::{AuthSecret, NodeInfo};
 use crate::context::RequestMonitoring;
-use crate::intern::EndpointIdInt;
-use crate::metrics::{AUTH_RATE_LIMIT_HITS, ENDPOINTS_AUTH_RATE_LIMITED};
 use crate::proxy::connect_compute::ComputeConnectBackend;
 use crate::proxy::NeonOptions;
 use crate::stream::Stream;
@@ -30,7 +28,7 @@ use crate::{
 use crate::{scram, EndpointCacheKey, EndpointId, RoleName};
 use std::sync::Arc;
 use tokio::io::{AsyncRead, AsyncWrite};
-use tracing::{info, warn};
+use tracing::info;

 /// Alternative to [`std::borrow::Cow`] but doesn't need `T: ToOwned` as we don't need that functionality
 pub enum MaybeOwned<'a, T> {
@@ -176,52 +174,6 @@ impl TryFrom<ComputeUserInfoMaybeEndpoint> for ComputeUserInfo {
     }
 }

-impl AuthenticationConfig {
-    pub fn check_rate_limit(
-        &self,
-        ctx: &mut RequestMonitoring,
-        secret: AuthSecret,
-        endpoint: &EndpointId,
-        is_cleartext: bool,
-    ) -> auth::Result<AuthSecret> {
-        // we have validated the endpoint exists, so let's intern it.
-        let endpoint_int = EndpointIdInt::from(endpoint);
-
-        // only count the full hash count if password hack or websocket flow.
-        // in other words, if proxy needs to run the hashing
-        let password_weight = if is_cleartext {
-            match &secret {
-                #[cfg(any(test, feature = "testing"))]
-                AuthSecret::Md5(_) => 1,
-                AuthSecret::Scram(s) => s.iterations + 1,
-            }
-        } else {
-            // validating scram takes just 1 hmac_sha_256 operation.
-            1
-        };
-
-        let limit_not_exceeded = self
-            .rate_limiter
-            .check((endpoint_int, ctx.peer_addr), password_weight);
-
-        if !limit_not_exceeded {
-            warn!(
-                enabled = self.rate_limiter_enabled,
-                "rate limiting authentication"
-            );
-            AUTH_RATE_LIMIT_HITS.inc();
-            ENDPOINTS_AUTH_RATE_LIMITED.measure(endpoint);
-
-            if self.rate_limiter_enabled {
-                return Err(auth::AuthError::too_many_connections());
-            }
-        }
-
-        Ok(secret)
-    }
-}
-
 /// True to its name, this function encapsulates our current auth trade-offs.
 /// Here, we choose the appropriate auth flow based on circumstances.
 ///
@@ -262,24 +214,14 @@ async fn auth_quirks(
         Some(secret) => secret,
         None => api.get_role_secret(ctx, &info).await?,
     };
-    let (cached_entry, secret) = cached_secret.take_value();
-    let secret = match secret {
-        Some(secret) => config.check_rate_limit(
-            ctx,
-            secret,
-            &info.endpoint,
-            unauthenticated_password.is_some() || allow_cleartext,
-        )?,
-        None => {
-            // If we don't have an authentication secret, we mock one to
-            // prevent malicious probing (possible due to missing protocol steps).
-            // This mocked secret will never lead to successful authentication.
-            info!("authentication info not found, mocking it");
-            AuthSecret::Scram(scram::ServerSecret::mock(rand::random()))
-        }
-    };
+    let secret = cached_secret.value.clone().unwrap_or_else(|| {
+        // If we don't have an authentication secret, we mock one to
+        // prevent malicious probing (possible due to missing protocol steps).
+        // This mocked secret will never lead to successful authentication.
+        info!("authentication info not found, mocking it");
+        AuthSecret::Scram(scram::ServerSecret::mock(&info.user, rand::random()))
+    });

     match authenticate_with_secret(
         ctx,
         secret,
@@ -295,7 +237,7 @@ async fn auth_quirks(
         Err(e) => {
             if e.is_auth_failed() {
                 // The password could have been changed, so we invalidate the cache.
-                cached_entry.invalidate();
+                cached_secret.invalidate();
             }
             Err(e)
         }
@@ -312,7 +254,7 @@ async fn authenticate_with_secret(
     config: &'static AuthenticationConfig,
 ) -> auth::Result<ComputeCredentials> {
     if let Some(password) = unauthenticated_password {
-        let auth_outcome = validate_password_and_exchange(&password, secret).await?;
+        let auth_outcome = validate_password_and_exchange(&password, secret)?;
         let keys = match auth_outcome {
             crate::sasl::Outcome::Success(key) => key,
             crate::sasl::Outcome::Failure(reason) => {
@@ -466,232 +408,3 @@ impl ComputeConnectBackend for BackendType<'_, ComputeCredentials, &()> {
         }
     }
 }
-
-#[cfg(test)]
-mod tests {
-    use std::sync::Arc;
-
-    use bytes::BytesMut;
-    use fallible_iterator::FallibleIterator;
-    use once_cell::sync::Lazy;
-    use postgres_protocol::{
-        authentication::sasl::{ChannelBinding, ScramSha256},
-        message::{backend::Message as PgMessage, frontend},
-    };
-    use provider::AuthSecret;
-    use tokio::io::{AsyncRead, AsyncReadExt, AsyncWriteExt};
-
-    use crate::{
-        auth::{ComputeUserInfoMaybeEndpoint, IpPattern},
-        config::AuthenticationConfig,
-        console::{
-            self,
-            provider::{self, CachedAllowedIps, CachedRoleSecret},
-            CachedNodeInfo,
-        },
-        context::RequestMonitoring,
-        proxy::NeonOptions,
-        rate_limiter::{AuthRateLimiter, RateBucketInfo},
-        scram::ServerSecret,
-        stream::{PqStream, Stream},
-    };
-
-    use super::auth_quirks;
-
-    struct Auth {
-        ips: Vec<IpPattern>,
-        secret: AuthSecret,
-    }
-
-    impl console::Api for Auth {
-        async fn get_role_secret(
-            &self,
-            _ctx: &mut RequestMonitoring,
-            _user_info: &super::ComputeUserInfo,
-        ) -> Result<CachedRoleSecret, console::errors::GetAuthInfoError> {
-            Ok(CachedRoleSecret::new_uncached(Some(self.secret.clone())))
-        }
-
-        async fn get_allowed_ips_and_secret(
-            &self,
-            _ctx: &mut RequestMonitoring,
-            _user_info: &super::ComputeUserInfo,
-        ) -> Result<(CachedAllowedIps, Option<CachedRoleSecret>), console::errors::GetAuthInfoError>
-        {
-            Ok((
-                CachedAllowedIps::new_uncached(Arc::new(self.ips.clone())),
-                Some(CachedRoleSecret::new_uncached(Some(self.secret.clone()))),
-            ))
-        }
-
-        async fn wake_compute(
-            &self,
-            _ctx: &mut RequestMonitoring,
-            _user_info: &super::ComputeUserInfo,
-        ) -> Result<CachedNodeInfo, console::errors::WakeComputeError> {
-            unimplemented!()
-        }
-    }
-
-    static CONFIG: Lazy<AuthenticationConfig> = Lazy::new(|| AuthenticationConfig {
-        scram_protocol_timeout: std::time::Duration::from_secs(5),
-        rate_limiter_enabled: true,
-        rate_limiter: AuthRateLimiter::new(&RateBucketInfo::DEFAULT_AUTH_SET),
-    });
-
-    async fn read_message(r: &mut (impl AsyncRead + Unpin), b: &mut BytesMut) -> PgMessage {
-        loop {
-            r.read_buf(&mut *b).await.unwrap();
-            if let Some(m) = PgMessage::parse(&mut *b).unwrap() {
-                break m;
-            }
-        }
-    }
-
-    #[tokio::test]
-    async fn auth_quirks_scram() {
-        let (mut client, server) = tokio::io::duplex(1024);
-        let mut stream = PqStream::new(Stream::from_raw(server));
-
-        let mut ctx = RequestMonitoring::test();
-        let api = Auth {
-            ips: vec![],
-            secret: AuthSecret::Scram(ServerSecret::build("my-secret-password").await.unwrap()),
-        };
-
-        let user_info = ComputeUserInfoMaybeEndpoint {
-            user: "conrad".into(),
-            endpoint_id: Some("endpoint".into()),
-            options: NeonOptions::default(),
-        };
-
-        let handle = tokio::spawn(async move {
-            let mut scram = ScramSha256::new(b"my-secret-password", ChannelBinding::unsupported());
-
-            let mut read = BytesMut::new();
-
-            // server should offer scram
-            match read_message(&mut client, &mut read).await {
-                PgMessage::AuthenticationSasl(a) => {
-                    let options: Vec<&str> = a.mechanisms().collect().unwrap();
-                    assert_eq!(options, ["SCRAM-SHA-256"]);
-                }
-                _ => panic!("wrong message"),
-            }
-
-            // client sends client-first-message
-            let mut write = BytesMut::new();
-            frontend::sasl_initial_response("SCRAM-SHA-256", scram.message(), &mut write).unwrap();
-            client.write_all(&write).await.unwrap();
-
-            // server response with server-first-message
-            match read_message(&mut client, &mut read).await {
-                PgMessage::AuthenticationSaslContinue(a) => {
-                    scram.update(a.data()).await.unwrap();
-                }
-                _ => panic!("wrong message"),
-            }
-
-            // client response with client-final-message
-            write.clear();
-            frontend::sasl_response(scram.message(), &mut write).unwrap();
-            client.write_all(&write).await.unwrap();
-
-            // server response with server-final-message
-            match read_message(&mut client, &mut read).await {
-                PgMessage::AuthenticationSaslFinal(a) => {
-                    scram.finish(a.data()).unwrap();
-                }
-                _ => panic!("wrong message"),
-            }
-        });
-
-        let _creds = auth_quirks(&mut ctx, &api, user_info, &mut stream, false, &CONFIG)
-            .await
-            .unwrap();
-
-        handle.await.unwrap();
-    }
-
-    #[tokio::test]
-    async fn auth_quirks_cleartext() {
-        let (mut client, server) = tokio::io::duplex(1024);
-        let mut stream = PqStream::new(Stream::from_raw(server));
-
-        let mut ctx = RequestMonitoring::test();
-        let api = Auth {
-            ips: vec![],
-            secret: AuthSecret::Scram(ServerSecret::build("my-secret-password").await.unwrap()),
-        };
-
-        let user_info = ComputeUserInfoMaybeEndpoint {
-            user: "conrad".into(),
-            endpoint_id: Some("endpoint".into()),
-            options: NeonOptions::default(),
-        };
-
-        let handle = tokio::spawn(async move {
-            let mut read = BytesMut::new();
-            let mut write = BytesMut::new();
-
-            // server should offer cleartext
-            match read_message(&mut client, &mut read).await {
-                PgMessage::AuthenticationCleartextPassword => {}
-                _ => panic!("wrong message"),
-            }
-
-            // client responds with password
-            write.clear();
-            frontend::password_message(b"my-secret-password", &mut write).unwrap();
-            client.write_all(&write).await.unwrap();
-        });
-
-        let _creds = auth_quirks(&mut ctx, &api, user_info, &mut stream, true, &CONFIG)
-            .await
-            .unwrap();
-
-        handle.await.unwrap();
-    }
-
-    #[tokio::test]
-    async fn auth_quirks_password_hack() {
-        let (mut client, server) = tokio::io::duplex(1024);
-        let mut stream = PqStream::new(Stream::from_raw(server));
-
-        let mut ctx = RequestMonitoring::test();
-        let api = Auth {
-            ips: vec![],
-            secret: AuthSecret::Scram(ServerSecret::build("my-secret-password").await.unwrap()),
-        };
-
-        let user_info = ComputeUserInfoMaybeEndpoint {
-            user: "conrad".into(),
-            endpoint_id: None,
-            options: NeonOptions::default(),
-        };
-
-        let handle = tokio::spawn(async move {
-            let mut read = BytesMut::new();
-
-            // server should offer cleartext
-            match read_message(&mut client, &mut read).await {
-                PgMessage::AuthenticationCleartextPassword => {}
-                _ => panic!("wrong message"),
-            }
-
-            // client responds with password
-            let mut write = BytesMut::new();
-            frontend::password_message(b"endpoint=my-endpoint;my-secret-password", &mut write)
-                .unwrap();
-            client.write_all(&write).await.unwrap();
-        });
-
-        let creds = auth_quirks(&mut ctx, &api, user_info, &mut stream, true, &CONFIG)
-            .await
-            .unwrap();
-
-        assert_eq!(creds.info.endpoint, "my-endpoint");
-
-        handle.await.unwrap();
-    }
-}
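The revert above removes the weighted authentication rate limiter (`check_rate_limit`), which charges each attempt by its hashing cost: a cleartext flow that forces the proxy to hash costs `iterations + 1`, while validating a SCRAM exchange costs 1. As a reading aid, here is a minimal sketch of that weighting idea using a hypothetical fixed-budget limiter — `SimpleAuthLimiter` and its `check` method are illustrative names only, not the proxy's real `AuthRateLimiter` API:

```rust
use std::collections::HashMap;

/// Toy budget limiter keyed by (endpoint, peer address). The real
/// AuthRateLimiter uses leaky buckets over several durations; this
/// sketch only models the per-attempt weighting.
struct SimpleAuthLimiter {
    capacity: u64,
    used: HashMap<(String, String), u64>,
}

impl SimpleAuthLimiter {
    fn new(capacity: u64) -> Self {
        Self { capacity, used: HashMap::new() }
    }

    /// Returns true if `weight` still fits in this key's budget.
    /// `weight` models hashing cost: SCRAM iteration count + 1 when the
    /// proxy must hash a cleartext password, 1 for a plain validation.
    fn check(&mut self, key: (String, String), weight: u64) -> bool {
        let used = self.used.entry(key).or_insert(0);
        if *used + weight > self.capacity {
            return false;
        }
        *used += weight;
        true
    }
}

fn main() {
    let mut limiter = SimpleAuthLimiter::new(10_000);
    let key = ("endpoint".to_string(), "127.0.0.1".to_string());
    // a cleartext flow with 4096 SCRAM iterations weighs 4097
    assert!(limiter.check(key.clone(), 4097));
    assert!(limiter.check(key.clone(), 4097));
    // a third heavy attempt exceeds the 10_000 budget
    assert!(!limiter.check(key, 4097));
    println!("ok");
}
```

This is why the removed code charged so much more for cleartext flows: the expensive operation being protected is the server-side password hashing itself, not the connection attempt.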

View File

@@ -126,7 +126,7 @@ impl<S: AsyncRead + AsyncWrite + Unpin> AuthFlow<'_, S, CleartextPassword> {
             .strip_suffix(&[0])
             .ok_or(AuthErrorImpl::MalformedPassword("missing terminator"))?;

-        let outcome = validate_password_and_exchange(password, self.state.0).await?;
+        let outcome = validate_password_and_exchange(password, self.state.0)?;

         if let sasl::Outcome::Success(_) = &outcome {
             self.stream.write_message_noflush(&Be::AuthenticationOk)?;
@@ -180,7 +180,7 @@ impl<S: AsyncRead + AsyncWrite + Unpin> AuthFlow<'_, S, Scram<'_>> {
     }
 }

-pub(crate) async fn validate_password_and_exchange(
+pub(crate) fn validate_password_and_exchange(
     password: &[u8],
     secret: AuthSecret,
 ) -> super::Result<sasl::Outcome<ComputeCredentialKeys>> {
@@ -194,7 +194,13 @@ pub(crate) async fn validate_password_and_exchange(
         }
         // perform scram authentication as both client and server to validate the keys
         AuthSecret::Scram(scram_secret) => {
-            let outcome = crate::scram::exchange(&scram_secret, password).await?;
+            use postgres_protocol::authentication::sasl::{ChannelBinding, ScramSha256};
+            let sasl_client = ScramSha256::new(password, ChannelBinding::unsupported());
+            let outcome = crate::scram::exchange(
+                &scram_secret,
+                sasl_client,
+                crate::config::TlsServerEndPoint::Undefined,
+            )?;
             let client_key = match outcome {
                 sasl::Outcome::Success(client_key) => client_key,

View File

@@ -1,10 +1,3 @@
-use aws_config::environment::EnvironmentVariableCredentialsProvider;
-use aws_config::imds::credentials::ImdsCredentialsProvider;
-use aws_config::meta::credentials::CredentialsProviderChain;
-use aws_config::meta::region::RegionProviderChain;
-use aws_config::profile::ProfileFileCredentialsProvider;
-use aws_config::provider_config::ProviderConfig;
-use aws_config::web_identity_token::WebIdentityTokenCredentialsProvider;
 use futures::future::Either;
 use proxy::auth;
 use proxy::auth::backend::MaybeOwned;
@@ -17,15 +10,11 @@ use proxy::config::ProjectInfoCacheOptions;
 use proxy::console;
 use proxy::context::parquet::ParquetUploadArgs;
 use proxy::http;
-use proxy::metrics::NUM_CANCELLATION_REQUESTS_SOURCE_FROM_CLIENT;
-use proxy::rate_limiter::AuthRateLimiter;
 use proxy::rate_limiter::EndpointRateLimiter;
 use proxy::rate_limiter::RateBucketInfo;
 use proxy::rate_limiter::RateLimiterConfig;
-use proxy::redis::cancellation_publisher::RedisPublisherClient;
-use proxy::redis::connection_with_credentials_provider::ConnectionWithCredentialsProvider;
-use proxy::redis::elasticache;
 use proxy::redis::notifications;
+use proxy::redis::publisher::RedisPublisherClient;
 use proxy::serverless::GlobalConnPoolOptions;
 use proxy::usage_metrics;
@@ -142,16 +131,10 @@ struct ProxyCliArgs {
     ///
     /// Provided in the form '<Requests Per Second>@<Bucket Duration Size>'.
     /// Can be given multiple times for different bucket sizes.
-    #[clap(long, default_values_t = RateBucketInfo::DEFAULT_ENDPOINT_SET)]
+    #[clap(long, default_values_t = RateBucketInfo::DEFAULT_SET)]
     endpoint_rps_limit: Vec<RateBucketInfo>,
-    /// Whether the auth rate limiter actually takes effect (for testing)
-    #[clap(long, default_value_t = false, value_parser = clap::builder::BoolishValueParser::new(), action = clap::ArgAction::Set)]
-    auth_rate_limit_enabled: bool,
-    /// Authentication rate limiter max number of hashes per second.
-    #[clap(long, default_values_t = RateBucketInfo::DEFAULT_AUTH_SET)]
-    auth_rate_limit: Vec<RateBucketInfo>,
     /// Redis rate limiter max number of requests per second.
-    #[clap(long, default_values_t = RateBucketInfo::DEFAULT_ENDPOINT_SET)]
+    #[clap(long, default_values_t = RateBucketInfo::DEFAULT_SET)]
     redis_rps_limit: Vec<RateBucketInfo>,
     /// Initial limit for dynamic rate limiter. Makes sense only if `rate_limit_algorithm` is *not* `None`.
     #[clap(long, default_value_t = 100)]
@@ -167,24 +150,9 @@ struct ProxyCliArgs {
     /// disable ip check for http requests. If it is too time consuming, it could be turned off.
     #[clap(long, default_value_t = false, value_parser = clap::builder::BoolishValueParser::new(), action = clap::ArgAction::Set)]
     disable_ip_check_for_http: bool,
-    /// redis url for notifications (if empty, redis_host:port will be used for both notifications and streaming connections)
+    /// redis url for notifications.
     #[clap(long)]
     redis_notifications: Option<String>,
-    /// redis host for streaming connections (might be different from the notifications host)
-    #[clap(long)]
-    redis_host: Option<String>,
-    /// redis port for streaming connections (might be different from the notifications host)
-    #[clap(long)]
-    redis_port: Option<u16>,
-    /// redis cluster name, used in aws elasticache
-    #[clap(long)]
-    redis_cluster_name: Option<String>,
-    /// redis user_id, used in aws elasticache
-    #[clap(long)]
-    redis_user_id: Option<String>,
-    /// aws region to retrieve credentials
-    #[clap(long, default_value_t = String::new())]
-    aws_region: String,
     /// cache for `project_info` (use `size=0` to disable)
     #[clap(long, default_value = config::ProjectInfoCacheOptions::CACHE_DEFAULT_OPTIONS)]
     project_info_cache: String,
@@ -248,61 +216,6 @@ async fn main() -> anyhow::Result<()> {
     let config = build_config(&args)?;

     info!("Authentication backend: {}", config.auth_backend);
-    info!("Using region: {}", config.aws_region);
-
-    let region_provider = RegionProviderChain::default_provider().or_else(&*config.aws_region); // Replace with your Redis region if needed
-    let provider_conf =
-        ProviderConfig::without_region().with_region(region_provider.region().await);
-    let aws_credentials_provider = {
-        // uses "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"
-        CredentialsProviderChain::first_try("env", EnvironmentVariableCredentialsProvider::new())
-            // uses "AWS_PROFILE" / `aws sso login --profile <profile>`
-            .or_else(
-                "profile-sso",
-                ProfileFileCredentialsProvider::builder()
-                    .configure(&provider_conf)
-                    .build(),
-            )
-            // uses "AWS_WEB_IDENTITY_TOKEN_FILE", "AWS_ROLE_ARN", "AWS_ROLE_SESSION_NAME"
-            // needed to access remote extensions bucket
-            .or_else(
-                "token",
-                WebIdentityTokenCredentialsProvider::builder()
-                    .configure(&provider_conf)
-                    .build(),
-            )
-            // uses imds v2
-            .or_else("imds", ImdsCredentialsProvider::builder().build())
-    };
-    let elasticache_credentials_provider = Arc::new(elasticache::CredentialsProvider::new(
-        elasticache::AWSIRSAConfig::new(
-            config.aws_region.clone(),
-            args.redis_cluster_name,
-            args.redis_user_id,
-        ),
-        aws_credentials_provider,
-    ));
-    let redis_notifications_client =
-        match (args.redis_notifications, (args.redis_host, args.redis_port)) {
-            (Some(url), _) => {
-                info!("Starting redis notifications listener ({url})");
-                Some(ConnectionWithCredentialsProvider::new_with_static_credentials(url))
-            }
-            (None, (Some(host), Some(port))) => Some(
-                ConnectionWithCredentialsProvider::new_with_credentials_provider(
-                    host,
-                    port,
-                    elasticache_credentials_provider.clone(),
-                ),
-            ),
-            (None, (None, None)) => {
-                warn!("Redis is disabled");
-                None
-            }
-            _ => {
-                bail!("redis-host and redis-port must be specified together");
-            }
-        };

     // Check that we can bind to address before further initialization
     let http_address: SocketAddr = args.http.parse()?;
@@ -320,22 +233,17 @@ async fn main() -> anyhow::Result<()> {
     let endpoint_rate_limiter = Arc::new(EndpointRateLimiter::new(&config.endpoint_rps_limit));

     let cancel_map = CancelMap::default();
-    // let redis_notifications_client = redis_notifications_client.map(|x| Box::leak(Box::new(x)));
-    let redis_publisher = match &redis_notifications_client {
-        Some(redis_publisher) => Some(Arc::new(Mutex::new(RedisPublisherClient::new(
-            redis_publisher.clone(),
+    let redis_publisher = match &args.redis_notifications {
+        Some(url) => Some(Arc::new(Mutex::new(RedisPublisherClient::new(
+            url,
             args.region.clone(),
             &config.redis_rps_limit,
         )?))),
         None => None,
     };
-    let cancellation_handler = Arc::new(CancellationHandler::<
-        Option<Arc<tokio::sync::Mutex<RedisPublisherClient>>>,
-    >::new(
+    let cancellation_handler = Arc::new(CancellationHandler::new(
         cancel_map.clone(),
         redis_publisher,
-        NUM_CANCELLATION_REQUESTS_SOURCE_FROM_CLIENT,
     ));

     // client facing tasks. these will exit on error or on cancellation
@@ -382,16 +290,17 @@ async fn main() -> anyhow::Result<()> {
     if let auth::BackendType::Console(api, _) = &config.auth_backend {
         if let proxy::console::provider::ConsoleBackend::Console(api) = &**api {
-            if let Some(redis_notifications_client) = redis_notifications_client {
-                let cache = api.caches.project_info.clone();
+            let cache = api.caches.project_info.clone();
+            if let Some(url) = args.redis_notifications {
+                info!("Starting redis notifications listener ({url})");
                 maintenance_tasks.spawn(notifications::task_main(
-                    redis_notifications_client.clone(),
+                    url.to_owned(),
                     cache.clone(),
                     cancel_map.clone(),
                     args.region.clone(),
                 ));
-                maintenance_tasks.spawn(async move { cache.clone().gc_worker().await });
             }
+            maintenance_tasks.spawn(async move { cache.clone().gc_worker().await });
         }
     }
@@ -517,8 +426,6 @@ fn build_config(args: &ProxyCliArgs) -> anyhow::Result<&'static ProxyConfig> {
     };
     let authentication_config = AuthenticationConfig {
         scram_protocol_timeout: args.scram_protocol_timeout,
-        rate_limiter_enabled: args.auth_rate_limit_enabled,
-        rate_limiter: AuthRateLimiter::new(args.auth_rate_limit.clone()),
     };

     let mut endpoint_rps_limit = args.endpoint_rps_limit.clone();
@@ -538,8 +445,8 @@ fn build_config(args: &ProxyCliArgs) -> anyhow::Result<&'static ProxyConfig> {
         endpoint_rps_limit,
         redis_rps_limit,
         handshake_timeout: args.handshake_timeout,
+        // TODO: add this argument
         region: args.region.clone(),
-        aws_region: args.aws_region.clone(),
     }));

     Ok(config)
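The CLI help above documents the `<Requests Per Second>@<Bucket Duration Size>` syntax shared by `endpoint_rps_limit` and `redis_rps_limit`. A hedged sketch of how such a spec could be parsed — the real `RateBucketInfo` implements its own `FromStr` with richer duration handling, so `parse_bucket` here is purely illustrative and only accepts whole-second suffixes:

```rust
use std::time::Duration;

// Hypothetical parser for the "<rps>@<duration>" bucket syntax; only
// whole-second durations such as "300@1s" are handled in this sketch.
fn parse_bucket(spec: &str) -> Option<(u64, Duration)> {
    let (rps, dur) = spec.split_once('@')?;
    let rps: u64 = rps.parse().ok()?;
    let secs: u64 = dur.strip_suffix('s')?.parse().ok()?;
    Some((rps, Duration::from_secs(secs)))
}

fn main() {
    assert_eq!(parse_bucket("300@1s"), Some((300, Duration::from_secs(1))));
    assert_eq!(parse_bucket("100@60s"), Some((100, Duration::from_secs(60))));
    assert_eq!(parse_bucket("nonsense"), None);
    println!("ok");
}
```

Giving the flag multiple times then yields one bucket per (rate, duration) pair, which is why the clap arguments are `Vec<RateBucketInfo>`.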

View File

@@ -43,16 +43,6 @@ impl<C: Cache, V> Cached<C, V> {
         Self { token: None, value }
     }

-    pub fn take_value(self) -> (Cached<C, ()>, V) {
-        (
-            Cached {
-                token: self.token,
-                value: (),
-            },
-            self.value,
-        )
-    }
-
     /// Drop this entry from a cache if it's still there.
     pub fn invalidate(self) -> V {
         if let Some((cache, info)) = &self.token {

View File

@@ -373,7 +373,10 @@ mod tests {
         let endpoint_id = "endpoint".into();
         let user1: RoleName = "user1".into();
         let user2: RoleName = "user2".into();
-        let secret1 = Some(AuthSecret::Scram(ServerSecret::mock([1; 32])));
+        let secret1 = Some(AuthSecret::Scram(ServerSecret::mock(
+            user1.as_str(),
+            [1; 32],
+        )));
         let secret2 = None;
         let allowed_ips = Arc::new(vec![
             "127.0.0.1".parse().unwrap(),
@@ -392,7 +395,10 @@ mod tests {
         // Shouldn't add more than 2 roles.
         let user3: RoleName = "user3".into();
-        let secret3 = Some(AuthSecret::Scram(ServerSecret::mock([3; 32])));
+        let secret3 = Some(AuthSecret::Scram(ServerSecret::mock(
+            user3.as_str(),
+            [3; 32],
+        )));
         cache.insert_role_secret(&project_id, &endpoint_id, &user3, secret3.clone());
         assert!(cache.get_role_secret(&endpoint_id, &user3).is_none());
@@ -425,8 +431,14 @@ mod tests {
         let endpoint_id = "endpoint".into();
         let user1: RoleName = "user1".into();
         let user2: RoleName = "user2".into();
-        let secret1 = Some(AuthSecret::Scram(ServerSecret::mock([1; 32])));
-        let secret2 = Some(AuthSecret::Scram(ServerSecret::mock([2; 32])));
+        let secret1 = Some(AuthSecret::Scram(ServerSecret::mock(
+            user1.as_str(),
+            [1; 32],
+        )));
+        let secret2 = Some(AuthSecret::Scram(ServerSecret::mock(
+            user2.as_str(),
+            [2; 32],
+        )));
         let allowed_ips = Arc::new(vec![
             "127.0.0.1".parse().unwrap(),
             "127.0.0.2".parse().unwrap(),
@@ -474,8 +486,14 @@ mod tests {
         let endpoint_id = "endpoint".into();
         let user1: RoleName = "user1".into();
         let user2: RoleName = "user2".into();
-        let secret1 = Some(AuthSecret::Scram(ServerSecret::mock([1; 32])));
-        let secret2 = Some(AuthSecret::Scram(ServerSecret::mock([2; 32])));
+        let secret1 = Some(AuthSecret::Scram(ServerSecret::mock(
+            user1.as_str(),
+            [1; 32],
+        )));
+        let secret2 = Some(AuthSecret::Scram(ServerSecret::mock(
+            user2.as_str(),
+            [2; 32],
+        )));
         let allowed_ips = Arc::new(vec![
             "127.0.0.1".parse().unwrap(),
             "127.0.0.2".parse().unwrap(),

View File

@@ -1,3 +1,4 @@
use async_trait::async_trait;
use dashmap::DashMap; use dashmap::DashMap;
use pq_proto::CancelKeyData; use pq_proto::CancelKeyData;
use std::{net::SocketAddr, sync::Arc}; use std::{net::SocketAddr, sync::Arc};
@@ -9,26 +10,18 @@ use tracing::info;
use uuid::Uuid; use uuid::Uuid;
use crate::{ use crate::{
error::ReportableError, error::ReportableError, metrics::NUM_CANCELLATION_REQUESTS,
metrics::NUM_CANCELLATION_REQUESTS, redis::publisher::RedisPublisherClient,
redis::cancellation_publisher::{
CancellationPublisher, CancellationPublisherMut, RedisPublisherClient,
},
}; };
pub type CancelMap = Arc<DashMap<CancelKeyData, Option<CancelClosure>>>; pub type CancelMap = Arc<DashMap<CancelKeyData, Option<CancelClosure>>>;
pub type CancellationHandlerMain = CancellationHandler<Option<Arc<Mutex<RedisPublisherClient>>>>;
pub type CancellationHandlerMainInternal = Option<Arc<Mutex<RedisPublisherClient>>>;
/// Enables serving `CancelRequest`s. /// Enables serving `CancelRequest`s.
/// ///
/// If `CancellationPublisher` is available, cancel request will be used to publish the cancellation key to other proxy instances. /// If there is a `RedisPublisherClient` available, it will be used to publish the cancellation key to other proxy instances.
pub struct CancellationHandler<P> { pub struct CancellationHandler {
map: CancelMap, map: CancelMap,
client: P, redis_client: Option<Arc<Mutex<RedisPublisherClient>>>,
/// This field used for the monitoring purposes.
/// Represents the source of the cancellation request.
from: &'static str,
} }
#[derive(Debug, Error)] #[derive(Debug, Error)]
@@ -51,9 +44,49 @@ impl ReportableError for CancelError {
} }
} }
impl<P: CancellationPublisher> CancellationHandler<P> { impl CancellationHandler {
pub fn new(map: CancelMap, redis_client: Option<Arc<Mutex<RedisPublisherClient>>>) -> Self {
Self { map, redis_client }
}
/// Cancel a running query for the corresponding connection.
pub async fn cancel_session(
&self,
key: CancelKeyData,
session_id: Uuid,
) -> Result<(), CancelError> {
let from = "from_client";
// NB: we should immediately release the lock after cloning the token.
let Some(cancel_closure) = self.map.get(&key).and_then(|x| x.clone()) else {
tracing::warn!("query cancellation key not found: {key}");
if let Some(redis_client) = &self.redis_client {
NUM_CANCELLATION_REQUESTS
.with_label_values(&[from, "not_found"])
.inc();
info!("publishing cancellation key to Redis");
match redis_client.lock().await.try_publish(key, session_id).await {
Ok(()) => {
info!("cancellation key successfuly published to Redis");
}
Err(e) => {
tracing::error!("failed to publish a message: {e}");
return Err(CancelError::IO(std::io::Error::new(
std::io::ErrorKind::Other,
e.to_string(),
)));
}
}
}
return Ok(());
};
NUM_CANCELLATION_REQUESTS
.with_label_values(&[from, "found"])
.inc();
info!("cancelling query per user's request using key {key}");
cancel_closure.try_cancel_query().await
}
/// Run async action within an ephemeral session identified by [`CancelKeyData`]. /// Run async action within an ephemeral session identified by [`CancelKeyData`].
pub fn get_session(self: Arc<Self>) -> Session<P> { pub fn get_session(self: Arc<Self>) -> Session {
// HACK: We'd rather get the real backend_pid but tokio_postgres doesn't // HACK: We'd rather get the real backend_pid but tokio_postgres doesn't
// expose it and we don't want to do another roundtrip to query // expose it and we don't want to do another roundtrip to query
// for it. The client will be able to notice that this is not the // for it. The client will be able to notice that this is not the
@@ -79,39 +112,9 @@ impl<P: CancellationPublisher> CancellationHandler<P> {
cancellation_handler: self, cancellation_handler: self,
} }
} }
/// Try to cancel a running query for the corresponding connection.
/// If the cancellation key is not found, it will be published to Redis.
pub async fn cancel_session(
&self,
key: CancelKeyData,
session_id: Uuid,
) -> Result<(), CancelError> {
// NB: we should immediately release the lock after cloning the token.
let Some(cancel_closure) = self.map.get(&key).and_then(|x| x.clone()) else {
tracing::warn!("query cancellation key not found: {key}");
NUM_CANCELLATION_REQUESTS
.with_label_values(&[self.from, "not_found"])
.inc();
match self.client.try_publish(key, session_id).await {
Ok(()) => {} // do nothing
Err(e) => {
return Err(CancelError::IO(std::io::Error::new(
std::io::ErrorKind::Other,
e.to_string(),
)));
}
}
return Ok(());
};
NUM_CANCELLATION_REQUESTS
.with_label_values(&[self.from, "found"])
.inc();
info!("cancelling query per user's request using key {key}");
cancel_closure.try_cancel_query().await
}
#[cfg(test)] #[cfg(test)]
fn contains(&self, session: &Session<P>) -> bool { fn contains(&self, session: &Session) -> bool {
self.map.contains_key(&session.key) self.map.contains_key(&session.key)
} }
@@ -121,19 +124,31 @@ impl<P: CancellationPublisher> CancellationHandler<P> {
} }
} }
impl CancellationHandler<()> { #[async_trait]
pub fn new(map: CancelMap, from: &'static str) -> Self { pub trait NotificationsCancellationHandler {
Self { async fn cancel_session_no_publish(&self, key: CancelKeyData) -> Result<(), CancelError>;
map,
client: (),
from,
}
}
} }
impl<P: CancellationPublisherMut> CancellationHandler<Option<Arc<Mutex<P>>>> { #[async_trait]
pub fn new(map: CancelMap, client: Option<Arc<Mutex<P>>>, from: &'static str) -> Self { impl NotificationsCancellationHandler for CancellationHandler {
Self { map, client, from } async fn cancel_session_no_publish(&self, key: CancelKeyData) -> Result<(), CancelError> {
let from = "from_redis";
let cancel_closure = self.map.get(&key).and_then(|x| x.clone());
match cancel_closure {
Some(cancel_closure) => {
NUM_CANCELLATION_REQUESTS
.with_label_values(&[from, "found"])
.inc();
cancel_closure.try_cancel_query().await
}
None => {
NUM_CANCELLATION_REQUESTS
.with_label_values(&[from, "not_found"])
.inc();
tracing::warn!("query cancellation key not found: {key}");
Ok(())
}
}
} }
} }
@@ -163,14 +178,14 @@ impl CancelClosure {
 }
 
 /// Helper for registering query cancellation tokens.
-pub struct Session<P> {
+pub struct Session {
     /// The user-facing key identifying this session.
     key: CancelKeyData,
     /// The [`CancelMap`] this session belongs to.
-    cancellation_handler: Arc<CancellationHandler<P>>,
+    cancellation_handler: Arc<CancellationHandler>,
 }
 
-impl<P> Session<P> {
+impl Session {
     /// Store the cancel token for the given session.
     /// This enables query cancellation in `crate::proxy::prepare_client_connection`.
     pub fn enable_query_cancellation(&self, cancel_closure: CancelClosure) -> CancelKeyData {
@@ -183,7 +198,7 @@ impl<P> Session<P> {
     }
 }
 
-impl<P> Drop for Session<P> {
+impl Drop for Session {
     fn drop(&mut self) {
         self.cancellation_handler.map.remove(&self.key);
         info!("dropped query cancellation key {}", &self.key);
@@ -192,16 +207,14 @@ impl<P> Drop for Session<P> {
 #[cfg(test)]
 mod tests {
-    use crate::metrics::NUM_CANCELLATION_REQUESTS_SOURCE_FROM_REDIS;
-
     use super::*;
 
     #[tokio::test]
     async fn check_session_drop() -> anyhow::Result<()> {
-        let cancellation_handler = Arc::new(CancellationHandler::<()>::new(
-            CancelMap::default(),
-            NUM_CANCELLATION_REQUESTS_SOURCE_FROM_REDIS,
-        ));
+        let cancellation_handler = Arc::new(CancellationHandler {
+            map: CancelMap::default(),
+            redis_client: None,
+        });
 
         let session = cancellation_handler.clone().get_session();
         assert!(cancellation_handler.contains(&session));
@@ -211,19 +224,4 @@ mod tests {
         Ok(())
     }
-
-    #[tokio::test]
-    async fn cancel_session_noop_regression() {
-        let handler = CancellationHandler::<()>::new(Default::default(), "local");
-        handler
-            .cancel_session(
-                CancelKeyData {
-                    backend_pid: 0,
-                    cancel_key: 0,
-                },
-                Uuid::new_v4(),
-            )
-            .await
-            .unwrap();
-    }
 }


@@ -82,13 +82,14 @@ pub type ScramKeys = tokio_postgres::config::ScramKeys<32>;
 /// A config for establishing a connection to compute node.
 /// Eventually, `tokio_postgres` will be replaced with something better.
 /// Newtype allows us to implement methods on top of it.
-#[derive(Clone, Default)]
+#[derive(Clone)]
+#[repr(transparent)]
 pub struct ConnCfg(Box<tokio_postgres::Config>);
 
 /// Creation and initialization routines.
 impl ConnCfg {
     pub fn new() -> Self {
-        Self::default()
+        Self(Default::default())
     }
 
     /// Reuse password or auth keys from the other config.
@@ -164,6 +165,12 @@ impl std::ops::DerefMut for ConnCfg {
     }
 }
 
+impl Default for ConnCfg {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
 impl ConnCfg {
     /// Establish a raw TCP connection to the compute node.
     async fn connect_raw(&self, timeout: Duration) -> io::Result<(SocketAddr, TcpStream, &str)> {

Some files were not shown because too many files have changed in this diff.