Compare commits

49 Commits

Author SHA1 Message Date
Joonas Koivunen
bd235a5fe3 fix: remove initdb image layer optimization
The oldest commit I checked that did not have a working initdb image
layer optimization was 4b25b9652a, from more than a year ago. It is safe
to say we have been doing fine without it.
2023-11-09 20:45:40 +00:00
Arpad Müller
f95f001b8b Lsn for get_timestamp_of_lsn should be string, not integer (#5840)
The `get_timestamp_of_lsn` pageserver endpoint was added in #5497,
but the YAML it added was wrong: the LSN is expected in hex format, not
in integer (decimal) format.
2023-11-09 16:12:18 +00:00
John Spray
e0821e1eab pageserver: refined Timeline shutdown (#5833)
## Problem

We have observed the shutdown of a timeline taking a long time when a
deletion arrives at a busy time for the system. This suggests that we
are not respecting cancellation tokens promptly enough.

## Summary of changes

- Refactor timeline shutdown so that rather than having a shutdown()
function that takes a flag for optionally flushing, there are two
distinct functions, one for graceful flushing shutdown, and another that
does the "normal" shutdown where we're just setting a cancellation token
and then tearing down as fast as we can. This makes things a bit easier
to reason about, and enables us to remove the hand-written variant of
shutdown that was maintained in `delete.rs`
- Layer flush task checks cancellation token more carefully
- Logical size calculation's handling of cancellation tokens is
simplified: rather than passing one in, it respects the Timeline's
cancellation token.

This PR doesn't touch RemoteTimelineClient, which will be a key thing to
fix as well, so that a slow remote storage op doesn't hold up shutdown.
2023-11-09 16:02:59 +00:00
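
As an illustration of the split described in the commit above, here is a rough sketch with hypothetical method names; the real `Timeline` API in the pageserver differs in detail:

```rust
use tokio_util::sync::CancellationToken;

/// Minimal stand-in for the real Timeline; only what the sketch needs.
struct Timeline {
    cancel: CancellationToken,
}

impl Timeline {
    /// Graceful variant: flush in-memory layers first, then shut down.
    async fn flush_and_shutdown(&self) {
        self.freeze_and_flush().await;
        self.shutdown().await;
    }

    /// "Normal" variant: set the cancellation token and tear down as fast as
    /// possible; flush loops and size calculations must observe `self.cancel`.
    async fn shutdown(&self) {
        self.cancel.cancel();
        // ... wait for background tasks to drain, then tear down state ...
    }

    async fn freeze_and_flush(&self) {
        // Placeholder for flushing open in-memory layers to disk/remote storage.
    }
}
```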
bojanserafimov
4469b1a62c Fix blob_io test (#5818) 2023-11-09 10:47:03 -05:00
Joonas Koivunen
842223b47f fix(metric): remove pageserver_wal_redo_wait_seconds (#5791)
The meaning of the values recorded in this histogram changed with #5560,
and we never visualized it as a histogram, just as
`increase(_sum)`. The histogram is not too interesting to look at, so
remove it per discussion in [slack
thread](https://neondb.slack.com/archives/C063LJFF26S/p1699008316109999?thread_ts=1698852436.637559&cid=C063LJFF26S).
2023-11-09 16:40:52 +02:00
Anna Stepanyan
893616051d Update epic-template.md (#5709)
Replace the checkbox list with a proper task list in the epic template

NB: this PR does not change the code, it only touches the github issue
templates
2023-11-09 15:24:43 +01:00
Conrad Ludgate
7cdde285a5 proxy: limit concurrent wake_compute requests per endpoint (#5799)
## Problem

A user can open many database connections at the same instant of time
- these will all miss the cache and materialise as requests to the
control plane. #5705

## Summary of changes

I am using a `DashMap` (a sharded `RwLock<HashMap>`) of endpoints ->
semaphores to apply a limiter. If the limiter is enabled (permits > 0),
the semaphore will be retrieved per endpoint and a permit will be
awaited before continuing to call the wake_compute endpoint.

### Important details

This dashmap would grow uncontrollably without maintenance. It's not a
cache so I don't think an LRU-based reclamation makes sense. Instead,
I've made use of the sharding functionality of DashMap to lock a single
shard and clear out unused semaphores periodically.

I ran a test in release, using 128 tokio tasks among 12 threads each
pushing 1000 entries into the map per second, clearing a shard every 2
seconds (64 second epoch with 32 shards). The endpoint names were
sampled from a gamma distribution to make sure some overlap would occur,
and each permit was held for 1ms. The histogram for time to clear each
shard settled between 256-512us without any variance in my testing.

Holding a lock on one of the shards for under a millisecond does not
concern me as a source of blocking.
2023-11-09 14:14:30 +00:00
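
A minimal sketch of the limiter described in the commit above, using a `DashMap` of `tokio` semaphores; the type and method names are placeholders rather than the proxy's real API, and the periodic shard-clearing maintenance is omitted:

```rust
use std::sync::Arc;

use dashmap::DashMap;
use tokio::sync::Semaphore;

struct WakeComputeLimiter {
    /// Permits per endpoint; 0 disables the limiter.
    permits: usize,
    semaphores: DashMap<String, Arc<Semaphore>>,
}

impl WakeComputeLimiter {
    fn new(permits: usize) -> Self {
        Self { permits, semaphores: DashMap::new() }
    }

    async fn wake_compute(&self, endpoint: &str) {
        if self.permits == 0 {
            return self.call_control_plane(endpoint).await;
        }
        // Look up (or lazily create) this endpoint's semaphore.
        let sem = self
            .semaphores
            .entry(endpoint.to_string())
            .or_insert_with(|| Arc::new(Semaphore::new(self.permits)))
            .clone();
        // Hold a per-endpoint permit for the duration of the control-plane call,
        // so at most `permits` wake_compute requests per endpoint run concurrently.
        let _permit = sem.acquire().await.expect("semaphore is never closed");
        self.call_control_plane(endpoint).await
    }

    async fn call_control_plane(&self, _endpoint: &str) {
        // Placeholder for the actual wake_compute request.
    }
}
```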
John Spray
9c30883c4b remote_storage: use S3 SDK's adaptive retry policy (#5813)
## Problem

Currently, we aren't doing any explicit slowdown in response to 429
responses. Recently, as we hit remote storage a bit harder (pageserver
does more ListObjectsv2 requests than it used to since #5580 ), we're
seeing storms of 429 responses that may be the result of not just doing
too many requests, but continuing to make those extra requests without
backing off any more than our usual backoff::exponential.

## Summary of changes

Switch from AWS's "Standard" retry policy to "Adaptive" -- docs describe
this as experimental but it has been around for a long time. The main
difference between Standard and Adaptive is that Adaptive rate-limits
the client in response to feedback from the server, which is meant to
avoid scenarios where the client would otherwise repeatedly hit
throttling responses.
2023-11-09 13:50:13 +00:00
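
A minimal sketch of what the switch amounts to, assuming the 0.56-era `aws-config` builder API; the max-attempts value is arbitrary, not something this PR sets:

```rust
use aws_config::retry::RetryConfig;

async fn s3_client_with_adaptive_retries() -> aws_sdk_s3::Client {
    let shared_config = aws_config::from_env()
        // Adaptive retries add client-side rate limiting driven by throttling
        // feedback from the server, on top of the usual retry/backoff logic.
        .retry_config(RetryConfig::adaptive().with_max_attempts(8))
        .load()
        .await;
    aws_sdk_s3::Client::new(&shared_config)
}
```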
Arthur Petukhovsky
0495798591 Fix walproposer build on aarch64 (#5827)
There was a compilation error due to `std::ffi::c_char` being a different type on different platforms. Clippy also complained for a similar reason.
2023-11-09 13:05:17 +00:00
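
For context, a sketch of the portability issue behind the commit above: `c_char` is `i8` on x86_64-linux but `u8` on aarch64-linux, so FFI signatures should be written in terms of `c_char` rather than a concrete integer type. The function below is illustrative, not the walproposer code:

```rust
use std::ffi::{c_char, CStr};

/// Copy a NUL-terminated C string into an owned Rust String.
///
/// # Safety
/// `ptr` must point to a valid NUL-terminated string.
unsafe fn string_from_c(ptr: *const c_char) -> String {
    CStr::from_ptr(ptr).to_string_lossy().into_owned()
}
```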
Sasha Krassovsky
87389bc933 Add test simulating bad connection between pageserver and compute (#5728)
## Problem
We have a funny 3-day timeout for connections between the compute and
pageserver. We want to get rid of it, so to do that we need to make sure
the compute is resilient to connection failures.

Closes: https://github.com/neondatabase/neon/issues/5518

## Summary of changes
This test makes the pageserver randomly drop the connection if the
failpoint is enabled, and ensures we can keep querying the pageserver.

This PR also reduces the default timeout to 10 minutes from 3 days.
2023-11-08 19:48:57 +00:00
Arpad Müller
ea118a238a JWT logging improvements (#5823)
* Lower the log level on auth success from info to debug (fixes #5820).
* Don't log stack traces on auth errors (as requested on Slack). We do this by introducing an `AuthError` type instead of using `anyhow` and `bail`.
* Return errors that have been censored, for improved security.
2023-11-08 16:56:53 +00:00
Christian Schwarz
e9b227a11e cleanup unused RemoteStorage fields (#5830)
Found this while working on #5771
2023-11-08 16:54:33 +00:00
John Spray
40441f8ada pageserver: use Gate for stronger safety check in SlotGuard (#5793)
## Problem

#5711 and #5367 raced -- the `SlotGuard` type needs `Gate` to properly
enforce its invariant that we may not drop an `Arc<Tenant>` from a slot.

## Summary of changes

Replace the TODO with the intended check of Gate.
2023-11-08 13:00:11 +00:00
John Spray
a8a39cd464 test: de-flake test_deletion_queue_recovery (#5822)
## Problem

This test could fail if timing is unlucky, and the deletions in the test
land in two deletion lists instead of one.

## Summary of changes

We await _some_ validations instead of _all_ validations, because our
execution failpoint will prevent validation from proceeding for any but
the first DeletionList. Usually the workload generates just one, but if
it generates two due to timing, then we must not expect that the second
one will be validated.
2023-11-08 12:41:48 +00:00
John Spray
b989ad1922 extend test_change_pageserver for failure case, rework changing pageserver (#5693)
Reproducer for https://github.com/neondatabase/neon/issues/5692

The test change in this PR intentionally fails, to demonstrate the
issue.

---------

Co-authored-by: Sasha Krassovsky <krassovskysasha@gmail.com>
2023-11-08 11:26:56 +00:00
Em Sharnoff
acef742a6e vm-monitor: Remove dependency on workspace_hack (#5752)
neondatabase/autoscaling builds libs/vm-monitor during CI because it's a
necessary component of autoscaling.

workspace_hack includes a lot of crates that are not necessary for
vm-monitor, which artificially inflates the build time on the
autoscaling side, so hopefully removing the dependency should speed
things up.

Co-authored-by: Joonas Koivunen <joonas@neon.tech>
2023-11-07 09:41:20 -08:00
duguorong009
11d9d801b5 pageserver: improve the shutdown log error (#5792)
## Problem
- Close #5784 

## Summary of changes
- Update the `GetActiveTenantError` -> `QueryError` conversion process
in `pageserver/src/page_service.rs`
- Update the pytest logging exceptions in
`./test_runner/regress/test_tenant_detach.py`
2023-11-07 16:57:26 +00:00
Andrew Rudenko
fc47af156f Passing neon options to the console (#5781)
The idea is to pass neon_*-prefixed options to the control plane, which
can use them to dynamically create timelines and computes. Such options
should also be excluded from what is passed to compute. Another issue is
connection caching: a compute instance now depends not only on the
hostname but potentially on such options too, so I included them in the
cache key.
2023-11-07 16:49:26 +01:00
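
A rough sketch of the option routing described in the commit above; `split_options` is a hypothetical helper, not the proxy's actual code:

```rust
/// Split startup options into those forwarded to the control plane (and
/// included in the connection-cache key) and those passed through to compute.
fn split_options(
    options: &[(String, String)],
) -> (Vec<(String, String)>, Vec<(String, String)>) {
    options
        .iter()
        .cloned()
        .partition(|(key, _)| key.starts_with("neon_"))
}
```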
Arpad Müller
e310533ed3 Support JWT key reload in pageserver (#5594)
## Problem

For quickly rotating JWT secrets, we want to be able to reload the JWT
public key file in the pageserver, and also support multiple JWT keys.

See #4897.

## Summary of changes

* Allow directories for the `auth_validation_public_key_path` config
param instead of just files. For the safekeepers, the corresponding
config options also support multiple JWT keys.
* For the pageservers, make the JWT public keys easily globally swappable
by using the `arc-swap` crate.
* Add an endpoint to the pageserver, triggered by a POST to
`/v1/reload_auth_validation_keys`, that reloads the JWT public keys from
the pre-configured path (for security reasons, you cannot upload any
keys yourself).

Fixes #4897

---------

Co-authored-by: Heikki Linnakangas <heikki@neon.tech>
Co-authored-by: Joonas Koivunen <joonas@neon.tech>
2023-11-07 15:43:29 +01:00
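
A minimal sketch of the `arc-swap` pattern mentioned in the commit above, with a placeholder key type; the real pageserver code parses key files from `auth_validation_public_key_path` and has its own types:

```rust
use std::sync::Arc;

use arc_swap::ArcSwap;
use once_cell::sync::Lazy;

/// Placeholder for the decoded JWT public keys.
struct JwtKeys(Vec<String>);

/// Globally readable, atomically swappable key set.
static JWT_KEYS: Lazy<ArcSwap<JwtKeys>> =
    Lazy::new(|| ArcSwap::from_pointee(JwtKeys(Vec::new())));

/// Roughly what a POST to /v1/reload_auth_validation_keys boils down to:
/// re-read the configured key directory and publish the result atomically.
fn reload_keys(pem_blobs: Vec<String>) {
    JWT_KEYS.store(Arc::new(JwtKeys(pem_blobs)));
}

/// Readers take a cheap snapshot; a concurrent reload does not invalidate
/// the snapshot an in-flight request is already using.
fn have_any_keys() -> bool {
    !JWT_KEYS.load().0.is_empty()
}
```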
John Spray
1d68f52b57 pageserver: move deletion failpoint inside backoff (#5814)
## Problem

When enabled, this failpoint would busy-spin in a loop that emits log
messages.

## Summary of changes

Move the failpoint inside a backoff::exponential block: it will still
spam the log, but at a much lower rate.

---------

Co-authored-by: Joonas Koivunen <joonas@neon.tech>
2023-11-07 14:25:51 +00:00
Alexander Bayandin
4cd47b7d4b Dockerfile: Set BUILD_TAG for storage services (#5812)
## Problem

https://github.com/neondatabase/neon/pull/5576 added `build-tag`
reporting to `libmetrics_build_info`, but it's not reported because we
didn't set the corresponding env variable in the build process.

## Summary of changes
- Add `BUILD_TAG` env var while building services
2023-11-07 13:45:59 +00:00
Fernando Luz
0141c95788 build: Add warning when missing postgres submodule during the build (#5614)
I forked the project and wasn't able to compile it in my local repo;
after some searching, I found the solution in the Neon forum. After a PR
discussion, I changed the Makefile to warn about the missing `git
submodule update` step.

---------

Signed-off-by: Fernando Luz <prof.fernando.luz@gmail.com>
Co-authored-by: Joonas Koivunen <joonas@neon.tech>
2023-11-07 12:13:05 +00:00
Shany Pozin
0ac4cf67a6 Use self.tenants instead of TENANTS (#5811) 2023-11-07 11:38:02 +00:00
Joonas Koivunen
4be6bc7251 refactor: remove unnecessary unsafe (#5802)
Unsafe impls for `Send` and `Sync` should not be added by default. In
the case of `SlotGuard`, removing them does not cause any issues, as the
compiler automatically derives those.

This PR adds a requirement to document unsafety (see
[clippy::undocumented_unsafe_blocks]) and opportunistically adds
`#![deny(unsafe_code)]` to most places where we don't have unsafe code
right now.

TRPL on Send and Sync:
https://doc.rust-lang.org/book/ch16-04-extensible-concurrency-sync-and-send.html

[clippy::undocumented_unsafe_blocks]:
https://rust-lang.github.io/rust-clippy/master/#/undocumented_unsafe_blocks
2023-11-07 10:26:25 +00:00
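
A small illustration of the lint setup this PR rolls out: crates without unsafe code deny it outright, and any remaining unsafe block must carry a `// SAFETY:` comment. The function is a made-up example:

```rust
// At the crate root of crates that have no unsafe code today:
#![deny(unsafe_code)]
// Everywhere: any remaining unsafe block must carry a SAFETY comment.
#![deny(clippy::undocumented_unsafe_blocks)]

#[allow(unsafe_code)] // narrowly re-allow where it is actually required
fn first_byte(bytes: &[u8]) -> u8 {
    assert!(!bytes.is_empty());
    // SAFETY: the assert above guarantees index 0 is in bounds.
    unsafe { *bytes.get_unchecked(0) }
}
```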
John Spray
a394f49e0d pageserver: avoid converting an error to anyhow::Error (#5803)
This was preventing it from being cleanly converted to a
CalculateLogicalSizeError::Cancelled, resulting in "Logical size
calculation failed" errors in the logs.
2023-11-07 09:35:45 +00:00
John Spray
c00651ff9b pageserver: start refactoring into TenantManager (#5797)
## Problem

See: https://github.com/neondatabase/neon/issues/5796

## Summary of changes

Completing the refactor is quite verbose and can be done in stages: each
interface that is currently called directly from a top-level mgr.rs
function can be moved into TenantManager once the relevant subsystems
have access to it.

Landing the initial change that creates TenantManager is useful because
it enables new code to use it without having to be altered later, and
sets us up to incrementally fix the existing code to use an explicit
Arc<TenantManager> instead of relying on the static TENANTS.
2023-11-07 09:06:53 +00:00
Richy Wang
bea8efac24 Fix comments in 'receive_wal.rs'. (#5807)
## Problem
Some comments in 'receive_wal.rs' are not accurate. They were probably
copied from 'send_wal.rs' and left unchanged.
## Summary of changes
This commit fixes two comments in the code:
- Changed "/// Unregister walsender." to "/// Unregister walreceiver."
- Changed "///Scope guard to access slot in WalSenders registry" to
  "///Scope guard to access slot in WalReceivers registry."
2023-11-07 09:13:01 +01:00
Conrad Ludgate
ad5b02e175 proxy: remove unsafe (#5805)
## Problem

`unsafe {}`

## Summary of changes

`CStr` has a method to parse the bytes up to a null byte, so we don't
have to do it ourselves.
2023-11-06 17:44:44 +00:00
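
The commit doesn't name the method, but `CStr::from_bytes_until_nul` (stable since Rust 1.69) is the standard safe replacement for hand-rolled NUL scanning, roughly:

```rust
use std::ffi::CStr;

/// Return the UTF-8 contents of `buf` up to the first NUL byte, if any.
fn str_until_nul(buf: &[u8]) -> Option<&str> {
    // Validates that a NUL terminator exists and never reads past it.
    CStr::from_bytes_until_nul(buf).ok()?.to_str().ok()
}
```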
Arpad Müller
b09a851705 Make azure blob storage not do extra metadata requests (#5777)
Load the metadata from the returned `GetBlobResponse` and avoid
downloading it via a separate request.
As it turns out, the SDK does return the metadata:
https://github.com/Azure/azure-sdk-for-rust/issues/1439 .

This PR will reduce the number of requests to Azure caused by downloads.

Fixes #5571
2023-11-06 15:16:55 +00:00
John Spray
85cd97af61 pageserver: add InProgress tenant map state, use a sync lock for the map (#5367)
## Problem

Follows on from #5299 
- We didn't have a generic way to protect a tenant undergoing changes:
`Tenant` had states, but for our arbitrary transitions between
secondary/attached, we need a general way to say "reserve this tenant
ID, and don't allow any other ops on it, but don't try and report it as
being in any particular state".
- The TenantsMap structure was behind an async RwLock, but it was never
correct to hold it across await points: that would block any other
changes for all tenants.


## Summary of changes

- Add the `TenantSlot::InProgress` value.  This means:
  - Incoming administrative operations on the tenant should retry later
  - Anything trying to read the live state of the tenant (e.g. a page
    service reader) should retry later or block.
- Store TenantsMap in `std::sync::RwLock`
- Provide an extended `get_active_tenant_with_timeout` for page_service
to use, which will wait on InProgress slots as well as non-active
tenants.

Closes: https://github.com/neondatabase/neon/issues/5378

---------

Co-authored-by: Christian Schwarz <christian@neon.tech>
2023-11-06 14:03:22 +00:00
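
An illustrative shape of the map after this change; the real types live in the pageserver's tenant manager and differ in detail:

```rust
use std::{collections::HashMap, sync::{Arc, RwLock}};

struct Tenant; // stand-in for the real pageserver Tenant

enum TenantSlot {
    /// Live tenant: safe to hand out to readers and admin operations.
    Attached(Arc<Tenant>),
    /// Reserved while an attach/detach/delete is underway: admin operations
    /// retry later; readers wait (with a timeout) or retry.
    InProgress,
}

/// A *synchronous* RwLock: the map is never held across an .await point,
/// so one slow tenant operation cannot block changes to all other tenants.
type TenantsMap = RwLock<HashMap<u128 /* TenantId stand-in */, TenantSlot>>;
```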
Arpad Müller
e6470ee92e Add API description for safekeeper copy endpoint (#5770)
Adds a YAML API description for a new endpoint that allows creating a
new timeline as a copy of an existing one.
 
Part of #5282
2023-11-06 15:00:07 +01:00
bojanserafimov
dc72567288 Layer flush minor speedup (#5765)
Convert keys to `i128` before sorting
2023-11-06 08:58:20 -05:00
John Spray
6defa2b5d5 pageserver: add Gate as a partner to CancellationToken for safe shutdown of Tenant & Timeline (#5711)
## Problem

When shutting down a Tenant, it isn't just important to cause any
background tasks to stop. It's also important to wait until they have
stopped before declaring shutdown complete, in cases where we may re-use
the tenant's local storage for something else, such as running in
secondary mode, or creating a new tenant with the same ID.

## Summary of changes

A `Gate` class is added, inspired by
[seastar::gate](https://docs.seastar.io/master/classseastar_1_1gate.html).
For types that have an important lifetime that corresponds to some
physical resource, use of a Gate as well as a CancellationToken provides
a robust pattern for async requests & shutdown:
- Requests must always acquire the gate as long as they are using the
object
- Shutdown must set the cancellation token, and then `close()` the gate
to wait for requests in progress before returning.

This is not for memory safety: it's for expressing the difference
between "Arc<Tenant> exists", and "This tenant's files on disk are
eligible to be read/written".

- Both Tenant and Timeline get a Gate & CancellationToken.
- The Timeline gate is held during eviction of layers, and during
page_service requests.
- Existing cancellation support in page_service is refined to use the
timeline-scope cancellation token instead of a process-scope
cancellation token. This replaces the use of `task_mgr::associate_with`:
tasks no longer change their tenant/timeline identity after being
spawned.

The Tenant's Gate is not yet used, but will be important for
Tenant-scoped operations in secondary mode, where we must ensure that
our secondary-mode downloads for a tenant are gated wrt the activity of
an attached Tenant.

This is part of a broader move away from using the global-state driven
`task_mgr` shutdown tokens:
- less global state where we rely on implicit knowledge of what task a
given function is running in, and more explicit references to the
cancellation token that a particular function/type will respect, making
shutdown easier to reason about.
- eventually avoid the big global TASKS mutex.

---------

Co-authored-by: Joonas Koivunen <joonas@neon.tech>
2023-11-06 12:39:20 +00:00
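
A minimal sketch of the gate-plus-cancellation pattern, with an illustrative `Gate` built on tokio's `RwLock`; the repo's `Gate` is its own type, so this only shows the request/shutdown protocol:

```rust
use std::sync::Arc;

use tokio::sync::{OwnedRwLockReadGuard, RwLock, TryLockError};
use tokio_util::sync::CancellationToken;

/// Requests hold read guards; close() takes the write lock, which resolves
/// only once every outstanding guard has been dropped.
struct Gate(Arc<RwLock<()>>);
struct GateGuard(OwnedRwLockReadGuard<()>);

impl Gate {
    fn new() -> Self {
        Gate(Arc::new(RwLock::new(())))
    }

    /// Fails once shutdown is holding (or waiting for) the write lock.
    fn enter(&self) -> Result<GateGuard, TryLockError> {
        self.0.clone().try_read_owned().map(GateGuard)
    }

    /// Wait for all in-flight requests to finish.
    async fn close(&self) {
        let _closed = self.0.write().await;
    }
}

struct Timeline {
    gate: Gate,
    cancel: CancellationToken,
}

impl Timeline {
    async fn handle_request(&self) -> Result<(), TryLockError> {
        let _guard = self.gate.enter()?; // refuse new work during shutdown
        // ... serve the request, checking self.cancel at await points ...
        Ok(())
    }

    async fn shutdown(&self) {
        self.cancel.cancel(); // ask in-flight work to stop promptly
        self.gate.close().await; // and wait until it actually has stopped
    }
}
```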
duguorong009
b3d3a2587d feat: improve the serde impl for several types(Lsn, TenantId, TimelineId ...) (#5335)
Improve the serde impl for several types (`Lsn`, `TenantId`,
`TimelineId`) by making them sensitive to
`Serializer::is_human_readable` (true for JSON, false for bincode).

Fixes #3511 by:
- Implement the custom serde for `Lsn`
- Implement the custom serde for `Id`
- Add the helper module `serde_as_u64` in `libs/utils/src/lsn.rs`
- Remove the unnecessary attr `#[serde_as(as = "DisplayFromStr")]` in
all possible structs

Additionally some safekeeper types gained serde tests.

---------

Co-authored-by: Joonas Koivunen <joonas@neon.tech>
2023-11-06 11:40:03 +02:00
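
A minimal sketch of an `is_human_readable`-sensitive impl for an LSN-like newtype, assuming the usual `HI/LO` hex text form; the real implementation lives in `libs/utils/src/lsn.rs`:

```rust
use serde::{de, Deserialize, Deserializer, Serialize, Serializer};

/// Illustrative newtype over the flat 64-bit LSN value.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Lsn(u64);

impl Serialize for Lsn {
    fn serialize<S: Serializer>(&self, ser: S) -> Result<S::Ok, S::Error> {
        if ser.is_human_readable() {
            // JSON and friends: "1/4E2C7D50"-style hex string.
            ser.serialize_str(&format!("{:X}/{:X}", self.0 >> 32, self.0 as u32))
        } else {
            // bincode and friends: plain u64.
            ser.serialize_u64(self.0)
        }
    }
}

impl<'de> Deserialize<'de> for Lsn {
    fn deserialize<D: Deserializer<'de>>(d: D) -> Result<Self, D::Error> {
        if d.is_human_readable() {
            let text = String::deserialize(d)?;
            parse_hex_lsn(&text)
                .map(Lsn)
                .ok_or_else(|| de::Error::custom("expected HI/LO hex LSN"))
        } else {
            Ok(Lsn(u64::deserialize(d)?))
        }
    }
}

/// Parse "HI/LO" hex notation into the flat u64 representation.
fn parse_hex_lsn(text: &str) -> Option<u64> {
    let (hi, lo) = text.split_once('/')?;
    let hi = u32::from_str_radix(hi, 16).ok()?;
    let lo = u32::from_str_radix(lo, 16).ok()?;
    Some(((hi as u64) << 32) | lo as u64)
}
```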
Heikki Linnakangas
b85fc39bdb Update control plane API path for getting compute spec. (#5357)
We changed the path in the control plane. The old path is still accepted
for compatibility with existing computes, but we'd like to phase it out.
2023-11-06 09:26:09 +02:00
duguorong009
09b5954526 refactor: use streaming in safekeeper /v1/debug_dump http response (#5731)
- Update the handler for the `/v1/debug_dump` HTTP response in safekeeper
- Update `debug_dump::build()` to use streaming in the JSON build process
2023-11-05 10:16:54 +00:00
John Spray
306c4f9967 s3_scrubber: prepare for scrubbing buckets with generation-aware content (#5700)
## Problem

The scrubber didn't know how to find the latest index_part when
generations were in use.

## Summary of changes

- Teach the scrubber to do the same dance that pageserver does when
finding the latest index_part.json
- Teach the scrubber how to understand layer files with generation
suffixes.
- General improvement to testability: scan_metadata has a machine
readable output that the testing `S3Scrubber` wrapper can read.
- Existing test coverage of scrubber was false-passing because it just
didn't see any data due to prefixing of data in the bucket. Fix that.

This is incremental improvement: the more confidence we can have in the
scrubber, the more we can use it in integration tests to validate the
state of remote storage.

---------

Co-authored-by: Arpad Müller <arpad-m@users.noreply.github.com>
2023-11-03 17:36:02 +00:00
Konstantin Knizhnik
5ceccdc7de Logical replication startup fixes (#5750)
## Problem

See https://neondb.slack.com/archives/C04DGM6SMTM/p1698226491736459

## Summary of changes

Update WAL affected buffers when restoring WAL from safekeeper

---------

Co-authored-by: Konstantin Knizhnik <knizhnik@neon.tech>
Co-authored-by: Arseny Sher <sher-ars@yandex.ru>
2023-11-03 18:40:27 +02:00
Conrad Ludgate
cdcaa329bf proxy: no more statements (#5747)
## Problem

My prepared-statements change in tokio-postgres landed in the latest
release. It didn't work as we intended.

## Summary of changes

https://github.com/neondatabase/rust-postgres/pull/24
2023-11-03 08:30:58 +00:00
Joonas Koivunen
27bdbf5e36 chore(layer): restore logging, doc changes (#5766)
Some of the log messages were lost with #4938. This PR adds some of
them back, most notably:

- starting an on-demand download
- successful completion of an on-demand download
- ability to see when there were many waiters for the layer download
- "unexpectedly on-demand downloading ..." is now `info!`

Additionally, some rare events that should never happen are now logged
as errors.
2023-11-02 19:05:33 +00:00
khanova
4c7fa12a2a Proxy introduce allowed ips (#5729)
## Problem

Proxy doesn't accept wake_compute responses with the allowed IPs.

## Summary of changes

Extend the wake_compute API to be able to return allowed_ips.
2023-11-02 16:26:15 +00:00
Em Sharnoff
367971a0e9 vm-monitor: Remove support for file cache in tmpfs (#5617)
ref neondatabase/cloud#7516.

We switched everything over to the on-disk file cache; now it's time to
remove support for having it in tmpfs.
2023-11-02 16:06:16 +00:00
bojanserafimov
51570114ea Remove outdated and flaky perf test (#5762) 2023-11-02 10:43:59 -04:00
Joonas Koivunen
098d3111a5 fix(layer): get_and_upgrade and metrics (#5767)
When introducing `get_and_upgrade` I forgot that an `evict_and_wait`
would have already incremented the counter for started evictions, but an
upgrade would just "silently" cancel the eviction, as no drop would ever
run. These metrics are likely sources for alerts with the next release,
so it's important to keep them correct.
2023-11-02 13:06:14 +00:00
Joonas Koivunen
3737fe3a4b fix(layer): error out early if layer path is non-file (#5756)
In an earlier PR
(https://github.com/neondatabase/neon/pull/5743#discussion_r1378625244)
I added a FIXME, and @jcsp suggested a simple solution, so implement it
here. As to why I did not implement this originally: there is no concept
of a permanent failure, so this failure will happen quite often. I don't
think the frequency is a problem, however.

Sadly, for std::fs::FileType there is only decimal and hex formatting,
no octal.
2023-11-02 11:03:38 +00:00
John Spray
5650138532 pageserver: helpers for explicitly dying on fatal I/O errors (#5651)
Following from discussion on
https://github.com/neondatabase/neon/pull/5436 where hacking an implicit
die-on-fatal-io behavior into an Error type was a source of disagreement
-- in this PR, dying on fatal I/O errors is explicit, with `fatal_err`
and `maybe_fatal_err` helpers in the `MaybeFatalIo` trait, which is
implemented for std::io::Result.

To enable this approach with `crashsafe_overwrite`, the return type of
that function is changed to std::io::Result -- the previous error enum
for this function was not used for any logic, and the utility of saying
exactly which step in the function failed is outweighed by the hygiene
of having an I/O funciton return an io::Result.

The initial use case for these helpers is the deletion queue.
2023-11-02 09:14:26 +00:00
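
A rough sketch of what such helpers look like, using std-only logging and an assumed classification of EIO as the fatal case; the real trait and its error classification live in the pageserver and may differ:

```rust
use std::io;

trait MaybeFatalIo<T> {
    fn fatal_err(self, context: &str) -> T;
    fn maybe_fatal_err(self, context: &str) -> io::Result<T>;
}

impl<T> MaybeFatalIo<T> for io::Result<T> {
    /// Treat any error as fatal: log it and take the process down.
    fn fatal_err(self, context: &str) -> T {
        self.unwrap_or_else(|e| {
            eprintln!("fatal I/O error: {context}: {e}");
            std::process::abort();
        })
    }

    /// Only die on errors that indicate the disk itself is broken
    /// (here: EIO, as an assumption); hand everything else back to the caller.
    fn maybe_fatal_err(self, context: &str) -> io::Result<T> {
        if let Err(e) = &self {
            if e.raw_os_error() == Some(5 /* EIO */) {
                eprintln!("fatal I/O error: {context}: {e}");
                std::process::abort();
            }
        }
        self
    }
}
```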
Joonas Koivunen
2dca4c03fc feat(layer): cancellable get_or_maybe_download (#5744)
With the layer implementation as done in #4938, cancellation can cause
two concurrent downloads on the same path, due to how
`RemoteTimelineClient::download_remote_layer` handles tempfiles. Thread
the init semaphore through the spawned download task to make this
impossible.
2023-11-02 08:06:32 +00:00
bojanserafimov
0b790b6d00 Record wal size in import benchmark (#5755) 2023-11-01 17:02:58 -04:00
Joonas Koivunen
e82d1ad6b8 fix(layer): reinit on access before eviction happens (#5743)
Right before merging, I added a loop to `fn
LayerInner::get_or_maybe_download`, which was always supposed to be
there. However, I had forgotten to restart initialization instead of
waiting for the eviction to happen, as the original design goal of
"eviction should always lose to redownload (or init)" requires. This was
wrong. After this fix, if the `spawn_blocking` queue is blocked on
something, nothing bad will happen.

Part of #5737.
2023-11-01 17:38:32 +02:00
161 changed files with 7079 additions and 4320 deletions

View File

@@ -22,5 +22,11 @@ platforms = [
# "x86_64-pc-windows-msvc", # "x86_64-pc-windows-msvc",
] ]
[final-excludes]
# vm_monitor benefits from the same Cargo.lock as the rest of our artifacts, but
# it is built primarly in separate repo neondatabase/autoscaling and thus is excluded
# from depending on workspace-hack because most of the dependencies are not used.
workspace-members = ["vm_monitor"]
# Write out exact versions rather than a semver range. (Defaults to false.) # Write out exact versions rather than a semver range. (Defaults to false.)
# exact-versions = true # exact-versions = true

View File

@@ -17,8 +17,9 @@ assignees: ''
## Implementation ideas ## Implementation ideas
## Tasks ```[tasklist]
- [ ] ### Tasks
```
## Other related tasks and Epics ## Other related tasks and Epics

View File

@@ -723,6 +723,7 @@ jobs:
--cache-repo 369495373322.dkr.ecr.eu-central-1.amazonaws.com/cache --cache-repo 369495373322.dkr.ecr.eu-central-1.amazonaws.com/cache
--context . --context .
--build-arg GIT_VERSION=${{ github.event.pull_request.head.sha || github.sha }} --build-arg GIT_VERSION=${{ github.event.pull_request.head.sha || github.sha }}
--build-arg BUILD_TAG=${{ needs.tag.outputs.build-tag }}
--build-arg REPOSITORY=369495373322.dkr.ecr.eu-central-1.amazonaws.com --build-arg REPOSITORY=369495373322.dkr.ecr.eu-central-1.amazonaws.com
--destination 369495373322.dkr.ecr.eu-central-1.amazonaws.com/neon:${{needs.tag.outputs.build-tag}} --destination 369495373322.dkr.ecr.eu-central-1.amazonaws.com/neon:${{needs.tag.outputs.build-tag}}
--destination neondatabase/neon:${{needs.tag.outputs.build-tag}} --destination neondatabase/neon:${{needs.tag.outputs.build-tag}}

33
Cargo.lock generated
View File

@@ -170,6 +170,12 @@ dependencies = [
"backtrace", "backtrace",
] ]
[[package]]
name = "arc-swap"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bddcadddf5e9015d310179a59bb28c4d4b9920ad0f11e8e14dbadf654890c9a6"
[[package]] [[package]]
name = "archery" name = "archery"
version = "0.5.0" version = "0.5.0"
@@ -3550,7 +3556,7 @@ dependencies = [
[[package]] [[package]]
name = "postgres" name = "postgres"
version = "0.19.4" version = "0.19.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?rev=7434d9388965a17a6d113e5dfc0e65666a03b4c2#7434d9388965a17a6d113e5dfc0e65666a03b4c2" source = "git+https://github.com/neondatabase/rust-postgres.git?rev=ce7260db5998fe27167da42503905a12e7ad9048#ce7260db5998fe27167da42503905a12e7ad9048"
dependencies = [ dependencies = [
"bytes", "bytes",
"fallible-iterator", "fallible-iterator",
@@ -3563,7 +3569,7 @@ dependencies = [
[[package]] [[package]]
name = "postgres-native-tls" name = "postgres-native-tls"
version = "0.5.0" version = "0.5.0"
source = "git+https://github.com/neondatabase/rust-postgres.git?rev=7434d9388965a17a6d113e5dfc0e65666a03b4c2#7434d9388965a17a6d113e5dfc0e65666a03b4c2" source = "git+https://github.com/neondatabase/rust-postgres.git?rev=ce7260db5998fe27167da42503905a12e7ad9048#ce7260db5998fe27167da42503905a12e7ad9048"
dependencies = [ dependencies = [
"native-tls", "native-tls",
"tokio", "tokio",
@@ -3574,7 +3580,7 @@ dependencies = [
[[package]] [[package]]
name = "postgres-protocol" name = "postgres-protocol"
version = "0.6.4" version = "0.6.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?rev=7434d9388965a17a6d113e5dfc0e65666a03b4c2#7434d9388965a17a6d113e5dfc0e65666a03b4c2" source = "git+https://github.com/neondatabase/rust-postgres.git?rev=ce7260db5998fe27167da42503905a12e7ad9048#ce7260db5998fe27167da42503905a12e7ad9048"
dependencies = [ dependencies = [
"base64 0.20.0", "base64 0.20.0",
"byteorder", "byteorder",
@@ -3592,7 +3598,7 @@ dependencies = [
[[package]] [[package]]
name = "postgres-types" name = "postgres-types"
version = "0.2.4" version = "0.2.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?rev=7434d9388965a17a6d113e5dfc0e65666a03b4c2#7434d9388965a17a6d113e5dfc0e65666a03b4c2" source = "git+https://github.com/neondatabase/rust-postgres.git?rev=ce7260db5998fe27167da42503905a12e7ad9048#ce7260db5998fe27167da42503905a12e7ad9048"
dependencies = [ dependencies = [
"bytes", "bytes",
"fallible-iterator", "fallible-iterator",
@@ -4058,6 +4064,7 @@ dependencies = [
"aws-config", "aws-config",
"aws-credential-types", "aws-credential-types",
"aws-sdk-s3", "aws-sdk-s3",
"aws-smithy-async",
"aws-smithy-http", "aws-smithy-http",
"aws-types", "aws-types",
"azure_core", "azure_core",
@@ -4419,6 +4426,7 @@ dependencies = [
"itertools", "itertools",
"pageserver", "pageserver",
"rand 0.8.5", "rand 0.8.5",
"remote_storage",
"reqwest", "reqwest",
"serde", "serde",
"serde_json", "serde_json",
@@ -4477,6 +4485,7 @@ dependencies = [
"tokio", "tokio",
"tokio-io-timeout", "tokio-io-timeout",
"tokio-postgres", "tokio-postgres",
"tokio-stream",
"toml_edit", "toml_edit",
"tracing", "tracing",
"url", "url",
@@ -4679,6 +4688,16 @@ dependencies = [
"serde_derive", "serde_derive",
] ]
[[package]]
name = "serde_assert"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eda563240c1288b044209be1f0d38bb4d15044fb3e00dc354fbc922ab4733e80"
dependencies = [
"hashbrown 0.13.2",
"serde",
]
[[package]] [[package]]
name = "serde_derive" name = "serde_derive"
version = "1.0.183" version = "1.0.183"
@@ -5396,7 +5415,7 @@ dependencies = [
[[package]] [[package]]
name = "tokio-postgres" name = "tokio-postgres"
version = "0.7.7" version = "0.7.7"
source = "git+https://github.com/neondatabase/rust-postgres.git?rev=7434d9388965a17a6d113e5dfc0e65666a03b4c2#7434d9388965a17a6d113e5dfc0e65666a03b4c2" source = "git+https://github.com/neondatabase/rust-postgres.git?rev=ce7260db5998fe27167da42503905a12e7ad9048#ce7260db5998fe27167da42503905a12e7ad9048"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"byteorder", "byteorder",
@@ -5939,6 +5958,7 @@ name = "utils"
version = "0.1.0" version = "0.1.0"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"arc-swap",
"async-trait", "async-trait",
"bincode", "bincode",
"byteorder", "byteorder",
@@ -5965,6 +5985,7 @@ dependencies = [
"routerify", "routerify",
"sentry", "sentry",
"serde", "serde",
"serde_assert",
"serde_json", "serde_json",
"serde_with", "serde_with",
"signal-hook", "signal-hook",
@@ -6035,7 +6056,6 @@ dependencies = [
"tokio-util", "tokio-util",
"tracing", "tracing",
"tracing-subscriber", "tracing-subscriber",
"workspace_hack",
] ]
[[package]] [[package]]
@@ -6463,6 +6483,7 @@ dependencies = [
"clap", "clap",
"clap_builder", "clap_builder",
"crossbeam-utils", "crossbeam-utils",
"dashmap",
"either", "either",
"fail", "fail",
"futures", "futures",

View File

@@ -36,6 +36,7 @@ license = "Apache-2.0"
## All dependency versions, used in the project ## All dependency versions, used in the project
[workspace.dependencies] [workspace.dependencies]
anyhow = { version = "1.0", features = ["backtrace"] } anyhow = { version = "1.0", features = ["backtrace"] }
arc-swap = "1.6"
async-compression = { version = "0.4.0", features = ["tokio", "gzip"] } async-compression = { version = "0.4.0", features = ["tokio", "gzip"] }
azure_core = "0.16" azure_core = "0.16"
azure_identity = "0.16" azure_identity = "0.16"
@@ -47,6 +48,7 @@ async-trait = "0.1"
aws-config = { version = "0.56", default-features = false, features=["rustls"] } aws-config = { version = "0.56", default-features = false, features=["rustls"] }
aws-sdk-s3 = "0.29" aws-sdk-s3 = "0.29"
aws-smithy-http = "0.56" aws-smithy-http = "0.56"
aws-smithy-async = { version = "0.56", default-features = false, features=["rt-tokio"] }
aws-credential-types = "0.56" aws-credential-types = "0.56"
aws-types = "0.56" aws-types = "0.56"
axum = { version = "0.6.20", features = ["ws"] } axum = { version = "0.6.20", features = ["ws"] }
@@ -65,7 +67,7 @@ comfy-table = "6.1"
const_format = "0.2" const_format = "0.2"
crc32c = "0.6" crc32c = "0.6"
crossbeam-utils = "0.8.5" crossbeam-utils = "0.8.5"
dashmap = "5.5.0" dashmap = { version = "5.5.0", features = ["raw-api"] }
either = "1.8" either = "1.8"
enum-map = "2.4.2" enum-map = "2.4.2"
enumset = "1.0.12" enumset = "1.0.12"
@@ -124,6 +126,7 @@ sentry = { version = "0.31", default-features = false, features = ["backtrace",
serde = { version = "1.0", features = ["derive"] } serde = { version = "1.0", features = ["derive"] }
serde_json = "1" serde_json = "1"
serde_with = "2.0" serde_with = "2.0"
serde_assert = "0.5.0"
sha2 = "0.10.2" sha2 = "0.10.2"
signal-hook = "0.3" signal-hook = "0.3"
smallvec = "1.11" smallvec = "1.11"
@@ -161,11 +164,11 @@ env_logger = "0.10"
log = "0.4" log = "0.4"
## Libraries from neondatabase/ git forks, ideally with changes to be upstreamed ## Libraries from neondatabase/ git forks, ideally with changes to be upstreamed
postgres = { git = "https://github.com/neondatabase/rust-postgres.git", rev="7434d9388965a17a6d113e5dfc0e65666a03b4c2" } postgres = { git = "https://github.com/neondatabase/rust-postgres.git", rev="ce7260db5998fe27167da42503905a12e7ad9048" }
postgres-native-tls = { git = "https://github.com/neondatabase/rust-postgres.git", rev="7434d9388965a17a6d113e5dfc0e65666a03b4c2" } postgres-native-tls = { git = "https://github.com/neondatabase/rust-postgres.git", rev="ce7260db5998fe27167da42503905a12e7ad9048" }
postgres-protocol = { git = "https://github.com/neondatabase/rust-postgres.git", rev="7434d9388965a17a6d113e5dfc0e65666a03b4c2" } postgres-protocol = { git = "https://github.com/neondatabase/rust-postgres.git", rev="ce7260db5998fe27167da42503905a12e7ad9048" }
postgres-types = { git = "https://github.com/neondatabase/rust-postgres.git", rev="7434d9388965a17a6d113e5dfc0e65666a03b4c2" } postgres-types = { git = "https://github.com/neondatabase/rust-postgres.git", rev="ce7260db5998fe27167da42503905a12e7ad9048" }
tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", rev="7434d9388965a17a6d113e5dfc0e65666a03b4c2" } tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", rev="ce7260db5998fe27167da42503905a12e7ad9048" }
## Other git libraries ## Other git libraries
heapless = { default-features=false, features=[], git = "https://github.com/japaric/heapless.git", rev = "644653bf3b831c6bb4963be2de24804acf5e5001" } # upstream release pending heapless = { default-features=false, features=[], git = "https://github.com/japaric/heapless.git", rev = "644653bf3b831c6bb4963be2de24804acf5e5001" } # upstream release pending
@@ -202,7 +205,7 @@ tonic-build = "0.9"
# This is only needed for proxy's tests. # This is only needed for proxy's tests.
# TODO: we should probably fork `tokio-postgres-rustls` instead. # TODO: we should probably fork `tokio-postgres-rustls` instead.
tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", rev="7434d9388965a17a6d113e5dfc0e65666a03b4c2" } tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", rev="ce7260db5998fe27167da42503905a12e7ad9048" }
################# Binary contents sections ################# Binary contents sections

View File

@@ -27,6 +27,7 @@ RUN set -e \
FROM $REPOSITORY/$IMAGE:$TAG AS build FROM $REPOSITORY/$IMAGE:$TAG AS build
WORKDIR /home/nonroot WORKDIR /home/nonroot
ARG GIT_VERSION=local ARG GIT_VERSION=local
ARG BUILD_TAG
# Enable https://github.com/paritytech/cachepot to cache Rust crates' compilation results in Docker builds. # Enable https://github.com/paritytech/cachepot to cache Rust crates' compilation results in Docker builds.
# Set up cachepot to use an AWS S3 bucket for cache results, to reuse it between `docker build` invocations. # Set up cachepot to use an AWS S3 bucket for cache results, to reuse it between `docker build` invocations.
@@ -78,9 +79,9 @@ COPY --from=build --chown=neon:neon /home/nonroot/target/release/pg_sni_router
COPY --from=build --chown=neon:neon /home/nonroot/target/release/pageserver /usr/local/bin COPY --from=build --chown=neon:neon /home/nonroot/target/release/pageserver /usr/local/bin
COPY --from=build --chown=neon:neon /home/nonroot/target/release/pagectl /usr/local/bin COPY --from=build --chown=neon:neon /home/nonroot/target/release/pagectl /usr/local/bin
COPY --from=build --chown=neon:neon /home/nonroot/target/release/safekeeper /usr/local/bin COPY --from=build --chown=neon:neon /home/nonroot/target/release/safekeeper /usr/local/bin
COPY --from=build --chown=neon:neon /home/nonroot/target/release/storage_broker /usr/local/bin COPY --from=build --chown=neon:neon /home/nonroot/target/release/storage_broker /usr/local/bin
COPY --from=build --chown=neon:neon /home/nonroot/target/release/proxy /usr/local/bin COPY --from=build --chown=neon:neon /home/nonroot/target/release/proxy /usr/local/bin
COPY --from=build --chown=neon:neon /home/nonroot/target/release/neon_local /usr/local/bin COPY --from=build --chown=neon:neon /home/nonroot/target/release/neon_local /usr/local/bin
COPY --from=pg-build /home/nonroot/pg_install/v14 /usr/local/v14/ COPY --from=pg-build /home/nonroot/pg_install/v14 /usr/local/v14/
COPY --from=pg-build /home/nonroot/pg_install/v15 /usr/local/v15/ COPY --from=pg-build /home/nonroot/pg_install/v15 /usr/local/v15/

View File

@@ -72,6 +72,10 @@ neon: postgres-headers walproposer-lib
# #
$(POSTGRES_INSTALL_DIR)/build/%/config.status: $(POSTGRES_INSTALL_DIR)/build/%/config.status:
+@echo "Configuring Postgres $* build" +@echo "Configuring Postgres $* build"
@test -s $(ROOT_PROJECT_DIR)/vendor/postgres-$*/configure || { \
echo "\nPostgres submodule not found in $(ROOT_PROJECT_DIR)/vendor/postgres-$*/, execute "; \
echo "'git submodule update --init --recursive --depth 2 --progress .' in project root.\n"; \
exit 1; }
mkdir -p $(POSTGRES_INSTALL_DIR)/build/$* mkdir -p $(POSTGRES_INSTALL_DIR)/build/$*
(cd $(POSTGRES_INSTALL_DIR)/build/$* && \ (cd $(POSTGRES_INSTALL_DIR)/build/$* && \
env PATH="$(EXTRA_PATH_OVERRIDES):$$PATH" $(ROOT_PROJECT_DIR)/vendor/postgres-$*/configure \ env PATH="$(EXTRA_PATH_OVERRIDES):$$PATH" $(ROOT_PROJECT_DIR)/vendor/postgres-$*/configure \

View File

@@ -283,7 +283,6 @@ fn main() -> Result<()> {
.expect("--vm-monitor-addr should always be set because it has a default arg"); .expect("--vm-monitor-addr should always be set because it has a default arg");
let file_cache_connstr = matches.get_one::<String>("filecache-connstr"); let file_cache_connstr = matches.get_one::<String>("filecache-connstr");
let cgroup = matches.get_one::<String>("cgroup"); let cgroup = matches.get_one::<String>("cgroup");
let file_cache_on_disk = matches.get_flag("file-cache-on-disk");
// Only make a runtime if we need to. // Only make a runtime if we need to.
// Note: it seems like you can make a runtime in an inner scope and // Note: it seems like you can make a runtime in an inner scope and
@@ -310,7 +309,6 @@ fn main() -> Result<()> {
cgroup: cgroup.cloned(), cgroup: cgroup.cloned(),
pgconnstr: file_cache_connstr.cloned(), pgconnstr: file_cache_connstr.cloned(),
addr: vm_monitor_addr.clone(), addr: vm_monitor_addr.clone(),
file_cache_on_disk,
})), })),
token.clone(), token.clone(),
)) ))
@@ -482,6 +480,8 @@ fn cli() -> clap::Command {
.value_name("FILECACHE_CONNSTR"), .value_name("FILECACHE_CONNSTR"),
) )
.arg( .arg(
// DEPRECATED, NO LONGER DOES ANYTHING.
// See https://github.com/neondatabase/cloud/issues/7516
Arg::new("file-cache-on-disk") Arg::new("file-cache-on-disk")
.long("file-cache-on-disk") .long("file-cache-on-disk")
.action(clap::ArgAction::SetTrue), .action(clap::ArgAction::SetTrue),

View File

@@ -710,8 +710,12 @@ impl ComputeNode {
// `pg_ctl` for start / stop, so this just seems much easier to do as we already // `pg_ctl` for start / stop, so this just seems much easier to do as we already
// have opened connection to Postgres and superuser access. // have opened connection to Postgres and superuser access.
#[instrument(skip_all)] #[instrument(skip_all)]
fn pg_reload_conf(&self, client: &mut Client) -> Result<()> { fn pg_reload_conf(&self) -> Result<()> {
client.simple_query("SELECT pg_reload_conf()")?; let pgctl_bin = Path::new(&self.pgbin).parent().unwrap().join("pg_ctl");
Command::new(pgctl_bin)
.args(["reload", "-D", &self.pgdata])
.output()
.expect("cannot run pg_ctl process");
Ok(()) Ok(())
} }
@@ -724,9 +728,9 @@ impl ComputeNode {
// Write new config // Write new config
let pgdata_path = Path::new(&self.pgdata); let pgdata_path = Path::new(&self.pgdata);
config::write_postgres_conf(&pgdata_path.join("postgresql.conf"), &spec, None)?; config::write_postgres_conf(&pgdata_path.join("postgresql.conf"), &spec, None)?;
self.pg_reload_conf()?;
let mut client = Client::connect(self.connstr.as_str(), NoTls)?; let mut client = Client::connect(self.connstr.as_str(), NoTls)?;
self.pg_reload_conf(&mut client)?;
// Proceed with post-startup configuration. Note, that order of operations is important. // Proceed with post-startup configuration. Note, that order of operations is important.
// Disable DDL forwarding because control plane already knows about these roles/databases. // Disable DDL forwarding because control plane already knows about these roles/databases.

View File

@@ -78,7 +78,7 @@ use regex::Regex;
use remote_storage::*; use remote_storage::*;
use serde_json; use serde_json;
use std::io::Read; use std::io::Read;
use std::num::{NonZeroU32, NonZeroUsize}; use std::num::NonZeroUsize;
use std::path::Path; use std::path::Path;
use std::str; use std::str;
use tar::Archive; use tar::Archive;
@@ -281,8 +281,6 @@ pub fn init_remote_storage(remote_ext_config: &str) -> anyhow::Result<GenericRem
max_keys_per_list_response: None, max_keys_per_list_response: None,
}; };
let config = RemoteStorageConfig { let config = RemoteStorageConfig {
max_concurrent_syncs: NonZeroUsize::new(100).expect("100 != 0"),
max_sync_errors: NonZeroU32::new(100).expect("100 != 0"),
storage: RemoteStorageKind::AwsS3(config), storage: RemoteStorageKind::AwsS3(config),
}; };
GenericRemoteStorage::from_config(&config) GenericRemoteStorage::from_config(&config)

View File

@@ -1,7 +1,7 @@
//!
//! Various tools and helpers to handle cluster / compute node (Postgres) //! Various tools and helpers to handle cluster / compute node (Postgres)
//! configuration. //! configuration.
//! #![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
pub mod checker; pub mod checker;
pub mod config; pub mod config;
pub mod configurator; pub mod configurator;

View File

@@ -68,7 +68,7 @@ pub fn get_spec_from_control_plane(
base_uri: &str, base_uri: &str,
compute_id: &str, compute_id: &str,
) -> Result<Option<ComputeSpec>> { ) -> Result<Option<ComputeSpec>> {
let cp_uri = format!("{base_uri}/management/api/v2/computes/{compute_id}/spec"); let cp_uri = format!("{base_uri}/compute/api/v2/computes/{compute_id}/spec");
let jwt: String = match std::env::var("NEON_CONTROL_PLANE_TOKEN") { let jwt: String = match std::env::var("NEON_CONTROL_PLANE_TOKEN") {
Ok(v) => v, Ok(v) => v,
Err(_) => "".to_string(), Err(_) => "".to_string(),

View File

@@ -2,7 +2,6 @@ use crate::{background_process, local_env::LocalEnv};
use anyhow::anyhow; use anyhow::anyhow;
use camino::Utf8PathBuf; use camino::Utf8PathBuf;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use std::{path::PathBuf, process::Child}; use std::{path::PathBuf, process::Child};
use utils::id::{NodeId, TenantId}; use utils::id::{NodeId, TenantId};
@@ -14,10 +13,8 @@ pub struct AttachmentService {
const COMMAND: &str = "attachment_service"; const COMMAND: &str = "attachment_service";
#[serde_as]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
pub struct AttachHookRequest { pub struct AttachHookRequest {
#[serde_as(as = "DisplayFromStr")]
pub tenant_id: TenantId, pub tenant_id: TenantId,
pub node_id: Option<NodeId>, pub node_id: Option<NodeId>,
} }

View File

@@ -262,7 +262,7 @@ where
P: Into<Utf8PathBuf>, P: Into<Utf8PathBuf>,
{ {
let path: Utf8PathBuf = path.into(); let path: Utf8PathBuf = path.into();
// SAFETY // SAFETY:
// pre_exec is marked unsafe because it runs between fork and exec. // pre_exec is marked unsafe because it runs between fork and exec.
// Why is that dangerous in various ways? // Why is that dangerous in various ways?
// Long answer: https://github.com/rust-lang/rust/issues/39575 // Long answer: https://github.com/rust-lang/rust/issues/39575

View File

@@ -46,7 +46,6 @@ use std::time::Duration;
use anyhow::{anyhow, bail, Context, Result}; use anyhow::{anyhow, bail, Context, Result};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use utils::id::{NodeId, TenantId, TimelineId}; use utils::id::{NodeId, TenantId, TimelineId};
use crate::local_env::LocalEnv; use crate::local_env::LocalEnv;
@@ -57,13 +56,10 @@ use compute_api::responses::{ComputeState, ComputeStatus};
use compute_api::spec::{Cluster, ComputeMode, ComputeSpec}; use compute_api::spec::{Cluster, ComputeMode, ComputeSpec};
// contents of a endpoint.json file // contents of a endpoint.json file
#[serde_as]
#[derive(Serialize, Deserialize, PartialEq, Eq, Clone, Debug)] #[derive(Serialize, Deserialize, PartialEq, Eq, Clone, Debug)]
pub struct EndpointConf { pub struct EndpointConf {
endpoint_id: String, endpoint_id: String,
#[serde_as(as = "DisplayFromStr")]
tenant_id: TenantId, tenant_id: TenantId,
#[serde_as(as = "DisplayFromStr")]
timeline_id: TimelineId, timeline_id: TimelineId,
mode: ComputeMode, mode: ComputeMode,
pg_port: u16, pg_port: u16,

View File

@@ -1,11 +1,10 @@
// //! Local control plane.
// Local control plane. //!
// //! Can start, configure and stop postgres instances running as a local processes.
// Can start, configure and stop postgres instances running as a local processes. //!
// //! Intended to be used in integration tests and in CLI tools for
// Intended to be used in integration tests and in CLI tools for //! local installations.
// local installations. #![deny(clippy::undocumented_unsafe_blocks)]
//
pub mod attachment_service; pub mod attachment_service;
mod background_process; mod background_process;

View File

@@ -8,7 +8,6 @@ use anyhow::{bail, ensure, Context};
use postgres_backend::AuthType; use postgres_backend::AuthType;
use reqwest::Url; use reqwest::Url;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use std::collections::HashMap; use std::collections::HashMap;
use std::env; use std::env;
use std::fs; use std::fs;
@@ -33,7 +32,6 @@ pub const DEFAULT_PG_VERSION: u32 = 15;
// to 'neon_local init --config=<path>' option. See control_plane/simple.conf for // to 'neon_local init --config=<path>' option. See control_plane/simple.conf for
// an example. // an example.
// //
#[serde_as]
#[derive(Serialize, Deserialize, PartialEq, Eq, Clone, Debug)] #[derive(Serialize, Deserialize, PartialEq, Eq, Clone, Debug)]
pub struct LocalEnv { pub struct LocalEnv {
// Base directory for all the nodes (the pageserver, safekeepers and // Base directory for all the nodes (the pageserver, safekeepers and
@@ -59,7 +57,6 @@ pub struct LocalEnv {
// Default tenant ID to use with the 'neon_local' command line utility, when // Default tenant ID to use with the 'neon_local' command line utility, when
// --tenant_id is not explicitly specified. // --tenant_id is not explicitly specified.
#[serde(default)] #[serde(default)]
#[serde_as(as = "Option<DisplayFromStr>")]
pub default_tenant_id: Option<TenantId>, pub default_tenant_id: Option<TenantId>,
// used to issue tokens during e.g pg start // used to issue tokens during e.g pg start
@@ -84,7 +81,6 @@ pub struct LocalEnv {
// A `HashMap<String, HashMap<TenantId, TimelineId>>` would be more appropriate here, // A `HashMap<String, HashMap<TenantId, TimelineId>>` would be more appropriate here,
// but deserialization into a generic toml object as `toml::Value::try_from` fails with an error. // but deserialization into a generic toml object as `toml::Value::try_from` fails with an error.
// https://toml.io/en/v1.0.0 does not contain a concept of "a table inside another table". // https://toml.io/en/v1.0.0 does not contain a concept of "a table inside another table".
#[serde_as(as = "HashMap<_, Vec<(DisplayFromStr, DisplayFromStr)>>")]
branch_name_mappings: HashMap<String, Vec<(TenantId, TimelineId)>>, branch_name_mappings: HashMap<String, Vec<(TenantId, TimelineId)>>,
} }

View File

@@ -1,3 +1,5 @@
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
pub mod requests; pub mod requests;
pub mod responses; pub mod responses;
pub mod spec; pub mod spec;

View File

@@ -6,7 +6,6 @@
use std::collections::HashMap; use std::collections::HashMap;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use utils::id::{TenantId, TimelineId}; use utils::id::{TenantId, TimelineId};
use utils::lsn::Lsn; use utils::lsn::Lsn;
@@ -19,7 +18,6 @@ pub type PgIdent = String;
/// Cluster spec or configuration represented as an optional number of /// Cluster spec or configuration represented as an optional number of
/// delta operations + final cluster state description. /// delta operations + final cluster state description.
#[serde_as]
#[derive(Clone, Debug, Default, Deserialize, Serialize)] #[derive(Clone, Debug, Default, Deserialize, Serialize)]
pub struct ComputeSpec { pub struct ComputeSpec {
pub format_version: f32, pub format_version: f32,
@@ -50,12 +48,12 @@ pub struct ComputeSpec {
// these, and instead set the "neon.tenant_id", "neon.timeline_id", // these, and instead set the "neon.tenant_id", "neon.timeline_id",
// etc. GUCs in cluster.settings. TODO: Once the control plane has been // etc. GUCs in cluster.settings. TODO: Once the control plane has been
// updated to fill these fields, we can make these non optional. // updated to fill these fields, we can make these non optional.
#[serde_as(as = "Option<DisplayFromStr>")]
pub tenant_id: Option<TenantId>, pub tenant_id: Option<TenantId>,
#[serde_as(as = "Option<DisplayFromStr>")]
pub timeline_id: Option<TimelineId>, pub timeline_id: Option<TimelineId>,
#[serde_as(as = "Option<DisplayFromStr>")]
pub pageserver_connstring: Option<String>, pub pageserver_connstring: Option<String>,
#[serde(default)] #[serde(default)]
pub safekeeper_connstrings: Vec<String>, pub safekeeper_connstrings: Vec<String>,
@@ -140,14 +138,13 @@ impl RemoteExtSpec {
} }
} }
#[serde_as]
#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Deserialize, Serialize)] #[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Deserialize, Serialize)]
pub enum ComputeMode { pub enum ComputeMode {
/// A read-write node /// A read-write node
#[default] #[default]
Primary, Primary,
/// A read-only node, pinned at a particular LSN /// A read-only node, pinned at a particular LSN
Static(#[serde_as(as = "DisplayFromStr")] Lsn), Static(Lsn),
/// A read-only node that follows the tip of the branch in hot standby mode /// A read-only node that follows the tip of the branch in hot standby mode
/// ///
/// Future versions may want to distinguish between replicas with hot standby /// Future versions may want to distinguish between replicas with hot standby

View File

@@ -1,6 +1,6 @@
//!
//! Shared code for consumption metics collection //! Shared code for consumption metics collection
//! #![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use rand::Rng; use rand::Rng;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};

View File

@@ -2,6 +2,7 @@
//! make sure that we use the same dep version everywhere. //! make sure that we use the same dep version everywhere.
//! Otherwise, we might not see all metrics registered via //! Otherwise, we might not see all metrics registered via
//! a default registry. //! a default registry.
#![deny(clippy::undocumented_unsafe_blocks)]
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use prometheus::core::{AtomicU64, Collector, GenericGauge, GenericGaugeVec}; use prometheus::core::{AtomicU64, Collector, GenericGauge, GenericGaugeVec};
pub use prometheus::opts; pub use prometheus::opts;

View File

@@ -4,7 +4,6 @@
//! See docs/rfcs/025-generation-numbers.md //! See docs/rfcs/025-generation-numbers.md
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use utils::id::{NodeId, TenantId}; use utils::id::{NodeId, TenantId};
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
@@ -12,10 +11,8 @@ pub struct ReAttachRequest {
pub node_id: NodeId, pub node_id: NodeId,
} }
#[serde_as]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
pub struct ReAttachResponseTenant { pub struct ReAttachResponseTenant {
#[serde_as(as = "DisplayFromStr")]
pub id: TenantId, pub id: TenantId,
pub gen: u32, pub gen: u32,
} }
@@ -25,10 +22,8 @@ pub struct ReAttachResponse {
pub tenants: Vec<ReAttachResponseTenant>, pub tenants: Vec<ReAttachResponseTenant>,
} }
#[serde_as]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
pub struct ValidateRequestTenant { pub struct ValidateRequestTenant {
#[serde_as(as = "DisplayFromStr")]
pub id: TenantId, pub id: TenantId,
pub gen: u32, pub gen: u32,
} }
@@ -43,10 +38,8 @@ pub struct ValidateResponse {
pub tenants: Vec<ValidateResponseTenant>, pub tenants: Vec<ValidateResponseTenant>,
} }
#[serde_as]
#[derive(Serialize, Deserialize)] #[derive(Serialize, Deserialize)]
pub struct ValidateResponseTenant { pub struct ValidateResponseTenant {
#[serde_as(as = "DisplayFromStr")]
pub id: TenantId, pub id: TenantId,
pub valid: bool, pub valid: bool,
} }

View File

@@ -1,3 +1,5 @@
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
use const_format::formatcp; use const_format::formatcp;
/// Public API types /// Public API types

View File

@@ -6,7 +6,7 @@ use std::{
use byteorder::{BigEndian, ReadBytesExt};
use serde::{Deserialize, Serialize};
-use serde_with::{serde_as, DisplayFromStr};
+use serde_with::serde_as;
use strum_macros;
use utils::{
completion,
@@ -174,25 +174,19 @@ pub enum TimelineState {
Broken { reason: String, backtrace: String },
}
#[serde_as]
#[derive(Serialize, Deserialize)]
pub struct TimelineCreateRequest {
#[serde_as(as = "DisplayFromStr")]
pub new_timeline_id: TimelineId,
#[serde(default)]
#[serde_as(as = "Option<DisplayFromStr>")]
pub ancestor_timeline_id: Option<TimelineId>,
#[serde(default)]
#[serde_as(as = "Option<DisplayFromStr>")]
pub ancestor_start_lsn: Option<Lsn>,
pub pg_version: Option<u32>,
}
#[serde_as]
#[derive(Serialize, Deserialize, Debug)]
#[serde(deny_unknown_fields)]
pub struct TenantCreateRequest {
#[serde_as(as = "DisplayFromStr")]
pub new_tenant_id: TenantId,
#[serde(default)]
#[serde(skip_serializing_if = "Option::is_none")]
@@ -201,7 +195,6 @@ pub struct TenantCreateRequest {
pub config: TenantConfig, // as we have a flattened field, we should reject all unknown fields in it
}
#[serde_as]
#[derive(Deserialize, Debug)]
#[serde(deny_unknown_fields)]
pub struct TenantLoadRequest {
@@ -278,31 +271,26 @@ pub struct LocationConfig {
pub tenant_conf: TenantConfig,
}
#[serde_as]
#[derive(Serialize, Deserialize)]
#[serde(transparent)]
-pub struct TenantCreateResponse(#[serde_as(as = "DisplayFromStr")] pub TenantId);
+pub struct TenantCreateResponse(pub TenantId);
#[derive(Serialize)]
pub struct StatusResponse {
pub id: NodeId,
}
#[serde_as]
#[derive(Serialize, Deserialize, Debug)]
#[serde(deny_unknown_fields)]
pub struct TenantLocationConfigRequest {
#[serde_as(as = "DisplayFromStr")]
pub tenant_id: TenantId,
#[serde(flatten)]
pub config: LocationConfig, // as we have a flattened field, we should reject all unknown fields in it
}
#[serde_as]
#[derive(Serialize, Deserialize, Debug)]
#[serde(deny_unknown_fields)]
pub struct TenantConfigRequest {
#[serde_as(as = "DisplayFromStr")]
pub tenant_id: TenantId,
#[serde(flatten)]
pub config: TenantConfig, // as we have a flattened field, we should reject all unknown fields in it
@@ -374,10 +362,8 @@ pub enum TenantAttachmentStatus {
Failed { reason: String },
}
#[serde_as]
#[derive(Serialize, Deserialize, Clone)]
pub struct TenantInfo {
#[serde_as(as = "DisplayFromStr")]
pub id: TenantId,
// NB: intentionally not part of OpenAPI, we don't want to commit to a specific set of TenantState's
pub state: TenantState,
@@ -388,33 +374,22 @@ pub struct TenantInfo {
}
/// This represents the output of the "timeline_detail" and "timeline_list" API calls.
#[serde_as]
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TimelineInfo {
#[serde_as(as = "DisplayFromStr")]
pub tenant_id: TenantId,
#[serde_as(as = "DisplayFromStr")]
pub timeline_id: TimelineId,
#[serde_as(as = "Option<DisplayFromStr>")]
pub ancestor_timeline_id: Option<TimelineId>,
#[serde_as(as = "Option<DisplayFromStr>")]
pub ancestor_lsn: Option<Lsn>,
#[serde_as(as = "DisplayFromStr")]
pub last_record_lsn: Lsn,
#[serde_as(as = "Option<DisplayFromStr>")]
pub prev_record_lsn: Option<Lsn>,
#[serde_as(as = "DisplayFromStr")]
pub latest_gc_cutoff_lsn: Lsn,
#[serde_as(as = "DisplayFromStr")]
pub disk_consistent_lsn: Lsn,
/// The LSN that we have succesfully uploaded to remote storage
#[serde_as(as = "DisplayFromStr")]
pub remote_consistent_lsn: Lsn,
/// The LSN that we are advertizing to safekeepers
#[serde_as(as = "DisplayFromStr")]
pub remote_consistent_lsn_visible: Lsn,
pub current_logical_size: Option<u64>, // is None when timeline is Unloaded
@@ -426,7 +401,6 @@ pub struct TimelineInfo {
pub timeline_dir_layer_file_size_sum: Option<u64>,
pub wal_source_connstr: Option<String>,
#[serde_as(as = "Option<DisplayFromStr>")]
pub last_received_msg_lsn: Option<Lsn>,
/// the timestamp (in microseconds) of the last received message
pub last_received_msg_ts: Option<u128>,
@@ -523,23 +497,13 @@ pub struct LayerAccessStats {
pub residence_events_history: HistoryBufferWithDropCounter<LayerResidenceEvent, 16>,
}
#[serde_as]
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "kind")]
pub enum InMemoryLayerInfo {
-Open {
-#[serde_as(as = "DisplayFromStr")]
-lsn_start: Lsn,
-},
-Frozen {
-#[serde_as(as = "DisplayFromStr")]
-lsn_start: Lsn,
-#[serde_as(as = "DisplayFromStr")]
-lsn_end: Lsn,
-},
+Open { lsn_start: Lsn },
+Frozen { lsn_start: Lsn, lsn_end: Lsn },
}
#[serde_as]
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "kind")]
pub enum HistoricLayerInfo {
@@ -547,9 +511,7 @@ pub enum HistoricLayerInfo {
layer_file_name: String,
layer_file_size: u64,
#[serde_as(as = "DisplayFromStr")]
lsn_start: Lsn,
#[serde_as(as = "DisplayFromStr")]
lsn_end: Lsn,
remote: bool,
access_stats: LayerAccessStats,
@@ -558,7 +520,6 @@
layer_file_name: String,
layer_file_size: u64,
#[serde_as(as = "DisplayFromStr")]
lsn_start: Lsn,
remote: bool,
access_stats: LayerAccessStats,
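The wire format of these types is intended to stay the same: with the format-aware impls on `TenantId`, `TimelineId` and `Lsn` below, a `TimelineInfo` still serializes over the HTTP API with hex-string IDs and `hi/lo` LSN strings. An abridged, illustrative response (field values invented):

{
  "tenant_id": "3d1f7595b468230304e0b73cecbcb081",
  "timeline_id": "ad50847381e248feaac9876cc71ae418",
  "ancestor_timeline_id": null,
  "last_record_lsn": "0/16B5A50",
  "disk_consistent_lsn": "0/16B5A50",
  "current_logical_size": 4096
}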


@@ -2,6 +2,8 @@
//! To use, create PostgresBackend and run() it, passing the Handler
//! implementation determining how to process the queries. Currently its API
//! is rather narrow, but we can extend it once required.
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
use anyhow::Context;
use bytes::Bytes;
use futures::pin_mut;
@@ -15,7 +17,7 @@ use std::{fmt, io};
use std::{future::Future, str::FromStr};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_rustls::TlsAcceptor;
-use tracing::{debug, error, info, trace};
+use tracing::{debug, error, info, trace, warn};
use pq_proto::framed::{ConnectionError, Framed, FramedReader, FramedWriter};
use pq_proto::{
@@ -33,6 +35,11 @@ pub enum QueryError {
/// We were instructed to shutdown while processing the query
#[error("Shutting down")]
Shutdown,
/// Authentication failure
#[error("Unauthorized: {0}")]
Unauthorized(std::borrow::Cow<'static, str>),
#[error("Simulated Connection Error")]
SimulatedConnectionError,
/// Some other error
#[error(transparent)]
Other(#[from] anyhow::Error),
@@ -47,8 +54,9 @@ impl From<io::Error> for QueryError {
impl QueryError {
pub fn pg_error_code(&self) -> &'static [u8; 5] {
match self {
-Self::Disconnected(_) => b"08006", // connection failure
+Self::Disconnected(_) | Self::SimulatedConnectionError => b"08006", // connection failure
Self::Shutdown => SQLSTATE_ADMIN_SHUTDOWN,
Self::Unauthorized(_) => SQLSTATE_INTERNAL_ERROR,
Self::Other(_) => SQLSTATE_INTERNAL_ERROR, // internal error
}
}
@@ -608,7 +616,7 @@ impl<IO: AsyncRead + AsyncWrite + Unpin> PostgresBackend<IO> {
if let Err(e) = handler.check_auth_jwt(self, jwt_response) {
self.write_message_noflush(&BeMessage::ErrorResponse(
-&e.to_string(),
+&short_error(&e),
Some(e.pg_error_code()),
))?;
return Err(e);
@@ -728,12 +736,20 @@ impl<IO: AsyncRead + AsyncWrite + Unpin> PostgresBackend<IO> {
trace!("got query {query_string:?}");
if let Err(e) = handler.process_query(self, query_string).await {
-log_query_error(query_string, &e);
-let short_error = short_error(&e);
-self.write_message_noflush(&BeMessage::ErrorResponse(
-&short_error,
-Some(e.pg_error_code()),
-))?;
+match e {
+QueryError::Shutdown => return Ok(ProcessMsgResult::Break),
+QueryError::SimulatedConnectionError => {
+return Err(QueryError::SimulatedConnectionError)
+}
+e => {
+log_query_error(query_string, &e);
+let short_error = short_error(&e);
+self.write_message_noflush(&BeMessage::ErrorResponse(
+&short_error,
+Some(e.pg_error_code()),
+))?;
+}
+}
}
self.write_message_noflush(&BeMessage::ReadyForQuery)?;
}
@@ -959,6 +975,8 @@ pub fn short_error(e: &QueryError) -> String {
match e {
QueryError::Disconnected(connection_error) => connection_error.to_string(),
QueryError::Shutdown => "shutdown".to_string(),
QueryError::Unauthorized(_e) => "JWT authentication error".to_string(),
QueryError::SimulatedConnectionError => "simulated connection error".to_string(),
QueryError::Other(e) => format!("{e:#}"),
}
}
@@ -975,9 +993,15 @@ fn log_query_error(query: &str, e: &QueryError) {
QueryError::Disconnected(other_connection_error) => {
error!("query handler for '{query}' failed with connection error: {other_connection_error:?}")
}
QueryError::SimulatedConnectionError => {
error!("query handler for query '{query}' failed due to a simulated connection error")
}
QueryError::Shutdown => {
info!("query handler for '{query}' cancelled during tenant shutdown")
}
QueryError::Unauthorized(e) => {
warn!("query handler for '{query}' failed with authentication error: {e}");
}
QueryError::Other(e) => {
error!("query handler for '{query}' failed: {e:?}");
}


@@ -1,3 +1,5 @@
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
use anyhow::{bail, Context};
use itertools::Itertools;
use std::borrow::Cow;


@@ -8,6 +8,7 @@
// modules included with the postgres_ffi macro depend on the types of the specific version's
// types, and trigger a too eager lint.
#![allow(clippy::duplicate_mod)]
#![deny(clippy::undocumented_unsafe_blocks)]
use bytes::Bytes;
use utils::bin_ser::SerializeError;
@@ -20,6 +21,7 @@ macro_rules! postgres_ffi {
pub mod bindings {
// bindgen generates bindings for a lot of stuff we don't need
#![allow(dead_code)]
#![allow(clippy::undocumented_unsafe_blocks)]
use serde::{Deserialize, Serialize};
include!(concat!(


@@ -1,6 +1,7 @@
//! Postgres protocol messages serialization-deserialization. See
//! <https://www.postgresql.org/docs/devel/protocol-message-formats.html>
//! on message formats.
#![deny(clippy::undocumented_unsafe_blocks)]
pub mod framed;


@@ -8,6 +8,7 @@ license.workspace = true
anyhow.workspace = true
async-trait.workspace = true
once_cell.workspace = true
aws-smithy-async.workspace = true
aws-smithy-http.workspace = true
aws-types.workspace = true
aws-config.workspace = true


@@ -1,21 +1,18 @@
//! Azure Blob Storage wrapper
use std::collections::HashMap;
use std::env;
use std::num::NonZeroU32;
use std::sync::Arc;
-use std::{borrow::Cow, collections::HashMap, io::Cursor};
+use std::{borrow::Cow, io::Cursor};
use super::REMOTE_STORAGE_PREFIX_SEPARATOR;
use anyhow::Result;
use azure_core::request_options::{MaxResults, Metadata, Range};
use azure_core::Header;
use azure_identity::DefaultAzureCredential;
use azure_storage::StorageCredentials;
use azure_storage_blobs::prelude::ClientBuilder;
-use azure_storage_blobs::{
-blob::operations::GetBlobBuilder,
-prelude::{BlobClient, ContainerClient},
-};
+use azure_storage_blobs::{blob::operations::GetBlobBuilder, prelude::ContainerClient};
use futures_util::StreamExt;
use http_types::StatusCode;
use tokio::io::AsyncRead;
@@ -112,16 +109,19 @@ impl AzureBlobStorage {
async fn download_for_builder(
&self,
metadata: StorageMetadata,
builder: GetBlobBuilder,
) -> Result<Download, DownloadError> {
let mut response = builder.into_stream();
let mut metadata = HashMap::new();
// TODO give proper streaming response instead of buffering into RAM
// https://github.com/neondatabase/neon/issues/5563
let mut buf = Vec::new();
while let Some(part) = response.next().await {
let part = part.map_err(to_download_error)?;
if let Some(blob_meta) = part.blob.metadata {
metadata.extend(blob_meta.iter().map(|(k, v)| (k.to_owned(), v.to_owned())));
}
let data = part
.data
.collect()
@@ -131,28 +131,9 @@ impl AzureBlobStorage {
}
Ok(Download {
download_stream: Box::pin(Cursor::new(buf)),
-metadata: Some(metadata),
+metadata: Some(StorageMetadata(metadata)),
})
}
// TODO get rid of this function once we have metadata included in the response
// https://github.com/Azure/azure-sdk-for-rust/issues/1439
async fn get_metadata(
&self,
blob_client: &BlobClient,
) -> Result<StorageMetadata, DownloadError> {
let builder = blob_client.get_metadata();
let response = builder.into_future().await.map_err(to_download_error)?;
let mut map = HashMap::new();
for md in response.metadata.iter() {
map.insert(
md.name().as_str().to_string(),
md.value().as_str().to_string(),
);
}
Ok(StorageMetadata(map))
}
async fn permit(&self, kind: RequestKind) -> tokio::sync::SemaphorePermit<'_> {
self.concurrency_limiter
@@ -269,11 +250,9 @@ impl RemoteStorage for AzureBlobStorage {
let _permit = self.permit(RequestKind::Get).await;
let blob_client = self.client.blob_client(self.relative_path_to_name(from));
let metadata = self.get_metadata(&blob_client).await?;
let builder = blob_client.get();
-self.download_for_builder(metadata, builder).await
+self.download_for_builder(builder).await
}
async fn download_byte_range(
@@ -285,8 +264,6 @@ impl RemoteStorage for AzureBlobStorage {
let _permit = self.permit(RequestKind::Get).await;
let blob_client = self.client.blob_client(self.relative_path_to_name(from));
let metadata = self.get_metadata(&blob_client).await?;
let mut builder = blob_client.get();
if let Some(end_exclusive) = end_exclusive {
@@ -301,7 +278,7 @@ impl RemoteStorage for AzureBlobStorage {
builder = builder.range(Range::new(start_inclusive, end_exclusive));
}
-self.download_for_builder(metadata, builder).await
+self.download_for_builder(builder).await
}
async fn delete(&self, path: &RemotePath) -> anyhow::Result<()> {


@@ -6,19 +6,15 @@
//! * [`s3_bucket`] uses AWS S3 bucket as an external storage
//! * [`azure_blob`] allows to use Azure Blob storage as an external storage
//!
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
mod azure_blob;
mod local_fs;
mod s3_bucket;
mod simulate_failures;
-use std::{
-collections::HashMap,
-fmt::Debug,
-num::{NonZeroU32, NonZeroUsize},
-pin::Pin,
-sync::Arc,
-};
+use std::{collections::HashMap, fmt::Debug, num::NonZeroUsize, pin::Pin, sync::Arc};
use anyhow::{bail, Context};
use camino::{Utf8Path, Utf8PathBuf};
@@ -34,12 +30,6 @@ pub use self::{
};
use s3_bucket::RequestKind;
/// How many different timelines can be processed simultaneously when synchronizing layers with the remote storage.
/// During regular work, pageserver produces one layer file per timeline checkpoint, with bursts of concurrency
/// during start (where local and remote timelines are compared and initial sync tasks are scheduled) and timeline attach.
/// Both cases may trigger timeline download, that might download a lot of layers. This concurrency is limited by the clients internally, if needed.
pub const DEFAULT_REMOTE_STORAGE_MAX_CONCURRENT_SYNCS: usize = 50;
pub const DEFAULT_REMOTE_STORAGE_MAX_SYNC_ERRORS: u32 = 10;
/// Currently, sync happens with AWS S3, that has two limits on requests per second:
/// ~200 RPS for IAM services
/// <https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.html>
@@ -441,10 +431,6 @@ pub struct StorageMetadata(HashMap<String, String>);
/// External backup storage configuration, enough for creating a client for that storage.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct RemoteStorageConfig {
/// Max allowed number of concurrent sync operations between the API user and the remote storage.
pub max_concurrent_syncs: NonZeroUsize,
/// Max allowed errors before the sync task is considered failed and evicted.
pub max_sync_errors: NonZeroU32,
/// The storage connection configuration.
pub storage: RemoteStorageKind,
}
@@ -540,18 +526,6 @@ impl RemoteStorageConfig {
let use_azure = container_name.is_some() && container_region.is_some();
let max_concurrent_syncs = NonZeroUsize::new(
parse_optional_integer("max_concurrent_syncs", toml)?
.unwrap_or(DEFAULT_REMOTE_STORAGE_MAX_CONCURRENT_SYNCS),
)
.context("Failed to parse 'max_concurrent_syncs' as a positive integer")?;
let max_sync_errors = NonZeroU32::new(
parse_optional_integer("max_sync_errors", toml)?
.unwrap_or(DEFAULT_REMOTE_STORAGE_MAX_SYNC_ERRORS),
)
.context("Failed to parse 'max_sync_errors' as a positive integer")?;
let default_concurrency_limit = if use_azure {
DEFAULT_REMOTE_STORAGE_AZURE_CONCURRENCY_LIMIT
} else {
@@ -633,11 +607,7 @@ impl RemoteStorageConfig {
}
};
-Ok(Some(RemoteStorageConfig {
-max_concurrent_syncs,
-max_sync_errors,
-storage,
-}))
+Ok(Some(RemoteStorageConfig { storage }))
}
}
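Operationally, this means a remote storage TOML section no longer carries the two removed tuning knobs; concurrency is instead governed by the per-backend concurrency limits referenced above. A hedged sketch only; the remaining key names are illustrative, not taken from this diff:

[remote_storage]
# 'max_concurrent_syncs' and 'max_sync_errors' are gone
bucket_name = "my-layer-bucket"      # illustrative
bucket_region = "eu-central-1"       # illustrative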


@@ -4,23 +4,27 @@
//! allowing multiple api users to independently work with the same S3 bucket, if
//! their bucket prefixes are both specified and different.
-use std::borrow::Cow;
+use std::{borrow::Cow, sync::Arc};
use anyhow::Context;
use aws_config::{
environment::credentials::EnvironmentVariableCredentialsProvider,
-imds::credentials::ImdsCredentialsProvider, meta::credentials::CredentialsProviderChain,
-provider_config::ProviderConfig, web_identity_token::WebIdentityTokenCredentialsProvider,
+imds::credentials::ImdsCredentialsProvider,
+meta::credentials::CredentialsProviderChain,
+provider_config::ProviderConfig,
+retry::{RetryConfigBuilder, RetryMode},
+web_identity_token::WebIdentityTokenCredentialsProvider,
};
use aws_credential_types::cache::CredentialsCache;
use aws_sdk_s3::{
-config::{Config, Region},
+config::{AsyncSleep, Config, Region, SharedAsyncSleep},
error::SdkError,
operation::get_object::GetObjectError,
primitives::ByteStream,
types::{Delete, ObjectIdentifier},
Client,
};
use aws_smithy_async::rt::sleep::TokioSleep;
use aws_smithy_http::body::SdkBody;
use hyper::Body;
use scopeguard::ScopeGuard;
@@ -83,10 +87,23 @@ impl S3Bucket {
.or_else("imds", ImdsCredentialsProvider::builder().build())
};
// AWS SDK requires us to specify how the RetryConfig should sleep when it wants to back off
let sleep_impl: Arc<dyn AsyncSleep> = Arc::new(TokioSleep::new());
// We do our own retries (see [`backoff::retry`]). However, for the AWS SDK to enable rate limiting in response to throttling
// responses (e.g. 429 on too many ListObjectsv2 requests), we must provide a retry config. We set it to use at most one
// attempt, and enable 'Adaptive' mode, which causes rate limiting to be enabled.
let mut retry_config = RetryConfigBuilder::new();
retry_config
.set_max_attempts(Some(1))
.set_mode(Some(RetryMode::Adaptive));
let mut config_builder = Config::builder()
.region(region)
.credentials_cache(CredentialsCache::lazy())
-.credentials_provider(credentials_provider);
+.credentials_provider(credentials_provider)
+.sleep_impl(SharedAsyncSleep::from(sleep_impl))
+.retry_config(retry_config.build());
if let Some(custom_endpoint) = aws_config.endpoint.clone() {
config_builder = config_builder


@@ -1,6 +1,6 @@
use std::collections::HashSet;
use std::env;
-use std::num::{NonZeroU32, NonZeroUsize};
+use std::num::NonZeroUsize;
use std::ops::ControlFlow;
use std::path::PathBuf;
use std::sync::Arc;
@@ -469,8 +469,6 @@ fn create_azure_client(
let random = rand::thread_rng().gen::<u32>();
let remote_storage_config = RemoteStorageConfig {
max_concurrent_syncs: NonZeroUsize::new(100).unwrap(),
max_sync_errors: NonZeroU32::new(5).unwrap(),
storage: RemoteStorageKind::AzureContainer(AzureConfig {
container_name: remote_storage_azure_container,
container_region: remote_storage_azure_region,


@@ -1,6 +1,6 @@
use std::collections::HashSet;
use std::env;
-use std::num::{NonZeroU32, NonZeroUsize};
+use std::num::NonZeroUsize;
use std::ops::ControlFlow;
use std::path::PathBuf;
use std::sync::Arc;
@@ -396,8 +396,6 @@ fn create_s3_client(
let random = rand::thread_rng().gen::<u32>();
let remote_storage_config = RemoteStorageConfig {
max_concurrent_syncs: NonZeroUsize::new(100).unwrap(),
max_sync_errors: NonZeroU32::new(5).unwrap(),
storage: RemoteStorageKind::AwsS3(S3Config {
bucket_name: remote_storage_s3_bucket,
bucket_region: remote_storage_s3_region,


@@ -1,3 +1,5 @@
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
use const_format::formatcp;
/// Public API types


@@ -1,23 +1,18 @@
use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use utils::{
id::{NodeId, TenantId, TimelineId},
lsn::Lsn,
};
#[serde_as]
#[derive(Serialize, Deserialize)]
pub struct TimelineCreateRequest {
#[serde_as(as = "DisplayFromStr")]
pub tenant_id: TenantId,
#[serde_as(as = "DisplayFromStr")]
pub timeline_id: TimelineId,
pub peer_ids: Option<Vec<NodeId>>,
pub pg_version: u32,
pub system_id: Option<u64>,
pub wal_seg_size: Option<u32>,
#[serde_as(as = "DisplayFromStr")]
pub commit_lsn: Lsn,
// If not passed, it is assigned to the beginning of commit_lsn segment.
pub local_start_lsn: Option<Lsn>,
@@ -28,7 +23,6 @@ fn lsn_invalid() -> Lsn {
}
/// Data about safekeeper's timeline, mirrors broker.proto.
#[serde_as]
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct SkTimelineInfo {
/// Term.
@@ -36,25 +30,19 @@ pub struct SkTimelineInfo {
/// Term of the last entry.
pub last_log_term: Option<u64>,
/// LSN of the last record.
#[serde_as(as = "DisplayFromStr")]
#[serde(default = "lsn_invalid")]
pub flush_lsn: Lsn,
/// Up to which LSN safekeeper regards its WAL as committed.
#[serde_as(as = "DisplayFromStr")]
#[serde(default = "lsn_invalid")]
pub commit_lsn: Lsn,
/// LSN up to which safekeeper has backed WAL.
#[serde_as(as = "DisplayFromStr")]
#[serde(default = "lsn_invalid")]
pub backup_lsn: Lsn,
/// LSN of last checkpoint uploaded by pageserver.
#[serde_as(as = "DisplayFromStr")]
#[serde(default = "lsn_invalid")]
pub remote_consistent_lsn: Lsn,
#[serde_as(as = "DisplayFromStr")]
#[serde(default = "lsn_invalid")]
pub peer_horizon_lsn: Lsn,
#[serde_as(as = "DisplayFromStr")]
#[serde(default = "lsn_invalid")]
pub local_start_lsn: Lsn,
/// A connection string to use for WAL receiving.


@@ -1,4 +1,6 @@
//! Synthetic size calculation
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
mod calculation;
pub mod svg;


@@ -32,6 +32,8 @@
//! .init();
//! }
//! ```
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
use opentelemetry::sdk::Resource;
use opentelemetry::KeyValue;


@@ -5,6 +5,7 @@ edition.workspace = true
license.workspace = true
[dependencies]
arc-swap.workspace = true
sentry.workspace = true
async-trait.workspace = true
anyhow.workspace = true
@@ -55,6 +56,7 @@ bytes.workspace = true
criterion.workspace = true
hex-literal.workspace = true
camino-tempfile.workspace = true
serde_assert.workspace = true
[[bench]]
name = "benchmarks"


@@ -1,7 +1,8 @@
// For details about authentication see docs/authentication.md
use arc_swap::ArcSwap;
use serde;
-use std::fs;
+use std::{borrow::Cow, fmt::Display, fs, sync::Arc};
use anyhow::Result;
use camino::Utf8Path;
@@ -9,9 +10,8 @@ use jsonwebtoken::{
decode, encode, Algorithm, DecodingKey, EncodingKey, Header, TokenData, Validation,
};
use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
-use crate::id::TenantId;
+use crate::{http::error::ApiError, id::TenantId};
/// Algorithm to use. We require EdDSA.
const STORAGE_TOKEN_ALGORITHM: Algorithm = Algorithm::EdDSA;
@@ -32,11 +32,9 @@ pub enum Scope {
}
/// JWT payload. See docs/authentication.md for the format
#[serde_as]
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct Claims {
#[serde(default)]
#[serde_as(as = "Option<DisplayFromStr>")]
pub tenant_id: Option<TenantId>,
pub scope: Scope,
}
@@ -47,31 +45,106 @@ impl Claims {
}
}
pub struct SwappableJwtAuth(ArcSwap<JwtAuth>);
impl SwappableJwtAuth {
pub fn new(jwt_auth: JwtAuth) -> Self {
SwappableJwtAuth(ArcSwap::new(Arc::new(jwt_auth)))
}
pub fn swap(&self, jwt_auth: JwtAuth) {
self.0.swap(Arc::new(jwt_auth));
}
pub fn decode(&self, token: &str) -> std::result::Result<TokenData<Claims>, AuthError> {
self.0.load().decode(token)
}
}
impl std::fmt::Debug for SwappableJwtAuth {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "Swappable({:?})", self.0.load())
}
}
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
pub struct AuthError(pub Cow<'static, str>);
impl Display for AuthError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.0)
}
}
impl From<AuthError> for ApiError {
fn from(_value: AuthError) -> Self {
// Don't pass on the value of the AuthError as a precautionary measure.
// Being intentionally vague in public error communication hurts debugability
// but it is more secure.
ApiError::Forbidden("JWT authentication error".to_string())
}
}
pub struct JwtAuth {
-decoding_key: DecodingKey,
+decoding_keys: Vec<DecodingKey>,
validation: Validation,
}
impl JwtAuth {
-pub fn new(decoding_key: DecodingKey) -> Self {
+pub fn new(decoding_keys: Vec<DecodingKey>) -> Self {
let mut validation = Validation::default();
validation.algorithms = vec![STORAGE_TOKEN_ALGORITHM];
// The default 'required_spec_claims' is 'exp'. But we don't want to require
// expiration.
validation.required_spec_claims = [].into();
Self {
-decoding_key,
+decoding_keys,
validation,
}
}
pub fn from_key_path(key_path: &Utf8Path) -> Result<Self> {
-let public_key = fs::read(key_path)?;
-Ok(Self::new(DecodingKey::from_ed_pem(&public_key)?))
+let metadata = key_path.metadata()?;
+let decoding_keys = if metadata.is_dir() {
let mut keys = Vec::new();
for entry in fs::read_dir(key_path)? {
let path = entry?.path();
if !path.is_file() {
// Ignore directories (don't recurse)
continue;
}
let public_key = fs::read(path)?;
keys.push(DecodingKey::from_ed_pem(&public_key)?);
}
keys
} else if metadata.is_file() {
let public_key = fs::read(key_path)?;
vec![DecodingKey::from_ed_pem(&public_key)?]
} else {
anyhow::bail!("path is neither a directory or a file")
};
if decoding_keys.is_empty() {
anyhow::bail!("Configured for JWT auth with zero decoding keys. All JWT gated requests would be rejected.");
}
Ok(Self::new(decoding_keys))
}
-pub fn decode(&self, token: &str) -> Result<TokenData<Claims>> {
-Ok(decode(token, &self.decoding_key, &self.validation)?)
+/// Attempt to decode the token with the internal decoding keys.
+///
/// The function tries the stored decoding keys in succession,
/// and returns the first yielding a successful result.
/// If there is no working decoding key, it returns the last error.
pub fn decode(&self, token: &str) -> std::result::Result<TokenData<Claims>, AuthError> {
let mut res = None;
for decoding_key in &self.decoding_keys {
res = Some(decode(token, decoding_key, &self.validation));
if let Some(Ok(res)) = res {
return Ok(res);
}
}
if let Some(res) = res {
res.map_err(|e| AuthError(Cow::Owned(e.to_string())))
} else {
Err(AuthError(Cow::Borrowed("no JWT decoding keys configured")))
}
}
}
@@ -111,9 +184,9 @@ MC4CAQAwBQYDK2VwBCIEID/Drmc1AA6U/znNRWpF3zEGegOATQxfkdWxitcOMsIH
"#;
#[test]
-fn test_decode() -> Result<(), anyhow::Error> {
+fn test_decode() {
let expected_claims = Claims {
-tenant_id: Some(TenantId::from_str("3d1f7595b468230304e0b73cecbcb081")?),
+tenant_id: Some(TenantId::from_str("3d1f7595b468230304e0b73cecbcb081").unwrap()),
scope: Scope::Tenant,
};
@@ -132,28 +205,24 @@ MC4CAQAwBQYDK2VwBCIEID/Drmc1AA6U/znNRWpF3zEGegOATQxfkdWxitcOMsIH
let encoded_eddsa = "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJzY29wZSI6InRlbmFudCIsInRlbmFudF9pZCI6IjNkMWY3NTk1YjQ2ODIzMDMwNGUwYjczY2VjYmNiMDgxIiwiaXNzIjoibmVvbi5jb250cm9scGxhbmUiLCJleHAiOjE3MDkyMDA4NzksImlhdCI6MTY3ODQ0MjQ3OX0.U3eA8j-uU-JnhzeO3EDHRuXLwkAUFCPxtGHEgw6p7Ccc3YRbFs2tmCdbD9PZEXP-XsxSeBQi1FY0YPcT3NXADw";
// Check it can be validated with the public key
-let auth = JwtAuth::new(DecodingKey::from_ed_pem(TEST_PUB_KEY_ED25519)?);
-let claims_from_token = auth.decode(encoded_eddsa)?.claims;
+let auth = JwtAuth::new(vec![DecodingKey::from_ed_pem(TEST_PUB_KEY_ED25519).unwrap()]);
+let claims_from_token = auth.decode(encoded_eddsa).unwrap().claims;
assert_eq!(claims_from_token, expected_claims);
Ok(())
}
#[test]
-fn test_encode() -> Result<(), anyhow::Error> {
+fn test_encode() {
let claims = Claims {
-tenant_id: Some(TenantId::from_str("3d1f7595b468230304e0b73cecbcb081")?),
+tenant_id: Some(TenantId::from_str("3d1f7595b468230304e0b73cecbcb081").unwrap()),
scope: Scope::Tenant,
};
-let encoded = encode_from_key_file(&claims, TEST_PRIV_KEY_ED25519)?;
+let encoded = encode_from_key_file(&claims, TEST_PRIV_KEY_ED25519).unwrap();
// decode it back
-let auth = JwtAuth::new(DecodingKey::from_ed_pem(TEST_PUB_KEY_ED25519)?);
-let decoded = auth.decode(&encoded)?;
+let auth = JwtAuth::new(vec![DecodingKey::from_ed_pem(TEST_PUB_KEY_ED25519).unwrap()]);
+let decoded = auth.decode(&encoded).unwrap();
assert_eq!(decoded.claims, claims);
Ok(())
}
}
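A hedged usage sketch of the new multi-key API (the module path `utils::auth` and the surrounding service code are assumptions; the signatures come from the diff above): `from_key_path` accepts either a single PEM file or a directory with one public key per file, and `SwappableJwtAuth` lets a running service swap the key set without a restart.

use anyhow::Result;
use camino::Utf8Path;
use utils::auth::{JwtAuth, SwappableJwtAuth};

// Reload public keys from disk and swap them in atomically; requests that are
// already decoding keep the previously loaded key set.
fn reload_jwt_keys(auth: &SwappableJwtAuth, key_dir: &Utf8Path) -> Result<()> {
    // from_key_path accepts a single PEM file or a directory with one key per file.
    let fresh = JwtAuth::from_key_path(key_dir)?;
    auth.swap(fresh);
    Ok(())
}

fn is_authorized(auth: &SwappableJwtAuth, token: &str) -> bool {
    // decode() tries every configured key and returns AuthError if none matches.
    auth.decode(token).is_ok()
}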


@@ -7,7 +7,7 @@ use serde::{Deserialize, Serialize};
///
/// See docs/rfcs/025-generation-numbers.md for detail on how generation
/// numbers are used.
-#[derive(Copy, Clone, Eq, PartialEq, PartialOrd, Ord)]
+#[derive(Copy, Clone, Eq, PartialEq, PartialOrd, Ord, Hash)]
pub enum Generation {
// Generations with this magic value will not add a suffix to S3 keys, and will not
// be included in persisted index_part.json. This value is only to be used

libs/utils/src/hex.rs Normal file

@@ -0,0 +1,41 @@
/// Useful type for asserting that expected bytes match reporting the bytes more readable
/// array-syntax compatible hex bytes.
///
/// # Usage
///
/// ```
/// use utils::Hex;
///
/// let actual = serialize_something();
/// let expected = [0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64];
///
/// // the type implements PartialEq and on mismatch, both sides are printed in 16 wide multiline
/// // output suffixed with an array style length for easier comparisons.
/// assert_eq!(Hex(&actual), Hex(&expected));
///
/// // with `let expected = [0x68];` the error would had been:
/// // assertion `left == right` failed
/// // left: [0x68, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64; 11]
/// // right: [0x68; 1]
/// # fn serialize_something() -> Vec<u8> { "hello world".as_bytes().to_vec() }
/// ```
#[derive(PartialEq)]
pub struct Hex<'a>(pub &'a [u8]);
impl std::fmt::Debug for Hex<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "[")?;
for (i, c) in self.0.chunks(16).enumerate() {
if i > 0 && !c.is_empty() {
writeln!(f, ", ")?;
}
for (j, b) in c.iter().enumerate() {
if j > 0 {
write!(f, ", ")?;
}
write!(f, "0x{b:02x}")?;
}
}
write!(f, "; {}]", self.0.len())
}
}


@@ -1,4 +1,4 @@
-use crate::auth::{Claims, JwtAuth};
+use crate::auth::{AuthError, Claims, SwappableJwtAuth};
use crate::http::error::{api_error_handler, route_error_handler, ApiError};
use anyhow::Context;
use hyper::header::{HeaderName, AUTHORIZATION};
@@ -14,6 +14,11 @@ use tracing::{self, debug, info, info_span, warn, Instrument};
use std::future::Future;
use std::str::FromStr;
use bytes::{Bytes, BytesMut};
use std::io::Write as _;
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;
static SERVE_METRICS_COUNT: Lazy<IntCounter> = Lazy::new(|| {
register_int_counter!(
"libmetrics_metric_handler_requests_total",
@@ -146,94 +151,89 @@ impl Drop for RequestCancelled {
}
}
/// An [`std::io::Write`] implementation on top of a channel sending [`bytes::Bytes`] chunks.
pub struct ChannelWriter {
buffer: BytesMut,
pub tx: mpsc::Sender<std::io::Result<Bytes>>,
written: usize,
}
impl ChannelWriter {
pub fn new(buf_len: usize, tx: mpsc::Sender<std::io::Result<Bytes>>) -> Self {
assert_ne!(buf_len, 0);
ChannelWriter {
// split about half off the buffer from the start, because we flush depending on
// capacity. first flush will come sooner than without this, but now resizes will
// have better chance of picking up the "other" half. not guaranteed of course.
buffer: BytesMut::with_capacity(buf_len).split_off(buf_len / 2),
tx,
written: 0,
}
}
pub fn flush0(&mut self) -> std::io::Result<usize> {
let n = self.buffer.len();
if n == 0 {
return Ok(0);
}
tracing::trace!(n, "flushing");
let ready = self.buffer.split().freeze();
// not ideal to call from blocking code to block_on, but we are sure that this
// operation does not spawn_blocking other tasks
let res: Result<(), ()> = tokio::runtime::Handle::current().block_on(async {
self.tx.send(Ok(ready)).await.map_err(|_| ())?;
// throttle sending to allow reuse of our buffer in `write`.
self.tx.reserve().await.map_err(|_| ())?;
// now the response task has picked up the buffer and hopefully started
// sending it to the client.
Ok(())
});
if res.is_err() {
return Err(std::io::ErrorKind::BrokenPipe.into());
}
self.written += n;
Ok(n)
}
pub fn flushed_bytes(&self) -> usize {
self.written
}
}
impl std::io::Write for ChannelWriter {
fn write(&mut self, mut buf: &[u8]) -> std::io::Result<usize> {
let remaining = self.buffer.capacity() - self.buffer.len();
let out_of_space = remaining < buf.len();
let original_len = buf.len();
if out_of_space {
let can_still_fit = buf.len() - remaining;
self.buffer.extend_from_slice(&buf[..can_still_fit]);
buf = &buf[can_still_fit..];
self.flush0()?;
}
// assume that this will often under normal operation just move the pointer back to the
// beginning of allocation, because previous split off parts are already sent and
// dropped.
self.buffer.extend_from_slice(buf);
Ok(original_len)
}
fn flush(&mut self) -> std::io::Result<()> {
self.flush0().map(|_| ())
}
}
async fn prometheus_metrics_handler(_req: Request<Body>) -> Result<Response<Body>, ApiError> {
use bytes::{Bytes, BytesMut};
use std::io::Write as _;
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;
SERVE_METRICS_COUNT.inc();
/// An [`std::io::Write`] implementation on top of a channel sending [`bytes::Bytes`] chunks.
struct ChannelWriter {
buffer: BytesMut,
tx: mpsc::Sender<std::io::Result<Bytes>>,
written: usize,
}
impl ChannelWriter {
fn new(buf_len: usize, tx: mpsc::Sender<std::io::Result<Bytes>>) -> Self {
assert_ne!(buf_len, 0);
ChannelWriter {
// split about half off the buffer from the start, because we flush depending on
// capacity. first flush will come sooner than without this, but now resizes will
// have better chance of picking up the "other" half. not guaranteed of course.
buffer: BytesMut::with_capacity(buf_len).split_off(buf_len / 2),
tx,
written: 0,
}
}
fn flush0(&mut self) -> std::io::Result<usize> {
let n = self.buffer.len();
if n == 0 {
return Ok(0);
}
tracing::trace!(n, "flushing");
let ready = self.buffer.split().freeze();
// not ideal to call from blocking code to block_on, but we are sure that this
// operation does not spawn_blocking other tasks
let res: Result<(), ()> = tokio::runtime::Handle::current().block_on(async {
self.tx.send(Ok(ready)).await.map_err(|_| ())?;
// throttle sending to allow reuse of our buffer in `write`.
self.tx.reserve().await.map_err(|_| ())?;
// now the response task has picked up the buffer and hopefully started
// sending it to the client.
Ok(())
});
if res.is_err() {
return Err(std::io::ErrorKind::BrokenPipe.into());
}
self.written += n;
Ok(n)
}
fn flushed_bytes(&self) -> usize {
self.written
}
}
impl std::io::Write for ChannelWriter {
fn write(&mut self, mut buf: &[u8]) -> std::io::Result<usize> {
let remaining = self.buffer.capacity() - self.buffer.len();
let out_of_space = remaining < buf.len();
let original_len = buf.len();
if out_of_space {
let can_still_fit = buf.len() - remaining;
self.buffer.extend_from_slice(&buf[..can_still_fit]);
buf = &buf[can_still_fit..];
self.flush0()?;
}
// assume that this will often under normal operation just move the pointer back to the
// beginning of allocation, because previous split off parts are already sent and
// dropped.
self.buffer.extend_from_slice(buf);
Ok(original_len)
}
fn flush(&mut self) -> std::io::Result<()> {
self.flush0().map(|_| ())
}
}
let started_at = std::time::Instant::now();
let (tx, rx) = mpsc::channel(1);
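A hedged sketch of how the now-public `ChannelWriter` can be reused outside this handler (the import path and the response-building details are assumptions; only the `ChannelWriter` API comes from the code above):

use std::io::Write as _;

use bytes::Bytes;
use hyper::{Body, Response};
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;
use utils::http::endpoint::ChannelWriter; // import path is an assumption

async fn stream_big_report() -> Response<Body> {
    let (tx, rx) = mpsc::channel::<std::io::Result<Bytes>>(1);
    let body = Body::wrap_stream(ReceiverStream::new(rx));
    // flush0() calls Handle::block_on internally, so the writer must live on a
    // blocking thread, not directly on the async executor.
    tokio::task::spawn_blocking(move || {
        let mut w = ChannelWriter::new(64 * 1024, tx);
        for i in 0..10_000 {
            if writeln!(w, "row {i}").is_err() {
                return; // receiver dropped: the client went away
            }
        }
        let _ = w.flush();
    });
    Response::new(body)
}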
@@ -389,7 +389,7 @@ fn parse_token(header_value: &str) -> Result<&str, ApiError> {
}
pub fn auth_middleware<B: hyper::body::HttpBody + Send + Sync + 'static>(
-provide_auth: fn(&Request<Body>) -> Option<&JwtAuth>,
+provide_auth: fn(&Request<Body>) -> Option<&SwappableJwtAuth>,
) -> Middleware<B, ApiError> {
Middleware::pre(move |req| async move {
if let Some(auth) = provide_auth(&req) {
@@ -400,9 +400,11 @@ pub fn auth_middleware<B: hyper::body::HttpBody + Send + Sync + 'static>(
})?;
let token = parse_token(header_value)?;
-let data = auth
-.decode(token)
-.map_err(|_| ApiError::Unauthorized("malformed jwt token".to_string()))?;
+let data = auth.decode(token).map_err(|err| {
+warn!("Authentication error: {err}");
+// Rely on From<AuthError> for ApiError impl
+err
+})?;
req.set_context(data.claims);
}
None => {
@@ -450,12 +452,11 @@ where
pub fn check_permission_with(
req: &Request<Body>,
-check_permission: impl Fn(&Claims) -> Result<(), anyhow::Error>,
+check_permission: impl Fn(&Claims) -> Result<(), AuthError>,
) -> Result<(), ApiError> {
match req.context::<Claims>() {
-Some(claims) => {
-Ok(check_permission(&claims).map_err(|err| ApiError::Forbidden(err.to_string()))?)
-}
+Some(claims) => Ok(check_permission(&claims)
+.map_err(|_err| ApiError::Forbidden("JWT authentication error".to_string()))?),
None => Ok(()), // claims is None because auth is disabled
}
}
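A hedged sketch of a permission check that fits the new `check_permission_with` signature (the policy is invented for illustration; only `Claims`, `Scope` and `AuthError` come from this diff):

use std::borrow::Cow;

use utils::auth::{AuthError, Claims, Scope};
use utils::id::TenantId;

// Allow access only to tokens scoped to the tenant being requested.
fn check_tenant_scope(claims: &Claims, requested: TenantId) -> Result<(), AuthError> {
    match (&claims.scope, &claims.tenant_id) {
        (Scope::Tenant, Some(granted)) if *granted == requested => Ok(()),
        _ => Err(AuthError(Cow::Borrowed("tenant scope required"))),
    }
}

A caller would pass it as `check_permission_with(req, |claims| check_tenant_scope(claims, tenant_id))`; any error is reduced to the generic Forbidden("JWT authentication error") response shown above.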


@@ -3,7 +3,7 @@ use serde::{Deserialize, Serialize};
use std::borrow::Cow;
use std::error::Error as StdError;
use thiserror::Error;
-use tracing::{error, info};
+use tracing::{error, info, warn};
#[derive(Debug, Error)]
pub enum ApiError {
@@ -118,6 +118,9 @@ pub fn api_error_handler(api_error: ApiError) -> Response<Body> {
// Print a stack trace for Internal Server errors
match api_error {
ApiError::Forbidden(_) | ApiError::Unauthorized(_) => {
warn!("Error processing HTTP request: {api_error:#}")
}
ApiError::ResourceUnavailable(_) => info!("Error processing HTTP request: {api_error:#}"),
ApiError::NotFound(_) => info!("Error processing HTTP request: {api_error:#}"),
ApiError::InternalServerError(_) => error!("Error processing HTTP request: {api_error:?}"),


@@ -3,6 +3,7 @@ use std::{fmt, str::FromStr};
use anyhow::Context;
use hex::FromHex;
use rand::Rng;
use serde::de::Visitor;
use serde::{Deserialize, Serialize};
use thiserror::Error;
@@ -17,12 +18,74 @@ pub enum IdError {
///
/// NOTE: It (de)serializes as an array of hex bytes, so the string representation would look
/// like `[173,80,132,115,129,226,72,254,170,201,135,108,199,26,228,24]`.
-///
-/// Use `#[serde_as(as = "DisplayFromStr")]` to (de)serialize it as hex string instead: `ad50847381e248feaac9876cc71ae418`.
-/// Check the `serde_with::serde_as` documentation for options for more complex types.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize, PartialOrd, Ord)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]
struct Id([u8; 16]);
impl Serialize for Id {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
if serializer.is_human_readable() {
serializer.collect_str(self)
} else {
self.0.serialize(serializer)
}
}
}
impl<'de> Deserialize<'de> for Id {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
struct IdVisitor {
is_human_readable_deserializer: bool,
}
impl<'de> Visitor<'de> for IdVisitor {
type Value = Id;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
if self.is_human_readable_deserializer {
formatter.write_str("value in form of hex string")
} else {
formatter.write_str("value in form of integer array([u8; 16])")
}
}
fn visit_seq<A>(self, seq: A) -> Result<Self::Value, A::Error>
where
A: serde::de::SeqAccess<'de>,
{
let s = serde::de::value::SeqAccessDeserializer::new(seq);
let id: [u8; 16] = Deserialize::deserialize(s)?;
Ok(Id::from(id))
}
fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Id::from_str(v).map_err(E::custom)
}
}
if deserializer.is_human_readable() {
deserializer.deserialize_str(IdVisitor {
is_human_readable_deserializer: true,
})
} else {
deserializer.deserialize_tuple(
16,
IdVisitor {
is_human_readable_deserializer: false,
},
)
}
}
}
impl Id {
pub fn get_from_buf(buf: &mut impl bytes::Buf) -> Id {
let mut arr = [0u8; 16];
@@ -57,6 +120,8 @@ impl Id {
chunk[0] = HEX[((b >> 4) & 0xf) as usize];
chunk[1] = HEX[(b & 0xf) as usize];
}
// SAFETY: vec constructed out of `HEX`, it can only be ascii
unsafe { String::from_utf8_unchecked(buf) }
}
}
@@ -308,3 +373,112 @@ impl fmt::Display for NodeId {
write!(f, "{}", self.0)
}
}
#[cfg(test)]
mod tests {
use serde_assert::{Deserializer, Serializer, Token, Tokens};
use crate::bin_ser::BeSer;
use super::*;
#[test]
fn test_id_serde_non_human_readable() {
let original_id = Id([
173, 80, 132, 115, 129, 226, 72, 254, 170, 201, 135, 108, 199, 26, 228, 24,
]);
let expected_tokens = Tokens(vec![
Token::Tuple { len: 16 },
Token::U8(173),
Token::U8(80),
Token::U8(132),
Token::U8(115),
Token::U8(129),
Token::U8(226),
Token::U8(72),
Token::U8(254),
Token::U8(170),
Token::U8(201),
Token::U8(135),
Token::U8(108),
Token::U8(199),
Token::U8(26),
Token::U8(228),
Token::U8(24),
Token::TupleEnd,
]);
let serializer = Serializer::builder().is_human_readable(false).build();
let serialized_tokens = original_id.serialize(&serializer).unwrap();
assert_eq!(serialized_tokens, expected_tokens);
let mut deserializer = Deserializer::builder()
.is_human_readable(false)
.tokens(serialized_tokens)
.build();
let deserialized_id = Id::deserialize(&mut deserializer).unwrap();
assert_eq!(deserialized_id, original_id);
}
#[test]
fn test_id_serde_human_readable() {
let original_id = Id([
173, 80, 132, 115, 129, 226, 72, 254, 170, 201, 135, 108, 199, 26, 228, 24,
]);
let expected_tokens = Tokens(vec![Token::Str(String::from(
"ad50847381e248feaac9876cc71ae418",
))]);
let serializer = Serializer::builder().is_human_readable(true).build();
let serialized_tokens = original_id.serialize(&serializer).unwrap();
assert_eq!(serialized_tokens, expected_tokens);
let mut deserializer = Deserializer::builder()
.is_human_readable(true)
.tokens(Tokens(vec![Token::Str(String::from(
"ad50847381e248feaac9876cc71ae418",
))]))
.build();
assert_eq!(Id::deserialize(&mut deserializer).unwrap(), original_id);
}
macro_rules! roundtrip_type {
($type:ty, $expected_bytes:expr) => {{
let expected_bytes: [u8; 16] = $expected_bytes;
let original_id = <$type>::from(expected_bytes);
let ser_bytes = original_id.ser().unwrap();
assert_eq!(ser_bytes, expected_bytes);
let des_id = <$type>::des(&ser_bytes).unwrap();
assert_eq!(des_id, original_id);
}};
}
#[test]
fn test_id_bincode_serde() {
let expected_bytes = [
173, 80, 132, 115, 129, 226, 72, 254, 170, 201, 135, 108, 199, 26, 228, 24,
];
roundtrip_type!(Id, expected_bytes);
}
#[test]
fn test_tenant_id_bincode_serde() {
let expected_bytes = [
173, 80, 132, 115, 129, 226, 72, 254, 170, 201, 135, 108, 199, 26, 228, 24,
];
roundtrip_type!(TenantId, expected_bytes);
}
#[test]
fn test_timeline_id_bincode_serde() {
let expected_bytes = [
173, 80, 132, 115, 129, 226, 72, 254, 170, 201, 135, 108, 199, 26, 228, 24,
];
roundtrip_type!(TimelineId, expected_bytes);
}
}


@@ -1,5 +1,6 @@
//! `utils` is intended to be a place to put code that is shared
//! between other crates in this repository.
#![deny(clippy::undocumented_unsafe_blocks)]
pub mod backoff;
@@ -24,6 +25,10 @@ pub mod auth;
// utility functions and helper traits for unified unique id generation/serialization etc.
pub mod id;
mod hex;
pub use hex::Hex;
// http endpoint utils
pub mod http;
@@ -73,6 +78,9 @@ pub mod completion;
/// Reporting utilities
pub mod error;
/// async timeout helper
pub mod timeout;
pub mod sync;

/// This is a shortcut to embed git sha into binaries and avoid copying the same build script to all packages

@@ -1,7 +1,7 @@
#![warn(missing_docs)]
use camino::Utf8Path;
-use serde::{Deserialize, Serialize};
+use serde::{de::Visitor, Deserialize, Serialize};
use std::fmt;
use std::ops::{Add, AddAssign};
use std::str::FromStr;
@@ -13,10 +13,114 @@ use crate::seqwait::MonotonicCounter;
pub const XLOG_BLCKSZ: u32 = 8192;
/// A Postgres LSN (Log Sequence Number), also known as an XLogRecPtr
-#[derive(Clone, Copy, Eq, Ord, PartialEq, PartialOrd, Hash, Serialize, Deserialize)]
-#[serde(transparent)]
+#[derive(Clone, Copy, Eq, Ord, PartialEq, PartialOrd, Hash)]
pub struct Lsn(pub u64);
impl Serialize for Lsn {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
if serializer.is_human_readable() {
serializer.collect_str(self)
} else {
self.0.serialize(serializer)
}
}
}
impl<'de> Deserialize<'de> for Lsn {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
struct LsnVisitor {
is_human_readable_deserializer: bool,
}
impl<'de> Visitor<'de> for LsnVisitor {
type Value = Lsn;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
if self.is_human_readable_deserializer {
formatter.write_str(
"value in form of hex string({upper_u32_hex}/{lower_u32_hex}) representing u64 integer",
)
} else {
formatter.write_str("value in form of integer(u64)")
}
}
fn visit_u64<E>(self, v: u64) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Ok(Lsn(v))
}
fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>
where
E: serde::de::Error,
{
Lsn::from_str(v).map_err(|e| E::custom(e))
}
}
if deserializer.is_human_readable() {
deserializer.deserialize_str(LsnVisitor {
is_human_readable_deserializer: true,
})
} else {
deserializer.deserialize_u64(LsnVisitor {
is_human_readable_deserializer: false,
})
}
}
}
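A minimal sketch of the two wire formats this pair of impls produces, assuming serde_json on the human-readable side (the same crate the doctest for `serde_as_u64` below uses); the compact side is what the bincode tests at the bottom of this file exercise:

    let lsn = Lsn(0x0123456789abcdef);
    // Human-readable formats get the `{hi}/{lo}` hex form via Display/FromStr.
    assert_eq!(serde_json::to_string(&lsn).unwrap(), r#""1234567/89ABCDEF""#);
    assert_eq!(serde_json::from_str::<Lsn>(r#""1234567/89ABCDEF""#).unwrap(), lsn);
    // Compact formats (e.g. bincode) see a plain u64 instead.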
/// Allows (de)serialization of an `Lsn` always as `u64`.
///
/// ### Example
///
/// ```rust
/// # use serde::{Serialize, Deserialize};
/// use utils::lsn::Lsn;
///
/// #[derive(PartialEq, Serialize, Deserialize, Debug)]
/// struct Foo {
/// #[serde(with = "utils::lsn::serde_as_u64")]
/// always_u64: Lsn,
/// }
///
/// let orig = Foo { always_u64: Lsn(1234) };
///
/// let res = serde_json::to_string(&orig).unwrap();
/// assert_eq!(res, r#"{"always_u64":1234}"#);
///
/// let foo = serde_json::from_str::<Foo>(&res).unwrap();
/// assert_eq!(foo, orig);
/// ```
///
pub mod serde_as_u64 {
use super::Lsn;
/// Serializes the Lsn as u64 disregarding the human readability of the format.
///
/// Meant to be used via `#[serde(with = "...")]` or `#[serde(serialize_with = "...")]`.
pub fn serialize<S: serde::Serializer>(lsn: &Lsn, serializer: S) -> Result<S::Ok, S::Error> {
use serde::Serialize;
lsn.0.serialize(serializer)
}
/// Deserializes the Lsn as u64 disregarding the human readability of the format.
///
/// Meant to be used via `#[serde(with = "...")]` or `#[serde(deserialize_with = "...")]`.
pub fn deserialize<'de, D: serde::Deserializer<'de>>(deserializer: D) -> Result<Lsn, D::Error> {
use serde::Deserialize;
u64::deserialize(deserializer).map(Lsn)
}
}
/// We tried to parse an LSN from a string, but failed /// We tried to parse an LSN from a string, but failed
#[derive(Debug, PartialEq, Eq, thiserror::Error)] #[derive(Debug, PartialEq, Eq, thiserror::Error)]
#[error("LsnParseError")] #[error("LsnParseError")]
@@ -264,8 +368,13 @@ impl MonotonicCounter<Lsn> for RecordLsn {
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use crate::bin_ser::BeSer;
use super::*; use super::*;
use serde::ser::Serialize;
use serde_assert::{Deserializer, Serializer, Token, Tokens};
#[test] #[test]
fn test_lsn_strings() { fn test_lsn_strings() {
assert_eq!("12345678/AAAA5555".parse(), Ok(Lsn(0x12345678AAAA5555))); assert_eq!("12345678/AAAA5555".parse(), Ok(Lsn(0x12345678AAAA5555)));
@@ -341,4 +450,95 @@ mod tests {
assert_eq!(lsn.fetch_max(Lsn(6000)), Lsn(5678)); assert_eq!(lsn.fetch_max(Lsn(6000)), Lsn(5678));
assert_eq!(lsn.fetch_max(Lsn(5000)), Lsn(6000)); assert_eq!(lsn.fetch_max(Lsn(5000)), Lsn(6000));
} }
#[test]
fn test_lsn_serde() {
let original_lsn = Lsn(0x0123456789abcdef);
let expected_non_readable_tokens = Tokens(vec![Token::U64(0x0123456789abcdef)]);
let expected_readable_tokens =
Tokens(vec![Token::Str(String::from("1234567/89ABCDEF"))]);
// Testing NON human_readable ser/de
let serializer = Serializer::builder().is_human_readable(false).build();
let non_readable_ser_tokens = original_lsn.serialize(&serializer).unwrap();
assert_eq!(non_readable_ser_tokens, expected_non_readable_tokens);
let mut deserializer = Deserializer::builder()
.is_human_readable(false)
.tokens(non_readable_ser_tokens)
.build();
let des_lsn = Lsn::deserialize(&mut deserializer).unwrap();
assert_eq!(des_lsn, original_lsn);
// Testing human_readable ser/de
let serializer = Serializer::builder().is_human_readable(true).build();
let readable_ser_tokens = original_lsn.serialize(&serializer).unwrap();
assert_eq!(readable_ser_tokens, expected_readable_tokens);
let mut deserializer = Deserializer::builder()
.is_human_readable(true)
.tokens(readable_ser_tokens)
.build();
let des_lsn = Lsn::deserialize(&mut deserializer).unwrap();
assert_eq!(des_lsn, original_lsn);
// Testing mismatching ser/de
let serializer = Serializer::builder().is_human_readable(false).build();
let non_readable_ser_tokens = original_lsn.serialize(&serializer).unwrap();
let mut deserializer = Deserializer::builder()
.is_human_readable(true)
.tokens(non_readable_ser_tokens)
.build();
Lsn::deserialize(&mut deserializer).unwrap_err();
let serializer = Serializer::builder().is_human_readable(true).build();
let readable_ser_tokens = original_lsn.serialize(&serializer).unwrap();
let mut deserializer = Deserializer::builder()
.is_human_readable(false)
.tokens(readable_ser_tokens)
.build();
Lsn::deserialize(&mut deserializer).unwrap_err();
}
#[test]
fn test_lsn_ensure_roundtrip() {
let original_lsn = Lsn(0xaaaabbbb);
let serializer = Serializer::builder().is_human_readable(false).build();
let ser_tokens = original_lsn.serialize(&serializer).unwrap();
let mut deserializer = Deserializer::builder()
.is_human_readable(false)
.tokens(ser_tokens)
.build();
let des_lsn = Lsn::deserialize(&mut deserializer).unwrap();
assert_eq!(des_lsn, original_lsn);
}
#[test]
fn test_lsn_bincode_serde() {
let lsn = Lsn(0x0123456789abcdef);
let expected_bytes = [0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef];
let ser_bytes = lsn.ser().unwrap();
assert_eq!(ser_bytes, expected_bytes);
let des_lsn = Lsn::des(&ser_bytes).unwrap();
assert_eq!(des_lsn, lsn);
}
#[test]
fn test_lsn_bincode_ensure_roundtrip() {
let original_lsn = Lsn(0x01_02_03_04_05_06_07_08);
let expected_bytes = vec![0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08];
let ser_bytes = original_lsn.ser().unwrap();
assert_eq!(ser_bytes, expected_bytes);
let des_lsn = Lsn::des(&ser_bytes).unwrap();
assert_eq!(des_lsn, original_lsn);
}
} }

View File

@@ -3,7 +3,6 @@ use std::time::{Duration, SystemTime};
use bytes::{Buf, BufMut, Bytes, BytesMut}; use bytes::{Buf, BufMut, Bytes, BytesMut};
use pq_proto::{read_cstr, PG_EPOCH}; use pq_proto::{read_cstr, PG_EPOCH};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use tracing::{trace, warn}; use tracing::{trace, warn};
use crate::lsn::Lsn; use crate::lsn::Lsn;
@@ -15,21 +14,17 @@ use crate::lsn::Lsn;
/// ///
/// serde Serialize is used only for human readable dump to json (e.g. in /// serde Serialize is used only for human readable dump to json (e.g. in
/// safekeepers debug_dump). /// safekeepers debug_dump).
#[serde_as]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)] #[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub struct PageserverFeedback { pub struct PageserverFeedback {
/// Last known size of the timeline. Used to enforce timeline size limit. /// Last known size of the timeline. Used to enforce timeline size limit.
pub current_timeline_size: u64, pub current_timeline_size: u64,
/// LSN last received and ingested by the pageserver. Controls backpressure. /// LSN last received and ingested by the pageserver. Controls backpressure.
#[serde_as(as = "DisplayFromStr")]
pub last_received_lsn: Lsn, pub last_received_lsn: Lsn,
/// LSN up to which data is persisted by the pageserver to its local disc. /// LSN up to which data is persisted by the pageserver to its local disc.
/// Controls backpressure. /// Controls backpressure.
#[serde_as(as = "DisplayFromStr")]
pub disk_consistent_lsn: Lsn, pub disk_consistent_lsn: Lsn,
/// LSN up to which data is persisted by the pageserver on s3; safekeepers /// LSN up to which data is persisted by the pageserver on s3; safekeepers
/// consider WAL before it can be removed. /// consider WAL before it can be removed.
#[serde_as(as = "DisplayFromStr")]
pub remote_consistent_lsn: Lsn, pub remote_consistent_lsn: Lsn,
// Serialize with RFC3339 format. // Serialize with RFC3339 format.
#[serde(with = "serde_systemtime")] #[serde(with = "serde_systemtime")]

View File

@@ -125,6 +125,9 @@ where
// Wake everyone with an error. // Wake everyone with an error.
let mut internal = self.internal.lock().unwrap(); let mut internal = self.internal.lock().unwrap();
// Block any future waiters from starting
internal.shutdown = true;
// This will steal the entire waiters map. // This will steal the entire waiters map.
// When we drop it all waiters will be woken. // When we drop it all waiters will be woken.
mem::take(&mut internal.waiters) mem::take(&mut internal.waiters)

View File

@@ -1,6 +1,7 @@
/// Immediately terminate the calling process without calling /// Immediately terminate the calling process without calling
/// atexit callbacks, C runtime destructors etc. We mainly use /// atexit callbacks, C runtime destructors etc. We mainly use
/// this to protect coverage data from concurrent writes. /// this to protect coverage data from concurrent writes.
pub fn exit_now(code: u8) { pub fn exit_now(code: u8) -> ! {
// SAFETY: exiting is safe, the ffi is not safe
unsafe { nix::libc::_exit(code as _) }; unsafe { nix::libc::_exit(code as _) };
} }

View File

@@ -1 +1,3 @@
pub mod heavier_once_cell; pub mod heavier_once_cell;
pub mod gate;

libs/utils/src/sync/gate.rs (new file, 158 lines)
View File

@@ -0,0 +1,158 @@
use std::{sync::Arc, time::Duration};
/// Gates are a concurrency helper, primarily used for implementing safe shutdown.
///
/// Users of a resource call `enter()` to acquire a GateGuard, and the owner of
/// the resource calls `close()` when they want to ensure that all holders of guards
/// have released them, and that no future guards will be issued.
pub struct Gate {
/// Each caller of enter() takes one unit from the semaphore. In close(), we
/// take all the units to ensure all GateGuards are destroyed.
sem: Arc<tokio::sync::Semaphore>,
/// For observability only: a name that will be used to log warnings if a particular
/// gate is holding up shutdown
name: String,
}
/// RAII guard for a [`Gate`]: as long as this exists, calls to [`Gate::close`] will
/// not complete.
#[derive(Debug)]
pub struct GateGuard(tokio::sync::OwnedSemaphorePermit);
/// Observability helper: every `warn_period`, emit a log warning that we're still waiting on this gate
async fn warn_if_stuck<Fut: std::future::Future>(
fut: Fut,
name: &str,
warn_period: std::time::Duration,
) -> <Fut as std::future::Future>::Output {
let started = std::time::Instant::now();
let mut fut = std::pin::pin!(fut);
loop {
match tokio::time::timeout(warn_period, &mut fut).await {
Ok(ret) => return ret,
Err(_) => {
tracing::warn!(
gate = name,
elapsed_ms = started.elapsed().as_millis(),
"still waiting, taking longer than expected..."
);
}
}
}
}
#[derive(Debug)]
pub enum GateError {
GateClosed,
}
impl Gate {
const MAX_UNITS: u32 = u32::MAX;
pub fn new(name: String) -> Self {
Self {
sem: Arc::new(tokio::sync::Semaphore::new(Self::MAX_UNITS as usize)),
name,
}
}
/// Acquire a guard that will prevent close() calls from completing. If close()
/// was already called, this will return an error which should be interpreted
/// as "shutting down".
///
/// This function would typically be used from e.g. request handlers. While holding
/// the guard returned from this function, it is important to respect a CancellationToken
/// to avoid blocking close() indefinitely: typically types that contain a Gate will
/// also contain a CancellationToken.
pub fn enter(&self) -> Result<GateGuard, GateError> {
self.sem
.clone()
.try_acquire_owned()
.map(GateGuard)
.map_err(|_| GateError::GateClosed)
}
/// Types with a shutdown() method and a gate should call this method at the
/// end of shutdown, to ensure that all GateGuard holders are done.
///
/// This will wait for all guards to be destroyed. For this to complete promptly, it is
/// important that the holders of such guards are respecting a CancellationToken which has
/// been cancelled before entering this function.
pub async fn close(&self) {
warn_if_stuck(self.do_close(), &self.name, Duration::from_millis(1000)).await
}
/// Check if [`Self::close()`] has finished waiting for all [`Self::enter()`] users to finish. This
/// is usually analogous to "Did shutdown finish?" for types that include a Gate, whereas checking
/// the CancellationToken on such types is analogous to "Did shutdown start?"
pub fn close_complete(&self) -> bool {
self.sem.is_closed()
}
async fn do_close(&self) {
tracing::debug!(gate = self.name, "Closing Gate...");
match self.sem.acquire_many(Self::MAX_UNITS).await {
Ok(_units) => {
// While holding all units, close the semaphore. All subsequent calls to enter() will fail.
self.sem.close();
}
Err(_) => {
// Semaphore closed: we are the only function that can do this, so it indicates a double-call.
// This is legal. Timeline::shutdown for example is not protected from being called more than
// once.
tracing::debug!(gate = self.name, "Double close")
}
}
tracing::debug!(gate = self.name, "Closed Gate.")
}
}
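A minimal usage sketch of the pattern the doc comments above describe, assuming an async (tokio) context and pairing the Gate with a CancellationToken as `enter()` recommends (all names are illustrative):

    let gate = Gate::new("example".to_string());
    let cancel = tokio_util::sync::CancellationToken::new();
    // Request handler side: hold a guard for the duration of the work, and keep
    // respecting `cancel` so that close() is never blocked indefinitely.
    if let Ok(_guard) = gate.enter() {
        // ... do the work, bailing out once cancel.is_cancelled() ...
    }
    // Shutdown side: signal cancellation first, then wait for all guards to drop.
    cancel.cancel();
    gate.close().await;
    assert!(gate.close_complete());
    assert!(gate.enter().is_err()); // later callers are refused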
#[cfg(test)]
mod tests {
use futures::FutureExt;
use super::*;
#[tokio::test]
async fn test_idle_gate() {
// Having taken no gates, we should not be blocked in close
let gate = Gate::new("test".to_string());
gate.close().await;
// If a guard is dropped before entering, close should not be blocked
let gate = Gate::new("test".to_string());
let guard = gate.enter().unwrap();
drop(guard);
gate.close().await;
// Entering a closed gate fails
gate.enter().expect_err("enter should fail after close");
}
#[tokio::test]
async fn test_busy_gate() {
let gate = Gate::new("test".to_string());
let guard = gate.enter().unwrap();
let mut close_fut = std::pin::pin!(gate.close());
// Close should be blocked
assert!(close_fut.as_mut().now_or_never().is_none());
// Attempting to enter() should fail, even though close isn't done yet.
gate.enter()
.expect_err("enter should fail while close is pending");
drop(guard);
// Guard is gone, close should finish
assert!(close_fut.as_mut().now_or_never().is_some());
// Attempting to enter() is still forbidden
gate.enter().expect_err("enter should fail finishing close");
}
}

View File

@@ -1,4 +1,7 @@
use std::sync::{Arc, Mutex, MutexGuard}; use std::sync::{
atomic::{AtomicUsize, Ordering},
Arc, Mutex, MutexGuard,
};
use tokio::sync::Semaphore; use tokio::sync::Semaphore;
/// Custom design like [`tokio::sync::OnceCell`] but using [`OwnedSemaphorePermit`] instead of /// Custom design like [`tokio::sync::OnceCell`] but using [`OwnedSemaphorePermit`] instead of
@@ -10,6 +13,7 @@ use tokio::sync::Semaphore;
/// [`OwnedSemaphorePermit`]: tokio::sync::OwnedSemaphorePermit /// [`OwnedSemaphorePermit`]: tokio::sync::OwnedSemaphorePermit
pub struct OnceCell<T> { pub struct OnceCell<T> {
inner: Mutex<Inner<T>>, inner: Mutex<Inner<T>>,
initializers: AtomicUsize,
} }
impl<T> Default for OnceCell<T> { impl<T> Default for OnceCell<T> {
@@ -17,6 +21,7 @@ impl<T> Default for OnceCell<T> {
fn default() -> Self { fn default() -> Self {
Self { Self {
inner: Default::default(), inner: Default::default(),
initializers: AtomicUsize::new(0),
} }
} }
} }
@@ -49,6 +54,7 @@ impl<T> OnceCell<T> {
init_semaphore: Arc::new(sem), init_semaphore: Arc::new(sem),
value: Some(value), value: Some(value),
}), }),
initializers: AtomicUsize::new(0),
} }
} }
@@ -60,8 +66,8 @@ impl<T> OnceCell<T> {
/// Initialization is panic-safe and cancellation-safe. /// Initialization is panic-safe and cancellation-safe.
pub async fn get_or_init<F, Fut, E>(&self, factory: F) -> Result<Guard<'_, T>, E> pub async fn get_or_init<F, Fut, E>(&self, factory: F) -> Result<Guard<'_, T>, E>
where where
F: FnOnce() -> Fut, F: FnOnce(InitPermit) -> Fut,
Fut: std::future::Future<Output = Result<T, E>>, Fut: std::future::Future<Output = Result<(T, InitPermit), E>>,
{ {
let sem = { let sem = {
let guard = self.inner.lock().unwrap(); let guard = self.inner.lock().unwrap();
@@ -71,29 +77,61 @@ impl<T> OnceCell<T> {
guard.init_semaphore.clone() guard.init_semaphore.clone()
}; };
let permit = sem.acquire_owned().await; let permit = {
if permit.is_err() { // increment the count for the duration of queued
let guard = self.inner.lock().unwrap(); let _guard = CountWaitingInitializers::start(self);
assert!( sem.acquire_owned().await
guard.value.is_some(), };
"semaphore got closed, must be initialized"
);
return Ok(Guard(guard));
} else {
// now we try
let value = factory().await?;
let mut guard = self.inner.lock().unwrap(); match permit {
assert!( Ok(permit) => {
guard.value.is_none(), let permit = InitPermit(permit);
"we won permit, must not be initialized" let (value, _permit) = factory(permit).await?;
);
guard.value = Some(value); let guard = self.inner.lock().unwrap();
guard.init_semaphore.close();
Ok(Guard(guard)) Ok(Self::set0(value, guard))
}
Err(_closed) => {
let guard = self.inner.lock().unwrap();
assert!(
guard.value.is_some(),
"semaphore got closed, must be initialized"
);
return Ok(Guard(guard));
}
} }
} }
/// Assuming a permit is held after a previous call to [`Guard::take_and_deinit`], it can be used
/// to complete initializing the inner value.
///
/// # Panics
///
/// If the inner has already been initialized.
pub fn set(&self, value: T, _permit: InitPermit) -> Guard<'_, T> {
let guard = self.inner.lock().unwrap();
// cannot assert that this permit is for self.inner's init_semaphore, but we can assert that it
// cannot give out more permits right now.
if guard.init_semaphore.try_acquire().is_ok() {
drop(guard);
panic!("permit is of wrong origin");
}
Self::set0(value, guard)
}
fn set0(value: T, mut guard: std::sync::MutexGuard<'_, Inner<T>>) -> Guard<'_, T> {
if guard.value.is_some() {
drop(guard);
unreachable!("we won permit, must not be initialized");
}
guard.value = Some(value);
guard.init_semaphore.close();
Guard(guard)
}
/// Returns a guard to an existing initialized value, if any. /// Returns a guard to an existing initialized value, if any.
pub fn get(&self) -> Option<Guard<'_, T>> { pub fn get(&self) -> Option<Guard<'_, T>> {
let guard = self.inner.lock().unwrap(); let guard = self.inner.lock().unwrap();
@@ -103,6 +141,28 @@ impl<T> OnceCell<T> {
None None
} }
} }
/// Return the number of [`Self::get_or_init`] calls waiting for initialization to complete.
pub fn initializer_count(&self) -> usize {
self.initializers.load(Ordering::Relaxed)
}
}
/// Drop-guard counter of tasks queued up and waiting to initialize; mainly of interest to the
/// initializing task, for example at the end of initialization.
struct CountWaitingInitializers<'a, T>(&'a OnceCell<T>);
impl<'a, T> CountWaitingInitializers<'a, T> {
fn start(target: &'a OnceCell<T>) -> Self {
target.initializers.fetch_add(1, Ordering::Relaxed);
CountWaitingInitializers(target)
}
}
impl<'a, T> Drop for CountWaitingInitializers<'a, T> {
fn drop(&mut self) {
self.0.initializers.fetch_sub(1, Ordering::Relaxed);
}
} }
/// Uninteresting guard object to allow short-lived access to inspect or clone the held, /// Uninteresting guard object to allow short-lived access to inspect or clone the held,
@@ -135,7 +195,7 @@ impl<'a, T> Guard<'a, T> {
/// ///
/// The permit will be on a semaphore part of the new internal value, and any following /// The permit will be on a semaphore part of the new internal value, and any following
/// [`OnceCell::get_or_init`] will wait on it to complete. /// [`OnceCell::get_or_init`] will wait on it to complete.
pub fn take_and_deinit(&mut self) -> (T, tokio::sync::OwnedSemaphorePermit) { pub fn take_and_deinit(&mut self) -> (T, InitPermit) {
let mut swapped = Inner::default(); let mut swapped = Inner::default();
let permit = swapped let permit = swapped
.init_semaphore .init_semaphore
@@ -145,11 +205,14 @@ impl<'a, T> Guard<'a, T> {
std::mem::swap(&mut *self.0, &mut swapped); std::mem::swap(&mut *self.0, &mut swapped);
swapped swapped
.value .value
.map(|v| (v, permit)) .map(|v| (v, InitPermit(permit)))
.expect("guard is not created unless value has been initialized") .expect("guard is not created unless value has been initialized")
} }
} }
/// Permit held by the task that is initializing or deinitializing the `OnceCell`.
pub struct InitPermit(tokio::sync::OwnedSemaphorePermit);
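A minimal sketch of the updated `get_or_init` factory contract (the value and error type are illustrative; the tests below exercise the same pattern):

    let cell = OnceCell::default();
    let guard = cell
        .get_or_init(|permit| async move {
            // Fallible initialization runs while the InitPermit is held;
            // returning Err leaves the cell uninitialized and retryable.
            Ok::<_, std::convert::Infallible>((42u32, permit))
        })
        .await
        .unwrap();
    assert_eq!(*guard, 42);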
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;
@@ -185,11 +248,11 @@ mod tests {
barrier.wait().await; barrier.wait().await;
let won = { let won = {
let g = cell let g = cell
.get_or_init(|| { .get_or_init(|permit| {
counters.factory_got_to_run.fetch_add(1, Ordering::Relaxed); counters.factory_got_to_run.fetch_add(1, Ordering::Relaxed);
async { async {
counters.future_polled.fetch_add(1, Ordering::Relaxed); counters.future_polled.fetch_add(1, Ordering::Relaxed);
Ok::<_, Infallible>(i) Ok::<_, Infallible>((i, permit))
} }
}) })
.await .await
@@ -243,7 +306,7 @@ mod tests {
deinitialization_started.wait().await; deinitialization_started.wait().await;
let started_at = tokio::time::Instant::now(); let started_at = tokio::time::Instant::now();
cell.get_or_init(|| async { Ok::<_, Infallible>(reinit) }) cell.get_or_init(|permit| async { Ok::<_, Infallible>((reinit, permit)) })
.await .await
.unwrap(); .unwrap();
@@ -258,18 +321,32 @@ mod tests {
assert_eq!(*cell.get().unwrap(), reinit); assert_eq!(*cell.get().unwrap(), reinit);
} }
#[test]
fn reinit_with_deinit_permit() {
let cell = Arc::new(OnceCell::new(42));
let (mol, permit) = cell.get().unwrap().take_and_deinit();
cell.set(5, permit);
assert_eq!(*cell.get().unwrap(), 5);
let (five, permit) = cell.get().unwrap().take_and_deinit();
assert_eq!(5, five);
cell.set(mol, permit);
assert_eq!(*cell.get().unwrap(), 42);
}
#[tokio::test] #[tokio::test]
async fn initialization_attemptable_until_ok() { async fn initialization_attemptable_until_ok() {
let cell = OnceCell::default(); let cell = OnceCell::default();
for _ in 0..10 { for _ in 0..10 {
cell.get_or_init(|| async { Err("whatever error") }) cell.get_or_init(|_permit| async { Err("whatever error") })
.await .await
.unwrap_err(); .unwrap_err();
} }
let g = cell let g = cell
.get_or_init(|| async { Ok::<_, Infallible>("finally success") }) .get_or_init(|permit| async { Ok::<_, Infallible>(("finally success", permit)) })
.await .await
.unwrap(); .unwrap();
assert_eq!(*g, "finally success"); assert_eq!(*g, "finally success");
@@ -281,11 +358,11 @@ mod tests {
let barrier = tokio::sync::Barrier::new(2); let barrier = tokio::sync::Barrier::new(2);
let initializer = cell.get_or_init(|| async { let initializer = cell.get_or_init(|permit| async {
barrier.wait().await; barrier.wait().await;
futures::future::pending::<()>().await; futures::future::pending::<()>().await;
Ok::<_, Infallible>("never reached") Ok::<_, Infallible>(("never reached", permit))
}); });
tokio::select! { tokio::select! {
@@ -298,7 +375,7 @@ mod tests {
assert!(cell.get().is_none()); assert!(cell.get().is_none());
let g = cell let g = cell
.get_or_init(|| async { Ok::<_, Infallible>("now initialized") }) .get_or_init(|permit| async { Ok::<_, Infallible>(("now initialized", permit)) })
.await .await
.unwrap(); .unwrap();
assert_eq!(*g, "now initialized"); assert_eq!(*g, "now initialized");

libs/utils/src/timeout.rs (new file, 37 lines)
View File

@@ -0,0 +1,37 @@
use std::time::Duration;
use tokio_util::sync::CancellationToken;
pub enum TimeoutCancellableError {
Timeout,
Cancelled,
}
/// Wrap [`tokio::time::timeout`] with a CancellationToken.
///
/// This wrapper is appropriate for any long running operation in a task
/// that ought to respect a CancellationToken (which means most tasks).
///
/// The only time you should use a bare tokio::timeout is when the future `F`
/// itself respects a CancellationToken: otherwise, always use this wrapper
/// with your CancellationToken to ensure that your task does not hold up
/// graceful shutdown.
pub async fn timeout_cancellable<F>(
duration: Duration,
cancel: &CancellationToken,
future: F,
) -> Result<F::Output, TimeoutCancellableError>
where
F: std::future::Future,
{
tokio::select!(
r = tokio::time::timeout(duration, future) => {
r.map_err(|_| TimeoutCancellableError::Timeout)
},
_ = cancel.cancelled() => {
Err(TimeoutCancellableError::Cancelled)
}
)
}
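A short usage sketch of the helper above, assuming the imports at the top of this file; `do_work()` is a hypothetical stand-in for whatever long-running operation the task performs:

    let cancel = CancellationToken::new();
    match timeout_cancellable(Duration::from_secs(5), &cancel, do_work()).await {
        Ok(_output) => { /* finished within the deadline and before shutdown */ }
        Err(TimeoutCancellableError::Timeout) => { /* deadline elapsed */ }
        Err(TimeoutCancellableError::Cancelled) => { /* shutdown was requested */ }
    }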

View File

@@ -19,13 +19,12 @@ inotify.workspace = true
serde.workspace = true serde.workspace = true
serde_json.workspace = true serde_json.workspace = true
sysinfo.workspace = true sysinfo.workspace = true
tokio.workspace = true tokio = { workspace = true, features = ["rt-multi-thread"] }
tokio-postgres.workspace = true tokio-postgres.workspace = true
tokio-stream.workspace = true tokio-stream.workspace = true
tokio-util.workspace = true tokio-util.workspace = true
tracing.workspace = true tracing.workspace = true
tracing-subscriber.workspace = true tracing-subscriber.workspace = true
workspace_hack = { version = "0.1", path = "../../workspace_hack" }
[target.'cfg(target_os = "linux")'.dependencies] [target.'cfg(target_os = "linux")'.dependencies]
cgroups-rs = "0.3.3" cgroups-rs = "0.3.3"

View File

@@ -21,11 +21,6 @@ pub struct FileCacheState {
#[derive(Debug)] #[derive(Debug)]
pub struct FileCacheConfig { pub struct FileCacheConfig {
/// Whether the file cache is *actually* stored in memory (e.g. by writing to
/// a tmpfs or shmem file). If true, the size of the file cache will be counted against the
/// memory available for the cgroup.
pub(crate) in_memory: bool,
/// The size of the file cache, in terms of the size of the resource it consumes /// The size of the file cache, in terms of the size of the resource it consumes
/// (currently: only memory) /// (currently: only memory)
/// ///
@@ -59,22 +54,9 @@ pub struct FileCacheConfig {
spread_factor: f64, spread_factor: f64,
} }
impl FileCacheConfig { impl Default for FileCacheConfig {
pub fn default_in_memory() -> Self { fn default() -> Self {
Self { Self {
in_memory: true,
// 75 %
resource_multiplier: 0.75,
// 640 MiB; (512 + 128)
min_remaining_after_cache: NonZeroU64::new(640 * MiB).unwrap(),
// ensure any increase in file cache size is split 90-10 with 10% to other memory
spread_factor: 0.1,
}
}
pub fn default_on_disk() -> Self {
Self {
in_memory: false,
resource_multiplier: 0.75, resource_multiplier: 0.75,
// 256 MiB - lower than when in memory because overcommitting is safe; if we don't have // 256 MiB - lower than when in memory because overcommitting is safe; if we don't have
// memory, the kernel will just evict from its page cache, rather than e.g. killing // memory, the kernel will just evict from its page cache, rather than e.g. killing
@@ -83,7 +65,9 @@ impl FileCacheConfig {
spread_factor: 0.1, spread_factor: 0.1,
} }
} }
}
impl FileCacheConfig {
/// Make sure fields of the config are consistent. /// Make sure fields of the config are consistent.
pub fn validate(&self) -> anyhow::Result<()> { pub fn validate(&self) -> anyhow::Result<()> {
// Single field validity // Single field validity
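With the in-memory/on-disk split removed, constructing the config reduces to the sketch below (mirroring the runner change further down; it is only assumed here that `validate()` accepts the defaults):

    let config = FileCacheConfig::default(); // resource_multiplier 0.75, spread_factor 0.1, ...
    config.validate().expect("default file cache config should be consistent");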

View File

@@ -1,3 +1,5 @@
#![deny(unsafe_code)]
#![deny(clippy::undocumented_unsafe_blocks)]
#![cfg(target_os = "linux")] #![cfg(target_os = "linux")]
use anyhow::Context; use anyhow::Context;
@@ -39,16 +41,6 @@ pub struct Args {
#[arg(short, long)] #[arg(short, long)]
pub pgconnstr: Option<String>, pub pgconnstr: Option<String>,
/// Flag to signal that the Postgres file cache is on disk (i.e. not in memory aside from the
/// kernel's page cache), and therefore should not count against available memory.
//
// NB: Ideally this flag would directly refer to whether the file cache is in memory (rather
// than a roundabout way, via whether it's on disk), but in order to be backwards compatible
// during the switch away from an in-memory file cache, we had to default to the previous
// behavior.
#[arg(long)]
pub file_cache_on_disk: bool,
/// The address we should listen on for connection requests. For the /// The address we should listen on for connection requests. For the
/// agent, this is 0.0.0.0:10301. For the informant, this is 127.0.0.1:10369. /// agent, this is 0.0.0.0:10301. For the informant, this is 127.0.0.1:10369.
#[arg(short, long)] #[arg(short, long)]

View File

@@ -156,10 +156,7 @@ impl Runner {
// memory limits. // memory limits.
if let Some(connstr) = &args.pgconnstr { if let Some(connstr) = &args.pgconnstr {
info!("initializing file cache"); info!("initializing file cache");
let config = match args.file_cache_on_disk { let config = FileCacheConfig::default();
true => FileCacheConfig::default_on_disk(),
false => FileCacheConfig::default_in_memory(),
};
let mut file_cache = FileCacheState::new(connstr, config, token.clone()) let mut file_cache = FileCacheState::new(connstr, config, token.clone())
.await .await
@@ -187,10 +184,7 @@ impl Runner {
info!("file cache size actually got set to {actual_size}") info!("file cache size actually got set to {actual_size}")
} }
if args.file_cache_on_disk { file_cache_disk_size = actual_size;
file_cache_disk_size = actual_size;
}
state.filecache = Some(file_cache); state.filecache = Some(file_cache);
} }
@@ -239,17 +233,11 @@ impl Runner {
let requested_mem = target.mem; let requested_mem = target.mem;
let usable_system_memory = requested_mem.saturating_sub(self.config.sys_buffer_bytes); let usable_system_memory = requested_mem.saturating_sub(self.config.sys_buffer_bytes);
let (expected_file_cache_size, expected_file_cache_disk_size) = self let expected_file_cache_size = self
.filecache .filecache
.as_ref() .as_ref()
.map(|file_cache| { .map(|file_cache| file_cache.config.calculate_cache_size(usable_system_memory))
let size = file_cache.config.calculate_cache_size(usable_system_memory); .unwrap_or(0);
match file_cache.config.in_memory {
true => (size, 0),
false => (size, size),
}
})
.unwrap_or((0, 0));
if let Some(cgroup) = &self.cgroup { if let Some(cgroup) = &self.cgroup {
let (last_time, last_history) = *cgroup.watcher.borrow(); let (last_time, last_history) = *cgroup.watcher.borrow();
@@ -273,7 +261,7 @@ impl Runner {
let new_threshold = self let new_threshold = self
.config .config
.cgroup_threshold(usable_system_memory, expected_file_cache_disk_size); .cgroup_threshold(usable_system_memory, expected_file_cache_size);
let current = last_history.avg_non_reclaimable; let current = last_history.avg_non_reclaimable;
@@ -300,13 +288,10 @@ impl Runner {
.set_file_cache_size(expected_file_cache_size) .set_file_cache_size(expected_file_cache_size)
.await .await
.context("failed to set file cache size")?; .context("failed to set file cache size")?;
if !file_cache.config.in_memory { file_cache_disk_size = actual_usage;
file_cache_disk_size = actual_usage;
}
let message = format!( let message = format!(
"set file cache size to {} MiB (in memory = {})", "set file cache size to {} MiB",
bytes_to_mebibytes(actual_usage), bytes_to_mebibytes(actual_usage),
file_cache.config.in_memory,
); );
info!("downscale: {message}"); info!("downscale: {message}");
status.push(message); status.push(message);
@@ -357,9 +342,7 @@ impl Runner {
.set_file_cache_size(expected_usage) .set_file_cache_size(expected_usage)
.await .await
.context("failed to set file cache size")?; .context("failed to set file cache size")?;
if !file_cache.config.in_memory { file_cache_disk_size = actual_usage;
file_cache_disk_size = actual_usage;
}
if actual_usage != expected_usage { if actual_usage != expected_usage {
warn!( warn!(

View File

@@ -188,6 +188,7 @@ extern "C" fn recovery_download(
} }
} }
#[allow(clippy::unnecessary_cast)]
extern "C" fn wal_read( extern "C" fn wal_read(
sk: *mut Safekeeper, sk: *mut Safekeeper,
buf: *mut ::std::os::raw::c_char, buf: *mut ::std::os::raw::c_char,
@@ -421,6 +422,7 @@ impl std::fmt::Display for Level {
} }
/// Take ownership of `Vec<u8>` from StringInfoData. /// Take ownership of `Vec<u8>` from StringInfoData.
#[allow(clippy::unnecessary_cast)]
pub(crate) fn take_vec_u8(pg: &mut StringInfoData) -> Option<Vec<u8>> { pub(crate) fn take_vec_u8(pg: &mut StringInfoData) -> Option<Vec<u8>> {
if pg.data.is_null() { if pg.data.is_null() {
return None; return None;

View File

@@ -186,7 +186,7 @@ impl Wrapper {
.unwrap() .unwrap()
.into_bytes_with_nul(); .into_bytes_with_nul();
assert!(safekeepers_list_vec.len() == safekeepers_list_vec.capacity()); assert!(safekeepers_list_vec.len() == safekeepers_list_vec.capacity());
let safekeepers_list = safekeepers_list_vec.as_mut_ptr() as *mut i8; let safekeepers_list = safekeepers_list_vec.as_mut_ptr() as *mut std::ffi::c_char;
let callback_data = Box::into_raw(Box::new(api)) as *mut ::std::os::raw::c_void; let callback_data = Box::into_raw(Box::new(api)) as *mut ::std::os::raw::c_void;

View File

@@ -88,10 +88,6 @@ criterion.workspace = true
hex-literal.workspace = true hex-literal.workspace = true
tokio = { workspace = true, features = ["process", "sync", "fs", "rt", "io-util", "time", "test-util"] } tokio = { workspace = true, features = ["process", "sync", "fs", "rt", "io-util", "time", "test-util"] }
[[bench]]
name = "bench_writes"
harness = false
[[bench]] [[bench]]
name = "bench_layer_map" name = "bench_layer_map"
harness = false harness = false

View File

@@ -10,7 +10,3 @@ To run a specific file:
To run a specific function: To run a specific function:
`cargo bench --bench bench_layer_map -- real_map_uniform_queries` `cargo bench --bench bench_layer_map -- real_map_uniform_queries`
To add a new benchmark:
1. Create new file containing `criterion_main!`
2. Add it to `Cargo.toml`

View File

@@ -1,76 +0,0 @@
use bytes::{Bytes, BytesMut};
use camino::{Utf8Path, Utf8PathBuf};
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use pageserver::{tenant::storage_layer::InMemoryLayer, config::PageServerConf, context::{RequestContext, DownloadBehavior}, task_mgr::TaskKind, repository::Key, virtual_file};
use pageserver::repository::Value;
use utils::{id::{TimelineId, TenantId}, lsn::Lsn};
fn bench_writes(c: &mut Criterion) {
// Boilerplate
// TODO this setup can be avoided if I reuse TenantHarness but it's difficult
// because it's only compiled for tests, and it's hacky because tbh we
// shouldn't need this many inputs for a function that just writes bytes
// from memory to disk. Performance-critical functions should be
// self-contained (almost like they're separate libraries) and all the
// monolithic pageserver machinery should live outside.
virtual_file::init(10);
let repo_dir = Utf8PathBuf::from(&"/home/bojan/tmp/repo_dir");
let conf = PageServerConf::dummy_conf(repo_dir);
let conf: &'static PageServerConf = Box::leak(Box::new(conf));
let timeline_id = TimelineId::generate();
let tenant_id = TenantId::generate();
let start_lsn = Lsn(0);
let ctx = RequestContext::new(TaskKind::LayerFlushTask, DownloadBehavior::Error);
let rt = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap();
fn test_img(s: &str) -> Bytes {
let mut buf = BytesMut::new();
buf.extend_from_slice(s.as_bytes());
buf.resize(64, 0);
buf.freeze()
}
// Make the InMemoryLayer that will be flushed
let layer = rt.block_on(async {
let l = InMemoryLayer::create(&conf, timeline_id, tenant_id, start_lsn).await.unwrap();
let mut lsn = Lsn(0x10);
let mut key = Key::from_hex("012222222233333333444444445500000000").unwrap();
let mut blknum = 0;
for _ in 0..100 {
key.field6 = blknum;
let val = Value::Image(test_img(&format!("{} at {}", blknum, lsn)));
l.put_value(key, lsn, &val, &ctx).await.unwrap();
lsn = Lsn(lsn.0 + 0x10);
blknum += 1;
}
l
});
rt.block_on(async {
layer.write_to_disk_bench(&ctx).await.unwrap();
});
let mut group = c.benchmark_group("g1");
group.bench_function("f1", |b| {
b.iter(|| {
// TODO
});
});
group.bench_function("f2", |b| {
b.iter(|| {
// TODO
});
});
group.finish();
}
criterion_group!(group_1, bench_writes);
criterion_main!(group_1);

View File

@@ -1,22 +1,21 @@
use anyhow::{bail, Result}; use utils::auth::{AuthError, Claims, Scope};
use utils::auth::{Claims, Scope};
use utils::id::TenantId; use utils::id::TenantId;
pub fn check_permission(claims: &Claims, tenant_id: Option<TenantId>) -> Result<()> { pub fn check_permission(claims: &Claims, tenant_id: Option<TenantId>) -> Result<(), AuthError> {
match (&claims.scope, tenant_id) { match (&claims.scope, tenant_id) {
(Scope::Tenant, None) => { (Scope::Tenant, None) => Err(AuthError(
bail!("Attempt to access management api with tenant scope. Permission denied") "Attempt to access management api with tenant scope. Permission denied".into(),
} )),
(Scope::Tenant, Some(tenant_id)) => { (Scope::Tenant, Some(tenant_id)) => {
if claims.tenant_id.unwrap() != tenant_id { if claims.tenant_id.unwrap() != tenant_id {
bail!("Tenant id mismatch. Permission denied") return Err(AuthError("Tenant id mismatch. Permission denied".into()));
} }
Ok(()) Ok(())
} }
(Scope::PageServerApi, None) => Ok(()), // access to management api for PageServerApi scope (Scope::PageServerApi, None) => Ok(()), // access to management api for PageServerApi scope
(Scope::PageServerApi, Some(_)) => Ok(()), // access to tenant api using PageServerApi scope (Scope::PageServerApi, Some(_)) => Ok(()), // access to tenant api using PageServerApi scope
(Scope::SafekeeperData, _) => { (Scope::SafekeeperData, _) => Err(AuthError(
bail!("SafekeeperData scope makes no sense for Pageserver") "SafekeeperData scope makes no sense for Pageserver".into(),
} )),
} }
} }

View File

@@ -34,8 +34,11 @@ use postgres_backend::AuthType;
use utils::logging::TracingErrorLayerEnablement; use utils::logging::TracingErrorLayerEnablement;
use utils::signals::ShutdownSignals; use utils::signals::ShutdownSignals;
use utils::{ use utils::{
auth::JwtAuth, logging, project_build_tag, project_git_version, sentry_init::init_sentry, auth::{JwtAuth, SwappableJwtAuth},
signals::Signal, tcp_listener, logging, project_build_tag, project_git_version,
sentry_init::init_sentry,
signals::Signal,
tcp_listener,
}; };
project_git_version!(GIT_VERSION); project_git_version!(GIT_VERSION);
@@ -321,13 +324,12 @@ fn start_pageserver(
let http_auth; let http_auth;
let pg_auth; let pg_auth;
if conf.http_auth_type == AuthType::NeonJWT || conf.pg_auth_type == AuthType::NeonJWT { if conf.http_auth_type == AuthType::NeonJWT || conf.pg_auth_type == AuthType::NeonJWT {
// unwrap is ok because check is performed when creating config, so path is set and file exists // unwrap is ok because check is performed when creating config, so path is set and exists
let key_path = conf.auth_validation_public_key_path.as_ref().unwrap(); let key_path = conf.auth_validation_public_key_path.as_ref().unwrap();
info!( info!("Loading public key(s) for verifying JWT tokens from {key_path:?}");
"Loading public key for verifying JWT tokens from {:#?}",
key_path let jwt_auth = JwtAuth::from_key_path(key_path)?;
); let auth: Arc<SwappableJwtAuth> = Arc::new(SwappableJwtAuth::new(jwt_auth));
let auth: Arc<JwtAuth> = Arc::new(JwtAuth::from_key_path(key_path)?);
http_auth = match &conf.http_auth_type { http_auth = match &conf.http_auth_type {
AuthType::Trust => None, AuthType::Trust => None,
@@ -410,7 +412,7 @@ fn start_pageserver(
// Scan the local 'tenants/' directory and start loading the tenants // Scan the local 'tenants/' directory and start loading the tenants
let deletion_queue_client = deletion_queue.new_client(); let deletion_queue_client = deletion_queue.new_client();
BACKGROUND_RUNTIME.block_on(mgr::init_tenant_mgr( let tenant_manager = BACKGROUND_RUNTIME.block_on(mgr::init_tenant_mgr(
conf, conf,
TenantSharedResources { TenantSharedResources {
broker_client: broker_client.clone(), broker_client: broker_client.clone(),
@@ -420,6 +422,7 @@ fn start_pageserver(
order, order,
shutdown_pageserver.clone(), shutdown_pageserver.clone(),
))?; ))?;
let tenant_manager = Arc::new(tenant_manager);
BACKGROUND_RUNTIME.spawn({ BACKGROUND_RUNTIME.spawn({
let init_done_rx = init_done_rx; let init_done_rx = init_done_rx;
@@ -548,6 +551,7 @@ fn start_pageserver(
let router_state = Arc::new( let router_state = Arc::new(
http::routes::State::new( http::routes::State::new(
conf, conf,
tenant_manager,
http_auth.clone(), http_auth.clone(),
remote_storage.clone(), remote_storage.clone(),
broker_client.clone(), broker_client.clone(),

View File

@@ -161,7 +161,7 @@ pub struct PageServerConf {
pub http_auth_type: AuthType, pub http_auth_type: AuthType,
/// authentication method for libpq connections from compute /// authentication method for libpq connections from compute
pub pg_auth_type: AuthType, pub pg_auth_type: AuthType,
/// Path to a file containing public key for verifying JWT tokens. /// Path to a file or directory containing public key(s) for verifying JWT tokens.
/// Used for both mgmt and compute auth, if enabled. /// Used for both mgmt and compute auth, if enabled.
pub auth_validation_public_key_path: Option<Utf8PathBuf>, pub auth_validation_public_key_path: Option<Utf8PathBuf>,
@@ -1314,12 +1314,6 @@ broker_endpoint = '{broker_endpoint}'
assert_eq!( assert_eq!(
parsed_remote_storage_config, parsed_remote_storage_config,
RemoteStorageConfig { RemoteStorageConfig {
max_concurrent_syncs: NonZeroUsize::new(
remote_storage::DEFAULT_REMOTE_STORAGE_MAX_CONCURRENT_SYNCS
)
.unwrap(),
max_sync_errors: NonZeroU32::new(remote_storage::DEFAULT_REMOTE_STORAGE_MAX_SYNC_ERRORS)
.unwrap(),
storage: RemoteStorageKind::LocalFs(local_storage_path.clone()), storage: RemoteStorageKind::LocalFs(local_storage_path.clone()),
}, },
"Remote storage config should correctly parse the local FS config and fill other storage defaults" "Remote storage config should correctly parse the local FS config and fill other storage defaults"
@@ -1380,8 +1374,6 @@ broker_endpoint = '{broker_endpoint}'
assert_eq!( assert_eq!(
parsed_remote_storage_config, parsed_remote_storage_config,
RemoteStorageConfig { RemoteStorageConfig {
max_concurrent_syncs,
max_sync_errors,
storage: RemoteStorageKind::AwsS3(S3Config { storage: RemoteStorageKind::AwsS3(S3Config {
bucket_name: bucket_name.clone(), bucket_name: bucket_name.clone(),
bucket_region: bucket_region.clone(), bucket_region: bucket_region.clone(),

View File

@@ -266,7 +266,7 @@ async fn calculate_synthetic_size_worker(
continue; continue;
} }
if let Ok(tenant) = mgr::get_tenant(tenant_id, true).await { if let Ok(tenant) = mgr::get_tenant(tenant_id, true) {
// TODO should we use concurrent_background_tasks_rate_limit() here, like the other background tasks? // TODO should we use concurrent_background_tasks_rate_limit() here, like the other background tasks?
// We can put in some prioritization for consumption metrics. // We can put in some prioritization for consumption metrics.
// Same for the loop that fetches computed metrics. // Same for the loop that fetches computed metrics.

View File

@@ -3,7 +3,6 @@ use anyhow::Context;
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use consumption_metrics::EventType; use consumption_metrics::EventType;
use futures::stream::StreamExt; use futures::stream::StreamExt;
use serde_with::serde_as;
use std::{sync::Arc, time::SystemTime}; use std::{sync::Arc, time::SystemTime};
use utils::{ use utils::{
id::{TenantId, TimelineId}, id::{TenantId, TimelineId},
@@ -42,13 +41,10 @@ pub(super) enum Name {
/// ///
/// This is a denormalization done at the MetricsKey const methods; these should not be constructed /// This is a denormalization done at the MetricsKey const methods; these should not be constructed
/// elsewhere. /// elsewhere.
#[serde_with::serde_as]
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, serde::Serialize, serde::Deserialize)] #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, serde::Serialize, serde::Deserialize)]
pub(crate) struct MetricsKey { pub(crate) struct MetricsKey {
#[serde_as(as = "serde_with::DisplayFromStr")]
pub(super) tenant_id: TenantId, pub(super) tenant_id: TenantId,
#[serde_as(as = "Option<serde_with::DisplayFromStr>")]
#[serde(skip_serializing_if = "Option::is_none")] #[serde(skip_serializing_if = "Option::is_none")]
pub(super) timeline_id: Option<TimelineId>, pub(super) timeline_id: Option<TimelineId>,
@@ -206,7 +202,6 @@ pub(super) async fn collect_all_metrics(
None None
} else { } else {
crate::tenant::mgr::get_tenant(id, true) crate::tenant::mgr::get_tenant(id, true)
.await
.ok() .ok()
.map(|tenant| (id, tenant)) .map(|tenant| (id, tenant))
} }

View File

@@ -1,5 +1,4 @@
use consumption_metrics::{Event, EventChunk, IdempotencyKey, CHUNK_SIZE}; use consumption_metrics::{Event, EventChunk, IdempotencyKey, CHUNK_SIZE};
use serde_with::serde_as;
use tokio_util::sync::CancellationToken; use tokio_util::sync::CancellationToken;
use tracing::Instrument; use tracing::Instrument;
@@ -7,12 +6,9 @@ use super::{metrics::Name, Cache, MetricsKey, RawMetric};
use utils::id::{TenantId, TimelineId}; use utils::id::{TenantId, TimelineId};
/// How the metrics from pageserver are identified. /// How the metrics from pageserver are identified.
#[serde_with::serde_as]
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, Copy, PartialEq)] #[derive(serde::Serialize, serde::Deserialize, Debug, Clone, Copy, PartialEq)]
struct Ids { struct Ids {
#[serde_as(as = "serde_with::DisplayFromStr")]
pub(super) tenant_id: TenantId, pub(super) tenant_id: TenantId,
#[serde_as(as = "Option<serde_with::DisplayFromStr>")]
#[serde(skip_serializing_if = "Option::is_none")] #[serde(skip_serializing_if = "Option::is_none")]
pub(super) timeline_id: Option<TimelineId>, pub(super) timeline_id: Option<TimelineId>,
} }

View File

@@ -10,6 +10,7 @@ use crate::control_plane_client::ControlPlaneGenerationsApi;
use crate::metrics; use crate::metrics;
use crate::tenant::remote_timeline_client::remote_layer_path; use crate::tenant::remote_timeline_client::remote_layer_path;
use crate::tenant::remote_timeline_client::remote_timeline_path; use crate::tenant::remote_timeline_client::remote_timeline_path;
use crate::virtual_file::MaybeFatalIo;
use crate::virtual_file::VirtualFile; use crate::virtual_file::VirtualFile;
use anyhow::Context; use anyhow::Context;
use camino::Utf8PathBuf; use camino::Utf8PathBuf;
@@ -17,7 +18,6 @@ use hex::FromHex;
use remote_storage::{GenericRemoteStorage, RemotePath}; use remote_storage::{GenericRemoteStorage, RemotePath};
use serde::Deserialize; use serde::Deserialize;
use serde::Serialize; use serde::Serialize;
use serde_with::serde_as;
use thiserror::Error; use thiserror::Error;
use tokio; use tokio;
use tokio_util::sync::CancellationToken; use tokio_util::sync::CancellationToken;
@@ -214,7 +214,6 @@ where
/// during recovery as startup. /// during recovery as startup.
const TEMP_SUFFIX: &str = "tmp"; const TEMP_SUFFIX: &str = "tmp";
#[serde_as]
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
struct DeletionList { struct DeletionList {
/// Serialization version, for future use /// Serialization version, for future use
@@ -243,7 +242,6 @@ struct DeletionList {
validated: bool, validated: bool,
} }
#[serde_as]
#[derive(Debug, Serialize, Deserialize)] #[derive(Debug, Serialize, Deserialize)]
struct DeletionHeader { struct DeletionHeader {
/// Serialization version, for future use /// Serialization version, for future use
@@ -271,7 +269,9 @@ impl DeletionHeader {
let temp_path = path_with_suffix_extension(&header_path, TEMP_SUFFIX); let temp_path = path_with_suffix_extension(&header_path, TEMP_SUFFIX);
VirtualFile::crashsafe_overwrite(&header_path, &temp_path, &header_bytes) VirtualFile::crashsafe_overwrite(&header_path, &temp_path, &header_bytes)
.await .await
.map_err(Into::into) .maybe_fatal_err("save deletion header")?;
Ok(())
} }
} }
@@ -360,6 +360,7 @@ impl DeletionList {
let bytes = serde_json::to_vec(self).expect("Failed to serialize deletion list"); let bytes = serde_json::to_vec(self).expect("Failed to serialize deletion list");
VirtualFile::crashsafe_overwrite(&path, &temp_path, &bytes) VirtualFile::crashsafe_overwrite(&path, &temp_path, &bytes)
.await .await
.maybe_fatal_err("save deletion list")
.map_err(Into::into) .map_err(Into::into)
} }
} }
@@ -892,14 +893,6 @@ mod test {
std::fs::create_dir_all(remote_fs_dir)?; std::fs::create_dir_all(remote_fs_dir)?;
let remote_fs_dir = harness.conf.workdir.join("remote_fs").canonicalize_utf8()?; let remote_fs_dir = harness.conf.workdir.join("remote_fs").canonicalize_utf8()?;
let storage_config = RemoteStorageConfig { let storage_config = RemoteStorageConfig {
max_concurrent_syncs: std::num::NonZeroUsize::new(
remote_storage::DEFAULT_REMOTE_STORAGE_MAX_CONCURRENT_SYNCS,
)
.unwrap(),
max_sync_errors: std::num::NonZeroU32::new(
remote_storage::DEFAULT_REMOTE_STORAGE_MAX_SYNC_ERRORS,
)
.unwrap(),
storage: RemoteStorageKind::LocalFs(remote_fs_dir.clone()), storage: RemoteStorageKind::LocalFs(remote_fs_dir.clone()),
}; };
let storage = GenericRemoteStorage::from_config(&storage_config).unwrap(); let storage = GenericRemoteStorage::from_config(&storage_config).unwrap();

View File

@@ -55,21 +55,24 @@ impl Deleter {
/// Wrap the remote `delete_objects` with a failpoint /// Wrap the remote `delete_objects` with a failpoint
async fn remote_delete(&self) -> Result<(), anyhow::Error> { async fn remote_delete(&self) -> Result<(), anyhow::Error> {
fail::fail_point!("deletion-queue-before-execute", |_| {
info!("Skipping execution, failpoint set");
metrics::DELETION_QUEUE
.remote_errors
.with_label_values(&["failpoint"])
.inc();
Err(anyhow::anyhow!("failpoint hit"))
});
// A backoff::retry is used here for two reasons: // A backoff::retry is used here for two reasons:
// - To provide a backoff rather than busy-polling the API on errors // - To provide a backoff rather than busy-polling the API on errors
// - To absorb transient 429/503 conditions without hitting our error // - To absorb transient 429/503 conditions without hitting our error
// logging path for issues deleting objects. // logging path for issues deleting objects.
backoff::retry( backoff::retry(
|| async { self.remote_storage.delete_objects(&self.accumulator).await }, || async {
fail::fail_point!("deletion-queue-before-execute", |_| {
info!("Skipping execution, failpoint set");
metrics::DELETION_QUEUE
.remote_errors
.with_label_values(&["failpoint"])
.inc();
Err(anyhow::anyhow!("failpoint: deletion-queue-before-execute"))
});
self.remote_storage.delete_objects(&self.accumulator).await
},
|_| false, |_| false,
3, 3,
10, 10,

View File

@@ -34,6 +34,8 @@ use crate::deletion_queue::TEMP_SUFFIX;
use crate::metrics; use crate::metrics;
use crate::tenant::remote_timeline_client::remote_layer_path; use crate::tenant::remote_timeline_client::remote_layer_path;
use crate::tenant::storage_layer::LayerFileName; use crate::tenant::storage_layer::LayerFileName;
use crate::virtual_file::on_fatal_io_error;
use crate::virtual_file::MaybeFatalIo;
// The number of keys in a DeletionList before we will proactively persist it // The number of keys in a DeletionList before we will proactively persist it
// (without reaching a flush deadline). This aims to deliver objects of the order // (without reaching a flush deadline). This aims to deliver objects of the order
@@ -195,7 +197,7 @@ impl ListWriter {
debug!("Deletion header {header_path} not found, first start?"); debug!("Deletion header {header_path} not found, first start?");
Ok(None) Ok(None)
} else { } else {
Err(anyhow::anyhow!(e)) on_fatal_io_error(&e, "reading deletion header");
} }
} }
} }
@@ -216,16 +218,9 @@ impl ListWriter {
self.pending.sequence = validated_sequence + 1; self.pending.sequence = validated_sequence + 1;
let deletion_directory = self.conf.deletion_prefix(); let deletion_directory = self.conf.deletion_prefix();
let mut dir = match tokio::fs::read_dir(&deletion_directory).await { let mut dir = tokio::fs::read_dir(&deletion_directory)
Ok(d) => d, .await
Err(e) => { .fatal_err("read deletion directory");
warn!("Failed to open deletion list directory {deletion_directory}: {e:#}");
// Give up: if we can't read the deletion list directory, we probably can't
// write lists into it later, so the queue won't work.
return Err(e.into());
}
};
let list_name_pattern = let list_name_pattern =
Regex::new("(?<sequence>[a-zA-Z0-9]{16})-(?<version>[a-zA-Z0-9]{2}).list").unwrap(); Regex::new("(?<sequence>[a-zA-Z0-9]{16})-(?<version>[a-zA-Z0-9]{2}).list").unwrap();
@@ -233,7 +228,7 @@ impl ListWriter {
let temp_extension = format!(".{TEMP_SUFFIX}"); let temp_extension = format!(".{TEMP_SUFFIX}");
let header_path = self.conf.deletion_header_path(); let header_path = self.conf.deletion_header_path();
let mut seqs: Vec<u64> = Vec::new(); let mut seqs: Vec<u64> = Vec::new();
while let Some(dentry) = dir.next_entry().await? { while let Some(dentry) = dir.next_entry().await.fatal_err("read deletion dentry") {
let file_name = dentry.file_name(); let file_name = dentry.file_name();
let dentry_str = file_name.to_string_lossy(); let dentry_str = file_name.to_string_lossy();
@@ -246,11 +241,9 @@ impl ListWriter {
info!("Cleaning up temporary file {dentry_str}"); info!("Cleaning up temporary file {dentry_str}");
let absolute_path = let absolute_path =
deletion_directory.join(dentry.file_name().to_str().expect("non-Unicode path")); deletion_directory.join(dentry.file_name().to_str().expect("non-Unicode path"));
if let Err(e) = tokio::fs::remove_file(&absolute_path).await { tokio::fs::remove_file(&absolute_path)
// Non-fatal error: we will just leave the file behind but not .await
// try and load it. .fatal_err("delete temp file");
warn!("Failed to clean up temporary file {absolute_path}: {e:#}");
}
continue; continue;
} }
@@ -290,7 +283,9 @@ impl ListWriter {
for s in seqs { for s in seqs {
let list_path = self.conf.deletion_list_path(s); let list_path = self.conf.deletion_list_path(s);
let list_bytes = tokio::fs::read(&list_path).await?; let list_bytes = tokio::fs::read(&list_path)
.await
.fatal_err("read deletion list");
let mut deletion_list = match serde_json::from_slice::<DeletionList>(&list_bytes) { let mut deletion_list = match serde_json::from_slice::<DeletionList>(&list_bytes) {
Ok(l) => l, Ok(l) => l,

View File

@@ -28,6 +28,7 @@ use crate::config::PageServerConf;
use crate::control_plane_client::ControlPlaneGenerationsApi; use crate::control_plane_client::ControlPlaneGenerationsApi;
use crate::control_plane_client::RetryForeverError; use crate::control_plane_client::RetryForeverError;
use crate::metrics; use crate::metrics;
use crate::virtual_file::MaybeFatalIo;
use super::deleter::DeleterMessage; use super::deleter::DeleterMessage;
use super::DeletionHeader; use super::DeletionHeader;
@@ -287,16 +288,9 @@ where
async fn cleanup_lists(&mut self, list_paths: Vec<Utf8PathBuf>) { async fn cleanup_lists(&mut self, list_paths: Vec<Utf8PathBuf>) {
for list_path in list_paths { for list_path in list_paths {
debug!("Removing deletion list {list_path}"); debug!("Removing deletion list {list_path}");
tokio::fs::remove_file(&list_path)
if let Err(e) = tokio::fs::remove_file(&list_path).await { .await
// Unexpected: we should have permissions and nothing else should .fatal_err("remove deletion list");
// be touching these files. We will leave the file behind. Subsequent
// pageservers will try and load it again: hopefully whatever storage
// issue (probably permissions) has been fixed by then.
tracing::error!("Failed to delete {list_path}: {e:#}");
metrics::DELETION_QUEUE.unexpected_errors.inc();
break;
}
} }
} }

View File

@@ -403,7 +403,7 @@ pub async fn disk_usage_eviction_task_iteration_impl<U: Usage>(
         return (evicted_bytes, evictions_failed);
     };
-    let results = timeline.evict_layers(&batch, &cancel).await;
+    let results = timeline.evict_layers(&batch).await;
     match results {
         Ok(results) => {
@@ -545,7 +545,7 @@ async fn collect_eviction_candidates(
     if cancel.is_cancelled() {
         return Ok(EvictionCandidates::Cancelled);
     }
-    let tenant = match tenant::mgr::get_tenant(*tenant_id, true).await {
+    let tenant = match tenant::mgr::get_tenant(*tenant_id, true) {
         Ok(tenant) => tenant,
         Err(e) => {
             // this can happen if tenant has lifecycle transition after we fetched it
@@ -554,6 +554,11 @@ async fn collect_eviction_candidates(
         }
     };
+    if tenant.cancel.is_cancelled() {
+        info!(%tenant_id, "Skipping tenant for eviction, it is shutting down");
+        continue;
+    }
     // collect layers from all timelines in this tenant
     //
     // If one of the timelines becomes `!is_active()` during the iteration,


@@ -52,6 +52,31 @@ paths:
         schema:
           type: object
+  /v1/reload_auth_validation_keys:
+    post:
+      description: Reloads the JWT public keys from their pre-configured location on disk.
+      responses:
+        "200":
+          description: The reload completed successfully.
+        "401":
+          description: Unauthorized Error
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/UnauthorizedError"
+        "403":
+          description: Forbidden Error
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/ForbiddenError"
+        "500":
+          description: Generic operation error (also hits if no keys were found)
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/Error"
   /v1/tenant/{tenant_id}:
     parameters:
       - name: tenant_id
@@ -327,7 +352,8 @@ paths:
           in: query
           required: true
           schema:
-            type: integer
+            type: string
+            format: hex
           description: A LSN to get the timestamp
         responses:
           "200":


@@ -17,10 +17,10 @@ use pageserver_api::models::{
     TenantLoadRequest, TenantLocationConfigRequest,
 };
 use remote_storage::GenericRemoteStorage;
-use serde_with::{serde_as, DisplayFromStr};
 use tenant_size_model::{SizeResult, StorageModel};
 use tokio_util::sync::CancellationToken;
 use tracing::*;
+use utils::auth::JwtAuth;
 use utils::http::endpoint::request_span;
 use utils::http::json::json_request_or_empty_body;
 use utils::http::request::{get_request_param, must_get_query_param, parse_query_param};
@@ -36,7 +36,8 @@ use crate::pgdatadir_mapping::LsnForTimestamp;
 use crate::task_mgr::TaskKind;
 use crate::tenant::config::{LocationConf, TenantConfOpt};
 use crate::tenant::mgr::{
-    GetTenantError, SetNewTenantConfigError, TenantMapInsertError, TenantStateError,
+    GetTenantError, SetNewTenantConfigError, TenantManager, TenantMapError, TenantMapInsertError,
+    TenantSlotError, TenantSlotUpsertError, TenantStateError,
 };
 use crate::tenant::size::ModelInputs;
 use crate::tenant::storage_layer::LayerAccessStatsReset;
@@ -45,7 +46,7 @@ use crate::tenant::{LogicalSizeCalculationCause, PageReconstructError, TenantSha
 use crate::{config::PageServerConf, tenant::mgr};
 use crate::{disk_usage_eviction_task, tenant};
 use utils::{
-    auth::JwtAuth,
+    auth::SwappableJwtAuth,
     generation::Generation,
     http::{
         endpoint::{self, attach_openapi_ui, auth_middleware, check_permission_with},
@@ -63,7 +64,8 @@ use super::models::ConfigureFailpointsRequest;
 pub struct State {
     conf: &'static PageServerConf,
-    auth: Option<Arc<JwtAuth>>,
+    tenant_manager: Arc<TenantManager>,
+    auth: Option<Arc<SwappableJwtAuth>>,
     allowlist_routes: Vec<Uri>,
     remote_storage: Option<GenericRemoteStorage>,
     broker_client: storage_broker::BrokerClientChannel,
@@ -74,7 +76,8 @@ pub struct State {
 impl State {
     pub fn new(
         conf: &'static PageServerConf,
-        auth: Option<Arc<JwtAuth>>,
+        tenant_manager: Arc<TenantManager>,
+        auth: Option<Arc<SwappableJwtAuth>>,
         remote_storage: Option<GenericRemoteStorage>,
         broker_client: storage_broker::BrokerClientChannel,
         disk_usage_eviction_state: Arc<disk_usage_eviction_task::State>,
@@ -86,6 +89,7 @@ impl State {
             .collect::<Vec<_>>();
         Ok(Self {
             conf,
+            tenant_manager,
             auth,
             allowlist_routes,
             remote_storage,
@@ -147,28 +151,59 @@ impl From<PageReconstructError> for ApiError {
 impl From<TenantMapInsertError> for ApiError {
     fn from(tmie: TenantMapInsertError) -> ApiError {
         match tmie {
-            TenantMapInsertError::StillInitializing | TenantMapInsertError::ShuttingDown => {
-                ApiError::ResourceUnavailable(format!("{tmie}").into())
-            }
-            TenantMapInsertError::TenantAlreadyExists(id, state) => {
-                ApiError::Conflict(format!("tenant {id} already exists, state: {state:?}"))
-            }
-            TenantMapInsertError::TenantExistsSecondary(id) => {
-                ApiError::Conflict(format!("tenant {id} already exists as secondary"))
-            }
+            TenantMapInsertError::SlotError(e) => e.into(),
+            TenantMapInsertError::SlotUpsertError(e) => e.into(),
             TenantMapInsertError::Other(e) => ApiError::InternalServerError(e),
         }
     }
 }
+impl From<TenantSlotError> for ApiError {
+    fn from(e: TenantSlotError) -> ApiError {
+        use TenantSlotError::*;
+        match e {
+            NotFound(tenant_id) => {
+                ApiError::NotFound(anyhow::anyhow!("NotFound: tenant {tenant_id}").into())
+            }
+            e @ (AlreadyExists(_, _) | Conflict(_)) => ApiError::Conflict(format!("{e}")),
+            InProgress => {
+                ApiError::ResourceUnavailable("Tenant is being modified concurrently".into())
+            }
+            MapState(e) => e.into(),
+        }
+    }
+}
+impl From<TenantSlotUpsertError> for ApiError {
+    fn from(e: TenantSlotUpsertError) -> ApiError {
+        use TenantSlotUpsertError::*;
+        match e {
+            InternalError(e) => ApiError::InternalServerError(anyhow::anyhow!("{e}")),
+            MapState(e) => e.into(),
+        }
+    }
+}
+impl From<TenantMapError> for ApiError {
+    fn from(e: TenantMapError) -> ApiError {
+        use TenantMapError::*;
+        match e {
+            StillInitializing | ShuttingDown => {
+                ApiError::ResourceUnavailable(format!("{e}").into())
+            }
+        }
+    }
+}
 impl From<TenantStateError> for ApiError {
     fn from(tse: TenantStateError) -> ApiError {
         match tse {
-            TenantStateError::NotFound(tid) => ApiError::NotFound(anyhow!("tenant {}", tid).into()),
             TenantStateError::IsStopping(_) => {
                 ApiError::ResourceUnavailable("Tenant is stopping".into())
             }
-            _ => ApiError::InternalServerError(anyhow::Error::new(tse)),
+            TenantStateError::SlotError(e) => e.into(),
+            TenantStateError::SlotUpsertError(e) => e.into(),
+            TenantStateError::Other(e) => ApiError::InternalServerError(anyhow!(e)),
         }
     }
 }
@@ -189,6 +224,7 @@ impl From<GetTenantError> for ApiError {
                 // (We can produce this variant only in `mgr::get_tenant(..., active=true)` calls).
                 ApiError::ResourceUnavailable("Tenant not yet active".into())
             }
+            GetTenantError::MapState(e) => ApiError::ResourceUnavailable(format!("{e}").into()),
         }
     }
 }
@@ -243,6 +279,9 @@ impl From<crate::tenant::delete::DeleteTenantError> for ApiError {
             Get(g) => ApiError::from(g),
             e @ AlreadyInProgress => ApiError::Conflict(e.to_string()),
             Timeline(t) => ApiError::from(t),
+            NotAttached => ApiError::NotFound(anyhow::anyhow!("Tenant is not attached").into()),
+            SlotError(e) => e.into(),
+            SlotUpsertError(e) => e.into(),
             Other(o) => ApiError::InternalServerError(o),
             e @ InvalidState(_) => ApiError::PreconditionFailed(e.to_string().into_boxed_str()),
         }
@@ -264,11 +303,7 @@ async fn build_timeline_info(
         // we're executing this function, we will outlive the timeline on-disk state.
         info.current_logical_size_non_incremental = Some(
             timeline
-                .get_current_logical_size_non_incremental(
-                    info.last_record_lsn,
-                    CancellationToken::new(),
-                    ctx,
-                )
+                .get_current_logical_size_non_incremental(info.last_record_lsn, ctx)
                 .await?,
         );
     }
@@ -354,6 +389,32 @@ async fn status_handler(
     json_response(StatusCode::OK, StatusResponse { id: config.id })
 }
+async fn reload_auth_validation_keys_handler(
+    request: Request<Body>,
+    _cancel: CancellationToken,
+) -> Result<Response<Body>, ApiError> {
+    check_permission(&request, None)?;
+    let config = get_config(&request);
+    let state = get_state(&request);
+    let Some(shared_auth) = &state.auth else {
+        return json_response(StatusCode::BAD_REQUEST, ());
+    };
+    // unwrap is ok because check is performed when creating config, so path is set and exists
+    let key_path = config.auth_validation_public_key_path.as_ref().unwrap();
+    info!("Reloading public key(s) for verifying JWT tokens from {key_path:?}");
+    match JwtAuth::from_key_path(key_path) {
+        Ok(new_auth) => {
+            shared_auth.swap(new_auth);
+            json_response(StatusCode::OK, ())
+        }
+        Err(e) => {
+            warn!("Error reloading public keys from {key_path:?}: {e:}");
+            json_response(StatusCode::INTERNAL_SERVER_ERROR, ())
+        }
+    }
+}
 async fn timeline_create_handler(
     mut request: Request<Body>,
     _cancel: CancellationToken,
@@ -369,7 +430,7 @@ async fn timeline_create_handler(
     let state = get_state(&request);
     async {
-        let tenant = mgr::get_tenant(tenant_id, true).await?;
+        let tenant = mgr::get_tenant(tenant_id, true)?;
         match tenant.create_timeline(
             new_timeline_id,
             request_data.ancestor_timeline_id.map(TimelineId::from),
@@ -397,6 +458,9 @@ async fn timeline_create_handler(
             Err(e @ tenant::CreateTimelineError::AncestorNotActive) => {
                 json_response(StatusCode::SERVICE_UNAVAILABLE, HttpErrorBody::from_msg(e.to_string()))
             }
+            Err(tenant::CreateTimelineError::ShuttingDown) => {
+                json_response(StatusCode::SERVICE_UNAVAILABLE,HttpErrorBody::from_msg("tenant shutting down".to_string()))
+            }
             Err(tenant::CreateTimelineError::Other(err)) => Err(ApiError::InternalServerError(err)),
         }
     }
@@ -416,7 +480,7 @@ async fn timeline_list_handler(
     let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Download);
     let response_data = async {
-        let tenant = mgr::get_tenant(tenant_id, true).await?;
+        let tenant = mgr::get_tenant(tenant_id, true)?;
         let timelines = tenant.list_timelines();
         let mut response_data = Vec::with_capacity(timelines.len());
@@ -455,7 +519,7 @@ async fn timeline_detail_handler(
     let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Download);
     let timeline_info = async {
-        let tenant = mgr::get_tenant(tenant_id, true).await?;
+        let tenant = mgr::get_tenant(tenant_id, true)?;
         let timeline = tenant
             .get_timeline(timeline_id, false)
@@ -499,10 +563,8 @@ async fn get_lsn_by_timestamp_handler(
     let result = timeline.find_lsn_for_timestamp(timestamp_pg, &ctx).await?;
     if version.unwrap_or(0) > 1 {
-        #[serde_as]
         #[derive(serde::Serialize)]
         struct Result {
-            #[serde_as(as = "DisplayFromStr")]
             lsn: Lsn,
             kind: &'static str,
         }
@@ -713,7 +775,7 @@ async fn tenant_status(
     check_permission(&request, Some(tenant_id))?;
     let tenant_info = async {
-        let tenant = mgr::get_tenant(tenant_id, false).await?;
+        let tenant = mgr::get_tenant(tenant_id, false)?;
         // Calculate total physical size of all timelines
         let mut current_physical_size = 0;
@@ -776,7 +838,7 @@ async fn tenant_size_handler(
     let headers = request.headers();
     let ctx = RequestContext::new(TaskKind::MgmtRequest, DownloadBehavior::Download);
-    let tenant = mgr::get_tenant(tenant_id, true).await?;
+    let tenant = mgr::get_tenant(tenant_id, true)?;
     // this can be long operation
     let inputs = tenant
@@ -811,10 +873,8 @@ async fn tenant_size_handler(
 }
 /// The type resides in the pageserver not to expose `ModelInputs`.
-#[serde_with::serde_as]
 #[derive(serde::Serialize)]
 struct TenantHistorySize {
-    #[serde_as(as = "serde_with::DisplayFromStr")]
     id: TenantId,
     /// Size is a mixture of WAL and logical size, so the unit is bytes.
     ///
@@ -1035,7 +1095,7 @@ async fn get_tenant_config_handler(
     let tenant_id: TenantId = parse_request_param(&request, "tenant_id")?;
     check_permission(&request, Some(tenant_id))?;
-    let tenant = mgr::get_tenant(tenant_id, false).await?;
+    let tenant = mgr::get_tenant(tenant_id, false)?;
     let response = HashMap::from([
         (
@@ -1094,7 +1154,7 @@ async fn put_tenant_location_config_handler(
         .await
     {
         match e {
-            TenantStateError::NotFound(_) => {
+            TenantStateError::SlotError(TenantSlotError::NotFound(_)) => {
                 // This API is idempotent: a NotFound on a detach is fine.
             }
            _ => return Err(e.into()),
@@ -1106,20 +1166,14 @@ async fn put_tenant_location_config_handler(
     let location_conf =
         LocationConf::try_from(&request_data.config).map_err(ApiError::BadRequest)?;
-    mgr::upsert_location(
-        state.conf,
-        tenant_id,
-        location_conf,
-        state.broker_client.clone(),
-        state.remote_storage.clone(),
-        state.deletion_queue_client.clone(),
-        &ctx,
-    )
-    .await
-    // TODO: badrequest assumes the caller was asking for something unreasonable, but in
-    // principle we might have hit something like concurrent API calls to the same tenant,
-    // which is not a 400 but a 409.
-    .map_err(ApiError::BadRequest)?;
+    state
+        .tenant_manager
+        .upsert_location(tenant_id, location_conf, &ctx)
+        .await
+        // TODO: badrequest assumes the caller was asking for something unreasonable, but in
+        // principle we might have hit something like concurrent API calls to the same tenant,
+        // which is not a 400 but a 409.
+        .map_err(ApiError::BadRequest)?;
     json_response(StatusCode::OK, ())
 }
@@ -1132,7 +1186,6 @@ async fn handle_tenant_break(
     let tenant_id: TenantId = parse_request_param(&r, "tenant_id")?;
     let tenant = crate::tenant::mgr::get_tenant(tenant_id, true)
-        .await
         .map_err(|_| ApiError::Conflict(String::from("no active tenant found")))?;
     tenant.set_broken("broken from test".to_owned()).await;
@@ -1437,7 +1490,7 @@ async fn active_timeline_of_active_tenant(
     tenant_id: TenantId,
     timeline_id: TimelineId,
 ) -> Result<Arc<Timeline>, ApiError> {
-    let tenant = mgr::get_tenant(tenant_id, true).await?;
+    let tenant = mgr::get_tenant(tenant_id, true)?;
     tenant
         .get_timeline(timeline_id, true)
         .map_err(|e| ApiError::NotFound(e.into()))
@@ -1614,6 +1667,8 @@ where
     );
     match handle.await {
+        // TODO: never actually return Err from here, always Ok(...) so that we can log
+        // spanned errors. Call api_error_handler instead and return appropriate Body.
         Ok(result) => result,
         Err(e) => {
             // The handler task panicked. We have a global panic handler that logs the
@@ -1662,7 +1717,7 @@ where
 pub fn make_router(
     state: Arc<State>,
     launch_ts: &'static LaunchTimestamp,
-    auth: Option<Arc<JwtAuth>>,
+    auth: Option<Arc<SwappableJwtAuth>>,
 ) -> anyhow::Result<RouterBuilder<hyper::Body, ApiError>> {
     let spec = include_bytes!("openapi_spec.yml");
     let mut router = attach_openapi_ui(endpoint::make_router(), spec, "/swagger.yml", "/v1/doc");
@@ -1691,6 +1746,9 @@ pub fn make_router(
         .put("/v1/failpoints", |r| {
             testing_api_handler("manage failpoints", r, failpoints_handler)
         })
+        .post("/v1/reload_auth_validation_keys", |r| {
+            api_handler(r, reload_auth_validation_keys_handler)
+        })
         .get("/v1/tenant", |r| api_handler(r, tenant_list_handler))
         .post("/v1/tenant", |r| api_handler(r, tenant_create_handler))
         .get("/v1/tenant/:tenant_id", |r| api_handler(r, tenant_status))


@@ -1,3 +1,5 @@
+#![deny(clippy::undocumented_unsafe_blocks)]
 mod auth;
 pub mod basebackup;
 pub mod config;
@@ -61,14 +63,6 @@ pub async fn shutdown_pageserver(deletion_queue: Option<DeletionQueue>, exit_cod
     )
     .await;
-    // Shut down any page service tasks.
-    timed(
-        task_mgr::shutdown_tasks(Some(TaskKind::PageRequestHandler), None, None),
-        "shutdown PageRequestHandlers",
-        Duration::from_secs(1),
-    )
-    .await;
     // Shut down all the tenants. This flushes everything to disk and kills
     // the checkpoint and GC tasks.
     timed(
@@ -78,6 +72,15 @@ pub async fn shutdown_pageserver(deletion_queue: Option<DeletionQueue>, exit_cod
     )
     .await;
+    // Shut down any page service tasks: any in-progress work for particular timelines or tenants
+    // should already have been cancelled via mgr::shutdown_all_tenants
+    timed(
+        task_mgr::shutdown_tasks(Some(TaskKind::PageRequestHandler), None, None),
+        "shutdown PageRequestHandlers",
+        Duration::from_secs(1),
+    )
+    .await;
     // Best effort to persist any outstanding deletions, to avoid leaking objects
     if let Some(mut deletion_queue) = deletion_queue {
         deletion_queue.shutdown(Duration::from_secs(5)).await;
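The reordering above shuts down tenants before tearing down page-service tasks, each phase wrapped in the `timed(...)` helper. Below is a rough, assumed sketch of what a warn-if-slow wrapper of that shape can look like; the real helper lives elsewhere in the pageserver and its exact signature and behaviour may differ.

```rust
use std::time::Duration;
use tokio::time::sleep;

// Sketch: await `fut`, warning if it takes longer than `warn_after`.
// Unlike a hard timeout, the future is still driven to completion.
async fn timed<F: std::future::Future>(fut: F, name: &str, warn_after: Duration) -> F::Output {
    tokio::pin!(fut);
    let first_attempt = tokio::time::timeout(warn_after, &mut fut).await;
    match first_attempt {
        Ok(out) => out,
        Err(_) => {
            eprintln!("still waiting for {name} after {warn_after:?}");
            // Keep waiting; the phase must finish, we just want visibility.
            fut.await
        }
    }
}

#[tokio::main]
async fn main() {
    timed(
        sleep(Duration::from_millis(150)),
        "demo shutdown phase",
        Duration::from_millis(100),
    )
    .await;
}
```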


@@ -962,6 +962,32 @@ static REMOTE_TIMELINE_CLIENT_BYTES_FINISHED_COUNTER: Lazy<IntCounterVec> = Lazy
     .expect("failed to define a metric")
 });
+pub(crate) struct TenantManagerMetrics {
+    pub(crate) tenant_slots: UIntGauge,
+    pub(crate) tenant_slot_writes: IntCounter,
+    pub(crate) unexpected_errors: IntCounter,
+}
+pub(crate) static TENANT_MANAGER: Lazy<TenantManagerMetrics> = Lazy::new(|| {
+    TenantManagerMetrics {
+        tenant_slots: register_uint_gauge!(
+            "pageserver_tenant_manager_slots",
+            "How many slots currently exist, including all attached, secondary and in-progress operations",
+        )
+        .expect("failed to define a metric"),
+        tenant_slot_writes: register_int_counter!(
+            "pageserver_tenant_manager_slot_writes",
+            "Writes to a tenant slot, including all of create/attach/detach/delete"
+        )
+        .expect("failed to define a metric"),
+        unexpected_errors: register_int_counter!(
+            "pageserver_tenant_manager_unexpected_errors_total",
+            "Number of unexpected conditions encountered: nonzero value indicates a non-fatal bug."
+        )
+        .expect("failed to define a metric"),
+    }
+});
 pub(crate) struct DeletionQueueMetrics {
     pub(crate) keys_submitted: IntCounter,
     pub(crate) keys_dropped: IntCounter,
@@ -1199,15 +1225,6 @@ pub(crate) static WAL_REDO_TIME: Lazy<Histogram> = Lazy::new(|| {
     .expect("failed to define a metric")
 });
-pub(crate) static WAL_REDO_WAIT_TIME: Lazy<Histogram> = Lazy::new(|| {
-    register_histogram!(
-        "pageserver_wal_redo_wait_seconds",
-        "Time spent waiting for access to the Postgres WAL redo process",
-        redo_histogram_time_buckets!(),
-    )
-    .expect("failed to define a metric")
-});
 pub(crate) static WAL_REDO_RECORDS_HISTOGRAM: Lazy<Histogram> = Lazy::new(|| {
     register_histogram!(
         "pageserver_wal_redo_records_histogram",
@@ -1884,6 +1901,9 @@ pub fn preinitialize_metrics() {
     // Deletion queue stats
     Lazy::force(&DELETION_QUEUE);
+    // Tenant manager stats
+    Lazy::force(&TENANT_MANAGER);
     // countervecs
     [&BACKGROUND_LOOP_PERIOD_OVERRUN_COUNT]
         .into_iter()
@@ -1899,7 +1919,6 @@ pub fn preinitialize_metrics() {
         &READ_NUM_FS_LAYERS,
         &WAIT_LSN_TIME,
         &WAL_REDO_TIME,
-        &WAL_REDO_WAIT_TIME,
         &WAL_REDO_RECORDS_HISTOGRAM,
         &WAL_REDO_BYTES_HISTOGRAM,
     ]
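The new `TenantManagerMetrics` block adds a slots gauge, a slot-write counter, and an unexpected-errors counter. The snippet below sketches the same register-then-update pattern using the upstream `prometheus` crate (the pageserver's own `metrics` wrappers such as `register_uint_gauge!` are not reproduced here); the metric names are reused purely as examples.

```rust
use prometheus::{register_int_counter, register_int_gauge, Encoder, TextEncoder};

fn main() {
    // Register once at startup (the pageserver does this via lazy statics).
    let slots = register_int_gauge!(
        "pageserver_tenant_manager_slots",
        "How many slots currently exist"
    )
    .expect("failed to define a metric");
    let slot_writes = register_int_counter!(
        "pageserver_tenant_manager_slot_writes",
        "Writes to a tenant slot"
    )
    .expect("failed to define a metric");

    // Update at the call sites: every create/attach/detach/delete bumps the
    // write counter and keeps the gauge in sync with the slot map size.
    slot_writes.inc();
    slots.set(1);

    // What a scrape of these metrics would look like.
    let mut buf = Vec::new();
    TextEncoder::new()
        .encode(&prometheus::gather(), &mut buf)
        .unwrap();
    print!("{}", String::from_utf8(buf).unwrap());
}
```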


@@ -40,7 +40,7 @@ use tracing::field;
 use tracing::*;
 use utils::id::ConnectionId;
 use utils::{
-    auth::{Claims, JwtAuth, Scope},
+    auth::{Claims, Scope, SwappableJwtAuth},
     id::{TenantId, TimelineId},
     lsn::Lsn,
     simple_rcu::RcuReadGuard,
@@ -55,16 +55,20 @@ use crate::metrics;
 use crate::metrics::LIVE_CONNECTIONS_COUNT;
 use crate::task_mgr;
 use crate::task_mgr::TaskKind;
-use crate::tenant;
 use crate::tenant::debug_assert_current_span_has_tenant_and_timeline_id;
 use crate::tenant::mgr;
-use crate::tenant::mgr::GetTenantError;
-use crate::tenant::{Tenant, Timeline};
+use crate::tenant::mgr::get_active_tenant_with_timeout;
+use crate::tenant::mgr::GetActiveTenantError;
+use crate::tenant::Timeline;
 use crate::trace::Tracer;
 use postgres_ffi::pg_constants::DEFAULTTABLESPACE_OID;
 use postgres_ffi::BLCKSZ;
+// How long we may block waiting for a [`TenantSlot::InProgress`]` and/or a [`Tenant`] which
+// is not yet in state [`TenantState::Active`].
+const ACTIVE_TENANT_TIMEOUT: Duration = Duration::from_millis(5000);
 /// Read the end of a tar archive.
 ///
 /// A tar archive normally ends with two consecutive blocks of zeros, 512 bytes each.
@@ -118,7 +122,7 @@ async fn read_tar_eof(mut reader: (impl AsyncRead + Unpin)) -> anyhow::Result<()
 pub async fn libpq_listener_main(
     conf: &'static PageServerConf,
     broker_client: storage_broker::BrokerClientChannel,
-    auth: Option<Arc<JwtAuth>>,
+    auth: Option<Arc<SwappableJwtAuth>>,
     listener: TcpListener,
     auth_type: AuthType,
     listener_ctx: RequestContext,
@@ -186,7 +190,7 @@ pub async fn libpq_listener_main(
 async fn page_service_conn_main(
     conf: &'static PageServerConf,
     broker_client: storage_broker::BrokerClientChannel,
-    auth: Option<Arc<JwtAuth>>,
+    auth: Option<Arc<SwappableJwtAuth>>,
     socket: tokio::net::TcpStream,
     auth_type: AuthType,
     connection_ctx: RequestContext,
@@ -214,22 +218,34 @@ async fn page_service_conn_main(
     // no write timeout is used, because the kernel is assumed to error writes after some time.
     let mut socket = tokio_io_timeout::TimeoutReader::new(socket);
-    // timeout should be lower, but trying out multiple days for
-    // <https://github.com/neondatabase/neon/issues/4205>
-    socket.set_timeout(Some(std::time::Duration::from_secs(60 * 60 * 24 * 3)));
+    let default_timeout_ms = 10 * 60 * 1000; // 10 minutes by default
+    let socket_timeout_ms = (|| {
+        fail::fail_point!("simulated-bad-compute-connection", |avg_timeout_ms| {
+            // Exponential distribution for simulating
+            // poor network conditions, expect about avg_timeout_ms to be around 15
+            // in tests
+            if let Some(avg_timeout_ms) = avg_timeout_ms {
+                let avg = avg_timeout_ms.parse::<i64>().unwrap() as f32;
+                let u = rand::random::<f32>();
+                ((1.0 - u).ln() / (-avg)) as u64
+            } else {
+                default_timeout_ms
+            }
+        });
+        default_timeout_ms
+    })();
+    // A timeout here does not mean the client died, it can happen if it's just idle for
+    // a while: we will tear down this PageServerHandler and instantiate a new one if/when
+    // they reconnect.
+    socket.set_timeout(Some(std::time::Duration::from_millis(socket_timeout_ms)));
     let socket = std::pin::pin!(socket);
     // XXX: pgbackend.run() should take the connection_ctx,
     // and create a child per-query context when it invokes process_query.
     // But it's in a shared crate, so, we store connection_ctx inside PageServerHandler
     // and create the per-query context in process_query ourselves.
-    let mut conn_handler = PageServerHandler::new(
-        conf,
-        broker_client,
-        auth,
-        connection_ctx,
-        task_mgr::shutdown_token(),
-    );
+    let mut conn_handler = PageServerHandler::new(conf, broker_client, auth, connection_ctx);
     let pgbackend = PostgresBackend::new_from_io(socket, peer_addr, auth_type, None)?;
     match pgbackend
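The failpoint above samples a per-connection socket timeout so tests can simulate flaky compute connections. For reference, inverse-transform sampling of an exponential distribution with mean `m` milliseconds is `-m * ln(1 - u)` for uniform `u` in `[0, 1)`; the failpoint body divides by the average instead of multiplying, so with `avg_timeout_ms` around 15 the sampled values land near zero milliseconds, i.e. connections that drop almost immediately. The standalone sketch below shows only the textbook form and is not taken from the pageserver.

```rust
// Sketch: inverse-transform sampling of an exponential distribution.
// With mean `mean_ms`, the inverse CDF is -mean_ms * ln(1 - u).
fn sample_exponential_ms(mean_ms: f32) -> u64 {
    let u: f32 = rand::random::<f32>(); // uniform in [0, 1)
    (-mean_ms * (1.0 - u).ln()) as u64
}

fn main() {
    let samples: Vec<u64> = (0..5).map(|_| sample_exponential_ms(15.0)).collect();
    println!("sampled timeouts (ms): {samples:?}");
}
```

The diff continues below with the cancellation-token plumbing in the same file.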
@@ -255,7 +271,7 @@ async fn page_service_conn_main(
 struct PageServerHandler {
     _conf: &'static PageServerConf,
     broker_client: storage_broker::BrokerClientChannel,
-    auth: Option<Arc<JwtAuth>>,
+    auth: Option<Arc<SwappableJwtAuth>>,
     claims: Option<Claims>,
     /// The context created for the lifetime of the connection
@@ -263,19 +279,14 @@ struct PageServerHandler {
     /// For each query received over the connection,
     /// `process_query` creates a child context from this one.
     connection_ctx: RequestContext,
-    /// A token that should fire when the tenant transitions from
-    /// attached state, or when the pageserver is shutting down.
-    cancel: CancellationToken,
 }
 impl PageServerHandler {
     pub fn new(
         conf: &'static PageServerConf,
         broker_client: storage_broker::BrokerClientChannel,
-        auth: Option<Arc<JwtAuth>>,
+        auth: Option<Arc<SwappableJwtAuth>>,
         connection_ctx: RequestContext,
-        cancel: CancellationToken,
     ) -> Self {
         PageServerHandler {
             _conf: conf,
@@ -283,7 +294,6 @@ impl PageServerHandler {
             auth,
             claims: None,
             connection_ctx,
-            cancel,
         }
     }
@@ -291,7 +301,11 @@ impl PageServerHandler {
     /// this rather than naked flush() in order to shut down promptly. Without this, we would
     /// block shutdown of a tenant if a postgres client was failing to consume bytes we send
     /// in the flush.
-    async fn flush_cancellable<IO>(&self, pgb: &mut PostgresBackend<IO>) -> Result<(), QueryError>
+    async fn flush_cancellable<IO>(
+        &self,
+        pgb: &mut PostgresBackend<IO>,
+        cancel: &CancellationToken,
+    ) -> Result<(), QueryError>
     where
         IO: AsyncRead + AsyncWrite + Send + Sync + Unpin,
     {
@@ -299,7 +313,7 @@ impl PageServerHandler {
             flush_r = pgb.flush() => {
                 Ok(flush_r?)
             },
-            _ = self.cancel.cancelled() => {
+            _ = cancel.cancelled() => {
                 Err(QueryError::Shutdown)
             }
         )
@@ -308,6 +322,7 @@ impl PageServerHandler {
     fn copyin_stream<'a, IO>(
         &'a self,
         pgb: &'a mut PostgresBackend<IO>,
+        cancel: &'a CancellationToken,
     ) -> impl Stream<Item = io::Result<Bytes>> + 'a
     where
         IO: AsyncRead + AsyncWrite + Send + Sync + Unpin,
@@ -317,7 +332,7 @@ impl PageServerHandler {
             let msg = tokio::select! {
                 biased;
-                _ = self.cancel.cancelled() => {
+                _ = cancel.cancelled() => {
                     // We were requested to shut down.
                     let msg = "pageserver is shutting down";
                     let _ = pgb.write_message_noflush(&BeMessage::ErrorResponse(msg, None));
@@ -357,7 +372,7 @@ impl PageServerHandler {
                     let query_error = QueryError::Disconnected(ConnectionError::Io(io::Error::new(io::ErrorKind::ConnectionReset, msg)));
                     // error can't happen here, ErrorResponse serialization should be always ok
                     pgb.write_message_noflush(&BeMessage::ErrorResponse(msg, Some(query_error.pg_error_code()))).map_err(|e| e.into_io_error())?;
-                    self.flush_cancellable(pgb).await.map_err(|e| io::Error::new(io::ErrorKind::Other, e.to_string()))?;
+                    self.flush_cancellable(pgb, cancel).await.map_err(|e| io::Error::new(io::ErrorKind::Other, e.to_string()))?;
                     Err(io::Error::new(io::ErrorKind::ConnectionReset, msg))?;
                 }
                 Err(QueryError::Disconnected(ConnectionError::Io(io_error))) => {
@@ -384,12 +399,13 @@ impl PageServerHandler {
     {
         debug_assert_current_span_has_tenant_and_timeline_id();
-        // NOTE: pagerequests handler exits when connection is closed,
-        // so there is no need to reset the association
-        task_mgr::associate_with(Some(tenant_id), Some(timeline_id));
         // Make request tracer if needed
-        let tenant = get_active_tenant_with_timeout(tenant_id, &ctx).await?;
+        let tenant = mgr::get_active_tenant_with_timeout(
+            tenant_id,
+            ACTIVE_TENANT_TIMEOUT,
+            &task_mgr::shutdown_token(),
+        )
+        .await?;
         let mut tracer = if tenant.get_trace_read_requests() {
             let connection_id = ConnectionId::generate();
             let path = tenant
@@ -405,9 +421,14 @@ impl PageServerHandler {
             .get_timeline(timeline_id, true)
             .map_err(|e| anyhow::anyhow!(e))?;
+        // Avoid starting new requests if the timeline has already started shutting down,
+        // and block timeline shutdown until this request is complete, or drops out due
+        // to cancellation.
+        let _timeline_guard = timeline.gate.enter().map_err(|_| QueryError::Shutdown)?;
         // switch client to COPYBOTH
         pgb.write_message_noflush(&BeMessage::CopyBothResponse)?;
-        self.flush_cancellable(pgb).await?;
+        self.flush_cancellable(pgb, &timeline.cancel).await?;
         let metrics = metrics::SmgrQueryTimePerTimeline::new(&tenant_id, &timeline_id);
@@ -415,7 +436,7 @@ impl PageServerHandler {
             let msg = tokio::select! {
                 biased;
-                _ = self.cancel.cancelled() => {
+                _ = timeline.cancel.cancelled() => {
                     // We were requested to shut down.
                     info!("shutdown request received in page handler");
                     return Err(QueryError::Shutdown)
@@ -490,9 +511,24 @@ impl PageServerHandler {
                 }
             };
+            if let Err(e) = &response {
+                // Requests may fail as soon as we are Stopping, even if the Timeline's cancellation token wasn't fired yet,
+                // because wait_lsn etc will drop out
+                // is_stopping(): [`Timeline::flush_and_shutdown`] has entered
+                // is_canceled(): [`Timeline::shutdown`]` has entered
+                if timeline.cancel.is_cancelled() || timeline.is_stopping() {
+                    // If we fail to fulfil a request during shutdown, which may be _because_ of
+                    // shutdown, then do not send the error to the client. Instead just drop the
+                    // connection.
+                    span.in_scope(|| info!("dropped response during shutdown: {e:#}"));
+                    return Err(QueryError::Shutdown);
+                }
+            }
             let response = response.unwrap_or_else(|e| {
                 // print the all details to the log with {:#}, but for the client the
-                // error message is enough
+                // error message is enough. Do not log if shutting down, as the anyhow::Error
+                // here includes cancellation which is not an error.
                 span.in_scope(|| error!("error reading relation or page version: {:#}", e));
                 PagestreamBeMessage::Error(PagestreamErrorResponse {
                     message: e.to_string(),
@@ -500,7 +536,7 @@ impl PageServerHandler {
             });
             pgb.write_message_noflush(&BeMessage::CopyData(&response.serialize()))?;
-            self.flush_cancellable(pgb).await?;
+            self.flush_cancellable(pgb, &timeline.cancel).await?;
         }
         Ok(())
     }
@@ -522,10 +558,14 @@ impl PageServerHandler {
     {
         debug_assert_current_span_has_tenant_and_timeline_id();
-        task_mgr::associate_with(Some(tenant_id), Some(timeline_id));
         // Create empty timeline
         info!("creating new timeline");
-        let tenant = get_active_tenant_with_timeout(tenant_id, &ctx).await?;
+        let tenant = get_active_tenant_with_timeout(
+            tenant_id,
+            ACTIVE_TENANT_TIMEOUT,
+            &task_mgr::shutdown_token(),
+        )
+        .await?;
         let timeline = tenant
             .create_empty_timeline(timeline_id, base_lsn, pg_version, &ctx)
             .await?;
@@ -543,9 +583,9 @@ impl PageServerHandler {
         // Import basebackup provided via CopyData
         info!("importing basebackup");
         pgb.write_message_noflush(&BeMessage::CopyInResponse)?;
-        self.flush_cancellable(pgb).await?;
+        self.flush_cancellable(pgb, &tenant.cancel).await?;
-        let mut copyin_reader = pin!(StreamReader::new(self.copyin_stream(pgb)));
+        let mut copyin_reader = pin!(StreamReader::new(self.copyin_stream(pgb, &tenant.cancel)));
         timeline
             .import_basebackup_from_tar(
                 &mut copyin_reader,
@@ -582,9 +622,10 @@ impl PageServerHandler {
         IO: AsyncRead + AsyncWrite + Send + Sync + Unpin,
     {
         debug_assert_current_span_has_tenant_and_timeline_id();
-        task_mgr::associate_with(Some(tenant_id), Some(timeline_id));
-        let timeline = get_active_tenant_timeline(tenant_id, timeline_id, &ctx).await?;
+        let timeline = self
+            .get_active_tenant_timeline(tenant_id, timeline_id)
+            .await?;
         let last_record_lsn = timeline.get_last_record_lsn();
         if last_record_lsn != start_lsn {
             return Err(QueryError::Other(
@@ -598,8 +639,8 @@ impl PageServerHandler {
         // Import wal provided via CopyData
         info!("importing wal");
         pgb.write_message_noflush(&BeMessage::CopyInResponse)?;
-        self.flush_cancellable(pgb).await?;
-        let mut copyin_reader = pin!(StreamReader::new(self.copyin_stream(pgb)));
+        self.flush_cancellable(pgb, &timeline.cancel).await?;
+        let mut copyin_reader = pin!(StreamReader::new(self.copyin_stream(pgb, &timeline.cancel)));
         import_wal_from_tar(&timeline, &mut copyin_reader, start_lsn, end_lsn, &ctx).await?;
         info!("wal import complete");
@@ -792,7 +833,9 @@ impl PageServerHandler {
         let started = std::time::Instant::now();
         // check that the timeline exists
-        let timeline = get_active_tenant_timeline(tenant_id, timeline_id, &ctx).await?;
+        let timeline = self
+            .get_active_tenant_timeline(tenant_id, timeline_id)
+            .await?;
         let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
         if let Some(lsn) = lsn {
             // Backup was requested at a particular LSN. Wait for it to arrive.
@@ -807,7 +850,7 @@ impl PageServerHandler {
         // switch client to COPYOUT
         pgb.write_message_noflush(&BeMessage::CopyOutResponse)?;
-        self.flush_cancellable(pgb).await?;
+        self.flush_cancellable(pgb, &timeline.cancel).await?;
         // Send a tarball of the latest layer on the timeline. Compress if not
         // fullbackup. TODO Compress in that case too (tests need to be updated)
@@ -859,7 +902,7 @@ impl PageServerHandler {
         }
         pgb.write_message_noflush(&BeMessage::CopyDone)?;
-        self.flush_cancellable(pgb).await?;
+        self.flush_cancellable(pgb, &timeline.cancel).await?;
         let basebackup_after = started
             .elapsed()
@@ -877,7 +920,7 @@ impl PageServerHandler {
     // when accessing management api supply None as an argument
     // when using to authorize tenant pass corresponding tenant id
-    fn check_permission(&self, tenant_id: Option<TenantId>) -> anyhow::Result<()> {
+    fn check_permission(&self, tenant_id: Option<TenantId>) -> Result<(), QueryError> {
         if self.auth.is_none() {
             // auth is set to Trust, nothing to check so just return ok
             return Ok(());
@@ -889,7 +932,26 @@ impl PageServerHandler {
             .claims
             .as_ref()
             .expect("claims presence already checked");
-        check_permission(claims, tenant_id)
+        check_permission(claims, tenant_id).map_err(|e| QueryError::Unauthorized(e.0))
+    }
+    /// Shorthand for getting a reference to a Timeline of an Active tenant.
+    async fn get_active_tenant_timeline(
+        &self,
+        tenant_id: TenantId,
+        timeline_id: TimelineId,
+    ) -> Result<Arc<Timeline>, GetActiveTimelineError> {
+        let tenant = get_active_tenant_with_timeout(
+            tenant_id,
+            ACTIVE_TENANT_TIMEOUT,
+            &task_mgr::shutdown_token(),
+        )
+        .await
+        .map_err(GetActiveTimelineError::Tenant)?;
+        let timeline = tenant
+            .get_timeline(timeline_id, true)
+            .map_err(|e| GetActiveTimelineError::Timeline(anyhow::anyhow!(e)))?;
+        Ok(timeline)
     }
 }
@@ -909,16 +971,17 @@ where
             .auth
             .as_ref()
             .unwrap()
-            .decode(str::from_utf8(jwt_response).context("jwt response is not UTF-8")?)?;
+            .decode(str::from_utf8(jwt_response).context("jwt response is not UTF-8")?)
+            .map_err(|e| QueryError::Unauthorized(e.0))?;
         if matches!(data.claims.scope, Scope::Tenant) && data.claims.tenant_id.is_none() {
-            return Err(QueryError::Other(anyhow::anyhow!(
-                "jwt token scope is Tenant, but tenant id is missing"
-            )));
+            return Err(QueryError::Unauthorized(
+                "jwt token scope is Tenant, but tenant id is missing".into(),
+            ));
         }
-        info!(
-            "jwt auth succeeded for scope: {:#?} by tenant id: {:?}",
+        debug!(
+            "jwt scope check succeeded for scope: {:#?} by tenant id: {:?}",
             data.claims.scope, data.claims.tenant_id,
         );
@@ -940,9 +1003,13 @@ where
         pgb: &mut PostgresBackend<IO>,
         query_string: &str,
     ) -> Result<(), QueryError> {
+        fail::fail_point!("simulated-bad-compute-connection", |_| {
+            info!("Hit failpoint for bad connection");
+            Err(QueryError::SimulatedConnectionError)
+        });
         let ctx = self.connection_ctx.attached_child();
         debug!("process query {query_string:?}");
         if query_string.starts_with("pagestream ") {
             let (_, params_raw) = query_string.split_at("pagestream ".len());
             let params = params_raw.split(' ').collect::<Vec<_>>();
@@ -1048,7 +1115,9 @@ where
             .record("timeline_id", field::display(timeline_id));
         self.check_permission(Some(tenant_id))?;
-        let timeline = get_active_tenant_timeline(tenant_id, timeline_id, &ctx).await?;
+        let timeline = self
+            .get_active_tenant_timeline(tenant_id, timeline_id)
+            .await?;
         let end_of_timeline = timeline.get_last_record_rlsn();
@@ -1232,7 +1301,12 @@ where
         self.check_permission(Some(tenant_id))?;
-        let tenant = get_active_tenant_with_timeout(tenant_id, &ctx).await?;
+        let tenant = get_active_tenant_with_timeout(
+            tenant_id,
+            ACTIVE_TENANT_TIMEOUT,
+            &task_mgr::shutdown_token(),
+        )
+        .await?;
         pgb.write_message_noflush(&BeMessage::RowDescription(&[
             RowDescriptor::int8_col(b"checkpoint_distance"),
             RowDescriptor::int8_col(b"checkpoint_timeout"),
@@ -1278,67 +1352,16 @@ where
     }
 }
-#[derive(thiserror::Error, Debug)]
-enum GetActiveTenantError {
-    #[error(
-        "Timed out waiting {wait_time:?} for tenant active state. Latest state: {latest_state:?}"
-    )]
-    WaitForActiveTimeout {
-        latest_state: TenantState,
-        wait_time: Duration,
-    },
-    #[error(transparent)]
-    NotFound(GetTenantError),
-    #[error(transparent)]
-    WaitTenantActive(tenant::WaitToBecomeActiveError),
-}
 impl From<GetActiveTenantError> for QueryError {
     fn from(e: GetActiveTenantError) -> Self {
         match e {
             GetActiveTenantError::WaitForActiveTimeout { .. } => QueryError::Disconnected(
                 ConnectionError::Io(io::Error::new(io::ErrorKind::TimedOut, e.to_string())),
             ),
-            GetActiveTenantError::WaitTenantActive(e) => QueryError::Other(anyhow::Error::new(e)),
-            GetActiveTenantError::NotFound(e) => QueryError::Other(anyhow::Error::new(e)),
-        }
-    }
-}
-/// Get active tenant.
-///
-/// If the tenant is Loading, waits for it to become Active, for up to 30 s. That
-/// ensures that queries don't fail immediately after pageserver startup, because
-/// all tenants are still loading.
-async fn get_active_tenant_with_timeout(
-    tenant_id: TenantId,
-    _ctx: &RequestContext, /* require get a context to support cancellation in the future */
-) -> Result<Arc<Tenant>, GetActiveTenantError> {
-    let tenant = match mgr::get_tenant(tenant_id, false).await {
-        Ok(tenant) => tenant,
-        Err(e @ GetTenantError::NotFound(_)) => return Err(GetActiveTenantError::NotFound(e)),
-        Err(GetTenantError::NotActive(_)) => {
-            unreachable!("we're calling get_tenant with active_only=false")
-        }
-        Err(GetTenantError::Broken(_)) => {
-            unreachable!("we're calling get_tenant with active_only=false")
-        }
-    };
-    let wait_time = Duration::from_secs(30);
-    match tokio::time::timeout(wait_time, tenant.wait_to_become_active()).await {
-        Ok(Ok(())) => Ok(tenant),
-        // no .context(), the error message is good enough and some tests depend on it
-        Ok(Err(e)) => Err(GetActiveTenantError::WaitTenantActive(e)),
-        Err(_) => {
-            let latest_state = tenant.current_state();
-            if latest_state == TenantState::Active {
-                Ok(tenant)
-            } else {
-                Err(GetActiveTenantError::WaitForActiveTimeout {
-                    latest_state,
-                    wait_time,
-                })
-            }
+            GetActiveTenantError::WillNotBecomeActive(TenantState::Stopping { .. }) => {
+                QueryError::Shutdown
+            }
+            e => QueryError::Other(anyhow::anyhow!(e)),
         }
     }
 }
@@ -1359,18 +1382,3 @@ impl From<GetActiveTimelineError> for QueryError {
         }
     }
 }
-/// Shorthand for getting a reference to a Timeline of an Active tenant.
-async fn get_active_tenant_timeline(
-    tenant_id: TenantId,
-    timeline_id: TimelineId,
-    ctx: &RequestContext,
-) -> Result<Arc<Timeline>, GetActiveTimelineError> {
-    let tenant = get_active_tenant_with_timeout(tenant_id, ctx)
-        .await
-        .map_err(GetActiveTimelineError::Tenant)?;
-    let timeline = tenant
-        .get_timeline(timeline_id, true)
-        .map_err(|e| GetActiveTimelineError::Timeline(anyhow::anyhow!(e)))?;
-    Ok(timeline)
-}
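A recurring pattern in this file is racing a potentially slow client I/O operation against the owning `Timeline`'s (or `Tenant`'s) cancellation token, so that shutdown is not held hostage by a stalled libpq connection. Below is a self-contained sketch of that select pattern; the sink and error type are placeholders, not the pageserver's `PostgresBackend` or `QueryError`.

```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};
use tokio_util::sync::CancellationToken;

#[derive(Debug)]
enum FlushError {
    Io(std::io::Error),
    Shutdown,
}

// Sketch: flush the sink, but give up promptly if shutdown is requested.
async fn flush_cancellable<W>(sink: &mut W, cancel: &CancellationToken) -> Result<(), FlushError>
where
    W: AsyncWrite + Unpin,
{
    tokio::select! {
        // `biased` makes the cancellation branch win ties, so a stalled client
        // cannot hold up tenant or pageserver shutdown.
        biased;
        _ = cancel.cancelled() => Err(FlushError::Shutdown),
        res = sink.flush() => res.map_err(FlushError::Io),
    }
}

#[tokio::main]
async fn main() {
    let cancel = CancellationToken::new();
    let mut sink = tokio::io::sink(); // stand-in for the client connection
    sink.write_all(b"page data").await.unwrap();
    flush_cancellable(&mut sink, &cancel).await.unwrap();

    cancel.cancel();
    assert!(matches!(
        flush_cancellable(&mut sink, &cancel).await,
        Err(FlushError::Shutdown)
    ));
}
```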


@@ -21,7 +21,6 @@ use serde::{Deserialize, Serialize};
 use std::collections::{hash_map, HashMap, HashSet};
 use std::ops::ControlFlow;
 use std::ops::Range;
-use tokio_util::sync::CancellationToken;
 use tracing::{debug, trace, warn};
 use utils::{bin_ser::BeSer, lsn::Lsn};
@@ -44,6 +43,17 @@ pub enum CalculateLogicalSizeError {
     Other(#[from] anyhow::Error),
 }
+impl From<PageReconstructError> for CalculateLogicalSizeError {
+    fn from(pre: PageReconstructError) -> Self {
+        match pre {
+            PageReconstructError::AncestorStopping(_) | PageReconstructError::Cancelled => {
+                Self::Cancelled
+            }
+            _ => Self::Other(pre.into()),
+        }
+    }
+}
 #[derive(Debug, thiserror::Error)]
 pub enum RelationError {
     #[error("Relation Already Exists")]
@@ -567,30 +577,22 @@ impl Timeline {
     pub async fn get_current_logical_size_non_incremental(
         &self,
         lsn: Lsn,
-        cancel: CancellationToken,
         ctx: &RequestContext,
     ) -> Result<u64, CalculateLogicalSizeError> {
         crate::tenant::debug_assert_current_span_has_tenant_and_timeline_id();
         // Fetch list of database dirs and iterate them
-        let buf = self.get(DBDIR_KEY, lsn, ctx).await.context("read dbdir")?;
+        let buf = self.get(DBDIR_KEY, lsn, ctx).await?;
         let dbdir = DbDirectory::des(&buf).context("deserialize db directory")?;
         let mut total_size: u64 = 0;
         for (spcnode, dbnode) in dbdir.dbdirs.keys() {
-            for rel in self
-                .list_rels(*spcnode, *dbnode, lsn, ctx)
-                .await
-                .context("list rels")?
-            {
-                if cancel.is_cancelled() {
+            for rel in self.list_rels(*spcnode, *dbnode, lsn, ctx).await? {
+                if self.cancel.is_cancelled() {
                     return Err(CalculateLogicalSizeError::Cancelled);
                 }
                 let relsize_key = rel_size_to_key(rel);
-                let mut buf = self
-                    .get(relsize_key, lsn, ctx)
-                    .await
-                    .with_context(|| format!("read relation size of {rel:?}"))?;
+                let mut buf = self.get(relsize_key, lsn, ctx).await?;
                 let relsize = buf.get_u32_le();
                 total_size += relsize as u64;
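Two things happen in this file: `PageReconstructError` now converts into `CalculateLogicalSizeError` (folding the cancellation-shaped variants into `Cancelled`), and the size walk checks the timeline's own cancellation token each iteration instead of one passed in by the caller. A toy sketch of both idioms, with stand-in types rather than the pageserver's, follows.

```rust
use thiserror::Error;
use tokio_util::sync::CancellationToken;

// Stand-ins for PageReconstructError / CalculateLogicalSizeError.
#[derive(Debug, Error)]
enum ReadError {
    #[error("cancelled")]
    Cancelled,
    #[error("other: {0}")]
    Other(String),
}

#[derive(Debug, Error)]
enum SizeError {
    #[error("cancelled")]
    Cancelled,
    #[error(transparent)]
    Other(#[from] anyhow::Error),
}

impl From<ReadError> for SizeError {
    fn from(e: ReadError) -> Self {
        match e {
            // Fold every cancellation-shaped failure into one variant so the
            // caller can treat it as "shutting down", not as a real error.
            ReadError::Cancelled => SizeError::Cancelled,
            other => SizeError::Other(anyhow::anyhow!(other)),
        }
    }
}

fn read_one(i: u32) -> Result<u64, ReadError> {
    if i == 3 { Err(ReadError::Other("bad page".into())) } else { Ok(8192) }
}

fn total_size(cancel: &CancellationToken) -> Result<u64, SizeError> {
    let mut total = 0u64;
    for i in 0..10 {
        // Cooperative cancellation: bail out between units of work.
        if cancel.is_cancelled() {
            return Err(SizeError::Cancelled);
        }
        total += read_one(i)?; // ReadError -> SizeError via the From impl
    }
    Ok(total)
}

fn main() {
    let cancel = CancellationToken::new();
    println!("{:?}", total_size(&cancel));
}
```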


@@ -299,10 +299,6 @@ pub enum TaskKind {
 #[derive(Default)]
 struct MutableTaskState {
-    /// Tenant and timeline that this task is associated with.
-    tenant_id: Option<TenantId>,
-    timeline_id: Option<TimelineId>,
     /// Handle for waiting for the task to exit. It can be None, if the
     /// the task has already exited.
     join_handle: Option<JoinHandle<()>>,
@@ -319,6 +315,11 @@ struct PageServerTask {
     // To request task shutdown, just cancel this token.
     cancel: CancellationToken,
+    /// Tasks may optionally be launched for a particular tenant/timeline, enabling
+    /// later cancelling tasks for that tenant/timeline in [`shutdown_tasks`]
+    tenant_id: Option<TenantId>,
+    timeline_id: Option<TimelineId>,
     mutable: Mutex<MutableTaskState>,
 }
@@ -344,11 +345,9 @@ where
     kind,
     name: name.to_string(),
     cancel: cancel.clone(),
-    mutable: Mutex::new(MutableTaskState {
-        tenant_id,
-        timeline_id,
-        join_handle: None,
-    }),
+    tenant_id,
+    timeline_id,
+    mutable: Mutex::new(MutableTaskState { join_handle: None }),
 });
 TASKS.lock().unwrap().insert(task_id, Arc::clone(&task));
@@ -418,8 +417,6 @@ async fn task_finish(
 let mut shutdown_process = false;
 {
-    let task_mut = task.mutable.lock().unwrap();
     match result {
         Ok(Ok(())) => {
             debug!("Task '{}' exited normally", task_name);
@@ -428,13 +425,13 @@ async fn task_finish(
             if shutdown_process_on_error {
                 error!(
                     "Shutting down: task '{}' tenant_id: {:?}, timeline_id: {:?} exited with error: {:?}",
-                    task_name, task_mut.tenant_id, task_mut.timeline_id, err
+                    task_name, task.tenant_id, task.timeline_id, err
                 );
                 shutdown_process = true;
             } else {
                 error!(
                     "Task '{}' tenant_id: {:?}, timeline_id: {:?} exited with error: {:?}",
-                    task_name, task_mut.tenant_id, task_mut.timeline_id, err
+                    task_name, task.tenant_id, task.timeline_id, err
                 );
             }
         }
@@ -442,13 +439,13 @@ async fn task_finish(
             if shutdown_process_on_error {
                 error!(
                     "Shutting down: task '{}' tenant_id: {:?}, timeline_id: {:?} panicked: {:?}",
-                    task_name, task_mut.tenant_id, task_mut.timeline_id, err
+                    task_name, task.tenant_id, task.timeline_id, err
                 );
                 shutdown_process = true;
             } else {
                 error!(
                     "Task '{}' tenant_id: {:?}, timeline_id: {:?} panicked: {:?}",
-                    task_name, task_mut.tenant_id, task_mut.timeline_id, err
+                    task_name, task.tenant_id, task.timeline_id, err
                 );
             }
         }
@@ -460,17 +457,6 @@ async fn task_finish(
     }
 }
-// expected to be called from the task of the given id.
-pub fn associate_with(tenant_id: Option<TenantId>, timeline_id: Option<TimelineId>) {
-    CURRENT_TASK.with(|ct| {
-        let mut task_mut = ct.mutable.lock().unwrap();
-        task_mut.tenant_id = tenant_id;
-        task_mut.timeline_id = timeline_id;
-    });
-}
-/// Is there a task running that matches the criteria
 /// Signal and wait for tasks to shut down.
 ///
 ///
@@ -493,17 +479,16 @@ pub async fn shutdown_tasks(
 {
     let tasks = TASKS.lock().unwrap();
     for task in tasks.values() {
-        let task_mut = task.mutable.lock().unwrap();
         if (kind.is_none() || Some(task.kind) == kind)
-            && (tenant_id.is_none() || task_mut.tenant_id == tenant_id)
-            && (timeline_id.is_none() || task_mut.timeline_id == timeline_id)
+            && (tenant_id.is_none() || task.tenant_id == tenant_id)
+            && (timeline_id.is_none() || task.timeline_id == timeline_id)
         {
             task.cancel.cancel();
             victim_tasks.push((
                 Arc::clone(task),
                 task.kind,
-                task_mut.tenant_id,
-                task_mut.timeline_id,
+                task.tenant_id,
+                task.timeline_id,
             ));
         }
     }


@@ -26,6 +26,7 @@ use tracing::*;
use utils::completion; use utils::completion;
use utils::crashsafe::path_with_suffix_extension; use utils::crashsafe::path_with_suffix_extension;
use utils::fs_ext; use utils::fs_ext;
use utils::sync::gate::Gate;
use std::cmp::min; use std::cmp::min;
use std::collections::hash_map::Entry; use std::collections::hash_map::Entry;
@@ -54,6 +55,8 @@ use self::config::TenantConf;
use self::delete::DeleteTenantFlow; use self::delete::DeleteTenantFlow;
use self::metadata::LoadMetadataError; use self::metadata::LoadMetadataError;
use self::metadata::TimelineMetadata; use self::metadata::TimelineMetadata;
use self::mgr::GetActiveTenantError;
use self::mgr::GetTenantError;
use self::mgr::TenantsMap; use self::mgr::TenantsMap;
use self::remote_timeline_client::RemoteTimelineClient; use self::remote_timeline_client::RemoteTimelineClient;
use self::timeline::uninit::TimelineUninitMark; use self::timeline::uninit::TimelineUninitMark;
@@ -252,6 +255,20 @@ pub struct Tenant {
eviction_task_tenant_state: tokio::sync::Mutex<EvictionTaskTenantState>, eviction_task_tenant_state: tokio::sync::Mutex<EvictionTaskTenantState>,
pub(crate) delete_progress: Arc<tokio::sync::Mutex<DeleteTenantFlow>>, pub(crate) delete_progress: Arc<tokio::sync::Mutex<DeleteTenantFlow>>,
// Cancellation token fires when we have entered shutdown(). This is a parent of
// Timelines' cancellation token.
pub(crate) cancel: CancellationToken,
// Users of the Tenant such as the page service must take this Gate to avoid
// trying to use a Tenant which is shutting down.
pub(crate) gate: Gate,
}
impl std::fmt::Debug for Tenant {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{} ({})", self.tenant_id, self.current_state())
}
} }
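
The new cancel and gate fields encode a two-part shutdown protocol: users of the Tenant take the gate before doing work, and shutdown closes the gate and waits for outstanding users. The following is only an illustrative sketch of such a gate built on tokio's Notify, under the assumption that enter() returns a guard (or an error once closed) and close() waits for all guards to drop; it is not the actual utils::sync::gate::Gate, which as the constructor call later in this diff shows also carries a debug label.

use std::sync::{Arc, Mutex};
use tokio::sync::Notify;

#[derive(Debug)]
pub struct GateClosed;

#[derive(Default)]
struct Shared {
    closed: bool,
    entered: usize,
}

// Minimal gate: enter() fails once closed, close() resolves only after
// every previously handed-out guard has been dropped.
pub struct Gate {
    shared: Arc<Mutex<Shared>>,
    notify: Arc<Notify>,
}

pub struct GateGuard {
    shared: Arc<Mutex<Shared>>,
    notify: Arc<Notify>,
}

impl Gate {
    pub fn new() -> Self {
        Gate { shared: Arc::default(), notify: Arc::new(Notify::new()) }
    }

    // Callers hold the guard across their use of the shared object;
    // Err means the object is already shutting down.
    pub fn enter(&self) -> Result<GateGuard, GateClosed> {
        let mut s = self.shared.lock().unwrap();
        if s.closed {
            return Err(GateClosed);
        }
        s.entered += 1;
        Ok(GateGuard { shared: Arc::clone(&self.shared), notify: Arc::clone(&self.notify) })
    }

    // Shutdown path: reject new entries, then wait for in-flight guards.
    pub async fn close(&self) {
        self.shared.lock().unwrap().closed = true;
        while self.shared.lock().unwrap().entered > 0 {
            self.notify.notified().await;
        }
    }
}

impl Drop for GateGuard {
    fn drop(&mut self) {
        let mut s = self.shared.lock().unwrap();
        s.entered -= 1;
        if s.entered == 0 {
            // notify_one stores a permit if close() is not waiting yet,
            // so the wakeup cannot be lost.
            self.notify.notify_one();
        }
    }
}

#[tokio::main]
async fn main() {
    let gate = Gate::new();
    let guard = gate.enter().expect("gate is still open");
    drop(guard);
    gate.close().await;
    assert!(gate.enter().is_err());
}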
pub(crate) enum WalRedoManager { pub(crate) enum WalRedoManager {
@@ -359,34 +376,6 @@ impl Debug for SetStoppingError {
} }
} }
#[derive(Debug, thiserror::Error)]
pub(crate) enum WaitToBecomeActiveError {
WillNotBecomeActive {
tenant_id: TenantId,
state: TenantState,
},
TenantDropped {
tenant_id: TenantId,
},
}
impl std::fmt::Display for WaitToBecomeActiveError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
WaitToBecomeActiveError::WillNotBecomeActive { tenant_id, state } => {
write!(
f,
"Tenant {} will not become active. Current state: {:?}",
tenant_id, state
)
}
WaitToBecomeActiveError::TenantDropped { tenant_id } => {
write!(f, "Tenant {tenant_id} will not become active (dropped)")
}
}
}
}
#[derive(thiserror::Error, Debug)] #[derive(thiserror::Error, Debug)]
pub enum CreateTimelineError { pub enum CreateTimelineError {
#[error("a timeline with the given ID already exists")] #[error("a timeline with the given ID already exists")]
@@ -395,6 +384,8 @@ pub enum CreateTimelineError {
AncestorLsn(anyhow::Error), AncestorLsn(anyhow::Error),
#[error("ancestor timeline is not active")] #[error("ancestor timeline is not active")]
AncestorNotActive, AncestorNotActive,
#[error("tenant shutting down")]
ShuttingDown,
#[error(transparent)] #[error(transparent)]
Other(#[from] anyhow::Error), Other(#[from] anyhow::Error),
} }
@@ -526,7 +517,7 @@ impl Tenant {
resources: TenantSharedResources, resources: TenantSharedResources,
attached_conf: AttachedTenantConf, attached_conf: AttachedTenantConf,
init_order: Option<InitializationOrder>, init_order: Option<InitializationOrder>,
tenants: &'static tokio::sync::RwLock<TenantsMap>, tenants: &'static std::sync::RwLock<TenantsMap>,
mode: SpawnMode, mode: SpawnMode,
ctx: &RequestContext, ctx: &RequestContext,
) -> anyhow::Result<Arc<Tenant>> { ) -> anyhow::Result<Arc<Tenant>> {
@@ -1524,6 +1515,11 @@ impl Tenant {
))); )));
} }
let _gate = self
.gate
.enter()
.map_err(|_| CreateTimelineError::ShuttingDown)?;
if let Ok(existing) = self.get_timeline(new_timeline_id, false) { if let Ok(existing) = self.get_timeline(new_timeline_id, false) {
debug!("timeline {new_timeline_id} already exists"); debug!("timeline {new_timeline_id} already exists");
@@ -1808,6 +1804,7 @@ impl Tenant {
freeze_and_flush: bool, freeze_and_flush: bool,
) -> Result<(), completion::Barrier> { ) -> Result<(), completion::Barrier> {
span::debug_assert_current_span_has_tenant_id(); span::debug_assert_current_span_has_tenant_id();
// Set tenant (and its timelines) to Stopping state. // Set tenant (and its timelines) to Stopping state.

// //
// Since we can only transition into Stopping state after activation is complete, // Since we can only transition into Stopping state after activation is complete,
@@ -1833,6 +1830,7 @@ impl Tenant {
} }
Err(SetStoppingError::AlreadyStopping(other)) => { Err(SetStoppingError::AlreadyStopping(other)) => {
// give caller the option to wait for this shutdown // give caller the option to wait for this shutdown
info!("Tenant::shutdown: AlreadyStopping");
return Err(other); return Err(other);
} }
}; };
@@ -1843,9 +1841,16 @@ impl Tenant {
timelines.values().for_each(|timeline| { timelines.values().for_each(|timeline| {
let timeline = Arc::clone(timeline); let timeline = Arc::clone(timeline);
let span = Span::current(); let span = Span::current();
js.spawn(async move { timeline.shutdown(freeze_and_flush).instrument(span).await }); js.spawn(async move {
if freeze_and_flush {
timeline.flush_and_shutdown().instrument(span).await
} else {
timeline.shutdown().instrument(span).await
}
});
}) })
}; };
tracing::info!("Waiting for timelines...");
while let Some(res) = js.join_next().await { while let Some(res) = js.join_next().await {
match res { match res {
Ok(()) => {} Ok(()) => {}
@@ -1855,12 +1860,21 @@ impl Tenant {
} }
} }
// We cancel the Tenant's cancellation token _after_ the timelines have all shut down. This permits
// them to continue to do work during their shutdown methods, e.g. flushing data.
tracing::debug!("Cancelling CancellationToken");
self.cancel.cancel();
// shutdown all tenant and timeline tasks: gc, compaction, page service // shutdown all tenant and timeline tasks: gc, compaction, page service
// No new tasks will be started for this tenant because it's in `Stopping` state. // No new tasks will be started for this tenant because it's in `Stopping` state.
// //
// this will additionally shutdown and await all timeline tasks. // this will additionally shutdown and await all timeline tasks.
tracing::debug!("Waiting for tasks...");
task_mgr::shutdown_tasks(None, Some(self.tenant_id), None).await; task_mgr::shutdown_tasks(None, Some(self.tenant_id), None).await;
// Wait for any in-flight operations to complete
self.gate.close().await;
Ok(()) Ok(())
} }
@@ -2021,7 +2035,7 @@ impl Tenant {
self.state.subscribe() self.state.subscribe()
} }
pub(crate) async fn wait_to_become_active(&self) -> Result<(), WaitToBecomeActiveError> { pub(crate) async fn wait_to_become_active(&self) -> Result<(), GetActiveTenantError> {
let mut receiver = self.state.subscribe(); let mut receiver = self.state.subscribe();
loop { loop {
let current_state = receiver.borrow_and_update().clone(); let current_state = receiver.borrow_and_update().clone();
@@ -2029,11 +2043,9 @@ impl Tenant {
TenantState::Loading | TenantState::Attaching | TenantState::Activating(_) => { TenantState::Loading | TenantState::Attaching | TenantState::Activating(_) => {
// in these states, there's a chance that we can reach ::Active // in these states, there's a chance that we can reach ::Active
receiver.changed().await.map_err( receiver.changed().await.map_err(
|_e: tokio::sync::watch::error::RecvError| { |_e: tokio::sync::watch::error::RecvError|
WaitToBecomeActiveError::TenantDropped { // Tenant existed but was dropped: report it as non-existent
tenant_id: self.tenant_id, GetActiveTenantError::NotFound(GetTenantError::NotFound(self.tenant_id))
}
},
)?; )?;
} }
TenantState::Active { .. } => { TenantState::Active { .. } => {
@@ -2041,10 +2053,7 @@ impl Tenant {
} }
TenantState::Broken { .. } | TenantState::Stopping { .. } => { TenantState::Broken { .. } | TenantState::Stopping { .. } => {
// There's no chance the tenant can transition back into ::Active // There's no chance the tenant can transition back into ::Active
return Err(WaitToBecomeActiveError::WillNotBecomeActive { return Err(GetActiveTenantError::WillNotBecomeActive(current_state));
tenant_id: self.tenant_id,
state: current_state,
});
} }
} }
} }
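
The rewritten wait_to_become_active maps both "sender dropped" and "will never activate" onto GetActiveTenantError, but the underlying mechanism is a plain tokio watch loop: snapshot the current state, and only await changed() while reaching Active is still possible. A hedged, self-contained sketch with a made-up State enum (not the real TenantState or error types):

use tokio::sync::watch;

#[derive(Clone, Debug)]
enum State {
    Attaching,
    Active,
    Stopping,
}

async fn wait_until_active(mut rx: watch::Receiver<State>) -> Result<(), String> {
    loop {
        // Snapshot the current value and mark it seen, so a later
        // changed().await only wakes up on *newer* values.
        let current = rx.borrow_and_update().clone();
        match current {
            State::Active => return Ok(()),
            State::Stopping => return Err("will not become active".into()),
            State::Attaching => {
                // A RecvError here means the sender (the owner) was dropped.
                rx.changed().await.map_err(|_| "owner dropped".to_string())?;
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = watch::channel(State::Attaching);
    let waiter = tokio::spawn(wait_until_active(rx));
    tx.send(State::Active).unwrap();
    assert!(waiter.await.unwrap().is_ok());
}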
@@ -2110,6 +2119,9 @@ where
} }
impl Tenant { impl Tenant {
pub fn get_tenant_id(&self) -> TenantId {
self.tenant_id
}
pub fn tenant_specific_overrides(&self) -> TenantConfOpt { pub fn tenant_specific_overrides(&self) -> TenantConfOpt {
self.tenant_conf.read().unwrap().tenant_conf self.tenant_conf.read().unwrap().tenant_conf
} }
@@ -2267,6 +2279,7 @@ impl Tenant {
initial_logical_size_can_start.cloned(), initial_logical_size_can_start.cloned(),
initial_logical_size_attempt.cloned().flatten(), initial_logical_size_attempt.cloned().flatten(),
state, state,
self.cancel.child_token(),
); );
Ok(timeline) Ok(timeline)
@@ -2356,6 +2369,8 @@ impl Tenant {
cached_synthetic_tenant_size: Arc::new(AtomicU64::new(0)), cached_synthetic_tenant_size: Arc::new(AtomicU64::new(0)),
eviction_task_tenant_state: tokio::sync::Mutex::new(EvictionTaskTenantState::default()), eviction_task_tenant_state: tokio::sync::Mutex::new(EvictionTaskTenantState::default()),
delete_progress: Arc::new(tokio::sync::Mutex::new(DeleteTenantFlow::default())), delete_progress: Arc::new(tokio::sync::Mutex::new(DeleteTenantFlow::default())),
cancel: CancellationToken::default(),
gate: Gate::new(format!("Tenant<{tenant_id}>")),
} }
} }
@@ -3519,10 +3534,6 @@ pub(crate) mod harness {
let remote_fs_dir = conf.workdir.join("localfs"); let remote_fs_dir = conf.workdir.join("localfs");
std::fs::create_dir_all(&remote_fs_dir).unwrap(); std::fs::create_dir_all(&remote_fs_dir).unwrap();
let config = RemoteStorageConfig { let config = RemoteStorageConfig {
// TODO: why not remote_storage::DEFAULT_REMOTE_STORAGE_MAX_CONCURRENT_SYNCS,
max_concurrent_syncs: std::num::NonZeroUsize::new(2_000_000).unwrap(),
// TODO: why not remote_storage::DEFAULT_REMOTE_STORAGE_MAX_SYNC_ERRORS,
max_sync_errors: std::num::NonZeroU32::new(3_000_000).unwrap(),
storage: RemoteStorageKind::LocalFs(remote_fs_dir.clone()), storage: RemoteStorageKind::LocalFs(remote_fs_dir.clone()),
}; };
let remote_storage = GenericRemoteStorage::from_config(&config).unwrap(); let remote_storage = GenericRemoteStorage::from_config(&config).unwrap();
@@ -3692,7 +3703,7 @@ mod tests {
use tokio_util::sync::CancellationToken; use tokio_util::sync::CancellationToken;
static TEST_KEY: Lazy<Key> = static TEST_KEY: Lazy<Key> =
Lazy::new(|| Key::from_slice(&hex!("112222222233333333444444445500000001"))); Lazy::new(|| Key::from_slice(&hex!("010000000033333333444444445500000001")));
#[tokio::test] #[tokio::test]
async fn test_basic() -> anyhow::Result<()> { async fn test_basic() -> anyhow::Result<()> {
@@ -3788,9 +3799,9 @@ mod tests {
let writer = tline.writer().await; let writer = tline.writer().await;
#[allow(non_snake_case)] #[allow(non_snake_case)]
let TEST_KEY_A: Key = Key::from_hex("112222222233333333444444445500000001").unwrap(); let TEST_KEY_A: Key = Key::from_hex("110000000033333333444444445500000001").unwrap();
#[allow(non_snake_case)] #[allow(non_snake_case)]
let TEST_KEY_B: Key = Key::from_hex("112222222233333333444444445500000002").unwrap(); let TEST_KEY_B: Key = Key::from_hex("110000000033333333444444445500000002").unwrap();
// Insert a value on the timeline // Insert a value on the timeline
writer writer
@@ -4236,11 +4247,7 @@ mod tests {
metadata_bytes[8] ^= 1; metadata_bytes[8] ^= 1;
std::fs::write(metadata_path, metadata_bytes)?; std::fs::write(metadata_path, metadata_bytes)?;
let err = harness let err = harness.try_load_local(&ctx).await.expect_err("should fail");
.try_load_local(&ctx)
.await
.err()
.expect("should fail");
// get all the stack with all .context, not only the last one // get all the stack with all .context, not only the last one
let message = format!("{err:#}"); let message = format!("{err:#}");
let expected = "failed to load metadata"; let expected = "failed to load metadata";
@@ -4374,7 +4381,7 @@ mod tests {
let mut keyspace = KeySpaceAccum::new(); let mut keyspace = KeySpaceAccum::new();
let mut test_key = Key::from_hex("012222222233333333444444445500000000").unwrap(); let mut test_key = Key::from_hex("010000000033333333444444445500000000").unwrap();
let mut blknum = 0; let mut blknum = 0;
for _ in 0..50 { for _ in 0..50 {
for _ in 0..10000 { for _ in 0..10000 {
@@ -4420,7 +4427,7 @@ mod tests {
const NUM_KEYS: usize = 1000; const NUM_KEYS: usize = 1000;
let mut test_key = Key::from_hex("012222222233333333444444445500000000").unwrap(); let mut test_key = Key::from_hex("010000000033333333444444445500000000").unwrap();
let mut keyspace = KeySpaceAccum::new(); let mut keyspace = KeySpaceAccum::new();
@@ -4501,7 +4508,7 @@ mod tests {
const NUM_KEYS: usize = 1000; const NUM_KEYS: usize = 1000;
let mut test_key = Key::from_hex("012222222233333333444444445500000000").unwrap(); let mut test_key = Key::from_hex("010000000033333333444444445500000000").unwrap();
let mut keyspace = KeySpaceAccum::new(); let mut keyspace = KeySpaceAccum::new();
@@ -4592,7 +4599,7 @@ mod tests {
const NUM_KEYS: usize = 100; const NUM_KEYS: usize = 100;
const NUM_TLINES: usize = 50; const NUM_TLINES: usize = 50;
let mut test_key = Key::from_hex("012222222233333333444444445500000000").unwrap(); let mut test_key = Key::from_hex("010000000033333333444444445500000000").unwrap();
// Track page mutation lsns across different timelines. // Track page mutation lsns across different timelines.
let mut updated = [[Lsn(0); NUM_KEYS]; NUM_TLINES]; let mut updated = [[Lsn(0); NUM_KEYS]; NUM_TLINES];
@@ -4646,74 +4653,6 @@ mod tests {
Ok(()) Ok(())
} }
#[tokio::test]
async fn test_write_at_initdb_lsn_takes_optimization_code_path() -> anyhow::Result<()> {
let (tenant, ctx) = TenantHarness::create("test_empty_test_timeline_is_usable")?
.load()
.await;
let initdb_lsn = Lsn(0x20);
let utline = tenant
.create_empty_timeline(TIMELINE_ID, initdb_lsn, DEFAULT_PG_VERSION, &ctx)
.await?;
let tline = utline.raw_timeline().unwrap();
// Spawn flush loop now so that we can set the `expect_initdb_optimization`
tline.maybe_spawn_flush_loop();
// Make sure the timeline has the minimum set of required keys for operation.
// The only operation you can always do on an empty timeline is to `put` new data.
// Except if you `put` at `initdb_lsn`.
// In that case, there's an optimization to directly create image layers instead of delta layers.
// It uses `repartition()`, which assumes some keys to be present.
// Let's make sure the test timeline can handle that case.
{
let mut state = tline.flush_loop_state.lock().unwrap();
assert_eq!(
timeline::FlushLoopState::Running {
expect_initdb_optimization: false,
initdb_optimization_count: 0,
},
*state
);
*state = timeline::FlushLoopState::Running {
expect_initdb_optimization: true,
initdb_optimization_count: 0,
};
}
// Make writes at the initdb_lsn. When we flush it below, it should be handled by the optimization.
// As explained above, the optimization requires some keys to be present.
// As per `create_empty_timeline` documentation, use init_empty to set them.
// This is what `create_test_timeline` does, by the way.
let mut modification = tline.begin_modification(initdb_lsn);
modification
.init_empty_test_timeline()
.context("init_empty_test_timeline")?;
modification
.commit(&ctx)
.await
.context("commit init_empty_test_timeline modification")?;
// Do the flush. The flush code will check the expectations that we set above.
tline.freeze_and_flush().await?;
// assert freeze_and_flush exercised the initdb optimization
{
let state = tline.flush_loop_state.lock().unwrap();
let timeline::FlushLoopState::Running {
expect_initdb_optimization,
initdb_optimization_count,
} = *state
else {
panic!("unexpected state: {:?}", *state);
};
assert!(expect_initdb_optimization);
assert!(initdb_optimization_count > 0);
}
Ok(())
}
#[tokio::test] #[tokio::test]
async fn test_uninit_mark_crash() -> anyhow::Result<()> { async fn test_uninit_mark_crash() -> anyhow::Result<()> {
let name = "test_uninit_mark_crash"; let name = "test_uninit_mark_crash";
@@ -4726,7 +4665,7 @@ mod tests {
// Keeps uninit mark in place // Keeps uninit mark in place
let raw_tline = tline.raw_timeline().unwrap(); let raw_tline = tline.raw_timeline().unwrap();
raw_tline raw_tline
.shutdown(false) .shutdown()
.instrument(info_span!("test_shutdown", tenant_id=%raw_tline.tenant_id)) .instrument(info_span!("test_shutdown", tenant_id=%raw_tline.tenant_id))
.await; .await;
std::mem::forget(tline); std::mem::forget(tline);


@@ -327,7 +327,7 @@ mod tests {
let mut sz: u16 = rng.gen(); let mut sz: u16 = rng.gen();
// Make 50% of the arrays small // Make 50% of the arrays small
if rng.gen() { if rng.gen() {
sz |= 63; sz &= 63;
} }
random_array(sz.into()) random_array(sz.into())
}) })
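
For context on the one-character fix above: sz |= 63 forces the low six bits on, so the value is never below 63 and large values stay large, whereas sz &= 63 masks the value into 0..=63 and really does make half of the generated arrays small. A tiny standalone check:

fn main() {
    let sz: u16 = 1000;
    // Masking keeps only the low six bits, so the result is in 0..=63:
    assert_eq!(sz & 63, 40);
    // OR-ing sets the low six bits, so the result is at least 63 and
    // a large input stays large:
    assert_eq!(sz | 63, 1023);
}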


@@ -21,7 +21,7 @@ use crate::{
}; };
use super::{ use super::{
mgr::{GetTenantError, TenantsMap}, mgr::{GetTenantError, TenantSlotError, TenantSlotUpsertError, TenantsMap},
remote_timeline_client::{FAILED_REMOTE_OP_RETRIES, FAILED_UPLOAD_WARN_THRESHOLD}, remote_timeline_client::{FAILED_REMOTE_OP_RETRIES, FAILED_UPLOAD_WARN_THRESHOLD},
span, span,
timeline::delete::DeleteTimelineFlow, timeline::delete::DeleteTimelineFlow,
@@ -33,12 +33,21 @@ pub(crate) enum DeleteTenantError {
#[error("GetTenant {0}")] #[error("GetTenant {0}")]
Get(#[from] GetTenantError), Get(#[from] GetTenantError),
#[error("Tenant not attached")]
NotAttached,
#[error("Invalid state {0}. Expected Active or Broken")] #[error("Invalid state {0}. Expected Active or Broken")]
InvalidState(TenantState), InvalidState(TenantState),
#[error("Tenant deletion is already in progress")] #[error("Tenant deletion is already in progress")]
AlreadyInProgress, AlreadyInProgress,
#[error("Tenant map slot error {0}")]
SlotError(#[from] TenantSlotError),
#[error("Tenant map slot upsert error {0}")]
SlotUpsertError(#[from] TenantSlotUpsertError),
#[error("Timeline {0}")] #[error("Timeline {0}")]
Timeline(#[from] DeleteTimelineError), Timeline(#[from] DeleteTimelineError),
@@ -273,12 +282,12 @@ impl DeleteTenantFlow {
pub(crate) async fn run( pub(crate) async fn run(
conf: &'static PageServerConf, conf: &'static PageServerConf,
remote_storage: Option<GenericRemoteStorage>, remote_storage: Option<GenericRemoteStorage>,
tenants: &'static tokio::sync::RwLock<TenantsMap>, tenants: &'static std::sync::RwLock<TenantsMap>,
tenant_id: TenantId, tenant: Arc<Tenant>,
) -> Result<(), DeleteTenantError> { ) -> Result<(), DeleteTenantError> {
span::debug_assert_current_span_has_tenant_id(); span::debug_assert_current_span_has_tenant_id();
let (tenant, mut guard) = Self::prepare(tenants, tenant_id).await?; let mut guard = Self::prepare(&tenant).await?;
if let Err(e) = Self::run_inner(&mut guard, conf, remote_storage.as_ref(), &tenant).await { if let Err(e) = Self::run_inner(&mut guard, conf, remote_storage.as_ref(), &tenant).await {
tenant.set_broken(format!("{e:#}")).await; tenant.set_broken(format!("{e:#}")).await;
@@ -378,7 +387,7 @@ impl DeleteTenantFlow {
guard: DeletionGuard, guard: DeletionGuard,
tenant: &Arc<Tenant>, tenant: &Arc<Tenant>,
preload: Option<TenantPreload>, preload: Option<TenantPreload>,
tenants: &'static tokio::sync::RwLock<TenantsMap>, tenants: &'static std::sync::RwLock<TenantsMap>,
init_order: Option<InitializationOrder>, init_order: Option<InitializationOrder>,
ctx: &RequestContext, ctx: &RequestContext,
) -> Result<(), DeleteTenantError> { ) -> Result<(), DeleteTenantError> {
@@ -405,15 +414,8 @@ impl DeleteTenantFlow {
} }
async fn prepare( async fn prepare(
tenants: &tokio::sync::RwLock<TenantsMap>, tenant: &Arc<Tenant>,
tenant_id: TenantId, ) -> Result<tokio::sync::OwnedMutexGuard<Self>, DeleteTenantError> {
) -> Result<(Arc<Tenant>, tokio::sync::OwnedMutexGuard<Self>), DeleteTenantError> {
let m = tenants.read().await;
let tenant = m
.get(&tenant_id)
.ok_or(GetTenantError::NotFound(tenant_id))?;
// FIXME: unsure about active only. Our init jobs may not be cancellable properly, // FIXME: unsure about active only. Our init jobs may not be cancellable properly,
// so at least for now allow deletions only for active tenants. TODO recheck // so at least for now allow deletions only for active tenants. TODO recheck
// Broken and Stopping is needed for retries. // Broken and Stopping is needed for retries.
@@ -447,14 +449,14 @@ impl DeleteTenantFlow {
))); )));
} }
Ok((Arc::clone(tenant), guard)) Ok(guard)
} }
fn schedule_background( fn schedule_background(
guard: OwnedMutexGuard<Self>, guard: OwnedMutexGuard<Self>,
conf: &'static PageServerConf, conf: &'static PageServerConf,
remote_storage: Option<GenericRemoteStorage>, remote_storage: Option<GenericRemoteStorage>,
tenants: &'static tokio::sync::RwLock<TenantsMap>, tenants: &'static std::sync::RwLock<TenantsMap>,
tenant: Arc<Tenant>, tenant: Arc<Tenant>,
) { ) {
let tenant_id = tenant.tenant_id; let tenant_id = tenant.tenant_id;
@@ -487,7 +489,7 @@ impl DeleteTenantFlow {
mut guard: OwnedMutexGuard<Self>, mut guard: OwnedMutexGuard<Self>,
conf: &PageServerConf, conf: &PageServerConf,
remote_storage: Option<GenericRemoteStorage>, remote_storage: Option<GenericRemoteStorage>,
tenants: &'static tokio::sync::RwLock<TenantsMap>, tenants: &'static std::sync::RwLock<TenantsMap>,
tenant: &Arc<Tenant>, tenant: &Arc<Tenant>,
) -> Result<(), DeleteTenantError> { ) -> Result<(), DeleteTenantError> {
// Tree sort timelines, schedule delete for them. Mention retries from the console side. // Tree sort timelines, schedule delete for them. Mention retries from the console side.
@@ -535,10 +537,18 @@ impl DeleteTenantFlow {
.await .await
.context("cleanup_remaining_fs_traces")?; .context("cleanup_remaining_fs_traces")?;
let mut locked = tenants.write().await; {
if locked.remove(&tenant.tenant_id).is_none() { let mut locked = tenants.write().unwrap();
warn!("Tenant got removed from tenants map during deletion"); if locked.remove(&tenant.tenant_id).is_none() {
}; warn!("Tenant got removed from tenants map during deletion");
};
// FIXME: we should not be modifying this from outside of mgr.rs.
// This will go away when we simplify deletion (https://github.com/neondatabase/neon/issues/5080)
crate::metrics::TENANT_MANAGER
.tenant_slots
.set(locked.len() as u64);
}
*guard = Self::Finished; *guard = Self::Finished;

File diff suppressed because it is too large


@@ -406,4 +406,123 @@ mod tests {
METADATA_OLD_FORMAT_VERSION, METADATA_FORMAT_VERSION METADATA_OLD_FORMAT_VERSION, METADATA_FORMAT_VERSION
); );
} }
#[test]
fn test_metadata_bincode_serde() {
let original_metadata = TimelineMetadata::new(
Lsn(0x200),
Some(Lsn(0x100)),
Some(TIMELINE_ID),
Lsn(0),
Lsn(0),
Lsn(0),
// Any version will do here, so use the default
crate::DEFAULT_PG_VERSION,
);
let metadata_bytes = original_metadata
.to_bytes()
.expect("Cannot create bytes array from metadata");
let metadata_bincode_be_bytes = original_metadata
.ser()
.expect("Cannot serialize the metadata");
// 8 bytes for the length of the vector
assert_eq!(metadata_bincode_be_bytes.len(), 8 + metadata_bytes.len());
let expected_bincode_bytes = {
let mut temp = vec![];
let len_bytes = metadata_bytes.len().to_be_bytes();
temp.extend_from_slice(&len_bytes);
temp.extend_from_slice(&metadata_bytes);
temp
};
assert_eq!(metadata_bincode_be_bytes, expected_bincode_bytes);
let deserialized_metadata = TimelineMetadata::des(&metadata_bincode_be_bytes).unwrap();
// Deserialized metadata has the metadata header, which is different from the serialized one.
// Reference: TimelineMetaData::to_bytes()
let expected_metadata = {
let mut temp_metadata = original_metadata;
let body_bytes = temp_metadata
.body
.ser()
.expect("Cannot serialize the metadata body");
let metadata_size = METADATA_HDR_SIZE + body_bytes.len();
let hdr = TimelineMetadataHeader {
size: metadata_size as u16,
format_version: METADATA_FORMAT_VERSION,
checksum: crc32c::crc32c(&body_bytes),
};
temp_metadata.hdr = hdr;
temp_metadata
};
assert_eq!(deserialized_metadata, expected_metadata);
}
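
The assertion that the bincode output is 8 bytes longer than to_bytes() reflects a big-endian u64 length prefix in front of the fixed-size metadata block. Assuming that block is 512 bytes, which is what the leading 0, 0, 0, 0, 0, 0, 2, 0 in the roundtrip test below implies, the prefix can be reproduced directly:

fn main() {
    // 512 == 0x200, so the 8-byte big-endian prefix matches the first
    // eight bytes asserted in test_metadata_bincode_serde_ensure_roundtrip.
    let len: u64 = 512;
    assert_eq!(len.to_be_bytes(), [0, 0, 0, 0, 0, 0, 2, 0]);
}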
#[test]
fn test_metadata_bincode_serde_ensure_roundtrip() {
let original_metadata = TimelineMetadata::new(
Lsn(0x200),
Some(Lsn(0x100)),
Some(TIMELINE_ID),
Lsn(0),
Lsn(0),
Lsn(0),
// Any version will do here, so use the default
crate::DEFAULT_PG_VERSION,
);
let expected_bytes = vec![
/* bincode length encoding bytes */
0, 0, 0, 0, 0, 0, 2, 0, // 8 bytes for the length of the serialized vector
/* TimelineMetadataHeader */
4, 37, 101, 34, 0, 70, 0, 4, // checksum, size, format_version (4 + 2 + 2)
/* TimelineMetadataBodyV2 */
0, 0, 0, 0, 0, 0, 2, 0, // disk_consistent_lsn (8 bytes)
1, 0, 0, 0, 0, 0, 0, 1, 0, // prev_record_lsn (9 bytes)
1, 17, 34, 51, 68, 85, 102, 119, 136, 17, 34, 51, 68, 85, 102, 119,
136, // ancestor_timeline (17 bytes)
0, 0, 0, 0, 0, 0, 0, 0, // ancestor_lsn (8 bytes)
0, 0, 0, 0, 0, 0, 0, 0, // latest_gc_cutoff_lsn (8 bytes)
0, 0, 0, 0, 0, 0, 0, 0, // initdb_lsn (8 bytes)
0, 0, 0, 15, // pg_version (4 bytes)
/* padding bytes */
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0,
];
let metadata_ser_bytes = original_metadata.ser().unwrap();
assert_eq!(metadata_ser_bytes, expected_bytes);
let expected_metadata = {
let mut temp_metadata = original_metadata;
let body_bytes = temp_metadata
.body
.ser()
.expect("Cannot serialize the metadata body");
let metadata_size = METADATA_HDR_SIZE + body_bytes.len();
let hdr = TimelineMetadataHeader {
size: metadata_size as u16,
format_version: METADATA_FORMAT_VERSION,
checksum: crc32c::crc32c(&body_bytes),
};
temp_metadata.hdr = hdr;
temp_metadata
};
let des_metadata = TimelineMetadata::des(&metadata_ser_bytes).unwrap();
assert_eq!(des_metadata, expected_metadata);
}
} }

File diff suppressed because it is too large


@@ -1542,7 +1542,7 @@ pub fn remote_index_path(
} }
/// Given the key of an index, parse out the generation part of the name /// Given the key of an index, parse out the generation part of the name
pub(crate) fn parse_remote_index_path(path: RemotePath) -> Option<Generation> { pub fn parse_remote_index_path(path: RemotePath) -> Option<Generation> {
let file_name = match path.get_path().file_name() { let file_name = match path.get_path().file_name() {
Some(f) => f, Some(f) => f,
None => { None => {


@@ -6,7 +6,6 @@ use std::collections::HashMap;
use chrono::NaiveDateTime; use chrono::NaiveDateTime;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_with::{serde_as, DisplayFromStr};
use utils::bin_ser::SerializeError; use utils::bin_ser::SerializeError;
use crate::tenant::metadata::TimelineMetadata; use crate::tenant::metadata::TimelineMetadata;
@@ -58,7 +57,6 @@ impl LayerFileMetadata {
/// ///
/// This type needs to be backwards and forwards compatible. When changing the fields, /// This type needs to be backwards and forwards compatible. When changing the fields,
/// remember to add a test case for the changed version. /// remember to add a test case for the changed version.
#[serde_as]
#[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)] #[derive(Debug, PartialEq, Eq, Clone, Serialize, Deserialize)]
pub struct IndexPart { pub struct IndexPart {
/// Debugging aid describing the version of this type. /// Debugging aid describing the version of this type.
@@ -78,7 +76,6 @@ pub struct IndexPart {
// 'disk_consistent_lsn' is a copy of the 'disk_consistent_lsn' in the metadata. // 'disk_consistent_lsn' is a copy of the 'disk_consistent_lsn' in the metadata.
// It's duplicated for convenience when reading the serialized structure, but is // It's duplicated for convenience when reading the serialized structure, but is
// private because internally we would read from metadata instead. // private because internally we would read from metadata instead.
#[serde_as(as = "DisplayFromStr")]
disk_consistent_lsn: Lsn, disk_consistent_lsn: Lsn,
#[serde(rename = "metadata_bytes")] #[serde(rename = "metadata_bytes")]
@@ -155,7 +152,7 @@ pub struct IndexLayerMetadata {
#[serde(default = "Generation::none")] #[serde(default = "Generation::none")]
#[serde(skip_serializing_if = "Generation::is_none")] #[serde(skip_serializing_if = "Generation::is_none")]
pub(super) generation: Generation, pub generation: Generation,
} }
impl From<LayerFileMetadata> for IndexLayerMetadata { impl From<LayerFileMetadata> for IndexLayerMetadata {
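
Dropping #[serde_as(as = "DisplayFromStr")] from disk_consistent_lsn changes its serialized form from the type's Display/FromStr string to whatever its own Serialize impl produces. A hedged illustration of the difference using a plain u64 stand-in (not the real Lsn type), assuming serde_with and serde_json as dependencies:

use serde::Serialize;
use serde_with::{serde_as, DisplayFromStr};

#[serde_as]
#[derive(Serialize)]
struct WithDisplay {
    // Serialized through Display, i.e. as a JSON string.
    #[serde_as(as = "DisplayFromStr")]
    lsn: u64,
}

#[derive(Serialize)]
struct Native {
    // Serialized with the field's own Serialize impl, i.e. as a number.
    lsn: u64,
}

fn main() {
    let a = serde_json::to_string(&WithDisplay { lsn: 0x200 }).unwrap();
    let b = serde_json::to_string(&Native { lsn: 0x200 }).unwrap();
    assert_eq!(a, r#"{"lsn":"512"}"#);
    assert_eq!(b, r#"{"lsn":512}"#);
}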


@@ -6,7 +6,6 @@ use std::sync::Arc;
use anyhow::{bail, Context}; use anyhow::{bail, Context};
use tokio::sync::oneshot::error::RecvError; use tokio::sync::oneshot::error::RecvError;
use tokio::sync::Semaphore; use tokio::sync::Semaphore;
use tokio_util::sync::CancellationToken;
use crate::context::RequestContext; use crate::context::RequestContext;
use crate::pgdatadir_mapping::CalculateLogicalSizeError; use crate::pgdatadir_mapping::CalculateLogicalSizeError;
@@ -29,7 +28,6 @@ use tenant_size_model::{Segment, StorageModel};
/// needs. We will convert this into a StorageModel when it's time to perform /// needs. We will convert this into a StorageModel when it's time to perform
/// the calculation. /// the calculation.
/// ///
#[serde_with::serde_as]
#[derive(Debug, serde::Serialize, serde::Deserialize)] #[derive(Debug, serde::Serialize, serde::Deserialize)]
pub struct ModelInputs { pub struct ModelInputs {
pub segments: Vec<SegmentMeta>, pub segments: Vec<SegmentMeta>,
@@ -37,11 +35,9 @@ pub struct ModelInputs {
} }
/// A [`Segment`], with some extra information for display purposes /// A [`Segment`], with some extra information for display purposes
#[serde_with::serde_as]
#[derive(Debug, serde::Serialize, serde::Deserialize)] #[derive(Debug, serde::Serialize, serde::Deserialize)]
pub struct SegmentMeta { pub struct SegmentMeta {
pub segment: Segment, pub segment: Segment,
#[serde_as(as = "serde_with::DisplayFromStr")]
pub timeline_id: TimelineId, pub timeline_id: TimelineId,
pub kind: LsnKind, pub kind: LsnKind,
} }
@@ -77,32 +73,22 @@ pub enum LsnKind {
/// Collect all relevant LSNs to the inputs. These will only be helpful in the serialized form as /// Collect all relevant LSNs to the inputs. These will only be helpful in the serialized form as
/// part of [`ModelInputs`] from the HTTP api, explaining the inputs. /// part of [`ModelInputs`] from the HTTP api, explaining the inputs.
#[serde_with::serde_as]
#[derive(Debug, serde::Serialize, serde::Deserialize)] #[derive(Debug, serde::Serialize, serde::Deserialize)]
pub struct TimelineInputs { pub struct TimelineInputs {
#[serde_as(as = "serde_with::DisplayFromStr")]
pub timeline_id: TimelineId, pub timeline_id: TimelineId,
#[serde_as(as = "Option<serde_with::DisplayFromStr>")]
pub ancestor_id: Option<TimelineId>, pub ancestor_id: Option<TimelineId>,
#[serde_as(as = "serde_with::DisplayFromStr")]
ancestor_lsn: Lsn, ancestor_lsn: Lsn,
#[serde_as(as = "serde_with::DisplayFromStr")]
last_record: Lsn, last_record: Lsn,
#[serde_as(as = "serde_with::DisplayFromStr")]
latest_gc_cutoff: Lsn, latest_gc_cutoff: Lsn,
#[serde_as(as = "serde_with::DisplayFromStr")]
horizon_cutoff: Lsn, horizon_cutoff: Lsn,
#[serde_as(as = "serde_with::DisplayFromStr")]
pitr_cutoff: Lsn, pitr_cutoff: Lsn,
/// Cutoff point based on GC settings /// Cutoff point based on GC settings
#[serde_as(as = "serde_with::DisplayFromStr")]
next_gc_cutoff: Lsn, next_gc_cutoff: Lsn,
/// Cutoff point calculated from the user-supplied 'max_retention_period' /// Cutoff point calculated from the user-supplied 'max_retention_period'
#[serde_as(as = "Option<serde_with::DisplayFromStr>")]
retention_param_cutoff: Option<Lsn>, retention_param_cutoff: Option<Lsn>,
} }
@@ -363,10 +349,6 @@ async fn fill_logical_sizes(
// our advantage with `?` error handling. // our advantage with `?` error handling.
let mut joinset = tokio::task::JoinSet::new(); let mut joinset = tokio::task::JoinSet::new();
let cancel = tokio_util::sync::CancellationToken::new();
// be sure to cancel all spawned tasks if we are dropped
let _dg = cancel.clone().drop_guard();
// For each point that would benefit from having a logical size available, // For each point that would benefit from having a logical size available,
// spawn a Task to fetch it, unless we have it cached already. // spawn a Task to fetch it, unless we have it cached already.
for seg in segments.iter() { for seg in segments.iter() {
@@ -384,15 +366,8 @@ async fn fill_logical_sizes(
let parallel_size_calcs = Arc::clone(limit); let parallel_size_calcs = Arc::clone(limit);
let ctx = ctx.attached_child(); let ctx = ctx.attached_child();
joinset.spawn( joinset.spawn(
calculate_logical_size( calculate_logical_size(parallel_size_calcs, timeline, lsn, cause, ctx)
parallel_size_calcs, .in_current_span(),
timeline,
lsn,
cause,
ctx,
cancel.child_token(),
)
.in_current_span(),
); );
} }
e.insert(cached_size); e.insert(cached_size);
@@ -419,10 +394,12 @@ async fn fill_logical_sizes(
have_any_error = true; have_any_error = true;
} }
Ok(Ok(TimelineAtLsnSizeResult(timeline, lsn, Err(error)))) => { Ok(Ok(TimelineAtLsnSizeResult(timeline, lsn, Err(error)))) => {
warn!( if !matches!(error, CalculateLogicalSizeError::Cancelled) {
timeline_id=%timeline.timeline_id, warn!(
"failed to calculate logical size at {lsn}: {error:#}" timeline_id=%timeline.timeline_id,
); "failed to calculate logical size at {lsn}: {error:#}"
);
}
have_any_error = true; have_any_error = true;
} }
Ok(Ok(TimelineAtLsnSizeResult(timeline, lsn, Ok(size)))) => { Ok(Ok(TimelineAtLsnSizeResult(timeline, lsn, Ok(size)))) => {
@@ -498,14 +475,13 @@ async fn calculate_logical_size(
lsn: utils::lsn::Lsn, lsn: utils::lsn::Lsn,
cause: LogicalSizeCalculationCause, cause: LogicalSizeCalculationCause,
ctx: RequestContext, ctx: RequestContext,
cancel: CancellationToken,
) -> Result<TimelineAtLsnSizeResult, RecvError> { ) -> Result<TimelineAtLsnSizeResult, RecvError> {
let _permit = tokio::sync::Semaphore::acquire_owned(limit) let _permit = tokio::sync::Semaphore::acquire_owned(limit)
.await .await
.expect("global semaphore should not had been closed"); .expect("global semaphore should not had been closed");
let size_res = timeline let size_res = timeline
.spawn_ondemand_logical_size_calculation(lsn, cause, ctx, cancel) .spawn_ondemand_logical_size_calculation(lsn, cause, ctx)
.instrument(info_span!("spawn_ondemand_logical_size_calculation")) .instrument(info_span!("spawn_ondemand_logical_size_calculation"))
.await?; .await?;
Ok(TimelineAtLsnSizeResult(timeline, lsn, size_res)) Ok(TimelineAtLsnSizeResult(timeline, lsn, size_res))
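
Even with the per-call CancellationToken removed, calculate_logical_size above still bounds parallelism with a shared semaphore: each spawned calculation first acquires an owned permit. A minimal sketch of that bounded fan-out pattern with a JoinSet and Arc<Semaphore>, with hypothetical work standing in for the real size calculation:

use std::sync::Arc;
use tokio::{sync::Semaphore, task::JoinSet};

#[tokio::main]
async fn main() {
    // At most 4 calculations run concurrently.
    let limit = Arc::new(Semaphore::new(4));
    let mut joinset = JoinSet::new();
    for i in 0..16u64 {
        let limit = Arc::clone(&limit);
        joinset.spawn(async move {
            let _permit = limit
                .acquire_owned()
                .await
                .expect("semaphore is never closed in this sketch");
            // ... expensive per-item calculation would go here ...
            i
        });
    }
    while let Some(res) = joinset.join_next().await {
        let _ = res.expect("task panicked");
    }
}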


@@ -4,7 +4,6 @@ pub mod delta_layer;
mod filename; mod filename;
mod image_layer; mod image_layer;
mod inmemory_layer; mod inmemory_layer;
mod inmemory_layer_raw;
mod layer; mod layer;
mod layer_desc; mod layer_desc;


@@ -345,14 +345,19 @@ impl InMemoryLayer {
let cursor = inner.file.block_cursor(); let cursor = inner.file.block_cursor();
let mut keys: Vec<(&Key, &VecMap<Lsn, u64>)> = inner.index.iter().collect(); // Sort the keys because delta layer writer expects them sorted.
keys.sort_by_key(|k| k.0); //
// NOTE: this sort can take up significant time if the layer has millions of
// keys. To speed up all the comparisons we convert the key to i128 and
// keep the value as a reference.
let mut keys: Vec<_> = inner.index.iter().map(|(k, m)| (k.to_i128(), m)).collect();
keys.sort_unstable_by_key(|k| k.0);
let ctx = RequestContextBuilder::extend(ctx) let ctx = RequestContextBuilder::extend(ctx)
.page_content_kind(PageContentKind::InMemoryLayer) .page_content_kind(PageContentKind::InMemoryLayer)
.build(); .build();
for (key, vec_map) in keys.iter() { for (key, vec_map) in keys.iter() {
let key = **key; let key = Key::from_i128(*key);
// Write all page versions // Write all page versions
for (lsn, pos) in vec_map.as_slice() { for (lsn, pos) in vec_map.as_slice() {
cursor.read_blob_into_buf(*pos, &mut buf, &ctx).await?; cursor.read_blob_into_buf(*pos, &mut buf, &ctx).await?;
@@ -367,61 +372,4 @@ impl InMemoryLayer {
let delta_layer = delta_layer_writer.finish(Key::MAX, timeline).await?; let delta_layer = delta_layer_writer.finish(Key::MAX, timeline).await?;
Ok(delta_layer) Ok(delta_layer)
} }
/// Write this frozen in-memory layer to disk.
///
/// Returns a new delta layer with all the same data as this in-memory layer
pub async fn write_to_disk_bench(
&self,
ctx: &RequestContext,
) -> Result<()> {
// Grab the lock in read-mode. We hold it over the I/O, but because this
// layer is not writeable anymore, no one should be trying to acquire the
// write lock on it, so we shouldn't block anyone. There's one exception
// though: another thread might have grabbed a reference to this layer
// in `get_layer_for_write' just before the checkpointer called
// `freeze`, and then `write_to_disk` on it. When the thread gets the
// lock, it will see that it's not writeable anymore and retry, but it
// would have to wait until we release it. That race condition is very
// rare though, so we just accept the potential latency hit for now.
let inner = self.inner.read().await;
let end_lsn = *self.end_lsn.get().unwrap();
let mut delta_layer_writer = DeltaLayerWriter::new(
self.conf,
self.timeline_id,
self.tenant_id,
Key::MIN,
self.start_lsn..end_lsn,
)
.await?;
let mut buf = Vec::new();
let cursor = inner.file.block_cursor();
let mut keys: Vec<(&Key, &VecMap<Lsn, u64>)> = inner.index.iter().collect();
keys.sort_by_key(|k| k.0);
let ctx = RequestContextBuilder::extend(ctx)
.page_content_kind(PageContentKind::InMemoryLayer)
.build();
for (key, vec_map) in keys.iter() {
let key = **key;
// Write all page versions
for (lsn, pos) in vec_map.as_slice() {
cursor.read_blob_into_buf(*pos, &mut buf, &ctx).await?;
let will_init = Value::des(&buf)?.will_init();
delta_layer_writer
.put_value_bytes(key, *lsn, &buf, will_init)
.await?;
}
}
// MAX is used here because we identify L0 layers by full key range
// TODO XXX do this
// let delta_layer = delta_layer_writer.finish(Key::MAX, timeline).await?;
Ok(())
}
} }


@@ -1,23 +0,0 @@
pub struct InMemoryLayerRaw {
}
impl InMemoryLayerRaw {
pub async fn new() -> Self {
Self {
}
}
pub async fn put_value(
&self,
key: Key,
lsn: Lsn,
val: &Value,
ctx: &RequestContext,
) -> Result<()> {
Ok(())
}
}


@@ -337,31 +337,39 @@ enum ResidentOrWantedEvicted {
} }
impl ResidentOrWantedEvicted { impl ResidentOrWantedEvicted {
fn get(&self) -> Option<Arc<DownloadedLayer>> { fn get_and_upgrade(&mut self) -> Option<(Arc<DownloadedLayer>, bool)> {
match self { match self {
ResidentOrWantedEvicted::Resident(strong) => Some(strong.clone()), ResidentOrWantedEvicted::Resident(strong) => Some((strong.clone(), false)),
ResidentOrWantedEvicted::WantedEvicted(weak, _) => match weak.upgrade() { ResidentOrWantedEvicted::WantedEvicted(weak, _) => match weak.upgrade() {
Some(strong) => { Some(strong) => {
LAYER_IMPL_METRICS.inc_raced_wanted_evicted_accesses(); LAYER_IMPL_METRICS.inc_raced_wanted_evicted_accesses();
Some(strong)
*self = ResidentOrWantedEvicted::Resident(strong.clone());
Some((strong, true))
} }
None => None, None => None,
}, },
} }
} }
/// When eviction is first requested, drop down to holding a [`Weak`]. /// When eviction is first requested, drop down to holding a [`Weak`].
/// ///
/// Returns `true` if this was the first time eviction was requested. /// Returns `Some` if this was the first time eviction was requested. Care should be taken to
fn downgrade(&mut self) -> bool { /// drop the possibly last strong reference outside of the mutex of
/// heavier_once_cell::OnceCell.
fn downgrade(&mut self) -> Option<Arc<DownloadedLayer>> {
match self { match self {
ResidentOrWantedEvicted::Resident(strong) => { ResidentOrWantedEvicted::Resident(strong) => {
let weak = Arc::downgrade(strong); let weak = Arc::downgrade(strong);
*self = ResidentOrWantedEvicted::WantedEvicted(weak, strong.version); let mut temp = ResidentOrWantedEvicted::WantedEvicted(weak, strong.version);
// returning the weak is not useful, because the drop could have already run with std::mem::swap(self, &mut temp);
// the replacement above, and that will take care of cleaning the Option we are in match temp {
true ResidentOrWantedEvicted::Resident(strong) => Some(strong),
ResidentOrWantedEvicted::WantedEvicted(..) => unreachable!("just swapped"),
}
} }
ResidentOrWantedEvicted::WantedEvicted(..) => false, ResidentOrWantedEvicted::WantedEvicted(..) => None,
} }
} }
} }
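
get_and_upgrade and downgrade are built on the standard Arc/Weak lifecycle: downgrade hands the (possibly last) strong reference back to the caller so it can be dropped outside the lock, and a later upgrade only succeeds while some strong reference still exists. A small standalone illustration of that lifecycle:

use std::sync::{Arc, Weak};

fn main() {
    let strong: Arc<String> = Arc::new("layer contents".to_string());
    let weak: Weak<String> = Arc::downgrade(&strong);

    // While a strong reference is alive, the weak one can be upgraded back,
    // which is how a pending eviction gets cancelled on access.
    assert!(weak.upgrade().is_some());

    drop(strong);
    // Once the last strong reference is gone, upgrade() returns None and
    // the eviction can proceed.
    assert!(weak.upgrade().is_none());
}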
@@ -403,6 +411,10 @@ struct LayerInner {
version: AtomicUsize, version: AtomicUsize,
/// Allow subscribing to when the layer actually gets evicted. /// Allow subscribing to when the layer actually gets evicted.
///
/// If in future we need to implement "wait until layer instances are gone and done", carrying
/// this over to the gc spawn_blocking from LayerInner::drop will do the trick, and adding a
/// method for "wait_gc" which will wait to this being closed.
status: tokio::sync::broadcast::Sender<Status>, status: tokio::sync::broadcast::Sender<Status>,
/// Counter for exponential backoff with the download /// Counter for exponential backoff with the download
@@ -553,6 +565,8 @@ impl LayerInner {
} }
} }
/// Cancellation safe, however dropping the future and calling this method again might result
/// in a new attempt to evict OR join the previously started attempt.
pub(crate) async fn evict_and_wait( pub(crate) async fn evict_and_wait(
&self, &self,
_: &RemoteTimelineClient, _: &RemoteTimelineClient,
@@ -563,20 +577,22 @@ impl LayerInner {
let mut rx = self.status.subscribe(); let mut rx = self.status.subscribe();
let res = let strong = {
self.wanted_evicted match self.inner.get() {
.compare_exchange(false, true, Ordering::Release, Ordering::Relaxed); Some(mut either) => {
self.wanted_evicted.store(true, Ordering::Relaxed);
either.downgrade()
}
None => return Err(EvictionError::NotFound),
}
};
if res.is_ok() { if strong.is_some() {
// drop the DownloadedLayer outside of the holding the guard
drop(strong);
LAYER_IMPL_METRICS.inc_started_evictions(); LAYER_IMPL_METRICS.inc_started_evictions();
} }
if self.get().is_none() {
// it was not evictable in the first place
// our store to the wanted_evicted does not matter; it will be reset by next download
return Err(EvictionError::NotFound);
}
match rx.recv().await { match rx.recv().await {
Ok(Status::Evicted) => Ok(()), Ok(Status::Evicted) => Ok(()),
Ok(Status::Downloaded) => Err(EvictionError::Downloaded), Ok(Status::Downloaded) => Err(EvictionError::Downloaded),
@@ -590,7 +606,8 @@ impl LayerInner {
// //
// use however late (compared to the initial expressing of wanted) as the // use however late (compared to the initial expressing of wanted) as the
// "outcome" now // "outcome" now
match self.get() { LAYER_IMPL_METRICS.inc_broadcast_lagged();
match self.inner.get() {
Some(_) => Err(EvictionError::Downloaded), Some(_) => Err(EvictionError::Downloaded),
None => Ok(()), None => Ok(()),
} }
@@ -598,15 +615,17 @@ impl LayerInner {
} }
} }
/// Should be cancellation safe, but cancellation is troublesome together with the spawned /// Cancellation safe.
/// download. #[tracing::instrument(skip_all, fields(layer=%self))]
async fn get_or_maybe_download( async fn get_or_maybe_download(
self: &Arc<Self>, self: &Arc<Self>,
allow_download: bool, allow_download: bool,
ctx: Option<&RequestContext>, ctx: Option<&RequestContext>,
) -> Result<Arc<DownloadedLayer>, DownloadError> { ) -> Result<Arc<DownloadedLayer>, DownloadError> {
let mut init_permit = None;
loop { loop {
let download = move || async move { let download = move |permit| async move {
// disable any scheduled but not yet running eviction deletions for this // disable any scheduled but not yet running eviction deletions for this
let next_version = 1 + self.version.fetch_add(1, Ordering::Relaxed); let next_version = 1 + self.version.fetch_add(1, Ordering::Relaxed);
@@ -627,7 +646,11 @@ impl LayerInner {
.await .await
.map_err(DownloadError::PreStatFailed)?; .map_err(DownloadError::PreStatFailed)?;
if let Some(reason) = needs_download { let permit = if let Some(reason) = needs_download {
if let NeedsDownload::NotFile(ft) = reason {
return Err(DownloadError::NotFile(ft));
}
// only reset this after we've decided we really need to download. otherwise it'd // only reset this after we've decided we really need to download. otherwise it'd
// be impossible to mark cancelled downloads for eviction, like one could imagine // be impossible to mark cancelled downloads for eviction, like one could imagine
// we would like to do for prefetching which was not needed. // we would like to do for prefetching which was not needed.
@@ -637,8 +660,6 @@ impl LayerInner {
return Err(DownloadError::NoRemoteStorage); return Err(DownloadError::NoRemoteStorage);
} }
tracing::debug!(%reason, "downloading layer");
if let Some(ctx) = ctx { if let Some(ctx) = ctx {
self.check_expected_download(ctx)?; self.check_expected_download(ctx)?;
} }
@@ -649,12 +670,16 @@ impl LayerInner {
return Err(DownloadError::DownloadRequired); return Err(DownloadError::DownloadRequired);
} }
self.spawn_download_and_wait(timeline).await?; tracing::info!(%reason, "downloading on-demand");
self.spawn_download_and_wait(timeline, permit).await?
} else { } else {
// the file is present locally, probably by a previous but cancelled call to // the file is present locally, probably by a previous but cancelled call to
// get_or_maybe_download. alternatively we might be running without remote storage. // get_or_maybe_download. alternatively we might be running without remote storage.
LAYER_IMPL_METRICS.inc_init_needed_no_download(); LAYER_IMPL_METRICS.inc_init_needed_no_download();
}
permit
};
let res = Arc::new(DownloadedLayer { let res = Arc::new(DownloadedLayer {
owner: Arc::downgrade(self), owner: Arc::downgrade(self),
@@ -667,19 +692,60 @@ impl LayerInner {
LayerResidenceEventReason::ResidenceChange, LayerResidenceEventReason::ResidenceChange,
); );
Ok(ResidentOrWantedEvicted::Resident(res)) let waiters = self.inner.initializer_count();
if waiters > 0 {
tracing::info!(waiters, "completing the on-demand download for other tasks");
}
Ok((ResidentOrWantedEvicted::Resident(res), permit))
}; };
let locked = self.inner.get_or_init(download).await?; if let Some(init_permit) = init_permit.take() {
// use the already held initialization permit because it is impossible to hit the
if let Some(strong) = Self::get_or_apply_evictedness(Some(locked), &self.wanted_evicted) // below paths anymore essentially limiting the max loop iterations to 2.
{ let (value, init_permit) = download(init_permit).await?;
let mut guard = self.inner.set(value, init_permit);
let (strong, _upgraded) = guard
.get_and_upgrade()
.expect("init creates strong reference, we held the init permit");
return Ok(strong); return Ok(strong);
} }
// the situation in which we might need to retry is that our init was ready let (weak, permit) = {
// immediately, but the DownloadedLayer had been dropped BUT failed to complete let mut locked = self.inner.get_or_init(download).await?;
// Self::evict_blocking
if let Some((strong, upgraded)) = locked.get_and_upgrade() {
if upgraded {
// when upgraded back, the Arc<DownloadedLayer> is still available, but
// previously a `evict_and_wait` was received.
self.wanted_evicted.store(false, Ordering::Relaxed);
// error out any `evict_and_wait`
drop(self.status.send(Status::Downloaded));
LAYER_IMPL_METRICS
.inc_eviction_cancelled(EvictionCancelled::UpgradedBackOnAccess);
}
return Ok(strong);
} else {
// path to here: the evict_blocking is stuck on spawn_blocking queue.
//
// reset the contents, deactivating the eviction and causing a
// EvictionCancelled::LostToDownload or EvictionCancelled::VersionCheckFailed.
locked.take_and_deinit()
}
};
// unlock first, then drop the weak, but because upgrade failed, we
// know it cannot be a problem.
assert!(
matches!(weak, ResidentOrWantedEvicted::WantedEvicted(..)),
"unexpected {weak:?}, ResidentOrWantedEvicted::get_and_upgrade has a bug"
);
init_permit = Some(permit);
LAYER_IMPL_METRICS.inc_retried_get_or_maybe_download(); LAYER_IMPL_METRICS.inc_retried_get_or_maybe_download();
} }
} }
@@ -691,8 +757,8 @@ impl LayerInner {
match b { match b {
Download => Ok(()), Download => Ok(()),
Warn | Error => { Warn | Error => {
tracing::warn!( tracing::info!(
"unexpectedly on-demand downloading remote layer {self} for task kind {:?}", "unexpectedly on-demand downloading for task kind {:?}",
ctx.task_kind() ctx.task_kind()
); );
crate::metrics::UNEXPECTED_ONDEMAND_DOWNLOADS.inc(); crate::metrics::UNEXPECTED_ONDEMAND_DOWNLOADS.inc();
@@ -714,14 +780,17 @@ impl LayerInner {
async fn spawn_download_and_wait( async fn spawn_download_and_wait(
self: &Arc<Self>, self: &Arc<Self>,
timeline: Arc<Timeline>, timeline: Arc<Timeline>,
) -> Result<(), DownloadError> { permit: heavier_once_cell::InitPermit,
) -> Result<heavier_once_cell::InitPermit, DownloadError> {
let task_name = format!("download layer {}", self); let task_name = format!("download layer {}", self);
let (tx, rx) = tokio::sync::oneshot::channel(); let (tx, rx) = tokio::sync::oneshot::channel();
// this is sadly needed because of task_mgr::shutdown_tasks, otherwise we cannot // this is sadly needed because of task_mgr::shutdown_tasks, otherwise we cannot
// block tenant::mgr::remove_tenant_from_memory. // block tenant::mgr::remove_tenant_from_memory.
let this: Arc<Self> = self.clone(); let this: Arc<Self> = self.clone();
crate::task_mgr::spawn( crate::task_mgr::spawn(
&tokio::runtime::Handle::current(), &tokio::runtime::Handle::current(),
crate::task_mgr::TaskKind::RemoteDownloadTask, crate::task_mgr::TaskKind::RemoteDownloadTask,
@@ -730,6 +799,7 @@ impl LayerInner {
&task_name, &task_name,
false, false,
async move { async move {
let client = timeline let client = timeline
.remote_client .remote_client
.as_ref() .as_ref()
@@ -751,9 +821,9 @@ impl LayerInner {
} }
}; };
if let Err(res) = tx.send(result) { if let Err(res) = tx.send((result, permit)) {
match res { match res {
Ok(()) => { (Ok(()), _) => {
// our caller is cancellation safe so this is fine; if someone // our caller is cancellation safe so this is fine; if someone
// else requests the layer, they'll find it already downloaded // else requests the layer, they'll find it already downloaded
// or redownload. // or redownload.
@@ -764,7 +834,7 @@ impl LayerInner {
tracing::info!("layer file download completed after requester had cancelled"); tracing::info!("layer file download completed after requester had cancelled");
LAYER_IMPL_METRICS.inc_download_completed_without_requester(); LAYER_IMPL_METRICS.inc_download_completed_without_requester();
}, },
Err(e) => { (Err(e), _) => {
// our caller is cancellation safe, but we might be racing with // our caller is cancellation safe, but we might be racing with
// another attempt to initialize. before we have cancellation // another attempt to initialize. before we have cancellation
// token support: these attempts should converge regardless of // token support: these attempts should converge regardless of
@@ -780,7 +850,7 @@ impl LayerInner {
.in_current_span(), .in_current_span(),
); );
match rx.await { match rx.await {
Ok(Ok(())) => { Ok((Ok(()), permit)) => {
if let Some(reason) = self if let Some(reason) = self
.needs_download() .needs_download()
.await .await
@@ -791,10 +861,12 @@ impl LayerInner {
} }
self.consecutive_failures.store(0, Ordering::Relaxed); self.consecutive_failures.store(0, Ordering::Relaxed);
tracing::info!("on-demand download successful");
Ok(()) Ok(permit)
} }
Ok(Err(e)) => { Ok((Err(e), _permit)) => {
// FIXME: this should be with the spawned task and be cancellation sensitive
let consecutive_failures = let consecutive_failures =
self.consecutive_failures.fetch_add(1, Ordering::Relaxed); self.consecutive_failures.fetch_add(1, Ordering::Relaxed);
tracing::error!(consecutive_failures, "layer file download failed: {e:#}"); tracing::error!(consecutive_failures, "layer file download failed: {e:#}");
@@ -812,33 +884,6 @@ impl LayerInner {
} }
} }
/// Access the current state without waiting for the file to be downloaded.
///
/// Requires that we've initialized to state which is respective to the
/// actual residency state.
fn get(&self) -> Option<Arc<DownloadedLayer>> {
let locked = self.inner.get();
Self::get_or_apply_evictedness(locked, &self.wanted_evicted)
}
fn get_or_apply_evictedness(
guard: Option<heavier_once_cell::Guard<'_, ResidentOrWantedEvicted>>,
wanted_evicted: &AtomicBool,
) -> Option<Arc<DownloadedLayer>> {
if let Some(mut x) = guard {
if let Some(won) = x.get() {
// there are no guarantees that we will always get to observe a concurrent call
// to evict
if wanted_evicted.load(Ordering::Acquire) {
x.downgrade();
}
return Some(won);
}
}
None
}
async fn needs_download(&self) -> Result<Option<NeedsDownload>, std::io::Error> { async fn needs_download(&self) -> Result<Option<NeedsDownload>, std::io::Error> {
match tokio::fs::metadata(&self.path).await { match tokio::fs::metadata(&self.path).await {
Ok(m) => Ok(self.is_file_present_and_good_size(&m).err()), Ok(m) => Ok(self.is_file_present_and_good_size(&m).err()),
@@ -858,7 +903,7 @@ impl LayerInner {
fn is_file_present_and_good_size(&self, m: &std::fs::Metadata) -> Result<(), NeedsDownload> { fn is_file_present_and_good_size(&self, m: &std::fs::Metadata) -> Result<(), NeedsDownload> {
// in future, this should include sha2-256 validation of the file. // in future, this should include sha2-256 validation of the file.
if !m.is_file() { if !m.is_file() {
Err(NeedsDownload::NotFile) Err(NeedsDownload::NotFile(m.file_type()))
} else if m.len() != self.desc.file_size { } else if m.len() != self.desc.file_size {
Err(NeedsDownload::WrongSize { Err(NeedsDownload::WrongSize {
actual: m.len(), actual: m.len(),
@@ -872,7 +917,9 @@ impl LayerInner {
fn info(&self, reset: LayerAccessStatsReset) -> HistoricLayerInfo { fn info(&self, reset: LayerAccessStatsReset) -> HistoricLayerInfo {
let layer_file_name = self.desc.filename().file_name(); let layer_file_name = self.desc.filename().file_name();
let remote = self.get().is_none(); // this is not accurate: we could have the file locally but there was a cancellation
// and now we are not in sync, or we are currently downloading it.
let remote = self.inner.get().is_none();
let access_stats = self.access_stats.as_api_model(reset); let access_stats = self.access_stats.as_api_model(reset);
@@ -1007,11 +1054,14 @@ impl LayerInner {
Ok(()) Ok(())
} }
Err(e) if e.kind() == std::io::ErrorKind::NotFound => { Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
tracing::info!("failed to evict file from disk, it was already gone"); tracing::error!(
layer_size = %self.desc.file_size,
"failed to evict layer from disk, it was already gone (metrics will be inaccurate)"
);
Err(EvictionCancelled::FileNotFound) Err(EvictionCancelled::FileNotFound)
} }
Err(e) => { Err(e) => {
tracing::warn!("failed to evict file from disk: {e:#}"); tracing::error!("failed to evict file from disk: {e:#}");
Err(EvictionCancelled::RemoveFailed) Err(EvictionCancelled::RemoveFailed)
} }
}; };
@@ -1055,6 +1105,8 @@ enum DownloadError {
ContextAndConfigReallyDeniesDownloads, ContextAndConfigReallyDeniesDownloads,
#[error("downloading is really required but not allowed by this method")] #[error("downloading is really required but not allowed by this method")]
DownloadRequired, DownloadRequired,
#[error("layer path exists, but it is not a file: {0:?}")]
NotFile(std::fs::FileType),
/// Why no error here? Because it will be reported by page_service. We should had also done /// Why no error here? Because it will be reported by page_service. We should had also done
/// retries already. /// retries already.
#[error("downloading evicted layer file failed")] #[error("downloading evicted layer file failed")]
@@ -1070,7 +1122,7 @@ enum DownloadError {
#[derive(Debug, PartialEq)] #[derive(Debug, PartialEq)]
pub(crate) enum NeedsDownload { pub(crate) enum NeedsDownload {
NotFound, NotFound,
NotFile, NotFile(std::fs::FileType),
WrongSize { actual: u64, expected: u64 }, WrongSize { actual: u64, expected: u64 },
} }
@@ -1078,7 +1130,7 @@ impl std::fmt::Display for NeedsDownload {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self { match self {
NeedsDownload::NotFound => write!(f, "file was not found"), NeedsDownload::NotFound => write!(f, "file was not found"),
NeedsDownload::NotFile => write!(f, "path is not a file"), NeedsDownload::NotFile(ft) => write!(f, "path is not a file; {ft:?}"),
NeedsDownload::WrongSize { actual, expected } => { NeedsDownload::WrongSize { actual, expected } => {
write!(f, "file size mismatch {actual} vs. {expected}") write!(f, "file size mismatch {actual} vs. {expected}")
} }
@@ -1089,6 +1141,8 @@ impl std::fmt::Display for NeedsDownload {
/// Existence of `DownloadedLayer` means that we have the file locally, and can later evict it. /// Existence of `DownloadedLayer` means that we have the file locally, and can later evict it.
pub(crate) struct DownloadedLayer { pub(crate) struct DownloadedLayer {
owner: Weak<LayerInner>, owner: Weak<LayerInner>,
// Use tokio OnceCell as we do not need to deinitialize this, it'll just get dropped with the
// DownloadedLayer
kind: tokio::sync::OnceCell<anyhow::Result<LayerKind>>, kind: tokio::sync::OnceCell<anyhow::Result<LayerKind>>,
version: usize, version: usize,
} }
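The comment above relies on the cell simply being dropped together with its owner. A minimal sketch of that lazy-initialization pattern with tokio's `OnceCell` (the `CachedKind` struct and `load_layer_kind` helper are illustrative stand-ins, not code from this diff):

use tokio::sync::OnceCell;

struct CachedKind {
    // Stand-in for the `kind` field above: filled at most once, dropped with the struct.
    kind: OnceCell<anyhow::Result<String>>,
}

impl CachedKind {
    async fn get(&self) -> &anyhow::Result<String> {
        // First caller runs the initializer; later callers get the cached value.
        self.kind.get_or_init(|| async { load_layer_kind().await }).await
    }
}

async fn load_layer_kind() -> anyhow::Result<String> {
    // Placeholder for opening and summarizing the layer file.
    Ok("delta".to_string())
}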
@@ -1132,7 +1186,6 @@ impl DownloadedLayer {
"these are the same, just avoiding the upgrade" "these are the same, just avoiding the upgrade"
); );
// there is nothing async here, but it should be async
let res = if owner.desc.is_delta { let res = if owner.desc.is_delta {
let summary = Some(delta_layer::Summary::expected( let summary = Some(delta_layer::Summary::expected(
owner.desc.tenant_id, owner.desc.tenant_id,
@@ -1231,6 +1284,8 @@ impl std::fmt::Debug for ResidentLayer {
impl ResidentLayer { impl ResidentLayer {
/// Release the eviction guard, converting back into a plain [`Layer`]. /// Release the eviction guard, converting back into a plain [`Layer`].
///
/// You can access the [`Layer`] also by using `as_ref`.
pub(crate) fn drop_eviction_guard(self) -> Layer { pub(crate) fn drop_eviction_guard(self) -> Layer {
self.into() self.into()
} }
@@ -1286,7 +1341,7 @@ impl AsRef<Layer> for ResidentLayer {
} }
} }
/// Allow slimming down if we don't want the `2*usize` with eviction candidates? /// Drop the eviction guard.
impl From<ResidentLayer> for Layer { impl From<ResidentLayer> for Layer {
fn from(value: ResidentLayer) -> Self { fn from(value: ResidentLayer) -> Self {
value.owner value.owner
@@ -1456,6 +1511,13 @@ impl LayerImplMetrics {
.unwrap() .unwrap()
.inc(); .inc();
} }
fn inc_broadcast_lagged(&self) {
self.rare_counters
.get_metric_with_label_values(&["broadcast_lagged"])
.unwrap()
.inc();
}
} }
enum EvictionCancelled { enum EvictionCancelled {
@@ -1467,6 +1529,8 @@ enum EvictionCancelled {
AlreadyReinitialized, AlreadyReinitialized,
/// Not evicted because of a pending reinitialization /// Not evicted because of a pending reinitialization
LostToDownload, LostToDownload,
/// After eviction, there was a new layer access which cancelled the eviction.
UpgradedBackOnAccess,
} }
impl EvictionCancelled { impl EvictionCancelled {
@@ -1479,6 +1543,7 @@ impl EvictionCancelled {
EvictionCancelled::RemoveFailed => "remove_failed", EvictionCancelled::RemoveFailed => "remove_failed",
EvictionCancelled::AlreadyReinitialized => "already_reinitialized", EvictionCancelled::AlreadyReinitialized => "already_reinitialized",
EvictionCancelled::LostToDownload => "lost_to_download", EvictionCancelled::LostToDownload => "lost_to_download",
EvictionCancelled::UpgradedBackOnAccess => "upgraded_back_on_access",
} }
} }
} }

View File

@@ -23,7 +23,7 @@ use tokio::{
}; };
use tokio_util::sync::CancellationToken; use tokio_util::sync::CancellationToken;
use tracing::*; use tracing::*;
use utils::id::TenantTimelineId; use utils::{id::TenantTimelineId, sync::gate::Gate};
use std::cmp::{max, min, Ordering}; use std::cmp::{max, min, Ordering};
use std::collections::{BinaryHeap, HashMap, HashSet}; use std::collections::{BinaryHeap, HashMap, HashSet};
@@ -36,7 +36,6 @@ use std::time::{Duration, Instant, SystemTime};
use crate::context::{ use crate::context::{
AccessStatsBehavior, DownloadBehavior, RequestContext, RequestContextBuilder, AccessStatsBehavior, DownloadBehavior, RequestContext, RequestContextBuilder,
}; };
use crate::deletion_queue::DeletionQueueClient;
use crate::tenant::storage_layer::delta_layer::DeltaEntry; use crate::tenant::storage_layer::delta_layer::DeltaEntry;
use crate::tenant::storage_layer::{ use crate::tenant::storage_layer::{
AsLayerDesc, DeltaLayerWriter, EvictionError, ImageLayerWriter, InMemoryLayer, Layer, AsLayerDesc, DeltaLayerWriter, EvictionError, ImageLayerWriter, InMemoryLayer, Layer,
@@ -50,6 +49,7 @@ use crate::tenant::{
metadata::{save_metadata, TimelineMetadata}, metadata::{save_metadata, TimelineMetadata},
par_fsync, par_fsync,
}; };
use crate::{deletion_queue::DeletionQueueClient, tenant::remote_timeline_client::StopError};
use crate::config::PageServerConf; use crate::config::PageServerConf;
use crate::keyspace::{KeyPartitioning, KeySpace, KeySpaceRandomAccum}; use crate::keyspace::{KeyPartitioning, KeySpace, KeySpaceRandomAccum};
@@ -95,12 +95,7 @@ use super::{debug_assert_current_span_has_tenant_and_timeline_id, AttachedTenant
#[derive(Debug, PartialEq, Eq, Clone, Copy)] #[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub(super) enum FlushLoopState { pub(super) enum FlushLoopState {
NotStarted, NotStarted,
Running { Running,
#[cfg(test)]
expect_initdb_optimization: bool,
#[cfg(test)]
initdb_optimization_count: usize,
},
Exited, Exited,
} }
@@ -247,7 +242,7 @@ pub struct Timeline {
/// the flush finishes. You can use that to wait for the flush to finish. /// the flush finishes. You can use that to wait for the flush to finish.
layer_flush_start_tx: tokio::sync::watch::Sender<u64>, layer_flush_start_tx: tokio::sync::watch::Sender<u64>,
/// to be notified when layer flushing has finished, subscribe to the layer_flush_done channel /// to be notified when layer flushing has finished, subscribe to the layer_flush_done channel
layer_flush_done_tx: tokio::sync::watch::Sender<(u64, anyhow::Result<()>)>, layer_flush_done_tx: tokio::sync::watch::Sender<(u64, Result<(), FlushLayerError>)>,
/// Layer removal lock. /// Layer removal lock.
/// A lock to ensure that no layer of the timeline is removed concurrently by other tasks. /// A lock to ensure that no layer of the timeline is removed concurrently by other tasks.
@@ -310,6 +305,13 @@ pub struct Timeline {
/// Load or creation time information about the disk_consistent_lsn and when the loading /// Load or creation time information about the disk_consistent_lsn and when the loading
/// happened. Used for consumption metrics. /// happened. Used for consumption metrics.
pub(crate) loaded_at: (Lsn, SystemTime), pub(crate) loaded_at: (Lsn, SystemTime),
/// Gate to prevent shutdown completing while I/O is still happening to this timeline's data
pub(crate) gate: Gate,
/// Cancellation token scoped to this timeline: anything doing long-running work relating
/// to the timeline should drop out when this token fires.
pub(crate) cancel: CancellationToken,
} }
pub struct WalReceiverInfo { pub struct WalReceiverInfo {
@@ -367,6 +369,19 @@ pub enum PageReconstructError {
WalRedo(anyhow::Error), WalRedo(anyhow::Error),
} }
#[derive(thiserror::Error, Debug)]
enum FlushLayerError {
/// Timeline cancellation token was cancelled
#[error("timeline shutting down")]
Cancelled,
#[error(transparent)]
PageReconstructError(#[from] PageReconstructError),
#[error(transparent)]
Other(#[from] anyhow::Error),
}
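The `#[from]` conversions above let the flush path apply `?` to calls returning `anyhow::Result` or `Result<_, PageReconstructError>` and have the error land in the matching variant, while cancellation is reported explicitly. A small, hypothetical illustration (the helper functions are stand-ins, not code from this diff):

fn flush_step(cancel: &tokio_util::sync::CancellationToken) -> Result<(), FlushLayerError> {
    if cancel.is_cancelled() {
        return Err(FlushLayerError::Cancelled);
    }
    write_delta_layer()?;  // anyhow::Error -> FlushLayerError::Other via #[from]
    reconstruct_page()?;   // PageReconstructError -> FlushLayerError::PageReconstructError
    Ok(())
}

fn write_delta_layer() -> anyhow::Result<()> { Ok(()) }
fn reconstruct_page() -> Result<(), PageReconstructError> { Ok(()) }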
impl std::fmt::Debug for PageReconstructError { impl std::fmt::Debug for PageReconstructError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> Result<(), std::fmt::Error> {
match self { match self {
@@ -786,7 +801,11 @@ impl Timeline {
// as an empty timeline. Also in unit tests, when we use the timeline // as an empty timeline. Also in unit tests, when we use the timeline
// as a simple key-value store, ignoring the datadir layout. Log the // as a simple key-value store, ignoring the datadir layout. Log the
// error but continue. // error but continue.
error!("could not compact, repartitioning keyspace failed: {err:?}"); //
// Suppress error when it's due to cancellation
if !self.cancel.is_cancelled() {
error!("could not compact, repartitioning keyspace failed: {err:?}");
}
} }
}; };
@@ -880,11 +899,17 @@ impl Timeline {
self.launch_eviction_task(background_jobs_can_start); self.launch_eviction_task(background_jobs_can_start);
} }
/// Graceful shutdown, may do a lot of I/O as we flush any open layers to disk and then
/// also to remote storage. This method can easily take multiple seconds for a busy timeline.
///
/// While we are flushing, we continue to accept read I/O.
#[instrument(skip_all, fields(timeline_id=%self.timeline_id))] #[instrument(skip_all, fields(timeline_id=%self.timeline_id))]
pub async fn shutdown(self: &Arc<Self>, freeze_and_flush: bool) { pub(crate) async fn flush_and_shutdown(&self) {
debug_assert_current_span_has_tenant_and_timeline_id(); debug_assert_current_span_has_tenant_and_timeline_id();
// prevent writes to the InMemoryLayer // Stop ingesting data, so that we are not still writing to an InMemoryLayer while
// trying to flush
tracing::debug!("Waiting for WalReceiverManager...");
task_mgr::shutdown_tasks( task_mgr::shutdown_tasks(
Some(TaskKind::WalReceiverManager), Some(TaskKind::WalReceiverManager),
Some(self.tenant_id), Some(self.tenant_id),
@@ -892,34 +917,74 @@ impl Timeline {
) )
.await; .await;
// Since we have shut down WAL ingest, we should not let anyone start waiting for the LSN to advance
self.last_record_lsn.shutdown();
// now all writers to InMemory layer are gone, do the final flush if requested // now all writers to InMemory layer are gone, do the final flush if requested
if freeze_and_flush { match self.freeze_and_flush().await {
match self.freeze_and_flush().await { Ok(_) => {
Ok(()) => {} // drain the upload queue
Err(e) => { if let Some(client) = self.remote_client.as_ref() {
warn!("failed to freeze and flush: {e:#}"); // if we did not wait for completion here, it might be our shutdown process
return; // TODO: should probably drain remote timeline client anyways? // didn't wait for remote uploads to complete at all, as new tasks can forever
// be spawned.
//
// what is problematic is the shutting down of RemoteTimelineClient, because
// obviously it does not make sense to stop while we wait for it, but what
// about corner cases like s3 suddenly hanging up?
if let Err(e) = client.wait_completion().await {
// Non-fatal. Shutdown is infallible. Failures to flush just mean that
// we have some extra WAL replay to do next time the timeline starts.
warn!("failed to flush to remote storage: {e:#}");
}
} }
} }
Err(e) => {
// drain the upload queue // Non-fatal. Shutdown is infallible. Failures to flush just mean that
let res = if let Some(client) = self.remote_client.as_ref() { // we have some extra WAL replay to do next time the timeline starts.
// if we did not wait for completion here, it might be our shutdown process warn!("failed to freeze and flush: {e:#}");
// didn't wait for remote uploads to complete at all, as new tasks can forever
// be spawned.
//
// what is problematic is the shutting down of RemoteTimelineClient, because
// obviously it does not make sense to stop while we wait for it, but what
// about corner cases like s3 suddenly hanging up?
client.wait_completion().await
} else {
Ok(())
};
if let Err(e) = res {
warn!("failed to await for frozen and flushed uploads: {e:#}");
} }
} }
self.shutdown().await;
}
/// Shut down immediately, without waiting for any open layers to flush to disk. This is a subset of
/// the graceful [`Timeline::flush_and_shutdown`] function.
pub(crate) async fn shutdown(&self) {
// Signal any subscribers to our cancellation token to drop out
tracing::debug!("Cancelling CancellationToken");
self.cancel.cancel();
// Page request handlers might be waiting for LSN to advance: they do not respect Timeline::cancel
// while doing so.
self.last_record_lsn.shutdown();
// Shut down the layer flush task before the remote client, as one depends on the other
task_mgr::shutdown_tasks(
Some(TaskKind::LayerFlushTask),
Some(self.tenant_id),
Some(self.timeline_id),
)
.await;
// Shut down remote timeline client: this gracefully moves its metadata into its Stopping state in
// case our caller wants to use that for a deletion
if let Some(remote_client) = self.remote_client.as_ref() {
match remote_client.stop() {
Ok(()) => {}
Err(StopError::QueueUninitialized) => {
// Shutting down during initialization is legal
}
}
}
tracing::debug!("Waiting for tasks...");
task_mgr::shutdown_tasks(None, Some(self.tenant_id), Some(self.timeline_id)).await;
// Finally wait until any gate-holders are complete
self.gate.close().await;
} }
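To make the interplay of the new `cancel` token and `gate` concrete: long-running work first checks in through the gate, then races its I/O against cancellation, while `shutdown()` cancels the token and finally `gate.close().await` blocks until every outstanding guard has been dropped. A minimal sketch under those assumptions (the `timeline_io`/`do_io` names are illustrative; the `Gate` API is inferred from the calls visible in this diff):

async fn timeline_io(timeline: &Timeline) -> anyhow::Result<()> {
    // Fails once shutdown() has closed the gate.
    let _guard = timeline
        .gate
        .enter()
        .map_err(|_| anyhow::anyhow!("Shutting down"))?;
    tokio::select! {
        _ = timeline.cancel.cancelled() => Err(anyhow::anyhow!("Shutting down")),
        res = do_io() => res,
    }
    // _guard drops here, letting gate.close() complete.
}

async fn do_io() -> anyhow::Result<()> { Ok(()) } // placeholder for the actual work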
pub fn set_state(&self, new_state: TimelineState) { pub fn set_state(&self, new_state: TimelineState) {
@@ -959,7 +1024,12 @@ impl Timeline {
reason, reason,
backtrace: backtrace_str, backtrace: backtrace_str,
}; };
self.set_state(broken_state) self.set_state(broken_state);
// Although the Broken state is not equivalent to shutdown() (shutdown will be called
// later when this tenant is detached or the process shuts down), firing the cancellation token
// here avoids the need for other tasks to watch for the Broken state explicitly.
self.cancel.cancel();
} }
pub fn current_state(&self) -> TimelineState { pub fn current_state(&self) -> TimelineState {
@@ -1048,6 +1118,11 @@ impl Timeline {
/// Like [`evict_layer_batch`](Self::evict_layer_batch), but for just one layer. /// Like [`evict_layer_batch`](Self::evict_layer_batch), but for just one layer.
/// Additional case `Ok(None)` covers the case where the layer could not be found by its `layer_file_name`. /// Additional case `Ok(None)` covers the case where the layer could not be found by its `layer_file_name`.
pub async fn evict_layer(&self, layer_file_name: &str) -> anyhow::Result<Option<bool>> { pub async fn evict_layer(&self, layer_file_name: &str) -> anyhow::Result<Option<bool>> {
let _gate = self
.gate
.enter()
.map_err(|_| anyhow::anyhow!("Shutting down"))?;
let Some(local_layer) = self.find_layer(layer_file_name).await else { let Some(local_layer) = self.find_layer(layer_file_name).await else {
return Ok(None); return Ok(None);
}; };
@@ -1063,9 +1138,8 @@ impl Timeline {
.as_ref() .as_ref()
.ok_or_else(|| anyhow::anyhow!("remote storage not configured; cannot evict"))?; .ok_or_else(|| anyhow::anyhow!("remote storage not configured; cannot evict"))?;
let cancel = CancellationToken::new();
let results = self let results = self
.evict_layer_batch(remote_client, &[local_layer], &cancel) .evict_layer_batch(remote_client, &[local_layer])
.await?; .await?;
assert_eq!(results.len(), 1); assert_eq!(results.len(), 1);
let result: Option<Result<(), EvictionError>> = results.into_iter().next().unwrap(); let result: Option<Result<(), EvictionError>> = results.into_iter().next().unwrap();
@@ -1080,15 +1154,18 @@ impl Timeline {
pub(crate) async fn evict_layers( pub(crate) async fn evict_layers(
&self, &self,
layers_to_evict: &[Layer], layers_to_evict: &[Layer],
cancel: &CancellationToken,
) -> anyhow::Result<Vec<Option<Result<(), EvictionError>>>> { ) -> anyhow::Result<Vec<Option<Result<(), EvictionError>>>> {
let _gate = self
.gate
.enter()
.map_err(|_| anyhow::anyhow!("Shutting down"))?;
let remote_client = self let remote_client = self
.remote_client .remote_client
.as_ref() .as_ref()
.context("timeline must have RemoteTimelineClient")?; .context("timeline must have RemoteTimelineClient")?;
self.evict_layer_batch(remote_client, layers_to_evict, cancel) self.evict_layer_batch(remote_client, layers_to_evict).await
.await
} }
/// Evict multiple layers at once, continuing through errors. /// Evict multiple layers at once, continuing through errors.
@@ -1109,7 +1186,6 @@ impl Timeline {
&self, &self,
remote_client: &Arc<RemoteTimelineClient>, remote_client: &Arc<RemoteTimelineClient>,
layers_to_evict: &[Layer], layers_to_evict: &[Layer],
cancel: &CancellationToken,
) -> anyhow::Result<Vec<Option<Result<(), EvictionError>>>> { ) -> anyhow::Result<Vec<Option<Result<(), EvictionError>>>> {
// ensure that the layers have finished uploading // ensure that the layers have finished uploading
// (don't hold the layer_removal_cs while we do it, we're not removing anything yet) // (don't hold the layer_removal_cs while we do it, we're not removing anything yet)
@@ -1157,7 +1233,7 @@ impl Timeline {
}; };
tokio::select! { tokio::select! {
_ = cancel.cancelled() => {}, _ = self.cancel.cancelled() => {},
_ = join => {} _ = join => {}
} }
@@ -1267,6 +1343,7 @@ impl Timeline {
initial_logical_size_can_start: Option<completion::Barrier>, initial_logical_size_can_start: Option<completion::Barrier>,
initial_logical_size_attempt: Option<completion::Completion>, initial_logical_size_attempt: Option<completion::Completion>,
state: TimelineState, state: TimelineState,
cancel: CancellationToken,
) -> Arc<Self> { ) -> Arc<Self> {
let disk_consistent_lsn = metadata.disk_consistent_lsn(); let disk_consistent_lsn = metadata.disk_consistent_lsn();
let (state, _) = watch::channel(state); let (state, _) = watch::channel(state);
@@ -1367,6 +1444,8 @@ impl Timeline {
initial_logical_size_can_start, initial_logical_size_can_start,
initial_logical_size_attempt: Mutex::new(initial_logical_size_attempt), initial_logical_size_attempt: Mutex::new(initial_logical_size_attempt),
cancel,
gate: Gate::new(format!("Timeline<{tenant_id}/{timeline_id}>")),
}; };
result.repartition_threshold = result.repartition_threshold =
result.get_checkpoint_distance() / REPARTITION_FREQ_IN_CHECKPOINT_DISTANCE; result.get_checkpoint_distance() / REPARTITION_FREQ_IN_CHECKPOINT_DISTANCE;
@@ -1382,7 +1461,7 @@ impl Timeline {
let mut flush_loop_state = self.flush_loop_state.lock().unwrap(); let mut flush_loop_state = self.flush_loop_state.lock().unwrap();
match *flush_loop_state { match *flush_loop_state {
FlushLoopState::NotStarted => (), FlushLoopState::NotStarted => (),
FlushLoopState::Running { .. } => { FlushLoopState::Running => {
info!( info!(
"skipping attempt to start flush_loop twice {}/{}", "skipping attempt to start flush_loop twice {}/{}",
self.tenant_id, self.timeline_id self.tenant_id, self.timeline_id
@@ -1402,12 +1481,7 @@ impl Timeline {
let self_clone = Arc::clone(self); let self_clone = Arc::clone(self);
debug!("spawning flush loop"); debug!("spawning flush loop");
*flush_loop_state = FlushLoopState::Running { *flush_loop_state = FlushLoopState::Running;
#[cfg(test)]
expect_initdb_optimization: false,
#[cfg(test)]
initdb_optimization_count: 0,
};
task_mgr::spawn( task_mgr::spawn(
task_mgr::BACKGROUND_RUNTIME.handle(), task_mgr::BACKGROUND_RUNTIME.handle(),
task_mgr::TaskKind::LayerFlushTask, task_mgr::TaskKind::LayerFlushTask,
@@ -1419,7 +1493,7 @@ impl Timeline {
let background_ctx = RequestContext::todo_child(TaskKind::LayerFlushTask, DownloadBehavior::Error); let background_ctx = RequestContext::todo_child(TaskKind::LayerFlushTask, DownloadBehavior::Error);
self_clone.flush_loop(layer_flush_start_rx, &background_ctx).await; self_clone.flush_loop(layer_flush_start_rx, &background_ctx).await;
let mut flush_loop_state = self_clone.flush_loop_state.lock().unwrap(); let mut flush_loop_state = self_clone.flush_loop_state.lock().unwrap();
assert!(matches!(*flush_loop_state, FlushLoopState::Running{ ..})); assert!(matches!(*flush_loop_state, FlushLoopState::Running));
*flush_loop_state = FlushLoopState::Exited; *flush_loop_state = FlushLoopState::Exited;
Ok(()) Ok(())
} }
@@ -1706,12 +1780,8 @@ impl Timeline {
// delay will be terminated by a timeout regardless. // delay will be terminated by a timeout regardless.
let _completion = { self_clone.initial_logical_size_attempt.lock().expect("unexpected initial_logical_size_attempt poisoned").take() }; let _completion = { self_clone.initial_logical_size_attempt.lock().expect("unexpected initial_logical_size_attempt poisoned").take() };
// no extra cancellation here, because nothing really waits for this to complete compared
// to spawn_ondemand_logical_size_calculation.
let cancel = CancellationToken::new();
let calculated_size = match self_clone let calculated_size = match self_clone
.logical_size_calculation_task(lsn, LogicalSizeCalculationCause::Initial, &background_ctx, cancel) .logical_size_calculation_task(lsn, LogicalSizeCalculationCause::Initial, &background_ctx)
.await .await
{ {
Ok(s) => s, Ok(s) => s,
@@ -1780,7 +1850,6 @@ impl Timeline {
lsn: Lsn, lsn: Lsn,
cause: LogicalSizeCalculationCause, cause: LogicalSizeCalculationCause,
ctx: RequestContext, ctx: RequestContext,
cancel: CancellationToken,
) -> oneshot::Receiver<Result<u64, CalculateLogicalSizeError>> { ) -> oneshot::Receiver<Result<u64, CalculateLogicalSizeError>> {
let (sender, receiver) = oneshot::channel(); let (sender, receiver) = oneshot::channel();
let self_clone = Arc::clone(self); let self_clone = Arc::clone(self);
@@ -1801,7 +1870,7 @@ impl Timeline {
false, false,
async move { async move {
let res = self_clone let res = self_clone
.logical_size_calculation_task(lsn, cause, &ctx, cancel) .logical_size_calculation_task(lsn, cause, &ctx)
.await; .await;
let _ = sender.send(res).ok(); let _ = sender.send(res).ok();
Ok(()) // Receiver is responsible for handling errors Ok(()) // Receiver is responsible for handling errors
@@ -1817,58 +1886,28 @@ impl Timeline {
lsn: Lsn, lsn: Lsn,
cause: LogicalSizeCalculationCause, cause: LogicalSizeCalculationCause,
ctx: &RequestContext, ctx: &RequestContext,
cancel: CancellationToken,
) -> Result<u64, CalculateLogicalSizeError> { ) -> Result<u64, CalculateLogicalSizeError> {
span::debug_assert_current_span_has_tenant_and_timeline_id(); span::debug_assert_current_span_has_tenant_and_timeline_id();
let mut timeline_state_updates = self.subscribe_for_state_updates(); let _guard = self.gate.enter();
let self_calculation = Arc::clone(self); let self_calculation = Arc::clone(self);
let mut calculation = pin!(async { let mut calculation = pin!(async {
let cancel = cancel.child_token();
let ctx = ctx.attached_child(); let ctx = ctx.attached_child();
self_calculation self_calculation
.calculate_logical_size(lsn, cause, cancel, &ctx) .calculate_logical_size(lsn, cause, &ctx)
.await .await
}); });
let timeline_state_cancellation = async {
loop {
match timeline_state_updates.changed().await {
Ok(()) => {
let new_state = timeline_state_updates.borrow().clone();
match new_state {
// we're running this job for active timelines only
TimelineState::Active => continue,
TimelineState::Broken { .. }
| TimelineState::Stopping
| TimelineState::Loading => {
break format!("aborted because timeline became inactive (new state: {new_state:?})")
}
}
}
Err(_sender_dropped_error) => {
// can't happen, the sender is not dropped as long as the Timeline exists
break "aborted because state watch was dropped".to_string();
}
}
}
};
let taskmgr_shutdown_cancellation = async {
task_mgr::shutdown_watcher().await;
"aborted because task_mgr shutdown requested".to_string()
};
tokio::select! { tokio::select! {
res = &mut calculation => { res } res = &mut calculation => { res }
reason = timeline_state_cancellation => { _ = self.cancel.cancelled() => {
debug!(reason = reason, "cancelling calculation"); debug!("cancelling logical size calculation for timeline shutdown");
cancel.cancel();
calculation.await calculation.await
} }
reason = taskmgr_shutdown_cancellation => { _ = task_mgr::shutdown_watcher() => {
debug!(reason = reason, "cancelling calculation"); debug!("cancelling logical size calculation for task shutdown");
cancel.cancel();
calculation.await calculation.await
} }
} }
@@ -1882,7 +1921,6 @@ impl Timeline {
&self, &self,
up_to_lsn: Lsn, up_to_lsn: Lsn,
cause: LogicalSizeCalculationCause, cause: LogicalSizeCalculationCause,
cancel: CancellationToken,
ctx: &RequestContext, ctx: &RequestContext,
) -> Result<u64, CalculateLogicalSizeError> { ) -> Result<u64, CalculateLogicalSizeError> {
info!( info!(
@@ -1925,7 +1963,7 @@ impl Timeline {
}; };
let timer = storage_time_metrics.start_timer(); let timer = storage_time_metrics.start_timer();
let logical_size = self let logical_size = self
.get_current_logical_size_non_incremental(up_to_lsn, cancel, ctx) .get_current_logical_size_non_incremental(up_to_lsn, ctx)
.await?; .await?;
debug!("calculated logical size: {logical_size}"); debug!("calculated logical size: {logical_size}");
timer.stop_and_record(); timer.stop_and_record();
@@ -2030,6 +2068,10 @@ impl Timeline {
let mut cont_lsn = Lsn(request_lsn.0 + 1); let mut cont_lsn = Lsn(request_lsn.0 + 1);
'outer: loop { 'outer: loop {
if self.cancel.is_cancelled() {
return Err(PageReconstructError::Cancelled);
}
// The function should have updated 'state' // The function should have updated 'state'
//info!("CALLED for {} at {}: {:?} with {} records, cached {}", key, cont_lsn, result, reconstruct_state.records.len(), cached_lsn); //info!("CALLED for {} at {}: {:?} with {} records, cached {}", key, cont_lsn, result, reconstruct_state.records.len(), cached_lsn);
match result { match result {
@@ -2334,6 +2376,10 @@ impl Timeline {
info!("started flush loop"); info!("started flush loop");
loop { loop {
tokio::select! { tokio::select! {
_ = self.cancel.cancelled() => {
info!("shutting down layer flush task");
break;
},
_ = task_mgr::shutdown_watcher() => { _ = task_mgr::shutdown_watcher() => {
info!("shutting down layer flush task"); info!("shutting down layer flush task");
break; break;
@@ -2345,6 +2391,14 @@ impl Timeline {
let timer = self.metrics.flush_time_histo.start_timer(); let timer = self.metrics.flush_time_histo.start_timer();
let flush_counter = *layer_flush_start_rx.borrow(); let flush_counter = *layer_flush_start_rx.borrow();
let result = loop { let result = loop {
if self.cancel.is_cancelled() {
info!("dropping out of flush loop for timeline shutdown");
// Note: we do not bother transmitting into [`layer_flush_done_tx`], because
// anyone waiting on that will respect self.cancel as well: they will stop
// waiting at the same time as we drop out of this loop.
return;
}
let layer_to_flush = { let layer_to_flush = {
let guard = self.layers.read().await; let guard = self.layers.read().await;
guard.layer_map().frozen_layers.front().cloned() guard.layer_map().frozen_layers.front().cloned()
@@ -2353,9 +2407,18 @@ impl Timeline {
let Some(layer_to_flush) = layer_to_flush else { let Some(layer_to_flush) = layer_to_flush else {
break Ok(()); break Ok(());
}; };
if let Err(err) = self.flush_frozen_layer(layer_to_flush, ctx).await { match self.flush_frozen_layer(layer_to_flush, ctx).await {
error!("could not flush frozen layer: {err:?}"); Ok(()) => {}
break Err(err); Err(FlushLayerError::Cancelled) => {
info!("dropping out of flush loop for timeline shutdown");
return;
}
err @ Err(
FlushLayerError::Other(_) | FlushLayerError::PageReconstructError(_),
) => {
error!("could not flush frozen layer: {err:?}");
break err;
}
} }
}; };
// Notify any listeners that we're done // Notify any listeners that we're done
@@ -2377,7 +2440,7 @@ impl Timeline {
let mut my_flush_request = 0; let mut my_flush_request = 0;
let flush_loop_state = { *self.flush_loop_state.lock().unwrap() }; let flush_loop_state = { *self.flush_loop_state.lock().unwrap() };
if !matches!(flush_loop_state, FlushLoopState::Running { .. }) { if !matches!(flush_loop_state, FlushLoopState::Running) {
anyhow::bail!("cannot flush frozen layers when flush_loop is not running, state is {flush_loop_state:?}") anyhow::bail!("cannot flush frozen layers when flush_loop is not running, state is {flush_loop_state:?}")
} }
@@ -2404,7 +2467,17 @@ impl Timeline {
} }
} }
trace!("waiting for flush to complete"); trace!("waiting for flush to complete");
rx.changed().await?; tokio::select! {
rx_e = rx.changed() => {
rx_e?;
},
// Cancellation safety: we are not leaving an I/O in-flight for the flush, we're just ignoring
// the notification from [`flush_loop`] that it completed.
_ = self.cancel.cancelled() => {
tracing::info!("Cancelled layer flush due on timeline shutdown");
return Ok(())
}
};
trace!("done") trace!("done")
} }
} }
@@ -2419,61 +2492,13 @@ impl Timeline {
self: &Arc<Self>, self: &Arc<Self>,
frozen_layer: Arc<InMemoryLayer>, frozen_layer: Arc<InMemoryLayer>,
ctx: &RequestContext, ctx: &RequestContext,
) -> anyhow::Result<()> { ) -> Result<(), FlushLayerError> {
// As a special case, when we have just imported an image into the repository,
// instead of writing out a L0 delta layer, we directly write out image layer
// files instead. This is possible as long as *all* the data imported into the
// repository have the same LSN.
let lsn_range = frozen_layer.get_lsn_range(); let lsn_range = frozen_layer.get_lsn_range();
let (layers_to_upload, delta_layer_to_add) = let layer = self.create_delta_layer(&frozen_layer, ctx).await?;
if lsn_range.start == self.initdb_lsn && lsn_range.end == Lsn(self.initdb_lsn.0 + 1) {
#[cfg(test)] if self.cancel.is_cancelled() {
match &mut *self.flush_loop_state.lock().unwrap() { return Err(FlushLayerError::Cancelled);
FlushLoopState::NotStarted | FlushLoopState::Exited => { }
panic!("flush loop not running")
}
FlushLoopState::Running {
initdb_optimization_count,
..
} => {
*initdb_optimization_count += 1;
}
}
// Note: The 'ctx' in use here has DownloadBehavior::Error. We should not
// require downloading anything during initial import.
let (partitioning, _lsn) = self
.repartition(self.initdb_lsn, self.get_compaction_target_size(), ctx)
.await?;
// For image layers, we add them immediately into the layer map.
(
self.create_image_layers(&partitioning, self.initdb_lsn, true, ctx)
.await?,
None,
)
} else {
#[cfg(test)]
match &mut *self.flush_loop_state.lock().unwrap() {
FlushLoopState::NotStarted | FlushLoopState::Exited => {
panic!("flush loop not running")
}
FlushLoopState::Running {
expect_initdb_optimization,
..
} => {
assert!(!*expect_initdb_optimization, "expected initdb optimization");
}
}
// Normal case, write out a L0 delta layer file.
// `create_delta_layer` will not modify the layer map.
// We will remove frozen layer and add delta layer in one atomic operation later.
let layer = self.create_delta_layer(&frozen_layer, ctx).await?;
(
// FIXME: even though we have a single image and single delta layer assumption
// we push them to vec
vec![layer.clone()],
Some(layer),
)
};
let disk_consistent_lsn = Lsn(lsn_range.end.0 - 1); let disk_consistent_lsn = Lsn(lsn_range.end.0 - 1);
let old_disk_consistent_lsn = self.disk_consistent_lsn.load(); let old_disk_consistent_lsn = self.disk_consistent_lsn.load();
@@ -2484,18 +2509,21 @@ impl Timeline {
let metadata = { let metadata = {
let mut guard = self.layers.write().await; let mut guard = self.layers.write().await;
guard.finish_flush_l0_layer(delta_layer_to_add.as_ref(), &frozen_layer, &self.metrics); if self.cancel.is_cancelled() {
return Err(FlushLayerError::Cancelled);
}
guard.finish_flush_l0_layer(&layer, &frozen_layer, &self.metrics);
if disk_consistent_lsn != old_disk_consistent_lsn { if disk_consistent_lsn != old_disk_consistent_lsn {
assert!(disk_consistent_lsn > old_disk_consistent_lsn); assert!(disk_consistent_lsn > old_disk_consistent_lsn);
self.disk_consistent_lsn.store(disk_consistent_lsn); self.disk_consistent_lsn.store(disk_consistent_lsn);
// Schedule remote uploads that will reflect our new disk_consistent_lsn // Schedule remote uploads that will reflect our new disk_consistent_lsn
Some(self.schedule_uploads(disk_consistent_lsn, layers_to_upload)?) Some(self.schedule_uploads(disk_consistent_lsn, [layer])?)
} else { } else {
None None
} }
// release lock on 'layers'
}; };
// FIXME: between create_delta_layer and the scheduling of the upload in `update_metadata_file`, // FIXME: between create_delta_layer and the scheduling of the upload in `update_metadata_file`,
@@ -2936,13 +2964,10 @@ struct CompactLevel0Phase1StatsBuilder {
new_deltas_size: Option<u64>, new_deltas_size: Option<u64>,
} }
#[serde_as]
#[derive(serde::Serialize)] #[derive(serde::Serialize)]
struct CompactLevel0Phase1Stats { struct CompactLevel0Phase1Stats {
version: u64, version: u64,
#[serde_as(as = "serde_with::DisplayFromStr")]
tenant_id: TenantId, tenant_id: TenantId,
#[serde_as(as = "serde_with::DisplayFromStr")]
timeline_id: TimelineId, timeline_id: TimelineId,
read_lock_acquisition_micros: RecordedDuration, read_lock_acquisition_micros: RecordedDuration,
read_lock_held_spawn_blocking_startup_micros: RecordedDuration, read_lock_held_spawn_blocking_startup_micros: RecordedDuration,
@@ -4369,25 +4394,10 @@ mod tests {
.expect("should had been resident") .expect("should had been resident")
.drop_eviction_guard(); .drop_eviction_guard();
let cancel = tokio_util::sync::CancellationToken::new();
let batch = [layer]; let batch = [layer];
let first = { let first = async { timeline.evict_layer_batch(&rc, &batch).await.unwrap() };
let cancel = cancel.child_token(); let second = async { timeline.evict_layer_batch(&rc, &batch).await.unwrap() };
async {
let cancel = cancel;
timeline
.evict_layer_batch(&rc, &batch, &cancel)
.await
.unwrap()
}
};
let second = async {
timeline
.evict_layer_batch(&rc, &batch, &cancel)
.await
.unwrap()
};
let (first, second) = tokio::join!(first, second); let (first, second) = tokio::join!(first, second);

View File

@@ -17,6 +17,7 @@ use crate::{
deletion_queue::DeletionQueueClient, deletion_queue::DeletionQueueClient,
task_mgr::{self, TaskKind}, task_mgr::{self, TaskKind},
tenant::{ tenant::{
debug_assert_current_span_has_tenant_and_timeline_id,
metadata::TimelineMetadata, metadata::TimelineMetadata,
remote_timeline_client::{ remote_timeline_client::{
self, PersistIndexPartWithDeletedFlagError, RemoteTimelineClient, self, PersistIndexPartWithDeletedFlagError, RemoteTimelineClient,
@@ -30,6 +31,11 @@ use super::{Timeline, TimelineResources};
/// Now that the Timeline is in Stopping state, request all the related tasks to shut down. /// Now that the Timeline is in Stopping state, request all the related tasks to shut down.
async fn stop_tasks(timeline: &Timeline) -> Result<(), DeleteTimelineError> { async fn stop_tasks(timeline: &Timeline) -> Result<(), DeleteTimelineError> {
debug_assert_current_span_has_tenant_and_timeline_id();
// Notify any timeline work to drop out of loops/requests
tracing::debug!("Cancelling CancellationToken");
timeline.cancel.cancel();
// Stop the walreceiver first. // Stop the walreceiver first.
debug!("waiting for wal receiver to shutdown"); debug!("waiting for wal receiver to shutdown");
let maybe_started_walreceiver = { timeline.walreceiver.lock().unwrap().take() }; let maybe_started_walreceiver = { timeline.walreceiver.lock().unwrap().take() };
@@ -74,6 +80,11 @@ async fn stop_tasks(timeline: &Timeline) -> Result<(), DeleteTimelineError> {
"failpoint: timeline-delete-before-index-deleted-at" "failpoint: timeline-delete-before-index-deleted-at"
))? ))?
}); });
tracing::debug!("Waiting for gate...");
timeline.gate.close().await;
tracing::debug!("Shutdown complete");
Ok(()) Ok(())
} }

View File

@@ -277,10 +277,7 @@ impl Timeline {
Some(c) => c, Some(c) => c,
}; };
let results = match self let results = match self.evict_layer_batch(remote_client, &candidates).await {
.evict_layer_batch(remote_client, &candidates, cancel)
.await
{
Err(pre_err) => { Err(pre_err) => {
stats.errors += candidates.len(); stats.errors += candidates.len();
error!("could not do any evictions: {pre_err:#}"); error!("could not do any evictions: {pre_err:#}");
@@ -329,8 +326,7 @@ impl Timeline {
match state.last_layer_access_imitation { match state.last_layer_access_imitation {
Some(ts) if ts.elapsed() < inter_imitate_period => { /* no need to run */ } Some(ts) if ts.elapsed() < inter_imitate_period => { /* no need to run */ }
_ => { _ => {
self.imitate_timeline_cached_layer_accesses(cancel, ctx) self.imitate_timeline_cached_layer_accesses(ctx).await;
.await;
state.last_layer_access_imitation = Some(tokio::time::Instant::now()) state.last_layer_access_imitation = Some(tokio::time::Instant::now())
} }
} }
@@ -344,20 +340,7 @@ impl Timeline {
// Make one of the tenant's timelines draw the short straw and run the calculation. // Make one of the tenant's timelines draw the short straw and run the calculation.
// The others wait until the calculation is done so that they take into account the // The others wait until the calculation is done so that they take into account the
// imitated accesses that the winner made. // imitated accesses that the winner made.
// let tenant = match crate::tenant::mgr::get_tenant(self.tenant_id, true) {
// It is critical we are responsive to cancellation here. Otherwise, we deadlock with
// tenant deletion (holds TENANTS in read mode) any other task that attempts to
// acquire TENANTS in write mode before we here call get_tenant.
// See https://github.com/neondatabase/neon/issues/5284.
let res = tokio::select! {
_ = cancel.cancelled() => {
return ControlFlow::Break(());
}
res = crate::tenant::mgr::get_tenant(self.tenant_id, true) => {
res
}
};
let tenant = match res {
Ok(t) => t, Ok(t) => t,
Err(_) => { Err(_) => {
return ControlFlow::Break(()); return ControlFlow::Break(());
@@ -383,21 +366,12 @@ impl Timeline {
/// Recompute the values which would cause on-demand downloads during restart. /// Recompute the values which would cause on-demand downloads during restart.
#[instrument(skip_all)] #[instrument(skip_all)]
async fn imitate_timeline_cached_layer_accesses( async fn imitate_timeline_cached_layer_accesses(&self, ctx: &RequestContext) {
&self,
cancel: &CancellationToken,
ctx: &RequestContext,
) {
let lsn = self.get_last_record_lsn(); let lsn = self.get_last_record_lsn();
// imitate on-restart initial logical size // imitate on-restart initial logical size
let size = self let size = self
.calculate_logical_size( .calculate_logical_size(lsn, LogicalSizeCalculationCause::EvictionTaskImitation, ctx)
lsn,
LogicalSizeCalculationCause::EvictionTaskImitation,
cancel.clone(),
ctx,
)
.instrument(info_span!("calculate_logical_size")) .instrument(info_span!("calculate_logical_size"))
.await; .await;

View File

@@ -164,7 +164,7 @@ impl LayerManager {
/// Flush a frozen layer and add the written delta layer to the layer map. /// Flush a frozen layer and add the written delta layer to the layer map.
pub(crate) fn finish_flush_l0_layer( pub(crate) fn finish_flush_l0_layer(
&mut self, &mut self,
delta_layer: Option<&ResidentLayer>, delta_layer: &ResidentLayer,
frozen_layer_for_check: &Arc<InMemoryLayer>, frozen_layer_for_check: &Arc<InMemoryLayer>,
metrics: &TimelineMetrics, metrics: &TimelineMetrics,
) { ) {
@@ -179,12 +179,14 @@ impl LayerManager {
// layer to disk at the same time, that would not work. // layer to disk at the same time, that would not work.
assert_eq!(Arc::as_ptr(&inmem), Arc::as_ptr(frozen_layer_for_check)); assert_eq!(Arc::as_ptr(&inmem), Arc::as_ptr(frozen_layer_for_check));
if let Some(l) = delta_layer { let mut updates = self.layer_map.batch_update();
let mut updates = self.layer_map.batch_update(); Self::insert_historic_layer(
Self::insert_historic_layer(l.as_ref().clone(), &mut updates, &mut self.layer_fmgr); delta_layer.as_ref().clone(),
metrics.record_new_file_metrics(l.layer_desc().file_size); &mut updates,
updates.flush(); &mut self.layer_fmgr,
} );
metrics.record_new_file_metrics(delta_layer.layer_desc().file_size);
updates.flush();
} }
/// Called when compaction is completed. /// Called when compaction is completed.

View File

@@ -426,7 +426,7 @@ impl ConnectionManagerState {
timeline, timeline,
new_sk.wal_source_connconf, new_sk.wal_source_connconf,
events_sender, events_sender,
cancellation, cancellation.clone(),
connect_timeout, connect_timeout,
ctx, ctx,
node_id, node_id,
@@ -447,7 +447,14 @@ impl ConnectionManagerState {
} }
WalReceiverError::Other(e) => { WalReceiverError::Other(e) => {
// give out an error to have task_mgr give it a really verbose logging // give out an error to have task_mgr give it a really verbose logging
Err(e).context("walreceiver connection handling failure") if cancellation.is_cancelled() {
// Ideally we would learn about this via some path other than Other, but
// that requires refactoring all the intermediate layers of ingest code
// that only emit anyhow::Error
Ok(())
} else {
Err(e).context("walreceiver connection handling failure")
}
} }
} }
} }

View File

@@ -19,6 +19,7 @@ use std::io::{Error, ErrorKind, Seek, SeekFrom};
use std::os::unix::fs::FileExt; use std::os::unix::fs::FileExt;
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering}; use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::{RwLock, RwLockWriteGuard}; use std::sync::{RwLock, RwLockWriteGuard};
use utils::fs_ext;
/// ///
/// A virtual file descriptor. You can use this just like std::fs::File, but internally /// A virtual file descriptor. You can use this just like std::fs::File, but internally
@@ -173,37 +174,78 @@ impl OpenFiles {
} }
} }
#[derive(Debug, thiserror::Error)] /// Identify error types that should always terminate the process. Other
pub enum CrashsafeOverwriteError { /// error types may be eligible for retry.
#[error("final path has no parent dir")] pub(crate) fn is_fatal_io_error(e: &std::io::Error) -> bool {
FinalPathHasNoParentDir, use nix::errno::Errno::*;
#[error("remove tempfile")] match e.raw_os_error().map(nix::errno::from_i32) {
RemovePreviousTempfile(#[source] std::io::Error), Some(EIO) => {
#[error("create tempfile")] // Terminate on EIO because we no longer trust the device to store
CreateTempfile(#[source] std::io::Error), // data safely, or to uphold persistence guarantees on fsync.
#[error("write tempfile")] true
WriteContents(#[source] std::io::Error), }
#[error("sync tempfile")] Some(EROFS) => {
SyncTempfile(#[source] std::io::Error), // Terminate on EROFS because a filesystem is usually remounted
#[error("rename tempfile to final path")] // readonly when it has experienced some critical issue, so the same
RenameTempfileToFinalPath(#[source] std::io::Error), // logic as EIO applies.
#[error("open final path parent dir")] true
OpenFinalPathParentDir(#[source] std::io::Error), }
#[error("sync final path parent dir")] Some(EACCES) => {
SyncFinalPathParentDir(#[source] std::io::Error), // Terminate on EACCES because we should always have permissions
// for our own data dir: if we don't, then we can't do our job and
// need administrative intervention to fix permissions. Terminating
// is the best way to make sure we stop cleanly rather than going
// into infinite retry loops, and will make it clear to the outside
// world that we need help.
true
}
_ => {
// Treat all other local file I/O errors as retryable. This includes:
// - ENOSPC: we stay up and wait for eviction to free some space
// - EINVAL, EBADF, EBADFD: this is a code bug, not a filesystem/hardware issue
// - WriteZero, Interrupted: these are used internally by VirtualFile
false
}
}
} }
impl CrashsafeOverwriteError {
/// Returns true iff the new contents are durably stored. /// Call this when the local filesystem gives us an error with an external
pub fn are_new_contents_durable(&self) -> bool { /// cause: this includes EIO, EROFS, and EACCESS: all these indicate either
/// bad storage or bad configuration, and we can't fix that from inside
/// a running process.
pub(crate) fn on_fatal_io_error(e: &std::io::Error, context: &str) -> ! {
tracing::error!("Fatal I/O error: {e}: {context})");
std::process::abort();
}
pub(crate) trait MaybeFatalIo<T> {
fn maybe_fatal_err(self, context: &str) -> std::io::Result<T>;
fn fatal_err(self, context: &str) -> T;
}
impl<T> MaybeFatalIo<T> for std::io::Result<T> {
/// Terminate the process if the result is an error of a fatal type, else pass it through
///
/// This is appropriate for writes, where we typically want to die on EIO/EACCES etc, but
/// not on ENOSPC.
fn maybe_fatal_err(self, context: &str) -> std::io::Result<T> {
if let Err(e) = &self {
if is_fatal_io_error(e) {
on_fatal_io_error(e, context);
}
}
self
}
/// Terminate the process on any I/O error.
///
/// This is appropriate for reads on files that we know exist: they should always work.
fn fatal_err(self, context: &str) -> T {
match self { match self {
Self::FinalPathHasNoParentDir => false, Ok(v) => v,
Self::RemovePreviousTempfile(_) => false, Err(e) => {
Self::CreateTempfile(_) => false, on_fatal_io_error(&e, context);
Self::WriteContents(_) => false, }
Self::SyncTempfile(_) => false,
Self::RenameTempfileToFinalPath(_) => false,
Self::OpenFinalPathParentDir(_) => false,
Self::SyncFinalPathParentDir(_) => true,
} }
} }
} }
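A hedged usage sketch for the trait above (the `persist` function, its arguments, and the module path are made up for the example): `maybe_fatal_err` lets retryable errors such as ENOSPC flow back to the caller while aborting on EIO/EROFS/EACCES, and `fatal_err` aborts on any error.

use crate::virtual_file::MaybeFatalIo; // assumed module path for the trait

fn persist(path: &std::path::Path, buf: &[u8]) -> std::io::Result<()> {
    // Fatal errno values (EIO, EROFS, EACCES) abort the process here;
    // everything else (e.g. ENOSPC) is returned for the caller to handle.
    std::fs::write(path, buf).maybe_fatal_err("write layer file")?;

    // Reading back a file we just created is expected to always succeed,
    // so any failure terminates the process.
    let _meta = std::fs::metadata(path).fatal_err("stat layer file");
    Ok(())
}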
@@ -284,15 +326,13 @@ impl VirtualFile {
final_path: &Utf8Path, final_path: &Utf8Path,
tmp_path: &Utf8Path, tmp_path: &Utf8Path,
content: &[u8], content: &[u8],
) -> Result<(), CrashsafeOverwriteError> { ) -> std::io::Result<()> {
let Some(final_path_parent) = final_path.parent() else { let Some(final_path_parent) = final_path.parent() else {
return Err(CrashsafeOverwriteError::FinalPathHasNoParentDir); return Err(std::io::Error::from_raw_os_error(
nix::errno::Errno::EINVAL as i32,
));
}; };
match std::fs::remove_file(tmp_path) { std::fs::remove_file(tmp_path).or_else(fs_ext::ignore_not_found)?;
Ok(()) => {}
Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
Err(e) => return Err(CrashsafeOverwriteError::RemovePreviousTempfile(e)),
}
let mut file = Self::open_with_options( let mut file = Self::open_with_options(
tmp_path, tmp_path,
OpenOptions::new() OpenOptions::new()
@@ -301,31 +341,20 @@ impl VirtualFile {
// we bail out instead of causing damage. // we bail out instead of causing damage.
.create_new(true), .create_new(true),
) )
.await .await?;
.map_err(CrashsafeOverwriteError::CreateTempfile)?; file.write_all(content).await?;
file.write_all(content) file.sync_all().await?;
.await
.map_err(CrashsafeOverwriteError::WriteContents)?;
file.sync_all()
.await
.map_err(CrashsafeOverwriteError::SyncTempfile)?;
drop(file); // before the rename, that's important! drop(file); // before the rename, that's important!
// renames are atomic // renames are atomic
std::fs::rename(tmp_path, final_path) std::fs::rename(tmp_path, final_path)?;
.map_err(CrashsafeOverwriteError::RenameTempfileToFinalPath)?;
// Only open final path parent dirfd now, so that this operation only // Only open final path parent dirfd now, so that this operation only
// ever holds one VirtualFile fd at a time. That's important because // ever holds one VirtualFile fd at a time. That's important because
// the current `find_victim_slot` impl might pick the same slot for both // the current `find_victim_slot` impl might pick the same slot for both
// VirtualFiles, and it eventually does a blocking write lock instead of // VirtualFiles, and it eventually does a blocking write lock instead of
// try_lock. // try_lock.
let final_parent_dirfd = let final_parent_dirfd =
Self::open_with_options(final_path_parent, OpenOptions::new().read(true)) Self::open_with_options(final_path_parent, OpenOptions::new().read(true)).await?;
.await final_parent_dirfd.sync_all().await?;
.map_err(CrashsafeOverwriteError::OpenFinalPathParentDir)?;
final_parent_dirfd
.sync_all()
.await
.map_err(CrashsafeOverwriteError::SyncFinalPathParentDir)?;
Ok(()) Ok(())
} }
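For reference, the same crash-safe overwrite recipe expressed with plain `std` calls (a sketch only; the code above goes through `VirtualFile` so the number of open file descriptors stays bounded): write the new contents to a temp file, fsync it, rename it over the final path, then fsync the parent directory so the rename itself is durable.

use std::io::Write;
use std::path::Path;

fn overwrite_crashsafe(final_path: &Path, tmp_path: &Path, content: &[u8]) -> std::io::Result<()> {
    let parent = final_path
        .parent()
        .ok_or_else(|| std::io::Error::from(std::io::ErrorKind::InvalidInput))?;

    // A temp file left over from a previous crash is harmless; ignore NotFound.
    match std::fs::remove_file(tmp_path) {
        Ok(()) => {}
        Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
        Err(e) => return Err(e),
    }

    // Write the new contents and make them durable before the rename.
    let mut file = std::fs::OpenOptions::new()
        .write(true)
        .create_new(true)
        .open(tmp_path)?;
    file.write_all(content)?;
    file.sync_all()?;
    drop(file);

    // Renames are atomic; fsync the directory so the rename survives a crash.
    std::fs::rename(tmp_path, final_path)?;
    std::fs::File::open(parent)?.sync_all()?;
    Ok(())
}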

Some files were not shown because too many files have changed in this diff.