Compare commits

...

131 Commits

Author SHA1 Message Date
Arseny Sher
23644ed251 set pageserver id in dockerfile 2022-02-23 09:17:45 +03:00
Dmitry Rodionov
99e0f07a1d review adjustments, fancy enum for builder, minor cleanups 2022-02-23 08:33:50 +03:00
Dmitry Rodionov
5d490babf8 add node id to pageserver
This adds a node id parameter to the pageserver configuration. I also use a
simple builder to construct the pageserver config struct, to avoid setting
the node id to some temporary invalid value. Some of the changes in test
fixtures are needed to split the init and start operations for the environment.
2022-02-23 08:33:50 +03:00
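A minimal sketch of the builder idea described above, assuming hypothetical type and field names (the real pageserver config has many more fields): build() fails if the node id was never supplied, so the config is never constructed with a temporary invalid id.

    // Sketch only: names are illustrative, not the actual pageserver types.
    #[derive(Debug, Clone, Copy)]
    struct NodeId(u64);

    struct PageServerConf {
        node_id: NodeId,
        listen_addr: String,
    }

    #[derive(Default)]
    struct PageServerConfBuilder {
        node_id: Option<NodeId>,
        listen_addr: Option<String>,
    }

    impl PageServerConfBuilder {
        fn node_id(mut self, id: u64) -> Self {
            self.node_id = Some(NodeId(id));
            self
        }
        fn listen_addr(mut self, addr: &str) -> Self {
            self.listen_addr = Some(addr.to_owned());
            self
        }
        // Fails instead of falling back to an invalid placeholder id.
        fn build(self) -> Result<PageServerConf, String> {
            Ok(PageServerConf {
                node_id: self.node_id.ok_or("node id is not set")?,
                listen_addr: self
                    .listen_addr
                    .unwrap_or_else(|| "127.0.0.1:64000".to_owned()),
            })
        }
    }

    fn main() {
        let conf = PageServerConfBuilder::default()
            .node_id(10)
            .listen_addr("127.0.0.1:64000")
            .build()
            .expect("config is complete");
        println!("starting pageserver {:?} on {}", conf.node_id, conf.listen_addr);
    }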
Arseny Sher
5865f85ae2 Add --id argument to safekeeper setting its unique u64 id.
In preparation for storage node messaging. IDs are supposed to be monotonically
assigned by the console. In tests the id is issued by ZenithEnv; at the zenith cli
level and in fixtures, the string name is completely replaced by the integer id.
Example TOML configs are adjusted accordingly.

Sequential ids are chosen over Zid mainly because they are compact and easy to
type/remember.
2022-02-23 08:33:50 +03:00
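A minimal sketch of what accepting such an argument can look like, assuming plain std argument parsing (the actual safekeeper's CLI handling differs):

    // Sketch only: parse a required `--id <u64>` argument.
    fn parse_id() -> Result<u64, String> {
        let mut args = std::env::args().skip(1);
        while let Some(arg) = args.next() {
            if arg == "--id" {
                let val = args.next().ok_or("--id requires a value")?;
                return val.parse::<u64>().map_err(|e| e.to_string());
            }
        }
        Err("--id is required".to_owned())
    }

    fn main() {
        match parse_id() {
            Ok(id) => println!("safekeeper id: {}", id),
            Err(e) => eprintln!("error: {}", e),
        }
    }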
Dhammika Pathirana
b815f5fb9f Add no_sync check in storage
Signed-off-by: Dhammika Pathirana <dhammika@gmail.com>
2022-02-22 12:01:12 -08:00
anastasia
74a0942a77 Fix zenith feedback processing at compute node.
Add test for backpressure
2022-02-22 13:56:21 +03:00
anastasia
1a4682a04a Add 'walreceiver-after-ingest' failpoint. Use sleep at this point to imitate slow walreceiver. 2022-02-22 13:56:21 +03:00
Heikki Linnakangas
993b544ad0 Change default parameters for back pressure
Fixes issue #1238 and #1189. Extracted from PR #1194, with some comment
editorialization by me.

Author: Konstantin Knizhnik <knizhnik@zenith.tech>
2022-02-22 13:56:21 +03:00
Arthur Petukhovsky
dba1d36a4a Refactor WAL utils in safekeeper (#1290)
wal_storage.rs was split out of timeline.rs, safekeeper.rs and send_wal.rs,
and now contains all WAL-related code from the safekeeper. There is now
PhysicalStorage for persisting WAL to disk and WalReader for reading it.
This allows optimizing PhysicalStorage without affecting much of the other
code.

Also there is a separate structure for persisting control file now in
control_file.rs.
2022-02-21 17:20:53 +03:00
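A minimal sketch of the resulting split, with simplified types (the real wal_storage.rs deals with segment files, LSNs and much more): the write path and the read path are separate structs, so the write path can be optimized in isolation.

    use std::fs::{File, OpenOptions};
    use std::io::{Read, Seek, SeekFrom, Write};

    // Owns the write path: append WAL, then make it durable.
    struct PhysicalStorage {
        file: File,
        write_lsn: u64, // position up to which WAL was appended
    }

    impl PhysicalStorage {
        fn write_wal(&mut self, startpos: u64, buf: &[u8]) -> std::io::Result<()> {
            self.file.seek(SeekFrom::Start(startpos))?;
            self.file.write_all(buf)?;
            self.write_lsn = startpos + buf.len() as u64;
            Ok(())
        }
        fn flush_wal(&mut self) -> std::io::Result<()> {
            self.file.sync_all() // fsync the appended WAL
        }
    }

    // Owns the read path, independent of how writes are batched.
    struct WalReader {
        file: File,
        pos: u64,
    }

    impl WalReader {
        fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
            self.file.seek(SeekFrom::Start(self.pos))?;
            let n = self.file.read(buf)?;
            self.pos += n as u64;
            Ok(n)
        }
    }

    fn main() -> std::io::Result<()> {
        let path = std::env::temp_dir().join("wal_storage_sketch");
        let file = OpenOptions::new().read(true).write(true).create(true).open(&path)?;
        let mut storage = PhysicalStorage { file: file.try_clone()?, write_lsn: 0 };
        storage.write_wal(0, b"some WAL bytes")?;
        storage.flush_wal()?;
        let mut reader = WalReader { file, pos: 0 };
        let mut buf = [0u8; 14];
        reader.read(&mut buf)?;
        assert_eq!(&buf, b"some WAL bytes");
        Ok(())
    }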
Bojan Serafimov
ca81a550ef Fmt 2022-02-21 16:43:28 +03:00
Bojan Serafimov
65a0b2736b Add static router 2022-02-21 16:43:28 +03:00
Bojan Serafimov
cca886682b Undo cplane change 2022-02-21 16:43:28 +03:00
Bojan Serafimov
c8f47cd38e Fix param name 2022-02-21 16:43:28 +03:00
Bojan Serafimov
92787159f7 Add client auth method option 2022-02-21 16:43:28 +03:00
anastasia
abb422d5de Fix SafekeeperMetrics parsing in python tests 2022-02-21 13:45:22 +03:00
bojanserafimov
fdc15de8b2 Add perf test: test_random_writes (#1292) 2022-02-18 15:46:29 -05:00
Stas Kelvich
207286f2b8 Actualize branching parts of openapi spec.
The previous version of the spec caused parsing errors in generated clients
because the return type is an object, not an array; one field was also missing.
In passing, set `format: hex` on ancestor_id too, as the value conforms to
that format.
2022-02-18 20:22:21 +02:00
Dhammika Pathirana
d2b896381a Add safekeeper tenant tags in lsn/wal metrics
Signed-off-by: Dhammika Pathirana <dhammika@gmail.com>

Add tenant_id in lsn/wal metrics (#1234)
2022-02-18 08:26:37 -08:00
Dhammika Pathirana
009f6d4ae8 Fix safekeeper metric tags
Signed-off-by: Dhammika Pathirana <dhammika@gmail.com>

Use separate tags in sk storage file histo (#1234)
2022-02-18 08:26:37 -08:00
Kirill Bulatov
1b31379456 Log postgres errors with ERROR level 2022-02-17 13:42:09 +02:00
Bojan Serafimov
4c64b10aec Revert removal of ignore hint 2022-02-17 13:41:49 +02:00
Bojan Serafimov
ad262a46ad Remove redundant pytest_plugins assignment 2022-02-17 13:41:49 +02:00
Kirill Bulatov
ce533835e5 Use uuid.UUID types for tenants and timelines more 2022-02-17 13:41:19 +02:00
Kirill Bulatov
e5bf520b18 Use types in zenith cli invocations in Python tests 2022-02-17 13:41:19 +02:00
Dmitry Rodionov
9512e21b9e fix python formatting 2022-02-17 13:22:14 +03:00
Dmitry Ivanov
a26d565282 [proxy] Replace private static map with a public CancelMap
This is a cleaner approach which might facilitate testing.
2022-02-17 11:54:27 +03:00
Dmitry Ivanov
a47dade622 [proxy] Migrate to async
This change makes most parts of the code asynchronous, except
for the `mgmt` subsystem (we're going to drop it anyway).

Co-authored-by: bojanserafimov <bojan.serafimov7@gmail.com>
2022-02-17 11:54:27 +03:00
Dmitry Rodionov
9cce430430 remove several obsolete management api commands from pageserver's libpq
api

these commands are now available via http api
2022-02-17 11:26:28 +03:00
Dhammika Pathirana
4bf4bacf01 Add cli start/stop test
Signed-off-by: Dhammika Pathirana <dhammika@gmail.com>

Add a test for #1260
2022-02-16 13:19:12 -08:00
bojanserafimov
335abfcc28 Add slow seqscan perf test (#1283) 2022-02-16 10:59:51 -05:00
bojanserafimov
afb3342e46 Add vanilla pg baseline tests (#1275) 2022-02-15 13:44:22 -05:00
Kirill Bulatov
5563ff123f Reuse tenant-timeline id struct from utils 2022-02-15 17:45:23 +02:00
Dhammika Pathirana
0a557b2fa9 Add cli v4 loopback listener ports test
Signed-off-by: Dhammika Pathirana <dhammika@gmail.com>

Add a test for #1247
2022-02-15 17:01:22 +02:00
Heikki Linnakangas
9632c352ab Avoid having multiple records for the same page and LSN.
If a heap UPDATE record modified two pages, and both pages needed to have
their VM bits cleared, and the VM bits were located on the same VM page,
we would emit two ZenithWalRecord::ClearVisibilityMapFlags records for
the same VM page. That produced warnings like this in the pageserver log:

    Page version Wal(ClearVisibilityMapFlags { heap_blkno: 18, flags: 3 }) of rel 1663/13949/2619_vm blk 0 at 2A/346046A0 already exists

To fix, change ClearVisibilityMapFlags so that it can update the bits
for both pages as one operation.

This was already covered by several python tests, so no need to add a
new one. Fixes #1125.

Co-authored-by: Konstantin Knizhnik <knizhnik@zenith.tech>
2022-02-15 14:26:16 +02:00
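A minimal sketch of the record shape after the fix, with guessed field names and the standard 2-bits-per-heap-block visibility map layout; the real ZenithWalRecord details differ:

    // One record can clear VM bits for both heap pages of an UPDATE.
    enum ZenithWalRecord {
        ClearVisibilityMapFlags {
            new_heap_blkno: Option<u32>,
            old_heap_blkno: Option<u32>,
            flags: u8,
        },
    }

    // Each heap block is tracked by 2 bits in the VM, i.e. 4 blocks per byte.
    fn clear_vm_bits(vm_page: &mut [u8], heap_blkno: u32, flags: u8) {
        let byte = (heap_blkno / 4) as usize;
        let shift = (heap_blkno % 4) * 2;
        vm_page[byte] &= !(flags << shift);
    }

    fn apply(rec: &ZenithWalRecord, vm_page: &mut [u8]) {
        let ZenithWalRecord::ClearVisibilityMapFlags { new_heap_blkno, old_heap_blkno, flags } = rec;
        // both pages are handled as one operation, so only one record
        // per VM page and LSN is ever emitted
        for blkno in [new_heap_blkno, old_heap_blkno] {
            if let Some(b) = blkno {
                clear_vm_bits(vm_page, *b, *flags);
            }
        }
    }

    fn main() {
        let mut vm_page = [0xffu8; 8192];
        let rec = ZenithWalRecord::ClearVisibilityMapFlags {
            new_heap_blkno: Some(18),
            old_heap_blkno: Some(17),
            flags: 0x03,
        };
        apply(&rec, &mut vm_page);
        assert_eq!(vm_page[4], 0b1100_0011); // bits of blocks 17 and 18 cleared
    }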
Arseny Sher
328e3b4189 bump vendor/postgres to fix compiler warnings 2022-02-15 06:51:16 +03:00
Arseny Sher
47f6a1f9a8 Add -Werror to CI builds. 2022-02-15 06:51:16 +03:00
Dmitry Rodionov
a4829712f4 merge directories in git-upload instead of removing existing files for perf test result uploads 2022-02-15 03:47:06 +03:00
Arseny Sher
d4d26f619d bump vendor/postgres to fix compilation warning 2022-02-14 21:00:11 +03:00
Arseny Sher
36481f3374 bump vendor/postgres to init pgxactoff in walproposer
ref #1244
2022-02-14 15:57:38 +03:00
Dhammika Pathirana
d951dd8977 Fix cli start (#1260)
Signed-off-by: Dhammika Pathirana <dhammika@gmail.com>
2022-02-10 18:36:02 -05:00
bojanserafimov
ea13838be7 Add pgbench baseline test (#1204)
Co-authored-by: Heikki Linnakangas <heikki.linnakangas@iki.fi>
2022-02-10 15:33:36 -05:00
Dmitry Rodionov
b51f23cdf0 pass perf test cluster connstr to circle ci jobs 2022-02-10 17:49:54 +03:00
Kirill Bulatov
3cfcdb92ed Fix tokio features in zenith utils to enable its standalone compilation 2022-02-10 08:33:22 -05:00
Kirill Bulatov
d7af965982 Do not leak decoding_key in JwtAuth's Debug representation 2022-02-10 08:33:22 -05:00
Kirill Bulatov
7c1c7702d2 Code review fixes 2022-02-10 08:33:22 -05:00
Kirill Bulatov
6eef401602 Move routerify behind zenith_utils 2022-02-10 08:33:22 -05:00
Kirill Bulatov
c5b5905ed3 Remove parking_lot dependency from workspace 2022-02-10 08:33:22 -05:00
Kirill Bulatov
76b74349cb Bump pageserver dependencies 2022-02-10 08:33:22 -05:00
Dmitry Rodionov
b08e340f60 point perf results back from testing to master 2022-02-10 14:18:34 +03:00
Dmitry Rodionov
a25fa29bc9 modify git-upload for generate_and_push_perf_report.sh needs 2022-02-10 13:12:19 +03:00
Dmitry Rodionov
ccf3c8cc30 store performance test results in our staging cluster to be able to
visualize them in grafana
2022-02-10 13:12:19 +03:00
Heikki Linnakangas
c45ee13b4e Bump vendor/postgres, to fix memory leak.
See https://github.com/zenithdb/postgres/pull/129
2022-02-10 11:29:38 +02:00
anastasia
f1e7db9d0d Bump vendor/postgres rebased to 14.2 2022-02-10 11:19:10 +03:00
Heikki Linnakangas
fa8a6c0e94 Reduce logging of walkeeper normal operations.
It was printing a lot of stuff to the log at INFO level, for routine
things like receiving or sending messages. Reduce the noise. The amount
of logging was excessive, and it was also consuming a fair amount of CPU
(about 20% of the safekeeper's CPU usage in a little test I ran).
2022-02-10 08:34:30 +02:00
Dhammika Pathirana
1e8ca497e0 Fix safekeeper loopback addr (#1247)
Signed-off-by: Dhammika Pathirana <dhammika@gmail.com>
2022-02-10 09:23:53 +03:00
Heikki Linnakangas
a504cc87ab Bump vendor/postgres for "Make getpage requests interruptible"
See https://github.com/zenithdb/zenith/issues/1224
2022-02-09 16:13:46 +02:00
Heikki Linnakangas
5268bbc840 Bump vendor/postgres for fixes to cluster size limit.
See https://github.com/zenithdb/postgres/pull/126
2022-02-09 15:52:21 +02:00
Arseny Sher
e1d770939b Bump vendor/postgres to fix recent CI failure.
See zenithdb/postgres#127
2022-02-09 08:50:45 -05:00
Egor Suvorov
2866a9e82e Fix safekeeper LSN metrics (#1216)
* Always initialize flush_lsn/commit_lsn metrics on a specific timeline, no more `n/a`
* Update flush_lsn metrics missing from cba4da3f4d
* Ensure that flush_lsn found on load is >= both commit_lsn and truncate_lsn
* Add some debug logging
2022-02-07 20:05:16 +03:00
Kirill Bulatov
b67cddb303 Implement EphemeralFile flush in a least dangerous way 2022-02-05 22:02:59 -05:00
anastasia
cb1d84d980 Make test_timeline_size_quota more deterministic 2022-02-06 02:16:36 +03:00
anastasia
642797b69e Implement cluster size quota for zenith compute node.
Use GUC zenith.max_cluster_size to set the limit.

If the limit is reached, extend requests will throw an out-of-space error.
When the current size gets too close to the limit, a warning is thrown.

Add new test: test_timeline_size_quota.
2022-02-06 02:16:36 +03:00
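A minimal sketch of the quota check on the extend path; the GUC name comes from the commit message, while the warning threshold and error text here are made up:

    fn check_cluster_size(current_size: u64, max_cluster_size: Option<u64>) -> Result<(), String> {
        let limit = match max_cluster_size {
            Some(limit) => limit,
            None => return Ok(()), // quota disabled
        };
        if current_size >= limit {
            // extend requests fail with an out-of-space error at the hard limit
            return Err(format!(
                "cluster size {} exceeds zenith.max_cluster_size {}",
                current_size, limit
            ));
        }
        if current_size * 10 >= limit * 9 {
            // warn when within ~10% of the limit (illustrative threshold)
            eprintln!(
                "WARNING: cluster size {} is close to the limit {}",
                current_size, limit
            );
        }
        Ok(())
    }

    fn main() {
        assert!(check_cluster_size(950, Some(1024)).is_ok()); // warns
        assert!(check_cluster_size(2048, Some(1024)).is_err()); // out of space
    }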
Kirill Bulatov
3ed156a5b6 Add a CLI tool to manipulate remote storage blob files 2022-02-05 15:48:08 -05:00
Heikki Linnakangas
2d93b129a0 Avoid eprintln() in pageserver and walkeeper.
Use log::error!() instead. I spotted a few of these "connection error"
lines in the logs, without timestamps and the other stuff we print for
all other log messages.
2022-02-05 17:59:31 +02:00
Arseny Sher
32c7859659 bump vendor/postgres 2022-02-05 01:27:31 +03:00
Arseny Sher
729ac38ea8 Centralize suspending/resuming timeline activity on safekeepers.
A timeline is active whenever there is at least one connection from compute, or
the pageserver is not caught up. Currently 'active' means callmemaybes are being
sent.

Fixes race: now suspend condition checking and callmemaybe unsubscribe happen
under the same lock.
2022-02-03 02:34:10 +03:00
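A minimal sketch of the fixed pattern, with hypothetical state fields: the suspend-condition check and the deactivation happen while one mutex is held, so a new connection cannot arrive in between.

    use std::sync::Mutex;

    struct TimelineState {
        compute_connections: usize,
        pageserver_caught_up: bool,
        active: bool, // callmemaybes are being sent while true
    }

    struct Timeline {
        state: Mutex<TimelineState>,
    }

    impl Timeline {
        fn on_compute_disconnect(&self) {
            let mut s = self.state.lock().unwrap();
            s.compute_connections -= 1;
            // check + unsubscribe under the same lock: this is the race fix
            if s.compute_connections == 0 && s.pageserver_caught_up && s.active {
                s.active = false;
                // ... unsubscribe from callmemaybe here ...
            }
        }
    }

    fn main() {
        let tl = Timeline {
            state: Mutex::new(TimelineState {
                compute_connections: 1,
                pageserver_caught_up: true,
                active: true,
            }),
        };
        tl.on_compute_disconnect();
        assert!(!tl.state.lock().unwrap().active);
    }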
Andrey Taranik
d69b0539ba proxy chart staging values update for labels (#1202) 2022-02-01 13:31:05 +03:00
Dmitry Ivanov
ec78babad2 Use mold instead of default linker 2022-01-28 20:40:50 +03:00
Dmitry Ivanov
9350dfb215 [CI] Merge *.profraw files prior to uploading workspace
Hopefully, this will make the CI pipeline a bit faster.
2022-01-28 19:56:28 +03:00
Dmitry Ivanov
8ac8be5206 [scripts/coverage] Implement merge command
This will drastically decrease the size of CI workspace uploads.
2022-01-28 19:56:28 +03:00
Dmitry Ivanov
c2927353a5 Enable async deserialization of FeMessage
Now it's possible to call Fe{Startup,}Message in both
sync and async contexts, which is good for proxy.

Co-authored-by: bojanserafimov <bojan.serafimov7@gmail.com>
2022-01-28 19:40:37 +03:00
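A minimal sketch of the pattern that makes this possible: parsing operates on an in-memory buffer and never performs I/O itself, so a sync caller and an async caller each fill the buffer their own way and retry on None. The framing follows the Postgres frontend protocol (1 tag byte plus a 4-byte length that counts itself); the names here are illustrative.

    struct FeMessage {
        tag: u8,
        body: Vec<u8>,
    }

    /// Returns None if the buffer does not yet hold a complete message.
    fn parse_fe_message(buf: &mut Vec<u8>) -> Option<FeMessage> {
        if buf.len() < 5 {
            return None; // need tag byte + 4-byte length
        }
        let tag = buf[0];
        let len = u32::from_be_bytes([buf[1], buf[2], buf[3], buf[4]]) as usize;
        if buf.len() < 1 + len {
            return None; // wait for more bytes from the (sync or async) reader
        }
        let body = buf[5..1 + len].to_vec();
        buf.drain(..1 + len);
        Some(FeMessage { tag, body })
    }

    fn main() {
        // 'X' (Terminate): tag byte plus a length of 4 and an empty body
        let mut buf = vec![b'X', 0, 0, 0, 4];
        let msg = parse_fe_message(&mut buf).unwrap();
        assert_eq!(msg.tag, b'X');
        assert!(msg.body.is_empty());
    }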
Kirill Bulatov
33251a9d8f Disable failing remote storage tests for now 2022-01-28 18:35:46 +03:00
Konstantin Knizhnik
c045ae7a9b Fix random range for keys in test_gc_aggressive.py (#1199) 2022-01-28 16:29:55 +03:00
Dmitry Rodionov
602ccb7d5f distinguish failures for pre-initdb lsn and pre-ancestor lsn branching in test_branch_behind 2022-01-28 12:31:15 +03:00
Dmitry Rodionov
5df21e1058 remove Timeline::start_lsn in favor of ancestor_lsn 2022-01-28 12:31:15 +03:00
Konstantin Knizhnik
08135910a5 Fix checkpoint.nextXid update (#1166)
* Fix checkpoint.nextXid update

* Add test for checkpoint.nextXid

* Fix indentation of test_next_xid.py

* Fix mypy error in test_next_xid.py

* Tidy up the test case.

* Add a unit test

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>
2022-01-27 18:21:51 +03:00
Konstantin Knizhnik
f58a22d07e Freeze layers at the same end LSN (#1182)
* Freeze vectors at the same end LSN

* Fix calculation of last LSN for inmem layer

* Do not advance disk_consistent_lsn if no open layer was evicted

* Fix calculation of freeze_end_lsn

* Let start_lsn be larger than oldest_pending_lsn

* Rename 'oldest_pending_lsn' and 'last_lsn', add comments.

* Fix future_layerfiles test

* Update comments concerning oldest_lsn

* Update comments concerning oldest_lsn

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>
2022-01-27 18:21:00 +03:00
Arthur Petukhovsky
cedde559b8 Add test for replacement of the failed safekeeper (#1179)
* Add test to replace failed safekeeper

* Restart safekeepers in test_replace_safekeeper

* Update vendor/postgres
2022-01-27 17:26:55 +03:00
Arthur Petukhovsky
49d1d1ddf9 Don't call adjust_for_wal_acceptors after pg create (#1178)
Now zenith_cli handles the wal_acceptors config internally; if we also
appended wal_acceptors to postgresql.conf in python tests, the file would
contain a duplicate wal_acceptors setting.
2022-01-27 17:23:14 +03:00
Arseny Sher
86045ac36c Prefix per-cluster directory with ztenant_id in safekeeper.
Currently ztimelineids are unique, but all APIs accept the pair, so let's keep
it everywhere for uniformity.

Carry around ZTTId containing both ZTenantId and ZTimelineId for simplicity.

(existing clusters on staging ought to be preprocessed for that)
2022-01-27 17:22:07 +03:00
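A minimal sketch of the combined id, assuming 16-byte ids (the sizes and derives of the real ZTTId may differ):

    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct ZTenantId([u8; 16]);

    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct ZTimelineId([u8; 16]);

    // Carrying both ids together makes the pair the natural map key
    // and directory prefix, even while timeline ids happen to be unique.
    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct ZTTId {
        tenant_id: ZTenantId,
        timeline_id: ZTimelineId,
    }

    fn hex(bytes: &[u8]) -> String {
        bytes.iter().map(|b| format!("{:02x}", b)).collect()
    }

    fn main() {
        let zttid = ZTTId {
            tenant_id: ZTenantId([0x11; 16]),
            timeline_id: ZTimelineId([0x22; 16]),
        };
        // the per-cluster directory is now prefixed with the tenant id
        println!("{}/{}", hex(&zttid.tenant_id.0), hex(&zttid.timeline_id.0));
    }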
Konstantin Knizhnik
79f0e44a20 Gc cutoff rwlock (#1139)
* Reproduce github issue #1047.

* Use RwLock to protect gc_cutoff_lsn

* Reduce number of updates in test_gc_aggressive

* Change test_prohibit_get_page_at_lsn_for_garbage_collected_pages test

* Change test_prohibit_get_page_at_lsn_for_garbage_collected_pages

* Lock latest_gc_cutoff_lsn in all operations accessing storage to prevent race conditions with GC

* Remove random sleep between wait_for_lsn and get_page_at_lsn

* Initialize latest_gc_cutoff with initdb_lsn and remove separate check that lsn >= initdb_lsn

* Update test_prohibit_branch_creation_on_pre_initdb_lsn test

Co-authored-by: Heikki Linnakangas <heikki@zenith.tech>
2022-01-27 14:41:16 +03:00
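A minimal sketch of the RwLock scheme from these bullets, with simplified types: readers validate the request LSN while holding a shared lock, so GC (the only writer) cannot advance the cutoff under them.

    use std::sync::RwLock;

    struct Timeline {
        latest_gc_cutoff_lsn: RwLock<u64>,
    }

    impl Timeline {
        fn get_page_at_lsn(&self, lsn: u64) -> Result<(), String> {
            // shared lock: many readers, but GC has to wait
            let cutoff = self.latest_gc_cutoff_lsn.read().unwrap();
            if lsn < *cutoff {
                return Err(format!("LSN {:x} is below the GC cutoff {:x}", lsn, *cutoff));
            }
            // ... read the page while the shared lock keeps GC out ...
            Ok(())
        }

        fn gc(&self, new_cutoff: u64) {
            let mut cutoff = self.latest_gc_cutoff_lsn.write().unwrap();
            *cutoff = new_cutoff;
            // ... delete data below the cutoff ...
        }
    }

    fn main() {
        let tl = Timeline { latest_gc_cutoff_lsn: RwLock::new(0x1000) };
        assert!(tl.get_page_at_lsn(0x2000).is_ok());
        tl.gc(0x3000);
        assert!(tl.get_page_at_lsn(0x2000).is_err()); // below the new cutoff
    }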
anastasia
c44695f34b bump vendor/postgres 2022-01-27 11:20:45 +03:00
anastasia
5abe2129c6 Extend replication protocol with ZenithFeedback message
to pass current_timeline_size to the compute node.

Put the standby_status_update fields into ZenithFeedback and send them as one message.
Pass value sizes together with keys in the ZenithFeedback message.
2022-01-27 11:20:45 +03:00
Dmitry Rodionov
63dd7bce7e bandaid to avoid concurrent timeline downloading until proper refactoring/fix 2022-01-26 19:54:09 +03:00
Dmitry Rodionov
f3c73f5797 cache python deps in circle ci 2022-01-26 13:01:12 +03:00
Dmitry Rodionov
e6f2d70517 use 2021 rust edition 2022-01-25 18:48:49 +03:00
Andrey Taranik
be6d1cc360 Use zimg as builders (#1165)
* try use own builder images

* add postgres headers before build zenith

* checkout submodule before zenith build

* circleci cleanup
2022-01-25 00:58:37 +03:00
Dmitry Ivanov
703716228e Use &str instead of String in BeMessage::ErrorResponse
There's no need to allocate string literals on the heap.
2022-01-24 18:49:05 +03:00
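A minimal sketch of the change's shape (the real BeMessage has many more variants): borrowing the error text lets callers pass string literals with no allocation, while owned strings can still be lent out.

    enum BeMessage<'a> {
        ErrorResponse(&'a str),
    }

    fn main() {
        // a string literal needs no heap allocation now
        let msg = BeMessage::ErrorResponse("unexpected message type");
        // owned text still works; the message just borrows it
        let owned = format!("tenant {} not found", 42);
        let msg2 = BeMessage::ErrorResponse(&owned);
        for m in [&msg, &msg2] {
            let BeMessage::ErrorResponse(text) = m;
            println!("ERROR: {}", text);
        }
    }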
Dmitry Rodionov
458bc0c838 walkeeper: use named type as a key in callmemaybe subscriptions hashmap 2022-01-24 17:20:15 +03:00
Dmitry Rodionov
39591ef627 reduce flakiness 2022-01-24 17:20:15 +03:00
Dmitry Rodionov
37c440c5d3 Introduce first version of tenant migration between pageservers
This patch includes attach/detach http endpoints in pageservers, some
changes in callmemaybe handling inside the safekeeper, and an integration
test to check migration with and without load. There are still some
rough edges that will be addressed in follow-up patches.
2022-01-24 17:20:15 +03:00
anastasia
81e94d1897 Add LSN and Backpressure descriptions to glossary.md 2022-01-24 12:52:30 +03:00
Konstantin Knizhnik
7bc1274a03 Fix comparison with disk_consistent_lsn in newer_image_layer_exists (#1167) 2022-01-24 12:19:18 +03:00
Dmitry Rodionov
5f5a11525c Switch our python package management solution to poetry.
Mainly because it has better support for installing packages under
different python versions.

It also has a better dependency resolver than Pipenv, and supports the modern
standard for python dependency management, including using pyproject.toml for
project-specific configuration instead of per-tool conf files. See the
following links for details:
 https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/
 https://www.python.org/dev/peps/pep-0518/
2022-01-24 11:33:47 +03:00
Konstantin Knizhnik
e209764877 Do not delete layers beyond cutoff LSN (#1128)
* Do not delete layers beyond cutoff LSN

* Update pageserver/src/layered_repository/layer_map.rs

Co-authored-by: Heikki Linnakangas <heikki.linnakangas@iki.fi>
2022-01-24 10:42:40 +03:00
Kirill Bulatov
65290b2e96 Ensure every submodule compiles on its own 2022-01-21 17:34:15 +03:00
Dmitry Ivanov
127df96635 [proxy] Make NUM_BYTES_PROXIED_COUNTER more precise 2022-01-21 17:31:19 +03:00
Kirill Bulatov
924d8d489a Allow enabling S3 mock in all existing tests with an env var 2022-01-20 18:42:47 +02:00
Dmitry Rodionov
026eb64a83 Use python lib to mock s3 2022-01-20 18:42:47 +02:00
Kirill Bulatov
45124856b1 Better S3 remote storage logging 2022-01-20 18:42:47 +02:00
Kirill Bulatov
38c6f6ce16 Allow specifying custom endpoint in s3 2022-01-20 18:42:47 +02:00
Heikki Linnakangas
caa62eff2a Fix description of proxy --auth-endpoint option. 2022-01-20 14:50:27 +03:00
Dmitry Ivanov
d3542c34f1 Refactoring: use anyhow::Context's methods where possible 2022-01-19 16:33:48 +03:00
Kirill Bulatov
7fb62fc849 Fix macos compilation 2022-01-18 23:01:04 +02:00
Andrey Taranik
9d6ae06663 monitoring turn on for proxy (#1146) 2022-01-18 19:23:53 +03:00
Alexey Kondratov
06c28174c2 Integrate compute_tools into zenith workspace and improve logging (zenithdb/console#487) 2022-01-18 18:47:31 +03:00
bojanserafimov
8af1b43074 proxy: Add new metrics (#1132) 2022-01-14 19:12:43 -05:00
Heikki Linnakangas
17b7caddcb Update vendor/postgres: silence excessive logging from walproposer. 2022-01-14 20:51:02 +02:00
Heikki Linnakangas
dab30c27b6 Refactor thread management and shutdown
This introduces a new module to handle thread creation and shutdown.
All page server threads are now registered in a global hash map, and
there's a function to request individual threads to shut down gracefully.

Thread shutdown request is signalled to the thread with a flag, as well
as a Future that can be used to wake up async operations if shutdown is
requested. Use that facility to have the libpq listener thread respond
to pageserver shutdown, based on Kirill's earlier prototype
(https://github.com/zenithdb/zenith/pull/1088). That addresses
https://github.com/zenithdb/zenith/issues/1036, previously the libpq
listener thread would not exit until one more connection arrives.

This also eliminates a resource leak in the accept() loop. Previously,
we added the JoinHandle of each new thread to a vector, but old handles
for threads that had already exited were never removed.
2022-01-14 18:36:10 +02:00
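A minimal sketch of the registry idea, with made-up names and only the flag half of the shutdown signal (the commit pairs the flag with a Future to wake async code):

    use std::collections::HashMap;
    use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
    use std::sync::{Arc, Mutex};
    use std::thread;

    struct ThreadEntry {
        shutdown: Arc<AtomicBool>,
        handle: thread::JoinHandle<()>,
    }

    #[derive(Default)]
    struct ThreadRegistry {
        inner: Mutex<HashMap<u64, ThreadEntry>>,
        next_id: AtomicU64,
    }

    impl ThreadRegistry {
        fn spawn<F>(&self, f: F) -> u64
        where
            F: FnOnce(Arc<AtomicBool>) + Send + 'static,
        {
            let id = self.next_id.fetch_add(1, Ordering::Relaxed);
            let shutdown = Arc::new(AtomicBool::new(false));
            let flag = shutdown.clone();
            let handle = thread::spawn(move || f(flag));
            self.inner.lock().unwrap().insert(id, ThreadEntry { shutdown, handle });
            id
        }

        /// Ask one thread to stop and wait for it; removing the entry also
        /// means handles of finished threads do not accumulate (the leak fix).
        fn shutdown_thread(&self, id: u64) {
            let entry = self.inner.lock().unwrap().remove(&id);
            if let Some(entry) = entry {
                entry.shutdown.store(true, Ordering::Relaxed);
                let _ = entry.handle.join();
            }
        }
    }

    fn main() {
        let registry = ThreadRegistry::default();
        let id = registry.spawn(|shutdown| {
            while !shutdown.load(Ordering::Relaxed) {
                thread::sleep(std::time::Duration::from_millis(10)); // ... work ...
            }
        });
        registry.shutdown_thread(id);
    }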
Heikki Linnakangas
bad1dd9759 Don't panic if spawning a new WAL receiver thread fails.
The panic would kill the page service thread. That's not too bad, but
still let's try to handle it more gracefully.
2022-01-14 18:02:34 +02:00
Heikki Linnakangas
d29836d0d5 Don't panic if spawning a thread to handle a connection fails.
Log the error and continue. Hopefully it's a transient failure.

This might have been happening in staging earlier, when the safekeeper
had a problem where it opened connections very frequently to issue
"callmemaybe" commands. If you launch too many threads too fast, you might
run out of file descriptors or something. It's not totally clear what
happened, but with this commit, at least the page server will continue to run
and accept new connections if a transient error happens.
2022-01-14 18:02:30 +02:00
Heikki Linnakangas
adb0b3dada Include backtrace in error messages in the log.
The 'anyhow' crate can include a backtrace in all errors when the
'backtrace' feature is enabled. Enable it, and change the places that used
'{:#}' or '{}' to '{:?}', so that the backtrace is printed.
2022-01-14 10:10:17 +02:00
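A minimal sketch of the formatting detail, assuming the anyhow crate with its backtrace feature enabled in Cargo.toml:

    use anyhow::{Context, Result};

    fn read_config() -> Result<String> {
        std::fs::read_to_string("pageserver.toml").context("could not read config")
    }

    fn main() {
        if let Err(e) = read_config() {
            eprintln!("{:#}", e); // one line: context plus cause, no backtrace
            eprintln!("{:?}", e); // multi-line: error chain plus the backtrace
        }
    }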
bojanserafimov
5e0f39cc9e Add proxy metrics (#1093) 2022-01-13 20:34:30 -05:00
Arthur Petukhovsky
0a34a592d5 Bump vendor/postgres (#1120) 2022-01-13 20:28:37 +03:00
Heikki Linnakangas
19aaa91f6d Timeline IDs are not globally unique, fix some code that assumed that.
A timeline ID is only guaranteed to be unique for a particular tenant,
so you need to use tenant ID + timeline ID as the key, rather than just
timeline ID.

The safekeeper currently makes the same assumption, and we should fix that
too, but this commit just addresses this one case in the page server.

In passing, reorder some function arguments to be more consistent.
2022-01-13 18:45:30 +02:00
Konstantin Knizhnik
404aab9373 Use mutex to prevent concurrent checkpoints (#1115)
* Use mutex to prevent concurrent checkpoints

* Fix comment
2022-01-13 17:48:24 +03:00
Konstantin Knizhnik
bc6db2c10e Implement IO metrics in VirtualFile (#1112)
* Implement IO metrics in VirtualFile

* Do not group virtual file close statistics by tenantid/timelineid

* Add comments concerning close metrics
2022-01-13 17:36:53 +03:00
Heikki Linnakangas
772d853dcf Fix race condition leading to panic in walkeeper.
The walkeeper launches two threads for each connection, and uses a guard
object to remove the entry from the 'replicas' array when it finishes. But
only the background thread held onto the guard object, so if the background
thread finished before the other thread, the array entry would be
removed prematurely, which led to a panic in the check_stop_streaming()
call.

Fixes https://github.com/zenithdb/zenith/issues/1103
2022-01-13 11:21:11 +02:00
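A minimal sketch of the fix's shape, with simplified types: both per-connection threads hold a clone of one shared guard, so the 'replicas' entry is removed exactly once, when the last of the two threads finishes.

    use std::sync::{Arc, Mutex};

    struct ReplicaGuard {
        replicas: Arc<Mutex<Vec<u64>>>,
        id: u64,
    }

    impl Drop for ReplicaGuard {
        fn drop(&mut self) {
            // runs when the last Arc clone is dropped, not the first
            self.replicas.lock().unwrap().retain(|&r| r != self.id);
        }
    }

    fn spawn_connection_threads(replicas: Arc<Mutex<Vec<u64>>>, id: u64) {
        replicas.lock().unwrap().push(id);
        let guard = Arc::new(ReplicaGuard { replicas, id });
        let g1 = guard.clone();
        let g2 = guard;
        std::thread::spawn(move || {
            let _guard = g1; // ... foreground streaming work ...
        });
        std::thread::spawn(move || {
            let _guard = g2; // ... background work ...
        });
    }

    fn main() {
        let replicas = Arc::new(Mutex::new(Vec::new()));
        spawn_connection_threads(replicas.clone(), 7);
        std::thread::sleep(std::time::Duration::from_millis(50));
        assert!(replicas.lock().unwrap().is_empty()); // removed exactly once
    }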
Arseny Sher
ab4d272149 Add safekeeper --dump-control-file option.
Hex-encode zids there for better output; since Serde doesn't support several
formats for one struct, the on-disk representation is changed as well; make
upgrade.rs cope with it.
2022-01-12 19:47:24 +03:00
Konstantin Knizhnik
f70a5cad61 Fix releasing of timelines lock (#1100)
refer #1087
2022-01-12 15:05:08 +03:00
anastasia
7aba299dbd Use safekeeper in test_branch_behind (#1068)
to avoid a subtle race condition.

Without safekeeper, walreceiver reconnection can get stuck
because of an IO deadlock between walsender auth and a regular backend.
2022-01-12 14:38:04 +03:00
Kirill Bulatov
4b3b19f444 Support prefixes when working with s3 buckets 2022-01-11 15:44:50 +02:00
Kirill Bulatov
8ab4c8a050 Code review fixes 2022-01-11 15:44:23 +02:00
Kirill Bulatov
7c4a653230 Propagate Zenith CLI's RUST_LOG env var to subprocesses 2022-01-11 15:44:23 +02:00
Kirill Bulatov
a3cd8f0e6d Add the remote storage test 2022-01-11 15:44:23 +02:00
Kirill Bulatov
65c851a451 Test pageserver's timeline http methods
2022-01-11 15:44:23 +02:00
Kirill Bulatov
23cf2fa984 Properly shutdown storage sync loop 2022-01-11 15:44:23 +02:00
Kirill Bulatov
ce8d6ae958 Allow using remote storage in tests 2022-01-11 15:44:23 +02:00
Kirill Bulatov
384b2a91fa Pass generic pageserver params through zenith cli 2022-01-11 15:44:23 +02:00
Arseny Sher
233c4811db Fix default safekeeper http port. 2022-01-11 10:13:27 +03:00
Konstantin Knizhnik
2fd4c390cb Do not hold timelines lock during GC (#1089)
* Do not hold timelines lock during GC
refer #1087

* Add gc_cs mutex for preventing creation of new timelines during GC

* Make clippy happy

* Use Mutex<()> instead of Mutex<i32> for GC critical section
2022-01-10 14:41:15 +03:00
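A minimal sketch of the final locking scheme from these bullets, with simplified types: GC no longer holds the timelines map lock for its whole run; a dedicated Mutex<()> excludes timeline creation while GC is in progress.

    use std::collections::HashMap;
    use std::sync::Mutex;

    struct Repository {
        timelines: Mutex<HashMap<u64, String>>, // held only briefly
        gc_cs: Mutex<()>, // GC critical section; () because it guards an invariant, not data
    }

    impl Repository {
        fn gc(&self) {
            let _guard = self.gc_cs.lock().unwrap();
            // snapshot the timeline list, then release the map lock
            let ids: Vec<u64> = self.timelines.lock().unwrap().keys().copied().collect();
            for _id in ids {
                // ... collect garbage without holding `timelines` ...
            }
        }

        fn create_timeline(&self, id: u64) {
            let _guard = self.gc_cs.lock().unwrap(); // wait for GC to finish
            self.timelines.lock().unwrap().insert(id, String::new());
        }
    }

    fn main() {
        let repo = Repository {
            timelines: Mutex::new(HashMap::new()),
            gc_cs: Mutex::new(()),
        };
        repo.create_timeline(1);
        repo.gc();
    }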
170 changed files with 11306 additions and 6319 deletions

.circleci/config.yml

@@ -1,28 +1,28 @@
 version: 2.1

 executors:
-  zenith-build-executor:
+  zenith-xlarge-executor:
     resource_class: xlarge
     docker:
-      - image: cimg/rust:1.56.1
-  zenith-python-executor:
+      # NB: when changed, do not forget to update rust image tag in all Dockerfiles
+      - image: zimg/rust:1.56
+  zenith-executor:
     docker:
-      - image: cimg/python:3.7.10 # Oldest available 3.7 with Ubuntu 20.04 (for GLIBC and Rust) at CirlceCI
+      - image: zimg/rust:1.56

 jobs:
   check-codestyle-rust:
-    executor: zenith-build-executor
+    executor: zenith-xlarge-executor
     steps:
       - checkout
       - run:
           name: rustfmt
           when: always
-          command: |
-            cargo fmt --all -- --check
+          command: cargo fmt --all -- --check

   # A job to build postgres
   build-postgres:
-    executor: zenith-build-executor
+    executor: zenith-xlarge-executor
     parameters:
       build_type:
         type: enum
@@ -37,8 +37,7 @@ jobs:
       # Note this works even though the submodule hasn't been checkout out yet.
       - run:
           name: Get postgres cache key
-          command: |
-            git rev-parse HEAD:vendor/postgres > /tmp/cache-key-postgres
+          command: git rev-parse HEAD:vendor/postgres > /tmp/cache-key-postgres
       - restore_cache:
           name: Restore postgres cache
@@ -46,15 +45,6 @@ jobs:
             # Restore ONLY if the rev key matches exactly
             - v04-postgres-cache-<< parameters.build_type >>-{{ checksum "/tmp/cache-key-postgres" }}
-      # FIXME We could cache our own docker container, instead of installing packages every time.
-      - run:
-          name: apt install dependencies
-          command: |
-            if [ ! -e tmp_install/bin/postgres ]; then
-              sudo apt update
-              sudo apt install build-essential libreadline-dev zlib1g-dev flex bison libseccomp-dev
-            fi
       # Build postgres if the restore_cache didn't find a build.
       # `make` can't figure out whether the cache is valid, since
       # it only compares file timestamps.
@@ -64,7 +54,8 @@ jobs:
           if [ ! -e tmp_install/bin/postgres ]; then
             # "depth 1" saves some time by not cloning the whole repo
             git submodule update --init --depth 1
-            make postgres -j8
+            # bail out on any warnings
+            COPT='-Werror' mold -run make postgres -j$(nproc)
           fi
       - save_cache:
@@ -75,7 +66,7 @@ jobs:
   # A job to build zenith rust code
   build-zenith:
-    executor: zenith-build-executor
+    executor: zenith-xlarge-executor
     parameters:
       build_type:
         type: enum
@@ -83,12 +74,6 @@ jobs:
     environment:
       BUILD_TYPE: << parameters.build_type >>
     steps:
-      - run:
-          name: apt install dependencies
-          command: |
-            sudo apt update
-            sudo apt install libssl-dev clang
       # Checkout the git repo (without submodules)
       - checkout
@@ -126,7 +111,7 @@ jobs:
           fi
           export CARGO_INCREMENTAL=0
-          "${cov_prefix[@]}" cargo build $CARGO_FLAGS --bins --tests
+          "${cov_prefix[@]}" mold -run cargo build $CARGO_FLAGS --bins --tests
       - save_cache:
           name: Save rust cache
@@ -210,6 +195,14 @@ jobs:
           command: |
             cp -a tmp_install /tmp/zenith/pg_install
+      - run:
+          name: Merge coverage data
+          command: |
+            # This will speed up workspace uploads
+            if [[ $BUILD_TYPE == "debug" ]]; then
+              scripts/coverage "--profraw-prefix=$CIRCLE_JOB" --dir=/tmp/zenith/coverage merge
+            fi
       # Save the rust binaries and coverage data for other jobs in this workflow.
       - persist_to_workspace:
           root: /tmp/zenith
@@ -217,23 +210,30 @@ jobs:
             - "*"

   check-codestyle-python:
-    executor: zenith-python-executor
+    executor: zenith-executor
     steps:
       - checkout
+      - restore_cache:
+          keys:
+            - v1-python-deps-{{ checksum "poetry.lock" }}
       - run:
           name: Install deps
-          command: pipenv --python 3.7 install --dev
+          command: ./scripts/pysync
+      - save_cache:
+          key: v1-python-deps-{{ checksum "poetry.lock" }}
+          paths:
+            - /home/circleci/.cache/pypoetry/virtualenvs
       - run:
           name: Run yapf to ensure code format
           when: always
-          command: pipenv run yapf --recursive --diff .
+          command: poetry run yapf --recursive --diff .
       - run:
           name: Run mypy to check types
           when: always
-          command: pipenv run mypy .
+          command: poetry run mypy .

   run-pytest:
-    executor: zenith-python-executor
+    executor: zenith-executor
     parameters:
       # pytest args to specify the tests to run.
       #
@@ -272,9 +272,16 @@ jobs:
             condition: << parameters.needs_postgres_source >>
             steps:
               - run: git submodule update --init --depth 1
+      - restore_cache:
+          keys:
+            - v1-python-deps-{{ checksum "poetry.lock" }}
       - run:
           name: Install deps
-          command: pipenv --python 3.7 install
+          command: ./scripts/pysync
+      - save_cache:
+          key: v1-python-deps-{{ checksum "poetry.lock" }}
+          paths:
+            - /home/circleci/.cache/pypoetry/virtualenvs
       - run:
           name: Run pytest
           # pytest doesn't output test logs in real time, so CI job may fail with
@@ -291,6 +298,7 @@ jobs:
             - PLATFORM: zenith-local-ci
           command: |
             PERF_REPORT_DIR="$(realpath test_runner/perf-report-local)"
+            rm -rf $PERF_REPORT_DIR
             TEST_SELECTION="test_runner/<< parameters.test_selection >>"
             EXTRA_PARAMS="<< parameters.extra_params >>"
@@ -326,7 +334,7 @@ jobs:
             # -n4 uses four processes to run tests via pytest-xdist
             # -s is not used to prevent pytest from capturing output, because tests are running
             # in parallel and logs are mixed between different tests
-            "${cov_prefix[@]}" pipenv run pytest \
+            "${cov_prefix[@]}" ./scripts/pytest \
               --junitxml=$TEST_OUTPUT/junit.xml \
               --tb=short \
               --verbose \
@@ -335,7 +343,6 @@ jobs:
             if << parameters.save_perf_report >>; then
               if [[ $CIRCLE_BRANCH == "main" ]]; then
-                # TODO: reuse scripts/git-upload
                 export REPORT_FROM="$PERF_REPORT_DIR"
                 export REPORT_TO=local
                 scripts/generate_and_push_perf_report.sh
@@ -356,6 +363,13 @@ jobs:
       # The store_test_results step tells CircleCI where to find the junit.xml file.
       - store_test_results:
           path: /tmp/test_output
+      - run:
+          name: Merge coverage data
+          command: |
+            # This will speed up workspace uploads
+            if [[ $BUILD_TYPE == "debug" ]]; then
+              scripts/coverage "--profraw-prefix=$CIRCLE_JOB" --dir=/tmp/zenith/coverage merge
+            fi
       # Save coverage data (if any)
       - persist_to_workspace:
           root: /tmp/zenith
@@ -363,7 +377,7 @@ jobs:
             - "*"

   coverage-report:
-    executor: zenith-build-executor
+    executor: zenith-xlarge-executor
     steps:
       - attach_workspace:
           at: /tmp/zenith
@@ -375,12 +389,6 @@ jobs:
             # there's no way to clean out old packages, so the cache grows every time something
             # changes.
             - v04-rust-cache-deps-debug-{{ checksum "Cargo.lock" }}
-      - run:
-          name: Install llvm-tools
-          command: |
-            # TODO: install a proper symbol demangler, e.g. rustfilt
-            # TODO: we should embed this into a docker image
-            rustup component add llvm-tools-preview
       - run:
           name: Build coverage report
           command: |
@@ -450,7 +458,7 @@ jobs:
           name: Build and push compute-tools Docker image
           command: |
             echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
-            docker build -t zenithdb/compute-tools:latest ./compute_tools/
+            docker build -t zenithdb/compute-tools:latest -f Dockerfile.compute-tools .
             docker push zenithdb/compute-tools:latest
       - run:
           name: Init postgres submodule
@@ -571,55 +579,6 @@ jobs:
             }
           }"

-  #
-  #
-  # compute-tools jobs
-  # TODO: unify with main build_and_test pipeline
-  #
-  #
-  compute-tools-test:
-    executor: zenith-build-executor
-    working_directory: ~/repo/compute_tools
-    steps:
-      - checkout:
-          path: ~/repo
-      - restore_cache:
-          name: Restore rust cache
-          keys:
-            # Require an exact match. While an out of date cache might speed up the build,
-            # there's no way to clean out old packages, so the cache grows every time something
-            # changes.
-            - v03-rust-cache-deps-debug-{{ checksum "Cargo.lock" }}
-      # Build the rust code, including test binaries
-      - run:
-          name: Rust build
-          environment:
-            CARGO_INCREMENTAL: 0
-          command: cargo build --bins --tests
-      - save_cache:
-          name: Save rust cache
-          key: v03-rust-cache-deps-debug-{{ checksum "Cargo.lock" }}
-          paths:
-            - ~/.cargo/registry
-            - ~/.cargo/git
-            - target
-      # Run Rust formatting checks
-      - run:
-          name: cargo fmt check
-          command: cargo fmt --all -- --check
-      # Run Rust linter (clippy)
-      - run:
-          name: cargo clippy check
-          command: cargo clippy --all --all-targets -- -Dwarnings -Drust-2018-idioms
-      # Run Rust integration and unittests
-      - run: cargo test

 workflows:
   build_and_test:
     jobs:
@@ -639,6 +598,7 @@ workflows:
           - build-postgres-<< matrix.build_type >>
       - run-pytest:
           name: pg_regress-tests-<< matrix.build_type >>
+          context: PERF_TEST_RESULT_CONNSTR
           matrix:
             parameters:
               build_type: ["debug", "release"]
@@ -656,6 +616,7 @@ workflows:
           - build-zenith-<< matrix.build_type >>
       - run-pytest:
           name: benchmarks
+          context: PERF_TEST_RESULT_CONNSTR
           build_type: release
           test_selection: performance
           run_in_parallel: false
@@ -668,7 +629,6 @@ workflows:
           requires:
             # TODO: consider adding more
             - other-tests-debug
-            - compute-tools-test
       - docker-image:
           # Context gives an ability to login
           context: Docker Hub
@@ -691,7 +651,6 @@ workflows:
           requires:
             - pg_regress-tests-release
             - other-tests-release
-            - compute-tools-test
       - deploy-staging:
           # Context gives an ability to login
           context: Docker Hub

zenith-proxy Helm chart: staging values

@@ -5,9 +5,23 @@ settings:
  authEndpoint: "https://console.stage.zenith.tech/authenticate_proxy_request/"
  uri: "https://console.stage.zenith.tech/psql_session/"

# -- Additional labels for zenith-proxy pods
podLabels:
  zenith_service: proxy
  zenith_env: staging
  zenith_region: us-east-1
  zenith_region_slug: virginia

exposedService:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    external-dns.alpha.kubernetes.io/hostname: start.stage.zenith.tech

metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    selector:
      release: kube-prometheus-stack

GitHub Actions workflow: benchmarking

@@ -3,7 +3,7 @@ name: benchmarking
 on:
   # uncomment to run on push for debugging your PR
   # push:
-  #  branches: [ mybranch ]
+  #  branches: [ your branch ]
   schedule:
     # * is a special character in YAML so you have to quote this string
     # ┌───────────── minute (0 - 59)
@@ -36,20 +36,20 @@ jobs:
       # see https://github.com/actions/setup-python/issues/162
       # and probably https://github.com/actions/setup-python/issues/162#issuecomment-865387976 in particular
       # so the simplest solution to me is to use already installed system python and spin virtualenvs for job runs.
-      # there is Python 3.7.10 already installed on the machine so use it to install pipenv and then use pipenv's virtuealenvs
-      - name: Install pipenv & deps
+      # there is Python 3.7.10 already installed on the machine so use it to install poetry and then use poetry's virtuealenvs
+      - name: Install poetry & deps
         run: |
-          python3 -m pip install --upgrade pipenv wheel
-          # since pip/pipenv caches are reused there shouldn't be any troubles with install every time
-          pipenv install
+          python3 -m pip install --upgrade poetry wheel
+          # since pip/poetry caches are reused there shouldn't be any troubles with install every time
+          ./scripts/pysync

       - name: Show versions
         run: |
           echo Python
           python3 --version
-          pipenv run python3 --version
+          poetry run python3 --version
           echo Pipenv
-          pipenv --version
+          poetry --version
           echo Pgbench
           $PG_BIN/pgbench --version
@@ -89,11 +89,15 @@ jobs:
           BENCHMARK_CONNSTR: "${{ secrets.BENCHMARK_STAGING_CONNSTR }}"
           REMOTE_ENV: "1" # indicate to test harness that we do not have zenith binaries locally
         run: |
+          # just to be sure that no data was cached on self hosted runner
+          # since it might generate duplicates when calling ingest_perf_test_result.py
+          rm -rf perf-report-staging
           mkdir -p perf-report-staging
-          pipenv run pytest test_runner/performance/ -v -m "remote_cluster" --skip-interfering-proc-check --out-dir perf-report-staging
+          ./scripts/pytest test_runner/performance/ -v -m "remote_cluster" --skip-interfering-proc-check --out-dir perf-report-staging
       - name: Submit result
         env:
           VIP_VAP_ACCESS_TOKEN: "${{ secrets.VIP_VAP_ACCESS_TOKEN }}"
+          PERF_TEST_RESULT_CONNSTR: "${{ secrets.PERF_TEST_RESULT_CONNSTR }}"
         run: |
           REPORT_FROM=$(realpath perf-report-staging) REPORT_TO=staging scripts/generate_and_push_perf_report.sh

Cargo.lock (generated, 863 changed lines)

File diff suppressed because it is too large.

Cargo.toml

@@ -1,5 +1,6 @@
 [workspace]
 members = [
+    "compute_tools",
     "control_plane",
     "pageserver",
     "postgres_ffi",
@@ -15,3 +16,8 @@ members = [
 # This is useful for profiling and, to some extent, debug.
 # Besides, debug info should not affect the performance.
 debug = true
+
+# This is only needed for proxy's tests
+# TODO: we should probably fork tokio-postgres-rustls instead
+[patch.crates-io]
+tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="2949d98df52587d562986aad155dd4e889e408b7" }

Dockerfile

@@ -2,8 +2,9 @@
 # Image with all the required dependencies to build https://github.com/zenithdb/zenith
 # and Postgres from https://github.com/zenithdb/postgres
 # Also includes some rust development and build tools.
+# NB: keep in sync with rust image version in .circle/config.yml
 #
-FROM rust:slim-buster
+FROM rust:1.56.1-slim-buster
 WORKDIR /zenith

 # Install postgres and zenith build dependencies

Dockerfile.compute-tools (new file, 14 lines)

@@ -0,0 +1,14 @@
# First transient image to build compute_tools binaries
# NB: keep in sync with rust image version in .circle/config.yml
FROM rust:1.56.1-slim-buster AS rust-build
WORKDIR /zenith
COPY . .
RUN cargo build -p compute_tools --release
# Final image that only has one binary
FROM debian:buster-slim
COPY --from=rust-build /zenith/target/release/zenith_ctl /usr/local/bin/zenith_ctl

Pipfile (deleted, 30 lines)

@@ -1,30 +0,0 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[packages]
pytest = ">=6.0.0"
typing-extensions = "*"
pyjwt = {extras = ["crypto"], version = "*"}
requests = "*"
pytest-xdist = "*"
asyncpg = "*"
cached-property = "*"
psycopg2-binary = "*"
jinja2 = "*"
[dev-packages]
# Behavior may change slightly between versions. These are run continuously,
# so we pin exact versions to avoid suprising breaks. Update if comfortable.
yapf = "==0.31.0"
mypy = "==0.910"
# Non-pinned packages follow.
pipenv = "*"
flake8 = "*"
types-requests = "*"
types-psycopg2 = "*"
[requires]
# we need at least 3.7, but pipenv doesn't allow to say this directly
python_version = "3"

Pipfile.lock (generated, deleted, 652 lines)

@@ -1,652 +0,0 @@
{
"_meta": {
"hash": {
"sha256": "c309cb963a7b07ae3d30e9cbf08b495f77bdecc0e5356fc89d133c4fbcb65b2b"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
}
]
},
"default": {
"asyncpg": {
"hashes": [
"sha256:129d501f3d30616afd51eb8d3142ef51ba05374256bd5834cec3ef4956a9b317",
"sha256:29ef6ae0a617fc13cc2ac5dc8e9b367bb83cba220614b437af9b67766f4b6b20",
"sha256:41704c561d354bef01353835a7846e5606faabbeb846214dfcf666cf53319f18",
"sha256:556b0e92e2b75dc028b3c4bc9bd5162ddf0053b856437cf1f04c97f9c6837d03",
"sha256:8ff5073d4b654e34bd5eaadc01dc4d68b8a9609084d835acd364cd934190a08d",
"sha256:a458fc69051fbb67d995fdda46d75a012b5d6200f91e17d23d4751482640ed4c",
"sha256:a7095890c96ba36f9f668eb552bb020dddb44f8e73e932f8573efc613ee83843",
"sha256:a738f4807c853623d3f93f0fea11f61be6b0e5ca16ea8aeb42c2c7ee742aa853",
"sha256:c4fc0205fe4ddd5aeb3dfdc0f7bafd43411181e1f5650189608e5971cceacff1",
"sha256:dd2fa063c3344823487d9ddccb40802f02622ddf8bf8a6cc53885ee7a2c1c0c6",
"sha256:ddffcb85227bf39cd1bedd4603e0082b243cf3b14ced64dce506a15b05232b83",
"sha256:e36c6806883786b19551bb70a4882561f31135dc8105a59662e0376cf5b2cbc5",
"sha256:eed43abc6ccf1dc02e0d0efc06ce46a411362f3358847c6b0ec9a43426f91ece"
],
"index": "pypi",
"version": "==0.24.0"
},
"attrs": {
"hashes": [
"sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1",
"sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==21.2.0"
},
"cached-property": {
"hashes": [
"sha256:9fa5755838eecbb2d234c3aa390bd80fbd3ac6b6869109bfc1b499f7bd89a130",
"sha256:df4f613cf7ad9a588cc381aaf4a512d26265ecebd5eb9e1ba12f1319eb85a6a0"
],
"index": "pypi",
"version": "==1.5.2"
},
"certifi": {
"hashes": [
"sha256:78884e7c1d4b00ce3cea67b44566851c4343c120abd683433ce934a68ea58872",
"sha256:d62a0163eb4c2344ac042ab2bdf75399a71a2d8c7d47eac2e2ee91b9d6339569"
],
"version": "==2021.10.8"
},
"cffi": {
"hashes": [
"sha256:00c878c90cb53ccfaae6b8bc18ad05d2036553e6d9d1d9dbcf323bbe83854ca3",
"sha256:0104fb5ae2391d46a4cb082abdd5c69ea4eab79d8d44eaaf79f1b1fd806ee4c2",
"sha256:06c48159c1abed75c2e721b1715c379fa3200c7784271b3c46df01383b593636",
"sha256:0808014eb713677ec1292301ea4c81ad277b6cdf2fdd90fd540af98c0b101d20",
"sha256:10dffb601ccfb65262a27233ac273d552ddc4d8ae1bf93b21c94b8511bffe728",
"sha256:14cd121ea63ecdae71efa69c15c5543a4b5fbcd0bbe2aad864baca0063cecf27",
"sha256:17771976e82e9f94976180f76468546834d22a7cc404b17c22df2a2c81db0c66",
"sha256:181dee03b1170ff1969489acf1c26533710231c58f95534e3edac87fff06c443",
"sha256:23cfe892bd5dd8941608f93348c0737e369e51c100d03718f108bf1add7bd6d0",
"sha256:263cc3d821c4ab2213cbe8cd8b355a7f72a8324577dc865ef98487c1aeee2bc7",
"sha256:2756c88cbb94231c7a147402476be2c4df2f6078099a6f4a480d239a8817ae39",
"sha256:27c219baf94952ae9d50ec19651a687b826792055353d07648a5695413e0c605",
"sha256:2a23af14f408d53d5e6cd4e3d9a24ff9e05906ad574822a10563efcef137979a",
"sha256:31fb708d9d7c3f49a60f04cf5b119aeefe5644daba1cd2a0fe389b674fd1de37",
"sha256:3415c89f9204ee60cd09b235810be700e993e343a408693e80ce7f6a40108029",
"sha256:3773c4d81e6e818df2efbc7dd77325ca0dcb688116050fb2b3011218eda36139",
"sha256:3b96a311ac60a3f6be21d2572e46ce67f09abcf4d09344c49274eb9e0bf345fc",
"sha256:3f7d084648d77af029acb79a0ff49a0ad7e9d09057a9bf46596dac9514dc07df",
"sha256:41d45de54cd277a7878919867c0f08b0cf817605e4eb94093e7516505d3c8d14",
"sha256:4238e6dab5d6a8ba812de994bbb0a79bddbdf80994e4ce802b6f6f3142fcc880",
"sha256:45db3a33139e9c8f7c09234b5784a5e33d31fd6907800b316decad50af323ff2",
"sha256:45e8636704eacc432a206ac7345a5d3d2c62d95a507ec70d62f23cd91770482a",
"sha256:4958391dbd6249d7ad855b9ca88fae690783a6be9e86df65865058ed81fc860e",
"sha256:4a306fa632e8f0928956a41fa8e1d6243c71e7eb59ffbd165fc0b41e316b2474",
"sha256:57e9ac9ccc3101fac9d6014fba037473e4358ef4e89f8e181f8951a2c0162024",
"sha256:59888172256cac5629e60e72e86598027aca6bf01fa2465bdb676d37636573e8",
"sha256:5e069f72d497312b24fcc02073d70cb989045d1c91cbd53979366077959933e0",
"sha256:64d4ec9f448dfe041705426000cc13e34e6e5bb13736e9fd62e34a0b0c41566e",
"sha256:6dc2737a3674b3e344847c8686cf29e500584ccad76204efea14f451d4cc669a",
"sha256:74fdfdbfdc48d3f47148976f49fab3251e550a8720bebc99bf1483f5bfb5db3e",
"sha256:75e4024375654472cc27e91cbe9eaa08567f7fbdf822638be2814ce059f58032",
"sha256:786902fb9ba7433aae840e0ed609f45c7bcd4e225ebb9c753aa39725bb3e6ad6",
"sha256:8b6c2ea03845c9f501ed1313e78de148cd3f6cad741a75d43a29b43da27f2e1e",
"sha256:91d77d2a782be4274da750752bb1650a97bfd8f291022b379bb8e01c66b4e96b",
"sha256:91ec59c33514b7c7559a6acda53bbfe1b283949c34fe7440bcf917f96ac0723e",
"sha256:920f0d66a896c2d99f0adbb391f990a84091179542c205fa53ce5787aff87954",
"sha256:a5263e363c27b653a90078143adb3d076c1a748ec9ecc78ea2fb916f9b861962",
"sha256:abb9a20a72ac4e0fdb50dae135ba5e77880518e742077ced47eb1499e29a443c",
"sha256:c2051981a968d7de9dd2d7b87bcb9c939c74a34626a6e2f8181455dd49ed69e4",
"sha256:c21c9e3896c23007803a875460fb786118f0cdd4434359577ea25eb556e34c55",
"sha256:c2502a1a03b6312837279c8c1bd3ebedf6c12c4228ddbad40912d671ccc8a962",
"sha256:d4d692a89c5cf08a8557fdeb329b82e7bf609aadfaed6c0d79f5a449a3c7c023",
"sha256:da5db4e883f1ce37f55c667e5c0de439df76ac4cb55964655906306918e7363c",
"sha256:e7022a66d9b55e93e1a845d8c9eba2a1bebd4966cd8bfc25d9cd07d515b33fa6",
"sha256:ef1f279350da2c586a69d32fc8733092fd32cc8ac95139a00377841f59a3f8d8",
"sha256:f54a64f8b0c8ff0b64d18aa76675262e1700f3995182267998c31ae974fbc382",
"sha256:f5c7150ad32ba43a07c4479f40241756145a1f03b43480e058cfd862bf5041c7",
"sha256:f6f824dc3bce0edab5f427efcfb1d63ee75b6fcb7282900ccaf925be84efb0fc",
"sha256:fd8a250edc26254fe5b33be00402e6d287f562b6a5b2152dec302fa15bb3e997",
"sha256:ffaa5c925128e29efbde7301d8ecaf35c8c60ffbcd6a1ffd3a552177c8e5e796"
],
"version": "==1.15.0"
},
"charset-normalizer": {
"hashes": [
"sha256:e019de665e2bcf9c2b64e2e5aa025fa991da8720daa3c1138cadd2fd1856aed0",
"sha256:f7af805c321bfa1ce6714c51f254e0d5bb5e5834039bc17db7ebe3a4cec9492b"
],
"markers": "python_version >= '3'",
"version": "==2.0.7"
},
"cryptography": {
"hashes": [
"sha256:07bb7fbfb5de0980590ddfc7f13081520def06dc9ed214000ad4372fb4e3c7f6",
"sha256:18d90f4711bf63e2fb21e8c8e51ed8189438e6b35a6d996201ebd98a26abbbe6",
"sha256:1ed82abf16df40a60942a8c211251ae72858b25b7421ce2497c2eb7a1cee817c",
"sha256:22a38e96118a4ce3b97509443feace1d1011d0571fae81fc3ad35f25ba3ea999",
"sha256:2d69645f535f4b2c722cfb07a8eab916265545b3475fdb34e0be2f4ee8b0b15e",
"sha256:4a2d0e0acc20ede0f06ef7aa58546eee96d2592c00f450c9acb89c5879b61992",
"sha256:54b2605e5475944e2213258e0ab8696f4f357a31371e538ef21e8d61c843c28d",
"sha256:7075b304cd567694dc692ffc9747f3e9cb393cc4aa4fb7b9f3abd6f5c4e43588",
"sha256:7b7ceeff114c31f285528ba8b390d3e9cfa2da17b56f11d366769a807f17cbaa",
"sha256:7eba2cebca600a7806b893cb1d541a6e910afa87e97acf2021a22b32da1df52d",
"sha256:928185a6d1ccdb816e883f56ebe92e975a262d31cc536429041921f8cb5a62fd",
"sha256:9933f28f70d0517686bd7de36166dda42094eac49415459d9bdf5e7df3e0086d",
"sha256:a688ebcd08250eab5bb5bca318cc05a8c66de5e4171a65ca51db6bd753ff8953",
"sha256:abb5a361d2585bb95012a19ed9b2c8f412c5d723a9836418fab7aaa0243e67d2",
"sha256:c10c797ac89c746e488d2ee92bd4abd593615694ee17b2500578b63cad6b93a8",
"sha256:ced40344e811d6abba00295ced98c01aecf0c2de39481792d87af4fa58b7b4d6",
"sha256:d57e0cdc1b44b6cdf8af1d01807db06886f10177469312fbde8f44ccbb284bc9",
"sha256:d99915d6ab265c22873f1b4d6ea5ef462ef797b4140be4c9d8b179915e0985c6",
"sha256:eb80e8a1f91e4b7ef8b33041591e6d89b2b8e122d787e87eeb2b08da71bb16ad",
"sha256:ebeddd119f526bcf323a89f853afb12e225902a24d29b55fe18dd6fcb2838a76"
],
"version": "==35.0.0"
},
"execnet": {
"hashes": [
"sha256:8f694f3ba9cc92cab508b152dcfe322153975c29bda272e2fd7f3f00f36e47c5",
"sha256:a295f7cc774947aac58dde7fdc85f4aa00c42adf5d8f5468fc630c1acf30a142"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==1.9.0"
},
"idna": {
"hashes": [
"sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff",
"sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"
],
"markers": "python_version >= '3'",
"version": "==3.3"
},
"importlib-metadata": {
"hashes": [
"sha256:b618b6d2d5ffa2f16add5697cf57a46c76a56229b0ed1c438322e4e95645bd15",
"sha256:f284b3e11256ad1e5d03ab86bb2ccd6f5339688ff17a4d797a0fe7df326f23b1"
],
"markers": "python_version < '3.8'",
"version": "==4.8.1"
},
"iniconfig": {
"hashes": [
"sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3",
"sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"
],
"version": "==1.1.1"
},
"jinja2": {
"hashes": [
"sha256:827a0e32839ab1600d4eb1c4c33ec5a8edfbc5cb42dafa13b81f182f97784b45",
"sha256:8569982d3f0889eed11dd620c706d39b60c36d6d25843961f33f77fb6bc6b20c"
],
"index": "pypi",
"version": "==3.0.2"
},
"markupsafe": {
"hashes": [
"sha256:01a9b8ea66f1658938f65b93a85ebe8bc016e6769611be228d797c9d998dd298",
"sha256:023cb26ec21ece8dc3907c0e8320058b2e0cb3c55cf9564da612bc325bed5e64",
"sha256:0446679737af14f45767963a1a9ef7620189912317d095f2d9ffa183a4d25d2b",
"sha256:04635854b943835a6ea959e948d19dcd311762c5c0c6e1f0e16ee57022669194",
"sha256:0717a7390a68be14b8c793ba258e075c6f4ca819f15edfc2a3a027c823718567",
"sha256:0955295dd5eec6cb6cc2fe1698f4c6d84af2e92de33fbcac4111913cd100a6ff",
"sha256:0d4b31cc67ab36e3392bbf3862cfbadac3db12bdd8b02a2731f509ed5b829724",
"sha256:10f82115e21dc0dfec9ab5c0223652f7197feb168c940f3ef61563fc2d6beb74",
"sha256:168cd0a3642de83558a5153c8bd34f175a9a6e7f6dc6384b9655d2697312a646",
"sha256:1d609f577dc6e1aa17d746f8bd3c31aa4d258f4070d61b2aa5c4166c1539de35",
"sha256:1f2ade76b9903f39aa442b4aadd2177decb66525062db244b35d71d0ee8599b6",
"sha256:20dca64a3ef2d6e4d5d615a3fd418ad3bde77a47ec8a23d984a12b5b4c74491a",
"sha256:2a7d351cbd8cfeb19ca00de495e224dea7e7d919659c2841bbb7f420ad03e2d6",
"sha256:2d7d807855b419fc2ed3e631034685db6079889a1f01d5d9dac950f764da3dad",
"sha256:2ef54abee730b502252bcdf31b10dacb0a416229b72c18b19e24a4509f273d26",
"sha256:36bc903cbb393720fad60fc28c10de6acf10dc6cc883f3e24ee4012371399a38",
"sha256:37205cac2a79194e3750b0af2a5720d95f786a55ce7df90c3af697bfa100eaac",
"sha256:3c112550557578c26af18a1ccc9e090bfe03832ae994343cfdacd287db6a6ae7",
"sha256:3dd007d54ee88b46be476e293f48c85048603f5f516008bee124ddd891398ed6",
"sha256:4296f2b1ce8c86a6aea78613c34bb1a672ea0e3de9c6ba08a960efe0b0a09047",
"sha256:47ab1e7b91c098ab893b828deafa1203de86d0bc6ab587b160f78fe6c4011f75",
"sha256:49e3ceeabbfb9d66c3aef5af3a60cc43b85c33df25ce03d0031a608b0a8b2e3f",
"sha256:4dc8f9fb58f7364b63fd9f85013b780ef83c11857ae79f2feda41e270468dd9b",
"sha256:4efca8f86c54b22348a5467704e3fec767b2db12fc39c6d963168ab1d3fc9135",
"sha256:53edb4da6925ad13c07b6d26c2a852bd81e364f95301c66e930ab2aef5b5ddd8",
"sha256:5855f8438a7d1d458206a2466bf82b0f104a3724bf96a1c781ab731e4201731a",
"sha256:594c67807fb16238b30c44bdf74f36c02cdf22d1c8cda91ef8a0ed8dabf5620a",
"sha256:5b6d930f030f8ed98e3e6c98ffa0652bdb82601e7a016ec2ab5d7ff23baa78d1",
"sha256:5bb28c636d87e840583ee3adeb78172efc47c8b26127267f54a9c0ec251d41a9",
"sha256:60bf42e36abfaf9aff1f50f52644b336d4f0a3fd6d8a60ca0d054ac9f713a864",
"sha256:611d1ad9a4288cf3e3c16014564df047fe08410e628f89805e475368bd304914",
"sha256:6300b8454aa6930a24b9618fbb54b5a68135092bc666f7b06901f897fa5c2fee",
"sha256:63f3268ba69ace99cab4e3e3b5840b03340efed0948ab8f78d2fd87ee5442a4f",
"sha256:6557b31b5e2c9ddf0de32a691f2312a32f77cd7681d8af66c2692efdbef84c18",
"sha256:693ce3f9e70a6cf7d2fb9e6c9d8b204b6b39897a2c4a1aa65728d5ac97dcc1d8",
"sha256:6a7fae0dd14cf60ad5ff42baa2e95727c3d81ded453457771d02b7d2b3f9c0c2",
"sha256:6c4ca60fa24e85fe25b912b01e62cb969d69a23a5d5867682dd3e80b5b02581d",
"sha256:6fcf051089389abe060c9cd7caa212c707e58153afa2c649f00346ce6d260f1b",
"sha256:7d91275b0245b1da4d4cfa07e0faedd5b0812efc15b702576d103293e252af1b",
"sha256:89c687013cb1cd489a0f0ac24febe8c7a666e6e221b783e53ac50ebf68e45d86",
"sha256:8d206346619592c6200148b01a2142798c989edcb9c896f9ac9722a99d4e77e6",
"sha256:905fec760bd2fa1388bb5b489ee8ee5f7291d692638ea5f67982d968366bef9f",
"sha256:97383d78eb34da7e1fa37dd273c20ad4320929af65d156e35a5e2d89566d9dfb",
"sha256:984d76483eb32f1bcb536dc27e4ad56bba4baa70be32fa87152832cdd9db0833",
"sha256:99df47edb6bda1249d3e80fdabb1dab8c08ef3975f69aed437cb69d0a5de1e28",
"sha256:9f02365d4e99430a12647f09b6cc8bab61a6564363f313126f775eb4f6ef798e",
"sha256:a30e67a65b53ea0a5e62fe23682cfe22712e01f453b95233b25502f7c61cb415",
"sha256:ab3ef638ace319fa26553db0624c4699e31a28bb2a835c5faca8f8acf6a5a902",
"sha256:aca6377c0cb8a8253e493c6b451565ac77e98c2951c45f913e0b52facdcff83f",
"sha256:add36cb2dbb8b736611303cd3bfcee00afd96471b09cda130da3581cbdc56a6d",
"sha256:b2f4bf27480f5e5e8ce285a8c8fd176c0b03e93dcc6646477d4630e83440c6a9",
"sha256:b7f2d075102dc8c794cbde1947378051c4e5180d52d276987b8d28a3bd58c17d",
"sha256:baa1a4e8f868845af802979fcdbf0bb11f94f1cb7ced4c4b8a351bb60d108145",
"sha256:be98f628055368795d818ebf93da628541e10b75b41c559fdf36d104c5787066",
"sha256:bf5d821ffabf0ef3533c39c518f3357b171a1651c1ff6827325e4489b0e46c3c",
"sha256:c47adbc92fc1bb2b3274c4b3a43ae0e4573d9fbff4f54cd484555edbf030baf1",
"sha256:cdfba22ea2f0029c9261a4bd07e830a8da012291fbe44dc794e488b6c9bb353a",
"sha256:d6c7ebd4e944c85e2c3421e612a7057a2f48d478d79e61800d81468a8d842207",
"sha256:d7f9850398e85aba693bb640262d3611788b1f29a79f0c93c565694658f4071f",
"sha256:d8446c54dc28c01e5a2dbac5a25f071f6653e6e40f3a8818e8b45d790fe6ef53",
"sha256:deb993cacb280823246a026e3b2d81c493c53de6acfd5e6bfe31ab3402bb37dd",
"sha256:e0f138900af21926a02425cf736db95be9f4af72ba1bb21453432a07f6082134",
"sha256:e9936f0b261d4df76ad22f8fee3ae83b60d7c3e871292cd42f40b81b70afae85",
"sha256:f0567c4dc99f264f49fe27da5f735f414c4e7e7dd850cfd8e69f0862d7c74ea9",
"sha256:f5653a225f31e113b152e56f154ccbe59eeb1c7487b39b9d9f9cdb58e6c79dc5",
"sha256:f826e31d18b516f653fe296d967d700fddad5901ae07c622bb3705955e1faa94",
"sha256:f8ba0e8349a38d3001fae7eadded3f6606f0da5d748ee53cc1dab1d6527b9509",
"sha256:f9081981fe268bd86831e5c75f7de206ef275defcb82bc70740ae6dc507aee51",
"sha256:fa130dd50c57d53368c9d59395cb5526eda596d3ffe36666cd81a44d56e48872"
],
"markers": "python_version >= '3.6'",
"version": "==2.0.1"
},
"packaging": {
"hashes": [
"sha256:096d689d78ca690e4cd8a89568ba06d07ca097e3306a4381635073ca91479966",
"sha256:14317396d1e8cdb122989b916fa2c7e9ca8e2be9e8060a6eff75b6b7b4d8a7e0"
],
"markers": "python_version >= '3.6'",
"version": "==21.2"
},
"pluggy": {
"hashes": [
"sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159",
"sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"
],
"markers": "python_version >= '3.6'",
"version": "==1.0.0"
},
"psycopg2-binary": {
"hashes": [
"sha256:0b7dae87f0b729922e06f85f667de7bf16455d411971b2043bbd9577af9d1975",
"sha256:0f2e04bd2a2ab54fa44ee67fe2d002bb90cee1c0f1cc0ebc3148af7b02034cbd",
"sha256:123c3fb684e9abfc47218d3784c7b4c47c8587951ea4dd5bc38b6636ac57f616",
"sha256:1473c0215b0613dd938db54a653f68251a45a78b05f6fc21af4326f40e8360a2",
"sha256:14db1752acdd2187d99cb2ca0a1a6dfe57fc65c3281e0f20e597aac8d2a5bd90",
"sha256:1e3a362790edc0a365385b1ac4cc0acc429a0c0d662d829a50b6ce743ae61b5a",
"sha256:1e85b74cbbb3056e3656f1cc4781294df03383127a8114cbc6531e8b8367bf1e",
"sha256:20f1ab44d8c352074e2d7ca67dc00843067788791be373e67a0911998787ce7d",
"sha256:24b0b6688b9f31a911f2361fe818492650795c9e5d3a1bc647acbd7440142a4f",
"sha256:2f62c207d1740b0bde5c4e949f857b044818f734a3d57f1d0d0edc65050532ed",
"sha256:3242b9619de955ab44581a03a64bdd7d5e470cc4183e8fcadd85ab9d3756ce7a",
"sha256:35c4310f8febe41f442d3c65066ca93cccefd75013df3d8c736c5b93ec288140",
"sha256:4235f9d5ddcab0b8dbd723dca56ea2922b485ea00e1dafacf33b0c7e840b3d32",
"sha256:542875f62bc56e91c6eac05a0deadeae20e1730be4c6334d8f04c944fcd99759",
"sha256:5ced67f1e34e1a450cdb48eb53ca73b60aa0af21c46b9b35ac3e581cf9f00e31",
"sha256:661509f51531ec125e52357a489ea3806640d0ca37d9dada461ffc69ee1e7b6e",
"sha256:7360647ea04db2e7dff1648d1da825c8cf68dc5fbd80b8fb5b3ee9f068dcd21a",
"sha256:736b8797b58febabb85494142c627bd182b50d2a7ec65322983e71065ad3034c",
"sha256:8c13d72ed6af7fd2c8acbd95661cf9477f94e381fce0792c04981a8283b52917",
"sha256:988b47ac70d204aed01589ed342303da7c4d84b56c2f4c4b8b00deda123372bf",
"sha256:995fc41ebda5a7a663a254a1dcac52638c3e847f48307b5416ee373da15075d7",
"sha256:a36c7eb6152ba5467fb264d73844877be8b0847874d4822b7cf2d3c0cb8cdcb0",
"sha256:aed4a9a7e3221b3e252c39d0bf794c438dc5453bc2963e8befe9d4cd324dff72",
"sha256:aef9aee84ec78af51107181d02fe8773b100b01c5dfde351184ad9223eab3698",
"sha256:b0221ca5a9837e040ebf61f48899926b5783668b7807419e4adae8175a31f773",
"sha256:b4d7679a08fea64573c969f6994a2631908bb2c0e69a7235648642f3d2e39a68",
"sha256:c250a7ec489b652c892e4f0a5d122cc14c3780f9f643e1a326754aedf82d9a76",
"sha256:ca86db5b561b894f9e5f115d6a159fff2a2570a652e07889d8a383b5fae66eb4",
"sha256:cfc523edecddaef56f6740d7de1ce24a2fdf94fd5e704091856a201872e37f9f",
"sha256:d92272c7c16e105788efe2cfa5d680f07e34e0c29b03c1908f8636f55d5f915a",
"sha256:da113b70f6ec40e7d81b43d1b139b9db6a05727ab8be1ee559f3a69854a69d34",
"sha256:f6fac64a38f6768e7bc7b035b9e10d8a538a9fadce06b983fb3e6fa55ac5f5ce",
"sha256:f8559617b1fcf59a9aedba2c9838b5b6aa211ffedecabca412b92a1ff75aac1a",
"sha256:fbb42a541b1093385a2d8c7eec94d26d30437d0e77c1d25dae1dcc46741a385e"
],
"index": "pypi",
"version": "==2.9.1"
},
"py": {
"hashes": [
"sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3",
"sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==1.10.0"
},
"pycparser": {
"hashes": [
"sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0",
"sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.20"
},
"pyjwt": {
"extras": [
"crypto"
],
"hashes": [
"sha256:b888b4d56f06f6dcd777210c334e69c737be74755d3e5e9ee3fe67dc18a0ee41",
"sha256:e0c4bb8d9f0af0c7f5b1ec4c5036309617d03d56932877f2f7a0beeb5318322f"
],
"index": "pypi",
"version": "==2.3.0"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
"sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.7"
},
"pytest": {
"hashes": [
"sha256:131b36680866a76e6781d13f101efb86cf674ebb9762eb70d3082b6f29889e89",
"sha256:7310f8d27bc79ced999e760ca304d69f6ba6c6649c0b60fb0e04a4a77cacc134"
],
"index": "pypi",
"version": "==6.2.5"
},
"pytest-forked": {
"hashes": [
"sha256:6aa9ac7e00ad1a539c41bec6d21011332de671e938c7637378ec9710204e37ca",
"sha256:dc4147784048e70ef5d437951728825a131b81714b398d5d52f17c7c144d8815"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==1.3.0"
},
"pytest-xdist": {
"hashes": [
"sha256:7b61ebb46997a0820a263553179d6d1e25a8c50d8a8620cd1aa1e20e3be99168",
"sha256:89b330316f7fc475f999c81b577c2b926c9569f3d397ae432c0c2e2496d61ff9"
],
"index": "pypi",
"version": "==2.4.0"
},
"requests": {
"hashes": [
"sha256:6c1246513ecd5ecd4528a0906f910e8f0f9c6b8ec72030dc9fd154dc1a6efd24",
"sha256:b8aa58f8cf793ffd8782d3d8cb19e66ef36f7aba4353eec859e74678b01b07a7"
],
"index": "pypi",
"version": "==2.26.0"
},
"toml": {
"hashes": [
"sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b",
"sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.10.2"
},
"typing-extensions": {
"hashes": [
"sha256:49f75d16ff11f1cd258e1b988ccff82a3ca5570217d7ad8c5f48205dd99a677e",
"sha256:d8226d10bc02a29bcc81df19a26e56a9647f8b0a6d4a83924139f4a8b01f17b7",
"sha256:f1d25edafde516b146ecd0613dabcc61409817af4766fbbcfb8d1ad4ec441a34"
],
"index": "pypi",
"version": "==3.10.0.2"
},
"urllib3": {
"hashes": [
"sha256:4987c65554f7a2dbf30c18fd48778ef124af6fab771a377103da0585e2336ece",
"sha256:c4fdf4019605b6e5423637e01bc9fe4daef873709a7973e195ceba0a62bbc844"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4' and python_version < '4'",
"version": "==1.26.7"
},
"zipp": {
"hashes": [
"sha256:71c644c5369f4a6e07636f0aa966270449561fcea2e3d6747b8d23efaa9d7832",
"sha256:9fe5ea21568a0a70e50f273397638d39b03353731e6cbbb3fd8502a33fec40bc"
],
"markers": "python_version >= '3.6'",
"version": "==3.6.0"
}
},
"develop": {
"backports.entry-points-selectable": {
"hashes": [
"sha256:988468260ec1c196dab6ae1149260e2f5472c9110334e5d51adcb77867361f6a",
"sha256:a6d9a871cde5e15b4c4a53e3d43ba890cc6861ec1332c9c2428c92f977192acc"
],
"markers": "python_version >= '2.7'",
"version": "==1.1.0"
},
"certifi": {
"hashes": [
"sha256:78884e7c1d4b00ce3cea67b44566851c4343c120abd683433ce934a68ea58872",
"sha256:d62a0163eb4c2344ac042ab2bdf75399a71a2d8c7d47eac2e2ee91b9d6339569"
],
"version": "==2021.10.8"
},
"distlib": {
"hashes": [
"sha256:c8b54e8454e5bf6237cc84c20e8264c3e991e824ef27e8f1e81049867d861e31",
"sha256:d982d0751ff6eaaab5e2ec8e691d949ee80eddf01a62eaa96ddb11531fe16b05"
],
"version": "==0.3.3"
},
"filelock": {
"hashes": [
"sha256:7afc856f74fa7006a289fd10fa840e1eebd8bbff6bffb69c26c54a0512ea8cf8",
"sha256:bb2a1c717df74c48a2d00ed625e5a66f8572a3a30baacb7657add1d7bac4097b"
],
"markers": "python_version >= '3.6'",
"version": "==3.3.2"
},
"flake8": {
"hashes": [
"sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d",
"sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"
],
"index": "pypi",
"version": "==4.0.1"
},
"importlib-metadata": {
"hashes": [
"sha256:b618b6d2d5ffa2f16add5697cf57a46c76a56229b0ed1c438322e4e95645bd15",
"sha256:f284b3e11256ad1e5d03ab86bb2ccd6f5339688ff17a4d797a0fe7df326f23b1"
],
"markers": "python_version < '3.8'",
"version": "==4.8.1"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
],
"version": "==0.6.1"
},
"mypy": {
"hashes": [
"sha256:088cd9c7904b4ad80bec811053272986611b84221835e079be5bcad029e79dd9",
"sha256:0aadfb2d3935988ec3815952e44058a3100499f5be5b28c34ac9d79f002a4a9a",
"sha256:119bed3832d961f3a880787bf621634ba042cb8dc850a7429f643508eeac97b9",
"sha256:1a85e280d4d217150ce8cb1a6dddffd14e753a4e0c3cf90baabb32cefa41b59e",
"sha256:3c4b8ca36877fc75339253721f69603a9c7fdb5d4d5a95a1a1b899d8b86a4de2",
"sha256:3e382b29f8e0ccf19a2df2b29a167591245df90c0b5a2542249873b5c1d78212",
"sha256:42c266ced41b65ed40a282c575705325fa7991af370036d3f134518336636f5b",
"sha256:53fd2eb27a8ee2892614370896956af2ff61254c275aaee4c230ae771cadd885",
"sha256:704098302473cb31a218f1775a873b376b30b4c18229421e9e9dc8916fd16150",
"sha256:7df1ead20c81371ccd6091fa3e2878559b5c4d4caadaf1a484cf88d93ca06703",
"sha256:866c41f28cee548475f146aa4d39a51cf3b6a84246969f3759cb3e9c742fc072",
"sha256:a155d80ea6cee511a3694b108c4494a39f42de11ee4e61e72bc424c490e46457",
"sha256:adaeee09bfde366d2c13fe6093a7df5df83c9a2ba98638c7d76b010694db760e",
"sha256:b6fb13123aeef4a3abbcfd7e71773ff3ff1526a7d3dc538f3929a49b42be03f0",
"sha256:b94e4b785e304a04ea0828759172a15add27088520dc7e49ceade7834275bedb",
"sha256:c0df2d30ed496a08de5daed2a9ea807d07c21ae0ab23acf541ab88c24b26ab97",
"sha256:c6c2602dffb74867498f86e6129fd52a2770c48b7cd3ece77ada4fa38f94eba8",
"sha256:ceb6e0a6e27fb364fb3853389607cf7eb3a126ad335790fa1e14ed02fba50811",
"sha256:d9dd839eb0dc1bbe866a288ba3c1afc33a202015d2ad83b31e875b5905a079b6",
"sha256:e4dab234478e3bd3ce83bac4193b2ecd9cf94e720ddd95ce69840273bf44f6de",
"sha256:ec4e0cd079db280b6bdabdc807047ff3e199f334050db5cbb91ba3e959a67504",
"sha256:ecd2c3fe726758037234c93df7e98deb257fd15c24c9180dacf1ef829da5f921",
"sha256:ef565033fa5a958e62796867b1df10c40263ea9ded87164d67572834e57a174d"
],
"index": "pypi",
"version": "==0.910"
},
"mypy-extensions": {
"hashes": [
"sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d",
"sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"
],
"version": "==0.4.3"
},
"pipenv": {
"hashes": [
"sha256:05958fadcd70b2de6a27542fcd2bd72dd5c59c6d35307fdac3e06361fb06e30e",
"sha256:d180f5be4775c552fd5e69ae18a9d6099d9dafb462efe54f11c72cb5f4d5e977"
],
"index": "pypi",
"version": "==2021.5.29"
},
"platformdirs": {
"hashes": [
"sha256:367a5e80b3d04d2428ffa76d33f124cf11e8fff2acdaa9b43d545f5c7d661ef2",
"sha256:8868bbe3c3c80d42f20156f22e7131d2fb321f5bc86a2a345375c6481a67021d"
],
"markers": "python_version >= '3.6'",
"version": "==2.4.0"
},
"pycodestyle": {
"hashes": [
"sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20",
"sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==2.8.0"
},
"pyflakes": {
"hashes": [
"sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c",
"sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.4.0"
},
"six": {
"hashes": [
"sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926",
"sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==1.16.0"
},
"toml": {
"hashes": [
"sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b",
"sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"
],
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.10.2"
},
"typed-ast": {
"hashes": [
"sha256:01ae5f73431d21eead5015997ab41afa53aa1fbe252f9da060be5dad2c730ace",
"sha256:067a74454df670dcaa4e59349a2e5c81e567d8d65458d480a5b3dfecec08c5ff",
"sha256:0fb71b8c643187d7492c1f8352f2c15b4c4af3f6338f21681d3681b3dc31a266",
"sha256:1b3ead4a96c9101bef08f9f7d1217c096f31667617b58de957f690c92378b528",
"sha256:2068531575a125b87a41802130fa7e29f26c09a2833fea68d9a40cf33902eba6",
"sha256:209596a4ec71d990d71d5e0d312ac935d86930e6eecff6ccc7007fe54d703808",
"sha256:2c726c276d09fc5c414693a2de063f521052d9ea7c240ce553316f70656c84d4",
"sha256:398e44cd480f4d2b7ee8d98385ca104e35c81525dd98c519acff1b79bdaac363",
"sha256:52b1eb8c83f178ab787f3a4283f68258525f8d70f778a2f6dd54d3b5e5fb4341",
"sha256:5feca99c17af94057417d744607b82dd0a664fd5e4ca98061480fd8b14b18d04",
"sha256:7538e495704e2ccda9b234b82423a4038f324f3a10c43bc088a1636180f11a41",
"sha256:760ad187b1041a154f0e4d0f6aae3e40fdb51d6de16e5c99aedadd9246450e9e",
"sha256:777a26c84bea6cd934422ac2e3b78863a37017618b6e5c08f92ef69853e765d3",
"sha256:95431a26309a21874005845c21118c83991c63ea800dd44843e42a916aec5899",
"sha256:9ad2c92ec681e02baf81fdfa056fe0d818645efa9af1f1cd5fd6f1bd2bdfd805",
"sha256:9c6d1a54552b5330bc657b7ef0eae25d00ba7ffe85d9ea8ae6540d2197a3788c",
"sha256:aee0c1256be6c07bd3e1263ff920c325b59849dc95392a05f258bb9b259cf39c",
"sha256:af3d4a73793725138d6b334d9d247ce7e5f084d96284ed23f22ee626a7b88e39",
"sha256:b36b4f3920103a25e1d5d024d155c504080959582b928e91cb608a65c3a49e1a",
"sha256:b9574c6f03f685070d859e75c7f9eeca02d6933273b5e69572e5ff9d5e3931c3",
"sha256:bff6ad71c81b3bba8fa35f0f1921fb24ff4476235a6e94a26ada2e54370e6da7",
"sha256:c190f0899e9f9f8b6b7863debfb739abcb21a5c054f911ca3596d12b8a4c4c7f",
"sha256:c907f561b1e83e93fad565bac5ba9c22d96a54e7ea0267c708bffe863cbe4075",
"sha256:cae53c389825d3b46fb37538441f75d6aecc4174f615d048321b716df2757fb0",
"sha256:dd4a21253f42b8d2b48410cb31fe501d32f8b9fbeb1f55063ad102fe9c425e40",
"sha256:dde816ca9dac1d9c01dd504ea5967821606f02e510438120091b84e852367428",
"sha256:f2362f3cb0f3172c42938946dbc5b7843c2a28aec307c49100c8b38764eb6927",
"sha256:f328adcfebed9f11301eaedfa48e15bdece9b519fb27e6a8c01aa52a17ec31b3",
"sha256:f8afcf15cc511ada719a88e013cec87c11aff7b91f019295eb4530f96fe5ef2f",
"sha256:fb1bbeac803adea29cedd70781399c99138358c26d05fcbd23c13016b7f5ec65"
],
"markers": "python_version < '3.8'",
"version": "==1.4.3"
},
"types-psycopg2": {
"hashes": [
"sha256:77ed80f2668582654623e04fb3d741ecce93effcc39c929d7e02f4a917a538ce",
"sha256:98a6e0e9580cd7eb4bd4d20f7c7063d154b2589a2b90c0ce4e3ca6085cde77c6"
],
"index": "pypi",
"version": "==2.9.1"
},
"types-requests": {
"hashes": [
"sha256:b279284e51f668e38ee12d9665e4d789089f532dc2a0be4a1508ca0efd98ba9e",
"sha256:ba1d108d512e294b6080c37f6ae7cb2a2abf527560e2b671d1786c1fc46b541a"
],
"index": "pypi",
"version": "==2.25.11"
},
"typing-extensions": {
"hashes": [
"sha256:49f75d16ff11f1cd258e1b988ccff82a3ca5570217d7ad8c5f48205dd99a677e",
"sha256:d8226d10bc02a29bcc81df19a26e56a9647f8b0a6d4a83924139f4a8b01f17b7",
"sha256:f1d25edafde516b146ecd0613dabcc61409817af4766fbbcfb8d1ad4ec441a34"
],
"index": "pypi",
"version": "==3.10.0.2"
},
"virtualenv": {
"hashes": [
"sha256:4b02e52a624336eece99c96e3ab7111f469c24ba226a53ec474e8e787b365814",
"sha256:576d05b46eace16a9c348085f7d0dc8ef28713a2cabaa1cf0aea41e8f12c9218"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==20.10.0"
},
"virtualenv-clone": {
"hashes": [
"sha256:418ee935c36152f8f153c79824bb93eaf6f0f7984bae31d3f48f350b9183501a",
"sha256:44d5263bceed0bac3e1424d64f798095233b64def1c5689afa43dc3223caf5b0"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.5.7"
},
"yapf": {
"hashes": [
"sha256:408fb9a2b254c302f49db83c59f9aa0b4b0fd0ec25be3a5c51181327922ff63d",
"sha256:e3a234ba8455fe201eaa649cdac872d590089a18b661e39bbac7020978dd9c2e"
],
"index": "pypi",
"version": "==0.31.0"
},
"zipp": {
"hashes": [
"sha256:71c644c5369f4a6e07636f0aa966270449561fcea2e3d6747b8d23efaa9d7832",
"sha256:9fe5ea21568a0a70e50f273397638d39b03353731e6cbbb3fd8502a33fec40bc"
],
"markers": "python_version >= '3.6'",
"version": "==3.6.0"
}
}
}

View File

@@ -28,12 +28,12 @@ apt install build-essential libtool libreadline-dev zlib1g-dev flex bison libsec
libssl-dev clang pkg-config libpq-dev
```
[Rust] 1.55 or later is also required.
[Rust] 1.56.1 or later is also required.
To run the `psql` client, install the `postgresql-client` package or modify `PATH` and `LD_LIBRARY_PATH` to include `tmp_install/bin` and `tmp_install/lib`, respectively.
To run the integration tests or Python scripts (not required to use the code), install
Python (3.7 or higher), and install python3 packages using `pipenv install` in the project directory.
Python (3.7 or higher), and install python3 packages using `./scripts/pysync` (requires poetry) in the project directory.
2. Build zenith and patched postgres
```sh
@@ -128,8 +128,7 @@ INSERT 0 1
```sh
git clone --recursive https://github.com/zenithdb/zenith.git
make # builds also postgres and installs it to ./tmp_install
cd test_runner
pipenv run pytest
./scripts/pytest
```
## Documentation

compute_tools/Cargo.lock (generated, 1161 lines): file diff suppressed because it is too large

View File

@@ -1,28 +1,19 @@
[package]
name = "compute_tools"
version = "0.1.0"
authors = ["Alexey Kondratov <kondratov.aleksey@gmail.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[workspace]
# TODO: make it a part of global zenith workspace
edition = "2021"
[dependencies]
libc = "0.2"
anyhow = "1.0"
chrono = "0.4"
clap = "2.33"
env_logger = "0.8"
clap = "3.0"
env_logger = "0.9"
hyper = { version = "0.14", features = ["full"] }
log = { version = "0.4", features = ["std", "serde"] }
postgres = "0.19"
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
regex = "1"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_json = "1"
tar = "0.4"
tokio = { version = "1", features = ["full"] }
[profile.release]
debug = true
tokio = { version = "1", features = ["macros", "rt", "rt-multi-thread"] }

View File

@@ -1,14 +0,0 @@
# First transient image to build compute_tools binaries
FROM rust:slim-buster AS rust-build
RUN mkdir /compute_tools
WORKDIR /compute_tools
COPY . /compute_tools/
RUN cargo build --release
# Final image that only has one binary
FROM debian:buster-slim
COPY --from=rust-build /compute_tools/target/release/zenith_ctl /usr/local/bin/zenith_ctl

View File

@@ -27,14 +27,14 @@
//! ```
//!
use std::fs::File;
use std::panic;
use std::path::Path;
use std::process::{exit, Command, ExitStatus};
use std::sync::{Arc, RwLock};
use std::{env, panic};
use anyhow::Result;
use anyhow::{Context, Result};
use chrono::Utc;
use libc::{prctl, PR_SET_PDEATHSIG, SIGINT};
use clap::Arg;
use log::info;
use postgres::{Client, NoTls};
@@ -70,7 +70,7 @@ fn prepare_pgdata(state: &Arc<RwLock<ComputeState>>) -> Result<()> {
.expect("tenant id should be provided");
info!(
"applying spec for cluster #{}, operation #{}",
"starting cluster #{}, operation #{}",
spec.cluster.cluster_id,
spec.operation_uuid.as_ref().unwrap()
);
@@ -80,10 +80,23 @@ fn prepare_pgdata(state: &Arc<RwLock<ComputeState>>) -> Result<()> {
config::write_postgres_conf(&pgdata_path.join("postgresql.conf"), spec)?;
info!("starting safekeepers syncing");
let lsn = sync_safekeepers(&state.pgdata, &state.pgbin)?;
let lsn = sync_safekeepers(&state.pgdata, &state.pgbin)
.with_context(|| "failed to sync safekeepers")?;
info!("safekeepers synced at LSN {}", lsn);
get_basebackup(&state.pgdata, &pageserver_connstr, &tenant, &timeline, &lsn)?;
info!(
"getting basebackup@{} from pageserver {}",
lsn, pageserver_connstr
);
get_basebackup(&state.pgdata, &pageserver_connstr, &tenant, &timeline, &lsn).with_context(
|| {
format!(
"failed to get basebackup@{} from pageserver {}",
lsn, pageserver_connstr
)
},
)?;
// Update pg_hba.conf received with basebackup.
update_pg_hba(pgdata_path)?;
@@ -142,51 +155,42 @@ fn run_compute(state: &Arc<RwLock<ComputeState>>) -> Result<ExitStatus> {
}
fn main() -> Result<()> {
// During configuration we are starting Postgres as a child process. If we
// fail we do not want to leave it running. PR_SET_PDEATHSIG sets the signal
// that will be sent to the child process when the parent dies. NB: this is
// cleared for the child of a fork(). SIGINT means fast shutdown for Postgres.
// This does not matter much for Docker, where `zenith_ctl` is an entrypoint,
// so the whole container will exit if it exits. But could be useful when
// `zenith_ctl` is used in e.g. systemd.
unsafe {
prctl(PR_SET_PDEATHSIG, SIGINT);
}
// TODO: re-use `zenith_utils::logging` later
init_logger(DEFAULT_LOG_LEVEL)?;
// Env variable is set by `cargo`
let version: Option<&str> = option_env!("CARGO_PKG_VERSION");
let matches = clap::App::new("zenith_ctl")
.version("0.1.0")
.version(version.unwrap_or("unknown"))
.arg(
clap::Arg::with_name("connstr")
.short("C")
Arg::new("connstr")
.short('C')
.long("connstr")
.value_name("DATABASE_URL")
.required(true),
)
.arg(
clap::Arg::with_name("pgdata")
.short("D")
Arg::new("pgdata")
.short('D')
.long("pgdata")
.value_name("DATADIR")
.required(true),
)
.arg(
clap::Arg::with_name("pgbin")
.short("b")
Arg::new("pgbin")
.short('b')
.long("pgbin")
.value_name("POSTGRES_PATH"),
)
.arg(
clap::Arg::with_name("spec")
.short("s")
Arg::new("spec")
.short('s')
.long("spec")
.value_name("SPEC_JSON"),
)
.arg(
clap::Arg::with_name("spec-path")
.short("S")
Arg::new("spec-path")
.short('S')
.long("spec-path")
.value_name("SPEC_PATH"),
)
@@ -212,13 +216,7 @@ fn main() -> Result<()> {
let file = File::open(path)?;
serde_json::from_reader(file)?
} else {
// Finally, try to fetch it from the env
// XXX: not tested well and kept as a backup option for k8s, Docker, etc.
// TODO: remove later
match env::var("CLUSTER_SPEC") {
Ok(json) => serde_json::from_str(&json)?,
Err(_) => panic!("cluster spec should be provided via --spec, --spec-path or env variable CLUSTER_SPEC")
}
panic!("cluster spec should be provided via --spec or --spec-path argument");
}
}
};

View File

@@ -5,7 +5,7 @@ use std::process::Command;
use std::str::FromStr;
use std::{fs, thread, time};
use anyhow::{anyhow, Result};
use anyhow::{bail, Result};
use postgres::{Client, Transaction};
use serde::Deserialize;
@@ -226,7 +226,7 @@ pub fn wait_for_postgres(port: &str, pgdata: &Path) -> Result<()> {
// but postgres starts listening almost immediately, even if it is not really
// ready to accept connections).
if slept >= POSTGRES_WAIT_TIMEOUT {
return Err(anyhow!("timed out while waiting for Postgres to start"));
bail!("timed out while waiting for Postgres to start");
}
if pid_path.exists() {

View File

@@ -87,17 +87,19 @@ pub fn sync_safekeepers(pgdata: &str, pgbin: &str) -> Result<String> {
.args(&["--sync-safekeepers"])
.env("PGDATA", &pgdata) // we cannot use -D in this mode
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.expect("postgres --sync-safekeepers failed to start");
// `postgres --sync-safekeepers` will print all log output to stderr and
// final LSN to stdout. So we pipe only stdout, while stderr will be automatically
// redirected to the caller output.
let sync_output = sync_handle
.wait_with_output()
.expect("postgres --sync-safekeepers failed");
if !sync_output.status.success() {
anyhow::bail!(
"postgres --sync-safekeepers failed: '{}'",
String::from_utf8_lossy(&sync_output.stderr)
"postgres --sync-safekeepers exited with non-zero status: {}",
sync_output.status,
);
}

View File

@@ -1,14 +1,11 @@
[package]
name = "control_plane"
version = "0.1.0"
authors = ["Stas Kelvich <stas@zenith.tech>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
edition = "2021"
[dependencies]
tar = "0.4.33"
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="2949d98df52587d562986aad155dd4e889e408b7" }
serde = { version = "1.0", features = ["derive"] }
toml = "0.5"
lazy_static = "1.4"

View File

@@ -1,20 +1,20 @@
# Page server and three safekeepers.
[pageserver]
listen_pg_addr = 'localhost:64000'
listen_http_addr = 'localhost:9898'
listen_pg_addr = '127.0.0.1:64000'
listen_http_addr = '127.0.0.1:9898'
auth_type = 'Trust'
[[safekeepers]]
name = 'sk1'
id = 1
pg_port = 5454
http_port = 7676
[[safekeepers]]
name = 'sk2'
id = 2
pg_port = 5455
http_port = 7677
[[safekeepers]]
name = 'sk3'
id = 3
pg_port = 5456
http_port = 7678

View File

@@ -1,11 +1,11 @@
# Minimal zenith environment with one safekeeper. This is equivalent to the built-in
# defaults that you get with no --config
[pageserver]
listen_pg_addr = 'localhost:64000'
listen_http_addr = 'localhost:9898'
listen_pg_addr = '127.0.0.1:64000'
listen_http_addr = '127.0.0.1:9898'
auth_type = 'Trust'
[[safekeepers]]
name = 'single'
id = 1
pg_port = 5454
http_port = 7676

View File

@@ -82,15 +82,11 @@ impl ComputeControlPlane {
let mut strings = s.split('@');
let name = strings.next().unwrap();
let lsn: Option<Lsn>;
if let Some(lsnstr) = strings.next() {
lsn = Some(
Lsn::from_str(lsnstr)
.with_context(|| "invalid LSN in point-in-time specification")?,
);
} else {
lsn = None
}
let lsn = strings
.next()
.map(Lsn::from_str)
.transpose()
.context("invalid LSN in point-in-time specification")?;
// Resolve the timeline ID, given the human-readable branch name
let timeline_id = self
@@ -253,16 +249,16 @@ impl PostgresNode {
let mut client = self
.pageserver
.page_server_psql_client()
.with_context(|| "connecting to page server failed")?;
.context("connecting to page server failed")?;
let copyreader = client
.copy_out(sql.as_str())
.with_context(|| "page server 'basebackup' command failed")?;
.context("page server 'basebackup' command failed")?;
// Read the archive directly from the `CopyOutReader`
tar::Archive::new(copyreader)
.unpack(&self.pgdata())
.with_context(|| "extracting base backup failed")?;
.context("extracting base backup failed")?;
Ok(())
}
@@ -338,14 +334,26 @@ impl PostgresNode {
if let Some(lsn) = self.lsn {
conf.append("recovery_target_lsn", &lsn.to_string());
}
conf.append_line("");
// Configure backpressure
// - Replication write lag depends on how fast the walreceiver can process incoming WAL.
// This lag determines latency of get_page_at_lsn. Speed of applying WAL is about 10MB/sec,
// so to avoid hitting the 1 minute timeout, this lag should not be larger than 600MB.
// Actually, latency should be much smaller (ideally < 1 sec). But we assume that recently
// updated pages are not requested from the pageserver.
// - Replication flush lag depends on speed of persisting data by checkpointer (creation of
// delta/image layers) and advancing disk_consistent_lsn. Safekeepers are able to
// remove/archive WAL only beyond disk_consistent_lsn. Too large a lag can cause long
// recovery time (in case of pageserver crash) and disk space overflow at safekeepers.
// - Replication apply lag depends on speed of uploading changes to S3 by uploader thread.
// To be able to restore the database in case of a pageserver node crash, the safekeeper should not
// remove WAL beyond this point. Too large a lag can cause space exhaustion in safekeepers
// (if they are not able to upload WAL to S3).
conf.append("max_replication_write_lag", "500MB");
conf.append("max_replication_flush_lag", "10GB");
if !self.env.safekeepers.is_empty() {
// Configure backpressure
// In setup with safekeepers apply_lag depends on
// speed of data checkpointing on pageserver (see disk_consistent_lsn).
conf.append("max_replication_apply_lag", "1500MB");
// Configure the node to connect to the safekeepers
conf.append("synchronous_standby_names", "walproposer");
@@ -358,11 +366,6 @@ impl PostgresNode {
.join(",");
conf.append("wal_acceptors", &wal_acceptors);
} else {
// Configure backpressure
// In setup without safekeepers, flush_lag depends on
// speed of data checkpointing on pageserver (see disk_consistent_lsn)
conf.append("max_replication_flush_lag", "1500MB");
// We only use setup without safekeepers for tests,
// and don't care about data durability on pageserver,
// so set more relaxed synchronous_commit.
@@ -443,7 +446,7 @@ impl PostgresNode {
if let Some(token) = auth_token {
cmd.env("ZENITH_AUTH_TOKEN", token);
}
let pg_ctl = cmd.status().with_context(|| "pg_ctl failed")?;
let pg_ctl = cmd.status().context("pg_ctl failed")?;
if !pg_ctl.success() {
anyhow::bail!("pg_ctl failed");

View File

@@ -9,6 +9,7 @@
use anyhow::{anyhow, bail, Context, Result};
use std::fs;
use std::path::Path;
use std::process::Command;
pub mod compute;
pub mod local_env;
@@ -31,3 +32,19 @@ pub fn read_pidfile(pidfile: &Path) -> Result<i32> {
}
Ok(pid)
}
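/// Build the environment for a spawned zenith binary: clear the inherited
/// environment, always enable Rust backtraces, and pass through
/// `LLVM_PROFILE_FILE` and `RUST_LOG` from the parent if they are set.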
fn fill_rust_env_vars(cmd: &mut Command) -> &mut Command {
let cmd = cmd.env_clear().env("RUST_BACKTRACE", "1");
let var = "LLVM_PROFILE_FILE";
if let Some(val) = std::env::var_os(var) {
cmd.env(var, val);
}
const RUST_LOG_KEY: &str = "RUST_LOG";
if let Ok(rust_log_value) = std::env::var(RUST_LOG_KEY) {
cmd.env(RUST_LOG_KEY, rust_log_value)
} else {
cmd
}
}

View File

@@ -12,7 +12,9 @@ use std::path::{Path, PathBuf};
use std::process::{Command, Stdio};
use zenith_utils::auth::{encode_from_key_file, Claims, Scope};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::{opt_display_serde, ZTenantId};
use zenith_utils::zid::{opt_display_serde, ZNodeId, ZTenantId};
use crate::safekeeper::SafekeeperNode;
//
// This data structures represents zenith CLI config
@@ -62,6 +64,8 @@ pub struct LocalEnv {
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(default)]
pub struct PageServerConf {
// node id
pub id: ZNodeId,
// Pageserver connection settings
pub listen_pg_addr: String,
pub listen_http_addr: String,
@@ -76,6 +80,7 @@ pub struct PageServerConf {
impl Default for PageServerConf {
fn default() -> Self {
Self {
id: ZNodeId(0),
listen_pg_addr: String::new(),
listen_http_addr: String::new(),
auth_type: AuthType::Trust,
@@ -87,7 +92,7 @@ impl Default for PageServerConf {
#[derive(Serialize, Deserialize, Clone, Debug)]
#[serde(default)]
pub struct SafekeeperConf {
pub name: String,
pub id: ZNodeId,
pub pg_port: u16,
pub http_port: u16,
pub sync: bool,
@@ -96,7 +101,7 @@ pub struct SafekeeperConf {
impl Default for SafekeeperConf {
fn default() -> Self {
Self {
name: String::new(),
id: ZNodeId(0),
pg_port: 0,
http_port: 0,
sync: true,
@@ -136,8 +141,8 @@ impl LocalEnv {
self.base_data_dir.clone()
}
pub fn safekeeper_data_dir(&self, node_name: &str) -> PathBuf {
self.base_data_dir.join("safekeepers").join(node_name)
pub fn safekeeper_data_dir(&self, data_dir_name: &str) -> PathBuf {
self.base_data_dir.join("safekeepers").join(data_dir_name)
}
/// Create a LocalEnv from a config file.
@@ -251,7 +256,7 @@ impl LocalEnv {
.arg("2048")
.stdout(Stdio::null())
.output()
.with_context(|| "failed to generate auth private key")?;
.context("failed to generate auth private key")?;
if !keygen_output.status.success() {
bail!(
"openssl failed: '{}'",
@@ -270,7 +275,7 @@ impl LocalEnv {
.args(&["-out", public_key_path.to_str().unwrap()])
.stdout(Stdio::null())
.output()
.with_context(|| "failed to generate auth private key")?;
.context("failed to generate auth private key")?;
if !keygen_output.status.success() {
bail!(
"openssl failed: '{}'",
@@ -285,7 +290,7 @@ impl LocalEnv {
fs::create_dir_all(self.pg_data_dirs_path())?;
for safekeeper in &self.safekeepers {
fs::create_dir_all(self.safekeeper_data_dir(&safekeeper.name))?;
fs::create_dir_all(SafekeeperNode::datadir_path_by_id(self, safekeeper.id))?;
}
let mut conf_content = String::new();

View File

@@ -4,7 +4,7 @@
/// NOTE: This doesn't implement the full, correct postgresql.conf syntax. Just
/// enough to extract a few settings we need in Zenith, assuming you don't do
/// funny stuff like include-directives or funny escaping.
use anyhow::{anyhow, bail, Context, Result};
use anyhow::{bail, Context, Result};
use lazy_static::lazy_static;
use regex::Regex;
use std::collections::HashMap;
@@ -78,7 +78,7 @@ impl PostgresConf {
<T as FromStr>::Err: std::error::Error + Send + Sync + 'static,
{
self.get(field_name)
.ok_or_else(|| anyhow!("could not find '{}' option {}", field_name, context))?
.with_context(|| format!("could not find '{}' option {}", field_name, context))?
.parse::<T>()
.with_context(|| format!("could not parse '{}' option {}", field_name, context))
}

View File

@@ -15,10 +15,11 @@ use reqwest::blocking::{Client, RequestBuilder, Response};
use reqwest::{IntoUrl, Method};
use thiserror::Error;
use zenith_utils::http::error::HttpErrorBody;
use zenith_utils::zid::ZNodeId;
use crate::local_env::{LocalEnv, SafekeeperConf};
use crate::read_pidfile;
use crate::storage::PageServerNode;
use crate::{fill_rust_env_vars, read_pidfile};
use zenith_utils::connstring::connection_address;
#[derive(Error, Debug)]
@@ -61,7 +62,7 @@ impl ResponseErrorMessageExt for Response {
//
#[derive(Debug)]
pub struct SafekeeperNode {
pub name: String,
pub id: ZNodeId,
pub conf: SafekeeperConf,
@@ -77,15 +78,15 @@ impl SafekeeperNode {
pub fn from_env(env: &LocalEnv, conf: &SafekeeperConf) -> SafekeeperNode {
let pageserver = Arc::new(PageServerNode::from_env(env));
println!("initializing for {} for {}", conf.name, conf.http_port);
println!("initializing for sk {} for {}", conf.id, conf.http_port);
SafekeeperNode {
name: conf.name.clone(),
id: conf.id,
conf: conf.clone(),
pg_connection_config: Self::safekeeper_connection_config(conf.pg_port),
env: env.clone(),
http_client: Client::new(),
http_base_url: format!("http://localhost:{}/v1", conf.http_port),
http_base_url: format!("http://127.0.0.1:{}/v1", conf.http_port),
pageserver,
}
}
@@ -93,13 +94,17 @@ impl SafekeeperNode {
/// Construct libpq connection string for connecting to this safekeeper.
fn safekeeper_connection_config(port: u16) -> Config {
// TODO safekeeper authentication not implemented yet
format!("postgresql://no_user@localhost:{}/no_db", port)
format!("postgresql://no_user@127.0.0.1:{}/no_db", port)
.parse()
.unwrap()
}
pub fn datadir_path_by_id(env: &LocalEnv, sk_id: ZNodeId) -> PathBuf {
env.safekeeper_data_dir(format!("sk{}", sk_id).as_ref())
}
pub fn datadir_path(&self) -> PathBuf {
self.env.safekeeper_data_dir(&self.name)
SafekeeperNode::datadir_path_by_id(&self.env, self.id)
}
pub fn pid_file(&self) -> PathBuf {
@@ -114,26 +119,22 @@ impl SafekeeperNode {
);
io::stdout().flush().unwrap();
let listen_pg = format!("localhost:{}", self.conf.pg_port);
let listen_http = format!("localhost:{}", self.conf.http_port);
let listen_pg = format!("127.0.0.1:{}", self.conf.pg_port);
let listen_http = format!("127.0.0.1:{}", self.conf.http_port);
let mut cmd = Command::new(self.env.safekeeper_bin()?);
cmd.args(&["-D", self.datadir_path().to_str().unwrap()])
.args(&["--listen-pg", &listen_pg])
.args(&["--listen-http", &listen_http])
.args(&["--recall", "1 second"])
.arg("--daemonize")
.env_clear()
.env("RUST_BACKTRACE", "1");
fill_rust_env_vars(
cmd.args(&["-D", self.datadir_path().to_str().unwrap()])
.args(&["--id", self.id.to_string().as_ref()])
.args(&["--listen-pg", &listen_pg])
.args(&["--listen-http", &listen_http])
.args(&["--recall", "1 second"])
.arg("--daemonize"),
);
if !self.conf.sync {
cmd.arg("--no-sync");
}
let var = "LLVM_PROFILE_FILE";
if let Some(val) = std::env::var_os(var) {
cmd.env(var, val);
}
if !cmd.status()?.success() {
bail!(
"Safekeeper failed to start. See '{}' for details.",
@@ -188,7 +189,7 @@ impl SafekeeperNode {
pub fn stop(&self, immediate: bool) -> anyhow::Result<()> {
let pid_file = self.pid_file();
if !pid_file.exists() {
println!("Safekeeper {} is already stopped", self.name);
println!("Safekeeper {} is already stopped", self.id);
return Ok(());
}
let pid = read_pidfile(&pid_file)?;

View File

@@ -19,7 +19,7 @@ use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
use crate::local_env::LocalEnv;
use crate::read_pidfile;
use crate::{fill_rust_env_vars, read_pidfile};
use pageserver::branches::BranchInfo;
use pageserver::tenant_mgr::TenantInfo;
use zenith_utils::connstring::connection_address;
@@ -96,46 +96,52 @@ impl PageServerNode {
.unwrap()
}
pub fn init(&self, create_tenant: Option<&str>) -> anyhow::Result<()> {
pub fn init(
&self,
create_tenant: Option<&str>,
config_overrides: &[&str],
) -> anyhow::Result<()> {
let mut cmd = Command::new(self.env.pageserver_bin()?);
let var = "LLVM_PROFILE_FILE";
if let Some(val) = std::env::var_os(var) {
cmd.env(var, val);
}
let id = format!("id={}", self.env.pageserver.id);
// FIXME: the paths should be shell-escaped to handle paths with spaces, quotes etc.
let mut args = vec![
"--init".to_string(),
"-D".to_string(),
self.env.base_data_dir.display().to_string(),
"-c".to_string(),
format!("pg_distrib_dir='{}'", self.env.pg_distrib_dir.display()),
"-c".to_string(),
format!("auth_type='{}'", self.env.pageserver.auth_type),
"-c".to_string(),
format!(
"listen_http_addr='{}'",
self.env.pageserver.listen_http_addr
),
"-c".to_string(),
format!("listen_pg_addr='{}'", self.env.pageserver.listen_pg_addr),
];
let base_data_dir_param = self.env.base_data_dir.display().to_string();
let pg_distrib_dir_param =
format!("pg_distrib_dir='{}'", self.env.pg_distrib_dir.display());
let auth_type_param = format!("auth_type='{}'", self.env.pageserver.auth_type);
let listen_http_addr_param = format!(
"listen_http_addr='{}'",
self.env.pageserver.listen_http_addr
);
let listen_pg_addr_param =
format!("listen_pg_addr='{}'", self.env.pageserver.listen_pg_addr);
let mut args = Vec::with_capacity(20);
args.push("--init");
args.extend(["-D", &base_data_dir_param]);
args.extend(["-c", &pg_distrib_dir_param]);
args.extend(["-c", &authg_type_param]);
args.extend(["-c", &listen_http_addr_param]);
args.extend(["-c", &listen_pg_addr_param]);
args.extend(["-c", &id]);
for config_override in config_overrides {
args.extend(["-c", config_override]);
}
if self.env.pageserver.auth_type != AuthType::Trust {
args.extend([
"-c".to_string(),
"auth_validation_public_key_path='auth_public_key.pem'".to_string(),
"-c",
"auth_validation_public_key_path='auth_public_key.pem'",
]);
}
if let Some(tenantid) = create_tenant {
args.extend(["--create-tenant".to_string(), tenantid.to_string()])
args.extend(["--create-tenant", tenantid])
}
let status = cmd
.args(args)
.env_clear()
.env("RUST_BACKTRACE", "1")
let status = fill_rust_env_vars(cmd.args(args))
.status()
.expect("pageserver init failed");
@@ -154,7 +160,7 @@ impl PageServerNode {
self.repo_path().join("pageserver.pid")
}
pub fn start(&self) -> anyhow::Result<()> {
pub fn start(&self, config_overrides: &[&str]) -> anyhow::Result<()> {
print!(
"Starting pageserver at '{}' in '{}'",
connection_address(&self.pg_connection_config),
@@ -163,16 +169,16 @@ impl PageServerNode {
io::stdout().flush().unwrap();
let mut cmd = Command::new(self.env.pageserver_bin()?);
cmd.args(&["-D", self.repo_path().to_str().unwrap()])
.arg("--daemonize")
.env_clear()
.env("RUST_BACKTRACE", "1");
let var = "LLVM_PROFILE_FILE";
if let Some(val) = std::env::var_os(var) {
cmd.env(var, val);
let repo_path = self.repo_path();
let mut args = vec!["-D", repo_path.to_str().unwrap()];
for config_override in config_overrides {
args.extend(["-c", config_override]);
}
fill_rust_env_vars(cmd.args(&args).arg("--daemonize"));
if !cmd.status()?.success() {
bail!(
"Pageserver failed to start. See '{}' for details.",

View File

@@ -4,7 +4,7 @@ set -eux
if [ "$1" = 'pageserver' ]; then
if [ ! -d "/data/tenants" ]; then
echo "Initializing pageserver data directory"
pageserver --init -D /data -c "pg_distrib_dir='/usr/local'"
pageserver --init -D /data -c "pg_distrib_dir='/usr/local'" -c "id=10"
fi
echo "Staring pageserver at 0.0.0.0:6400"
pageserver -c "listen_pg_addr='0.0.0.0:6400'" -c "listen_http_addr='0.0.0.0:9898'" -D /data

View File

@@ -2,6 +2,16 @@
### Authentication
### Backpressure
Backpressure is used to limit the lag between the pageserver and the compute node or WAL service.
If the compute node or WAL service runs far ahead of the pageserver,
the time to serve page requests increases. This may lead to timeout errors.
To tune the backpressure limits, use the `max_replication_write_lag`, `max_replication_flush_lag` and `max_replication_apply_lag` settings.
When the lag between the current LSN (`pg_current_wal_flush_lsn()` at the compute node) and the minimal write/flush/apply position of the replica exceeds the limit,
backends performing writes are blocked until the replica catches up.
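For illustration, the sketch below renders these three limits into a `postgresql.conf` fragment. It is a minimal, self-contained example: the helper function is hypothetical, and the values simply mirror the defaults the local control plane sets for compute nodes in this changeset.
```rust
/// Hypothetical helper: render the backpressure GUCs into a
/// postgresql.conf fragment. The values mirror the defaults that
/// control_plane appends when creating a compute node.
fn backpressure_conf_fragment() -> String {
    [
        ("max_replication_write_lag", "500MB"),
        ("max_replication_flush_lag", "10GB"),
        ("max_replication_apply_lag", "1500MB"),
    ]
    .iter()
    .map(|(name, value)| format!("{} = '{}'\n", name, value))
    .collect()
}

fn main() {
    // Print the fragment that would be appended to postgresql.conf.
    print!("{}", backpressure_conf_fragment());
}
```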
### Base image (page image)
### Basebackup
@@ -76,7 +86,37 @@ The layer map tracks what layers exist for all the relishes in a timeline.
Zenith repository implementation that keeps data in layers.
### LSN
The Log Sequence Number (LSN) is a unique identifier of a WAL record in the WAL log.
The insert position is a byte offset into the logs, increasing monotonically with each new record.
Internally, an LSN is a 64-bit integer, representing a byte position in the write-ahead log stream.
It is printed as two hexadecimal numbers of up to 8 digits each, separated by a slash.
See also the [PostgreSQL doc about the pg_lsn type](https://www.postgresql.org/docs/devel/datatype-pg-lsn.html).
Values can be compared to calculate the volume of WAL data that separates them, so they are used to measure the progress of replication and recovery.
In Postgres and Zenith, LSNs are used to describe certain points in WAL handling.
PostgreSQL LSNs and functions to monitor them:
* `pg_current_wal_insert_lsn()` - Returns the current write-ahead log insert location.
* `pg_current_wal_lsn()` - Returns the current write-ahead log write location.
* `pg_current_wal_flush_lsn()` - Returns the current write-ahead log flush location.
* `pg_last_wal_receive_lsn()` - Returns the last write-ahead log location that has been received and synced to disk by streaming replication. While streaming replication is in progress this will increase monotonically.
* `pg_last_wal_replay_lsn()` - Returns the last write-ahead log location that has been replayed during recovery. If recovery is still in progress, this will increase monotonically.
(Source: [PostgreSQL documentation](https://www.postgresql.org/docs/devel/functions-admin.html))
Zenith safekeeper LSNs. For more details, check [walkeeper/README_PROTO.md](/walkeeper/README_PROTO.md):
* `CommitLSN`: position in WAL confirmed by a quorum of safekeepers.
* `RestartLSN`: position in WAL confirmed by all safekeepers.
* `FlushLSN`: part of WAL persisted to disk by the safekeeper.
* `VCL`: the largest LSN for which we can guarantee availability of all prior records.
Zenith pageserver LSNs:
* `last_record_lsn` - the end of the last processed WAL record.
* `disk_consistent_lsn` - data is known to be fully flushed and fsync'd to local disk on the pageserver up to this LSN.
* `remote_consistent_lsn` - the last LSN that is synced to remote storage and is guaranteed to survive a pageserver crash.
TODO: use this name consistently in remote storage code. Currently `disk_consistent_lsn` is used, and its meaning depends on the context.
* `ancestor_lsn` - the LSN of the branch point (the LSN at which this branch was created).
TODO: add table that describes mapping between PostgreSQL (compute), safekeeper and pageserver LSNs.
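As a quick illustration, the compute-side LSN functions above can be queried over an ordinary libpq connection. A minimal sketch using the `postgres` crate (the connection string is a placeholder; point it at any running compute node):
```rust
use postgres::{Client, NoTls};

fn main() -> Result<(), postgres::Error> {
    // Placeholder connection string for a local compute node.
    let mut client = Client::connect("postgresql://zenith@127.0.0.1:55432/postgres", NoTls)?;

    // Compare the insert, write and flush positions of the WAL stream.
    let row = client.query_one(
        "SELECT pg_current_wal_insert_lsn()::text,
                pg_current_wal_lsn()::text,
                pg_current_wal_flush_lsn()::text",
        &[],
    )?;
    let (insert, write, flush): (String, String, String) = (row.get(0), row.get(1), row.get(2));
    println!("insert={} write={} flush={}", insert, write, flush);
    Ok(())
}
```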
### Page (block)
The basic structure used to store relation data. All pages are of the same size.

View File

@@ -0,0 +1,22 @@
## Pageserver tenant migration
### Overview
This feature allows migrating a timeline from one pageserver to another by utilizing the remote storage capability.
### Migration process
Pageserver implements two new http handlers: timeline attach and timeline detach.
Timeline migration is performed in the following way:
1. Timeline attach is called on the target pageserver. This asks the pageserver to download the latest checkpoint uploaded to S3.
2. For now it is necessary to manually initialize the replication stream via a callmemaybe call, so that the target pageserver initializes replication from the safekeeper. (It is desirable to avoid this and initialize replication directly in the attach handler, but that requires some refactoring, probably [#997](https://github.com/zenithdb/zenith/issues/997)/[#1049](https://github.com/zenithdb/zenith/issues/1049).)
3. Replication state can be tracked via the timeline detail pageserver call (see the sketch after this list).
4. The compute node should be restarted with the new pageserver connection string. The issue of multiple compute nodes for one timeline is handled at the safekeeper consensus level, so it is not a problem here. Currently the responsibility for rescheduling the compute with an updated config lies on an external coordinator (the console).
5. The timeline is detached from the old pageserver, and its on-disk data is removed.
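As an illustration of steps 1 and 3, a migration driver could issue these calls over the pageserver HTTP API. This is a sketch only: the endpoint routes below are hypothetical placeholders, not the actual API, and the tenant/timeline ids must be substituted. It uses the `reqwest` crate, which the codebase already depends on.
```rust
use reqwest::blocking::Client;

fn main() -> Result<(), reqwest::Error> {
    let http = Client::new();
    let base = "http://127.0.0.1:9898/v1"; // target pageserver HTTP address
    // Hypothetical ids; substitute real tenant/timeline ids.
    let (tenant, timeline) = ("<tenant_id>", "<timeline_id>");

    // Step 1: ask the target pageserver to attach the timeline, i.e. to
    // download the latest checkpoint uploaded to S3.
    // NOTE: this route is a placeholder, not the real pageserver API.
    http.post(format!("{}/timeline/{}/{}/attach", base, tenant, timeline))
        .send()?
        .error_for_status()?;

    // Step 3: poll the timeline detail call to track replication progress.
    let detail = http
        .get(format!("{}/timeline/{}/{}", base, tenant, timeline))
        .send()?
        .error_for_status()?
        .text()?;
    println!("{}", detail);
    Ok(())
}
```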
### Implementation details
Now the safekeeper needs to track which pageserver it is replicating to. This introduces complications into the replication code:
* We need to distinguish different pageservers (currently this is done by connection string, which is imperfect and is covered here: https://github.com/zenithdb/zenith/issues/1105). Callmemaybe subscription management also needs to track that (this is already implemented).
* We need to track which pageserver is the primary. This is needed to avoid reconnections to non-primary pageservers, because we shouldn't reconnect to them when they decide to stop their walreceiver. I.e., this can happen when there is load on the compute and we are trying to detach the timeline from the old pageserver. In this case callmemaybe will try to reconnect to it because the replication termination condition is not met (a pageserver with an active compute can never catch up to the latest LSN, so there is always some WAL tail).

View File

@@ -147,6 +147,10 @@ bucket_name = 'some-sample-bucket'
# Name of the region where the bucket is located at
bucket_region = 'eu-north-1'
# A "subfolder" in the bucket, to use the same bucket separately by multiple pageservers at once.
# Optional; the pageserver uses the entire bucket if the prefix is not specified.
prefix_in_bucket = '/some/prefix/'
# Access key to connect to the bucket ("login" part of the credentials)
access_key_id = 'SOMEKEYAAAAASADSAH*#'

View File

@@ -87,31 +87,29 @@ so manual installation of dependencies is not recommended.
A single virtual environment with all dependencies is described in the single `Pipfile`.
### Prerequisites
- Install Python 3.7 (the minimal supported version)
- Later version (e.g. 3.8) is ok if you don't write Python code
- You can install Python 3.7 separately, e.g.:
- Install Python 3.7 (the minimal supported version) or greater.
- Our setup with poetry should work with newer Python versions too, so feel free to open an issue with a `c/test-runner` label if something doesn't work as expected.
- If you have trouble with another version, you can resolve it by installing Python 3.7 separately, via pyenv or via your system package manager, e.g.:
```bash
# In Ubuntu
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.7
```
- Install `pipenv`
- Exact version of `pipenv` is not important, you can use Debian/Ubuntu package `pipenv`.
- Install dependencies via either
* `pipenv --python 3.7 install --dev` if you will write Python code, or
* `pipenv install` if you only want to run Python scripts and don't have Python 3.7.
- Install `poetry`
- The exact version of `poetry` is not important; see the installation instructions on poetry's [website](https://python-poetry.org/docs/#installation).
- Install dependencies via `./scripts/pysync`. Note that CI uses Python 3.7, so if you have a different version, some linting tools can yield different results locally vs in CI.
Run `pipenv shell` to activate the virtual environment.
Alternatively, use `pipenv run` to run a single command in the venv, e.g. `pipenv run pytest`.
Run `poetry shell` to activate the virtual environment.
Alternatively, use `poetry run` to run a single command in the venv, e.g. `poetry run pytest`.
### Obligatory checks
We force code formatting via `yapf` and type hints via `mypy`.
Run the following commands in the repository's root (next to `setup.cfg`):
```bash
pipenv run yapf -ri . # All code is reformatted
pipenv run mypy . # Ensure there are no typing errors
poetry run yapf -ri . # All code is reformatted
poetry run mypy . # Ensure there are no typing errors
```
**WARNING**: do not run `mypy` from a directory other than the root of the repository.
@@ -123,17 +121,6 @@ Also consider:
* Adding more type hints to your code to avoid `Any`.
### Changing dependencies
You have to update `Pipfile.lock` if you have changed `Pipfile`:
To add a new package or change an existing one, use `poetry add` or `poetry update`, or edit `pyproject.toml` manually. Do not forget to run `poetry lock` in the latter case.
```bash
pipenv --python 3.7 install --dev # Re-create venv for Python 3.7 and install recent pipenv inside
pipenv run pipenv --version # Should be at least 2021.5.29
pipenv run pipenv lock # Regenerate Pipfile.lock
```
As the minimal supported version is Python 3.7 and we use it in CI,
you have to use a Python 3.7 environment when updating `Pipfile.lock`.
Otherwise some back-compatibility packages will be missing.
It is also important to run recent `pipenv`.
Older versions remove markers from `Pipfile.lock`.
More details are available in poetry's [documentation](https://python-poetry.org/docs/).

View File

@@ -1,8 +1,7 @@
[package]
name = "pageserver"
version = "0.1.0"
authors = ["Stas Kelvich <stas@zenith.tech>"]
edition = "2018"
edition = "2021"
[dependencies]
bookfile = { git = "https://github.com/zenithdb/bookfile.git", branch="generic-readext" }
@@ -15,16 +14,15 @@ futures = "0.3.13"
hyper = "0.14"
lazy_static = "1.4.0"
log = "0.4.14"
clap = "2.33.0"
clap = "3.0"
daemonize = "0.4.1"
tokio = { version = "1.11", features = ["process", "sync", "macros", "fs", "rt", "io-util", "time"] }
postgres-types = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres-protocol = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres-types = { git = "https://github.com/zenithdb/rust-postgres.git", rev="2949d98df52587d562986aad155dd4e889e408b7" }
postgres-protocol = { git = "https://github.com/zenithdb/rust-postgres.git", rev="2949d98df52587d562986aad155dd4e889e408b7" }
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="2949d98df52587d562986aad155dd4e889e408b7" }
tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="2949d98df52587d562986aad155dd4e889e408b7" }
tokio-stream = "0.1.8"
routerify = "2"
anyhow = "1.0"
anyhow = { version = "1.0", features = ["backtrace"] }
crc32c = "0.6.0"
thiserror = "1.0"
hex = { version = "0.4.3", features = ["serde"] }
@@ -32,7 +30,7 @@ tar = "0.4.33"
humantime = "2.1.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1"
toml_edit = { version = "0.12", features = ["easy"] }
toml_edit = { version = "0.13", features = ["easy"] }
scopeguard = "1.1.0"
async-trait = "0.1"
const_format = "0.2.21"
@@ -42,8 +40,8 @@ signal-hook = "0.3.10"
url = "2"
nix = "0.23"
once_cell = "1.8.0"
parking_lot = "0.11.2"
crossbeam-utils = "0.8.5"
fail = "0.5.0"
rust-s3 = { version = "0.28", default-features = false, features = ["no-verify-ssl", "tokio-rustls-tls"] }
async-compression = {version = "0.3", features = ["zstd", "tokio"]}

View File

@@ -129,13 +129,13 @@ There are the following implementations present:
* local filesystem — to use in tests mainly
* AWS S3 - to use in production
Implementation details are covered in the [backup readme](./src/remote_storage/README.md) and corresponding Rust file docs.
Implementation details are covered in the [backup readme](./src/remote_storage/README.md) and the corresponding Rust file docs; parameter documentation can be found in the [settings docs](../docs/settings.md).
The backup service is disabled by default and can be enabled to interact with a single remote storage.
CLI examples:
* Local FS: `${PAGESERVER_BIN} -c "remote_storage={local_path='/some/local/path/'}"`
* AWS S3 : `${PAGESERVER_BIN} -c "remote_storage={bucket_name='some-sample-bucket',bucket_region='eu-north-1',access_key_id='SOMEKEYAAAAASADSAH*#',secret_access_key='SOMEsEcReTsd292v'}"`
* AWS S3 : `${PAGESERVER_BIN} -c "remote_storage={bucket_name='some-sample-bucket',bucket_region='eu-north-1', prefix_in_bucket='/test_prefix/',access_key_id='SOMEKEYAAAAASADSAH*#',secret_access_key='SOMEsEcReTsd292v'}"`
For Amazon AWS S3, the key id and secret access key can be found in `~/.aws/credentials` if awscli was ever configured to work with the desired bucket, or on the AWS settings page for a given user. Also note that bucket names do not contain any protocol prefix when used on AWS.
For local S3 installations, refer to their documentation for name format and credentials.
@@ -154,6 +154,7 @@ or
[remote_storage]
bucket_name = 'some-sample-bucket'
bucket_region = 'eu-north-1'
prefix_in_bucket = '/test_prefix/'
access_key_id = 'SOMEKEYAAAAASADSAH*#'
secret_access_key = 'SOMEsEcReTsd292v'
```

View File

@@ -13,7 +13,7 @@ fn main() -> Result<()> {
.about("Dump contents of one layer file, for debugging")
.version(GIT_VERSION)
.arg(
Arg::with_name("path")
Arg::new("path")
.help("Path to file to dump")
.required(true)
.index(1),

View File

@@ -1,6 +1,6 @@
//! Main entry point for the Page Server executable.
use std::{env, path::Path, str::FromStr, thread};
use std::{env, path::Path, str::FromStr};
use tracing::*;
use zenith_utils::{auth::JwtAuth, logging, postgres_backend::AuthType, tcp_listener, GIT_VERSION};
@@ -12,7 +12,9 @@ use daemonize::Daemonize;
use pageserver::{
branches,
config::{defaults::*, PageServerConf},
http, page_cache, page_service, remote_storage, tenant_mgr, virtual_file, LOG_FILE_NAME,
http, page_cache, page_service, remote_storage, tenant_mgr, thread_mgr,
thread_mgr::ThreadKind,
virtual_file, LOG_FILE_NAME,
};
use zenith_utils::http::endpoint;
use zenith_utils::postgres_backend;
@@ -25,27 +27,27 @@ fn main() -> Result<()> {
.about("Materializes WAL stream to pages and serves them to the postgres")
.version(GIT_VERSION)
.arg(
Arg::with_name("daemonize")
.short("d")
Arg::new("daemonize")
.short('d')
.long("daemonize")
.takes_value(false)
.help("Run in the background"),
)
.arg(
Arg::with_name("init")
Arg::new("init")
.long("init")
.takes_value(false)
.help("Initialize pageserver repo"),
)
.arg(
Arg::with_name("workdir")
.short("D")
Arg::new("workdir")
.short('D')
.long("workdir")
.takes_value(true)
.help("Working directory for the pageserver"),
)
.arg(
Arg::with_name("create-tenant")
Arg::new("create-tenant")
.long("create-tenant")
.takes_value(true)
.help("Create tenant during init")
@@ -53,13 +55,13 @@ fn main() -> Result<()> {
)
// See `settings.md` for more details on the extra configuration parameters pageserver can process
.arg(
Arg::with_name("config-option")
.short("c")
Arg::new("config-override")
.short('c')
.takes_value(true)
.number_of_values(1)
.multiple(true)
.help("Additional configuration options or overrides of the ones from the toml config file.
Any option has to be a valid toml document, example: `-c \"foo='hey'\"` `-c \"foo={value=1}\"`"),
.multiple_occurrences(true)
.help("Additional configuration overrides of the ones from the toml config file (or new ones to add there).
Any option has to be a valid toml document, example: `-c=\"foo='hey'\"` `-c=\"foo={value=1}\"`"),
)
.get_matches();
@@ -105,7 +107,7 @@ fn main() -> Result<()> {
};
// Process any extra options given with -c
if let Some(values) = arg_matches.values_of("config-option") {
if let Some(values) = arg_matches.values_of("config-override") {
for option_line in values {
let doc = toml_edit::Document::from_str(option_line).with_context(|| {
format!(
@@ -113,7 +115,14 @@ fn main() -> Result<()> {
option_line
)
})?;
for (key, item) in doc.iter() {
if key == "id" {
anyhow::ensure!(
init,
"node id can only be set during pageserver init and cannot be overridden"
);
}
toml.insert(key, item.clone());
}
}
@@ -169,7 +178,7 @@ fn start_pageserver(conf: &'static PageServerConf, daemonize: bool) -> Result<()
);
let pageserver_listener = tcp_listener::bind(conf.listen_pg_addr.clone())?;
// XXX: Don't spawn any threads before daemonizing!
// NB: Don't spawn any threads before daemonizing!
if daemonize {
info!("daemonizing...");
@@ -195,15 +204,9 @@ fn start_pageserver(conf: &'static PageServerConf, daemonize: bool) -> Result<()
}
let signals = signals::install_shutdown_handlers()?;
let mut threads = Vec::new();
let sync_startup = remote_storage::start_local_timeline_sync(conf)
.context("Failed to set up local files sync with external storage")?;
if let Some(handle) = sync_startup.sync_loop_handle {
threads.push(handle);
}
// Initialize tenant manager.
tenant_mgr::set_timeline_states(conf, sync_startup.initial_timeline_states);
@@ -220,25 +223,27 @@ fn start_pageserver(conf: &'static PageServerConf, daemonize: bool) -> Result<()
// Spawn a new thread for the http endpoint
// bind before launching separate thread so the error reported before startup exits
let cloned = auth.clone();
threads.push(
thread::Builder::new()
.name("http_endpoint_thread".into())
.spawn(move || {
let router = http::make_router(conf, cloned);
endpoint::serve_thread_main(router, http_listener)
})?,
);
let auth_cloned = auth.clone();
thread_mgr::spawn(
ThreadKind::HttpEndpointListener,
None,
None,
"http_endpoint_thread",
move || {
let router = http::make_router(conf, auth_cloned);
endpoint::serve_thread_main(router, http_listener, thread_mgr::shutdown_watcher())
},
)?;
// Spawn a thread to listen for connections. It will spawn further threads
// Spawn a thread to listen for libpq connections. It will spawn further threads
// for each connection.
threads.push(
thread::Builder::new()
.name("Page Service thread".into())
.spawn(move || {
page_service::thread_main(conf, auth, pageserver_listener, conf.auth_type)
})?,
);
thread_mgr::spawn(
ThreadKind::LibpqEndpointListener,
None,
None,
"libpq endpoint thread",
move || page_service::thread_main(conf, auth, pageserver_listener, conf.auth_type),
)?;
signals.handle(|signal| match signal {
Signal::Quit => {
@@ -254,20 +259,38 @@ fn start_pageserver(conf: &'static PageServerConf, daemonize: bool) -> Result<()
"Got {}. Terminating gracefully in fast shutdown mode",
signal.name()
);
postgres_backend::set_pgbackend_shutdown_requested();
tenant_mgr::shutdown_all_tenants()?;
endpoint::shutdown();
for handle in std::mem::take(&mut threads) {
handle
.join()
.expect("thread panicked")
.expect("thread exited with an error");
}
info!("Shut down successfully completed");
std::process::exit(0);
shutdown_pageserver();
unreachable!()
}
})
}
fn shutdown_pageserver() {
// Shut down the libpq endpoint thread. This prevents new connections from
// being accepted.
thread_mgr::shutdown_threads(Some(ThreadKind::LibpqEndpointListener), None, None);
// Shut down any page service threads.
postgres_backend::set_pgbackend_shutdown_requested();
thread_mgr::shutdown_threads(Some(ThreadKind::PageRequestHandler), None, None);
// Shut down all the tenants. This flushes everything to disk and kills
// the checkpoint and GC threads.
tenant_mgr::shutdown_all_tenants();
// Stop syncing with remote storage.
//
// FIXME: Does this wait for the sync thread to finish syncing what's queued up?
// Should it?
thread_mgr::shutdown_threads(Some(ThreadKind::StorageSync), None, None);
// Shut down the HTTP endpoint last, so that you can still check the server's
// status while it's shutting down.
thread_mgr::shutdown_threads(Some(ThreadKind::HttpEndpointListener), None, None);
// There should be nothing left, but let's be sure
thread_mgr::shutdown_threads(None, None, None);
info!("Shut down successfully completed");
std::process::exit(0);
}

View File

@@ -0,0 +1,334 @@
//! A CLI helper to deal with remote storage (S3, usually) blobs as archives.
//! See [`compression`] for more details about the archives.
use std::{collections::BTreeSet, path::Path};
use anyhow::{bail, ensure, Context};
use clap::{App, Arg};
use pageserver::{
layered_repository::metadata::{TimelineMetadata, METADATA_FILE_NAME},
remote_storage::compression,
};
use tokio::{fs, io};
use zenith_utils::GIT_VERSION;
const LIST_SUBCOMMAND: &str = "list";
const ARCHIVE_ARG_NAME: &str = "archive";
const EXTRACT_SUBCOMMAND: &str = "extract";
const TARGET_DIRECTORY_ARG_NAME: &str = "target_directory";
const CREATE_SUBCOMMAND: &str = "create";
const SOURCE_DIRECTORY_ARG_NAME: &str = "source_directory";
#[tokio::main(flavor = "current_thread")]
async fn main() -> anyhow::Result<()> {
let arg_matches = App::new("pageserver zst blob [un]compressor utility")
.version(GIT_VERSION)
.subcommands(vec![
App::new(LIST_SUBCOMMAND)
.about("List the archive contents")
.arg(
Arg::new(ARCHIVE_ARG_NAME)
.required(true)
.takes_value(true)
.help("An archive to list the contents of"),
),
App::new(EXTRACT_SUBCOMMAND)
.about("Extracts the archive into the directory")
.arg(
Arg::new(ARCHIVE_ARG_NAME)
.required(true)
.takes_value(true)
.help("An archive to extract"),
)
.arg(
Arg::new(TARGET_DIRECTORY_ARG_NAME)
.required(false)
.takes_value(true)
.help("A directory to extract the archive into. Optional, will use the current directory if not specified"),
),
App::new(CREATE_SUBCOMMAND)
.about("Creates an archive with the contents of a directory (only the first level files are taken, metadata file has to be present in the same directory)")
.arg(
Arg::new(SOURCE_DIRECTORY_ARG_NAME)
.required(true)
.takes_value(true)
.help("A directory to use for creating the archive"),
)
.arg(
Arg::new(TARGET_DIRECTORY_ARG_NAME)
.required(false)
.takes_value(true)
.help("A directory to create the archive in. Optional, will use the current directory if not specified"),
),
])
.get_matches();
let subcommand_name = match arg_matches.subcommand_name() {
Some(name) => name,
None => bail!("No subcommand specified"),
};
let subcommand_matches = match arg_matches.subcommand_matches(subcommand_name) {
Some(matches) => matches,
None => bail!(
"No subcommand arguments were recognized for subcommand '{}'",
subcommand_name
),
};
let target_dir = Path::new(
subcommand_matches
.value_of(TARGET_DIRECTORY_ARG_NAME)
.unwrap_or("./"),
);
match subcommand_name {
LIST_SUBCOMMAND => {
let archive = match subcommand_matches.value_of(ARCHIVE_ARG_NAME) {
Some(archive) => Path::new(archive),
None => bail!("No '{}' argument is specified", ARCHIVE_ARG_NAME),
};
list_archive(archive).await
}
EXTRACT_SUBCOMMAND => {
let archive = match subcommand_matches.value_of(ARCHIVE_ARG_NAME) {
Some(archive) => Path::new(archive),
None => bail!("No '{}' argument is specified", ARCHIVE_ARG_NAME),
};
extract_archive(archive, target_dir).await
}
CREATE_SUBCOMMAND => {
let source_dir = match subcommand_matches.value_of(SOURCE_DIRECTORY_ARG_NAME) {
Some(source) => Path::new(source),
None => bail!("No '{}' argument is specified", SOURCE_DIRECTORY_ARG_NAME),
};
create_archive(source_dir, target_dir).await
}
unknown => bail!("Unknown subcommand {}", unknown),
}
}
async fn list_archive(archive: &Path) -> anyhow::Result<()> {
let archive = archive.canonicalize().with_context(|| {
format!(
"Failed to get the absolute path for the archive path '{}'",
archive.display()
)
})?;
ensure!(
archive.is_file(),
"Path '{}' is not an archive file",
archive.display()
);
println!("Listing an archive at path '{}'", archive.display());
let archive_name = match archive.file_name().and_then(|name| name.to_str()) {
Some(name) => name,
None => bail!(
"Failed to get the archive name from the path '{}'",
archive.display()
),
};
let archive_bytes = fs::read(&archive)
.await
.context("Failed to read the archive bytes")?;
let header = compression::read_archive_header(archive_name, &mut archive_bytes.as_slice())
.await
.context("Failed to read the archive header")?;
let empty_path = Path::new("");
println!("-------------------------------");
let longest_path_in_archive = header
.files
.iter()
.filter_map(|file| Some(file.subpath.as_path(empty_path).to_str()?.len()))
.max()
.unwrap_or_default()
.max(METADATA_FILE_NAME.len());
for regular_file in &header.files {
println!(
"File: {:width$} uncompressed size: {} bytes",
regular_file.subpath.as_path(empty_path).display(),
regular_file.size,
width = longest_path_in_archive,
)
}
println!(
"File: {:width$} uncompressed size: {} bytes",
METADATA_FILE_NAME,
header.metadata_file_size,
width = longest_path_in_archive,
);
println!("-------------------------------");
Ok(())
}
async fn extract_archive(archive: &Path, target_dir: &Path) -> anyhow::Result<()> {
let archive = archive.canonicalize().with_context(|| {
format!(
"Failed to get the absolute path for the archive path '{}'",
archive.display()
)
})?;
ensure!(
archive.is_file(),
"Path '{}' is not an archive file",
archive.display()
);
let archive_name = match archive.file_name().and_then(|name| name.to_str()) {
Some(name) => name,
None => bail!(
"Failed to get the archive name from the path '{}'",
archive.display()
),
};
if !target_dir.exists() {
fs::create_dir_all(target_dir).await.with_context(|| {
format!(
"Failed to create the target dir at path '{}'",
target_dir.display()
)
})?;
}
let target_dir = target_dir.canonicalize().with_context(|| {
format!(
"Failed to get the absolute path for the target dir path '{}'",
target_dir.display()
)
})?;
ensure!(
target_dir.is_dir(),
"Path '{}' is not a directory",
target_dir.display()
);
let mut dir_contents = fs::read_dir(&target_dir)
.await
.context("Failed to list the target directory contents")?;
let dir_entry = dir_contents
.next_entry()
.await
.context("Failed to list the target directory contents")?;
ensure!(
dir_entry.is_none(),
"Target directory '{}' is not empty",
target_dir.display()
);
println!(
"Extracting an archive at path '{}' into directory '{}'",
archive.display(),
target_dir.display()
);
let mut archive_file = fs::File::open(&archive).await.with_context(|| {
format!(
"Failed to get the archive name from the path '{}'",
archive.display()
)
})?;
let header = compression::read_archive_header(archive_name, &mut archive_file)
.await
.context("Failed to read the archive header")?;
compression::uncompress_with_header(&BTreeSet::new(), &target_dir, header, &mut archive_file)
.await
.context("Failed to extract the archive")
}
async fn create_archive(source_dir: &Path, target_dir: &Path) -> anyhow::Result<()> {
let source_dir = source_dir.canonicalize().with_context(|| {
format!(
"Failed to get the absolute path for the source dir path '{}'",
source_dir.display()
)
})?;
ensure!(
source_dir.is_dir(),
"Path '{}' is not a directory",
source_dir.display()
);
if !target_dir.exists() {
fs::create_dir_all(target_dir).await.with_context(|| {
format!(
"Failed to create the target dir at path '{}'",
target_dir.display()
)
})?;
}
let target_dir = target_dir.canonicalize().with_context(|| {
format!(
"Failed to get the absolute path for the target dir path '{}'",
target_dir.display()
)
})?;
ensure!(
target_dir.is_dir(),
"Path '{}' is not a directory",
target_dir.display()
);
println!(
"Compressing directory '{}' and creating resulting archive in directory '{}'",
source_dir.display(),
target_dir.display()
);
let mut metadata_file_contents = None;
let mut files_to_archive = Vec::new();
let mut source_dir_contents = fs::read_dir(&source_dir)
.await
.context("Failed to read the source directory contents")?;
while let Some(source_dir_entry) = source_dir_contents
.next_entry()
.await
.context("Failed to read a source dir entry")?
{
let entry_path = source_dir_entry.path();
if entry_path.is_file() {
if entry_path.file_name().and_then(|name| name.to_str()) == Some(METADATA_FILE_NAME) {
let metadata_bytes = fs::read(entry_path)
.await
.context("Failed to read metata file bytes in the source dir")?;
metadata_file_contents = Some(
TimelineMetadata::from_bytes(&metadata_bytes)
.context("Failed to parse metata file contents in the source dir")?,
);
} else {
files_to_archive.push(entry_path);
}
}
}
let metadata = match metadata_file_contents {
Some(metadata) => metadata,
None => bail!(
"No metadata file found in the source dir '{}', cannot create the archive",
source_dir.display()
),
};
let _ = compression::archive_files_as_stream(
&source_dir,
files_to_archive.iter(),
&metadata,
move |mut archive_streamer, archive_name| async move {
let archive_target = target_dir.join(&archive_name);
let mut archive_file = fs::File::create(&archive_target).await?;
io::copy(&mut archive_streamer, &mut archive_file).await?;
Ok(archive_target)
},
)
.await
.context("Failed to create an archive")?;
Ok(())
}

View File

@@ -14,20 +14,20 @@ fn main() -> Result<()> {
.about("Dump or update metadata file")
.version(GIT_VERSION)
.arg(
Arg::with_name("path")
Arg::new("path")
.help("Path to metadata file")
.required(true),
)
.arg(
Arg::with_name("disk_lsn")
.short("d")
Arg::new("disk_lsn")
.short('d')
.long("disk_lsn")
.takes_value(true)
.help("Replace disk constistent lsn"),
)
.arg(
Arg::with_name("prev_lsn")
.short("p")
Arg::new("prev_lsn")
.short('p')
.long("prev_lsn")
.takes_value(true)
.help("Previous record LSN"),

View File

@@ -4,7 +4,7 @@
// TODO: move all path construction to the conf impl
//
use anyhow::{anyhow, bail, Context, Result};
use anyhow::{bail, Context, Result};
use postgres_ffi::ControlFileData;
use serde::{Deserialize, Serialize};
use std::{
@@ -45,14 +45,16 @@ impl BranchInfo {
repo: &Arc<dyn Repository>,
include_non_incremental_logical_size: bool,
) -> Result<Self> {
let name = path
.as_ref()
.file_name()
.unwrap()
.to_str()
.unwrap()
.to_string();
let timeline_id = std::fs::read_to_string(path)?.parse::<ZTimelineId>()?;
let path = path.as_ref();
let name = path.file_name().unwrap().to_string_lossy().to_string();
let timeline_id = std::fs::read_to_string(path)
.with_context(|| {
format!(
"Failed to read branch file contents at path '{}'",
path.display()
)
})?
.parse::<ZTimelineId>()?;
let timeline = match repo.get_timeline(timeline_id)? {
RepositoryTimeline::Local(local_entry) => local_entry,
@@ -116,7 +118,7 @@ pub fn init_pageserver(conf: &'static PageServerConf, create_tenant: Option<&str
if let Some(tenantid) = create_tenant {
let tenantid = ZTenantId::from_str(tenantid)?;
println!("initializing tenantid {}", tenantid);
create_repo(conf, tenantid, dummy_redo_mgr).with_context(|| "failed to create repo")?;
create_repo(conf, tenantid, dummy_redo_mgr).context("failed to create repo")?;
}
crashsafe_dir::create_dir_all(conf.tenants_path())?;
@@ -195,7 +197,7 @@ fn run_initdb(conf: &'static PageServerConf, initdbpath: &Path) -> Result<()> {
.env("DYLD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.stdout(Stdio::null())
.output()
.with_context(|| "failed to execute initdb")?;
.context("failed to execute initdb")?;
if !initdb_output.status.success() {
anyhow::bail!(
"initdb failed: '{}'",
@@ -306,7 +308,7 @@ pub(crate) fn create_branch(
let timeline = repo
.get_timeline(startpoint.timelineid)?
.local_timeline()
.ok_or_else(|| anyhow!("Cannot branch off the timeline that's not present locally"))?;
.context("Cannot branch off the timeline that's not present locally")?;
if startpoint.lsn == Lsn(0) {
// Find end of WAL on the old timeline
let end_of_wal = timeline.get_last_record_lsn();
@@ -322,12 +324,13 @@ pub(crate) fn create_branch(
timeline.wait_lsn(startpoint.lsn)?;
}
startpoint.lsn = startpoint.lsn.align();
if timeline.get_start_lsn() > startpoint.lsn {
if timeline.get_ancestor_lsn() > startpoint.lsn {
// can we safely just branch from the ancestor instead?
anyhow::bail!(
"invalid startpoint {} for the branch {}: less than timeline start {}",
"invalid startpoint {} for the branch {}: less than timeline ancestor lsn {:?}",
startpoint.lsn,
branchname,
timeline.get_start_lsn()
timeline.get_ancestor_lsn()
);
}
@@ -381,14 +384,11 @@ fn parse_point_in_time(
let mut strings = s.split('@');
let name = strings.next().unwrap();
let lsn: Option<Lsn>;
if let Some(lsnstr) = strings.next() {
lsn = Some(
Lsn::from_str(lsnstr).with_context(|| "invalid LSN in point-in-time specification")?,
);
} else {
lsn = None
}
let lsn = strings
.next()
.map(Lsn::from_str)
.transpose()
.context("invalid LSN in point-in-time specification")?;
// Check if it's a tag
if lsn.is_none() {

View File

@@ -4,11 +4,11 @@
//! file, or on the command line.
//! See also `settings.md` for better description on every parameter.
use anyhow::{anyhow, bail, ensure, Context, Result};
use anyhow::{bail, ensure, Context, Result};
use toml_edit;
use toml_edit::{Document, Item};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use zenith_utils::zid::{ZNodeId, ZTenantId, ZTimelineId};
use std::convert::TryInto;
use std::env;
@@ -72,6 +72,10 @@ pub mod defaults {
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PageServerConf {
// Identifier of this particular pageserver, so that e.g. safekeepers
// can safely distinguish different pageservers
pub id: ZNodeId,
/// Example (default): 127.0.0.1:64000
pub listen_pg_addr: String,
/// Example (default): 127.0.0.1:9898
@@ -106,6 +110,184 @@ pub struct PageServerConf {
pub remote_storage_config: Option<RemoteStorageConfig>,
}
// Use a dedicated enum for the builder to better indicate the intent
// and avoid possible confusion with nested options
pub enum BuilderValue<T> {
Set(T),
NotSet,
}
impl<T> BuilderValue<T> {
pub fn ok_or<E>(self, err: E) -> Result<T, E> {
match self {
Self::Set(v) => Ok(v),
Self::NotSet => Err(err),
}
}
}
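The dedicated two-state enum, rather than Option<T>, lets a field whose value is itself an Option (such as remote_storage_config) distinguish "explicitly set to None" from "never set". A tiny illustrative sketch, not part of the diff, using the types above:

// Illustration only: with Option<Option<T>> the two cases below are easy to
// conflate; with BuilderValue they stay distinct.
let unset: BuilderValue<Option<RemoteStorageConfig>> = BuilderValue::NotSet;
let set_to_none: BuilderValue<Option<RemoteStorageConfig>> = BuilderValue::Set(None);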
// needed to simplify config construction
struct PageServerConfigBuilder {
listen_pg_addr: BuilderValue<String>,
listen_http_addr: BuilderValue<String>,
checkpoint_distance: BuilderValue<u64>,
checkpoint_period: BuilderValue<Duration>,
gc_horizon: BuilderValue<u64>,
gc_period: BuilderValue<Duration>,
superuser: BuilderValue<String>,
page_cache_size: BuilderValue<usize>,
max_file_descriptors: BuilderValue<usize>,
workdir: BuilderValue<PathBuf>,
pg_distrib_dir: BuilderValue<PathBuf>,
auth_type: BuilderValue<AuthType>,
//
auth_validation_public_key_path: BuilderValue<Option<PathBuf>>,
remote_storage_config: BuilderValue<Option<RemoteStorageConfig>>,
id: BuilderValue<ZNodeId>,
}
impl Default for PageServerConfigBuilder {
fn default() -> Self {
use self::BuilderValue::*;
use defaults::*;
Self {
listen_pg_addr: Set(DEFAULT_PG_LISTEN_ADDR.to_string()),
listen_http_addr: Set(DEFAULT_HTTP_LISTEN_ADDR.to_string()),
checkpoint_distance: Set(DEFAULT_CHECKPOINT_DISTANCE),
checkpoint_period: Set(humantime::parse_duration(DEFAULT_CHECKPOINT_PERIOD)
.expect("cannot parse default checkpoint period")),
gc_horizon: Set(DEFAULT_GC_HORIZON),
gc_period: Set(humantime::parse_duration(DEFAULT_GC_PERIOD)
.expect("cannot parse default gc period")),
superuser: Set(DEFAULT_SUPERUSER.to_string()),
page_cache_size: Set(DEFAULT_PAGE_CACHE_SIZE),
max_file_descriptors: Set(DEFAULT_MAX_FILE_DESCRIPTORS),
workdir: Set(PathBuf::new()),
pg_distrib_dir: Set(env::current_dir()
.expect("cannot access current directory")
.join("tmp_install")),
auth_type: Set(AuthType::Trust),
auth_validation_public_key_path: Set(None),
remote_storage_config: Set(None),
id: NotSet,
}
}
}
impl PageServerConfigBuilder {
pub fn listen_pg_addr(&mut self, listen_pg_addr: String) {
self.listen_pg_addr = BuilderValue::Set(listen_pg_addr)
}
pub fn listen_http_addr(&mut self, listen_http_addr: String) {
self.listen_http_addr = BuilderValue::Set(listen_http_addr)
}
pub fn checkpoint_distance(&mut self, checkpoint_distance: u64) {
self.checkpoint_distance = BuilderValue::Set(checkpoint_distance)
}
pub fn checkpoint_period(&mut self, checkpoint_period: Duration) {
self.checkpoint_period = BuilderValue::Set(checkpoint_period)
}
pub fn gc_horizon(&mut self, gc_horizon: u64) {
self.gc_horizon = BuilderValue::Set(gc_horizon)
}
pub fn gc_period(&mut self, gc_period: Duration) {
self.gc_period = BuilderValue::Set(gc_period)
}
pub fn superuser(&mut self, superuser: String) {
self.superuser = BuilderValue::Set(superuser)
}
pub fn page_cache_size(&mut self, page_cache_size: usize) {
self.page_cache_size = BuilderValue::Set(page_cache_size)
}
pub fn max_file_descriptors(&mut self, max_file_descriptors: usize) {
self.max_file_descriptors = BuilderValue::Set(max_file_descriptors)
}
pub fn workdir(&mut self, workdir: PathBuf) {
self.workdir = BuilderValue::Set(workdir)
}
pub fn pg_distrib_dir(&mut self, pg_distrib_dir: PathBuf) {
self.pg_distrib_dir = BuilderValue::Set(pg_distrib_dir)
}
pub fn auth_type(&mut self, auth_type: AuthType) {
self.auth_type = BuilderValue::Set(auth_type)
}
pub fn auth_validation_public_key_path(
&mut self,
auth_validation_public_key_path: Option<PathBuf>,
) {
self.auth_validation_public_key_path = BuilderValue::Set(auth_validation_public_key_path)
}
pub fn remote_storage_config(&mut self, remote_storage_config: Option<RemoteStorageConfig>) {
self.remote_storage_config = BuilderValue::Set(remote_storage_config)
}
pub fn id(&mut self, node_id: ZNodeId) {
self.id = BuilderValue::Set(node_id)
}
pub fn build(self) -> Result<PageServerConf> {
Ok(PageServerConf {
listen_pg_addr: self
.listen_pg_addr
.ok_or(anyhow::anyhow!("missing listen_pg_addr"))?,
listen_http_addr: self
.listen_http_addr
.ok_or(anyhow::anyhow!("missing listen_http_addr"))?,
checkpoint_distance: self
.checkpoint_distance
.ok_or(anyhow::anyhow!("missing checkpoint_distance"))?,
checkpoint_period: self
.checkpoint_period
.ok_or(anyhow::anyhow!("missing checkpoint_period"))?,
gc_horizon: self
.gc_horizon
.ok_or(anyhow::anyhow!("missing gc_horizon"))?,
gc_period: self.gc_period.ok_or(anyhow::anyhow!("missing gc_period"))?,
superuser: self.superuser.ok_or(anyhow::anyhow!("missing superuser"))?,
page_cache_size: self
.page_cache_size
.ok_or(anyhow::anyhow!("missing page_cache_size"))?,
max_file_descriptors: self
.max_file_descriptors
.ok_or(anyhow::anyhow!("missing max_file_descriptors"))?,
workdir: self.workdir.ok_or(anyhow::anyhow!("missing workdir"))?,
pg_distrib_dir: self
.pg_distrib_dir
.ok_or(anyhow::anyhow!("missing pg_distrib_dir"))?,
auth_type: self.auth_type.ok_or(anyhow::anyhow!("missing auth_type"))?,
auth_validation_public_key_path: self
.auth_validation_public_key_path
.ok_or(anyhow::anyhow!("missing auth_validation_public_key_path"))?,
remote_storage_config: self
.remote_storage_config
.ok_or(anyhow::anyhow!("missing remote_storage_config"))?,
id: self.id.ok_or(anyhow::anyhow!("missing id"))?,
})
}
}
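As a minimal sketch of how the builder is meant to be driven (parse_and_validate below does this for every recognized TOML key): all fields default to Set values except id, which must be supplied explicitly before build() succeeds.

// Sketch: `id` is the only field without a default, so it must be set,
// e.g. from the `id = 10` key in the pageserver TOML config.
let mut builder = PageServerConfigBuilder::default();
builder.id(ZNodeId(10));
let conf = builder.build()?; // fails with "missing id" if the call above is omitted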
/// External backup storage configuration, enough for creating a client for that storage.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct RemoteStorageConfig {
@@ -135,6 +317,8 @@ pub struct S3Config {
pub bucket_name: String,
/// The region where the bucket is located at.
pub bucket_region: String,
/// A "subfolder" in the bucket, to use the same bucket separately by multiple pageservers at once.
pub prefix_in_bucket: Option<String>,
/// "Login" to use when connecting to bucket.
/// Can be empty for cases like AWS k8s IAM
/// where we can allow certain pods to connect
@@ -142,6 +326,13 @@ pub struct S3Config {
pub access_key_id: Option<String>,
/// "Password" to use when connecting to bucket.
pub secret_access_key: Option<String>,
/// A base URL to send S3 requests to.
/// By default, the endpoint is derived from the region name, assuming it is
/// an AWS S3 region name; an unknown region name is an error.
/// Setting the endpoint explicitly allows supporting other S3 flavors and their regions.
///
/// Example: `http://127.0.0.1:5000`
pub endpoint: Option<String>,
}
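For reference, a hedged sketch of the corresponding TOML section (key names taken from the parsing code and the tests further down; values are placeholders, and the table form is assumed):

[remote_storage]
max_concurrent_sync = 111
max_sync_errors = 222
bucket_name = 'some-sample-bucket'
bucket_region = 'eu-north-1'
prefix_in_bucket = 'test_prefix'    # optional: lets pageservers share one bucket
access_key_id = 'PLACEHOLDER'       # optional, e.g. with AWS k8s IAM
secret_access_key = 'PLACEHOLDER'   # optional
endpoint = 'http://127.0.0.1:5000'  # optional: point at a non-AWS S3 flavor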
impl std::fmt::Debug for S3Config {
@@ -149,6 +340,7 @@ impl std::fmt::Debug for S3Config {
f.debug_struct("S3Config")
.field("bucket_name", &self.bucket_name)
.field("bucket_region", &self.bucket_region)
.field("prefix_in_bucket", &self.prefix_in_bucket)
.finish()
}
}
@@ -211,57 +403,39 @@ impl PageServerConf {
///
/// This leaves any options not present in the file in the built-in defaults.
pub fn parse_and_validate(toml: &Document, workdir: &Path) -> Result<Self> {
use defaults::*;
let mut conf = PageServerConf {
workdir: workdir.to_path_buf(),
listen_pg_addr: DEFAULT_PG_LISTEN_ADDR.to_string(),
listen_http_addr: DEFAULT_HTTP_LISTEN_ADDR.to_string(),
checkpoint_distance: DEFAULT_CHECKPOINT_DISTANCE,
checkpoint_period: humantime::parse_duration(DEFAULT_CHECKPOINT_PERIOD)?,
gc_horizon: DEFAULT_GC_HORIZON,
gc_period: humantime::parse_duration(DEFAULT_GC_PERIOD)?,
page_cache_size: DEFAULT_PAGE_CACHE_SIZE,
max_file_descriptors: DEFAULT_MAX_FILE_DESCRIPTORS,
pg_distrib_dir: PathBuf::new(),
auth_validation_public_key_path: None,
auth_type: AuthType::Trust,
remote_storage_config: None,
superuser: DEFAULT_SUPERUSER.to_string(),
};
let mut builder = PageServerConfigBuilder::default();
builder.workdir(workdir.to_owned());
for (key, item) in toml.iter() {
match key {
"listen_pg_addr" => conf.listen_pg_addr = parse_toml_string(key, item)?,
"listen_http_addr" => conf.listen_http_addr = parse_toml_string(key, item)?,
"checkpoint_distance" => conf.checkpoint_distance = parse_toml_u64(key, item)?,
"checkpoint_period" => conf.checkpoint_period = parse_toml_duration(key, item)?,
"gc_horizon" => conf.gc_horizon = parse_toml_u64(key, item)?,
"gc_period" => conf.gc_period = parse_toml_duration(key, item)?,
"initial_superuser_name" => conf.superuser = parse_toml_string(key, item)?,
"page_cache_size" => conf.page_cache_size = parse_toml_u64(key, item)? as usize,
"listen_pg_addr" => builder.listen_pg_addr(parse_toml_string(key, item)?),
"listen_http_addr" => builder.listen_http_addr(parse_toml_string(key, item)?),
"checkpoint_distance" => builder.checkpoint_distance(parse_toml_u64(key, item)?),
"checkpoint_period" => builder.checkpoint_period(parse_toml_duration(key, item)?),
"gc_horizon" => builder.gc_horizon(parse_toml_u64(key, item)?),
"gc_period" => builder.gc_period(parse_toml_duration(key, item)?),
"initial_superuser_name" => builder.superuser(parse_toml_string(key, item)?),
"page_cache_size" => builder.page_cache_size(parse_toml_u64(key, item)? as usize),
"max_file_descriptors" => {
conf.max_file_descriptors = parse_toml_u64(key, item)? as usize
builder.max_file_descriptors(parse_toml_u64(key, item)? as usize)
}
"pg_distrib_dir" => {
conf.pg_distrib_dir = PathBuf::from(parse_toml_string(key, item)?)
builder.pg_distrib_dir(PathBuf::from(parse_toml_string(key, item)?))
}
"auth_validation_public_key_path" => {
conf.auth_validation_public_key_path =
Some(PathBuf::from(parse_toml_string(key, item)?))
}
"auth_type" => conf.auth_type = parse_toml_auth_type(key, item)?,
"auth_validation_public_key_path" => builder.auth_validation_public_key_path(Some(
PathBuf::from(parse_toml_string(key, item)?),
)),
"auth_type" => builder.auth_type(parse_toml_auth_type(key, item)?),
"remote_storage" => {
conf.remote_storage_config = Some(Self::parse_remote_storage_config(item)?)
builder.remote_storage_config(Some(Self::parse_remote_storage_config(item)?))
}
"id" => builder.id(ZNodeId(parse_toml_u64(key, item)?)),
_ => bail!("unrecognized pageserver option '{}'", key),
}
}
let mut conf = builder.build().context("invalid config")?;
if conf.auth_type == AuthType::ZenithJWT {
let auth_validation_public_key_path = conf
.auth_validation_public_key_path
@@ -275,9 +449,6 @@ impl PageServerConf {
);
}
if conf.pg_distrib_dir == PathBuf::new() {
conf.pg_distrib_dir = env::current_dir()?.join("tmp_install")
};
if !conf.pg_distrib_dir.join("bin/postgres").exists() {
bail!(
"Can't find postgres binary at {}",
@@ -303,9 +474,7 @@ impl PageServerConf {
})
.ok()
.and_then(NonZeroUsize::new)
.ok_or_else(|| {
anyhow!("'max_concurrent_sync' must be a non-zero positive integer")
})?
.context("'max_concurrent_sync' must be a non-zero positive integer")?
} else {
NonZeroUsize::new(defaults::DEFAULT_REMOTE_STORAGE_MAX_CONCURRENT_SYNC).unwrap()
};
@@ -318,7 +487,7 @@ impl PageServerConf {
})
.ok()
.and_then(NonZeroU32::new)
.ok_or_else(|| anyhow!("'max_sync_errors' must be a non-zero positive integer"))?
.context("'max_sync_errors' must be a non-zero positive integer")?
} else {
NonZeroU32::new(defaults::DEFAULT_REMOTE_STORAGE_MAX_SYNC_ERRORS).unwrap()
};
@@ -332,18 +501,30 @@ impl PageServerConf {
bail!("'bucket_name' option is mandatory if 'bucket_region' is given ")
}
(None, Some(bucket_name), Some(bucket_region)) => RemoteStorageKind::AwsS3(S3Config {
bucket_name: bucket_name.as_str().unwrap().to_string(),
bucket_region: bucket_region.as_str().unwrap().to_string(),
bucket_name: parse_toml_string("bucket_name", bucket_name)?,
bucket_region: parse_toml_string("bucket_region", bucket_region)?,
access_key_id: toml
.get("access_key_id")
.map(|x| x.as_str().unwrap().to_string()),
.map(|access_key_id| parse_toml_string("access_key_id", access_key_id))
.transpose()?,
secret_access_key: toml
.get("secret_access_key")
.map(|x| x.as_str().unwrap().to_string()),
.map(|secret_access_key| {
parse_toml_string("secret_access_key", secret_access_key)
})
.transpose()?,
prefix_in_bucket: toml
.get("prefix_in_bucket")
.map(|prefix_in_bucket| parse_toml_string("prefix_in_bucket", prefix_in_bucket))
.transpose()?,
endpoint: toml
.get("endpoint")
.map(|endpoint| parse_toml_string("endpoint", endpoint))
.transpose()?,
}),
(Some(local_path), None, None) => {
RemoteStorageKind::LocalFs(PathBuf::from(local_path.as_str().unwrap()))
}
(Some(local_path), None, None) => RemoteStorageKind::LocalFs(PathBuf::from(
parse_toml_string("local_path", local_path)?,
)),
(Some(_), Some(_), _) => bail!("local_path and bucket_name are mutually exclusive"),
};
@@ -362,6 +543,7 @@ impl PageServerConf {
#[cfg(test)]
pub fn dummy_conf(repo_dir: PathBuf) -> Self {
PageServerConf {
id: ZNodeId(0),
checkpoint_distance: defaults::DEFAULT_CHECKPOINT_DISTANCE,
checkpoint_period: Duration::from_secs(10),
gc_horizon: defaults::DEFAULT_GC_HORIZON,
@@ -385,7 +567,7 @@ impl PageServerConf {
fn parse_toml_string(name: &str, item: &Item) -> Result<String> {
let s = item
.as_str()
.ok_or_else(|| anyhow!("configure option {} is not a string", name))?;
.with_context(|| format!("configure option {} is not a string", name))?;
Ok(s.to_string())
}
@@ -394,7 +576,7 @@ fn parse_toml_u64(name: &str, item: &Item) -> Result<u64> {
// for our use, though.
let i: i64 = item
.as_integer()
.ok_or_else(|| anyhow!("configure option {} is not an integer", name))?;
.with_context(|| format!("configure option {} is not an integer", name))?;
if i < 0 {
bail!("configure option {} cannot be negative", name);
}
@@ -404,7 +586,7 @@ fn parse_toml_u64(name: &str, item: &Item) -> Result<u64> {
fn parse_toml_duration(name: &str, item: &Item) -> Result<Duration> {
let s = item
.as_str()
.ok_or_else(|| anyhow!("configure option {} is not a string", name))?;
.with_context(|| format!("configure option {} is not a string", name))?;
Ok(humantime::parse_duration(s)?)
}
@@ -412,7 +594,7 @@ fn parse_toml_duration(name: &str, item: &Item) -> Result<Duration> {
fn parse_toml_auth_type(name: &str, item: &Item) -> Result<AuthType> {
let v = item
.as_str()
.ok_or_else(|| anyhow!("configure option {} is not a string", name))?;
.with_context(|| format!("configure option {} is not a string", name))?;
AuthType::from_str(v)
}
@@ -441,15 +623,16 @@ max_file_descriptors = 333
# initial superuser role name to use when creating a new tenant
initial_superuser_name = 'zzzz'
id = 10
"#;
"#;
#[test]
fn parse_defaults() -> anyhow::Result<()> {
let tempdir = tempdir()?;
let (workdir, pg_distrib_dir) = prepare_fs(&tempdir)?;
// we have to create dummy paths to overcome the validation errors
let config_string = format!("pg_distrib_dir='{}'", pg_distrib_dir.display());
let config_string = format!("pg_distrib_dir='{}'\nid=10", pg_distrib_dir.display());
let toml = config_string.parse()?;
let parsed_config =
@@ -460,6 +643,7 @@ initial_superuser_name = 'zzzz'
assert_eq!(
parsed_config,
PageServerConf {
id: ZNodeId(10),
listen_pg_addr: defaults::DEFAULT_PG_LISTEN_ADDR.to_string(),
listen_http_addr: defaults::DEFAULT_HTTP_LISTEN_ADDR.to_string(),
checkpoint_distance: defaults::DEFAULT_CHECKPOINT_DISTANCE,
@@ -501,6 +685,7 @@ initial_superuser_name = 'zzzz'
assert_eq!(
parsed_config,
PageServerConf {
id: ZNodeId(10),
listen_pg_addr: "127.0.0.1:64000".to_string(),
listen_http_addr: "127.0.0.1:9898".to_string(),
checkpoint_distance: 111,
@@ -585,8 +770,10 @@ pg_distrib_dir='{}'
let bucket_name = "some-sample-bucket".to_string();
let bucket_region = "eu-north-1".to_string();
let prefix_in_bucket = "test_prefix".to_string();
let access_key_id = "SOMEKEYAAAAASADSAH*#".to_string();
let secret_access_key = "SOMEsEcReTsd292v".to_string();
let endpoint = "http://localhost:5000".to_string();
let max_concurrent_sync = NonZeroUsize::new(111).unwrap();
let max_sync_errors = NonZeroU32::new(222).unwrap();
@@ -597,13 +784,15 @@ max_concurrent_sync = {}
max_sync_errors = {}
bucket_name = '{}'
bucket_region = '{}'
prefix_in_bucket = '{}'
access_key_id = '{}'
secret_access_key = '{}'"#,
max_concurrent_sync, max_sync_errors, bucket_name, bucket_region, access_key_id, secret_access_key
secret_access_key = '{}'
endpoint = '{}'"#,
max_concurrent_sync, max_sync_errors, bucket_name, bucket_region, prefix_in_bucket, access_key_id, secret_access_key, endpoint
),
format!(
"remote_storage={{max_concurrent_sync = {}, max_sync_errors = {}, bucket_name='{}', bucket_region='{}', access_key_id='{}', secret_access_key='{}'}}",
max_concurrent_sync, max_sync_errors, bucket_name, bucket_region, access_key_id, secret_access_key
"remote_storage={{max_concurrent_sync={}, max_sync_errors={}, bucket_name='{}', bucket_region='{}', prefix_in_bucket='{}', access_key_id='{}', secret_access_key='{}', endpoint='{}'}}",
max_concurrent_sync, max_sync_errors, bucket_name, bucket_region, prefix_in_bucket, access_key_id, secret_access_key, endpoint
),
];
@@ -637,6 +826,8 @@ pg_distrib_dir='{}'
bucket_region: bucket_region.clone(),
access_key_id: Some(access_key_id.clone()),
secret_access_key: Some(secret_access_key.clone()),
prefix_in_bucket: Some(prefix_in_bucket.clone()),
endpoint: Some(endpoint.clone())
}),
},
"Remote storage config should correctly parse the S3 config"

View File

@@ -1,6 +1,7 @@
use serde::{Deserialize, Serialize};
use crate::ZTenantId;
use zenith_utils::zid::ZNodeId;
#[derive(Serialize, Deserialize)]
pub struct BranchCreateRequest {
@@ -15,3 +16,8 @@ pub struct TenantCreateRequest {
#[serde(with = "hex")]
pub tenant_id: ZTenantId,
}
#[derive(Serialize)]
pub struct StatusResponse {
pub id: ZNodeId,
}

View File

@@ -17,6 +17,11 @@ paths:
application/json:
schema:
type: object
required:
- id
properties:
id:
type: integer
/v1/timeline/{tenant_id}:
parameters:
- name: tenant_id
@@ -234,9 +239,7 @@ paths:
content:
application/json:
schema:
type: array
items:
$ref: "#/components/schemas/BranchInfo"
$ref: "#/components/schemas/BranchInfo"
"400":
description: Malformed branch create request
content:
@@ -370,12 +373,15 @@ components:
format: hex
ancestor_id:
type: string
format: hex
ancestor_lsn:
type: string
current_logical_size:
type: integer
current_logical_size_non_incremental:
type: integer
latest_valid_lsn:
type: integer
TimelineInfo:
type: object
required:

View File

@@ -1,10 +1,8 @@
use std::sync::Arc;
use anyhow::{bail, Context, Result};
use hyper::header;
use anyhow::{Context, Result};
use hyper::StatusCode;
use hyper::{Body, Request, Response, Uri};
use routerify::{ext::RequestExt, RouterBuilder};
use serde::Serialize;
use tracing::*;
use zenith_utils::auth::JwtAuth;
@@ -19,12 +17,15 @@ use zenith_utils::http::{
request::get_request_param,
request::parse_request_param,
};
use zenith_utils::http::{RequestExt, RouterBuilder};
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::{opt_display_serde, ZTimelineId};
use super::models::BranchCreateRequest;
use super::models::StatusResponse;
use super::models::TenantCreateRequest;
use crate::branches::BranchInfo;
use crate::repository::RepositoryTimeline;
use crate::repository::TimelineSyncState;
use crate::{branches, config::PageServerConf, tenant_mgr, ZTenantId};
@@ -63,12 +64,12 @@ fn get_config(request: &Request<Body>) -> &'static PageServerConf {
}
// healthcheck handler
async fn status_handler(_: Request<Body>) -> Result<Response<Body>, ApiError> {
Ok(Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, "application/json")
.body(Body::from("{}"))
.map_err(ApiError::from_err)?)
async fn status_handler(request: Request<Body>) -> Result<Response<Body>, ApiError> {
let config = get_config(&request);
Ok(json_response(
StatusCode::OK,
StatusResponse { id: config.id },
)?)
}
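Assuming ZNodeId serializes as a plain integer (which is what the OpenAPI schema above declares for the status response), the healthcheck body now looks roughly like:

{"id": 10}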
async fn branch_create_handler(mut request: Request<Body>) -> Result<Response<Body>, ApiError> {
@@ -190,18 +191,26 @@ async fn timeline_list_handler(request: Request<Body>) -> Result<Response<Body>,
}
#[derive(Debug, Serialize)]
struct TimelineInfo {
#[serde(with = "hex")]
timeline_id: ZTimelineId,
#[serde(with = "hex")]
tenant_id: ZTenantId,
#[serde(with = "opt_display_serde")]
ancestor_timeline_id: Option<ZTimelineId>,
last_record_lsn: Lsn,
prev_record_lsn: Lsn,
start_lsn: Lsn,
disk_consistent_lsn: Lsn,
timeline_state: Option<TimelineSyncState>,
#[serde(tag = "type")]
enum TimelineInfo {
Local {
#[serde(with = "hex")]
timeline_id: ZTimelineId,
#[serde(with = "hex")]
tenant_id: ZTenantId,
#[serde(with = "opt_display_serde")]
ancestor_timeline_id: Option<ZTimelineId>,
last_record_lsn: Lsn,
prev_record_lsn: Lsn,
disk_consistent_lsn: Lsn,
timeline_state: Option<TimelineSyncState>,
},
Remote {
#[serde(with = "hex")]
timeline_id: ZTimelineId,
#[serde(with = "hex")]
tenant_id: ZTenantId,
},
}
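With serde's internally tagged representation (#[serde(tag = "type")]), each variant gains a "type" discriminator. A rough sketch of the Remote variant's JSON, with placeholder hex ids (the Local variant additionally carries the LSN and state fields):

{"type": "Remote", "timeline_id": "<hex>", "tenant_id": "<hex>"}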
async fn timeline_detail_handler(request: Request<Body>) -> Result<Response<Body>, ApiError> {
@@ -215,19 +224,21 @@ async fn timeline_detail_handler(request: Request<Body>) -> Result<Response<Body
info_span!("timeline_detail_handler", tenant = %tenant_id, timeline = %timeline_id)
.entered();
let repo = tenant_mgr::get_repository_for_tenant(tenant_id)?;
match repo.get_timeline(timeline_id)?.local_timeline() {
None => bail!("Timeline with id {} is not present locally", timeline_id),
Some(timeline) => Ok::<_, anyhow::Error>(TimelineInfo {
Ok::<_, anyhow::Error>(match repo.get_timeline(timeline_id)?.local_timeline() {
None => TimelineInfo::Remote {
timeline_id,
tenant_id,
},
Some(timeline) => TimelineInfo::Local {
timeline_id,
tenant_id,
ancestor_timeline_id: timeline.get_ancestor_timeline_id(),
disk_consistent_lsn: timeline.get_disk_consistent_lsn(),
last_record_lsn: timeline.get_last_record_lsn(),
prev_record_lsn: timeline.get_prev_record_lsn(),
start_lsn: timeline.get_start_lsn(),
timeline_state: repo.get_timeline_state(timeline_id),
}),
}
},
})
})
.await
.map_err(ApiError::from_err)??;
@@ -235,6 +246,58 @@ async fn timeline_detail_handler(request: Request<Body>) -> Result<Response<Body
Ok(json_response(StatusCode::OK, response_data)?)
}
async fn timeline_attach_handler(request: Request<Body>) -> Result<Response<Body>, ApiError> {
let tenant_id: ZTenantId = parse_request_param(&request, "tenant_id")?;
check_permission(&request, Some(tenant_id))?;
let timeline_id: ZTimelineId = parse_request_param(&request, "timeline_id")?;
tokio::task::spawn_blocking(move || {
let _enter =
info_span!("timeline_attach_handler", tenant = %tenant_id, timeline = %timeline_id)
.entered();
let repo = tenant_mgr::get_repository_for_tenant(tenant_id)?;
match repo.get_timeline(timeline_id)? {
RepositoryTimeline::Local(_) => {
anyhow::bail!("Timeline with id {} is already local", timeline_id)
}
RepositoryTimeline::Remote {
id: _,
disk_consistent_lsn: _,
} => {
// FIXME (rodionov) get timeline already schedules timeline for download, and duplicate tasks can cause errors
// first should be fixed in https://github.com/zenithdb/zenith/issues/997
// TODO (rodionov) change timeline state to 'awaiting download' (encapsulate it somewhere in the repo)
// TODO (rodionov) can we safely request replication on the timeline before sync is completed? (can be implemented on top of the #997)
Ok(())
}
}
})
.await
.map_err(ApiError::from_err)??;
Ok(json_response(StatusCode::ACCEPTED, ())?)
}
async fn timeline_detach_handler(request: Request<Body>) -> Result<Response<Body>, ApiError> {
let tenant_id: ZTenantId = parse_request_param(&request, "tenant_id")?;
check_permission(&request, Some(tenant_id))?;
let timeline_id: ZTimelineId = parse_request_param(&request, "timeline_id")?;
tokio::task::spawn_blocking(move || {
let _enter =
info_span!("timeline_detach_handler", tenant = %tenant_id, timeline = %timeline_id)
.entered();
let repo = tenant_mgr::get_repository_for_tenant(tenant_id)?;
repo.detach_timeline(timeline_id)
})
.await
.map_err(ApiError::from_err)??;
Ok(json_response(StatusCode::OK, ())?)
}
async fn tenant_list_handler(request: Request<Body>) -> Result<Response<Body>, ApiError> {
// check for management permission
check_permission(&request, None)?;
@@ -255,13 +318,13 @@ async fn tenant_create_handler(mut request: Request<Body>) -> Result<Response<Bo
let request_data: TenantCreateRequest = json_request(&mut request).await?;
let response_data = tokio::task::spawn_blocking(move || {
tokio::task::spawn_blocking(move || {
let _enter = info_span!("tenant_create", tenant = %request_data.tenant_id).entered();
tenant_mgr::create_repository_for_tenant(get_config(&request), request_data.tenant_id)
})
.await
.map_err(ApiError::from_err)??;
Ok(json_response(StatusCode::CREATED, response_data)?)
Ok(json_response(StatusCode::CREATED, ())?)
}
async fn handler_404(_: Request<Body>) -> Result<Response<Body>, ApiError> {
@@ -296,6 +359,14 @@ pub fn make_router(
"/v1/timeline/:tenant_id/:timeline_id",
timeline_detail_handler,
)
.post(
"/v1/timeline/:tenant_id/:timeline_id/attach",
timeline_attach_handler,
)
.post(
"/v1/timeline/:tenant_id/:timeline_id/detach",
timeline_detach_handler,
)
.get("/v1/branch/:tenant_id", branch_list_handler)
.get("/v1/branch/:tenant_id/:branch_name", branch_detail_handler)
.post("/v1/branch", branch_create_handler)

View File

@@ -7,7 +7,7 @@ use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::path::{Path, PathBuf};
use anyhow::{anyhow, bail, ensure, Result};
use anyhow::{bail, ensure, Context, Result};
use bytes::Bytes;
use tracing::*;
@@ -126,7 +126,7 @@ pub fn import_timeline_from_postgres_datadir(
writer.advance_last_record_lsn(lsn);
// We expect the Postgres server to be shut down cleanly.
let pg_control = pg_control.ok_or_else(|| anyhow!("pg_control file not found"))?;
let pg_control = pg_control.context("pg_control file not found")?;
ensure!(
pg_control.state == DBState_DB_SHUTDOWNED,
"Postgres cluster was not shut down cleanly"

View File

@@ -11,7 +11,7 @@
//! parent timeline, and the last LSN that has been written to disk.
//!
use anyhow::{anyhow, bail, ensure, Context, Result};
use anyhow::{bail, ensure, Context, Result};
use bookfile::Book;
use bytes::Bytes;
use lazy_static::lazy_static;
@@ -28,7 +28,7 @@ use std::io::Write;
use std::ops::{Bound::Included, Deref};
use std::path::{Path, PathBuf};
use std::sync::atomic::{self, AtomicBool, AtomicUsize};
use std::sync::{Arc, Mutex, MutexGuard};
use std::sync::{Arc, Mutex, MutexGuard, RwLock, RwLockReadGuard};
use std::time::{Duration, Instant};
use self::metadata::{metadata_path, TimelineMetadata, METADATA_FILE_NAME};
@@ -40,8 +40,8 @@ use crate::repository::{
BlockNumber, GcResult, Repository, RepositoryTimeline, Timeline, TimelineSyncState,
TimelineWriter, ZenithWalRecord,
};
use crate::tenant_mgr;
use crate::walreceiver;
use crate::thread_mgr;
use crate::virtual_file::VirtualFile;
use crate::walreceiver::IS_WAL_RECEIVER;
use crate::walredo::WalRedoManager;
use crate::CheckpointConfig;
@@ -71,7 +71,6 @@ mod storage_layer;
use delta_layer::DeltaLayer;
use ephemeral_file::is_ephemeral_file;
use filename::{DeltaFileName, ImageFileName};
use global_layer_map::{LayerId, GLOBAL_LAYER_MAP};
use image_layer::ImageLayer;
use inmemory_layer::InMemoryLayer;
use layer_map::LayerMap;
@@ -127,7 +126,13 @@ pub struct LayeredRepository {
conf: &'static PageServerConf,
tenantid: ZTenantId,
timelines: Mutex<HashMap<ZTimelineId, LayeredTimelineEntry>>,
// This mutex prevents creation of new timelines during GC.
// Adding yet another mutex (in addition to `timelines`) is needed because holding the
// `timelines` mutex for the whole GC iteration (especially with a forced checkpoint)
// could block `get_timeline`, `get_timelines_state`, and other timeline operations
// for a long time, which in turn could cause replication connections to be dropped,
// wait_for_lsn timeouts to expire, and so on.
gc_cs: Mutex<()>,
walredo_mgr: Arc<dyn WalRedoManager + Send + Sync>,
/// Makes every timeline to backup their files to remote storage.
upload_relishes: bool,
@@ -161,7 +166,7 @@ impl Repository for LayeredRepository {
// Create the timeline directory, and write initial metadata to file.
crashsafe_dir::create_dir_all(self.conf.timeline_path(&timelineid, &self.tenantid))?;
let metadata = TimelineMetadata::new(Lsn(0), None, None, Lsn(0), Lsn(0), initdb_lsn);
let metadata = TimelineMetadata::new(Lsn(0), None, None, Lsn(0), initdb_lsn, initdb_lsn);
Self::save_metadata(self.conf, timelineid, self.tenantid, &metadata, true)?;
let timeline = LayeredTimeline::new(
@@ -186,6 +191,8 @@ impl Repository for LayeredRepository {
// We need to hold this lock to prevent GC from starting at the same time. GC scans the directory to learn
// about timelines, so otherwise a race condition is possible, where we create new timeline and GC
// concurrently removes data that is needed by the new timeline.
let _gc_cs = self.gc_cs.lock().unwrap();
let mut timelines = self.timelines.lock().unwrap();
let src_timeline = match self.get_or_init_timeline(src, &mut timelines)? {
LayeredTimelineEntry::Local(timeline) => timeline,
@@ -193,9 +200,10 @@ impl Repository for LayeredRepository {
bail!("Cannot branch off the timeline {} that's not local", src)
}
};
let latest_gc_cutoff_lsn = src_timeline.get_latest_gc_cutoff_lsn();
src_timeline
.check_lsn_is_in_scope(start_lsn)
.check_lsn_is_in_scope(start_lsn, &latest_gc_cutoff_lsn)
.context("invalid branch start lsn")?;
let RecordLsn {
@@ -223,7 +231,7 @@ impl Repository for LayeredRepository {
dst_prev,
Some(src),
start_lsn,
src_timeline.latest_gc_cutoff_lsn.load(),
*src_timeline.latest_gc_cutoff_lsn.read().unwrap(),
src_timeline.initdb_lsn,
);
crashsafe_dir::create_dir_all(self.conf.timeline_path(&dst, &self.tenantid))?;
@@ -277,15 +285,42 @@ impl Repository for LayeredRepository {
Ok(())
}
// Wait for all threads to complete and persist repository data before pageserver shutdown.
fn shutdown(&self) -> Result<()> {
trace!("LayeredRepository shutdown for tenant {}", self.tenantid);
// Detaches the timeline from the repository.
fn detach_timeline(&self, timeline_id: ZTimelineId) -> Result<()> {
let mut timelines = self.timelines.lock().unwrap();
match timelines.entry(timeline_id) {
Entry::Vacant(_) => {
bail!("cannot detach non existing timeline");
}
Entry::Occupied(mut entry) => {
let timeline_entry = entry.get_mut();
let timelines = self.timelines.lock().unwrap();
for (timelineid, timeline) in timelines.iter() {
shutdown_timeline(self.tenantid, *timelineid, timeline)?;
}
let timeline = match timeline_entry {
LayeredTimelineEntry::Remote { .. } => {
bail!("cannot detach remote timeline {}", timeline_id);
}
LayeredTimelineEntry::Local(timeline) => timeline,
};
// TODO (rodionov) keep local state in timeline itself (refactoring related to https://github.com/zenithdb/zenith/issues/997 and #1104)
// FIXME this is the local disk consistent lsn; we need to keep the latest successfully uploaded checkpoint lsn in the timeline (metadata?)
// https://github.com/zenithdb/zenith/issues/1104
let remote_disk_consistent_lsn = timeline.disk_consistent_lsn.load();
// reference to timeline is dropped here
entry.insert(LayeredTimelineEntry::Remote {
id: timeline_id,
disk_consistent_lsn: remote_disk_consistent_lsn,
});
}
};
// Release the lock to shutdown and remove the files without holding it
drop(timelines);
// shutdown the timeline (this shuts down the walreceiver)
thread_mgr::shutdown_threads(None, Some(self.tenantid), Some(timeline_id));
// remove timeline files (maybe avoid this for ease of debugging if something goes wrong)
fs::remove_dir_all(self.conf.timeline_path(&timeline_id, &self.tenantid))?;
Ok(())
}
@@ -298,9 +333,13 @@ impl Repository for LayeredRepository {
timeline_id: ZTimelineId,
new_state: TimelineSyncState,
) -> Result<()> {
debug!(
"set_timeline_state: timeline_id: {}, new_state: {:?}",
timeline_id, new_state
);
let mut timelines_accessor = self.timelines.lock().unwrap();
let timeline_to_shutdown = match new_state {
match new_state {
TimelineSyncState::Ready(_) => {
let reloaded_timeline =
self.init_local_timeline(timeline_id, &mut timelines_accessor)?;
@@ -318,12 +357,9 @@ impl Repository for LayeredRepository {
},
),
};
// NOTE we do not delete local data in case timeline became cloud only, this is performed in detach_timeline
drop(timelines_accessor);
if let Some(timeline) = timeline_to_shutdown {
shutdown_timeline(self.tenantid, timeline_id, &timeline)?;
}
Ok(())
}
@@ -349,30 +385,6 @@ impl Repository for LayeredRepository {
}
}
fn shutdown_timeline(
tenant_id: ZTenantId,
timeline_id: ZTimelineId,
timeline: &LayeredTimelineEntry,
) -> Result<(), anyhow::Error> {
match timeline {
LayeredTimelineEntry::Local(timeline) => {
timeline
.upload_relishes
.store(false, atomic::Ordering::Relaxed);
walreceiver::stop_wal_receiver(timeline_id);
trace!("repo shutdown. checkpoint timeline {}", timeline_id);
// Do not reconstruct pages to reduce shutdown time
timeline.checkpoint(CheckpointConfig::Flush)?;
//TODO Wait for walredo process to shutdown too
}
LayeredTimelineEntry::Remote { .. } => warn!(
"Skipping shutdown of a remote timeline {} for tenant {}",
timeline_id, tenant_id
),
}
Ok(())
}
#[derive(Clone)]
enum LayeredTimelineEntry {
Local(Arc<LayeredTimeline>),
@@ -489,6 +501,7 @@ impl LayeredRepository {
tenantid,
conf,
timelines: Mutex::new(HashMap::new()),
gc_cs: Mutex::new(()),
walredo_mgr,
upload_relishes,
}
@@ -505,10 +518,10 @@ impl LayeredRepository {
let _enter = info_span!("saving metadata").entered();
let path = metadata_path(conf, timelineid, tenantid);
// use OpenOptions to ensure file presence is consistent with first_save
let mut file = OpenOptions::new()
.write(true)
.create_new(first_save)
.open(&path)?;
let mut file = VirtualFile::open_with_options(
&path,
OpenOptions::new().write(true).create_new(first_save),
)?;
let metadata_bytes = data.to_bytes().context("Failed to get metadata bytes")?;
@@ -575,7 +588,8 @@ impl LayeredRepository {
let now = Instant::now();
// grab mutex to prevent new timelines from being created here.
// TODO: We will hold it for a long time
let _gc_cs = self.gc_cs.lock().unwrap();
let mut timelines = self.timelines.lock().unwrap();
// Scan all timelines. For each timeline, remember the timeline ID and
@@ -597,7 +611,7 @@ impl LayeredRepository {
}
}
//Now collect info about branchpoints
// Now collect info about branchpoints
let mut all_branchpoints: BTreeSet<(ZTimelineId, Lsn)> = BTreeSet::new();
for &timelineid in &timelineids {
let timeline = match self.get_or_init_timeline(timelineid, &mut timelines)? {
@@ -641,8 +655,10 @@ impl LayeredRepository {
// Ok, we now know all the branch points.
// Perform GC for each timeline.
for timelineid in timelineids {
if tenant_mgr::shutdown_requested() {
return Ok(totals);
if thread_mgr::is_shutdown_requested() {
// We were requested to shut down. Stop and return with the progress we
// made.
break;
}
// We have already loaded all timelines above
@@ -663,6 +679,7 @@ impl LayeredRepository {
}
if let Some(cutoff) = timeline.get_last_record_lsn().checked_sub(horizon) {
drop(timelines);
let branchpoints: Vec<Lsn> = all_branchpoints
.range((
Included((timelineid, Lsn(0))),
@@ -678,10 +695,10 @@ impl LayeredRepository {
timeline.checkpoint(CheckpointConfig::Forced)?;
info!("timeline {} checkpoint_before_gc done", timelineid);
}
let result = timeline.gc_timeline(branchpoints, cutoff)?;
totals += result;
timelines = self.timelines.lock().unwrap();
}
}
@@ -759,8 +776,14 @@ pub struct LayeredTimeline {
/// to avoid deadlock.
write_lock: Mutex<()>,
// Prevent concurrent checkpoints.
// Checkpoints are normally performed by one thread. But a checkpoint can also be manually
// requested by an admin (that's used in tests), and shutdown also forces a checkpoint. These forced
// checkpoints run in a different thread and could be triggered at the same time as a normal checkpoint.
checkpoint_cs: Mutex<()>,
// Needed to ensure that we can't create a branch at a point that was already garbage collected
latest_gc_cutoff_lsn: AtomicLsn,
latest_gc_cutoff_lsn: RwLock<Lsn>,
// It may change across major versions so for simplicity
// keep it after running initdb for a timeline.
@@ -804,6 +827,10 @@ impl Timeline for LayeredTimeline {
Ok(())
}
fn get_latest_gc_cutoff_lsn(&self) -> RwLockReadGuard<Lsn> {
self.latest_gc_cutoff_lsn.read().unwrap()
}
/// Look up given page version.
fn get_page_at_lsn(&self, rel: RelishTag, rel_blknum: BlockNumber, lsn: Lsn) -> Result<Bytes> {
if !rel.is_blocky() && rel_blknum != 0 {
@@ -814,14 +841,6 @@ impl Timeline for LayeredTimeline {
);
}
debug_assert!(lsn <= self.get_last_record_lsn());
let latest_gc_cutoff_lsn = self.latest_gc_cutoff_lsn.load();
// error instead of assert to simplify testing
ensure!(
lsn >= latest_gc_cutoff_lsn,
"tried to request a page version that was garbage collected. requested at {} gc cutoff {}",
lsn, latest_gc_cutoff_lsn
);
let (seg, seg_blknum) = SegmentTag::from_blknum(rel, rel_blknum);
if let Some((layer, lsn)) = self.get_layer_for_read(seg, lsn)? {
@@ -992,21 +1011,16 @@ impl Timeline for LayeredTimeline {
///
/// Validate lsn against initdb_lsn and latest_gc_cutoff_lsn.
///
fn check_lsn_is_in_scope(&self, lsn: Lsn) -> Result<()> {
let initdb_lsn = self.initdb_lsn;
fn check_lsn_is_in_scope(
&self,
lsn: Lsn,
latest_gc_cutoff_lsn: &RwLockReadGuard<Lsn>,
) -> Result<()> {
ensure!(
lsn >= initdb_lsn,
"LSN {} is earlier than initdb lsn {}",
lsn,
initdb_lsn,
);
let latest_gc_cutoff_lsn = self.latest_gc_cutoff_lsn.load();
ensure!(
lsn >= latest_gc_cutoff_lsn,
lsn >= **latest_gc_cutoff_lsn,
"LSN {} is earlier than latest GC horizon {} (we might've already garbage collected needed data)",
lsn,
latest_gc_cutoff_lsn,
**latest_gc_cutoff_lsn,
);
Ok(())
}
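Callers are expected to hold the returned read guard across both the check and the subsequent use of the LSN, which is exactly what branch_timeline does above. A minimal sketch:

// Holding the guard keeps GC from advancing the cutoff between the
// validity check and the moment the branch is actually created.
let latest_gc_cutoff_lsn = src_timeline.get_latest_gc_cutoff_lsn();
src_timeline
    .check_lsn_is_in_scope(start_lsn, &latest_gc_cutoff_lsn)
    .context("invalid branch start lsn")?;
// ... create the branch while `latest_gc_cutoff_lsn` is still alive ...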
@@ -1023,14 +1037,6 @@ impl Timeline for LayeredTimeline {
self.last_record_lsn.load()
}
fn get_start_lsn(&self) -> Lsn {
self.ancestor_timeline
.as_ref()
.and_then(|ancestor_entry| ancestor_entry.local_or_schedule_download(self.tenantid))
.map(Timeline::get_start_lsn)
.unwrap_or(self.ancestor_lsn)
}
fn get_current_logical_size(&self) -> usize {
self.current_logical_size.load(atomic::Ordering::Acquire) as usize
}
@@ -1118,8 +1124,9 @@ impl LayeredTimeline {
upload_relishes: AtomicBool::new(upload_relishes),
write_lock: Mutex::new(()),
checkpoint_cs: Mutex::new(()),
latest_gc_cutoff_lsn: AtomicLsn::from(metadata.latest_gc_cutoff_lsn()),
latest_gc_cutoff_lsn: RwLock::new(metadata.latest_gc_cutoff_lsn()),
initdb_lsn: metadata.initdb_lsn(),
}
}
@@ -1145,8 +1152,8 @@ impl LayeredTimeline {
// create an ImageLayer struct for each image file.
if imgfilename.lsn > disk_consistent_lsn {
warn!(
"found future image layer {} on timeline {}",
imgfilename, self.timelineid
"found future image layer {} on timeline {} disk_consistent_lsn is {}",
imgfilename, self.timelineid, disk_consistent_lsn
);
rename_to_backup(direntry.path())?;
@@ -1169,8 +1176,8 @@ impl LayeredTimeline {
// before crash.
if deltafilename.end_lsn > disk_consistent_lsn + 1 {
warn!(
"found future delta layer {} on timeline {}",
deltafilename, self.timelineid
"found future delta layer {} on timeline {} disk_consistent_lsn is {}",
deltafilename, self.timelineid, disk_consistent_lsn
);
rename_to_backup(direntry.path())?;
@@ -1366,7 +1373,7 @@ impl LayeredTimeline {
self.tenantid,
seg,
lsn,
lsn,
last_record_lsn,
)?;
} else {
return Ok(open_layer);
@@ -1409,7 +1416,7 @@ impl LayeredTimeline {
self.timelineid,
self.tenantid,
start_lsn,
lsn,
last_record_lsn,
)?;
} else {
// New relation.
@@ -1420,8 +1427,14 @@ impl LayeredTimeline {
lsn
);
layer =
InMemoryLayer::create(self.conf, self.timelineid, self.tenantid, seg, lsn, lsn)?;
layer = InMemoryLayer::create(
self.conf,
self.timelineid,
self.tenantid,
seg,
lsn,
last_record_lsn,
)?;
}
let layer_rc: Arc<InMemoryLayer> = Arc::new(layer);
@@ -1435,7 +1448,10 @@ impl LayeredTimeline {
///
/// NOTE: This has nothing to do with checkpoint in PostgreSQL.
fn checkpoint_internal(&self, checkpoint_distance: u64, reconstruct_pages: bool) -> Result<()> {
let mut write_guard = self.write_lock.lock().unwrap();
// Prevent concurrent checkpoints
let _checkpoint_cs = self.checkpoint_cs.lock().unwrap();
let write_guard = self.write_lock.lock().unwrap();
let mut layers = self.layers.lock().unwrap();
// Bump the generation number in the layer map, so that we can distinguish
@@ -1461,11 +1477,17 @@ impl LayeredTimeline {
let mut disk_consistent_lsn = last_record_lsn;
let mut layer_paths = Vec::new();
let mut freeze_end_lsn = Lsn(0);
let mut evicted_layers = Vec::new();
//
// Determine which layers we need to evict and calculate max(latest_lsn)
// among those layers.
//
while let Some((oldest_layer_id, oldest_layer, oldest_generation)) =
layers.peek_oldest_open()
{
let oldest_pending_lsn = oldest_layer.get_oldest_pending_lsn();
let oldest_lsn = oldest_layer.get_oldest_lsn();
// Does this layer need freezing?
//
// Write out all in-memory layers that contain WAL older than CHECKPOINT_DISTANCE.
@@ -1474,28 +1496,60 @@ impl LayeredTimeline {
// when we started. We don't want to process layers inserted after we started, to
// avoid getting into an infinite loop trying to process again entries that we
// inserted ourselves.
let distance = last_record_lsn.widening_sub(oldest_pending_lsn);
if distance < 0
//
// Once we have decided to write out at least one layer, we must also write out
// any other layers that contain WAL older than the end LSN of the layers we have
// already decided to write out. In other words, we must write out all layers
// whose [oldest_lsn, latest_lsn) range overlaps with any of the other layers
// that we are writing out. Otherwise, when we advance 'disk_consistent_lsn', it's
// ambiguous whether those layers are already durable on disk or not. For example,
// imagine that there are two layers in memory that contain page versions in the
// following LSN ranges:
//
// A: 100-150
// B: 110-200
//
// If we flush layer A, we must also flush layer B, because they overlap. If we
// flushed only A, and advanced 'disk_consistent_lsn' to 150, we would break the
// rule that all WAL older than 'disk_consistent_lsn' are durable on disk, because
// B contains some WAL older than 150. On the other hand, if we flushed out A and
// advanced 'disk_consistent_lsn' only up to 110, after crash and restart we would
// delete the first layer because its end LSN is larger than 110. If we changed
// the deletion logic to not delete it, then we would start streaming at 110, and
// process again the WAL records in the range 110-150 that are already in layer A,
// and the WAL processing code does not cope with that. We solve that dilemma by
// insisting that if we write out the first layer, we also write out the second
// layer, and advance disk_consistent_lsn all the way up to 200.
//
let distance = last_record_lsn.widening_sub(oldest_lsn);
if (distance < 0
|| distance < checkpoint_distance.into()
|| oldest_generation == current_generation
|| oldest_generation == current_generation)
&& oldest_lsn >= freeze_end_lsn
// this layer intersects with an evicted layer and so also needs to be evicted
{
info!(
"the oldest layer is now {} which is {} bytes behind last_record_lsn",
oldest_layer.filename().display(),
distance
);
disk_consistent_lsn = oldest_pending_lsn;
disk_consistent_lsn = oldest_lsn;
break;
}
let latest_lsn = oldest_layer.get_latest_lsn();
if latest_lsn > freeze_end_lsn {
freeze_end_lsn = latest_lsn; // track the max latest_lsn of the layers we're about to evict
}
layers.remove_open(oldest_layer_id);
evicted_layers.push((oldest_layer_id, oldest_layer));
}
drop(layers);
drop(write_guard);
let mut this_layer_paths = self.evict_layer(oldest_layer_id, reconstruct_pages)?;
layer_paths.append(&mut this_layer_paths);
write_guard = self.write_lock.lock().unwrap();
layers = self.layers.lock().unwrap();
// Freeze evicted layers
for (_evicted_layer_id, evicted_layer) in evicted_layers.iter() {
// Mark the layer as no longer accepting writes and record the end_lsn.
// This happens in-place, no new layers are created now.
evicted_layer.freeze(freeze_end_lsn);
layers.insert_historic(evicted_layer.clone());
}
// Call unload() on all frozen layers, to release memory.
@@ -1508,6 +1562,14 @@ impl LayeredTimeline {
drop(layers);
drop(write_guard);
// Create delta/image layers for evicted layers
for (_evicted_layer_id, evicted_layer) in evicted_layers.iter() {
let mut this_layer_paths =
self.evict_layer(evicted_layer.clone(), reconstruct_pages)?;
layer_paths.append(&mut this_layer_paths);
}
// Sync layers
if !layer_paths.is_empty() {
// We must fsync the timeline dir to ensure the directory entries for
// new layer files are durable
@@ -1548,7 +1610,7 @@ impl LayeredTimeline {
ondisk_prev_record_lsn,
ancestor_timelineid,
self.ancestor_lsn,
self.latest_gc_cutoff_lsn.load(),
*self.latest_gc_cutoff_lsn.read().unwrap(),
self.initdb_lsn,
);
@@ -1575,52 +1637,29 @@ impl LayeredTimeline {
Ok(())
}
fn evict_layer(&self, layer_id: LayerId, reconstruct_pages: bool) -> Result<Vec<PathBuf>> {
// Mark the layer as no longer accepting writes and record the end_lsn.
// This happens in-place, no new layers are created now.
// We call `get_last_record_lsn` again, which may be different from the
// original load, as we may have released the write lock since then.
let mut write_guard = self.write_lock.lock().unwrap();
let mut layers = self.layers.lock().unwrap();
fn evict_layer(
&self,
layer: Arc<InMemoryLayer>,
reconstruct_pages: bool,
) -> Result<Vec<PathBuf>> {
let new_historics = layer.write_to_disk(self, reconstruct_pages)?;
let mut layer_paths = Vec::new();
let _write_guard = self.write_lock.lock().unwrap();
let mut layers = self.layers.lock().unwrap();
let global_layer_map = GLOBAL_LAYER_MAP.read().unwrap();
if let Some(oldest_layer) = global_layer_map.get(&layer_id) {
drop(global_layer_map);
oldest_layer.freeze(self.get_last_record_lsn());
// Finally, replace the frozen in-memory layer with the new on-disk layers
layers.remove_historic(layer);
// The layer is no longer open, update the layer map to reflect this.
// We will replace it with on-disk historics below.
layers.remove_open(layer_id);
layers.insert_historic(oldest_layer.clone());
// Write the now-frozen layer to disk. That could take a while, so release the lock while do it
drop(layers);
drop(write_guard);
let new_historics = oldest_layer.write_to_disk(self, reconstruct_pages)?;
write_guard = self.write_lock.lock().unwrap();
layers = self.layers.lock().unwrap();
// Finally, replace the frozen in-memory layer with the new on-disk layers
layers.remove_historic(oldest_layer);
// Add the historics to the LayerMap
for delta_layer in new_historics.delta_layers {
layer_paths.push(delta_layer.path());
layers.insert_historic(Arc::new(delta_layer));
}
for image_layer in new_historics.image_layers {
layer_paths.push(image_layer.path());
layers.insert_historic(Arc::new(image_layer));
}
drop(layers);
drop(write_guard);
Ok(layer_paths)
}
@@ -1649,12 +1688,14 @@ impl LayeredTimeline {
pub fn gc_timeline(&self, retain_lsns: Vec<Lsn>, cutoff: Lsn) -> Result<GcResult> {
let now = Instant::now();
let mut result: GcResult = Default::default();
let disk_consistent_lsn = self.get_disk_consistent_lsn();
let _checkpoint_cs = self.checkpoint_cs.lock().unwrap();
let _enter = info_span!("garbage collection", timeline = %self.timelineid, tenant = %self.tenantid, cutoff = %cutoff).entered();
// We need to ensure that no one branches at a point before latest_gc_cutoff_lsn.
// See branch_timeline() for details.
self.latest_gc_cutoff_lsn.store(cutoff);
*self.latest_gc_cutoff_lsn.write().unwrap() = cutoff;
info!("GC starting");
@@ -1734,7 +1775,12 @@ impl LayeredTimeline {
}
// 3. Is there a later on-disk layer for this relation?
if !l.is_dropped() && !layers.newer_image_layer_exists(l.get_seg_tag(), l.get_end_lsn())
if !l.is_dropped()
&& !layers.newer_image_layer_exists(
l.get_seg_tag(),
l.get_end_lsn(),
disk_consistent_lsn,
)
{
info!(
"keeping {} {}-{} because it is the latest layer",
@@ -2187,11 +2233,10 @@ impl<'a> TimelineWriter for LayeredTimelineWriter<'a> {
let oldsize = self
.tl
.get_relish_size(rel, self.tl.get_last_record_lsn())?
.ok_or_else(|| {
anyhow!(
.with_context(|| {
format!(
"attempted to truncate non-existent relish {} at {}",
rel,
lsn
rel, lsn
)
})?;
@@ -2314,8 +2359,5 @@ fn rename_to_backup(path: PathBuf) -> anyhow::Result<()> {
}
}
Err(anyhow!(
"couldn't find an unused backup number for {:?}",
path
))
bail!("couldn't find an unused backup number for {:?}", path)
}

View File

@@ -169,7 +169,7 @@ impl DeltaLayerInner {
if let Some((_entry_lsn, entry)) = slice.last() {
Ok(*entry)
} else {
Err(anyhow::anyhow!("could not find seg size in delta layer"))
bail!("could not find seg size in delta layer")
}
}
}

View File

@@ -175,7 +175,10 @@ impl Write for EphemeralFile {
}
fn flush(&mut self) -> Result<(), std::io::Error> {
todo!()
// we don't need to flush data:
// * we either write input bytes or not, not keeping any intermediate data buffered
// * rust unix file `flush` impl does not flush things either, returning `Ok(())`
Ok(())
}
}

View File

@@ -173,7 +173,14 @@ impl Layer for ImageLayer {
.as_ref()
.unwrap()
.chapter_reader(BLOCKY_IMAGES_CHAPTER)?;
chapter.read_exact_at(&mut buf, offset)?;
chapter.read_exact_at(&mut buf, offset).with_context(|| {
format!(
"failed to read page from data file {} at offset {}",
self.filename().display(),
offset
)
})?;
buf
}

View File

@@ -39,8 +39,20 @@ pub struct InMemoryLayer {
///
start_lsn: Lsn,
/// LSN of the oldest page version stored in this layer
oldest_pending_lsn: Lsn,
///
/// LSN of the oldest page version stored in this layer.
///
/// This is different from 'start_lsn' in that we enforce that the 'start_lsn'
/// of a layer always matches the 'end_lsn' of its predecessor, even if there
/// are no page versions until a later LSN. That way you can detect any
/// missing layer files more easily. 'oldest_lsn' is the first page version
/// actually stored in this layer. In the range between 'start_lsn' and
/// 'oldest_lsn', there are no changes to the segment.
/// 'oldest_lsn' is used to adjust 'disk_consistent_lsn', which is why it must
/// point to the beginning of a WAL record. This is the other difference from
/// 'start_lsn', which points to the end of a WAL record; as a result,
/// 'oldest_lsn' can be smaller than 'start_lsn'.
///
oldest_lsn: Lsn,
/// The above fields never change. The parts that do change are in 'inner',
/// and protected by mutex.
@@ -73,6 +85,14 @@ pub struct InMemoryLayerInner {
/// a non-blocky rel, 'seg_sizes' is not used and is always empty.
///
seg_sizes: VecMap<Lsn, SegmentBlk>,
///
/// LSN of the newest page version stored in this layer.
///
/// The difference between 'end_lsn' and 'latest_lsn' is the same as between
/// 'start_lsn' and 'oldest_lsn'. See comments in 'oldest_lsn'.
///
latest_lsn: Lsn,
}
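The relationship between 'start_lsn', 'oldest_lsn', and 'latest_lsn' described above is subtle; here is a minimal sketch with a hypothetical helper and plain integers standing in for Lsn (not part of the patch):

// Hypothetical illustration of the LSN bookkeeping described in the comments above.
fn check_layer_lsn_invariants(
    predecessor_end_lsn: u64, // 'end_lsn' of this segment's previous layer
    start_lsn: u64,
    oldest_lsn: u64,
    latest_lsn: u64,
) {
    // 'start_lsn' always matches the predecessor's 'end_lsn', so a missing
    // layer file shows up as a gap in the sequence.
    assert_eq!(start_lsn, predecessor_end_lsn);
    // 'oldest_lsn' points at the beginning of a WAL record and may therefore
    // be smaller than 'start_lsn', which points at the end of one; no such
    // ordering is asserted here.
    assert!(oldest_lsn <= latest_lsn);
}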
impl InMemoryLayerInner {
@@ -319,8 +339,13 @@ pub struct LayersOnDisk {
impl InMemoryLayer {
/// Return the oldest page version that's stored in this layer
pub fn get_oldest_pending_lsn(&self) -> Lsn {
self.oldest_pending_lsn
pub fn get_oldest_lsn(&self) -> Lsn {
self.oldest_lsn
}
pub fn get_latest_lsn(&self) -> Lsn {
let inner = self.inner.read().unwrap();
inner.latest_lsn
}
///
@@ -332,7 +357,7 @@ impl InMemoryLayer {
tenantid: ZTenantId,
seg: SegmentTag,
start_lsn: Lsn,
oldest_pending_lsn: Lsn,
oldest_lsn: Lsn,
) -> Result<InMemoryLayer> {
trace!(
"initializing new empty InMemoryLayer for writing {} on timeline {} at {}",
@@ -355,13 +380,14 @@ impl InMemoryLayer {
tenantid,
seg,
start_lsn,
oldest_pending_lsn,
oldest_lsn,
incremental: false,
inner: RwLock::new(InMemoryLayerInner {
end_lsn: None,
dropped: false,
page_versions: PageVersions::new(file),
seg_sizes,
latest_lsn: oldest_lsn,
}),
})
}
@@ -398,6 +424,8 @@ impl InMemoryLayer {
let mut inner = self.inner.write().unwrap();
inner.assert_writeable();
assert!(lsn >= inner.latest_lsn);
inner.latest_lsn = lsn;
let old = inner.page_versions.append_or_update_last(blknum, lsn, pv)?;
@@ -509,12 +537,11 @@ impl InMemoryLayer {
timelineid: ZTimelineId,
tenantid: ZTenantId,
start_lsn: Lsn,
oldest_pending_lsn: Lsn,
oldest_lsn: Lsn,
) -> Result<InMemoryLayer> {
let seg = src.get_seg_tag();
assert!(oldest_pending_lsn.is_aligned());
assert!(oldest_pending_lsn >= start_lsn);
assert!(oldest_lsn.is_aligned());
trace!(
"initializing new InMemoryLayer for writing {} on timeline {} at {}",
@@ -538,13 +565,14 @@ impl InMemoryLayer {
tenantid,
seg,
start_lsn,
oldest_pending_lsn,
oldest_lsn,
incremental: true,
inner: RwLock::new(InMemoryLayerInner {
end_lsn: None,
dropped: false,
page_versions: PageVersions::new(file),
seg_sizes,
latest_lsn: oldest_lsn,
}),
})
}

View File

@@ -40,7 +40,7 @@ pub struct LayerMap {
/// All the layers keyed by segment tag
segs: HashMap<SegmentTag, SegEntry>,
/// All in-memory layers, ordered by 'oldest_pending_lsn' and generation
/// All in-memory layers, ordered by 'oldest_lsn' and generation
/// of each layer. This allows easy access to the in-memory layer that
/// contains the oldest WAL record.
open_layers: BinaryHeap<OpenLayerEntry>,
@@ -83,16 +83,16 @@ impl LayerMap {
let layer_id = segentry.update_open(Arc::clone(&layer));
let oldest_pending_lsn = layer.get_oldest_pending_lsn();
let oldest_lsn = layer.get_oldest_lsn();
// After a crash and restart, 'oldest_pending_lsn' of the oldest in-memory
// After a crash and restart, 'oldest_lsn' of the oldest in-memory
// layer becomes the WAL streaming starting point, so it better not point
// in the middle of a WAL record.
assert!(oldest_pending_lsn.is_aligned());
assert!(oldest_lsn.is_aligned());
// Also add it to the binary heap
let open_layer_entry = OpenLayerEntry {
oldest_pending_lsn: layer.get_oldest_pending_lsn(),
oldest_lsn: layer.get_oldest_lsn(),
layer_id,
generation: self.current_generation,
};
@@ -191,9 +191,15 @@ impl LayerMap {
///
/// This is used for garbage collection, to determine if an old layer can
/// be deleted.
pub fn newer_image_layer_exists(&self, seg: SegmentTag, lsn: Lsn) -> bool {
/// We ignore layers newer than disk_consistent_lsn because they will be removed at restart.
pub fn newer_image_layer_exists(
&self,
seg: SegmentTag,
lsn: Lsn,
disk_consistent_lsn: Lsn,
) -> bool {
if let Some(segentry) = self.segs.get(&seg) {
segentry.newer_image_layer_exists(lsn)
segentry.newer_image_layer_exists(lsn, disk_consistent_lsn)
} else {
false
}
@@ -311,13 +317,18 @@ impl SegEntry {
self.historic.search(lsn)
}
pub fn newer_image_layer_exists(&self, lsn: Lsn) -> bool {
pub fn newer_image_layer_exists(&self, lsn: Lsn, disk_consistent_lsn: Lsn) -> bool {
// We only check on-disk layers, because
// in-memory layers are not durable
// The end-LSN is exclusive, while disk_consistent_lsn is
// inclusive. For example, if disk_consistent_lsn is 100, it is
// OK for a delta layer to have end LSN 101, but if the end LSN
// is 102, then it might not have been fully flushed to disk
// before crash.
self.historic
.iter_newer(lsn)
.any(|layer| !layer.is_incremental())
.any(|layer| !layer.is_incremental() && layer.get_end_lsn() <= disk_consistent_lsn + 1)
}
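The off-by-one in that comparison is easy to get wrong; a hedged sketch of the boundary arithmetic, with LSNs as plain integers:

// Illustrative only: the end LSN is exclusive while disk_consistent_lsn is
// inclusive, so a layer ending exactly at disk_consistent_lsn + 1 is durable.
fn survives_restart(layer_end_lsn: u64, disk_consistent_lsn: u64) -> bool {
    layer_end_lsn <= disk_consistent_lsn + 1
}

fn main() {
    assert!(survives_restart(101, 100)); // fully flushed before any crash
    assert!(!survives_restart(102, 100)); // might not have reached disk
}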
// Set new open layer for a SegEntry.
@@ -341,23 +352,23 @@ impl SegEntry {
}
/// Entry held in LayerMap::open_layers, with boilerplate comparison routines
/// to implement a min-heap ordered by 'oldest_pending_lsn' and 'generation'
/// to implement a min-heap ordered by 'oldest_lsn' and 'generation'
///
/// The generation number associated with each entry can be used to distinguish
/// recently-added entries (i.e after last call to increment_generation()) from older
/// entries with the same 'oldest_pending_lsn'.
/// entries with the same 'oldest_lsn'.
struct OpenLayerEntry {
oldest_pending_lsn: Lsn, // copy of layer.get_oldest_pending_lsn()
oldest_lsn: Lsn, // copy of layer.get_oldest_lsn()
generation: u64,
layer_id: LayerId,
}
impl Ord for OpenLayerEntry {
fn cmp(&self, other: &Self) -> Ordering {
// BinaryHeap is a max-heap, and we want a min-heap. Reverse the ordering here
// to get that. Entries with identical oldest_pending_lsn are ordered by generation
// to get that. Entries with identical oldest_lsn are ordered by generation
other
.oldest_pending_lsn
.cmp(&self.oldest_pending_lsn)
.oldest_lsn
.cmp(&self.oldest_lsn)
.then_with(|| other.generation.cmp(&self.generation))
}
}
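For reference, the same min-heap-from-max-heap trick can be seen with std's Reverse wrapper; a small self-contained sketch (illustrative only, not part of the patch):

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// BinaryHeap is a max-heap; wrapping entries in Reverse (or flipping cmp(),
// as OpenLayerEntry does) turns it into a min-heap.
fn main() {
    let mut heap = BinaryHeap::new();
    for oldest_lsn in [30u64, 10, 20] {
        heap.push(Reverse(oldest_lsn));
    }
    // The entry with the smallest oldest_lsn pops first.
    assert_eq!(heap.pop(), Some(Reverse(10)));
}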
@@ -426,7 +437,7 @@ mod tests {
conf: &'static PageServerConf,
segno: u32,
start_lsn: Lsn,
oldest_pending_lsn: Lsn,
oldest_lsn: Lsn,
) -> Arc<InMemoryLayer> {
Arc::new(
InMemoryLayer::create(
@@ -438,7 +449,7 @@ mod tests {
segno,
},
start_lsn,
oldest_pending_lsn,
oldest_lsn,
)
.unwrap(),
)

View File

@@ -11,6 +11,7 @@ pub mod remote_storage;
pub mod repository;
pub mod tenant_mgr;
pub mod tenant_threads;
pub mod thread_mgr;
pub mod virtual_file;
pub mod walingest;
pub mod walreceiver;

View File

@@ -10,16 +10,15 @@
// *callmemaybe <zenith timelineid> $url* -- ask pageserver to start walreceiver on $url
//
use anyhow::{anyhow, bail, ensure, Context, Result};
use anyhow::{bail, ensure, Context, Result};
use bytes::{Buf, BufMut, Bytes, BytesMut};
use lazy_static::lazy_static;
use regex::Regex;
use std::io;
use std::net::TcpListener;
use std::str;
use std::str::FromStr;
use std::sync::Arc;
use std::thread;
use std::{io, net::TcpStream};
use std::sync::{Arc, RwLockReadGuard};
use tracing::*;
use zenith_metrics::{register_histogram_vec, HistogramVec};
use zenith_utils::auth::{self, JwtAuth};
@@ -28,17 +27,16 @@ use zenith_utils::lsn::Lsn;
use zenith_utils::postgres_backend::is_socket_read_timed_out;
use zenith_utils::postgres_backend::PostgresBackend;
use zenith_utils::postgres_backend::{self, AuthType};
use zenith_utils::pq_proto::{
BeMessage, FeMessage, RowDescriptor, HELLO_WORLD_ROW, SINGLE_COL_ROWDESC,
};
use zenith_utils::pq_proto::{BeMessage, FeMessage, RowDescriptor, SINGLE_COL_ROWDESC};
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use crate::basebackup;
use crate::branches;
use crate::config::PageServerConf;
use crate::relish::*;
use crate::repository::Timeline;
use crate::tenant_mgr;
use crate::thread_mgr;
use crate::thread_mgr::ThreadKind;
use crate::walreceiver;
use crate::CheckpointConfig;
@@ -189,30 +187,61 @@ pub fn thread_main(
listener: TcpListener,
auth_type: AuthType,
) -> anyhow::Result<()> {
let mut join_handles = Vec::new();
listener.set_nonblocking(true)?;
let basic_rt = tokio::runtime::Builder::new_current_thread()
.enable_io()
.build()?;
while !tenant_mgr::shutdown_requested() {
let (socket, peer_addr) = listener.accept()?;
debug!("accepted connection from {}", peer_addr);
socket.set_nodelay(true).unwrap();
let local_auth = auth.clone();
let tokio_listener = {
let _guard = basic_rt.enter();
tokio::net::TcpListener::from_std(listener)
}?;
let handle = thread::Builder::new()
.name("serving Page Service thread".into())
.spawn(move || {
if let Err(err) = page_service_conn_main(conf, local_auth, socket, auth_type) {
error!(%err, "page server thread exited with error");
// Wait for a new connection to arrive, or for server shutdown.
while let Some(res) = basic_rt.block_on(async {
let shutdown_watcher = thread_mgr::shutdown_watcher();
tokio::select! {
biased;
_ = shutdown_watcher => {
// We were requested to shut down.
None
}
res = tokio_listener.accept() => {
Some(res)
}
}
}) {
match res {
Ok((socket, peer_addr)) => {
// Connection established. Spawn a new thread to handle it.
debug!("accepted connection from {}", peer_addr);
let local_auth = auth.clone();
// PageRequestHandler threads are not associated with any particular
// timeline in the thread manager. In practice most connections will
// only deal with a particular timeline, but we don't know which one
// yet.
if let Err(err) = thread_mgr::spawn(
ThreadKind::PageRequestHandler,
None,
None,
"serving Page Service thread",
move || page_service_conn_main(conf, local_auth, socket, auth_type),
) {
// Thread creation failed. Log the error and continue.
error!("could not spawn page service thread: {:?}", err);
}
})
.unwrap();
join_handles.push(handle);
}
Err(err) => {
// accept() failed. Log the error, and loop back to retry on next connection.
error!("accept() failed: {:?}", err);
}
}
}
debug!("page_service loop terminated. wait for connections to cancel");
for handle in join_handles.into_iter() {
handle.join().unwrap();
}
debug!("page_service loop terminated");
Ok(())
}
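The accept-or-shutdown structure above is a general pattern; a stripped-down sketch, where `shutdown` stands in for thread_mgr::shutdown_watcher() and error handling and thread spawning are elided:

// Hypothetical helper, assuming a watch channel as the shutdown signal.
async fn accept_loop(
    listener: tokio::net::TcpListener,
    mut shutdown: tokio::sync::watch::Receiver<bool>,
) {
    loop {
        tokio::select! {
            biased;
            // Shutdown wins ties, mirroring the `biased` select above.
            _ = shutdown.changed() => break,
            res = listener.accept() => match res {
                Ok((_socket, peer_addr)) => println!("accepted connection from {peer_addr}"),
                Err(err) => eprintln!("accept() failed: {err:?}"), // retry on next iteration
            },
        }
    }
}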
@@ -220,10 +249,10 @@ pub fn thread_main(
fn page_service_conn_main(
conf: &'static PageServerConf,
auth: Option<Arc<JwtAuth>>,
socket: TcpStream,
socket: tokio::net::TcpStream,
auth_type: AuthType,
) -> anyhow::Result<()> {
// Immediatsely increment the gauge, then create a job to decrement it on thread exit.
// Immediately increment the gauge, then create a job to decrement it on thread exit.
// One of the pros of `defer!` is that this will *most probably*
// get called, even in presence of panics.
let gauge = crate::LIVE_CONNECTIONS_COUNT.with_label_values(&["page_service"]);
@@ -232,6 +261,19 @@ fn page_service_conn_main(
gauge.dec();
}
// We use Tokio to accept the connection, but the rest of the code works with a
// regular socket. Convert.
let socket = socket
.into_std()
.context("could not convert tokio::net:TcpStream to std::net::TcpStream")?;
socket
.set_nonblocking(false)
.context("could not put socket to blocking mode")?;
socket
.set_nodelay(true)
.context("could not set TCP_NODELAY")?;
let mut conn_handler = PageServerHandler::new(conf, auth);
let pgbackend = PostgresBackend::new(socket, auth_type, None, true)?;
pgbackend.run(&mut conn_handler)
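The tokio-to-std handoff boils down to three calls; a minimal sketch under the same assumptions as the code above (a hypothetical helper, not part of the patch):

// Accept with tokio, then give the synchronous protocol code a blocking std socket.
fn to_blocking(socket: tokio::net::TcpStream) -> std::io::Result<std::net::TcpStream> {
    let socket = socket.into_std()?;
    socket.set_nonblocking(false)?; // tokio sockets start out non-blocking
    socket.set_nodelay(true)?;
    Ok(socket)
}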
@@ -286,7 +328,7 @@ impl PageServerHandler {
/* switch client to COPYBOTH */
pgb.write_message(&BeMessage::CopyBothResponse)?;
while !tenant_mgr::shutdown_requested() {
while !thread_mgr::is_shutdown_requested() {
match pgb.read_message() {
Ok(message) => {
if let Some(message) = message {
@@ -320,7 +362,7 @@ impl PageServerHandler {
let response = response.unwrap_or_else(|e| {
// print the all details to the log with {:#}, but for the client the
// error message is enough
error!("error reading relation or page version: {:#}", e);
error!("error reading relation or page version: {:?}", e);
PagestreamBeMessage::Error(PagestreamErrorResponse {
message: e.to_string(),
})
@@ -353,7 +395,12 @@ impl PageServerHandler {
/// In either case, if the page server hasn't received the WAL up to the
/// requested LSN yet, we will wait for it to arrive. The return value is
/// the LSN that should be used to look up the page versions.
fn wait_or_get_last_lsn(timeline: &dyn Timeline, lsn: Lsn, latest: bool) -> Result<Lsn> {
fn wait_or_get_last_lsn(
timeline: &dyn Timeline,
mut lsn: Lsn,
latest: bool,
latest_gc_cutoff_lsn: &RwLockReadGuard<Lsn>,
) -> Result<Lsn> {
if latest {
// Latest page version was requested. If LSN is given, it is a hint
// to the page server that there have been no modifications to the
@@ -374,22 +421,26 @@ impl PageServerHandler {
// walsender completes the authentication and starts streaming the
// WAL.
if lsn <= last_record_lsn {
Ok(last_record_lsn)
lsn = last_record_lsn;
} else {
timeline.wait_lsn(lsn)?;
// Since we waited for 'lsn' to arrive, that is now the last
// record LSN. (Or close enough for our purposes; the
// last-record LSN can advance immediately after we return
// anyway)
Ok(lsn)
}
} else {
if lsn == Lsn(0) {
bail!("invalid LSN(0) in request");
}
timeline.wait_lsn(lsn)?;
Ok(lsn)
}
ensure!(
lsn >= **latest_gc_cutoff_lsn,
"tried to request a page version that was garbage collected. requested at {} gc cutoff {}",
lsn, **latest_gc_cutoff_lsn
);
Ok(lsn)
}
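Passing the read guard in from the caller is the point of this signature change: GC takes the write lock on latest_gc_cutoff_lsn, so holding the read guard across both the check and the subsequent page lookup keeps the cutoff from advancing mid-request. A simplified sketch, with u64 in place of Lsn and the guard acquired locally:

use std::sync::RwLock;

// Types are simplified; the real code threads an RwLockReadGuard<Lsn> through.
fn serve_request(latest_gc_cutoff_lsn: &RwLock<u64>, request_lsn: u64) -> Result<(), String> {
    let cutoff = latest_gc_cutoff_lsn.read().unwrap();
    if request_lsn < *cutoff {
        return Err(format!(
            "tried to request a page version that was garbage collected. requested at {request_lsn} gc cutoff {cutoff}"
        ));
    }
    // ... read page versions; GC needs the write lock, so the cutoff cannot
    // advance past request_lsn while `cutoff` is held.
    Ok(())
}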
fn handle_get_rel_exists_request(
@@ -400,7 +451,8 @@ impl PageServerHandler {
let _enter = info_span!("get_rel_exists", rel = %req.rel, req_lsn = %req.lsn).entered();
let tag = RelishTag::Relation(req.rel);
let lsn = Self::wait_or_get_last_lsn(timeline, req.lsn, req.latest)?;
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
let lsn = Self::wait_or_get_last_lsn(timeline, req.lsn, req.latest, &latest_gc_cutoff_lsn)?;
let exists = timeline.get_rel_exists(tag, lsn)?;
@@ -416,7 +468,8 @@ impl PageServerHandler {
) -> Result<PagestreamBeMessage> {
let _enter = info_span!("get_nblocks", rel = %req.rel, req_lsn = %req.lsn).entered();
let tag = RelishTag::Relation(req.rel);
let lsn = Self::wait_or_get_last_lsn(timeline, req.lsn, req.latest)?;
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
let lsn = Self::wait_or_get_last_lsn(timeline, req.lsn, req.latest, &latest_gc_cutoff_lsn)?;
let n_blocks = timeline.get_relish_size(tag, lsn)?;
@@ -437,8 +490,16 @@ impl PageServerHandler {
let _enter = info_span!("get_page", rel = %req.rel, blkno = &req.blkno, req_lsn = %req.lsn)
.entered();
let tag = RelishTag::Relation(req.rel);
let lsn = Self::wait_or_get_last_lsn(timeline, req.lsn, req.latest)?;
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
let lsn = Self::wait_or_get_last_lsn(timeline, req.lsn, req.latest, &latest_gc_cutoff_lsn)?;
/*
// Add a 1s delay to some requests. The delay causes the requests to
// hit the race condition from github issue #1047 more easily.
use rand::Rng;
if rand::thread_rng().gen::<u8>() < 5 {
std::thread::sleep(std::time::Duration::from_millis(1000));
}
*/
let page = timeline.get_page_at_lsn(tag, req.blkno, lsn)?;
Ok(PagestreamBeMessage::GetPage(PagestreamGetPageResponse {
@@ -459,9 +520,10 @@ impl PageServerHandler {
// check that the timeline exists
let timeline = tenant_mgr::get_timeline_for_tenant(tenantid, timelineid)
.context("Cannot handle basebackup request for a remote timeline")?;
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
if let Some(lsn) = lsn {
timeline
.check_lsn_is_in_scope(lsn)
.check_lsn_is_in_scope(lsn, &latest_gc_cutoff_lsn)
.context("invalid basebackup lsn")?;
}
@@ -579,7 +641,7 @@ impl postgres_backend::Handler for PageServerHandler {
let re = Regex::new(r"^callmemaybe ([[:xdigit:]]+) ([[:xdigit:]]+) (.*)$").unwrap();
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid callmemaybe: '{}'", query_string))?;
.with_context(|| format!("invalid callmemaybe: '{}'", query_string))?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let timelineid = ZTimelineId::from_str(caps.get(2).unwrap().as_str())?;
@@ -594,82 +656,24 @@ impl postgres_backend::Handler for PageServerHandler {
tenant_mgr::get_timeline_for_tenant(tenantid, timelineid)
.context("Failed to fetch local timeline for callmemaybe requests")?;
walreceiver::launch_wal_receiver(self.conf, timelineid, &connstr, tenantid.to_owned());
walreceiver::launch_wal_receiver(self.conf, tenantid, timelineid, &connstr)?;
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("branch_create ") {
let err = || anyhow!("invalid branch_create: '{}'", query_string);
// branch_create <tenantid> <branchname> <startpoint>
// TODO lazy static
// TODO: escaping, to allow branch names with spaces
let re = Regex::new(r"^branch_create ([[:xdigit:]]+) (\S+) ([^\r\n\s;]+)[\r\n\s;]*;?$")
.unwrap();
let caps = re.captures(query_string).ok_or_else(err)?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let branchname = caps.get(2).ok_or_else(err)?.as_str().to_owned();
let startpoint_str = caps.get(3).ok_or_else(err)?.as_str().to_owned();
self.check_permission(Some(tenantid))?;
let _enter =
info_span!("branch_create", name = %branchname, tenant = %tenantid).entered();
let branch =
branches::create_branch(self.conf, &branchname, &startpoint_str, &tenantid)?;
let branch = serde_json::to_vec(&branch)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(&branch)]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("branch_list ") {
// branch_list <zenith tenantid as hex string>
let re = Regex::new(r"^branch_list ([[:xdigit:]]+)$").unwrap();
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid branch_list: '{}'", query_string))?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
// since these handlers for tenant/branch commands are deprecated (in favor of http based ones)
// just use false in place of include non incremental logical size
let branches = crate::branches::get_branches(self.conf, &tenantid, false)?;
let branches_buf = serde_json::to_vec(&branches)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(&branches_buf)]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("tenant_list") {
let tenants = crate::tenant_mgr::list_tenants()?;
let tenants_buf = serde_json::to_vec(&tenants)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(&tenants_buf)]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("tenant_create") {
let err = || anyhow!("invalid tenant_create: '{}'", query_string);
// tenant_create <tenantid>
let re = Regex::new(r"^tenant_create ([[:xdigit:]]+)$").unwrap();
let caps = re.captures(query_string).ok_or_else(err)?;
self.check_permission(None)?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
tenant_mgr::create_repository_for_tenant(self.conf, tenantid)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("status") {
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&HELLO_WORLD_ROW)?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.to_ascii_lowercase().starts_with("set ") {
// important because psycopg2 executes "SET datestyle TO 'ISO'"
// on connect
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("failpoints ") {
let (_, failpoints) = query_string.split_at("failpoints ".len());
for failpoint in failpoints.split(';') {
if let Some((name, actions)) = failpoint.split_once('=') {
info!("cfg failpoint: {} {}", name, actions);
fail::cfg(name, actions).unwrap();
} else {
bail!("Invalid failpoints format");
}
}
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("do_gc ") {
// Run GC immediately on given timeline.
// FIXME: This is just for tests. See test_runner/batch_others/test_gc.py.
@@ -683,7 +687,7 @@ impl postgres_backend::Handler for PageServerHandler {
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid do_gc: '{}'", query_string))?;
.with_context(|| format!("invalid do_gc: '{}'", query_string))?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let timelineid = ZTimelineId::from_str(caps.get(2).unwrap().as_str())?;
@@ -767,7 +771,7 @@ impl postgres_backend::Handler for PageServerHandler {
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid checkpoint command: '{}'", query_string))?;
.with_context(|| format!("invalid checkpoint command: '{}'", query_string))?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let timelineid = ZTimelineId::from_str(caps.get(2).unwrap().as_str())?;

View File

@@ -5,7 +5,7 @@
//! There are a few components the storage machinery consists of:
//! * [`RemoteStorage`] trait a CRUD-like generic abstraction to use for adapting external storages with a few implementations:
//! * [`local_fs`] allows using the local file system as an external storage
//! * [`rust_s3`] uses AWS S3 bucket entirely as an external storage
//! * [`rust_s3`] uses AWS S3 bucket as an external storage
//!
//! * synchronization logic at [`storage_sync`] module that keeps pageserver state (both runtime one and the workdir files) and storage state in sync.
//! Synchronization internals are split into submodules
@@ -89,13 +89,12 @@ use std::{
collections::HashMap,
ffi, fs,
path::{Path, PathBuf},
thread,
};
use anyhow::{bail, Context};
use tokio::io;
use tracing::{error, info};
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use zenith_utils::zid::{ZTenantId, ZTenantTimelineId, ZTimelineId};
pub use self::storage_sync::{schedule_timeline_checkpoint_upload, schedule_timeline_download};
use self::{local_fs::LocalFs, rust_s3::S3};
@@ -105,16 +104,7 @@ use crate::{
repository::TimelineSyncState,
};
/// Any timeline has its own id and its own tenant it belongs to;
/// the sync processes group timelines by both for simplicity.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Hash)]
pub struct TimelineSyncId(ZTenantId, ZTimelineId);
impl std::fmt::Display for TimelineSyncId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "(tenant: {}, timeline: {})", self.0, self.1)
}
}
pub use storage_sync::compression;
/// A structure combining all synchronization data to share with the pageserver after a successful sync loop initialization.
/// Successful initialization includes the case when the sync loop is not started, in which case the startup data is still returned,
@@ -125,8 +115,6 @@ pub struct SyncStartupData {
/// To reuse the local file scan logic, the timeline states are returned even if no sync loop gets started during init:
/// in this case, no remote files exist and all local timelines with correct metadata files are considered ready.
pub initial_timeline_states: HashMap<ZTenantId, HashMap<ZTimelineId, TimelineSyncState>>,
/// A handle to the sync loop, if it was started from the configuration provided.
pub sync_loop_handle: Option<thread::JoinHandle<anyhow::Result<()>>>,
}
/// Based on the config, initiates the remote storage connection and starts a separate thread
@@ -141,20 +129,27 @@ pub fn start_local_timeline_sync(
match &config.remote_storage_config {
Some(storage_config) => match &storage_config.storage {
RemoteStorageKind::LocalFs(root) => storage_sync::spawn_storage_sync_thread(
config,
local_timeline_files,
LocalFs::new(root.clone(), &config.workdir)?,
storage_config.max_concurrent_sync,
storage_config.max_sync_errors,
),
RemoteStorageKind::AwsS3(s3_config) => storage_sync::spawn_storage_sync_thread(
config,
local_timeline_files,
S3::new(s3_config, &config.workdir)?,
storage_config.max_concurrent_sync,
storage_config.max_sync_errors,
),
RemoteStorageKind::LocalFs(root) => {
info!("Using fs root '{}' as a remote storage", root.display());
storage_sync::spawn_storage_sync_thread(
config,
local_timeline_files,
LocalFs::new(root.clone(), &config.workdir)?,
storage_config.max_concurrent_sync,
storage_config.max_sync_errors,
)
},
RemoteStorageKind::AwsS3(s3_config) => {
info!("Using s3 bucket '{}' in region '{}' as a remote storage, prefix in bucket: '{:?}', bucket endpoint: '{:?}'",
s3_config.bucket_name, s3_config.bucket_region, s3_config.prefix_in_bucket, s3_config.endpoint);
storage_sync::spawn_storage_sync_thread(
config,
local_timeline_files,
S3::new(s3_config, &config.workdir)?,
storage_config.max_concurrent_sync,
storage_config.max_sync_errors,
)
},
}
.context("Failed to spawn the storage sync thread"),
None => {
@@ -163,7 +158,7 @@ pub fn start_local_timeline_sync(
ZTenantId,
HashMap<ZTimelineId, TimelineSyncState>,
> = HashMap::new();
for (TimelineSyncId(tenant_id, timeline_id), (timeline_metadata, _)) in
for (ZTenantTimelineId{tenant_id, timeline_id}, (timeline_metadata, _)) in
local_timeline_files
{
initial_timeline_states
@@ -176,7 +171,6 @@ pub fn start_local_timeline_sync(
}
Ok(SyncStartupData {
initial_timeline_states,
sync_loop_handle: None,
})
}
}
@@ -184,7 +178,7 @@ pub fn start_local_timeline_sync(
fn local_tenant_timeline_files(
config: &'static PageServerConf,
) -> anyhow::Result<HashMap<TimelineSyncId, (TimelineMetadata, Vec<PathBuf>)>> {
) -> anyhow::Result<HashMap<ZTenantTimelineId, (TimelineMetadata, Vec<PathBuf>)>> {
let mut local_tenant_timeline_files = HashMap::new();
let tenants_dir = config.tenants_path();
for tenants_dir_entry in fs::read_dir(&tenants_dir)
@@ -205,7 +199,7 @@ fn local_tenant_timeline_files(
}
}
Err(e) => error!(
"Failed to list tenants dir entry {:?} in directory {}, reason: {:#}",
"Failed to list tenants dir entry {:?} in directory {}, reason: {:?}",
tenants_dir_entry,
tenants_dir.display(),
e
@@ -219,8 +213,9 @@ fn local_tenant_timeline_files(
fn collect_timelines_for_tenant(
config: &'static PageServerConf,
tenant_path: &Path,
) -> anyhow::Result<HashMap<TimelineSyncId, (TimelineMetadata, Vec<PathBuf>)>> {
let mut timelines: HashMap<TimelineSyncId, (TimelineMetadata, Vec<PathBuf>)> = HashMap::new();
) -> anyhow::Result<HashMap<ZTenantTimelineId, (TimelineMetadata, Vec<PathBuf>)>> {
let mut timelines: HashMap<ZTenantTimelineId, (TimelineMetadata, Vec<PathBuf>)> =
HashMap::new();
let tenant_id = tenant_path
.file_name()
.and_then(ffi::OsStr::to_str)
@@ -241,19 +236,22 @@ fn collect_timelines_for_tenant(
match collect_timeline_files(&timeline_path) {
Ok((timeline_id, metadata, timeline_files)) => {
timelines.insert(
TimelineSyncId(tenant_id, timeline_id),
ZTenantTimelineId {
tenant_id,
timeline_id,
},
(metadata, timeline_files),
);
}
Err(e) => error!(
"Failed to process timeline dir contents at '{}', reason: {:#}",
"Failed to process timeline dir contents at '{}', reason: {:?}",
timeline_path.display(),
e
),
}
}
Err(e) => error!(
"Failed to list timelines for entry tenant {}, reason: {:#}",
"Failed to list timelines for entry tenant {}, reason: {:?}",
tenant_id, e
),
}

View File

@@ -70,8 +70,3 @@ on the timeline download, missing remote branch files are downloaded.
A branch is a per-tenant entity, yet the current implementation requires synchronizing a timeline first to get the branch files locally.
Currently, there's no other way to know about the remote branch files; the file contents are neither verified nor updated.
* no IT tests
Automated S3 testing is currently lacking, since there is no convenient way to enable backups during the tests.
After that's fixed, benchmark runs should also be carried out to find bottlenecks.

View File

@@ -73,7 +73,7 @@ impl RemoteStorage for LocalFs {
}
async fn list(&self) -> anyhow::Result<Vec<Self::StoragePath>> {
Ok(get_all_files(&self.root).await?.into_iter().collect())
get_all_files(&self.root).await
}
async fn upload(

View File

@@ -1,12 +1,15 @@
//! AWS S3 storage wrapper around `rust_s3` library.
//! Currently does not allow multiple pageservers to use the same bucket concurrently: objects are
//! placed in the root of the bucket.
//!
//! Respects `prefix_in_bucket` property from [`S3Config`],
//! allowing multiple pageservers to independently work with the same S3 bucket, if
//! their bucket prefixes are both specified and different.
use std::path::{Path, PathBuf};
use anyhow::Context;
use s3::{bucket::Bucket, creds::Credentials, region::Region};
use tokio::io::{self, AsyncWriteExt};
use tracing::debug;
use crate::{
config::S3Config,
@@ -23,8 +26,26 @@ impl S3ObjectKey {
&self.0
}
fn download_destination(&self, pageserver_workdir: &Path) -> PathBuf {
pageserver_workdir.join(self.0.split(S3_FILE_SEPARATOR).collect::<PathBuf>())
fn download_destination(
&self,
pageserver_workdir: &Path,
prefix_to_strip: Option<&str>,
) -> PathBuf {
let path_without_prefix = match prefix_to_strip {
Some(prefix) => self.0.strip_prefix(prefix).unwrap_or_else(|| {
panic!(
"Could not strip prefix '{}' from S3 object key '{}'",
prefix, self.0
)
}),
None => &self.0,
};
pageserver_workdir.join(
path_without_prefix
.split(S3_FILE_SEPARATOR)
.collect::<PathBuf>(),
)
}
}
@@ -32,15 +53,27 @@ impl S3ObjectKey {
pub struct S3 {
pageserver_workdir: &'static Path,
bucket: Bucket,
prefix_in_bucket: Option<String>,
}
impl S3 {
/// Creates the storage, errors if incorrect AWS S3 configuration provided.
pub fn new(aws_config: &S3Config, pageserver_workdir: &'static Path) -> anyhow::Result<Self> {
let region = aws_config
.bucket_region
.parse::<Region>()
.context("Failed to parse the s3 region from config")?;
debug!(
"Creating s3 remote storage around bucket {}",
aws_config.bucket_name
);
let region = match aws_config.endpoint.clone() {
Some(endpoint) => Region::Custom {
endpoint,
region: aws_config.bucket_region.clone(),
},
None => aws_config
.bucket_region
.parse::<Region>()
.context("Failed to parse the s3 region from config")?,
};
let credentials = Credentials::new(
aws_config.access_key_id.as_deref(),
aws_config.secret_access_key.as_deref(),
@@ -49,6 +82,20 @@ impl S3 {
None,
)
.context("Failed to create the s3 credentials")?;
let prefix_in_bucket = aws_config.prefix_in_bucket.as_deref().map(|prefix| {
let mut prefix = prefix;
while prefix.starts_with(S3_FILE_SEPARATOR) {
prefix = &prefix[1..]
}
let mut prefix = prefix.to_string();
while prefix.ends_with(S3_FILE_SEPARATOR) {
prefix.pop();
}
prefix
});
Ok(Self {
bucket: Bucket::new_with_path_style(
aws_config.bucket_name.as_str(),
@@ -57,6 +104,7 @@ impl S3 {
)
.context("Failed to create the s3 bucket")?,
pageserver_workdir,
prefix_in_bucket,
})
}
}
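The prefix normalization in S3::new trims separators from both ends of the configured prefix; an equivalent trim-based restatement (hypothetical helper, not part of the patch):

const S3_FILE_SEPARATOR: char = '/';

// Leading and trailing separators are stripped so that keys always join the
// prefix with exactly one separator.
fn normalize_prefix(prefix: &str) -> String {
    prefix
        .trim_start_matches(S3_FILE_SEPARATOR)
        .trim_end_matches(S3_FILE_SEPARATOR)
        .to_string()
}

fn main() {
    assert_eq!(normalize_prefix("/some/prefix/"), "some/prefix");
    assert_eq!(normalize_prefix("dummy_prefix/"), "dummy_prefix");
}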
@@ -67,7 +115,7 @@ impl RemoteStorage for S3 {
fn storage_path(&self, local_path: &Path) -> anyhow::Result<Self::StoragePath> {
let relative_path = strip_path_prefix(self.pageserver_workdir, local_path)?;
let mut key = String::new();
let mut key = self.prefix_in_bucket.clone().unwrap_or_default();
for segment in relative_path {
key.push(S3_FILE_SEPARATOR);
key.push_str(&segment.to_string_lossy());
@@ -76,13 +124,14 @@ impl RemoteStorage for S3 {
}
fn local_path(&self, storage_path: &Self::StoragePath) -> anyhow::Result<PathBuf> {
Ok(storage_path.download_destination(self.pageserver_workdir))
Ok(storage_path
.download_destination(self.pageserver_workdir, self.prefix_in_bucket.as_deref()))
}
async fn list(&self) -> anyhow::Result<Vec<Self::StoragePath>> {
let list_response = self
.bucket
.list(String::new(), None)
.list(self.prefix_in_bucket.clone().unwrap_or_default(), None)
.await
.context("Failed to list s3 objects")?;
@@ -225,7 +274,7 @@ mod tests {
assert_eq!(
local_path,
key.download_destination(&repo_harness.conf.workdir),
key.download_destination(&repo_harness.conf.workdir, None),
"Download destination should consist of s3 path joined with the pageserver workdir prefix"
);
@@ -239,14 +288,18 @@ mod tests {
let segment_1 = "matching";
let segment_2 = "file";
let local_path = &repo_harness.conf.workdir.join(segment_1).join(segment_2);
let storage = dummy_storage(&repo_harness.conf.workdir);
let expected_key = S3ObjectKey(format!(
"{SEPARATOR}{}{SEPARATOR}{}",
"{}{SEPARATOR}{}{SEPARATOR}{}",
storage.prefix_in_bucket.as_deref().unwrap_or_default(),
segment_1,
segment_2,
SEPARATOR = S3_FILE_SEPARATOR,
));
let actual_key = dummy_storage(&repo_harness.conf.workdir)
let actual_key = storage
.storage_path(local_path)
.expect("Matching path should map to S3 path normally");
assert_eq!(
@@ -308,18 +361,30 @@ mod tests {
let timeline_dir = repo_harness.timeline_path(&TIMELINE_ID);
let relative_timeline_path = timeline_dir.strip_prefix(&repo_harness.conf.workdir)?;
let s3_key = create_s3_key(&relative_timeline_path.join("not a metadata"));
let s3_key = create_s3_key(
&relative_timeline_path.join("not a metadata"),
storage.prefix_in_bucket.as_deref(),
);
assert_eq!(
s3_key.download_destination(&repo_harness.conf.workdir),
s3_key.download_destination(
&repo_harness.conf.workdir,
storage.prefix_in_bucket.as_deref()
),
storage
.local_path(&s3_key)
.expect("For a valid input, valid S3 info should be parsed"),
"Should be able to parse metadata out of the correctly named remote delta file"
);
let s3_key = create_s3_key(&relative_timeline_path.join(METADATA_FILE_NAME));
let s3_key = create_s3_key(
&relative_timeline_path.join(METADATA_FILE_NAME),
storage.prefix_in_bucket.as_deref(),
);
assert_eq!(
s3_key.download_destination(&repo_harness.conf.workdir),
s3_key.download_destination(
&repo_harness.conf.workdir,
storage.prefix_in_bucket.as_deref()
),
storage
.local_path(&s3_key)
.expect("For a valid input, valid S3 info should be parsed"),
@@ -356,18 +421,18 @@ mod tests {
Credentials::anonymous().unwrap(),
)
.unwrap(),
prefix_in_bucket: Some("dummy_prefix/".to_string()),
}
}
fn create_s3_key(relative_file_path: &Path) -> S3ObjectKey {
S3ObjectKey(
relative_file_path
.iter()
.fold(String::new(), |mut path_string, segment| {
path_string.push(S3_FILE_SEPARATOR);
path_string.push_str(segment.to_str().unwrap());
path_string
}),
)
fn create_s3_key(relative_file_path: &Path, prefix: Option<&str>) -> S3ObjectKey {
S3ObjectKey(relative_file_path.iter().fold(
prefix.unwrap_or_default().to_string(),
|mut path_string, segment| {
path_string.push(S3_FILE_SEPARATOR);
path_string.push_str(segment.to_str().unwrap());
path_string
},
))
}
}

View File

@@ -70,7 +70,8 @@
//!
//! When the pageserver signals shutdown, the current sync task is finished and the loop exits.
mod compression;
/// Expose the module for a binary CLI tool that deals with the corresponding blobs.
pub mod compression;
mod download;
pub mod index;
mod upload;
@@ -80,16 +81,19 @@ use std::{
num::{NonZeroU32, NonZeroUsize},
path::{Path, PathBuf},
sync::Arc,
thread,
};
use anyhow::{bail, Context};
use futures::stream::{FuturesUnordered, StreamExt};
use lazy_static::lazy_static;
use tokio::{fs, sync::RwLock};
use tokio::{
sync::mpsc::{self, UnboundedReceiver},
time::Instant,
fs,
runtime::Runtime,
sync::{
mpsc::{self, UnboundedReceiver},
RwLock,
},
time::{Duration, Instant},
};
use tracing::*;
@@ -102,11 +106,11 @@ use self::{
},
upload::upload_timeline_checkpoint,
};
use super::{RemoteStorage, SyncStartupData, TimelineSyncId};
use super::{RemoteStorage, SyncStartupData, ZTenantTimelineId};
use crate::{
config::PageServerConf, layered_repository::metadata::TimelineMetadata,
remote_storage::storage_sync::compression::read_archive_header, repository::TimelineSyncState,
tenant_mgr::set_timeline_states,
tenant_mgr::set_timeline_states, thread_mgr, thread_mgr::ThreadKind,
};
use zenith_metrics::{register_histogram_vec, register_int_gauge, HistogramVec, IntGauge};
@@ -239,13 +243,13 @@ mod sync_queue {
/// Limited by the number of retries; after a certain threshold the failing task gets evicted and the timeline disabled.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct SyncTask {
sync_id: TimelineSyncId,
sync_id: ZTenantTimelineId,
retries: u32,
kind: SyncKind,
}
impl SyncTask {
fn new(sync_id: TimelineSyncId, retries: u32, kind: SyncKind) -> Self {
fn new(sync_id: ZTenantTimelineId, retries: u32, kind: SyncKind) -> Self {
Self {
sync_id,
retries,
@@ -304,7 +308,10 @@ pub fn schedule_timeline_checkpoint_upload(
}
if !sync_queue::push(SyncTask::new(
TimelineSyncId(tenant_id, timeline_id),
ZTenantTimelineId {
tenant_id,
timeline_id,
},
0,
SyncKind::Upload(NewCheckpoint { layers, metadata }),
)) {
@@ -330,8 +337,15 @@ pub fn schedule_timeline_checkpoint_upload(
///
/// Ensure that the loop is started; otherwise the task is never processed.
pub fn schedule_timeline_download(tenant_id: ZTenantId, timeline_id: ZTimelineId) {
debug!(
"Scheduling timeline download for tenant {}, timeline {}",
tenant_id, timeline_id
);
sync_queue::push(SyncTask::new(
TimelineSyncId(tenant_id, timeline_id),
ZTenantTimelineId {
tenant_id,
timeline_id,
},
0,
SyncKind::Download(TimelineDownload {
files_to_skip: Arc::new(BTreeSet::new()),
@@ -347,7 +361,7 @@ pub(super) fn spawn_storage_sync_thread<
S: RemoteStorage<StoragePath = P> + Send + Sync + 'static,
>(
conf: &'static PageServerConf,
local_timeline_files: HashMap<TimelineSyncId, (TimelineMetadata, Vec<PathBuf>)>,
local_timeline_files: HashMap<ZTenantTimelineId, (TimelineMetadata, Vec<PathBuf>)>,
storage: S,
max_concurrent_sync: NonZeroUsize,
max_sync_errors: NonZeroU32,
@@ -369,7 +383,7 @@ pub(super) fn spawn_storage_sync_thread<
Ok(local_path) => Some(local_path),
Err(e) => {
error!(
"Failed to find local path for remote path {:?}: {:#}",
"Failed to find local path for remote path {:?}: {:?}",
remote_path, e
);
None
@@ -379,9 +393,12 @@ pub(super) fn spawn_storage_sync_thread<
let initial_timeline_states = schedule_first_sync_tasks(&remote_index, local_timeline_files);
let handle = thread::Builder::new()
.name("Remote storage sync thread".to_string())
.spawn(move || {
thread_mgr::spawn(
ThreadKind::StorageSync,
None,
None,
"Remote storage sync thread",
move || {
storage_sync_loop(
runtime,
conf,
@@ -391,19 +408,25 @@ pub(super) fn spawn_storage_sync_thread<
max_concurrent_sync,
max_sync_errors,
)
})
.context("Failed to spawn remote storage sync thread")?;
},
)
.context("Failed to spawn remote storage sync thread")?;
Ok(SyncStartupData {
initial_timeline_states,
sync_loop_handle: Some(handle),
})
}
enum LoopStep {
NewStates(HashMap<ZTenantId, HashMap<ZTimelineId, TimelineSyncState>>),
Shutdown,
}
#[allow(clippy::too_many_arguments)]
fn storage_sync_loop<
P: std::fmt::Debug + Send + Sync + 'static,
S: RemoteStorage<StoragePath = P> + Send + Sync + 'static,
>(
runtime: tokio::runtime::Runtime,
runtime: Runtime,
conf: &'static PageServerConf,
mut receiver: UnboundedReceiver<SyncTask>,
index: RemoteTimelineIndex,
@@ -412,23 +435,34 @@ fn storage_sync_loop<
max_sync_errors: NonZeroU32,
) -> anyhow::Result<()> {
let remote_assets = Arc::new((storage, RwLock::new(index)));
while !crate::tenant_mgr::shutdown_requested() {
let new_timeline_states = runtime.block_on(
loop_step(
conf,
&mut receiver,
Arc::clone(&remote_assets),
max_concurrent_sync,
max_sync_errors,
)
.instrument(debug_span!("storage_sync_loop_step")),
);
// Batch timeline download registration to ensure that the external registration code won't block any already-running tasks.
set_timeline_states(conf, new_timeline_states);
debug!("Sync loop step completed");
loop {
let loop_step = runtime.block_on(async {
tokio::select! {
new_timeline_states = loop_step(
conf,
&mut receiver,
Arc::clone(&remote_assets),
max_concurrent_sync,
max_sync_errors,
)
.instrument(debug_span!("storage_sync_loop_step")) => LoopStep::NewStates(new_timeline_states),
_ = thread_mgr::shutdown_watcher() => LoopStep::Shutdown,
}
});
match loop_step {
LoopStep::NewStates(new_timeline_states) => {
// Batch timeline download registration to ensure that the external registration code won't block any already-running tasks.
set_timeline_states(conf, new_timeline_states);
debug!("Sync loop step completed");
}
LoopStep::Shutdown => {
debug!("Shutdown requested, stopping");
break;
}
}
}
debug!("Shutdown requested, stopping");
Ok(())
}
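Because the sync thread itself is synchronous, each iteration enters the runtime once and maps the select outcome to a plain enum that can be matched outside async code; a self-contained sketch with placeholder futures (the real loop_step() and shutdown_watcher() live in storage_sync and thread_mgr):

enum LoopStep {
    NewStates(u32),
    Shutdown,
}

async fn do_step() -> u32 {
    0 // placeholder for loop_step()
}

async fn shutdown_watcher() {
    std::future::pending::<()>().await // never resolves in this sketch
}

fn run_sync_loop(runtime: tokio::runtime::Runtime) {
    loop {
        let step = runtime.block_on(async {
            tokio::select! {
                new_states = do_step() => LoopStep::NewStates(new_states),
                _ = shutdown_watcher() => LoopStep::Shutdown,
            }
        });
        match step {
            LoopStep::NewStates(_states) => { /* apply states and continue */ }
            LoopStep::Shutdown => break,
        }
    }
}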
@@ -482,8 +516,8 @@ async fn loop_step<
Ok(extra_step) => extra_step,
Err(e) => {
error!(
"Failed to process storage sync task for tenant {}, timeline {}: {:#}",
sync_id.0, sync_id.1, e
"Failed to process storage sync task for tenant {}, timeline {}: {:?}",
sync_id.tenant_id, sync_id.timeline_id, e
);
None
}
@@ -497,7 +531,10 @@ async fn loop_step<
while let Some((sync_id, state_update)) = task_batch.next().await {
debug!("Finished storage sync task for sync id {}", sync_id);
if let Some(state_update) = state_update {
let TimelineSyncId(tenant_id, timeline_id) = sync_id;
let ZTenantTimelineId {
tenant_id,
timeline_id,
} = sync_id;
new_timeline_states
.entry(tenant_id)
.or_default()
@@ -539,7 +576,7 @@ async fn process_task<
"Waiting {} seconds before starting the task",
seconds_to_wait
);
tokio::time::sleep(tokio::time::Duration::from_secs_f64(seconds_to_wait)).await;
tokio::time::sleep(Duration::from_secs_f64(seconds_to_wait)).await;
}
let sync_start = Instant::now();
@@ -591,7 +628,7 @@ async fn process_task<
fn schedule_first_sync_tasks(
index: &RemoteTimelineIndex,
local_timeline_files: HashMap<TimelineSyncId, (TimelineMetadata, Vec<PathBuf>)>,
local_timeline_files: HashMap<ZTenantTimelineId, (TimelineMetadata, Vec<PathBuf>)>,
) -> HashMap<ZTenantId, HashMap<ZTimelineId, TimelineSyncState>> {
let mut initial_timeline_statuses: HashMap<ZTenantId, HashMap<ZTimelineId, TimelineSyncState>> =
HashMap::new();
@@ -602,7 +639,10 @@ fn schedule_first_sync_tasks(
for (sync_id, (local_metadata, local_files)) in local_timeline_files {
let local_disk_consistent_lsn = local_metadata.disk_consistent_lsn();
let TimelineSyncId(tenant_id, timeline_id) = sync_id;
let ZTenantTimelineId {
tenant_id,
timeline_id,
} = sync_id;
match index.timeline_entry(&sync_id) {
Some(index_entry) => {
let timeline_status = compare_local_and_remote_timeline(
@@ -645,10 +685,10 @@ fn schedule_first_sync_tasks(
}
}
let unprocessed_remote_ids = |remote_id: &TimelineSyncId| {
let unprocessed_remote_ids = |remote_id: &ZTenantTimelineId| {
initial_timeline_statuses
.get(&remote_id.0)
.and_then(|timelines| timelines.get(&remote_id.1))
.get(&remote_id.tenant_id)
.and_then(|timelines| timelines.get(&remote_id.timeline_id))
.is_none()
};
for unprocessed_remote_id in index
@@ -656,7 +696,10 @@ fn schedule_first_sync_tasks(
.filter(unprocessed_remote_ids)
.collect::<Vec<_>>()
{
let TimelineSyncId(cloud_only_tenant_id, cloud_only_timeline_id) = unprocessed_remote_id;
let ZTenantTimelineId {
tenant_id: cloud_only_tenant_id,
timeline_id: cloud_only_timeline_id,
} = unprocessed_remote_id;
match index
.timeline_entry(&unprocessed_remote_id)
.and_then(TimelineIndexEntry::disk_consistent_lsn)
@@ -685,7 +728,7 @@ fn schedule_first_sync_tasks(
fn compare_local_and_remote_timeline(
new_sync_tasks: &mut VecDeque<SyncTask>,
sync_id: TimelineSyncId,
sync_id: ZTenantTimelineId,
local_metadata: TimelineMetadata,
local_files: Vec<PathBuf>,
remote_entry: &TimelineIndexEntry,
@@ -742,7 +785,7 @@ async fn update_index_description<
>(
(storage, index): &(S, RwLock<RemoteTimelineIndex>),
timeline_dir: &Path,
id: TimelineSyncId,
id: ZTenantTimelineId,
) -> anyhow::Result<RemoteTimeline> {
let mut index_write = index.write().await;
let full_index = match index_write.timeline_entry(&id) {
@@ -765,7 +808,7 @@ async fn update_index_description<
Ok((archive_id, header_size, header)) => full_index.update_archive_contents(archive_id.0, header, header_size),
Err((e, archive_id)) => bail!(
"Failed to download archive header for tenant {}, timeline {}, archive for Lsn {}: {}",
id.0, id.1, archive_id.0,
id.tenant_id, id.timeline_id, archive_id.0,
e
),
}
@@ -843,7 +886,7 @@ mod test_utils {
timeline_id: ZTimelineId,
new_upload: NewCheckpoint,
) {
let sync_id = TimelineSyncId(harness.tenant_id, timeline_id);
let sync_id = ZTenantTimelineId::new(harness.tenant_id, timeline_id);
upload_timeline_checkpoint(
harness.conf,
Arc::clone(&remote_assets),
@@ -899,7 +942,7 @@ mod test_utils {
pub async fn expect_timeline(
index: &RwLock<RemoteTimelineIndex>,
sync_id: TimelineSyncId,
sync_id: ZTenantTimelineId,
) -> RemoteTimeline {
if let Some(TimelineIndexEntry::Full(remote_timeline)) =
index.read().await.timeline_entry(&sync_id)
@@ -934,18 +977,18 @@ mod test_utils {
let mut expected_timeline_entries = BTreeMap::new();
for sync_id in actual_sync_ids {
actual_branches.insert(
sync_id.1,
sync_id.tenant_id,
index_read
.branch_files(sync_id.0)
.branch_files(sync_id.tenant_id)
.into_iter()
.flat_map(|branch_paths| branch_paths.iter())
.cloned()
.collect::<BTreeSet<_>>(),
);
expected_branches.insert(
sync_id.1,
sync_id.tenant_id,
expected_index_with_descriptions
.branch_files(sync_id.0)
.branch_files(sync_id.tenant_id)
.into_iter()
.flat_map(|branch_paths| branch_paths.iter())
.cloned()

View File

@@ -34,7 +34,7 @@ use std::{
sync::Arc,
};
use anyhow::{anyhow, bail, ensure, Context};
use anyhow::{bail, ensure, Context};
use async_compression::tokio::bufread::{ZstdDecoder, ZstdEncoder};
use serde::{Deserialize, Serialize};
use tokio::{
@@ -211,16 +211,18 @@ pub async fn read_archive_header<A: io::AsyncRead + Send + Sync + Unpin>(
pub fn parse_archive_name(archive_path: &Path) -> anyhow::Result<(Lsn, u64)> {
let archive_name = archive_path
.file_name()
.ok_or_else(|| anyhow!("Archive '{}' has no file name", archive_path.display()))?
.with_context(|| format!("Archive '{}' has no file name", archive_path.display()))?
.to_string_lossy();
let (lsn_str, header_size_str) =
archive_name.rsplit_once(ARCHIVE_EXTENSION).ok_or_else(|| {
anyhow!(
"Archive '{}' has incorrect extension, expected to contain '{}'",
archive_path.display(),
ARCHIVE_EXTENSION
)
})?;
archive_name
.rsplit_once(ARCHIVE_EXTENSION)
.with_context(|| {
format!(
"Archive '{}' has incorrect extension, expected to contain '{}'",
archive_path.display(),
ARCHIVE_EXTENSION
)
})?;
let disk_consistent_lsn = Lsn::from_hex(lsn_str).with_context(|| {
format!(
"Archive '{}' has an invalid disk consistent lsn in its extension",
@@ -246,7 +248,7 @@ fn archive_name(disk_consistent_lsn: Lsn, header_size: u64) -> String {
archive_name
}
async fn uncompress_with_header(
pub async fn uncompress_with_header(
files_to_skip: &BTreeSet<PathBuf>,
destination_dir: &Path,
header: ArchiveHeader,
@@ -374,7 +376,7 @@ async fn write_archive_contents(
}
let metadata_bytes_written = io::copy(&mut metadata_bytes.as_slice(), &mut archive_input)
.await
.with_context(|| "Failed to add metadata into the archive")?;
.context("Failed to add metadata into the archive")?;
ensure!(
header.metadata_file_size == metadata_bytes_written,
"Metadata file was written to the archive incompletely",

View File

@@ -3,7 +3,7 @@
use std::{borrow::Cow, collections::BTreeSet, path::PathBuf, sync::Arc};
use anyhow::{anyhow, ensure, Context};
use anyhow::{ensure, Context};
use futures::{stream::FuturesUnordered, StreamExt};
use tokio::{fs, sync::RwLock};
use tracing::{debug, error, trace, warn};
@@ -17,7 +17,7 @@ use crate::{
compression, index::TimelineIndexEntry, sync_queue, tenant_branch_files,
update_index_description, SyncKind, SyncTask,
},
RemoteStorage, TimelineSyncId,
RemoteStorage, ZTenantTimelineId,
},
};
@@ -52,13 +52,16 @@ pub(super) async fn download_timeline<
>(
conf: &'static PageServerConf,
remote_assets: Arc<(S, RwLock<RemoteTimelineIndex>)>,
sync_id: TimelineSyncId,
sync_id: ZTenantTimelineId,
mut download: TimelineDownload,
retries: u32,
) -> DownloadedTimeline {
debug!("Downloading layers for sync id {}", sync_id);
let TimelineSyncId(tenant_id, timeline_id) = sync_id;
let ZTenantTimelineId {
tenant_id,
timeline_id,
} = sync_id;
let index_read = remote_assets.1.read().await;
let remote_timeline = match index_read.timeline_entry(&sync_id) {
None => {
@@ -80,7 +83,7 @@ pub(super) async fn download_timeline<
{
Ok(remote_timeline) => Cow::Owned(remote_timeline),
Err(e) => {
error!("Failed to download full timeline index: {:#}", e);
error!("Failed to download full timeline index: {:?}", e);
return match remote_disk_consistent_lsn {
Some(disk_consistent_lsn) => {
sync_queue::push(SyncTask::new(
@@ -110,9 +113,10 @@ pub(super) async fn download_timeline<
}
};
if let Err(e) = download_missing_branches(conf, remote_assets.as_ref(), sync_id.0).await {
if let Err(e) = download_missing_branches(conf, remote_assets.as_ref(), sync_id.tenant_id).await
{
error!(
"Failed to download missing branches for sync id {}: {:#}",
"Failed to download missing branches for sync id {}: {:?}",
sync_id, e
);
sync_queue::push(SyncTask::new(
@@ -150,7 +154,7 @@ pub(super) async fn download_timeline<
Err(e) => {
let archives_left = archives_total - archives_downloaded;
error!(
"Failed to download archive {:?} (archives downloaded: {}; archives left: {}) for tenant {} timeline {}, requeueing the download: {:#}",
"Failed to download archive {:?} (archives downloaded: {}; archives left: {}) for tenant {} timeline {}, requeueing the download: {:?}",
archive_id, archives_downloaded, archives_left, tenant_id, timeline_id, e
);
sync_queue::push(SyncTask::new(
@@ -180,7 +184,10 @@ async fn try_download_archive<
S: RemoteStorage<StoragePath = P> + Send + Sync + 'static,
>(
conf: &'static PageServerConf,
TimelineSyncId(tenant_id, timeline_id): TimelineSyncId,
ZTenantTimelineId {
tenant_id,
timeline_id,
}: ZTenantTimelineId,
remote_assets: Arc<(S, RwLock<RemoteTimelineIndex>)>,
remote_timeline: &RemoteTimeline,
archive_id: ArchiveId,
@@ -189,7 +196,7 @@ async fn try_download_archive<
debug!("Downloading archive {:?}", archive_id);
let archive_to_download = remote_timeline
.archive_data(archive_id)
.ok_or_else(|| anyhow!("Archive {:?} not found in remote storage", archive_id))?;
.with_context(|| format!("Archive {:?} not found in remote storage", archive_id))?;
let (archive_header, header_size) = remote_timeline
.restore_header(archive_id)
.context("Failed to restore header when downloading an archive")?;
@@ -202,7 +209,7 @@ async fn try_download_archive<
archive_to_download.disk_consistent_lsn(),
local_metadata.disk_consistent_lsn()
),
Err(e) => warn!("Failed to read local metadata file, assuing it's safe to override its with the download. Read: {:#}", e),
Err(e) => warn!("Failed to read local metadata file, assuming it's safe to overwrite it with the download. Read: {:#}", e),
}
compression::uncompress_file_stream_with_index(
conf.timeline_path(&timeline_id, &tenant_id),
@@ -307,7 +314,7 @@ async fn download_missing_branches<
while let Some(download_result) = remote_only_branches_downloads.next().await {
if let Err(e) = download_result {
branch_downloads_failed = true;
error!("Failed to download a branch file: {:#}", e);
error!("Failed to download a branch file: {:?}", e);
}
}
ensure!(
@@ -343,7 +350,7 @@ mod tests {
#[tokio::test]
async fn test_download_timeline() -> anyhow::Result<()> {
let repo_harness = RepoHarness::create("test_download_timeline")?;
let sync_id = TimelineSyncId(repo_harness.tenant_id, TIMELINE_ID);
let sync_id = ZTenantTimelineId::new(repo_harness.tenant_id, TIMELINE_ID);
let storage = LocalFs::new(tempdir()?.path().to_owned(), &repo_harness.conf.workdir)?;
let index = RwLock::new(RemoteTimelineIndex::try_parse_descriptions_from_paths(
repo_harness.conf,

View File

@@ -9,7 +9,7 @@ use std::{
path::{Path, PathBuf},
};
use anyhow::{anyhow, bail, ensure, Context};
use anyhow::{bail, ensure, Context};
use serde::{Deserialize, Serialize};
use tracing::debug;
use zenith_utils::{
@@ -22,7 +22,7 @@ use crate::{
layered_repository::TIMELINES_SEGMENT_NAME,
remote_storage::{
storage_sync::compression::{parse_archive_name, FileEntry},
TimelineSyncId,
ZTenantTimelineId,
},
};
@@ -53,7 +53,7 @@ impl RelativePath {
#[derive(Debug, Clone)]
pub struct RemoteTimelineIndex {
branch_files: HashMap<ZTenantId, HashSet<RelativePath>>,
timeline_files: HashMap<TimelineSyncId, TimelineIndexEntry>,
timeline_files: HashMap<ZTenantTimelineId, TimelineIndexEntry>,
}
impl RemoteTimelineIndex {
@@ -80,19 +80,22 @@ impl RemoteTimelineIndex {
index
}
pub fn timeline_entry(&self, id: &TimelineSyncId) -> Option<&TimelineIndexEntry> {
pub fn timeline_entry(&self, id: &ZTenantTimelineId) -> Option<&TimelineIndexEntry> {
self.timeline_files.get(id)
}
pub fn timeline_entry_mut(&mut self, id: &TimelineSyncId) -> Option<&mut TimelineIndexEntry> {
pub fn timeline_entry_mut(
&mut self,
id: &ZTenantTimelineId,
) -> Option<&mut TimelineIndexEntry> {
self.timeline_files.get_mut(id)
}
pub fn add_timeline_entry(&mut self, id: TimelineSyncId, entry: TimelineIndexEntry) {
pub fn add_timeline_entry(&mut self, id: ZTenantTimelineId, entry: TimelineIndexEntry) {
self.timeline_files.insert(id, entry);
}
pub fn all_sync_ids(&self) -> impl Iterator<Item = TimelineSyncId> + '_ {
pub fn all_sync_ids(&self) -> impl Iterator<Item = ZTenantTimelineId> + '_ {
self.timeline_files.keys().copied()
}
@@ -214,7 +217,7 @@ impl RemoteTimeline {
let archive = self
.checkpoint_archives
.get(&archive_id)
.ok_or_else(|| anyhow!("Archive {:?} not found", archive_id))?;
.with_context(|| format!("Archive {:?} not found", archive_id))?;
let mut header_files = Vec::with_capacity(archive.files.len());
for (expected_archive_position, archive_file) in archive.files.iter().enumerate() {
@@ -226,11 +229,10 @@ impl RemoteTimeline {
archive_id,
);
let timeline_file = self.timeline_files.get(archive_file).ok_or_else(|| {
anyhow!(
let timeline_file = self.timeline_files.get(archive_file).with_context(|| {
format!(
"File with id {:?} not found for archive {:?}",
archive_file,
archive_id
archive_file, archive_id
)
})?;
header_files.push(timeline_file.clone());
@@ -299,7 +301,7 @@ fn try_parse_index_entry(
})?
.iter()
.next()
.ok_or_else(|| anyhow!("Found no tenant id in path '{}'", path.display()))?
.with_context(|| format!("Found no tenant id in path '{}'", path.display()))?
.to_string_lossy()
.parse::<ZTenantId>()
.with_context(|| format!("Failed to parse tenant id from path '{}'", path.display()))?;
@@ -321,8 +323,8 @@ fn try_parse_index_entry(
let mut segments = timelines_subpath.iter();
let timeline_id = segments
.next()
.ok_or_else(|| {
anyhow!(
.with_context(|| {
format!(
"{} directory of tenant {} (path '{}') is not an index entry",
TIMELINES_SEGMENT_NAME,
tenant_id,
@@ -345,11 +347,14 @@ fn try_parse_index_entry(
let archive_name = path
.file_name()
.ok_or_else(|| anyhow!("Archive '{}' has no file name", path.display()))?
.with_context(|| format!("Archive '{}' has no file name", path.display()))?
.to_string_lossy()
.to_string();
let sync_id = TimelineSyncId(tenant_id, timeline_id);
let sync_id = ZTenantTimelineId {
tenant_id,
timeline_id,
};
let timeline_index_entry = index
.timeline_files
.entry(sync_id)


@@ -17,7 +17,7 @@ use crate::{
index::{RemoteTimeline, TimelineIndexEntry},
sync_queue, tenant_branch_files, update_index_description, SyncKind, SyncTask,
},
RemoteStorage, TimelineSyncId,
RemoteStorage, ZTenantTimelineId,
},
};
@@ -36,14 +36,15 @@ pub(super) async fn upload_timeline_checkpoint<
>(
config: &'static PageServerConf,
remote_assets: Arc<(S, RwLock<RemoteTimelineIndex>)>,
sync_id: TimelineSyncId,
sync_id: ZTenantTimelineId,
new_checkpoint: NewCheckpoint,
retries: u32,
) -> Option<bool> {
debug!("Uploading checkpoint for sync id {}", sync_id);
if let Err(e) = upload_missing_branches(config, remote_assets.as_ref(), sync_id.0).await {
if let Err(e) = upload_missing_branches(config, remote_assets.as_ref(), sync_id.tenant_id).await
{
error!(
"Failed to upload missing branches for sync id {}: {:#}",
"Failed to upload missing branches for sync id {}: {:?}",
sync_id, e
);
sync_queue::push(SyncTask::new(
@@ -57,7 +58,10 @@ pub(super) async fn upload_timeline_checkpoint<
let index = &remote_assets.1;
let TimelineSyncId(tenant_id, timeline_id) = sync_id;
let ZTenantTimelineId {
tenant_id,
timeline_id,
} = sync_id;
let timeline_dir = config.timeline_path(&timeline_id, &tenant_id);
let index_read = index.read().await;
@@ -69,7 +73,7 @@ pub(super) async fn upload_timeline_checkpoint<
match update_index_description(remote_assets.as_ref(), &timeline_dir, sync_id).await {
Ok(remote_timeline) => Some(Cow::Owned(remote_timeline)),
Err(e) => {
error!("Failed to download full timeline index: {:#}", e);
error!("Failed to download full timeline index: {:?}", e);
sync_queue::push(SyncTask::new(
sync_id,
retries,
@@ -132,7 +136,7 @@ pub(super) async fn upload_timeline_checkpoint<
}
Err(e) => {
error!(
"Failed to upload checkpoint: {:#}, requeueing the upload",
"Failed to upload checkpoint: {:?}, requeueing the upload",
e
);
sync_queue::push(SyncTask::new(
@@ -151,11 +155,14 @@ async fn try_upload_checkpoint<
>(
config: &'static PageServerConf,
remote_assets: Arc<(S, RwLock<RemoteTimelineIndex>)>,
sync_id: TimelineSyncId,
sync_id: ZTenantTimelineId,
new_checkpoint: &NewCheckpoint,
files_to_skip: BTreeSet<PathBuf>,
) -> anyhow::Result<(ArchiveHeader, u64)> {
let TimelineSyncId(tenant_id, timeline_id) = sync_id;
let ZTenantTimelineId {
tenant_id,
timeline_id,
} = sync_id;
let timeline_dir = config.timeline_path(&timeline_id, &tenant_id);
let files_to_upload = new_checkpoint
@@ -253,7 +260,7 @@ async fn upload_missing_branches<
.await
.add_branch_file(tenant_id, local_only_branch.clone()),
Err(e) => {
error!("Failed to upload branch file: {:#}", e);
error!("Failed to upload branch file: {:?}", e);
branch_uploads_failed = true;
}
}
@@ -288,7 +295,7 @@ mod tests {
#[tokio::test]
async fn reupload_timeline() -> anyhow::Result<()> {
let repo_harness = RepoHarness::create("reupload_timeline")?;
let sync_id = TimelineSyncId(repo_harness.tenant_id, TIMELINE_ID);
let sync_id = ZTenantTimelineId::new(repo_harness.tenant_id, TIMELINE_ID);
let storage = LocalFs::new(tempdir()?.path().to_owned(), &repo_harness.conf.workdir)?;
let index = RwLock::new(RemoteTimelineIndex::try_parse_descriptions_from_paths(
repo_harness.conf,
@@ -484,7 +491,7 @@ mod tests {
#[tokio::test]
async fn reupload_timeline_rejected() -> anyhow::Result<()> {
let repo_harness = RepoHarness::create("reupload_timeline_rejected")?;
let sync_id = TimelineSyncId(repo_harness.tenant_id, TIMELINE_ID);
let sync_id = ZTenantTimelineId::new(repo_harness.tenant_id, TIMELINE_ID);
let storage = LocalFs::new(tempdir()?.path().to_owned(), &repo_harness.conf.workdir)?;
let index = RwLock::new(RemoteTimelineIndex::try_parse_descriptions_from_paths(
repo_harness.conf,


@@ -7,7 +7,7 @@ use postgres_ffi::{MultiXactId, MultiXactOffset, TransactionId};
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::ops::{AddAssign, Deref};
use std::sync::Arc;
use std::sync::{Arc, RwLockReadGuard};
use std::time::Duration;
use zenith_utils::lsn::{Lsn, RecordLsn};
use zenith_utils::zid::ZTimelineId;
@@ -19,7 +19,7 @@ pub type BlockNumber = u32;
/// A repository corresponds to one .zenith directory. One repository holds multiple
/// timelines, forked off from the same initial call to 'initdb'.
pub trait Repository: Send + Sync {
fn shutdown(&self) -> Result<()>;
fn detach_timeline(&self, timeline_id: ZTimelineId) -> Result<()>;
/// Updates timeline based on the new sync state, received from the remote storage synchronization.
/// See [`crate::remote_storage`] for more details about the synchronization.
@@ -184,6 +184,9 @@ pub trait Timeline: Send + Sync {
///
fn wait_lsn(&self, lsn: Lsn) -> Result<()>;
/// Lock and get the timeline's GC cutoff
fn get_latest_gc_cutoff_lsn(&self) -> RwLockReadGuard<Lsn>;
/// Look up given page version.
fn get_page_at_lsn(&self, tag: RelishTag, blknum: BlockNumber, lsn: Lsn) -> Result<Bytes>;
@@ -217,10 +220,12 @@ pub trait Timeline: Send + Sync {
/// Atomically get both last and prev.
fn get_last_record_rlsn(&self) -> RecordLsn;
/// Get last or prev record separately. Same as get_last_record_rlsn().last/prev.
fn get_last_record_lsn(&self) -> Lsn;
fn get_prev_record_lsn(&self) -> Lsn;
fn get_start_lsn(&self) -> Lsn;
fn get_disk_consistent_lsn(&self) -> Lsn;
/// Mutate the timeline with a [`TimelineWriter`].
@@ -235,7 +240,11 @@ pub trait Timeline: Send + Sync {
///
/// Check that it is valid to request operations with that lsn.
fn check_lsn_is_in_scope(&self, lsn: Lsn) -> Result<()>;
fn check_lsn_is_in_scope(
&self,
lsn: Lsn,
latest_gc_cutoff_lsn: &RwLockReadGuard<Lsn>,
) -> Result<()>;
/// Retrieve current logical size of the timeline
///
@@ -244,7 +253,7 @@ pub trait Timeline: Send + Sync {
fn get_current_logical_size(&self) -> usize;
/// Does the same as get_current_logical_size but counted on demand.
/// Used in tests to ensure thet incremental and non incremental variants match.
/// Used in tests to ensure that incremental and non incremental variants match.
fn get_current_logical_size_non_incremental(&self, lsn: Lsn) -> Result<usize>;
/// An escape hatch to allow "casting" a generic Timeline to LayeredTimeline.
@@ -297,8 +306,12 @@ pub enum ZenithWalRecord {
/// Native PostgreSQL WAL record
Postgres { will_init: bool, rec: Bytes },
/// Set bits in heap visibility map. (heap blkno, flag bits to clear)
ClearVisibilityMapFlags { heap_blkno: u32, flags: u8 },
/// Clear bits in heap visibility map. ('flags' is bitmap of bits to clear)
ClearVisibilityMapFlags {
new_heap_blkno: Option<u32>,
old_heap_blkno: Option<u32>,
flags: u8,
},
/// Mark transaction IDs as committed on a CLOG page
ClogSetCommitted { xids: Vec<TransactionId> },
/// Mark transaction IDs as aborted on a CLOG page
@@ -987,7 +1000,7 @@ mod tests {
.source()
.unwrap()
.to_string()
.contains("is earlier than initdb lsn"));
.contains("is earlier than latest GC horizon"));
}
}
@@ -1004,12 +1017,11 @@ mod tests {
make_some_layers(&tline, Lsn(0x20))?;
repo.gc_iteration(Some(TIMELINE_ID), 0x10, false)?;
let latest_gc_cutoff_lsn = tline.get_latest_gc_cutoff_lsn();
assert!(*latest_gc_cutoff_lsn > Lsn(0x25));
match tline.get_page_at_lsn(TESTREL_A, 0, Lsn(0x25)) {
Ok(_) => panic!("request for page should have failed"),
Err(err) => assert!(err
.to_string()
.contains("tried to request a page version that was garbage collected")),
Err(err) => assert!(err.to_string().contains("not found at")),
}
Ok(())
}
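The guard returned by get_latest_gc_cutoff_lsn is designed to be passed into check_lsn_is_in_scope, making the validation race-free against GC. A minimal sketch of the intended calling pattern (timeline, tag, blknum and lsn are assumed to be in scope):

```rust
// Hold the GC-cutoff read guard across the check and the read, so the GC
// horizon cannot advance in between.
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
timeline.check_lsn_is_in_scope(lsn, &latest_gc_cutoff_lsn)?;
let page = timeline.get_page_at_lsn(tag, blknum, lsn)?;
drop(latest_gc_cutoff_lsn); // release only after the read has completed
```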


@@ -5,15 +5,16 @@ use crate::branches;
use crate::config::PageServerConf;
use crate::layered_repository::LayeredRepository;
use crate::repository::{Repository, Timeline, TimelineSyncState};
use crate::tenant_threads;
use crate::thread_mgr;
use crate::thread_mgr::ThreadKind;
use crate::walredo::PostgresRedoManager;
use anyhow::{anyhow, bail, Context, Result};
use crate::CheckpointConfig;
use anyhow::{bail, Context, Result};
use lazy_static::lazy_static;
use log::*;
use serde::{Deserialize, Serialize};
use std::collections::{hash_map, HashMap};
use std::fmt;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex, MutexGuard};
use zenith_utils::zid::{ZTenantId, ZTimelineId};
@@ -23,7 +24,7 @@ lazy_static! {
struct Tenant {
state: TenantState,
repo: Option<Arc<dyn Repository>>,
repo: Arc<dyn Repository>,
}
#[derive(Debug, Serialize, Deserialize, Clone, Copy, PartialEq, Eq)]
@@ -56,8 +57,6 @@ fn access_tenants() -> MutexGuard<'static, HashMap<ZTenantId, Tenant>> {
TENANTS.lock().unwrap()
}
static SHUTDOWN_REQUESTED: AtomicBool = AtomicBool::new(false);
/// Updates tenants' repositories, changing their timelines' state in memory.
pub fn set_timeline_states(
conf: &'static PageServerConf,
@@ -73,28 +72,8 @@ pub fn set_timeline_states(
let mut m = access_tenants();
for (tenant_id, timeline_states) in timeline_states {
let tenant = m.entry(tenant_id).or_insert_with(|| Tenant {
state: TenantState::Idle,
repo: None,
});
if let Err(e) = put_timelines_into_tenant(conf, tenant, tenant_id, timeline_states) {
error!(
"Failed to update timeline states for tenant {}: {:#}",
tenant_id, e
);
}
}
}
fn put_timelines_into_tenant(
conf: &'static PageServerConf,
tenant: &mut Tenant,
tenant_id: ZTenantId,
timeline_states: HashMap<ZTimelineId, TimelineSyncState>,
) -> anyhow::Result<()> {
let repo = match tenant.repo.as_ref() {
Some(repo) => Arc::clone(repo),
None => {
let tenant = m.entry(tenant_id).or_insert_with(|| {
// TODO (rodionov) reuse one of the initialisation routines
// Set up a WAL redo manager, for applying WAL records.
let walredo_mgr = PostgresRedoManager::new(conf, tenant_id);
@@ -105,13 +84,43 @@ fn put_timelines_into_tenant(
tenant_id,
conf.remote_storage_config.is_some(),
));
tenant.repo = Some(Arc::clone(&repo));
repo
Tenant {
state: TenantState::Idle,
repo,
}
});
if let Err(e) = put_timelines_into_tenant(tenant, tenant_id, timeline_states) {
error!(
"Failed to update timeline states for tenant {}: {:?}",
tenant_id, e
);
}
};
}
}
fn put_timelines_into_tenant(
tenant: &mut Tenant,
tenant_id: ZTenantId,
timeline_states: HashMap<ZTimelineId, TimelineSyncState>,
) -> anyhow::Result<()> {
for (timeline_id, timeline_state) in timeline_states {
repo.set_timeline_state(timeline_id, timeline_state)
// If the timeline is being put into any other state than Ready,
// stop any threads operating on it.
//
// FIXME: This is racy. A page service thread could just get a
// handle on the Timeline before we call set_timeline_state()
if !matches!(timeline_state, TimelineSyncState::Ready(_)) {
thread_mgr::shutdown_threads(None, Some(tenant_id), Some(timeline_id));
// Should we run a final checkpoint to flush all the data to
// disk? Doesn't seem necessary; all of the states other than
// Ready imply that the data on local disk is corrupt or incomplete,
// and we don't want to flush that to disk.
}
tenant
.repo
.set_timeline_state(timeline_id, timeline_state)
.with_context(|| {
format!(
"Failed to update timeline {} state to {:?}",
@@ -123,29 +132,49 @@ fn put_timelines_into_tenant(
Ok(())
}
// Check this flag in the thread loops to know when to exit
pub fn shutdown_requested() -> bool {
SHUTDOWN_REQUESTED.load(Ordering::Relaxed)
}
pub fn shutdown_all_tenants() -> Result<()> {
SHUTDOWN_REQUESTED.swap(true, Ordering::Relaxed);
let tenantids = list_tenantids()?;
for tenantid in &tenantids {
set_tenant_state(*tenantid, TenantState::Stopping)?;
///
/// Shut down all tenants. This runs as part of pageserver shutdown.
///
pub fn shutdown_all_tenants() {
let mut m = access_tenants();
let mut tenantids = Vec::new();
for (tenantid, tenant) in m.iter_mut() {
tenant.state = TenantState::Stopping;
tenantids.push(*tenantid)
}
drop(m);
thread_mgr::shutdown_threads(Some(ThreadKind::WalReceiver), None, None);
thread_mgr::shutdown_threads(Some(ThreadKind::GarbageCollector), None, None);
thread_mgr::shutdown_threads(Some(ThreadKind::Checkpointer), None, None);
// Ok, no background threads running anymore. Flush any remaining data in
// memory to disk.
//
// We assume that any incoming connections that might request pages from
// the repository have already been terminated by the caller, so there
// should be no more activity in any of the repositories.
//
// On error, log it but continue with the shutdown for other tenants.
for tenantid in tenantids {
// Wait for checkpointer and GC to finish their job
tenant_threads::wait_for_tenant_threads_to_stop(tenantid);
let repo = get_repository_for_tenant(tenantid)?;
debug!("shutdown tenant {}", tenantid);
repo.shutdown()?;
match get_repository_for_tenant(tenantid) {
Ok(repo) => {
if let Err(err) = repo.checkpoint_iteration(CheckpointConfig::Flush) {
error!(
"Could not checkpoint tenant {} during shutdown: {:?}",
tenantid, err
);
}
}
Err(err) => {
error!(
"Could not get repository for tenant {} during shutdown: {:?}",
tenantid, err
);
}
}
}
Ok(())
}
pub fn create_repository_for_tenant(
@@ -153,7 +182,7 @@ pub fn create_repository_for_tenant(
tenantid: ZTenantId,
) -> Result<()> {
let wal_redo_manager = Arc::new(PostgresRedoManager::new(conf, tenantid));
let repo = Some(branches::create_repo(conf, tenantid, wal_redo_manager)?);
let repo = branches::create_repo(conf, tenantid, wal_redo_manager)?;
match access_tenants().entry(tenantid) {
hash_map::Entry::Occupied(_) => bail!("tenant {} already exists", tenantid),
@@ -172,34 +201,60 @@ pub fn get_tenant_state(tenantid: ZTenantId) -> Option<TenantState> {
Some(access_tenants().get(&tenantid)?.state)
}
pub fn set_tenant_state(tenantid: ZTenantId, newstate: TenantState) -> Result<TenantState> {
///
/// Change the state of a tenant to Active and launch its checkpointer and GC
/// threads. If the tenant was already in Active state or Stopping, does nothing.
///
pub fn activate_tenant(conf: &'static PageServerConf, tenantid: ZTenantId) -> Result<()> {
let mut m = access_tenants();
let tenant = m.get_mut(&tenantid);
let tenant = m
.get_mut(&tenantid)
.with_context(|| format!("Tenant not found for id {}", tenantid))?;
match tenant {
Some(tenant) => {
if newstate == TenantState::Idle && tenant.state != TenantState::Active {
// Only Active tenant can become Idle
return Ok(tenant.state);
}
info!("set_tenant_state: {} -> {}", tenant.state, newstate);
tenant.state = newstate;
Ok(tenant.state)
info!("activating tenant {}", tenantid);
match tenant.state {
// If the tenant is already active, nothing to do.
TenantState::Active => {}
// If it's Idle, launch the checkpointer and GC threads
TenantState::Idle => {
thread_mgr::spawn(
ThreadKind::Checkpointer,
Some(tenantid),
None,
"Checkpointer thread",
move || crate::tenant_threads::checkpoint_loop(tenantid, conf),
)?;
// FIXME: if we fail to launch the GC thread, but already launched the
// checkpointer, we're in a strange state.
thread_mgr::spawn(
ThreadKind::GarbageCollector,
Some(tenantid),
None,
"GC thread",
move || crate::tenant_threads::gc_loop(tenantid, conf),
)?;
tenant.state = TenantState::Active;
}
TenantState::Stopping => {
// don't re-activate it if it's being stopped
}
None => bail!("Tenant not found for id {}", tenantid),
}
Ok(())
}
pub fn get_repository_for_tenant(tenantid: ZTenantId) -> Result<Arc<dyn Repository>> {
let m = access_tenants();
let tenant = m
.get(&tenantid)
.ok_or_else(|| anyhow!("Tenant not found for tenant {}", tenantid))?;
.with_context(|| format!("Tenant not found for tenant {}", tenantid))?;
match &tenant.repo {
Some(repo) => Ok(Arc::clone(repo)),
None => anyhow::bail!("Repository for tenant {} is not yet valid", tenantid),
}
Ok(Arc::clone(&tenant.repo))
}
pub fn get_timeline_for_tenant(
@@ -209,17 +264,7 @@ pub fn get_timeline_for_tenant(
get_repository_for_tenant(tenantid)?
.get_timeline(timelineid)?
.local_timeline()
.ok_or_else(|| anyhow!("cannot fetch timeline {}", timelineid))
}
fn list_tenantids() -> Result<Vec<ZTenantId>> {
access_tenants()
.iter()
.map(|v| {
let (tenantid, _) = v;
Ok(*tenantid)
})
.collect()
.with_context(|| format!("cannot fetch timeline {}", timelineid))
}
#[derive(Serialize, Deserialize, Clone)]


@@ -5,88 +5,14 @@ use crate::tenant_mgr;
use crate::tenant_mgr::TenantState;
use crate::CheckpointConfig;
use anyhow::Result;
use lazy_static::lazy_static;
use std::collections::HashMap;
use std::sync::Mutex;
use std::thread::JoinHandle;
use std::time::Duration;
use tracing::*;
use zenith_metrics::{register_int_gauge_vec, IntGaugeVec};
use zenith_utils::zid::ZTenantId;
struct TenantHandleEntry {
checkpointer_handle: Option<JoinHandle<()>>,
gc_handle: Option<JoinHandle<()>>,
}
// Preserve handles to wait for thread completion
// at shutdown
lazy_static! {
static ref TENANT_HANDLES: Mutex<HashMap<ZTenantId, TenantHandleEntry>> =
Mutex::new(HashMap::new());
}
lazy_static! {
static ref TENANT_THREADS_COUNT: IntGaugeVec = register_int_gauge_vec!(
"tenant_threads_count",
"Number of live tenant threads",
&["tenant_thread_type"]
)
.expect("failed to define a metric");
}
// Launch checkpointer and GC for the tenant.
// It's possible that the threads are running already,
// if so, just don't spawn new ones.
pub fn start_tenant_threads(conf: &'static PageServerConf, tenantid: ZTenantId) {
let mut handles = TENANT_HANDLES.lock().unwrap();
let h = handles
.entry(tenantid)
.or_insert_with(|| TenantHandleEntry {
checkpointer_handle: None,
gc_handle: None,
});
if h.checkpointer_handle.is_none() {
h.checkpointer_handle = std::thread::Builder::new()
.name("Checkpointer thread".into())
.spawn(move || {
checkpoint_loop(tenantid, conf).expect("Checkpointer thread died");
})
.ok();
}
if h.gc_handle.is_none() {
h.gc_handle = std::thread::Builder::new()
.name("GC thread".into())
.spawn(move || {
gc_loop(tenantid, conf).expect("GC thread died");
})
.ok();
}
}
pub fn wait_for_tenant_threads_to_stop(tenantid: ZTenantId) {
let mut handles = TENANT_HANDLES.lock().unwrap();
if let Some(h) = handles.get_mut(&tenantid) {
h.checkpointer_handle.take().map(JoinHandle::join);
trace!("checkpointer for tenant {} has stopped", tenantid);
h.gc_handle.take().map(JoinHandle::join);
trace!("gc for tenant {} has stopped", tenantid);
}
handles.remove(&tenantid);
}
///
/// Checkpointer thread's main loop
///
fn checkpoint_loop(tenantid: ZTenantId, conf: &'static PageServerConf) -> Result<()> {
let gauge = TENANT_THREADS_COUNT.with_label_values(&["checkpointer"]);
gauge.inc();
scopeguard::defer! {
gauge.dec();
}
pub fn checkpoint_loop(tenantid: ZTenantId, conf: &'static PageServerConf) -> Result<()> {
loop {
if tenant_mgr::get_tenant_state(tenantid) != Some(TenantState::Active) {
break;
@@ -112,13 +38,7 @@ fn checkpoint_loop(tenantid: ZTenantId, conf: &'static PageServerConf) -> Result
///
/// GC thread's main loop
///
fn gc_loop(tenantid: ZTenantId, conf: &'static PageServerConf) -> Result<()> {
let gauge = TENANT_THREADS_COUNT.with_label_values(&["gc"]);
gauge.inc();
scopeguard::defer! {
gauge.dec();
}
pub fn gc_loop(tenantid: ZTenantId, conf: &'static PageServerConf) -> Result<()> {
loop {
if tenant_mgr::get_tenant_state(tenantid) != Some(TenantState::Active) {
break;


@@ -0,0 +1,284 @@
//!
//! This module provides centralized handling of threads in the Page Server.
//!
//! We provide a few basic facilities:
//! - A global registry of threads that lists what kind of threads they are, and
//! which tenant or timeline they are working on
//!
//! - The ability to request a thread to shut down.
//!
//!
//! # How it works
//!
//! There is a global hashmap of all the threads (`THREADS`). Whenever a new
//! thread is spawned, a PageServerThread entry is added there, and when a
//! thread dies, it removes itself from the hashmap. If you want to kill a
//! thread, you can scan the hashmap to find it.
//!
//! # Thread shutdown
//!
//! To kill a thread, we rely on co-operation from the victim. Each thread is
//! expected to periodically call the `is_shutdown_requested()` function, and
//! if it returns true, exit gracefully. In addition to that, when waiting for
//! the network or other long-running operation, you can use
//! `shutdown_watcher()` function to get a Future that will become ready if
//! the current thread has been requested to shut down. You can use that with
//! Tokio select!(), but note that it relies on thread-local storage, so it
//! will only work with the "current-thread" Tokio runtime!
//!
//!
//! TODO: This would be a good place to also handle panics in a somewhat sane way.
//! Depending on what thread panics, we might want to kill the whole server, or
//! only a single tenant or timeline.
//!
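As a concrete illustration of the cooperative model described in the module doc, a hypothetical thread payload might look like this (a sketch; the unit of work is assumed):

```rust
// Sketch of a payload that cooperates with shutdown requests: it checks
// is_shutdown_requested() between work items and exits gracefully. It would
// be launched via spawn(ThreadKind::..., None, None, "example", example_payload).
fn example_payload() -> anyhow::Result<()> {
    while !is_shutdown_requested() {
        // ... perform one bounded unit of work ...
    }
    Ok(())
}
```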
use std::cell::RefCell;
use std::collections::HashMap;
use std::panic;
use std::panic::AssertUnwindSafe;
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;
use std::thread::JoinHandle;
use tokio::sync::watch;
use tracing::{info, warn};
use lazy_static::lazy_static;
use zenith_utils::zid::{ZTenantId, ZTimelineId};
lazy_static! {
/// Each thread that we track is associated with a "thread ID". It's just
/// an increasing number that we assign, not related to any system thread
/// id.
static ref NEXT_THREAD_ID: AtomicU64 = AtomicU64::new(1);
/// Global registry of threads
static ref THREADS: Mutex<HashMap<u64, Arc<PageServerThread>>> = Mutex::new(HashMap::new());
}
// There is a Tokio watch channel for each thread, which can be used to signal the
// thread that it needs to shut down. This thread local variable holds the receiving
// end of the channel. The sender is kept in the global registry, so that anyone
// can send the signal to request thread shutdown.
thread_local!(static SHUTDOWN_RX: RefCell<Option<watch::Receiver<()>>> = RefCell::new(None));
// Each thread holds reference to its own PageServerThread here.
thread_local!(static CURRENT_THREAD: RefCell<Option<Arc<PageServerThread>>> = RefCell::new(None));
///
/// There are many kinds of threads in the system. Some are associated with a particular
/// tenant or timeline, while others are global.
///
/// Note that we don't try to limit how many threads of a certain kind can be running
/// at the same time.
///
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub enum ThreadKind {
// libpq listener thread. It just accepts connection and spawns a
// PageRequestHandler thread for each connection.
LibpqEndpointListener,
// HTTP endpoint listener.
HttpEndpointListener,
// Thread that handles a single connection. A PageRequestHandler thread
// starts detached from any particular tenant or timeline, but it can be
// associated with one later, after receiving a command from the client.
PageRequestHandler,
// Thread that connects to a safekeeper to fetch WAL for one timeline.
WalReceiver,
// Thread that handles checkpointing of all timelines for a tenant.
Checkpointer,
// Thread that handles GC of a tenant
GarbageCollector,
// Thread for synchronizing pageserver relish data with the remote storage.
// Shared by all tenants.
StorageSync,
}
struct PageServerThread {
_thread_id: u64,
kind: ThreadKind,
/// Tenant and timeline that this thread is associated with.
tenant_id: Option<ZTenantId>,
timeline_id: Option<ZTimelineId>,
name: String,
// To request thread shutdown, set the flag, and send a dummy message to the
// channel to notify it.
shutdown_requested: AtomicBool,
shutdown_tx: watch::Sender<()>,
/// Handle for waiting for the thread to exit. It can be None, if the
/// thread has already exited.
join_handle: Mutex<Option<JoinHandle<()>>>,
}
/// Launch a new thread
pub fn spawn<F, E>(
kind: ThreadKind,
tenant_id: Option<ZTenantId>,
timeline_id: Option<ZTimelineId>,
name: &str,
f: F,
) -> std::io::Result<()>
where
F: FnOnce() -> Result<(), E> + Send + 'static,
{
let (shutdown_tx, shutdown_rx) = watch::channel(());
let thread_id = NEXT_THREAD_ID.fetch_add(1, Ordering::Relaxed);
let thread = PageServerThread {
_thread_id: thread_id,
kind,
tenant_id,
timeline_id,
name: name.to_string(),
shutdown_requested: AtomicBool::new(false),
shutdown_tx,
join_handle: Mutex::new(None),
};
let thread_rc = Arc::new(thread);
let mut jh_guard = thread_rc.join_handle.lock().unwrap();
THREADS
.lock()
.unwrap()
.insert(thread_id, Arc::clone(&thread_rc));
let thread_rc2 = Arc::clone(&thread_rc);
let join_handle = match thread::Builder::new()
.name(name.to_string())
.spawn(move || thread_wrapper(thread_id, thread_rc2, shutdown_rx, f))
{
Ok(handle) => handle,
Err(err) => {
// Could not spawn the thread. Remove the entry
THREADS.lock().unwrap().remove(&thread_id);
return Err(err);
}
};
*jh_guard = Some(join_handle);
drop(jh_guard);
// The thread is now running. Nothing more to do here
Ok(())
}
/// This wrapper function runs in a newly-spawned thread. It initializes the
/// thread-local variables and calls the payload function
fn thread_wrapper<F, E>(
thread_id: u64,
thread: Arc<PageServerThread>,
shutdown_rx: watch::Receiver<()>,
f: F,
) where
F: FnOnce() -> Result<(), E> + Send + 'static,
{
SHUTDOWN_RX.with(|rx| {
*rx.borrow_mut() = Some(shutdown_rx);
});
CURRENT_THREAD.with(|ct| {
*ct.borrow_mut() = Some(thread);
});
// We use AssertUnwindSafe here so that the payload function
// doesn't need to be UnwindSafe. We don't do anything after the
// unwinding that would expose us to unwind-unsafe behavior.
let result = panic::catch_unwind(AssertUnwindSafe(f));
// Remove our entry from the global hashmap.
THREADS.lock().unwrap().remove(&thread_id);
// If the thread payload panic'd, exit with the panic.
if let Err(err) = result {
panic::resume_unwind(err);
}
}
///
/// Signal and wait for threads to shut down.
///
/// The arguments are used to select the threads to kill. Any None arguments are
/// ignored. For example, to shut down all WalReceiver threads:
///
/// shutdown_threads(Some(ThreadKind::WalReceiver), None, None)
///
/// Or to shut down all threads for given timeline:
///
/// shutdown_threads(None, Some(timelineid), None)
///
pub fn shutdown_threads(
kind: Option<ThreadKind>,
tenant_id: Option<ZTenantId>,
timeline_id: Option<ZTimelineId>,
) {
let mut victim_threads = Vec::new();
let threads = THREADS.lock().unwrap();
for thread in threads.values() {
if (kind.is_none() || Some(thread.kind) == kind)
&& (tenant_id.is_none() || thread.tenant_id == tenant_id)
&& (timeline_id.is_none() || thread.timeline_id == timeline_id)
{
thread.shutdown_requested.store(true, Ordering::Relaxed);
// FIXME: handle error?
let _ = thread.shutdown_tx.send(());
victim_threads.push(Arc::clone(thread));
}
}
drop(threads);
for thread in victim_threads {
info!("waiting for {} to shut down", thread.name);
if let Some(join_handle) = thread.join_handle.lock().unwrap().take() {
let _ = join_handle.join();
} else {
// The thread had not even fully started yet. Or it was shut down
// concurrently and already exited
}
}
}
/// A Future that can be used to check if the current thread has been requested to
/// shut down.
pub async fn shutdown_watcher() {
let _ = SHUTDOWN_RX
.with(|rx| {
rx.borrow()
.as_ref()
.expect("shutdown_requested() called in an unexpected thread")
.clone()
})
.changed()
.await;
}
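For long waits, the watcher composes with select! as the module doc describes. A sketch mirroring the walreceiver change later in this diff (`stream` is an assumed tokio_stream::StreamExt stream; `biased` checks shutdown first):

```rust
// Wait for the next message while honoring shutdown requests. Relies on
// thread-local storage, so it must run on a current-thread Tokio runtime.
let replication_message = tokio::select! {
    biased;
    _ = thread_mgr::shutdown_watcher() => None, // requested to shut down
    msg = stream.next() => msg,
};
```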
/// Has the current thread been requested to shut down?
pub fn is_shutdown_requested() -> bool {
CURRENT_THREAD.with(|ct| {
if let Some(ct) = ct.borrow().as_ref() {
ct.shutdown_requested.load(Ordering::Relaxed)
} else {
if !cfg!(test) {
warn!("is_shutdown_requested() called in an unexpected thread");
}
false
}
})
}


@@ -10,15 +10,46 @@
//! This is similar to PostgreSQL's virtual file descriptor facility in
//! src/backend/storage/file/fd.c
//!
use lazy_static::lazy_static;
use std::fs::{File, OpenOptions};
use std::io::{Error, ErrorKind, Read, Seek, SeekFrom, Write};
use std::os::unix::fs::FileExt;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::{RwLock, RwLockWriteGuard};
use zenith_metrics::{register_histogram_vec, register_int_gauge_vec, HistogramVec, IntGaugeVec};
use once_cell::sync::OnceCell;
// Metrics collected on disk IO operations
const STORAGE_IO_TIME_BUCKETS: &[f64] = &[
0.000001, // 1 usec
0.00001, // 10 usec
0.0001, // 100 usec
0.001, // 1 msec
0.01, // 10 msec
0.1, // 100 msec
1.0, // 1 sec
];
lazy_static! {
static ref STORAGE_IO_TIME: HistogramVec = register_histogram_vec!(
"pageserver_io_time",
"Time spent in IO operations",
&["operation", "tenant_id", "timeline_id"],
STORAGE_IO_TIME_BUCKETS.into()
)
.expect("failed to define a metric");
}
lazy_static! {
static ref STORAGE_IO_SIZE: IntGaugeVec = register_int_gauge_vec!(
"pageserver_io_size",
"Amount of bytes",
&["operation", "tenant_id", "timeline_id"]
)
.expect("failed to define a metric");
}
///
/// A virtual file descriptor. You can use this just like std::fs::File, but internally
/// the underlying file is closed if the system is low on file descriptors,
@@ -51,6 +82,10 @@ pub struct VirtualFile {
/// storing it here.
pub path: PathBuf,
open_options: OpenOptions,
/// For metrics
tenantid: String,
timelineid: String,
}
#[derive(PartialEq, Clone, Copy)]
@@ -145,7 +180,13 @@ impl OpenFiles {
// old file.
//
if let Some(old_file) = slot_guard.file.take() {
drop(old_file);
// We do not have information about the tenantid/timelineid of the evicted file.
// It would be possible to store the path together with the file, or to use the
// filepath crate, but since close() is not expected to be fast anyway, it is
// not critical to gather precise per-tenant statistics here.
STORAGE_IO_TIME
.with_label_values(&["close", "-", "-"])
.observe_closure_duration(|| drop(old_file));
}
// Prepare the slot for reuse and return it
@@ -185,9 +226,20 @@ impl VirtualFile {
path: &Path,
open_options: &OpenOptions,
) -> Result<VirtualFile, std::io::Error> {
let parts = path.to_str().unwrap().split('/').collect::<Vec<&str>>();
let tenantid;
let timelineid;
if parts.len() > 5 && parts[parts.len() - 5] == "tenants" {
tenantid = parts[parts.len() - 4].to_string();
timelineid = parts[parts.len() - 2].to_string();
} else {
tenantid = "*".to_string();
timelineid = "*".to_string();
}
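// Example of the assumed layout: a path such as
// ".../tenants/<tenant_id>/timelines/<timeline_id>/<layer file>" splits so
// that parts[len - 4] is the tenant id and parts[len - 2] is the timeline id;
// any other layout falls back to the "*" wildcard metric labels above.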
let (handle, mut slot_guard) = get_open_files().find_victim_slot();
let file = open_options.open(path)?;
let file = STORAGE_IO_TIME
.with_label_values(&["open", &tenantid, &timelineid])
.observe_closure_duration(|| open_options.open(path))?;
// Strip all options other than read and write.
//
@@ -204,6 +256,8 @@ impl VirtualFile {
pos: 0,
path: path.to_path_buf(),
open_options: reopen_options,
tenantid,
timelineid,
};
slot_guard.file.replace(file);
@@ -213,13 +267,13 @@ impl VirtualFile {
/// Call File::sync_all() on the underlying File.
pub fn sync_all(&self) -> Result<(), Error> {
self.with_file(|file| file.sync_all())?
self.with_file("fsync", |file| file.sync_all())?
}
/// Helper function that looks up the underlying File for this VirtualFile,
/// opening it and evicting some other File if necessary. It calls 'func'
/// with the physical File.
fn with_file<F, R>(&self, mut func: F) -> Result<R, Error>
fn with_file<F, R>(&self, op: &str, mut func: F) -> Result<R, Error>
where
F: FnMut(&File) -> R,
{
@@ -242,7 +296,9 @@ impl VirtualFile {
if let Some(file) = &slot_guard.file {
// Found a cached file descriptor.
slot.recently_used.store(true, Ordering::Relaxed);
return Ok(func(file));
return Ok(STORAGE_IO_TIME
.with_label_values(&[op, &self.tenantid, &self.timelineid])
.observe_closure_duration(|| func(file)));
}
}
}
@@ -267,16 +323,23 @@ impl VirtualFile {
let (handle, mut slot_guard) = open_files.find_victim_slot();
// Open the physical file
let file = self.open_options.open(&self.path)?;
let file = STORAGE_IO_TIME
.with_label_values(&["open", &self.tenantid, &self.timelineid])
.observe_closure_duration(|| self.open_options.open(&self.path))?;
// Perform the requested operation on it
//
// TODO: We could downgrade the locks to read mode before calling
// 'func', to allow a little bit more concurrency, but the standard
// library RwLock doesn't allow downgrading without releasing the lock,
// and that doesn't seem worth the trouble. (parking_lot RwLock would
// allow it)
let result = func(&file);
// and that doesn't seem worth the trouble.
//
// XXX: `parking_lot::RwLock` can enable such downgrades, yet its implementation is fair and
// may deadlock on subsequent read calls.
// Simply replacing all `RwLock` in project causes deadlocks, so use it sparingly.
let result = STORAGE_IO_TIME
.with_label_values(&[op, &self.tenantid, &self.timelineid])
.observe_closure_duration(|| func(&file));
// Store the File in the slot and update the handle in the VirtualFile
// to point to it.
@@ -299,7 +362,13 @@ impl Drop for VirtualFile {
let mut slot_guard = slot.inner.write().unwrap();
if slot_guard.tag == handle.tag {
slot.recently_used.store(false, Ordering::Relaxed);
slot_guard.file.take();
// Unlike files evicted by the replacement algorithm, here we
// label close time by tenantid/timelineid. This allows comparing the
// number/time of "normal" file closes with file evictions.
STORAGE_IO_TIME
.with_label_values(&["close", &self.tenantid, &self.timelineid])
.observe_closure_duration(|| slot_guard.file.take());
}
}
}
@@ -335,7 +404,7 @@ impl Seek for VirtualFile {
self.pos = offset;
}
SeekFrom::End(offset) => {
self.pos = self.with_file(|mut file| file.seek(SeekFrom::End(offset)))??
self.pos = self.with_file("seek", |mut file| file.seek(SeekFrom::End(offset)))??
}
SeekFrom::Current(offset) => {
let pos = self.pos as i128 + offset as i128;
@@ -357,11 +426,23 @@ impl Seek for VirtualFile {
impl FileExt for VirtualFile {
fn read_at(&self, buf: &mut [u8], offset: u64) -> Result<usize, Error> {
self.with_file(|file| file.read_at(buf, offset))?
let result = self.with_file("read", |file| file.read_at(buf, offset))?;
if let Ok(size) = result {
STORAGE_IO_SIZE
.with_label_values(&["read", &self.tenantid, &self.timelineid])
.add(size as i64);
}
result
}
fn write_at(&self, buf: &[u8], offset: u64) -> Result<usize, Error> {
self.with_file(|file| file.write_at(buf, offset))?
let result = self.with_file("write", |file| file.write_at(buf, offset))?;
if let Ok(size) = result {
STORAGE_IO_SIZE
.with_label_values(&["write", &self.tenantid, &self.timelineid])
.add(size as i64);
}
result
}
}


@@ -349,49 +349,25 @@ impl WalIngest {
decoded: &mut DecodedWALRecord,
) -> Result<()> {
// Handle VM bit updates that are implicitly part of heap records.
// First, look at the record to determine which VM bits need
// to be cleared. If either of these variables is set, we
// need to clear the corresponding bits in the visibility map.
let mut new_heap_blkno: Option<u32> = None;
let mut old_heap_blkno: Option<u32> = None;
if decoded.xl_rmid == pg_constants::RM_HEAP_ID {
let info = decoded.xl_info & pg_constants::XLOG_HEAP_OPMASK;
if info == pg_constants::XLOG_HEAP_INSERT {
let xlrec = XlHeapInsert::decode(buf);
assert_eq!(0, buf.remaining());
if (xlrec.flags
& (pg_constants::XLH_INSERT_ALL_VISIBLE_CLEARED
| pg_constants::XLH_INSERT_ALL_FROZEN_SET))
!= 0
{
timeline.put_wal_record(
lsn,
RelishTag::Relation(RelTag {
forknum: pg_constants::VISIBILITYMAP_FORKNUM,
spcnode: decoded.blocks[0].rnode_spcnode,
dbnode: decoded.blocks[0].rnode_dbnode,
relnode: decoded.blocks[0].rnode_relnode,
}),
decoded.blocks[0].blkno / pg_constants::HEAPBLOCKS_PER_PAGE as u32,
ZenithWalRecord::ClearVisibilityMapFlags {
heap_blkno: decoded.blocks[0].blkno,
flags: pg_constants::VISIBILITYMAP_VALID_BITS,
},
)?;
if (xlrec.flags & pg_constants::XLH_INSERT_ALL_VISIBLE_CLEARED) != 0 {
new_heap_blkno = Some(decoded.blocks[0].blkno);
}
} else if info == pg_constants::XLOG_HEAP_DELETE {
let xlrec = XlHeapDelete::decode(buf);
assert_eq!(0, buf.remaining());
if (xlrec.flags & pg_constants::XLH_DELETE_ALL_VISIBLE_CLEARED) != 0 {
timeline.put_wal_record(
lsn,
RelishTag::Relation(RelTag {
forknum: pg_constants::VISIBILITYMAP_FORKNUM,
spcnode: decoded.blocks[0].rnode_spcnode,
dbnode: decoded.blocks[0].rnode_dbnode,
relnode: decoded.blocks[0].rnode_relnode,
}),
decoded.blocks[0].blkno / pg_constants::HEAPBLOCKS_PER_PAGE as u32,
ZenithWalRecord::ClearVisibilityMapFlags {
heap_blkno: decoded.blocks[0].blkno,
flags: pg_constants::VISIBILITYMAP_VALID_BITS,
},
)?;
new_heap_blkno = Some(decoded.blocks[0].blkno);
}
} else if info == pg_constants::XLOG_HEAP_UPDATE
|| info == pg_constants::XLOG_HEAP_HOT_UPDATE
@@ -400,39 +376,15 @@ impl WalIngest {
// the size of tuple data is inferred from the size of the record.
// we can't validate the remaining number of bytes without parsing
// the tuple data.
if (xlrec.flags & pg_constants::XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED) != 0 {
timeline.put_wal_record(
lsn,
RelishTag::Relation(RelTag {
forknum: pg_constants::VISIBILITYMAP_FORKNUM,
spcnode: decoded.blocks[0].rnode_spcnode,
dbnode: decoded.blocks[0].rnode_dbnode,
relnode: decoded.blocks[0].rnode_relnode,
}),
decoded.blocks[0].blkno / pg_constants::HEAPBLOCKS_PER_PAGE as u32,
ZenithWalRecord::ClearVisibilityMapFlags {
heap_blkno: decoded.blocks[0].blkno,
flags: pg_constants::VISIBILITYMAP_VALID_BITS,
},
)?;
if (xlrec.flags & pg_constants::XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED) != 0 {
old_heap_blkno = Some(decoded.blocks[0].blkno);
}
if (xlrec.flags & pg_constants::XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED) != 0
&& decoded.blocks.len() > 1
{
timeline.put_wal_record(
lsn,
RelishTag::Relation(RelTag {
forknum: pg_constants::VISIBILITYMAP_FORKNUM,
spcnode: decoded.blocks[1].rnode_spcnode,
dbnode: decoded.blocks[1].rnode_dbnode,
relnode: decoded.blocks[1].rnode_relnode,
}),
decoded.blocks[1].blkno / pg_constants::HEAPBLOCKS_PER_PAGE as u32,
ZenithWalRecord::ClearVisibilityMapFlags {
heap_blkno: decoded.blocks[1].blkno,
flags: pg_constants::VISIBILITYMAP_VALID_BITS,
},
)?;
if (xlrec.flags & pg_constants::XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED) != 0 {
// PostgreSQL only uses XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED on a
// non-HOT update where the new tuple goes to a different page than
// the old one. Otherwise, only XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED is
// set.
new_heap_blkno = Some(decoded.blocks[1].blkno);
}
}
} else if decoded.xl_rmid == pg_constants::RM_HEAP2_ID {
@@ -448,23 +400,60 @@ impl WalIngest {
};
assert_eq!(offset_array_len, buf.remaining());
// FIXME: why also ALL_FROZEN_SET?
if (xlrec.flags
& (pg_constants::XLH_INSERT_ALL_VISIBLE_CLEARED
| pg_constants::XLH_INSERT_ALL_FROZEN_SET))
!= 0
{
if (xlrec.flags & pg_constants::XLH_INSERT_ALL_VISIBLE_CLEARED) != 0 {
new_heap_blkno = Some(decoded.blocks[0].blkno);
}
}
}
// FIXME: What about XLOG_HEAP_LOCK and XLOG_HEAP2_LOCK_UPDATED?
// Clear the VM bits if required.
if new_heap_blkno.is_some() || old_heap_blkno.is_some() {
let vm_relish = RelishTag::Relation(RelTag {
forknum: pg_constants::VISIBILITYMAP_FORKNUM,
spcnode: decoded.blocks[0].rnode_spcnode,
dbnode: decoded.blocks[0].rnode_dbnode,
relnode: decoded.blocks[0].rnode_relnode,
});
let new_vm_blk = new_heap_blkno.map(pg_constants::HEAPBLK_TO_MAPBLOCK);
let old_vm_blk = old_heap_blkno.map(pg_constants::HEAPBLK_TO_MAPBLOCK);
if new_vm_blk == old_vm_blk {
// An UPDATE record that needs to clear the bits for both old and the
// new page, both of which reside on the same VM page.
timeline.put_wal_record(
lsn,
vm_relish,
new_vm_blk.unwrap(),
ZenithWalRecord::ClearVisibilityMapFlags {
new_heap_blkno,
old_heap_blkno,
flags: pg_constants::VISIBILITYMAP_VALID_BITS,
},
)?;
} else {
// Clear VM bits for one heap page, or for two pages that reside on
// different VM pages.
if let Some(new_vm_blk) = new_vm_blk {
timeline.put_wal_record(
lsn,
RelishTag::Relation(RelTag {
forknum: pg_constants::VISIBILITYMAP_FORKNUM,
spcnode: decoded.blocks[0].rnode_spcnode,
dbnode: decoded.blocks[0].rnode_dbnode,
relnode: decoded.blocks[0].rnode_relnode,
}),
decoded.blocks[0].blkno / pg_constants::HEAPBLOCKS_PER_PAGE as u32,
vm_relish,
new_vm_blk,
ZenithWalRecord::ClearVisibilityMapFlags {
heap_blkno: decoded.blocks[0].blkno,
new_heap_blkno,
old_heap_blkno: None,
flags: pg_constants::VISIBILITYMAP_VALID_BITS,
},
)?;
}
if let Some(old_vm_blk) = old_vm_blk {
timeline.put_wal_record(
lsn,
vm_relish,
old_vm_blk,
ZenithWalRecord::ClearVisibilityMapFlags {
new_heap_blkno: None,
old_heap_blkno,
flags: pg_constants::VISIBILITYMAP_VALID_BITS,
},
)?;
@@ -472,8 +461,6 @@ impl WalIngest {
}
}
// FIXME: What about XLOG_HEAP_LOCK and XLOG_HEAP2_LOCK_UPDATED?
Ok(())
}
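The HEAPBLK_TO_* helpers used above follow PostgreSQL's visibility-map layout. A sketch of the arithmetic, with constants that are assumptions for the default 8 KB block size (the real values live in pg_constants):

```rust
// Two VM bits per heap block, hence four heap blocks per map byte; a single
// VM page covers HEAPBLOCKS_PER_PAGE heap blocks' worth of bits.
const BITS_PER_HEAPBLOCK: u32 = 2;
const HEAPBLOCKS_PER_BYTE: u32 = 8 / BITS_PER_HEAPBLOCK; // = 4
const HEAPBLOCKS_PER_PAGE: u32 = 8168 * HEAPBLOCKS_PER_BYTE; // assumes 8 KB pages

fn heapblk_to_mapblock(heap_blkno: u32) -> u32 {
    heap_blkno / HEAPBLOCKS_PER_PAGE
}
```

Two heap blocks need only one ClearVisibilityMapFlags record exactly when heapblk_to_mapblock agrees for both, which is the `new_vm_blk == old_vm_blk` case handled above.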


@@ -7,29 +7,29 @@
use crate::config::PageServerConf;
use crate::tenant_mgr;
use crate::tenant_mgr::TenantState;
use crate::tenant_threads;
use crate::thread_mgr;
use crate::thread_mgr::ThreadKind;
use crate::walingest::WalIngest;
use anyhow::{bail, Context, Error, Result};
use bytes::BytesMut;
use fail::fail_point;
use lazy_static::lazy_static;
use parking_lot::Mutex;
use postgres_ffi::waldecoder::*;
use postgres_protocol::message::backend::ReplicationMessage;
use postgres_types::PgLsn;
use std::cell::Cell;
use std::collections::HashMap;
use std::str::FromStr;
use std::thread;
use std::thread::JoinHandle;
use std::sync::Mutex;
use std::thread_local;
use std::time::SystemTime;
use tokio::pin;
use tokio::sync::oneshot;
use tokio_postgres::replication::ReplicationStream;
use tokio_postgres::{Client, NoTls, SimpleQueryMessage, SimpleQueryRow};
use tokio_stream::StreamExt;
use tracing::*;
use zenith_utils::lsn::Lsn;
use zenith_utils::pq_proto::ZenithFeedback;
use zenith_utils::zid::ZTenantId;
use zenith_utils::zid::ZTimelineId;
@@ -38,13 +38,10 @@ use zenith_utils::zid::ZTimelineId;
//
struct WalReceiverEntry {
wal_producer_connstr: String,
wal_receiver_handle: Option<JoinHandle<()>>,
wal_receiver_interrupt_sender: Option<oneshot::Sender<()>>,
tenantid: ZTenantId,
}
lazy_static! {
static ref WAL_RECEIVERS: Mutex<HashMap<ZTimelineId, WalReceiverEntry>> =
static ref WAL_RECEIVERS: Mutex<HashMap<(ZTenantId, ZTimelineId), WalReceiverEntry>> =
Mutex::new(HashMap::new());
}
@@ -55,97 +52,55 @@ thread_local! {
pub(crate) static IS_WAL_RECEIVER: Cell<bool> = Cell::new(false);
}
// Wait for walreceiver to stop
// Now it stops when pageserver shutdown is requested.
// In future we can make this more granular and send shutdown signals
// per tenant/timeline to cancel inactive walreceivers.
// TODO deal with blocking pg connections
pub fn stop_wal_receiver(timelineid: ZTimelineId) {
let mut receivers = WAL_RECEIVERS.lock();
if let Some(r) = receivers.get_mut(&timelineid) {
match r.wal_receiver_interrupt_sender.take() {
Some(s) => {
if s.send(()).is_err() {
warn!("wal receiver interrupt signal already sent");
}
}
None => {
warn!("wal_receiver_interrupt_sender is missing, wal recever shouldn't be running")
}
}
info!("waiting for wal receiver to stop");
let handle = r.wal_receiver_handle.take();
// do not hold the lock while joining the handle (deadlock is possible otherwise)
drop(receivers);
// there is no timeout or try_join option available so in case of a bug this can hang forever
handle.map(JoinHandle::join);
}
}
pub fn drop_wal_receiver(timelineid: ZTimelineId, tenantid: ZTenantId) {
let mut receivers = WAL_RECEIVERS.lock();
receivers.remove(&timelineid);
// Check if it was the last walreceiver of the tenant.
// TODO now we store one WalReceiverEntry per timeline,
// so this iterator looks a bit strange.
for (_timelineid, entry) in receivers.iter() {
if entry.tenantid == tenantid {
return;
}
}
// When last walreceiver of the tenant is gone, change state to Idle
tenant_mgr::set_tenant_state(tenantid, TenantState::Idle).unwrap();
fn drop_wal_receiver(tenantid: ZTenantId, timelineid: ZTimelineId) {
let mut receivers = WAL_RECEIVERS.lock().unwrap();
receivers.remove(&(tenantid, timelineid));
}
// Launch a new WAL receiver, or tell one that's running about change in connection string
pub fn launch_wal_receiver(
conf: &'static PageServerConf,
tenantid: ZTenantId,
timelineid: ZTimelineId,
wal_producer_connstr: &str,
tenantid: ZTenantId,
) {
let mut receivers = WAL_RECEIVERS.lock();
) -> Result<()> {
let mut receivers = WAL_RECEIVERS.lock().unwrap();
match receivers.get_mut(&timelineid) {
match receivers.get_mut(&(tenantid, timelineid)) {
Some(receiver) => {
info!("wal receiver already running, updating connection string");
receiver.wal_producer_connstr = wal_producer_connstr.into();
}
None => {
let (tx, rx) = tokio::sync::oneshot::channel::<()>();
let wal_receiver_handle = thread::Builder::new()
.name("WAL receiver thread".into())
.spawn(move || {
thread_mgr::spawn(
ThreadKind::WalReceiver,
Some(tenantid),
Some(timelineid),
"WAL receiver thread",
move || {
IS_WAL_RECEIVER.with(|c| c.set(true));
thread_main(conf, timelineid, tenantid, rx);
})
.unwrap();
thread_main(conf, tenantid, timelineid)
},
)?;
let receiver = WalReceiverEntry {
wal_producer_connstr: wal_producer_connstr.into(),
wal_receiver_handle: Some(wal_receiver_handle),
wal_receiver_interrupt_sender: Some(tx),
tenantid,
};
receivers.insert(timelineid, receiver);
receivers.insert((tenantid, timelineid), receiver);
// Update tenant state and start tenant threads, if they are not running yet.
tenant_mgr::set_tenant_state(tenantid, TenantState::Active).unwrap();
tenant_threads::start_tenant_threads(conf, tenantid);
tenant_mgr::activate_tenant(conf, tenantid)?;
}
};
Ok(())
}
// Look up current WAL producer connection string in the hash table
fn get_wal_producer_connstr(timelineid: ZTimelineId) -> String {
let receivers = WAL_RECEIVERS.lock();
fn get_wal_producer_connstr(tenantid: ZTenantId, timelineid: ZTimelineId) -> String {
let receivers = WAL_RECEIVERS.lock().unwrap();
receivers
.get(&timelineid)
.get(&(tenantid, timelineid))
.unwrap()
.wal_producer_connstr
.clone()
@@ -156,25 +111,18 @@ fn get_wal_producer_connstr(timelineid: ZTimelineId) -> String {
//
fn thread_main(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
interrupt_receiver: oneshot::Receiver<()>,
) {
timelineid: ZTimelineId,
) -> Result<()> {
let _enter = info_span!("WAL receiver", timeline = %timelineid, tenant = %tenantid).entered();
info!("WAL receiver thread started");
// Look up the current WAL producer address
let wal_producer_connstr = get_wal_producer_connstr(timelineid);
let wal_producer_connstr = get_wal_producer_connstr(tenantid, timelineid);
// Make a connection to the WAL safekeeper, or directly to the primary PostgreSQL server,
// and start streaming WAL from it.
let res = walreceiver_main(
conf,
tenantid,
timelineid,
&wal_producer_connstr,
interrupt_receiver,
);
let res = walreceiver_main(conf, tenantid, timelineid, &wal_producer_connstr);
// TODO cleanup info messages
if let Err(e) = res {
@@ -188,7 +136,8 @@ fn thread_main(
// Drop it from the list of active WAL_RECEIVERS
// so that the next callmemaybe request launches a new thread
drop_wal_receiver(timelineid, tenantid);
drop_wal_receiver(tenantid, timelineid);
Ok(())
}
fn walreceiver_main(
@@ -196,7 +145,6 @@ fn walreceiver_main(
tenantid: ZTenantId,
timelineid: ZTimelineId,
wal_producer_connstr: &str,
mut interrupt_receiver: oneshot::Receiver<()>,
) -> Result<(), Error> {
// Connect to the database in replication mode.
info!("connecting to {:?}", wal_producer_connstr);
@@ -214,7 +162,7 @@ fn walreceiver_main(
// This is from tokio-postgres docs, but it is a bit weird in our case because we extensively use block_on
runtime.spawn(async move {
if let Err(e) = connection.await {
eprintln!("connection error: {}", e);
error!("connection error: {}", e);
}
});
@@ -273,12 +221,15 @@ fn walreceiver_main(
let mut walingest = WalIngest::new(&*timeline, startpoint)?;
while let Some(replication_message) = runtime.block_on(async {
let shutdown_watcher = thread_mgr::shutdown_watcher();
tokio::select! {
replication_message = physical_stream.next() => replication_message,
_ = &mut interrupt_receiver => {
// check for shutdown first
biased;
_ = shutdown_watcher => {
info!("walreceiver interrupted");
None
}
replication_message = physical_stream.next() => replication_message,
}
}) {
let replication_message = replication_message?;
@@ -305,6 +256,8 @@ fn walreceiver_main(
let writer = timeline.writer();
walingest.ingest_record(writer.as_ref(), recdata, lsn)?;
fail_point!("walreceiver-after-ingest");
last_rec_lsn = lsn;
}
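The fail_point!() added above is a no-op until armed. With the fail crate, a test can arm it either through fail::cfg or the FAILPOINTS environment variable; a sketch (the 1000 ms delay is an assumption) imitating a slow walreceiver, as the accompanying test does:

```rust
// Hypothetical test-side activation: sleep 1000 ms every time the
// walreceiver passes the "walreceiver-after-ingest" failpoint.
fail::cfg("walreceiver-after-ingest", "sleep(1000)").unwrap();
```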
@@ -339,7 +292,6 @@ fn walreceiver_main(
};
if let Some(last_lsn) = status_update {
let last_lsn = PgLsn::from(u64::from(last_lsn));
let timeline_synced_disk_consistent_lsn =
tenant_mgr::get_repository_for_tenant(tenantid)?
.get_timeline_state(timelineid)
@@ -347,18 +299,32 @@ fn walreceiver_main(
.unwrap_or(Lsn(0));
// The last LSN we processed. It is not guaranteed to survive pageserver crash.
let write_lsn = last_lsn;
let write_lsn = u64::from(last_lsn);
// `disk_consistent_lsn` is the LSN at which page server guarantees local persistence of all received data
let flush_lsn = PgLsn::from(u64::from(timeline.get_disk_consistent_lsn()));
let flush_lsn = u64::from(timeline.get_disk_consistent_lsn());
// The last LSN that is synced to remote storage and is guaranteed to survive pageserver crash
// Used by safekeepers to remove WAL preceding `remote_consistent_lsn`.
let apply_lsn = PgLsn::from(u64::from(timeline_synced_disk_consistent_lsn));
let apply_lsn = u64::from(timeline_synced_disk_consistent_lsn);
let ts = SystemTime::now();
const NO_REPLY: u8 = 0;
// Send zenith feedback message.
// Regular standby_status_update fields are put into this message.
let zenith_status_update = ZenithFeedback {
current_timeline_size: timeline.get_current_logical_size() as u64,
ps_writelsn: write_lsn,
ps_flushlsn: flush_lsn,
ps_applylsn: apply_lsn,
ps_replytime: ts,
};
debug!("zenith_status_update {:?}", zenith_status_update);
let mut data = BytesMut::new();
zenith_status_update.serialize(&mut data)?;
runtime.block_on(
physical_stream
.as_mut()
.standby_status_update(write_lsn, flush_lsn, apply_lsn, ts, NO_REPLY),
.zenith_status_update(data.len() as u64, &data),
)?;
}
}

View File

@@ -363,25 +363,44 @@ impl PostgresRedoManager {
will_init: _,
rec: _,
} => panic!("tried to pass postgres wal record to zenith WAL redo"),
ZenithWalRecord::ClearVisibilityMapFlags { heap_blkno, flags } => {
// Calculate the VM block and offset that corresponds to the heap block.
let map_block = pg_constants::HEAPBLK_TO_MAPBLOCK(*heap_blkno);
let map_byte = pg_constants::HEAPBLK_TO_MAPBYTE(*heap_blkno);
let map_offset = pg_constants::HEAPBLK_TO_OFFSET(*heap_blkno);
// Check that we're modifying the correct VM block.
ZenithWalRecord::ClearVisibilityMapFlags {
new_heap_blkno,
old_heap_blkno,
flags,
} => {
// sanity check that this is modifying the correct relish
assert!(
check_forknum(&rel, pg_constants::VISIBILITYMAP_FORKNUM),
"ClearVisibilityMapFlags record on unexpected rel {:?}",
rel
);
assert!(map_block == blknum);
if let Some(heap_blkno) = *new_heap_blkno {
// Calculate the VM block and offset that corresponds to the heap block.
let map_block = pg_constants::HEAPBLK_TO_MAPBLOCK(heap_blkno);
let map_byte = pg_constants::HEAPBLK_TO_MAPBYTE(heap_blkno);
let map_offset = pg_constants::HEAPBLK_TO_OFFSET(heap_blkno);
// equivalent to PageGetContents(page)
let map = &mut page[pg_constants::MAXALIGN_SIZE_OF_PAGE_HEADER_DATA..];
// Check that we're modifying the correct VM block.
assert!(map_block == blknum);
let mask: u8 = flags << map_offset;
map[map_byte as usize] &= !mask;
// equivalent to PageGetContents(page)
let map = &mut page[pg_constants::MAXALIGN_SIZE_OF_PAGE_HEADER_DATA..];
map[map_byte as usize] &= !(flags << map_offset);
}
// Repeat for 'old_heap_blkno', if any
if let Some(heap_blkno) = *old_heap_blkno {
let map_block = pg_constants::HEAPBLK_TO_MAPBLOCK(heap_blkno);
let map_byte = pg_constants::HEAPBLK_TO_MAPBYTE(heap_blkno);
let map_offset = pg_constants::HEAPBLK_TO_OFFSET(heap_blkno);
assert!(map_block == blknum);
let map = &mut page[pg_constants::MAXALIGN_SIZE_OF_PAGE_HEADER_DATA..];
map[map_byte as usize] &= !(flags << map_offset);
}
}
// Non-relational WAL records are handled here, with custom code that has the
// same effects as the corresponding Postgres WAL redo function.

poetry.lock (generated, 2067 lines): file diff suppressed because one or more lines are too long


@@ -1,10 +1,7 @@
[package]
name = "postgres_ffi"
version = "0.1.0"
authors = ["Heikki Linnakangas <heikki@zenith.tech>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
edition = "2021"
[dependencies]
chrono = "0.4.19"


@@ -51,6 +51,13 @@ pub type TimeLineID = u32;
pub type TimestampTz = i64;
pub type XLogSegNo = u64;
/// Interval of checkpointing the metadata file. We should store the metadata file
/// to enforce the predicate that checkpoint.nextXid is larger than any XID in the WAL.
/// But flushing the checkpoint file for each transaction would be too expensive,
/// so XID_CHECKPOINT_INTERVAL is used to forward-align nextXid, performing a
/// metadata checkpoint only once per XID_CHECKPOINT_INTERVAL transactions.
/// XID_CHECKPOINT_INTERVAL should not be larger than BLCKSZ*CLOG_XACTS_PER_BYTE
/// in order to let the CLOG_TRUNCATE mechanism correctly extend the CLOG.
const XID_CHECKPOINT_INTERVAL: u32 = 1024;
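Since the interval is a power of two, the forward alignment reduces to a mask. A minimal sketch of the rounding used by update_next_xid below:

```rust
// Round xid up to the next multiple of XID_CHECKPOINT_INTERVAL (a power of
// two), wrapping on overflow: e.g. 101 -> 1024, 1024 -> 1024, 1025 -> 2048.
fn align_up(xid: u32) -> u32 {
    xid.wrapping_add(XID_CHECKPOINT_INTERVAL - 1) & !(XID_CHECKPOINT_INTERVAL - 1)
}
```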
#[allow(non_snake_case)]
@@ -400,9 +407,13 @@ impl CheckPoint {
///
/// Returns 'true' if the XID was updated.
pub fn update_next_xid(&mut self, xid: u32) -> bool {
let xid = xid.wrapping_add(XID_CHECKPOINT_INTERVAL - 1) & !(XID_CHECKPOINT_INTERVAL - 1);
// nextXid should be greater than any XID in WAL, so increment the provided XID and check for wraparound.
let mut new_xid = std::cmp::max(xid + 1, pg_constants::FIRST_NORMAL_TRANSACTION_ID);
// To reduce the number of metadata checkpoints, we forward-align the XID to XID_CHECKPOINT_INTERVAL.
// XID_CHECKPOINT_INTERVAL must not be larger than BLCKSZ*CLOG_XACTS_PER_BYTE.
new_xid =
new_xid.wrapping_add(XID_CHECKPOINT_INTERVAL - 1) & !(XID_CHECKPOINT_INTERVAL - 1);
let full_xid = self.nextXid.value;
let new_xid = std::cmp::max(xid + 1, pg_constants::FIRST_NORMAL_TRANSACTION_ID);
let old_xid = full_xid as u32;
if new_xid.wrapping_sub(old_xid) as i32 > 0 {
let mut epoch = full_xid >> 32;
@@ -520,4 +531,34 @@ mod tests {
println!("wal_end={}, tli={}", wal_end, tli);
assert_eq!(wal_end, waldump_wal_end);
}
/// Check the math in update_next_xid
///
/// NOTE: These checks are sensitive to the value of XID_CHECKPOINT_INTERVAL,
/// currently 1024.
#[test]
pub fn test_update_next_xid() {
let checkpoint_buf = [0u8; std::mem::size_of::<CheckPoint>()];
let mut checkpoint = CheckPoint::decode(&checkpoint_buf).unwrap();
checkpoint.nextXid = FullTransactionId { value: 10 };
assert_eq!(checkpoint.nextXid.value, 10);
// The input XID gets rounded up to the next XID_CHECKPOINT_INTERVAL
// boundary
checkpoint.update_next_xid(100);
assert_eq!(checkpoint.nextXid.value, 1024);
// No change
checkpoint.update_next_xid(500);
assert_eq!(checkpoint.nextXid.value, 1024);
checkpoint.update_next_xid(1023);
assert_eq!(checkpoint.nextXid.value, 1024);
// The function returns the *next* XID, given the highest XID seen so
// far. So when we pass 1024, the nextXid gets bumped up to the next
// XID_CHECKPOINT_INTERVAL boundary.
checkpoint.update_next_xid(1024);
assert_eq!(checkpoint.nextXid.value, 2048);
}
}


@@ -38,7 +38,7 @@ def rustfmt(fix_inplace: bool = False, no_color: bool = False) -> str:
def yapf(fix_inplace: bool) -> str:
cmd = "pipenv run yapf --recursive"
cmd = "poetry run yapf --recursive"
if fix_inplace:
cmd += " --in-place"
else:
@@ -47,7 +47,7 @@ def yapf(fix_inplace: bool) -> str:
def mypy() -> str:
return "pipenv run mypy"
return "poetry run mypy"
def get_commit_files() -> List[str]:
@@ -72,7 +72,7 @@ def check(name: str, suffix: str, cmd: str, changed_files: List[str], no_color:
print("Please inspect the output below and run make fmt to fix automatically.")
if suffix == ".py":
print("If the output is empty, ensure that you've installed Python tooling by\n"
"running 'pipenv install --dev' in the current directory (no root needed)")
"running './scripts/pysync' in the current directory (no root needed)")
print()
print(res.stdout.decode())
exit(1)

View File

@@ -1,25 +1,33 @@
[package]
name = "proxy"
version = "0.1.0"
authors = ["Stas Kelvich <stas.kelvich@gmail.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
edition = "2021"
[dependencies]
anyhow = "1.0"
bytes = { version = "1.0.1", features = ['serde'] }
clap = "3.0"
futures = "0.3.13"
hashbrown = "0.11.2"
hex = "0.4.3"
hyper = "0.14"
lazy_static = "1.4.0"
md5 = "0.7.0"
rand = "0.8.3"
hex = "0.4.3"
parking_lot = "0.11.2"
pin-project-lite = "0.2.7"
rand = "0.8.3"
reqwest = { version = "0.11", default-features = false, features = ["blocking", "json", "rustls-tls"] }
rustls = "0.19.1"
scopeguard = "1.1.0"
serde = "1"
serde_json = "1"
tokio = { version = "1.11", features = ["macros"] }
tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
clap = "2.33.0"
rustls = "0.19.1"
reqwest = { version = "0.11", default-features = false, features = ["blocking", "json", "rustls-tls"] }
tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="2949d98df52587d562986aad155dd4e889e408b7" }
tokio-rustls = "0.22.0"
zenith_utils = { path = "../zenith_utils" }
zenith_metrics = { path = "../zenith_metrics" }
[dev-dependencies]
tokio-postgres-rustls = "0.8.0"
rcgen = "0.8.14"

proxy/src/auth.rs Normal file
View File

@@ -0,0 +1,169 @@
use crate::compute::DatabaseInfo;
use crate::config::ProxyConfig;
use crate::cplane_api::{self, CPlaneApi};
use crate::stream::PqStream;
use anyhow::{anyhow, bail, Context};
use std::collections::HashMap;
use tokio::io::{AsyncRead, AsyncWrite};
use zenith_utils::pq_proto::{BeMessage as Be, BeParameterStatusMessage, FeMessage as Fe};
/// Various client credentials which we use for authentication.
#[derive(Debug, PartialEq, Eq)]
pub struct ClientCredentials {
pub user: String,
pub dbname: String,
}
impl TryFrom<HashMap<String, String>> for ClientCredentials {
type Error = anyhow::Error;
fn try_from(mut value: HashMap<String, String>) -> Result<Self, Self::Error> {
let mut get_param = |key| {
value
.remove(key)
.with_context(|| format!("{} is missing in startup packet", key))
};
let user = get_param("user")?;
let db = get_param("database")?;
Ok(Self { user, dbname: db })
}
}
impl ClientCredentials {
/// Use credentials to authenticate the user.
pub async fn authenticate(
self,
config: &ProxyConfig,
client: &mut PqStream<impl AsyncRead + AsyncWrite + Unpin>,
) -> anyhow::Result<DatabaseInfo> {
use crate::config::ClientAuthMethod::*;
use crate::config::RouterConfig::*;
let db_info = match &config.router_config {
Static { host, port } => handle_static(host.clone(), *port, client, self).await,
Dynamic(Mixed) => {
if self.user.ends_with("@zenith") {
handle_existing_user(config, client, self).await
} else {
handle_new_user(config, client).await
}
}
Dynamic(Password) => handle_existing_user(config, client, self).await,
Dynamic(Link) => handle_new_user(config, client).await,
};
db_info.context("failed to authenticate client")
}
}
fn new_psql_session_id() -> String {
hex::encode(rand::random::<[u8; 8]>())
}
async fn handle_static(
host: String,
port: u16,
client: &mut PqStream<impl AsyncRead + AsyncWrite + Unpin>,
creds: ClientCredentials,
) -> anyhow::Result<DatabaseInfo> {
client
.write_message(&Be::AuthenticationCleartextPassword)
.await?;
// Read client's password bytes
let msg = match client.read_message().await? {
Fe::PasswordMessage(msg) => msg,
bad => bail!("unexpected message type: {:?}", bad),
};
let cleartext_password = std::str::from_utf8(&msg)?.split('\0').next().unwrap();
let db_info = DatabaseInfo {
host,
port,
dbname: creds.dbname.clone(),
user: creds.user.clone(),
password: Some(cleartext_password.into()),
};
client
.write_message_noflush(&Be::AuthenticationOk)?
.write_message_noflush(&BeParameterStatusMessage::encoding())?;
Ok(db_info)
}
async fn handle_existing_user(
config: &ProxyConfig,
client: &mut PqStream<impl AsyncRead + AsyncWrite + Unpin>,
creds: ClientCredentials,
) -> anyhow::Result<DatabaseInfo> {
let psql_session_id = new_psql_session_id();
let md5_salt = rand::random();
client
.write_message(&Be::AuthenticationMD5Password(&md5_salt))
.await?;
// Read client's password hash
let msg = match client.read_message().await? {
Fe::PasswordMessage(msg) => msg,
bad => bail!("unexpected message type: {:?}", bad),
};
let (_trailing_null, md5_response) = msg
.split_last()
.ok_or_else(|| anyhow!("unexpected password message"))?;
let cplane = CPlaneApi::new(&config.auth_endpoint);
let db_info = cplane
.authenticate_proxy_request(creds, md5_response, &md5_salt, &psql_session_id)
.await?;
client
.write_message_noflush(&Be::AuthenticationOk)?
.write_message_noflush(&BeParameterStatusMessage::encoding())?;
Ok(db_info)
}
async fn handle_new_user(
config: &ProxyConfig,
client: &mut PqStream<impl AsyncRead + AsyncWrite + Unpin>,
) -> anyhow::Result<DatabaseInfo> {
let psql_session_id = new_psql_session_id();
let greeting = hello_message(&config.redirect_uri, &psql_session_id);
let db_info = cplane_api::with_waiter(psql_session_id, |waiter| async {
// Give user a URL to spawn a new database
client
.write_message_noflush(&Be::AuthenticationOk)?
.write_message_noflush(&BeParameterStatusMessage::encoding())?
.write_message(&Be::NoticeResponse(greeting))
.await?;
// Wait for web console response
waiter.await?.map_err(|e| anyhow!(e))
})
.await?;
client.write_message_noflush(&Be::NoticeResponse("Connecting to database.".into()))?;
Ok(db_info)
}
fn hello_message(redirect_uri: &str, session_id: &str) -> String {
format!(
concat![
"☀️ Welcome to Zenith!\n",
"To proceed with database creation, open the following link:\n\n",
" {redirect_uri}{session_id}\n\n",
"It needs to be done once and we will send you '.pgpass' file,\n",
"which will allow you to access or create ",
"databases without opening your web browser."
],
redirect_uri = redirect_uri,
session_id = session_id,
)
}
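
A small usage sketch of the credentials parsing above (parameter values are made up; under Dynamic(Mixed) routing the "@zenith" suffix decides the code path, as in `authenticate`):

    use std::collections::HashMap;

    fn demo() -> anyhow::Result<()> {
        let mut params = HashMap::new();
        params.insert("user".to_owned(), "alice@zenith".to_owned());
        params.insert("database".to_owned(), "main".to_owned());

        let creds = ClientCredentials::try_from(params)?;
        assert_eq!(creds.dbname, "main");
        // This user would take the handle_existing_user (password) path.
        assert!(creds.user.ends_with("@zenith"));
        Ok(())
    }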

proxy/src/cancellation.rs Normal file
View File

@@ -0,0 +1,106 @@
use anyhow::{anyhow, Context};
use hashbrown::HashMap;
use parking_lot::Mutex;
use std::net::SocketAddr;
use tokio::net::TcpStream;
use tokio_postgres::{CancelToken, NoTls};
use zenith_utils::pq_proto::CancelKeyData;
/// Enables serving CancelRequests.
#[derive(Default)]
pub struct CancelMap(Mutex<HashMap<CancelKeyData, Option<CancelClosure>>>);
impl CancelMap {
/// Cancel a running query for the corresponding connection.
pub async fn cancel_session(&self, key: CancelKeyData) -> anyhow::Result<()> {
let cancel_closure = self
.0
.lock()
.get(&key)
.and_then(|x| x.clone())
.with_context(|| format!("unknown session: {:?}", key))?;
cancel_closure.try_cancel_query().await
}
/// Run async action within an ephemeral session identified by [`CancelKeyData`].
pub async fn with_session<'a, F, R, V>(&'a self, f: F) -> anyhow::Result<V>
where
F: FnOnce(Session<'a>) -> R,
R: std::future::Future<Output = anyhow::Result<V>>,
{
// HACK: We'd rather get the real backend_pid but tokio_postgres doesn't
// expose it and we don't want to do another roundtrip to query
// for it. The client will be able to notice that this is not the
// actual backend_pid, but backend_pid is not used for anything
// so it doesn't matter.
let key = rand::random();
// Random key collisions are unlikely to happen here, but they're still possible,
// which is why we have to take care not to rewrite an existing key.
self.0
.lock()
.try_insert(key, None)
.map_err(|_| anyhow!("session already exists: {:?}", key))?;
// This will guarantee that the session gets dropped
// as soon as the future is finished.
scopeguard::defer! {
self.0.lock().remove(&key);
}
let session = Session::new(key, self);
f(session).await
}
}
/// This should've been a [`std::future::Future`], but
/// it's impossible to name a type of an unboxed future
/// (we'd need something like `#![feature(type_alias_impl_trait)]`).
#[derive(Clone)]
pub struct CancelClosure {
socket_addr: SocketAddr,
cancel_token: CancelToken,
}
impl CancelClosure {
pub fn new(socket_addr: SocketAddr, cancel_token: CancelToken) -> Self {
Self {
socket_addr,
cancel_token,
}
}
/// Cancels the query running on user's compute node.
pub async fn try_cancel_query(self) -> anyhow::Result<()> {
let socket = TcpStream::connect(self.socket_addr).await?;
self.cancel_token.cancel_query_raw(socket, NoTls).await?;
Ok(())
}
}
/// Helper for registering query cancellation tokens.
pub struct Session<'a> {
/// The user-facing key identifying this session.
key: CancelKeyData,
/// The [`CancelMap`] this session belongs to.
cancel_map: &'a CancelMap,
}
impl<'a> Session<'a> {
fn new(key: CancelKeyData, cancel_map: &'a CancelMap) -> Self {
Self { key, cancel_map }
}
/// Store the cancel token for the given session.
/// This enables query cancellation in [`crate::proxy::handshake`].
pub fn enable_cancellation(self, cancel_closure: CancelClosure) -> CancelKeyData {
self.cancel_map
.0
.lock()
.insert(self.key, Some(cancel_closure));
self.key
}
}
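
A hedged sketch of how these pieces compose (the `demo` wrapper and the ready-made `closure` are assumptions; in the proxy the closure comes from connect_to_db):

    async fn demo(cancel_map: &CancelMap, closure: CancelClosure) -> anyhow::Result<()> {
        cancel_map
            .with_session(|session| async move {
                // Publish the cancel token under this session's random key...
                let key = session.enable_cancellation(closure);
                // ...so a CancelRequest carrying `key` can interrupt the query.
                cancel_map.cancel_session(key).await
            })
            .await
        // The scopeguard inside with_session has removed the key by now.
    }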

proxy/src/compute.rs Normal file
View File

@@ -0,0 +1,42 @@
use anyhow::Context;
use serde::{Deserialize, Serialize};
use std::net::{SocketAddr, ToSocketAddrs};
/// Compute node connection params.
#[derive(Serialize, Deserialize, Debug, Default)]
pub struct DatabaseInfo {
pub host: String,
pub port: u16,
pub dbname: String,
pub user: String,
pub password: Option<String>,
}
impl DatabaseInfo {
pub fn socket_addr(&self) -> anyhow::Result<SocketAddr> {
let host_port = format!("{}:{}", self.host, self.port);
host_port
.to_socket_addrs()
.with_context(|| format!("cannot resolve {} to SocketAddr", host_port))?
.next()
.context("cannot resolve at least one SocketAddr")
}
}
impl From<DatabaseInfo> for tokio_postgres::Config {
fn from(db_info: DatabaseInfo) -> Self {
let mut config = tokio_postgres::Config::new();
config
.host(&db_info.host)
.port(db_info.port)
.dbname(&db_info.dbname)
.user(&db_info.user);
if let Some(password) = db_info.password {
config.password(password);
}
config
}
}
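
Usage sketch for DatabaseInfo (values are placeholders): resolve the address first, then consume the struct to build a tokio_postgres::Config via the From impl above:

    fn demo() -> anyhow::Result<()> {
        let db = DatabaseInfo {
            host: "127.0.0.1".into(),
            port: 5432,
            dbname: "postgres".into(),
            user: "alice".into(),
            password: None,
        };
        let _addr = db.socket_addr()?; // resolves "127.0.0.1:5432"
        let _config: tokio_postgres::Config = db.into(); // moves db
        Ok(())
    }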

View File

@@ -1,17 +1,52 @@
use crate::cplane_api::DatabaseInfo;
use anyhow::{anyhow, ensure, Context};
use rustls::{internal::pemfile, NoClientAuth, ProtocolVersion, ServerConfig};
use std::net::SocketAddr;
use std::str::FromStr;
use std::sync::Arc;
pub type SslConfig = Arc<ServerConfig>;
pub type TlsConfig = Arc<ServerConfig>;
#[non_exhaustive]
pub enum ClientAuthMethod {
Password,
Link,
/// Use password auth only if username ends with "@zenith"
Mixed,
}
pub enum RouterConfig {
Static { host: String, port: u16 },
Dynamic(ClientAuthMethod),
}
impl FromStr for ClientAuthMethod {
type Err = anyhow::Error;
fn from_str(s: &str) -> anyhow::Result<Self> {
use ClientAuthMethod::*;
match s {
"password" => Ok(Password),
"link" => Ok(Link),
"mixed" => Ok(Mixed),
_ => Err(anyhow::anyhow!("Invlid option for router")),
}
}
}
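
A sketch of how these options are meant to be parsed and combined (hypothetical values; the error string matches the FromStr impl above):

    fn demo() -> anyhow::Result<()> {
        let method: ClientAuthMethod = "mixed".parse()?;
        let _dynamic = RouterConfig::Dynamic(method);
        // --static-router host:port pins every client to one compute node:
        let _static = RouterConfig::Static {
            host: "db.example.com".into(),
            port: 5432,
        };
        assert!("md5".parse::<ClientAuthMethod>().is_err()); // "Invalid option for router"
        Ok(())
    }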
pub struct ProxyConfig {
/// main entrypoint for users to connect to
pub proxy_address: SocketAddr,
/// http management endpoint. Upon user account creation control plane
/// method of assigning compute nodes
pub router_config: RouterConfig,
/// internally used for status and prometheus metrics
pub http_address: SocketAddr,
/// management endpoint. Upon user account creation control plane
/// will notify us here, so that we can 'unfreeze' user session.
/// TODO It uses postgres protocol over TCP but should be migrated to http.
pub mgmt_address: SocketAddr,
/// send unauthenticated users to this URI
@@ -20,26 +55,10 @@ pub struct ProxyConfig {
/// control plane address where we would check auth.
pub auth_endpoint: String,
pub ssl_config: Option<SslConfig>,
pub tls_config: Option<TlsConfig>,
}
pub type ProxyWaiters = crate::waiters::Waiters<Result<DatabaseInfo, String>>;
pub struct ProxyState {
pub conf: ProxyConfig,
pub waiters: ProxyWaiters,
}
impl ProxyState {
pub fn new(conf: ProxyConfig) -> Self {
Self {
conf,
waiters: ProxyWaiters::default(),
}
}
}
pub fn configure_ssl(key_path: &str, cert_path: &str) -> anyhow::Result<SslConfig> {
pub fn configure_ssl(key_path: &str, cert_path: &str) -> anyhow::Result<TlsConfig> {
let key = {
let key_bytes = std::fs::read(key_path).context("SSL key file")?;
let mut keys = pemfile::pkcs8_private_keys(&mut &key_bytes[..])

View File

@@ -1,106 +1,87 @@
use anyhow::{anyhow, bail, Context};
use crate::auth::ClientCredentials;
use crate::compute::DatabaseInfo;
use crate::waiters::{Waiter, Waiters};
use anyhow::{anyhow, bail};
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use std::net::{SocketAddr, ToSocketAddrs};
use crate::state::ProxyWaiters;
#[derive(Serialize, Deserialize, Debug, Default)]
pub struct DatabaseInfo {
pub host: String,
pub port: u16,
pub dbname: String,
pub user: String,
pub password: Option<String>,
lazy_static! {
static ref CPLANE_WAITERS: Waiters<Result<DatabaseInfo, String>> = Default::default();
}
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
enum ProxyAuthResponse {
Ready { conn_info: DatabaseInfo },
Error { error: String },
NotReady { ready: bool }, // TODO: get rid of `ready`
/// Give caller an opportunity to wait for cplane's reply.
pub async fn with_waiter<F, R, T>(psql_session_id: impl Into<String>, f: F) -> anyhow::Result<T>
where
F: FnOnce(Waiter<'static, Result<DatabaseInfo, String>>) -> R,
R: std::future::Future<Output = anyhow::Result<T>>,
{
let waiter = CPLANE_WAITERS.register(psql_session_id.into())?;
f(waiter).await
}
impl DatabaseInfo {
pub fn socket_addr(&self) -> anyhow::Result<SocketAddr> {
let host_port = format!("{}:{}", self.host, self.port);
host_port
.to_socket_addrs()
.with_context(|| format!("cannot resolve {} to SocketAddr", host_port))?
.next()
.ok_or_else(|| anyhow!("cannot resolve at least one SocketAddr"))
}
}
impl From<DatabaseInfo> for tokio_postgres::Config {
fn from(db_info: DatabaseInfo) -> Self {
let mut config = tokio_postgres::Config::new();
config
.host(&db_info.host)
.port(db_info.port)
.dbname(&db_info.dbname)
.user(&db_info.user);
if let Some(password) = db_info.password {
config.password(password);
}
config
}
pub fn notify(psql_session_id: &str, msg: Result<DatabaseInfo, String>) -> anyhow::Result<()> {
CPLANE_WAITERS.notify(psql_session_id, msg)
}
/// Zenith console API wrapper.
pub struct CPlaneApi<'a> {
auth_endpoint: &'a str,
waiters: &'a ProxyWaiters,
}
impl<'a> CPlaneApi<'a> {
pub fn new(auth_endpoint: &'a str, waiters: &'a ProxyWaiters) -> Self {
Self {
auth_endpoint,
waiters,
}
pub fn new(auth_endpoint: &'a str) -> Self {
Self { auth_endpoint }
}
}
impl CPlaneApi<'_> {
pub fn authenticate_proxy_request(
pub async fn authenticate_proxy_request(
&self,
user: &str,
database: &str,
creds: ClientCredentials,
md5_response: &[u8],
salt: &[u8; 4],
psql_session_id: &str,
) -> anyhow::Result<DatabaseInfo> {
let mut url = reqwest::Url::parse(self.auth_endpoint)?;
url.query_pairs_mut()
.append_pair("login", user)
.append_pair("database", database)
.append_pair("login", &creds.user)
.append_pair("database", &creds.dbname)
.append_pair("md5response", std::str::from_utf8(md5_response)?)
.append_pair("salt", &hex::encode(salt))
.append_pair("psql_session_id", psql_session_id);
let waiter = self.waiters.register(psql_session_id.to_owned());
with_waiter(psql_session_id, |waiter| async {
println!("cplane request: {}", url);
// TODO: leverage `reqwest::Client` to reuse connections
let resp = reqwest::get(url).await?;
if !resp.status().is_success() {
bail!("Auth failed: {}", resp.status())
}
println!("cplane request: {}", url);
let resp = reqwest::blocking::get(url)?;
if !resp.status().is_success() {
bail!("Auth failed: {}", resp.status())
}
let auth_info: ProxyAuthResponse = serde_json::from_str(resp.text().await?.as_str())?;
println!("got auth info: #{:?}", auth_info);
let auth_info: ProxyAuthResponse = serde_json::from_str(resp.text()?.as_str())?;
println!("got auth info: #{:?}", auth_info);
use ProxyAuthResponse::*;
match auth_info {
Ready { conn_info } => Ok(conn_info),
Error { error } => bail!(error),
NotReady { .. } => waiter.wait()?.map_err(|e| anyhow!(e)),
}
use ProxyAuthResponse::*;
match auth_info {
Ready { conn_info } => Ok(conn_info),
Error { error } => bail!(error),
NotReady { .. } => waiter.await?.map_err(|e| anyhow!(e)),
}
})
.await
}
}
// NOTE: the order of constructors is important.
// https://serde.rs/enum-representations.html#untagged
#[derive(Serialize, Deserialize, Debug)]
#[serde(untagged)]
enum ProxyAuthResponse {
Ready { conn_info: DatabaseInfo },
Error { error: String },
NotReady { ready: bool }, // TODO: get rid of `ready`
}
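
Why the constructor order matters: with #[serde(untagged)], serde tries the variants top to bottom and keeps the first one that fits. A sketch (the JSON payloads are illustrative):

    fn demo() -> anyhow::Result<()> {
        // No conn_info => Ready fails to match, Error is tried next and fits.
        let resp: ProxyAuthResponse = serde_json::from_str(r#"{"error": "no such user"}"#)?;
        assert!(matches!(resp, ProxyAuthResponse::Error { .. }));

        // Neither conn_info nor error => falls through to NotReady.
        let resp: ProxyAuthResponse = serde_json::from_str(r#"{"ready": false}"#)?;
        assert!(matches!(resp, ProxyAuthResponse::NotReady { .. }));
        Ok(())
    }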
#[cfg(test)]
mod tests {
use super::*;

proxy/src/http.rs Normal file
View File

@@ -0,0 +1,30 @@
use anyhow::anyhow;
use hyper::{Body, Request, Response, StatusCode};
use std::net::TcpListener;
use zenith_utils::http::endpoint;
use zenith_utils::http::error::ApiError;
use zenith_utils::http::json::json_response;
use zenith_utils::http::{RouterBuilder, RouterService};
async fn status_handler(_: Request<Body>) -> Result<Response<Body>, ApiError> {
Ok(json_response(StatusCode::OK, "")?)
}
fn make_router() -> RouterBuilder<hyper::Body, ApiError> {
let router = endpoint::make_router();
router.get("/v1/status", status_handler)
}
pub async fn thread_main(http_listener: TcpListener) -> anyhow::Result<()> {
scopeguard::defer! {
println!("http has shut down");
}
let service = || RouterService::new(make_router().build()?);
hyper::Server::from_tcp(http_listener)?
.serve(service().map_err(|e| anyhow!(e))?)
.await?;
Ok(())
}
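
A quick way to probe that endpoint, sketched with the reqwest dependency the proxy already has (7001 is the default --http address from main.rs):

    async fn demo() -> anyhow::Result<()> {
        let resp = reqwest::get("http://127.0.0.1:7001/v1/status").await?;
        anyhow::ensure!(resp.status().is_success(), "proxy status check failed");
        Ok(())
    }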

View File

@@ -5,112 +5,162 @@
/// (control plane API in our case) and can create new databases and accounts
/// in a somewhat transparent manner (again via communication with the control plane API).
///
use anyhow::bail;
use anyhow::{bail, Context};
use clap::{App, Arg};
use state::{ProxyConfig, ProxyState};
use std::thread;
use zenith_utils::{tcp_listener, GIT_VERSION};
use config::ProxyConfig;
use futures::FutureExt;
use std::future::Future;
use tokio::{net::TcpListener, task::JoinError};
use zenith_utils::GIT_VERSION;
use crate::config::{ClientAuthMethod, RouterConfig};
mod auth;
mod cancellation;
mod compute;
mod config;
mod cplane_api;
mod http;
mod mgmt;
mod proxy;
mod state;
mod stream;
mod waiters;
fn main() -> anyhow::Result<()> {
/// Flattens Result<Result<T>> into Result<T>.
async fn flatten_err(
f: impl Future<Output = Result<anyhow::Result<()>, JoinError>>,
) -> anyhow::Result<()> {
f.map(|r| r.context("join error").and_then(|x| x)).await
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
zenith_metrics::set_common_metrics_prefix("zenith_proxy");
let arg_matches = App::new("Zenith proxy/router")
.version(GIT_VERSION)
.arg(
Arg::with_name("proxy")
.short("p")
Arg::new("proxy")
.short('p')
.long("proxy")
.takes_value(true)
.help("listen for incoming client connections on ip:port")
.default_value("127.0.0.1:4432"),
)
.arg(
Arg::with_name("mgmt")
.short("m")
Arg::new("auth-method")
.long("auth-method")
.takes_value(true)
.help("Possible values: password | link | mixed")
.default_value("mixed"),
)
.arg(
Arg::new("static-router")
.short('s')
.long("static-router")
.takes_value(true)
.help("Route all clients to host:port"),
)
.arg(
Arg::new("mgmt")
.short('m')
.long("mgmt")
.takes_value(true)
.help("listen for management callback connection on ip:port")
.default_value("127.0.0.1:7000"),
)
.arg(
Arg::with_name("uri")
.short("u")
Arg::new("http")
.short('h')
.long("http")
.takes_value(true)
.help("listen for incoming http connections (metrics, etc) on ip:port")
.default_value("127.0.0.1:7001"),
)
.arg(
Arg::new("uri")
.short('u')
.long("uri")
.takes_value(true)
.help("redirect unauthenticated users to given uri")
.default_value("http://localhost:3000/psql_session/"),
)
.arg(
Arg::with_name("auth-endpoint")
.short("a")
Arg::new("auth-endpoint")
.short('a')
.long("auth-endpoint")
.takes_value(true)
.help("redirect unauthenticated users to given uri")
.help("API endpoint for authenticating users")
.default_value("http://localhost:3000/authenticate_proxy_request/"),
)
.arg(
Arg::with_name("ssl-key")
.short("k")
Arg::new("ssl-key")
.short('k')
.long("ssl-key")
.takes_value(true)
.help("path to SSL key for client postgres connections"),
)
.arg(
Arg::with_name("ssl-cert")
.short("c")
Arg::new("ssl-cert")
.short('c')
.long("ssl-cert")
.takes_value(true)
.help("path to SSL cert for client postgres connections"),
)
.get_matches();
let ssl_config = match (
let tls_config = match (
arg_matches.value_of("ssl-key"),
arg_matches.value_of("ssl-cert"),
) {
(Some(key_path), Some(cert_path)) => {
Some(crate::state::configure_ssl(key_path, cert_path)?)
}
(Some(key_path), Some(cert_path)) => Some(config::configure_ssl(key_path, cert_path)?),
(None, None) => None,
_ => bail!("either both or neither ssl-key and ssl-cert must be specified"),
};
let config = ProxyConfig {
let auth_method = arg_matches.value_of("auth-method").unwrap().parse()?;
let router_config = match arg_matches.value_of("static-router") {
None => RouterConfig::Dynamic(auth_method),
Some(addr) => {
if let ClientAuthMethod::Password = auth_method {
let (host, port) = addr.split_once(":").unwrap();
RouterConfig::Static {
host: host.to_string(),
port: port.parse().unwrap(),
}
} else {
bail!("static-router requires --auth-method password")
}
}
};
let config: &ProxyConfig = Box::leak(Box::new(ProxyConfig {
router_config,
proxy_address: arg_matches.value_of("proxy").unwrap().parse()?,
mgmt_address: arg_matches.value_of("mgmt").unwrap().parse()?,
http_address: arg_matches.value_of("http").unwrap().parse()?,
redirect_uri: arg_matches.value_of("uri").unwrap().parse()?,
auth_endpoint: arg_matches.value_of("auth-endpoint").unwrap().parse()?,
ssl_config,
};
let state: &ProxyState = Box::leak(Box::new(ProxyState::new(config)));
tls_config,
}));
println!("Version: {}", GIT_VERSION);
// Check that we can bind to address before further initialization
println!("Starting proxy on {}", state.conf.proxy_address);
let pageserver_listener = tcp_listener::bind(state.conf.proxy_address)?;
println!("Starting http on {}", config.http_address);
let http_listener = TcpListener::bind(config.http_address).await?.into_std()?;
println!("Starting mgmt on {}", state.conf.mgmt_address);
let mgmt_listener = tcp_listener::bind(state.conf.mgmt_address)?;
println!("Starting mgmt on {}", config.mgmt_address);
let mgmt_listener = TcpListener::bind(config.mgmt_address).await?.into_std()?;
let threads = [
// Spawn a thread to listen for connections. It will spawn further threads
// for each connection.
thread::Builder::new()
.name("Listener thread".into())
.spawn(move || proxy::thread_main(state, pageserver_listener))?,
thread::Builder::new()
.name("Mgmt thread".into())
.spawn(move || mgmt::thread_main(state, mgmt_listener))?,
];
println!("Starting proxy on {}", config.proxy_address);
let proxy_listener = TcpListener::bind(config.proxy_address).await?;
for t in threads {
t.join().unwrap()?;
}
let http = tokio::spawn(http::thread_main(http_listener));
let proxy = tokio::spawn(proxy::thread_main(config, proxy_listener));
let mgmt = tokio::task::spawn_blocking(move || mgmt::thread_main(mgmt_listener));
let tasks = [flatten_err(http), flatten_err(proxy), flatten_err(mgmt)];
let _: Vec<()> = futures::future::try_join_all(tasks).await?;
Ok(())
}
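
The reason flatten_err exists: tokio::spawn wraps the task's own Result in a Result<_, JoinError>, and try_join_all wants a single error layer. A minimal sketch:

    async fn demo() -> anyhow::Result<()> {
        let task = tokio::spawn(async { Ok::<(), anyhow::Error>(()) });
        // Result<anyhow::Result<()>, JoinError> collapses into anyhow::Result<()>.
        flatten_err(task).await
    }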

View File

@@ -1,44 +1,49 @@
use crate::{compute::DatabaseInfo, cplane_api};
use anyhow::Context;
use serde::Deserialize;
use std::{
net::{TcpListener, TcpStream},
thread,
};
use serde::Deserialize;
use zenith_utils::{
postgres_backend::{self, AuthType, PostgresBackend},
pq_proto::{BeMessage, SINGLE_COL_ROWDESC},
};
use crate::{cplane_api::DatabaseInfo, ProxyState};
///
/// Main proxy listener loop.
///
/// Listens for connections, and launches a new handler thread for each.
///
pub fn thread_main(state: &'static ProxyState, listener: TcpListener) -> anyhow::Result<()> {
pub fn thread_main(listener: TcpListener) -> anyhow::Result<()> {
scopeguard::defer! {
println!("mgmt has shut down");
}
listener
.set_nonblocking(false)
.context("failed to set listener to blocking")?;
loop {
let (socket, peer_addr) = listener.accept()?;
let (socket, peer_addr) = listener.accept().context("failed to accept a new client")?;
println!("accepted connection from {}", peer_addr);
socket.set_nodelay(true).unwrap();
socket
.set_nodelay(true)
.context("failed to set client socket option")?;
thread::spawn(move || {
if let Err(err) = handle_connection(state, socket) {
if let Err(err) = handle_connection(socket) {
println!("error: {}", err);
}
});
}
}
fn handle_connection(state: &ProxyState, socket: TcpStream) -> anyhow::Result<()> {
let mut conn_handler = MgmtHandler { state };
fn handle_connection(socket: TcpStream) -> anyhow::Result<()> {
let pgbackend = PostgresBackend::new(socket, AuthType::Trust, None, true)?;
pgbackend.run(&mut conn_handler)
pgbackend.run(&mut MgmtHandler)
}
struct MgmtHandler<'a> {
state: &'a ProxyState,
}
struct MgmtHandler;
/// Serialized examples:
// {
@@ -74,13 +79,13 @@ enum PsqlSessionResult {
Failure(String),
}
impl postgres_backend::Handler for MgmtHandler<'_> {
impl postgres_backend::Handler for MgmtHandler {
fn process_query(
&mut self,
pgb: &mut PostgresBackend,
query_string: &str,
) -> anyhow::Result<()> {
let res = try_process_query(self, pgb, query_string);
let res = try_process_query(pgb, query_string);
// intercept and log error message
if res.is_err() {
println!("Mgmt query failed: #{:?}", res);
@@ -89,11 +94,7 @@ impl postgres_backend::Handler for MgmtHandler<'_> {
}
}
fn try_process_query(
mgmt: &mut MgmtHandler,
pgb: &mut PostgresBackend,
query_string: &str,
) -> anyhow::Result<()> {
fn try_process_query(pgb: &mut PostgresBackend, query_string: &str) -> anyhow::Result<()> {
println!("Got mgmt query: '{}'", query_string);
let resp: PsqlSessionResponse = serde_json::from_str(query_string)?;
@@ -104,14 +105,14 @@ fn try_process_query(
Failure(message) => Err(message),
};
match mgmt.state.waiters.notify(&resp.session_id, msg) {
match cplane_api::notify(&resp.session_id, msg) {
Ok(()) => {
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(b"ok")]))?
.write_message(&BeMessage::CommandComplete(b"SELECT 1"))?;
}
Err(e) => {
pgb.write_message(&BeMessage::ErrorResponse(e.to_string()))?;
pgb.write_message(&BeMessage::ErrorResponse(&e.to_string()))?;
}
}
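
In effect the handler boils down to the call below (a sketch; the exact PsqlSessionResponse fields are truncated above, but a session_id plus a success/failure payload are visible):

    fn demo(session_id: &str, db_info: DatabaseInfo) -> anyhow::Result<()> {
        // The web console reports a ready database; wake up the client
        // parked in cplane_api::with_waiter.
        cplane_api::notify(session_id, Ok(db_info))
    }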

View File

@@ -1,363 +1,332 @@
use crate::cplane_api::{CPlaneApi, DatabaseInfo};
use crate::ProxyState;
use anyhow::{anyhow, bail};
use crate::auth;
use crate::cancellation::{self, CancelClosure, CancelMap};
use crate::compute::DatabaseInfo;
use crate::config::{ProxyConfig, TlsConfig};
use crate::stream::{MetricsStream, PqStream, Stream};
use anyhow::{bail, Context};
use lazy_static::lazy_static;
use parking_lot::Mutex;
use rand::prelude::StdRng;
use rand::{Rng, SeedableRng};
use std::cell::Cell;
use std::collections::HashMap;
use std::net::{SocketAddr, TcpStream};
use std::{io, thread};
use std::sync::Arc;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::net::TcpStream;
use tokio_postgres::NoTls;
use zenith_utils::postgres_backend::{self, PostgresBackend, ProtoState, Stream};
use zenith_utils::pq_proto::{BeMessage as Be, FeMessage as Fe, *};
use zenith_utils::sock_split::{ReadStream, WriteStream};
struct CancelClosure {
socket_addr: SocketAddr,
cancel_token: tokio_postgres::CancelToken,
}
impl CancelClosure {
async fn try_cancel_query(&self) {
if let Ok(socket) = tokio::net::TcpStream::connect(self.socket_addr).await {
// NOTE ignoring the result because:
// 1. This is a best effort attempt, the database doesn't have to listen
// 2. Being opaque about errors here helps avoid leaking info to unauthenticated user
let _ = self.cancel_token.cancel_query_raw(socket, NoTls).await;
}
}
}
use zenith_metrics::{new_common_metric_name, register_int_counter, IntCounter};
use zenith_utils::pq_proto::{BeMessage as Be, *};
lazy_static! {
// Enables serving CancelRequests
static ref CANCEL_MAP: Mutex<HashMap<CancelKeyData, CancelClosure>> = Mutex::new(HashMap::new());
}
thread_local! {
// Used to clean up the CANCEL_MAP. Might not be necessary if we use tokio thread pool in main loop.
static THREAD_CANCEL_KEY_DATA: Cell<Option<CancelKeyData>> = Cell::new(None);
}
///
/// Main proxy listener loop.
///
/// Listens for connections, and launches a new handler thread for each.
///
pub fn thread_main(
state: &'static ProxyState,
listener: std::net::TcpListener,
) -> anyhow::Result<()> {
loop {
let (socket, peer_addr) = listener.accept()?;
println!("accepted connection from {}", peer_addr);
socket.set_nodelay(true).unwrap();
// TODO Use a threadpool instead. Maybe use tokio's threadpool by
// spawning a future into its runtime. Tokio's JoinError should
// allow us to handle cleanup properly even if the future panics.
thread::Builder::new()
.name("Proxy thread".into())
.spawn(move || {
if let Err(err) = proxy_conn_main(state, socket) {
println!("error: {}", err);
}
// Clean up CANCEL_MAP.
THREAD_CANCEL_KEY_DATA.with(|cell| {
if let Some(cancel_key_data) = cell.get() {
CANCEL_MAP.lock().remove(&cancel_key_data);
};
});
})?;
}
}
// TODO: clean up fields
struct ProxyConnection {
state: &'static ProxyState,
psql_session_id: String,
pgb: PostgresBackend,
}
pub fn proxy_conn_main(state: &'static ProxyState, socket: TcpStream) -> anyhow::Result<()> {
let conn = ProxyConnection {
state,
psql_session_id: hex::encode(rand::random::<[u8; 8]>()),
pgb: PostgresBackend::new(
socket,
postgres_backend::AuthType::MD5,
state.conf.ssl_config.clone(),
false,
)?,
};
let (client, server) = match conn.handle_client()? {
Some(x) => x,
None => return Ok(()),
};
let server = zenith_utils::sock_split::BidiStream::from_tcp(server);
let client = match client {
Stream::Bidirectional(bidi_stream) => bidi_stream,
_ => panic!("invalid stream type"),
};
proxy(client.split(), server.split())
}
impl ProxyConnection {
/// Returns Ok(None) when connection was successfully closed.
fn handle_client(mut self) -> anyhow::Result<Option<(Stream, TcpStream)>> {
let mut authenticate = || {
let (username, dbname) = match self.handle_startup()? {
Some(x) => x,
None => return Ok(None),
};
// Both scenarios here should end up producing database credentials
if username.ends_with("@zenith") {
self.handle_existing_user(&username, &dbname).map(Some)
} else {
self.handle_new_user().map(Some)
}
};
let conn = match authenticate() {
Ok(Some(db_info)) => connect_to_db(db_info),
Ok(None) => return Ok(None),
Err(e) => {
// Report the error to the client
self.pgb.write_message(&Be::ErrorResponse(e.to_string()))?;
bail!("failed to handle client: {:?}", e);
}
};
// We'll get rid of this once migration to async is complete
let (pg_version, db_stream) = {
let runtime = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()?;
let (pg_version, stream, cancel_key_data) = runtime.block_on(conn)?;
self.pgb
.write_message(&BeMessage::BackendKeyData(cancel_key_data))?;
let stream = stream.into_std()?;
stream.set_nonblocking(false)?;
(pg_version, stream)
};
// Let the client send new requests
self.pgb
.write_message_noflush(&BeMessage::ParameterStatus(
BeParameterStatusMessage::ServerVersion(&pg_version),
))?
.write_message(&Be::ReadyForQuery)?;
Ok(Some((self.pgb.into_stream(), db_stream)))
}
/// Returns Ok(None) when connection was successfully closed.
fn handle_startup(&mut self) -> anyhow::Result<Option<(String, String)>> {
let have_tls = self.pgb.tls_config.is_some();
let mut encrypted = false;
loop {
let msg = match self.pgb.read_message()? {
Some(Fe::StartupPacket(msg)) => msg,
None => bail!("connection is lost"),
bad => bail!("unexpected message type: {:?}", bad),
};
println!("got message: {:?}", msg);
match msg {
FeStartupPacket::GssEncRequest => {
self.pgb.write_message(&Be::EncryptionResponse(false))?;
}
FeStartupPacket::SslRequest => {
self.pgb.write_message(&Be::EncryptionResponse(have_tls))?;
if have_tls {
self.pgb.start_tls()?;
encrypted = true;
}
}
FeStartupPacket::StartupMessage { mut params, .. } => {
if have_tls && !encrypted {
bail!("must connect with TLS");
}
let mut get_param = |key| {
params
.remove(key)
.ok_or_else(|| anyhow!("{} is missing in startup packet", key))
};
return Ok(Some((get_param("user")?, get_param("database")?)));
}
FeStartupPacket::CancelRequest(cancel_key_data) => {
if let Some(cancel_closure) = CANCEL_MAP.lock().get(&cancel_key_data) {
let runtime = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap();
runtime.block_on(cancel_closure.try_cancel_query());
}
return Ok(None);
}
}
}
}
fn handle_existing_user(&mut self, user: &str, db: &str) -> anyhow::Result<DatabaseInfo> {
let md5_salt = rand::random::<[u8; 4]>();
// Ask password
self.pgb
.write_message(&Be::AuthenticationMD5Password(&md5_salt))?;
self.pgb.state = ProtoState::Authentication; // XXX
// Check password
let msg = match self.pgb.read_message()? {
Some(Fe::PasswordMessage(msg)) => msg,
None => bail!("connection is lost"),
bad => bail!("unexpected message type: {:?}", bad),
};
println!("got message: {:?}", msg);
let (_trailing_null, md5_response) = msg
.split_last()
.ok_or_else(|| anyhow!("unexpected password message"))?;
let cplane = CPlaneApi::new(&self.state.conf.auth_endpoint, &self.state.waiters);
let db_info = cplane.authenticate_proxy_request(
user,
db,
md5_response,
&md5_salt,
&self.psql_session_id,
)?;
self.pgb
.write_message_noflush(&Be::AuthenticationOk)?
.write_message_noflush(&BeParameterStatusMessage::encoding())?;
Ok(db_info)
}
fn handle_new_user(&mut self) -> anyhow::Result<DatabaseInfo> {
let greeting = hello_message(&self.state.conf.redirect_uri, &self.psql_session_id);
// First, register this session
let waiter = self.state.waiters.register(self.psql_session_id.clone());
// Give user a URL to spawn a new database
self.pgb
.write_message_noflush(&Be::AuthenticationOk)?
.write_message_noflush(&BeParameterStatusMessage::encoding())?
.write_message(&Be::NoticeResponse(greeting))?;
// Wait for web console response
let db_info = waiter.wait()?.map_err(|e| anyhow!(e))?;
self.pgb
.write_message_noflush(&Be::NoticeResponse("Connecting to database.".into()))?;
Ok(db_info)
}
}
fn hello_message(redirect_uri: &str, session_id: &str) -> String {
format!(
concat![
"☀️ Welcome to Zenith!\n",
"To proceed with database creation, open the following link:\n\n",
" {redirect_uri}{session_id}\n\n",
"It needs to be done once and we will send you '.pgpass' file,\n",
"which will allow you to access or create ",
"databases without opening your web browser."
],
redirect_uri = redirect_uri,
session_id = session_id,
static ref NUM_CONNECTIONS_ACCEPTED_COUNTER: IntCounter = register_int_counter!(
new_common_metric_name("num_connections_accepted"),
"Number of TCP client connections accepted."
)
.unwrap();
static ref NUM_CONNECTIONS_CLOSED_COUNTER: IntCounter = register_int_counter!(
new_common_metric_name("num_connections_closed"),
"Number of TCP client connections closed."
)
.unwrap();
static ref NUM_BYTES_PROXIED_COUNTER: IntCounter = register_int_counter!(
new_common_metric_name("num_bytes_proxied"),
"Number of bytes sent/received between any client and backend."
)
.unwrap();
}
/// Create a TCP connection to a postgres database, authenticate with it, and receive the ReadyForQuery message
async fn connect_to_db(
db_info: DatabaseInfo,
) -> anyhow::Result<(String, tokio::net::TcpStream, CancelKeyData)> {
// Make raw connection. When connect_raw finishes we've received ReadyForQuery.
let socket_addr = db_info.socket_addr()?;
let mut socket = tokio::net::TcpStream::connect(socket_addr).await?;
let config = tokio_postgres::Config::from(db_info);
// NOTE We effectively ignore some ParameterStatus and NoticeResponse
// messages here. Not sure if that could break something.
let (client, conn) = config.connect_raw(&mut socket, NoTls).await?;
// Save info for potentially cancelling the query later
let mut rng = StdRng::from_entropy();
let cancel_key_data = CancelKeyData {
// HACK We'd rather get the real backend_pid but tokio_postgres doesn't
// expose it and we don't want to do another roundtrip to query
// for it. The client will be able to notice that this is not the
// actual backend_pid, but backend_pid is not used for anything
// so it doesn't matter.
backend_pid: rng.gen(),
cancel_key: rng.gen(),
};
let cancel_closure = CancelClosure {
socket_addr,
cancel_token: client.cancel_token(),
};
CANCEL_MAP.lock().insert(cancel_key_data, cancel_closure);
THREAD_CANCEL_KEY_DATA.with(|cell| {
let prev_value = cell.replace(Some(cancel_key_data));
assert!(
prev_value.is_none(),
"THREAD_CANCEL_KEY_DATA was already set"
);
});
let version = conn.parameter("server_version").unwrap();
Ok((version.into(), socket, cancel_key_data))
async fn log_error<R, F>(future: F) -> F::Output
where
F: std::future::Future<Output = anyhow::Result<R>>,
{
future.await.map_err(|err| {
println!("error: {}", err);
err
})
}
/// Concurrently proxy both directions of the client and server connections
fn proxy(
(client_read, client_write): (ReadStream, WriteStream),
(server_read, server_write): (ReadStream, WriteStream),
pub async fn thread_main(
config: &'static ProxyConfig,
listener: tokio::net::TcpListener,
) -> anyhow::Result<()> {
fn do_proxy(mut reader: impl io::Read, mut writer: WriteStream) -> io::Result<u64> {
/// FlushWriter will make sure that every message is sent as soon as possible
struct FlushWriter<W>(W);
impl<W: io::Write> io::Write for FlushWriter<W> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// `std::io::copy` is guaranteed to exit if we return an error,
// so we can afford to lose `res` in case `flush` fails
let res = self.0.write(buf);
if res.is_ok() {
self.flush()?;
}
res
}
fn flush(&mut self) -> io::Result<()> {
self.0.flush()
}
}
let res = std::io::copy(&mut reader, &mut FlushWriter(&mut writer));
writer.shutdown(std::net::Shutdown::Both)?;
res
scopeguard::defer! {
println!("proxy has shut down");
}
let client_to_server_jh = thread::spawn(move || do_proxy(client_read, server_write));
let cancel_map = Arc::new(CancelMap::default());
loop {
let (socket, peer_addr) = listener.accept().await?;
println!("accepted connection from {}", peer_addr);
do_proxy(server_read, client_write)?;
client_to_server_jh.join().unwrap()?;
let cancel_map = Arc::clone(&cancel_map);
tokio::spawn(log_error(async move {
socket
.set_nodelay(true)
.context("failed to set socket option")?;
handle_client(config, &cancel_map, socket).await
}));
}
}
async fn handle_client(
config: &ProxyConfig,
cancel_map: &CancelMap,
stream: impl AsyncRead + AsyncWrite + Unpin,
) -> anyhow::Result<()> {
// The `closed` counter will increase when this future is destroyed.
NUM_CONNECTIONS_ACCEPTED_COUNTER.inc();
scopeguard::defer! {
NUM_CONNECTIONS_CLOSED_COUNTER.inc();
}
let tls = config.tls_config.clone();
if let Some((client, creds)) = handshake(stream, tls, cancel_map).await? {
cancel_map
.with_session(|session| async {
connect_client_to_db(config, session, client, creds).await
})
.await?;
}
Ok(())
}
/// Handle a connection from one client.
/// For better testing experience, `stream` can be
/// any object satisfying the traits.
async fn handshake<S: AsyncRead + AsyncWrite + Unpin>(
stream: S,
mut tls: Option<TlsConfig>,
cancel_map: &CancelMap,
) -> anyhow::Result<Option<(PqStream<Stream<S>>, auth::ClientCredentials)>> {
// Client may try upgrading to each protocol only once
let (mut tried_ssl, mut tried_gss) = (false, false);
let mut stream = PqStream::new(Stream::from_raw(stream));
loop {
let msg = stream.read_startup_packet().await?;
println!("got message: {:?}", msg);
use FeStartupPacket::*;
match msg {
SslRequest => match stream.get_ref() {
Stream::Raw { .. } if !tried_ssl => {
tried_ssl = true;
// We can't perform TLS handshake without a config
let enc = tls.is_some();
stream.write_message(&Be::EncryptionResponse(enc)).await?;
if let Some(tls) = tls.take() {
// Upgrade raw stream into a secure TLS-backed stream.
// NOTE: We've consumed `tls`; this fact will be used later.
stream = PqStream::new(stream.into_inner().upgrade(tls).await?);
}
}
_ => bail!("protocol violation"),
},
GssEncRequest => match stream.get_ref() {
Stream::Raw { .. } if !tried_gss => {
tried_gss = true;
// Currently, we don't support GSSAPI
stream.write_message(&Be::EncryptionResponse(false)).await?;
}
_ => bail!("protocol violation"),
},
StartupMessage { params, .. } => {
// Check that the config has been consumed during upgrade
// OR we didn't provide it at all (for dev purposes).
if tls.is_some() {
let msg = "connection is insecure (try using `sslmode=require`)";
stream.write_message(&Be::ErrorResponse(msg)).await?;
bail!(msg);
}
break Ok(Some((stream, params.try_into()?)));
}
CancelRequest(cancel_key_data) => {
cancel_map.cancel_session(cancel_key_data).await?;
break Ok(None);
}
}
}
}
async fn connect_client_to_db(
config: &ProxyConfig,
session: cancellation::Session<'_>,
mut client: PqStream<impl AsyncRead + AsyncWrite + Unpin>,
creds: auth::ClientCredentials,
) -> anyhow::Result<()> {
let db_info = creds.authenticate(config, &mut client).await?;
let (db, version, cancel_closure) = connect_to_db(db_info).await?;
let cancel_key_data = session.enable_cancellation(cancel_closure);
client
.write_message_noflush(&BeMessage::ParameterStatus(
BeParameterStatusMessage::ServerVersion(&version),
))?
.write_message_noflush(&Be::BackendKeyData(cancel_key_data))?
.write_message(&BeMessage::ReadyForQuery)
.await?;
// This function will be called for writes to either direction.
fn inc_proxied(cnt: usize) {
// Consider inventing something more sophisticated
// if this ever becomes a bottleneck (cacheline bouncing).
NUM_BYTES_PROXIED_COUNTER.inc_by(cnt as u64);
}
let mut db = MetricsStream::new(db, inc_proxied);
let mut client = MetricsStream::new(client.into_inner(), inc_proxied);
let _ = tokio::io::copy_bidirectional(&mut client, &mut db).await?;
Ok(())
}
/// Connect to a corresponding compute node.
async fn connect_to_db(
db_info: DatabaseInfo,
) -> anyhow::Result<(TcpStream, String, CancelClosure)> {
// TODO: establish a secure connection to the DB
let socket_addr = db_info.socket_addr()?;
let mut socket = TcpStream::connect(socket_addr).await?;
let (client, conn) = tokio_postgres::Config::from(db_info)
.connect_raw(&mut socket, NoTls)
.await?;
let version = conn
.parameter("server_version")
.context("failed to fetch postgres server version")?
.into();
let cancel_closure = CancelClosure::new(socket_addr, client.cancel_token());
Ok((socket, version, cancel_closure))
}
#[cfg(test)]
mod tests {
use super::*;
use tokio::io::DuplexStream;
use tokio_postgres::config::SslMode;
use tokio_postgres::tls::MakeTlsConnect;
use tokio_postgres_rustls::MakeRustlsConnect;
async fn dummy_proxy(
client: impl AsyncRead + AsyncWrite + Unpin,
tls: Option<TlsConfig>,
) -> anyhow::Result<()> {
let cancel_map = CancelMap::default();
// TODO: add some infra + tests for credentials
let (mut stream, _creds) = handshake(client, tls, &cancel_map)
.await?
.context("no stream")?;
stream
.write_message_noflush(&Be::AuthenticationOk)?
.write_message_noflush(&BeParameterStatusMessage::encoding())?
.write_message(&BeMessage::ReadyForQuery)
.await?;
Ok(())
}
fn generate_certs(
hostname: &str,
) -> anyhow::Result<(rustls::Certificate, rustls::Certificate, rustls::PrivateKey)> {
let ca = rcgen::Certificate::from_params({
let mut params = rcgen::CertificateParams::default();
params.is_ca = rcgen::IsCa::Ca(rcgen::BasicConstraints::Unconstrained);
params
})?;
let cert = rcgen::generate_simple_self_signed(vec![hostname.into()])?;
Ok((
rustls::Certificate(ca.serialize_der()?),
rustls::Certificate(cert.serialize_der_with_signer(&ca)?),
rustls::PrivateKey(cert.serialize_private_key_der()),
))
}
#[tokio::test]
async fn handshake_tls_is_enforced_by_proxy() -> anyhow::Result<()> {
let (client, server) = tokio::io::duplex(1024);
let server_config = {
let (_ca, cert, key) = generate_certs("localhost")?;
let mut config = rustls::ServerConfig::new(rustls::NoClientAuth::new());
config.set_single_cert(vec![cert], key)?;
config
};
let proxy = tokio::spawn(dummy_proxy(client, Some(server_config.into())));
tokio_postgres::Config::new()
.user("john_doe")
.dbname("earth")
.ssl_mode(SslMode::Disable)
.connect_raw(server, NoTls)
.await
.err() // -> Option<E>
.context("client shouldn't be able to connect")?;
proxy
.await?
.err() // -> Option<E>
.context("server shouldn't accept client")?;
Ok(())
}
#[tokio::test]
async fn handshake_tls() -> anyhow::Result<()> {
let (client, server) = tokio::io::duplex(1024);
let (ca, cert, key) = generate_certs("localhost")?;
let server_config = {
let mut config = rustls::ServerConfig::new(rustls::NoClientAuth::new());
config.set_single_cert(vec![cert], key)?;
config
};
let proxy = tokio::spawn(dummy_proxy(client, Some(server_config.into())));
let client_config = {
let mut config = rustls::ClientConfig::new();
config.root_store.add(&ca)?;
config
};
let mut mk = MakeRustlsConnect::new(client_config);
let tls = MakeTlsConnect::<DuplexStream>::make_tls_connect(&mut mk, "localhost")?;
let (_client, _conn) = tokio_postgres::Config::new()
.user("john_doe")
.dbname("earth")
.ssl_mode(SslMode::Require)
.connect_raw(server, tls)
.await?;
proxy.await?
}
#[tokio::test]
async fn handshake_raw() -> anyhow::Result<()> {
let (client, server) = tokio::io::duplex(1024);
let proxy = tokio::spawn(dummy_proxy(client, None));
let (_client, _conn) = tokio_postgres::Config::new()
.user("john_doe")
.dbname("earth")
.ssl_mode(SslMode::Prefer)
.connect_raw(server, NoTls)
.await?;
proxy.await?
}
}

proxy/src/stream.rs Normal file
View File

@@ -0,0 +1,230 @@
use anyhow::Context;
use bytes::BytesMut;
use pin_project_lite::pin_project;
use rustls::ServerConfig;
use std::pin::Pin;
use std::sync::Arc;
use std::{io, task};
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadBuf};
use tokio_rustls::server::TlsStream;
use zenith_utils::pq_proto::{BeMessage, FeMessage, FeStartupPacket};
pin_project! {
/// Stream wrapper which implements libpq's protocol.
/// NOTE: This object deliberately doesn't implement [`AsyncRead`]
/// or [`AsyncWrite`] to prevent subtle errors (e.g. trying
/// to pass random malformed bytes through the connection).
pub struct PqStream<S> {
#[pin]
stream: S,
buffer: BytesMut,
}
}
impl<S> PqStream<S> {
/// Construct a new libpq protocol wrapper.
pub fn new(stream: S) -> Self {
Self {
stream,
buffer: Default::default(),
}
}
/// Extract the underlying stream.
pub fn into_inner(self) -> S {
self.stream
}
/// Get a reference to the underlying stream.
pub fn get_ref(&self) -> &S {
&self.stream
}
}
impl<S: AsyncRead + Unpin> PqStream<S> {
/// Receive [`FeStartupPacket`], which is a first packet sent by a client.
pub async fn read_startup_packet(&mut self) -> anyhow::Result<FeStartupPacket> {
match FeStartupPacket::read_fut(&mut self.stream).await? {
Some(FeMessage::StartupPacket(packet)) => Ok(packet),
None => anyhow::bail!("connection is lost"),
other => anyhow::bail!("bad message type: {:?}", other),
}
}
pub async fn read_message(&mut self) -> anyhow::Result<FeMessage> {
FeMessage::read_fut(&mut self.stream)
.await?
.context("connection is lost")
}
}
impl<S: AsyncWrite + Unpin> PqStream<S> {
/// Write the message into an internal buffer, but don't flush the underlying stream.
pub fn write_message_noflush<'a>(&mut self, message: &BeMessage<'a>) -> io::Result<&mut Self> {
BeMessage::write(&mut self.buffer, message)?;
Ok(self)
}
/// Write the message into an internal buffer and flush it.
pub async fn write_message<'a>(&mut self, message: &BeMessage<'a>) -> io::Result<&mut Self> {
self.write_message_noflush(message)?;
self.flush().await?;
Ok(self)
}
/// Flush the output buffer into the underlying stream.
pub async fn flush(&mut self) -> io::Result<&mut Self> {
self.stream.write_all(&self.buffer).await?;
self.buffer.clear();
self.stream.flush().await?;
Ok(self)
}
}
pin_project! {
/// Wrapper for upgrading raw streams into secure streams.
/// NOTE: it should be possible to decompose this object as necessary.
#[project = StreamProj]
pub enum Stream<S> {
/// We always begin with a raw stream,
/// which may then be upgraded into a secure stream.
Raw { #[pin] raw: S },
/// We box [`TlsStream`] since it can be quite large.
Tls { #[pin] tls: Box<TlsStream<S>> },
}
}
impl<S> Stream<S> {
/// Construct a new instance from a raw stream.
pub fn from_raw(raw: S) -> Self {
Self::Raw { raw }
}
}
impl<S: AsyncRead + AsyncWrite + Unpin> Stream<S> {
/// If possible, upgrade raw stream into a secure TLS-based stream.
pub async fn upgrade(self, cfg: Arc<ServerConfig>) -> anyhow::Result<Self> {
match self {
Stream::Raw { raw } => {
let tls = Box::new(tokio_rustls::TlsAcceptor::from(cfg).accept(raw).await?);
Ok(Stream::Tls { tls })
}
Stream::Tls { .. } => anyhow::bail!("can't upgrade TLS stream"),
}
}
}
impl<S: AsyncRead + AsyncWrite + Unpin> AsyncRead for Stream<S> {
fn poll_read(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
buf: &mut ReadBuf<'_>,
) -> task::Poll<io::Result<()>> {
use StreamProj::*;
match self.project() {
Raw { raw } => raw.poll_read(context, buf),
Tls { tls } => tls.poll_read(context, buf),
}
}
}
impl<S: AsyncRead + AsyncWrite + Unpin> AsyncWrite for Stream<S> {
fn poll_write(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
buf: &[u8],
) -> task::Poll<io::Result<usize>> {
use StreamProj::*;
match self.project() {
Raw { raw } => raw.poll_write(context, buf),
Tls { tls } => tls.poll_write(context, buf),
}
}
fn poll_flush(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
) -> task::Poll<io::Result<()>> {
use StreamProj::*;
match self.project() {
Raw { raw } => raw.poll_flush(context),
Tls { tls } => tls.poll_flush(context),
}
}
fn poll_shutdown(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
) -> task::Poll<io::Result<()>> {
use StreamProj::*;
match self.project() {
Raw { raw } => raw.poll_shutdown(context),
Tls { tls } => tls.poll_shutdown(context),
}
}
}
pin_project! {
/// This stream tracks all writes and calls user provided
/// callback when the underlying stream is flushed.
pub struct MetricsStream<S, W> {
#[pin]
stream: S,
write_count: usize,
inc_write_count: W,
}
}
impl<S, W> MetricsStream<S, W> {
pub fn new(stream: S, inc_write_count: W) -> Self {
Self {
stream,
write_count: 0,
inc_write_count,
}
}
}
impl<S: AsyncRead + Unpin, W> AsyncRead for MetricsStream<S, W> {
fn poll_read(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
buf: &mut ReadBuf<'_>,
) -> task::Poll<io::Result<()>> {
self.project().stream.poll_read(context, buf)
}
}
impl<S: AsyncWrite + Unpin, W: FnMut(usize)> AsyncWrite for MetricsStream<S, W> {
fn poll_write(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
buf: &[u8],
) -> task::Poll<io::Result<usize>> {
let this = self.project();
this.stream.poll_write(context, buf).map_ok(|cnt| {
// Increment the write count.
*this.write_count += cnt;
cnt
})
}
fn poll_flush(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
) -> task::Poll<io::Result<()>> {
let this = self.project();
this.stream.poll_flush(context).map_ok(|()| {
// Call the user provided callback and reset the write count.
(this.inc_write_count)(*this.write_count);
*this.write_count = 0;
})
}
fn poll_shutdown(
self: Pin<&mut Self>,
context: &mut task::Context<'_>,
) -> task::Poll<io::Result<()>> {
self.project().stream.poll_shutdown(context)
}
}
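
Usage sketch for MetricsStream (the callback here just prints; the proxy feeds NUM_BYTES_PROXIED_COUNTER instead):

    async fn demo(inner: tokio::net::TcpStream) -> std::io::Result<()> {
        use tokio::io::AsyncWriteExt;
        let mut stream = MetricsStream::new(inner, |n| println!("flushed {} bytes", n));
        stream.write_all(b"hello").await?;
        stream.flush().await?; // the callback fires with the accumulated write count
        Ok(())
    }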

View File

@@ -1,8 +1,12 @@
use anyhow::{anyhow, Context};
use std::collections::HashMap;
use std::sync::{mpsc, Mutex};
use hashbrown::HashMap;
use parking_lot::Mutex;
use pin_project_lite::pin_project;
use std::pin::Pin;
use std::task;
use tokio::sync::oneshot;
pub struct Waiters<T>(pub(self) Mutex<HashMap<String, mpsc::Sender<T>>>);
pub struct Waiters<T>(pub(self) Mutex<HashMap<String, oneshot::Sender<T>>>);
impl<T> Default for Waiters<T> {
fn default() -> Self {
@@ -11,48 +15,86 @@ impl<T> Default for Waiters<T> {
}
impl<T> Waiters<T> {
pub fn register(&self, key: String) -> Waiter<T> {
let (tx, rx) = mpsc::channel();
pub fn register(&self, key: String) -> anyhow::Result<Waiter<T>> {
let (tx, rx) = oneshot::channel();
// TODO: use `try_insert` (unstable)
let prev = self.0.lock().unwrap().insert(key.clone(), tx);
assert!(matches!(prev, None)); // assert_matches! is nightly-only
self.0
.lock()
.try_insert(key.clone(), tx)
.map_err(|_| anyhow!("waiter already registered"))?;
Waiter {
Ok(Waiter {
receiver: rx,
registry: self,
key,
}
guard: DropKey {
registry: self,
key,
},
})
}
pub fn notify(&self, key: &str, value: T) -> anyhow::Result<()>
where
T: Send + Sync + 'static,
T: Send + Sync,
{
let tx = self
.0
.lock()
.unwrap()
.remove(key)
.ok_or_else(|| anyhow!("key {} not found", key))?;
tx.send(value).context("channel hangup")
.with_context(|| format!("key {} not found", key))?;
tx.send(value).map_err(|_| anyhow!("waiter channel hangup"))
}
}
pub struct Waiter<'a, T> {
receiver: mpsc::Receiver<T>,
registry: &'a Waiters<T>,
struct DropKey<'a, T> {
key: String,
registry: &'a Waiters<T>,
}
impl<T> Waiter<'_, T> {
pub fn wait(self) -> anyhow::Result<T> {
self.receiver.recv().context("channel hangup")
}
}
impl<T> Drop for Waiter<'_, T> {
impl<'a, T> Drop for DropKey<'a, T> {
fn drop(&mut self) {
self.registry.0.lock().unwrap().remove(&self.key);
self.registry.0.lock().remove(&self.key);
}
}
pin_project! {
pub struct Waiter<'a, T> {
#[pin]
receiver: oneshot::Receiver<T>,
guard: DropKey<'a, T>,
}
}
impl<T> std::future::Future for Waiter<'_, T> {
type Output = anyhow::Result<T>;
fn poll(self: Pin<&mut Self>, cx: &mut task::Context<'_>) -> task::Poll<Self::Output> {
self.project()
.receiver
.poll(cx)
.map_err(|_| anyhow!("channel hangup"))
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Arc;
#[tokio::test]
async fn test_waiter() -> anyhow::Result<()> {
let waiters = Arc::new(Waiters::default());
let key = "Key";
let waiter = waiters.register(key.to_owned())?;
let waiters = Arc::clone(&waiters);
let notifier = tokio::spawn(async move {
waiters.notify(key, Default::default())?;
Ok(())
});
let () = waiter.await?;
notifier.await?
}
}

pyproject.toml Normal file
View File

@@ -0,0 +1,32 @@
[tool.poetry]
name = "zenith"
version = "0.1.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "^3.7"
pytest = "^6.2.5"
psycopg2-binary = "^2.9.1"
typing-extensions = "^3.10.0"
PyJWT = {version = "^2.1.0", extras = ["crypto"]}
requests = "^2.26.0"
pytest-xdist = "^2.3.0"
asyncpg = "^0.24.0"
aiopg = "^1.3.1"
cached-property = "^1.5.2"
Jinja2 = "^3.0.2"
types-requests = "^2.27.7"
types-psycopg2 = "^2.9.6"
boto3 = "^1.20.40"
boto3-stubs = "^1.20.40"
moto = {version = "^3.0.0", extras = ["server"]}
[tool.poetry.dev-dependencies]
yapf = "==0.31.0"
flake8 = "^3.9.2"
mypy = "==0.910"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

View File

@@ -3,6 +3,8 @@ addopts =
-m 'not remote_cluster'
markers =
remote_cluster
testpaths =
test_runner
minversion = 6.0
log_format = %(asctime)s.%(msecs)-3d %(levelname)s [%(filename)s:%(lineno)d] %(message)s
log_date_format = %Y-%m-%d %H:%M:%S

View File

@@ -14,17 +14,30 @@ from dataclasses import dataclass
from pathlib import Path
from tempfile import TemporaryDirectory
from textwrap import dedent
from typing import Any, Iterable, List, Optional
from typing import Any, Dict, Iterator, Iterable, List, Optional
import argparse
import hashlib
import json
import os
import shutil
import socket
import subprocess
import sys
def intersperse(sep: Any, iterable: Iterable[Any]):
def file_mtime_or_zero(path: Path) -> int:
try:
return path.stat().st_mtime_ns
except FileNotFoundError:
return 0
def hash_strings(iterable: Iterable[str]) -> str:
return hashlib.sha1(''.join(iterable).encode('utf-8')).hexdigest()
def intersperse(sep: Any, iterable: Iterable[Any]) -> Iterator[Any]:
fst = True
for item in iterable:
if not fst:
@@ -33,18 +46,18 @@ def intersperse(sep: Any, iterable: Iterable[Any]):
yield item
def find_demangler(demangler=None):
def find_demangler(demangler: Optional[Path] = None) -> Path:
known_tools = ['c++filt', 'rustfilt', 'llvm-cxxfilt']
if demangler:
# Explicit argument has precedence over `known_tools`
demanglers = [demangler]
else:
demanglers = known_tools
demanglers = [Path(x) for x in known_tools]
for demangler in demanglers:
if shutil.which(demangler):
return demangler
for exe in demanglers:
if shutil.which(exe):
return exe
raise Exception(' '.join([
'Failed to find symbol demangler.',
@@ -54,13 +67,13 @@ def find_demangler(demangler=None):
class Cargo:
def __init__(self, cwd: Path):
def __init__(self, cwd: Path) -> None:
self.cwd = cwd
self.target_dir = Path(os.environ.get('CARGO_TARGET_DIR', cwd / 'target')).resolve()
self._rustlib_dir = None
self._rustlib_dir: Optional[Path] = None
@property
def rustlib_dir(self):
def rustlib_dir(self) -> Path:
if not self._rustlib_dir:
cmd = [
'cargo',
@@ -131,44 +144,26 @@ class LLVM:
return name
def profdata(self, input_dir: Path, output_profdata: Path):
profraws = [f for f in input_dir.iterdir() if f.suffix == '.profraw']
if not profraws:
raise Exception(f'No profraw files found at {input_dir}')
with open(input_dir / 'profraw.list', 'w') as input_files:
profraw_mtime = 0
for profraw in profraws:
profraw_mtime = max(profraw_mtime, profraw.stat().st_mtime_ns)
print(profraw, file=input_files)
input_files.flush()
try:
profdata_mtime = output_profdata.stat().st_mtime_ns
except FileNotFoundError:
profdata_mtime = 0
# An obvious make-ish optimization
if profraw_mtime >= profdata_mtime:
subprocess.check_call([
self.resolve_tool('llvm-profdata'),
'merge',
'-sparse',
f'-input-files={input_files.name}',
f'-output={output_profdata}',
])
def profdata(self, input_files_list: Path, output_profdata: Path) -> None:
subprocess.check_call([
self.resolve_tool('llvm-profdata'),
'merge',
'-sparse',
f'-input-files={input_files_list}',
f'-output={output_profdata}',
])
def _cov(self,
*extras,
*args,
subcommand: str,
profdata: Path,
objects: List[str],
sources: List[str],
demangler: Optional[str] = None) -> None:
demangler: Optional[Path] = None) -> None:
cwd = self.cargo.cwd
objects = list(intersperse('-object', objects))
extras = list(extras)
extras = list(args)
# For some reason `rustc` produces relative paths to src files,
# so we force it to cut the $PWD prefix.
@@ -194,7 +189,7 @@ class LLVM:
self._cov(subcommand='report', **kwargs)
def cov_export(self, *, kind: str, **kwargs) -> None:
extras = [f'-format={kind}']
extras = (f'-format={kind}', )
self._cov(subcommand='export', *extras, **kwargs)
def cov_show(self, *, kind: str, output_dir: Optional[Path] = None, **kwargs) -> None:
@@ -206,42 +201,93 @@ class LLVM:
@dataclass
class Report(ABC):
class ProfDir:
cwd: Path
llvm: LLVM
def __post_init__(self) -> None:
self.cwd.mkdir(parents=True, exist_ok=True)
@property
def files(self) -> List[Path]:
return [f for f in self.cwd.iterdir() if f.suffix in ('.profraw', '.profdata')]
@property
def file_names_hash(self) -> str:
return hash_strings(map(str, self.files))
def merge(self, output_profdata: Path) -> bool:
files = self.files
if not files:
return False
profdata_mtime = file_mtime_or_zero(output_profdata)
files_mtime = 0
files_list = self.cwd / 'files.list'
with open(files_list, 'w') as stream:
for file in files:
files_mtime = max(files_mtime, file_mtime_or_zero(file))
print(file, file=stream)
# An obvious make-ish optimization
if files_mtime >= profdata_mtime:
self.llvm.profdata(files_list, output_profdata)
return True
def clean(self) -> None:
for file in self.cwd.iterdir():
os.remove(file)
def __truediv__(self, other):
return self.cwd / other
def __str__(self):
return str(self.cwd)
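A minimal sketch of how `ProfDir` might be driven (the paths are hypothetical, and an `llvm` instance is assumed to be constructed as elsewhere in this script):
raw = ProfDir(cwd=Path('/tmp/cov/profraw'), llvm=llvm)
# merge() returns False only when the directory holds no coverage files
if raw.merge(Path('/tmp/cov/merged.profdata')):
    raw.clean()  # the raw files are no longer needed once merged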
# Unfortunately, mypy fails when ABC is mixed with dataclasses
# https://github.com/python/mypy/issues/5374#issuecomment-568335302
@dataclass
class ReportData:
""" Common properties of a coverage report """
llvm: LLVM
demangler: str
demangler: Path
profdata: Path
objects: List[str]
sources: List[str]
def _common_kwargs(self):
class Report(ABC, ReportData):
def _common_kwargs(self) -> Dict[str, Any]:
return dict(profdata=self.profdata,
objects=self.objects,
sources=self.sources,
demangler=self.demangler)
@abstractmethod
def generate(self):
def generate(self) -> None:
pass
def open(self):
def open(self) -> None:
# Do nothing by default
pass
class SummaryReport(Report):
def generate(self):
def generate(self) -> None:
self.llvm.cov_report(**self._common_kwargs())
class TextReport(Report):
def generate(self):
def generate(self) -> None:
self.llvm.cov_show(kind='text', **self._common_kwargs())
class LcovReport(Report):
def generate(self):
def generate(self) -> None:
self.llvm.cov_export(kind='lcov', **self._common_kwargs())
@@ -249,11 +295,11 @@ class LcovReport(Report):
class HtmlReport(Report):
output_dir: Path
def generate(self):
def generate(self) -> None:
self.llvm.cov_show(kind='html', output_dir=self.output_dir, **self._common_kwargs())
print(f'HTML report is located at `{self.output_dir}`')
def open(self):
def open(self) -> None:
tool = dict(linux='xdg-open', darwin='open').get(sys.platform)
if not tool:
raise Exception(f'Unknown platform {sys.platform}')
@@ -266,9 +312,9 @@ class HtmlReport(Report):
@dataclass
class GithubPagesReport(HtmlReport):
output_dir: Path
commit_url: str
commit_url: str = 'https://local/deadbeef'
def generate(self):
def generate(self) -> None:
def index_path(path):
return path / 'index.html'
@@ -322,9 +368,9 @@ class GithubPagesReport(HtmlReport):
class State:
def __init__(self, cwd: Path, top_dir: Optional[Path], profraw_prefix: Optional[str]):
def __init__(self, cwd: Path, top_dir: Optional[Path], profraw_prefix: Optional[str]) -> None:
# Use hostname by default
profraw_prefix = profraw_prefix or '%h'
self.profraw_prefix = profraw_prefix or socket.gethostname()
self.cwd = cwd
self.cargo = Cargo(self.cwd)
@@ -334,16 +380,18 @@ class State:
self.report_dir = self.top_dir / 'report'
# Directory for raw coverage data emitted by executables
self.profraw_dir = self.top_dir / 'profraw'
self.profraw_dir.mkdir(parents=True, exist_ok=True)
self.profraw_dir = ProfDir(llvm=self.llvm, cwd=self.top_dir / 'profraw')
# Directory for processed coverage data
self.profdata_dir = ProfDir(llvm=self.llvm, cwd=self.top_dir / 'profdata')
# Aggregated coverage data
self.profdata_file = self.top_dir / 'coverage.profdata'
self.final_profdata = self.top_dir / 'coverage.profdata'
# Dump all coverage data files into a dedicated directory.
# Each filename is parameterized by PID & executable's signature.
os.environ['LLVM_PROFILE_FILE'] = str(self.profraw_dir /
f'cov-{profraw_prefix}-%p-%m.profraw')
f'{self.profraw_prefix}-%p-%m.profraw')
os.environ['RUSTFLAGS'] = ' '.join([
os.environ.get('RUSTFLAGS', ''),
@@ -367,13 +415,41 @@ class State:
# see: https://github.com/rust-lang/rust/pull/90132
os.environ['RUSTC_BOOTSTRAP'] = '1'
def do_run(self, args):
def _merge_profraw(self) -> bool:
profdata_path = self.profdata_dir / '-'.join([
self.profraw_prefix,
f'{self.profdata_dir.file_names_hash}.profdata',
])
print(f'* Merging profraw files (into {profdata_path.name})')
did_merge_profraw = self.profraw_dir.merge(profdata_path)
# We no longer need those profraws
self.profraw_dir.clean()
return did_merge_profraw
def _merge_profdata(self) -> bool:
self._merge_profraw()
print(f'* Merging profdata files (into {self.final_profdata.name})')
return self.profdata_dir.merge(self.final_profdata)
def do_run(self, args) -> None:
subprocess.check_call([*args.command, *args.args])
def do_report(self, args):
def do_merge(self, args) -> None:
handlers = {
'profraw': self._merge_profraw,
'profdata': self._merge_profdata,
}
handlers[args.kind]()
def do_report(self, args) -> None:
if args.all and args.sources:
raise Exception('--all should not be used with sources')
if args.format == 'github' and not args.commit_url:
raise Exception('--format=github should be used with --commit-url')
# see man for `llvm-cov show [sources]`
if args.all:
sources = []
@@ -382,8 +458,8 @@ class State:
else:
sources = args.sources
print('* Merging profraw files')
self.llvm.profdata(self.profraw_dir, self.profdata_file)
if not self._merge_profdata():
raise Exception(f'No coverage data files found at {self.top_dir}')
objects = []
if args.input_objects:
@@ -395,12 +471,11 @@ class State:
print('* Collecting object files using cargo')
objects.extend(self.cargo.binaries(args.profile))
params = dict(llvm=self.llvm,
demangler=find_demangler(args.demangler),
profdata=self.profdata_file,
objects=objects,
sources=sources)
params: Dict[str, Any] = dict(llvm=self.llvm,
demangler=find_demangler(args.demangler),
profdata=self.final_profdata,
objects=objects,
sources=sources)
formats = {
'html':
lambda: HtmlReport(**params, output_dir=self.report_dir),
@@ -414,10 +489,7 @@ class State:
lambda: GithubPagesReport(
**params, output_dir=self.report_dir, commit_url=args.commit_url),
}
report = formats.get(args.format)()
if not report:
raise Exception('Format `{args.format}` is not supported')
report = formats[args.format]()
print(f'* Rendering coverage report ({args.format})')
report.generate()
@@ -426,7 +498,7 @@ class State:
print('* Opening the report')
report.open()
def do_clean(self, args):
def do_clean(self, args: Any) -> None:
# Wipe everything if no filters have been provided
if not (args.report or args.prof):
shutil.rmtree(self.top_dir, ignore_errors=True)
@@ -434,10 +506,12 @@ class State:
if args.report:
shutil.rmtree(self.report_dir, ignore_errors=True)
if args.prof:
self.profdata_file.unlink(missing_ok=True)
self.profraw_dir.clean()
self.profdata_dir.clean()
self.final_profdata.unlink(missing_ok=True)
def main():
def main() -> None:
app = sys.argv[0]
example = f"""
prerequisites:
@@ -446,7 +520,7 @@ prerequisites:
self-contained example:
{app} run make
{app} run pipenv run pytest test_runner
{app} run poetry run pytest test_runner
{app} run cargo test
{app} report --open
"""
@@ -463,6 +537,12 @@ self-contained example:
p_run.add_argument('command', nargs=1)
p_run.add_argument('args', nargs=argparse.REMAINDER)
p_merge = commands.add_parser('merge', help='save disk space by merging cov files')
p_merge.add_argument('--kind',
default='profraw',
choices=('profraw', 'profdata'),
help='which files to merge')
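For example, `{app} merge --kind=profraw` folds the raw files into a per-host `.profdata` file, while `{app} merge --kind=profdata` additionally merges all per-host files into the final `coverage.profdata` (mirroring `_merge_profraw` and `_merge_profdata` above; `{app}` is the placeholder used in the script's own help text).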
p_report = commands.add_parser('report', help='generate a coverage report')
p_report.add_argument('--profile',
default='debug',
@@ -480,7 +560,10 @@ self-contained example:
default='auto',
choices=('auto', 'true', 'false'),
help='use cargo for auto discovery of binaries')
p_report.add_argument('--commit-url', type=str, help='required for --format=github')
p_report.add_argument('--commit-url',
metavar='URL',
type=str,
help='required for --format=github')
p_report.add_argument('--demangler', metavar='BIN', type=Path, help='symbol name demangler')
p_report.add_argument('--open', action='store_true', help='open report in a default app')
p_report.add_argument('--all', action='store_true', help='show everything, e.g. deps')
@@ -493,15 +576,16 @@ self-contained example:
args = parser.parse_args()
state = State(cwd=Path.cwd(), top_dir=args.dir, profraw_prefix=args.profraw_prefix)
commands = {
handlers = {
'run': state.do_run,
'merge': state.do_merge,
'report': state.do_report,
'clean': state.do_clean,
}
action = commands.get(args.subparser_name)
if action:
action(args)
handler = handlers.get(args.subparser_name)
if handler:
handler(args)
else:
parser.print_help()

View File

@@ -1,27 +1,24 @@
#!/bin/bash
# this is a shortcut script to avoid duplication in CI
set -eux -o pipefail
SCRIPT_DIR="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
git clone https://$VIP_VAP_ACCESS_TOKEN@github.com/zenithdb/zenith-perf-data.git
cd zenith-perf-data
mkdir -p reports/
mkdir -p data/$REPORT_TO
echo "Uploading perf report to zenith pg"
# ingest per-test result data into the zenith-backed postgres running in staging, to build grafana reports on that data
DATABASE_URL="$PERF_TEST_RESULT_CONNSTR" poetry run python "$SCRIPT_DIR"/ingest_perf_test_result.py --ingest "$REPORT_FROM"
cp $REPORT_FROM/* data/$REPORT_TO
# Activate poetry's venv. Needed because git-upload does not run in the project dir
# (it clones the repository into a temp dir), so poetry cannot find pyproject.toml there.
# shellcheck source=/dev/null
. "$(poetry env info --path)"/bin/activate
echo "Generating report"
pipenv run python $SCRIPT_DIR/generate_perf_report_page.py --input-dir data/$REPORT_TO --out reports/$REPORT_TO.html
echo "Uploading perf result"
git add data reports
git \
-c "user.name=vipvap" \
-c "user.email=vipvap@zenith.tech" \
commit \
--author="vipvap <vipvap@zenith.tech>" \
-m "add performance test result for $GITHUB_SHA zenith revision"
git push https://$VIP_VAP_ACCESS_TOKEN@github.com/zenithdb/zenith-perf-data.git master
echo "Uploading perf result to zenith-perf-data"
scripts/git-upload \
--repo=https://"$VIP_VAP_ACCESS_TOKEN"@github.com/zenithdb/zenith-perf-data.git \
--message="add performance test result for $GITHUB_SHA zenith revision" \
--branch=master \
copy "$REPORT_FROM" "data/$REPORT_TO" `# COPY FROM TO_RELATIVE`\
--merge \
--run-cmd "python $SCRIPT_DIR/generate_perf_report_page.py --input-dir data/$REPORT_TO --out reports/$REPORT_TO.html"

View File

@@ -1,7 +1,9 @@
#!/usr/bin/env python3
from contextlib import contextmanager
import shlex
from tempfile import TemporaryDirectory
from distutils.dir_util import copy_tree
from pathlib import Path
import argparse
@@ -9,6 +11,8 @@ import os
import shutil
import subprocess
import sys
import textwrap
from typing import Optional
def absolute_path(path):
@@ -38,13 +42,21 @@ def run(cmd, *args, **kwargs):
class GitRepo:
def __init__(self, url):
def __init__(self, url, branch: Optional[str] = None):
self.url = url
self.cwd = TemporaryDirectory()
self.branch = branch
subprocess.check_call([
args = [
'git',
'clone',
'--single-branch',
]
if self.branch:
args.extend(['--branch', self.branch])
subprocess.check_call([
*args,
str(url),
self.cwd.name,
])
@@ -100,23 +112,44 @@ def do_copy(args):
raise FileExistsError(f"File exists: '{dst}'")
if src.is_dir():
shutil.rmtree(dst, ignore_errors=True)
shutil.copytree(src, dst)
if not args.merge:
shutil.rmtree(dst, ignore_errors=True)
# distutils is deprecated, but this is a temporary workaround before a python version bump;
# we need dirs_exist_ok=True from shutil.copytree, which is only available in python 3.8+
copy_tree(str(src), str(dst))
else:
shutil.copy(src, dst)
if args.run_cmd:
run(shlex.split(args.run_cmd))
def main():
parser = argparse.ArgumentParser(description='Git upload tool')
parser.add_argument('--repo', type=str, metavar='URL', required=True, help='git repo url')
parser.add_argument('--message', type=str, metavar='TEXT', help='commit message')
parser.add_argument('--branch', type=str, metavar='TEXT', help='target git repo branch')
commands = parser.add_subparsers(title='commands', dest='subparser_name')
p_copy = commands.add_parser('copy', help='copy file into the repo')
p_copy = commands.add_parser(
'copy',
help='copy file into the repo',
formatter_class=argparse.RawTextHelpFormatter,
)
p_copy.add_argument('src', type=absolute_path, help='source path')
p_copy.add_argument('dst', type=relative_path, help='relative dest path')
p_copy.add_argument('--forbid-overwrite', action='store_true', help='do not allow overwrites')
p_copy.add_argument(
'--merge',
action='store_true',
help='when copying a directory do not delete existing data, but add new files')
p_copy.add_argument('--run-cmd',
help=textwrap.dedent('''\
run arbitrary cmd on top of copied files,
example usage is static content generation
based on current repository state\
'''))
args = parser.parse_args()
@@ -127,7 +160,7 @@ def main():
action = commands.get(args.subparser_name)
if action:
message = args.message or 'update'
GitRepo(args.repo).update(message, lambda: action(args))
GitRepo(args.repo, args.branch).update(message, lambda: action(args))
else:
parser.print_usage()

View File

@@ -0,0 +1,136 @@
#!/usr/bin/env python3
import argparse
from contextlib import contextmanager
import json
import os
import psycopg2
import psycopg2.extras
from pathlib import Path
from datetime import datetime
CREATE_TABLE = """
CREATE TABLE IF NOT EXISTS perf_test_results (
id SERIAL PRIMARY KEY,
suit TEXT,
revision CHAR(40),
platform TEXT,
metric_name TEXT,
metric_value NUMERIC,
metric_unit VARCHAR(10),
metric_report_type TEXT,
recorded_at_timestamp TIMESTAMP WITH TIME ZONE DEFAULT NOW()
)
"""
def err(msg):
print(f'error: {msg}')
exit(1)
@contextmanager
def get_connection_cursor():
connstr = os.getenv('DATABASE_URL')
if not connstr:
err('DATABASE_URL environment variable is not set')
with psycopg2.connect(connstr) as conn:
with conn.cursor() as cur:
yield cur
def create_table(cur):
cur.execute(CREATE_TABLE)
def ingest_perf_test_result(cursor, data_file: Path, recorded_at_timestamp: int) -> int:
run_data = json.loads(data_file.read_text())
revision = run_data['revision']
platform = run_data['platform']
run_result = run_data['result']
args_list = []
for suit_result in run_result:
suit = suit_result['suit']
total_duration = suit_result['total_duration']
suit_result['data'].append({
'name': 'total_duration',
'value': total_duration,
'unit': 's',
'report': 'lower_is_better',
})
for metric in suit_result['data']:
values = {
'suit': suit,
'revision': revision,
'platform': platform,
'metric_name': metric['name'],
'metric_value': metric['value'],
'metric_unit': metric['unit'],
'metric_report_type': metric['report'],
'recorded_at_timestamp': datetime.utcfromtimestamp(recorded_at_timestamp),
}
args_list.append(values)
psycopg2.extras.execute_values(
cursor,
"""
INSERT INTO perf_test_results (
suit,
revision,
platform,
metric_name,
metric_value,
metric_unit,
metric_report_type,
recorded_at_timestamp
) VALUES %s
""",
args_list,
template="""(
%(suit)s,
%(revision)s,
%(platform)s,
%(metric_name)s,
%(metric_value)s,
%(metric_unit)s,
%(metric_report_type)s,
%(recorded_at_timestamp)s
)""",
)
return len(args_list)
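For context, `psycopg2.extras.execute_values` with a named `template` expands a list of dicts into one multi-row INSERT, which is what the call above relies on. A self-contained sketch under a hypothetical `demo` table:
import psycopg2.extras

def insert_rows(cur, rows):
    # `rows` is a list of dicts with 'a' and 'b' keys (hypothetical schema)
    psycopg2.extras.execute_values(
        cur,
        'INSERT INTO demo (a, b) VALUES %s',
        rows,
        template='(%(a)s, %(b)s)',
    )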
def main():
parser = argparse.ArgumentParser(description='Perf test result uploader. \
Database connection string should be provided via DATABASE_URL environment variable', )
parser.add_argument(
'--ingest',
type=Path,
help='Path to perf test result file, or directory with perf test result files')
parser.add_argument('--initdb', action='store_true', help='Initialize database')
args = parser.parse_args()
with get_connection_cursor() as cur:
if args.initdb:
create_table(cur)
if args.ingest:
if not args.ingest.exists():
err(f'ingest path {args.ingest} does not exist')
if args.ingest.is_dir():
for item in sorted(args.ingest.iterdir(), key=lambda x: int(x.name.split('_')[0])):
recorded_at_timestamp = int(item.name.split('_')[0])
ingested = ingest_perf_test_result(cur, item, recorded_at_timestamp)
print(f'Ingested {ingested} metric values from {item}')
else:
recorded_at_timestamp = int(args.ingest.name.split('_')[0])
ingested = ingest_perf_test_result(cur, args.ingest, recorded_at_timestamp)
print(f'Ingested {ingested} metric values from {args.ingest}')
if __name__ == '__main__':
main()

7
scripts/pysync Executable file
View File

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
# This is a helper script for setting up/updating our python environment.
# It is intended to be the primary entry point for people who want to
# just set up the test environment without going into the details of python package management
poetry install --no-root # this installs dev dependencies by default

9
scripts/pytest Executable file
View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# This is a helper script to run pytest without going too much
# into python dependency management details
# It may be desirable to create more sophisticated pytest launcher
# with commonly used options to simplify launching from e.g CI
poetry run pytest "${@:1}"

View File

@@ -22,23 +22,24 @@ runtime. Currently, there are only two batches:
### Running the tests
Because pytest will search all subdirectories for tests, it's easiest to
run the tests from within the `test_runner` directory.
There is a wrapper script to invoke pytest: `./scripts/pytest`.
It accepts all the arguments that are accepted by pytest.
Depending on your installation, pytest might also be invoked directly.
Test state (postgres data, pageserver state, and log files) will
be stored under a directory `test_output`.
You can run all the tests with:
`pipenv run pytest`
`./scripts/pytest`
If you want to run all the tests in a particular file:
`pipenv run pytest test_pgbench.py`
`./scripts/pytest test_pgbench.py`
If you want to run all tests that have the string "bench" in their names:
`pipenv run pytest -k bench`
`./scripts/pytest -k bench`
Useful environment variables:
@@ -47,14 +48,18 @@ Useful environment variables:
`TEST_OUTPUT`: Set the directory where test state and test output files
should go.
`TEST_SHARED_FIXTURES`: Try to re-use a single pageserver for all the tests.
`ZENITH_PAGESERVER_OVERRIDES`: add a `;`-separated set of configs that will be passed as
`--pageserver-config-override=${value}` parameter values when zenith cli is invoked
`FORCE_MOCK_S3`: inits every test's pageserver with a mock S3 used as a remote storage.
`RUST_LOG`: logging configuration to pass into Zenith CLI
Let stdout, stderr and `INFO` log messages go to the terminal instead of capturing them:
`pytest -s --log-cli-level=INFO ...`
`./scripts/pytest -s --log-cli-level=INFO ...`
(Note many tests capture subprocess outputs separately, so this may not
show much.)
Exit after the first test failure:
`pytest -x ...`
`./scripts/pytest -x ...`
(there are many more pytest options; run `pytest -h` to see them.)
### Writing a test
@@ -84,7 +89,7 @@ def test_foobar(zenith_env_builder: ZenithEnvBuilder):
# Now create the environment. This initializes the repository, and starts
# up the page server and the safekeepers
env = zenith_env_builder.init()
env = zenith_env_builder.init_start()
# Run the test
...

View File

@@ -1,45 +1,49 @@
from contextlib import closing
from typing import Iterator
from uuid import uuid4
from uuid import UUID, uuid4
import psycopg2
from fixtures.zenith_fixtures import ZenithEnvBuilder
from fixtures.zenith_fixtures import ZenithEnvBuilder, ZenithPageserverApiException
import pytest
pytest_plugins = ("fixtures.zenith_fixtures")
def test_pageserver_auth(zenith_env_builder: ZenithEnvBuilder):
zenith_env_builder.pageserver_auth_enabled = True
env = zenith_env_builder.init()
env = zenith_env_builder.init_start()
ps = env.pageserver
tenant_token = env.auth_keys.generate_tenant_token(env.initial_tenant)
tenant_token = env.auth_keys.generate_tenant_token(env.initial_tenant.hex)
tenant_http_client = env.pageserver.http_client(tenant_token)
invalid_tenant_token = env.auth_keys.generate_tenant_token(uuid4().hex)
invalid_tenant_http_client = env.pageserver.http_client(invalid_tenant_token)
management_token = env.auth_keys.generate_management_token()
management_http_client = env.pageserver.http_client(management_token)
# this does not invoke auth check and only decodes jwt and checks it for validity
# check both tokens
ps.safe_psql("status", password=tenant_token)
ps.safe_psql("status", password=management_token)
ps.safe_psql("set FOO", password=tenant_token)
ps.safe_psql("set FOO", password=management_token)
# tenant can create branches
ps.safe_psql(f"branch_create {env.initial_tenant} new1 main", password=tenant_token)
tenant_http_client.branch_create(env.initial_tenant, 'new1', 'main')
# console can create branches for tenant
ps.safe_psql(f"branch_create {env.initial_tenant} new2 main", password=management_token)
management_http_client.branch_create(env.initial_tenant, 'new2', 'main')
# fail to create branch using token with different tenantid
with pytest.raises(psycopg2.DatabaseError, match='Tenant id mismatch. Permission denied'):
ps.safe_psql(f"branch_create {env.initial_tenant} new2 main", password=invalid_tenant_token)
# fail to create branch using token with different tenant_id
with pytest.raises(ZenithPageserverApiException,
match='Forbidden: Tenant id mismatch. Permission denied'):
invalid_tenant_http_client.branch_create(env.initial_tenant, "new3", "main")
# create tenant using management token
ps.safe_psql(f"tenant_create {uuid4().hex}", password=management_token)
management_http_client.tenant_create(uuid4())
# fail to create tenant using tenant token
with pytest.raises(
psycopg2.DatabaseError,
match='Attempt to access management api with tenant scope. Permission denied'):
ps.safe_psql(f"tenant_create {uuid4().hex}", password=tenant_token)
ZenithPageserverApiException,
match='Forbidden: Attempt to access management api with tenant scope. Permission denied'
):
tenant_http_client.tenant_create(uuid4())
@pytest.mark.parametrize('with_wal_acceptors', [False, True])
@@ -47,10 +51,10 @@ def test_compute_auth_to_pageserver(zenith_env_builder: ZenithEnvBuilder, with_w
zenith_env_builder.pageserver_auth_enabled = True
if with_wal_acceptors:
zenith_env_builder.num_safekeepers = 3
env = zenith_env_builder.init()
env = zenith_env_builder.init_start()
branch = f"test_compute_auth_to_pageserver{with_wal_acceptors}"
env.zenith_cli(["branch", branch, "main"])
env.zenith_cli.create_branch(branch, "main")
pg = env.postgres.create_start(branch)

View File

@@ -0,0 +1,154 @@
from contextlib import closing, contextmanager
import psycopg2.extras
from fixtures.zenith_fixtures import ZenithEnvBuilder
from fixtures.log_helper import log
import os
import time
import asyncpg
from fixtures.zenith_fixtures import Postgres
import threading
pytest_plugins = ("fixtures.zenith_fixtures")
@contextmanager
def pg_cur(pg):
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
yield cur
# Periodically check that all backpressure lags are below the configured threshold,
# assert if they are not.
# If the check query fails, stop the thread. Main thread should notice that and stop the test.
def check_backpressure(pg: Postgres, stop_event: threading.Event, polling_interval=5):
log.info("checks started")
with pg_cur(pg) as cur:
cur.execute("CREATE EXTENSION zenith") # TODO move it to zenith_fixtures?
cur.execute("select pg_size_bytes(current_setting('max_replication_write_lag'))")
res = cur.fetchone()
max_replication_write_lag_bytes = res[0]
log.info(f"max_replication_write_lag: {max_replication_write_lag_bytes} bytes")
cur.execute("select pg_size_bytes(current_setting('max_replication_flush_lag'))")
res = cur.fetchone()
max_replication_flush_lag_bytes = res[0]
log.info(f"max_replication_flush_lag: {max_replication_flush_lag_bytes} bytes")
cur.execute("select pg_size_bytes(current_setting('max_replication_apply_lag'))")
res = cur.fetchone()
max_replication_apply_lag_bytes = res[0]
log.info(f"max_replication_apply_lag: {max_replication_apply_lag_bytes} bytes")
with pg_cur(pg) as cur:
while not stop_event.is_set():
try:
cur.execute('''
select pg_wal_lsn_diff(pg_current_wal_flush_lsn(),received_lsn) as received_lsn_lag,
pg_wal_lsn_diff(pg_current_wal_flush_lsn(),disk_consistent_lsn) as disk_consistent_lsn_lag,
pg_wal_lsn_diff(pg_current_wal_flush_lsn(),remote_consistent_lsn) as remote_consistent_lsn_lag,
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_flush_lsn(),received_lsn)),
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_flush_lsn(),disk_consistent_lsn)),
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_flush_lsn(),remote_consistent_lsn))
from backpressure_lsns();
''')
res = cur.fetchone()
received_lsn_lag = res[0]
disk_consistent_lsn_lag = res[1]
remote_consistent_lsn_lag = res[2]
log.info(f"received_lsn_lag = {received_lsn_lag} ({res[3]}), "
f"disk_consistent_lsn_lag = {disk_consistent_lsn_lag} ({res[4]}), "
f"remote_consistent_lsn_lag = {remote_consistent_lsn_lag} ({res[5]})")
# Since feedback from pageserver is not immediate, we should allow some lag overflow
lag_overflow = 5 * 1024 * 1024 # 5MB
if max_replication_write_lag_bytes > 0:
assert received_lsn_lag < max_replication_write_lag_bytes + lag_overflow
if max_replication_flush_lag_bytes > 0:
assert disk_consistent_lsn_lag < max_replication_flush_lag_bytes + lag_overflow
if max_replication_apply_lag_bytes > 0:
assert remote_consistent_lsn_lag < max_replication_apply_lag_bytes + lag_overflow
time.sleep(polling_interval)
except Exception as e:
log.info(f"backpressure check query failed: {e}")
stop_event.set()
log.info('check thread stopped')
# This test illustrates how to tune backpressure to control the lag
# between the WAL flushed on compute node and WAL digested by pageserver.
#
# To test it, throttle walreceiver ingest using failpoint and run heavy write load.
# If backpressure is disabled or not tuned properly, the query will time out, because the walreceiver cannot keep up.
# If backpressure is enabled and tuned properly, insertions will be throttled, but the query will not time out.
def test_backpressure_received_lsn_lag(zenith_env_builder: ZenithEnvBuilder):
zenith_env_builder.num_safekeepers = 1
env = zenith_env_builder.init()
# Create a branch for us
env.zenith_cli.create_branch("test_backpressure", "main")
pg = env.postgres.create_start('test_backpressure',
config_lines=['max_replication_write_lag=30MB'])
log.info("postgres is running on 'test_backpressure' branch")
# setup check thread
check_stop_event = threading.Event()
check_thread = threading.Thread(target=check_backpressure, args=(pg, check_stop_event))
check_thread.start()
# Configure failpoint to slow down walreceiver ingest
with closing(env.pageserver.connect()) as psconn:
with psconn.cursor(cursor_factory=psycopg2.extras.DictCursor) as pscur:
pscur.execute("failpoints walreceiver-after-ingest=sleep(20)")
# FIXME
# Wait for the check thread to start.
#
# If the load starts too soon, the check thread cannot connect to the
# database, because the connection has to wait for the replay LSN to
# catch up through the accumulated lag.
time.sleep(2)
with pg_cur(pg) as cur:
# Create and initialize test table
cur.execute("CREATE TABLE foo(x bigint)")
inserts_to_do = 2000000
rows_inserted = 0
while check_thread.is_alive() and rows_inserted < inserts_to_do:
try:
cur.execute("INSERT INTO foo select from generate_series(1, 100000)")
rows_inserted += 100000
except Exception as e:
if check_thread.is_alive():
log.info('stopping check thread')
check_stop_event.set()
check_thread.join()
assert False, f"Exception {e} while inserting rows, but WAL lag is within configured threshold. That means backpressure is not tuned properly"
else:
assert False, f"Exception {e} while inserting rows and WAL lag overflowed configured threshold. That means backpressure doesn't work."
log.info(f"inserted {rows_inserted} rows")
if check_thread.is_alive():
log.info('stopping check thread')
check_stop_event.set()
check_thread.join()
log.info('check thread stopped')
else:
assert False, "WAL lag overflowed configured threshold. That means backpressure doesn't work."
#TODO test_backpressure_disk_consistent_lsn_lag. Play with pageserver's checkpoint settings
#TODO test_backpressure_remote_consistent_lsn_lag

View File

@@ -5,18 +5,24 @@ import psycopg2.extras
import pytest
from fixtures.log_helper import log
from fixtures.utils import print_gc_result
from fixtures.zenith_fixtures import ZenithEnv
pytest_plugins = ("fixtures.zenith_fixtures")
from fixtures.zenith_fixtures import ZenithEnvBuilder
#
# Create a couple of branches off the main branch, at a historical point in time.
#
def test_branch_behind(zenith_simple_env: ZenithEnv):
env = zenith_simple_env
def test_branch_behind(zenith_env_builder: ZenithEnvBuilder):
# Use safekeeper in this test to avoid a subtle race condition.
# Without safekeeper, walreceiver reconnection can get stuck
# because of an IO deadlock.
#
# See https://github.com/zenithdb/zenith/issues/1068
zenith_env_builder.num_safekeepers = 1
env = zenith_env_builder.init_start()
# Branch at the point where only 100 rows were inserted
env.zenith_cli(["branch", "test_branch_behind", "empty"])
env.zenith_cli.create_branch("test_branch_behind", "main")
pgmain = env.postgres.create_start('test_branch_behind')
log.info("postgres is running on 'test_branch_behind' branch")
@@ -54,7 +60,7 @@ def test_branch_behind(zenith_simple_env: ZenithEnv):
log.info(f'LSN after 200100 rows: {lsn_b}')
# Branch at the point where only 100 rows were inserted
env.zenith_cli(["branch", "test_branch_behind_hundred", "test_branch_behind@" + lsn_a])
env.zenith_cli.create_branch("test_branch_behind_hundred", "test_branch_behind@" + lsn_a)
# Insert many more rows. This generates enough WAL to fill a few segments.
main_cur.execute('''
@@ -69,7 +75,7 @@ def test_branch_behind(zenith_simple_env: ZenithEnv):
log.info(f'LSN after 400100 rows: {lsn_c}')
# Branch at the point where only 200100 rows were inserted
env.zenith_cli(["branch", "test_branch_behind_more", "test_branch_behind@" + lsn_b])
env.zenith_cli.create_branch("test_branch_behind_more", "test_branch_behind@" + lsn_b)
pg_hundred = env.postgres.create_start("test_branch_behind_hundred")
pg_more = env.postgres.create_start("test_branch_behind_more")
@@ -93,7 +99,7 @@ def test_branch_behind(zenith_simple_env: ZenithEnv):
# Check bad lsn's for branching
# branch at segment boundary
env.zenith_cli(["branch", "test_branch_segment_boundary", "test_branch_behind@0/3000000"])
env.zenith_cli.create_branch("test_branch_segment_boundary", "test_branch_behind@0/3000000")
pg = env.postgres.create_start("test_branch_segment_boundary")
cur = pg.connect().cursor()
cur.execute('SELECT 1')
@@ -101,19 +107,23 @@ def test_branch_behind(zenith_simple_env: ZenithEnv):
# branch at pre-initdb lsn
with pytest.raises(Exception, match="invalid branch start lsn"):
env.zenith_cli(["branch", "test_branch_preinitdb", "test_branch_behind@0/42"])
env.zenith_cli.create_branch("test_branch_preinitdb", "main@0/42")
# branch at pre-ancestor lsn
with pytest.raises(Exception, match="less than timeline ancestor lsn"):
env.zenith_cli.create_branch("test_branch_preinitdb", "test_branch_behind@0/42")
# check that we cannot create branch based on garbage collected data
with closing(env.pageserver.connect()) as psconn:
with psconn.cursor(cursor_factory=psycopg2.extras.DictCursor) as pscur:
# call gc to advance latest_gc_cutoff_lsn
pscur.execute(f"do_gc {env.initial_tenant} {timeline} 0")
pscur.execute(f"do_gc {env.initial_tenant.hex} {timeline} 0")
row = pscur.fetchone()
print_gc_result(row)
with pytest.raises(Exception, match="invalid branch start lsn"):
# this gced_lsn is pretty random, so if gc is disabled this wouldn't fail
env.zenith_cli(["branch", "test_branch_create_fail", f"test_branch_behind@{gced_lsn}"])
env.zenith_cli.create_branch("test_branch_create_fail", f"test_branch_behind@{gced_lsn}")
# check that after gc everything is still there
hundred_cur.execute('SELECT count(*) FROM foo')

View File

@@ -6,16 +6,13 @@ from contextlib import closing
from fixtures.zenith_fixtures import ZenithEnv
from fixtures.log_helper import log
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test compute node start after clog truncation
#
def test_clog_truncate(zenith_simple_env: ZenithEnv):
env = zenith_simple_env
# Create a branch for us
env.zenith_cli(["branch", "test_clog_truncate", "empty"])
env.zenith_cli.create_branch("test_clog_truncate", "empty")
# set agressive autovacuum to make sure that truncation will happen
config = [
@@ -65,8 +62,8 @@ def test_clog_truncate(zenith_simple_env: ZenithEnv):
# create new branch after clog truncation and start a compute node on it
log.info(f'create branch at lsn_after_truncation {lsn_after_truncation}')
env.zenith_cli(
["branch", "test_clog_truncate_new", "test_clog_truncate@" + lsn_after_truncation])
env.zenith_cli.create_branch("test_clog_truncate_new",
"test_clog_truncate@" + lsn_after_truncation)
pg2 = env.postgres.create_start('test_clog_truncate_new')
log.info('postgres is running on test_clog_truncate_new branch')

View File

@@ -3,16 +3,13 @@ from contextlib import closing
from fixtures.zenith_fixtures import ZenithEnv
from fixtures.log_helper import log
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test starting Postgres with custom options
#
def test_config(zenith_simple_env: ZenithEnv):
env = zenith_simple_env
# Create a branch for us
env.zenith_cli(["branch", "test_config", "empty"])
env.zenith_cli.create_branch("test_config", "empty")
# change config
pg = env.postgres.create_start('test_config', config_lines=['log_min_messages=debug1'])

View File

@@ -5,15 +5,13 @@ from contextlib import closing
from fixtures.zenith_fixtures import ZenithEnv, check_restored_datadir_content
from fixtures.log_helper import log
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test CREATE DATABASE when there have been relmapper changes
#
def test_createdb(zenith_simple_env: ZenithEnv):
env = zenith_simple_env
env.zenith_cli(["branch", "test_createdb", "empty"])
env.zenith_cli.create_branch("test_createdb", "empty")
pg = env.postgres.create_start('test_createdb')
log.info("postgres is running on 'test_createdb' branch")
@@ -29,7 +27,7 @@ def test_createdb(zenith_simple_env: ZenithEnv):
lsn = cur.fetchone()[0]
# Create a branch
env.zenith_cli(["branch", "test_createdb2", "test_createdb@" + lsn])
env.zenith_cli.create_branch("test_createdb2", "test_createdb@" + lsn)
pg2 = env.postgres.create_start('test_createdb2')
@@ -43,7 +41,7 @@ def test_createdb(zenith_simple_env: ZenithEnv):
#
def test_dropdb(zenith_simple_env: ZenithEnv, test_output_dir):
env = zenith_simple_env
env.zenith_cli(["branch", "test_dropdb", "empty"])
env.zenith_cli.create_branch("test_dropdb", "empty")
pg = env.postgres.create_start('test_dropdb')
log.info("postgres is running on 'test_dropdb' branch")
@@ -68,10 +66,10 @@ def test_dropdb(zenith_simple_env: ZenithEnv, test_output_dir):
lsn_after_drop = cur.fetchone()[0]
# Create two branches before and after database drop.
env.zenith_cli(["branch", "test_before_dropdb", "test_dropdb@" + lsn_before_drop])
env.zenith_cli.create_branch("test_before_dropdb", "test_dropdb@" + lsn_before_drop)
pg_before = env.postgres.create_start('test_before_dropdb')
env.zenith_cli(["branch", "test_after_dropdb", "test_dropdb@" + lsn_after_drop])
env.zenith_cli.create_branch("test_after_dropdb", "test_dropdb@" + lsn_after_drop)
pg_after = env.postgres.create_start('test_after_dropdb')
# Test that database exists on the branch before drop

View File

@@ -3,15 +3,13 @@ from contextlib import closing
from fixtures.zenith_fixtures import ZenithEnv
from fixtures.log_helper import log
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test CREATE USER to check shared catalog restore
#
def test_createuser(zenith_simple_env: ZenithEnv):
env = zenith_simple_env
env.zenith_cli(["branch", "test_createuser", "empty"])
env.zenith_cli.create_branch("test_createuser", "empty")
pg = env.postgres.create_start('test_createuser')
log.info("postgres is running on 'test_createuser' branch")
@@ -27,7 +25,7 @@ def test_createuser(zenith_simple_env: ZenithEnv):
lsn = cur.fetchone()[0]
# Create a branch
env.zenith_cli(["branch", "test_createuser2", "test_createuser@" + lsn])
env.zenith_cli.create_branch("test_createuser2", "test_createuser@" + lsn)
pg2 = env.postgres.create_start('test_createuser2')

View File

@@ -0,0 +1,80 @@
from contextlib import closing
import asyncio
import asyncpg
import random
from fixtures.zenith_fixtures import ZenithEnv, Postgres, Safekeeper
from fixtures.log_helper import log
# Test configuration
#
# Create a table with {num_rows} rows, and perform {updates_to_perform} random
# UPDATEs on it, using {num_connections} separate connections.
num_connections = 10
num_rows = 100000
updates_to_perform = 10000
updates_performed = 0
# Run random UPDATEs on test table
async def update_table(pg: Postgres):
global updates_performed
pg_conn = await pg.connect_async()
while updates_performed < updates_to_perform:
updates_performed += 1
id = random.randrange(1, num_rows)
row = await pg_conn.fetchrow(f'UPDATE foo SET counter = counter + 1 WHERE id = {id}')
# Perform aggressive GC with 0 horizon
async def gc(env: ZenithEnv, timeline: str):
psconn = await env.pageserver.connect_async()
while updates_performed < updates_to_perform:
await psconn.execute(f"do_gc {env.initial_tenant.hex} {timeline} 0")
# At the same time, run UPDATEs and GC
async def update_and_gc(env: ZenithEnv, pg: Postgres, timeline: str):
workers = []
for worker_id in range(num_connections):
workers.append(asyncio.create_task(update_table(pg)))
workers.append(asyncio.create_task(gc(env, timeline)))
# await all workers
await asyncio.gather(*workers)
#
# Aggressively force GC, while running queries.
#
# (repro for https://github.com/zenithdb/zenith/issues/1047)
#
def test_gc_aggressive(zenith_simple_env: ZenithEnv):
env = zenith_simple_env
env.zenith_cli.create_branch("test_gc_aggressive", "empty")
pg = env.postgres.create_start('test_gc_aggressive')
log.info('postgres is running on test_gc_aggressive branch')
conn = pg.connect()
cur = conn.cursor()
cur.execute("SHOW zenith.zenith_timeline")
timeline = cur.fetchone()[0]
# Create table, and insert the first 100 rows
cur.execute('CREATE TABLE foo (id int, counter int, t text)')
cur.execute(f'''
INSERT INTO foo
SELECT g, 0, 'long string to consume some space' || g
FROM generate_series(1, {num_rows}) g
''')
cur.execute('CREATE INDEX ON foo(id)')
asyncio.run(update_and_gc(env, pg, timeline))
cur.execute('SELECT COUNT(*), SUM(counter) FROM foo')
assert cur.fetchone() == (num_rows, updates_to_perform)

Some files were not shown because too many files have changed in this diff.