Compare commits


90 Commits

Author SHA1 Message Date
Eric Seppanen
5b79e033bd wip: adapt layered_repository to snapfile 2021-08-19 20:23:11 -07:00
Eric Seppanen
93b2e49939 remove snapshot id; replace with timeline id
The snapshot id doesn't make sense when two snapshots are squashed.
Better to use the timeline id anyway.

It's still a little strange to squash two timelines together, but it
must be possible to flatten history even if there have been many branch
events, so this should be a control plane / snapshot management
problem.
2021-08-18 12:11:22 -07:00
Eric Seppanen
c3833ef0f4 add snapshot squashing
Add logic to squash snapshot files.
Add snaptool (a binary for inspecting and manipulating snapshot files).
Use bookfile 0.3, which allows concurrent reads.
2021-08-18 12:11:22 -07:00
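
A purely conceptual sketch of what squashing two snapshots means here, assuming
the usual merge rule that the newer snapshot's page images win over the older
one's; the names are placeholders, not the snapfile/snaptool implementation:

    use std::collections::HashMap;

    type PageId = u64;        // placeholder for the real page identifier
    type PageImage = Vec<u8>; // placeholder for an 8KB page image

    // Merge two snapshots: keep every page from the older one unless the
    // newer snapshot also contains it, in which case the newer image wins.
    fn squash(
        older: HashMap<PageId, PageImage>,
        newer: HashMap<PageId, PageImage>,
    ) -> HashMap<PageId, PageImage> {
        let mut squashed = older;
        squashed.extend(newer); // later pages overwrite earlier ones
        squashed
    }
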
Eric Seppanen
acfc5c5d21 add another Page constructor, which copies bytes 2021-08-18 12:11:22 -07:00
Eric Seppanen
0a0d12368e snapfile: add snapshot metadata
Add some basic metadata to the snapshot file, including an id number,
predecessor snapshot, and lsn.
2021-08-18 12:11:22 -07:00
Eric Seppanen
8d2b517359 snapfile: split apart code into multiple files
versioned.rs: for things that get serialized and must be versioned to
avoid breaking backwards compatibility.

page.rs: for the Page struct.
2021-08-18 12:11:22 -07:00
Eric Seppanen
26bcd72619 add snapfile
The snapfile crate implements a snapshot file format. The format relies
heavily on the bookfile crate for the structured file format, and the
aversion crate for versioned data structures.

The overall structure of the file looks like this:
- first 4KB: bookfile header
- next 8KB * N: raw page data for N pages
- page index map (from page identifier to data offset)
- bookfile chapter index

When a SnapFile is opened for reading, the page index map is read into
memory; any page can be read directly from the file from that point.
2021-08-18 12:11:22 -07:00
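
As a rough illustration of the layout described above (the constants, offsets
and types are assumptions for this sketch; the real crate reads the file
through bookfile chapters rather than raw offsets):

    use std::collections::HashMap;

    const BOOKFILE_HEADER_SIZE: u64 = 4 * 1024; // first 4KB of the file
    const PAGE_SIZE: u64 = 8 * 1024;            // each raw page slot is 8KB

    // Byte offset of the N'th slot in the raw page data area.
    fn page_data_offset(slot: u64) -> u64 {
        BOOKFILE_HEADER_SIZE + slot * PAGE_SIZE
    }

    // Loaded into memory when a SnapFile is opened for reading, so that any
    // page can then be read directly from the file.
    struct PageIndex {
        // page identifier -> slot in the data area (key/value types are
        // placeholders for this sketch)
        slots: HashMap<u64, u64>,
    }
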
Dmitry Rodionov
4bce65ff9a bump rust version in ci to 1.52.1 2021-08-17 20:31:28 +03:00
Heikki Linnakangas
3319befc30 Revert a bunch of commits that I pushed by accident
This reverts commits:
  e35a5aa550
  a389c2ed7f
  11ebcb531f
  8d2b61f4d1
  882f549236
  ddb7155bbe

Those were follow-up work on top of PR
https://github.com/zenithdb/zenith/pull/430, but they were still very
much not ready.
2021-08-17 19:20:27 +03:00
Heikki Linnakangas
ddb7155bbe WIP Store base images in separate ImageLayers 2021-08-17 18:55:04 +03:00
Heikki Linnakangas
882f549236 WIP: store base images separately 2021-08-17 18:54:53 +03:00
Heikki Linnakangas
8d2b61f4d1 Move code to handle snapshot filenames 2021-08-17 18:54:53 +03:00
Heikki Linnakangas
11ebcb531f Add Gauge for # of layers 2021-08-17 18:54:53 +03:00
Heikki Linnakangas
a389c2ed7f WIP: Track oldest open layer 2021-08-17 18:54:53 +03:00
Heikki Linnakangas
e35a5aa550 WIP: track mem usage 2021-08-17 18:54:53 +03:00
Heikki Linnakangas
45f641cabb Handle last "open" layer specially in LayerMap.
There can be only one "open" layer for each segment: the last one,
implemented by InMemoryLayer. That's the only one that new records can
be appended to. Much of the code needed to distinguish between the last
open layer and the other layers anyway, so make the distinction explicit
in LayerMap.
2021-08-17 18:54:51 +03:00
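
A minimal sketch of the distinction this commit makes explicit; the field and
type names are illustrative, not the actual LayerMap definition:

    use std::collections::BTreeMap;
    use std::sync::Arc;

    type Lsn = u64;       // placeholder for the real LSN type
    trait Layer {}        // frozen, read-only layers
    struct InMemoryLayer; // the single writable layer of a segment

    struct LayerMap {
        // At most one "open" layer per segment; new WAL records are
        // appended only here.
        open_layer: Option<Arc<InMemoryLayer>>,
        // All other layers are frozen, keyed here by a start LSN.
        historic_layers: BTreeMap<Lsn, Arc<dyn Layer>>,
    }
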
Heikki Linnakangas
48f4a7b886 Refactor get_page_at_lsn() logic to layered_repository.rs
There was a lot of duplicated code between the get_page_at_lsn()
implementations in InMemoryLayer and SnapshotLayer. Move the code for
requesting WAL redo from the Layer trait into LayeredTimeline. The
get-function in Layer now just returns the WAL records and base image
to the caller, and the caller is responsible for performing the WAL
redo on them.
2021-08-17 18:54:48 +03:00
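
The resulting split of responsibilities, sketched with illustrative names and
types (the exact signatures in layered_repository.rs may differ; the sketch
assumes the anyhow crate for error handling):

    type Lsn = u64;            // placeholder
    struct WalRecord(Vec<u8>); // placeholder for a decoded WAL record

    struct PageReconstructData {
        base_image: Option<Vec<u8>>,
        records: Vec<WalRecord>,
    }

    trait Layer {
        // A layer only gathers the base image and WAL records needed for
        // the page; it no longer performs WAL redo itself.
        fn get_page_reconstruct_data(&self, blknum: u32, lsn: Lsn)
            -> anyhow::Result<PageReconstructData>;
    }

    // LayeredTimeline::get_page_at_lsn() takes this data, asks the WAL redo
    // manager to apply the records on top of the base image, and returns
    // the materialized page to the caller.
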
Heikki Linnakangas
91f72fabc9 Work with smaller segments.
Split each relish into fixed-size 10 MB segments. Separate layers are
created for each segment. This reduces the write amplification if you
have a large relation and update only parts of it; the downside is
that you have a lot more files. The 10 MB is just a guess; we should
do some modeling and testing in the future to figure out the optimal
size.

Each segment tracks its own size separately. To figure out
the total size of a relish, you need to loop through the segments to
find the highest one that's in use. That's a bit inefficient, but it
will do for now. We might want to add a cache or something later.
2021-08-17 18:54:41 +03:00
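
A sketch of that size lookup, assuming each segment records its own size in
8k blocks (the constant and names are illustrative):

    const SEGMENT_SIZE_BLOCKS: u32 = 10 * 1024 * 1024 / 8192; // 10 MB of 8k blocks

    // Total relish size: walk the per-segment sizes from the highest segment
    // downwards and stop at the last segment that is in use.
    fn relish_size_in_blocks(segment_sizes: &[u32]) -> u32 {
        for (segno, &blocks) in segment_sizes.iter().enumerate().rev() {
            if blocks > 0 {
                return segno as u32 * SEGMENT_SIZE_BLOCKS + blocks;
            }
        }
        0
    }
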
anastasia
cbeb67067c Issue #367.
Change the CLI so that we always create the node from scratch at 'pg start'.
This operation preserves the previously existing config.

Add a new flag '--config-only' to 'pg create'.
If this flag is passed, don't perform a basebackup; just fill in the initial postgresql.conf for the node.
2021-08-17 18:12:31 +03:00
anastasia
921ec390bc cargo fmt 2021-08-16 19:41:07 +03:00
Heikki Linnakangas
f37cb21305 Update Cargo.lock for addition of 'bincode'
Commit 5eb1738e8b added a dependency on the 'bincode' crate. 'cargo build'
adds it to Cargo.lock automatically, so let's remember it.
2021-08-16 19:24:26 +03:00
Heikki Linnakangas
7ee8de3725 Add metrics to WAL redo.
Track the time spent on replaying WAL records by the special Postgres
process, the time spent waiting for access to the Postgres process (since
there is only one per tenant), and the number of records replayed.
2021-08-16 15:49:17 +03:00
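
A minimal sketch of how such metrics can be wired up with the prometheus crate
(pulled in via the new zenith_metrics crate, see the Cargo.lock diff below);
the metric names here are made up for illustration:

    use lazy_static::lazy_static;
    use prometheus::{register_histogram, register_int_counter, Histogram, IntCounter};

    lazy_static! {
        static ref WAL_REDO_TIME: Histogram = register_histogram!(
            "pageserver_wal_redo_seconds", "Time spent replaying WAL records").unwrap();
        static ref WAL_REDO_WAIT_TIME: Histogram = register_histogram!(
            "pageserver_wal_redo_wait_seconds", "Time spent waiting for the redo process").unwrap();
        static ref WAL_REDO_RECORDS: IntCounter = register_int_counter!(
            "pageserver_wal_redo_records_total", "Number of WAL records replayed").unwrap();
    }

    fn replay_records(nrecords: u64) {
        let _timer = WAL_REDO_TIME.start_timer(); // observes the elapsed time on drop
        WAL_REDO_RECORDS.inc_by(nrecords);
        // ... hand the records to the single per-tenant postgres redo process ...
    }
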
Heikki Linnakangas
047a05efb2 Minor formatting and comment fixes. 2021-08-16 15:48:59 +03:00
Dmitry Rodionov
0c4ab80eac try to be more intelligent in WalAcceptor.start, added a bunch of typing sugar to wal acceptor fixtures 2021-08-16 14:27:44 +03:00
Heikki Linnakangas
2450f82de5 Introduce a new "layered" repository implementation.
This replaces the RocksDB based implementation with an approach using
"snapshot files" on disk, and in-memory btreemaps to hold the recent
changes.

This makes the repository implementation a configuration option. You can
choose 'layered' or 'rocksdb' with "zenith init --repository-format=<format>".
The unit tests have been refactored to exercise both implementations.
'layered' is now the default.

Push/pull is not implemented. The 'test_history_inmemory' test has been
commented out accordingly. It's not clear how we will implement that
functionality; probably by copying the snapshot files directly.
2021-08-16 10:06:48 +03:00
Max Sharnoff
5eb1738e8b Rework walkeeper protocol to use libpq (#366)
Most of the work here was done on the postgres side. There's more
information in the commit message there.
 (see: 04cfa326a5)

On the WAL acceptor side, we're now expecting 'START_WAL_PUSH' to
initialize the WAL keeper protocol. Everything else is mostly the same,
with the only real difference being that protocol messages are now
discrete CopyData messages sent over the postgres protocol.

For the sake of documentation, the full set of these messages is:

  <- recv: START_WAL_PUSH query
  <- recv: server info from postgres   (type `ServerInfo`)
  -> send: walkeeper info              (type `SafeKeeperInfo`)
  <- recv: vote info                   (type `RequestVote`)

  if node id mismatch:
    -> send: self node id (type `NodeId`); exit

  -> send: confirm vote (with node id) (type `NodeId`)

  loop:
    <- recv: info and maybe WAL block  (type `SafeKeeperRequest` + bytes)
         (break loop if done)
    -> send: confirm receipt           (type `SafeKeeperResponse`)
2021-08-13 11:25:16 -07:00
Heikki Linnakangas
6e22a8f709 Refactor WAL redo to not use a separate thread.
My main motivation is to make it easier to attribute time spent in WAL
redo to the request that needed the WAL redo. With this patch, the WAL
redo is performed by the requester thread, so it shows up in stack traces
and in 'perf' report as part of the requester's call stack. This is also
slightly simpler (fewer lines of code) and should be a bit faster too.
2021-08-13 17:23:36 +03:00
Heikki Linnakangas
f8de71eab0 Update vendor/postgres to fix race condition leading to CRC errors.
Fixes https://github.com/zenithdb/zenith/issues/413
2021-08-13 14:02:26 +03:00
Heikki Linnakangas
8517d9696d Move gc_iteration() function to Repository trait.
The upcoming layered storage implementation handles GC as a
repository-wide operation because it needs to pay attention to the branch
points of all timelines.
2021-08-12 23:46:01 +03:00
Heikki Linnakangas
97f9021c88 Fix JWT token encoding issue in test.
On my laptop, the server was receiving the token as a string with extra
b'...' escaping, e.g. as "b'eyJ0....0ifQA'" instead of just "eyJ0....0ifQA".
That was causing the test to fail.

I'm using Python 3.9, while the CI is using Python 3.8. I suspect that's
why. My version of pyjwt might be different too.

See also https://github.com/jpadilla/pyjwt/issues/391.
2021-08-12 20:46:14 +03:00
Heikki Linnakangas
0a92b31496 If a pg_regress test fails in CI, save regression.diffs 2021-08-12 18:39:23 +03:00
anastasia
6c3726913f Introduce check for physical relishes.
They represent files and use RelationSizeEntry to track existing and dropped files.
They can be either blocky or non-blocky.
The get_relish_size() and get_rel_exists() functions work with all physical relishes, not only blocky ones.
2021-08-12 14:42:21 +03:00
anastasia
1bfade8adc Issue #330. Use put_unlink for twophase relishes.
Follow PostgreSQL logic: remove Twophase files when the prepared transaction is committed/aborted.

Always store Twophase segments as materialized page images (no wal records).
2021-08-12 14:42:21 +03:00
anastasia
4eebe22fbb cargo fmt 2021-08-12 14:42:21 +03:00
Heikki Linnakangas
20d5e757ca Remove now-unused get_next_tag function.
The only caller was removed by commit c99a211b01.
2021-08-11 22:16:38 +03:00
Heikki Linnakangas
70cb399d59 Add convenience function to create a RowDescriptor message for an int8 col.
Makes the code to construct a result set a bit more terse and readable.
2021-08-11 20:17:33 +03:00
Dmitry Rodionov
ce5333656f Introduce authentication v0.1.
The current state of authentication:
The page server validates the JWT token passed as a password during the
connection phase; later, when performing an action such as creating a
branch, the tenant parameter of the operation is validated to match the
one submitted in the token.
To allow access from the console there is a dedicated scope, PageServerApi,
which allows access to all tenants. See PageServerHandler::check_permission
for the access-validation code.

Because we are in the middle of refactoring the communication layer
involving the WAL proposer protocol and safekeeper<->pageserver traffic,
the safekeeper does not yet check the token passed from compute, and uses
a "hardcoded" token passed via an environment variable to communicate with
the pageserver.

Compute postgres now takes the token from an environment variable and
passes it in the password field of the pageserver connection. It is not
passed through settings because then the user would be able to retrieve it
using pg_settings or SHOW.

I've added a basic test in test_auth.py. Once we add authentication to the
remaining network paths, we should probably enable it by default and
switch all existing tests to use it.
2021-08-11 20:05:54 +03:00
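
A hypothetical sketch of the access check described above (the real logic
lives in PageServerHandler::check_permission; the types and error handling
here are simplified placeholders):

    type ZTenantId = [u8; 16]; // placeholder for the real tenant id type

    enum Scope {
        Tenant,        // token is bound to a single tenant
        PageServerApi, // console scope: may access all tenants
    }

    struct Claims {
        scope: Scope,
        tenant_id: Option<ZTenantId>,
    }

    fn check_permission(claims: &Claims, requested: Option<ZTenantId>) -> Result<(), String> {
        match (&claims.scope, requested) {
            // The PageServerApi scope is allowed to touch any tenant.
            (Scope::PageServerApi, _) => Ok(()),
            // A tenant-scoped token must match the tenant named in the request.
            (Scope::Tenant, Some(t)) if claims.tenant_id == Some(t) => Ok(()),
            _ => Err("tenant id in request does not match the token".to_string()),
        }
    }
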
Arseny Sher
5f0fd093d7 Revert "Walkeeper safe info (#408)"
Temporarily revert commit 0ee2e16b17, as it leads to a
safekeeper state deserialization failure. Let's sort that out and get it back.
2021-08-11 16:26:35 +03:00
Konstantin Knizhnik
0ee2e16b17 Walkeeper safe info (#408)
* Align prev record CRC on 8-bytes boundary

* Update safekeeper in-memory status on receiving a message from the WAL proposer
2021-08-11 09:14:05 +03:00
Konstantin Knizhnik
b607f0fd8e Align prev record CRC on 8-bytes boundary (#407) 2021-08-11 08:56:37 +03:00
anastasia
c99a211b01 Fix CLOG truncate handling in case of wraparound. 2021-08-11 05:49:24 +03:00
anastasia
949ac54401 Add test of clog (pg_xact) truncation 2021-08-11 05:49:24 +03:00
anastasia
e406811375 Fixes for handling SLRU relishes:
replace get_tx_status() with self.get_tx_is_in_progress() to handle xacts in truncated SLRU segments correctly
2021-08-11 05:49:24 +03:00
anastasia
590ace104a Fixes for handling SLRU relishes:
- don't return ZERO_PAGE from get_page_at_lsn_nowait() for truncated SLRU segments;
2021-08-11 05:49:24 +03:00
anastasia
e475f82ff1 Rename get_rel_size() to get_relish_size(). Don't bail if the relish is not found; just return None and let the caller decide how to handle it. 2021-08-11 05:49:24 +03:00
anastasia
a368642790 cargo fmt 2021-08-10 14:26:52 +03:00
anastasia
8c7983797b Remove unused SLRUTruncate ObjectValue 2021-08-10 14:26:32 +03:00
anastasia
5dd9a66f9e Move postgres backend messages to trace level 2021-08-10 14:26:28 +03:00
anastasia
cc877f1980 Add unit test for find_end_of_wal().
Based on a previous attempt to add the same test by @lubennikovaav.
The WAL files are now generated by the initdb command.
2021-08-10 12:30:21 +03:00
anastasia
a5d57ca10b list_nonrels() returns elements in arbitrary order.
Remove incorrect comments that say otherwise.
2021-08-06 15:23:46 +03:00
Konstantin Knizhnik
3ca3394170 [refer #395] Check WAL record CRC in waldecoder (#396) 2021-08-05 16:57:57 +03:00
Heikki Linnakangas
e59e0ae2dc Clarify the terms "WAL service", "safekeeper", "proposer" 2021-08-05 10:27:56 +03:00
Stas Kelvich
ec07acfb12 fix typo in run_initdb() 2021-08-04 23:57:17 +03:00
Stas Kelvich
fa04096733 cargo fmt pass 2021-08-04 23:51:02 +03:00
Dmitry Ivanov
754892402c Enable full feature set for hyper in zenith_utils
Server functionality requires not only the "server" feature flag, but
also either "http1" or "http2" (or both). To make things simpler
(and prevent analogous problems), enable all features.
2021-08-04 21:41:17 +03:00
Stas Kelvich
02b9be488b Disable GC test.
The current GC test is flaky and overly strict. Since we are migrating to the layered repo format
with a different GC implementation, let's just silence this test for now.
2021-08-04 18:33:33 +03:00
Arseny Sher
cc3ac2b74c Allow safekeeper to stream till real end of wal.
Otherwise it prematurely terminates, e.g. in test_compute_restart.

ref #388
2021-08-04 18:03:43 +03:00
Arseny Sher
1dc2ae6968 Point vendor/postgres to main. 2021-08-04 14:21:01 +03:00
Stas Kelvich
04ae63a5c4 use proper postgres version 2021-08-04 14:15:07 +03:00
Arseny Sher
b77fade7b8 Look up wal directory properly in all find_end_of_wal callers.
ref #388
2021-08-04 14:15:07 +03:00
Stas Kelvich
56565c0f58 look up WAL in right directory 2021-08-04 14:15:07 +03:00
Dmitry Ivanov
ed634ec320 Extract message processing function from PostgresBackend's event loop
This patch has been extracted from #348, where it became unnecessary
after we had decided that we didn't want to measure anything inside
PostgresBackend.

IMO the change is good enough to make its way into the codebase,
even though it brings nothing "new" to the code.
2021-08-04 10:49:02 +03:00
Alexey Kondratov
bcaa59c0b9 Test compute restart with AND without safekeepers 2021-08-04 00:05:19 +03:00
Dmitry Ivanov
cb1b4a12a6 Add some prometheus metrics to pageserver
The metrics are served by an http endpoint, which
is meant to be spawned in a new thread.

In the future the endpoint will provide more APIs,
but for the time being, we won't bother with proper routing.
2021-08-03 21:42:24 +03:00
Heikki Linnakangas
9ff122835f Refactor ObjectTags, introducing a new concept called "relish"
This clarifies - I hope - the abstractions between Repository and
ObjectRepository. The ObjectTag struct was a mix of objects that could
be accessed directly through the public Timeline interface, and also
objects that were created and used internally by the ObjectRepository
implementation and not supposed to be accessed directly by the
callers.  With the RelishTag separate from ObjectTag, the distinction
is more clear: RelishTag is used in the public interface, and
ObjectTag is used internally between object_repository.rs and
object_store.rs, and it contains the internal metadata object types.

One awkward thing with the ObjectTag struct was that the Repository
implementation had to distinguish between ObjectTags for relations,
and track the size of the relation, while others were used to store
"blobs".  With the RelishTags, some relishes are considered
"non-blocky", and the Repository implementation is expected to track
their sizes, while others are stored as blobs. I'm not 100% happy with
how RelishTag captures that either: it just knows that some relish
kinds are blocky and some non-blocky, and there's an is_block()
function to check that.  But this does enable size-tracking for SLRUs,
allowing us to treat them more like relations.

This changes the way SLRUs are stored in the repository. Each SLRU
segment, e.g. "pg_clog/0000", "pg_clog/0001", are now handled as a
separate relish.  This removes the need for the SLRU-specific
put_slru_truncate() function in the Timeline trait. SLRU truncation is
now handled by calling put_unlink() on the segment. This is more in
line with how PostgreSQL stores SLRUs and handles their truncation.

The SLRUs are "blocky", so they are accessed one 8k page at a time,
and the repository tracks their size. I considered an alternative design
where we would treat each SLRU segment as non-blocky, and just store
the whole file as one blob. Each SLRU segment is up to 256 kB in size,
which isn't that large, so that might've worked fine, too. One reason
I didn't do that is that it seems better to have the WAL redo
routines be as close as possible to the PostgreSQL routines. It
doesn't matter much in the repository, though; we have to track the
size for relations anyway, so there's not much difference in whether
we also do it for SLRUs.

While working on this, I noticed that the CLOG and MultiXact redo code
did not handle wraparound correctly. We need to fix that, but for now,
I just commented them out with a FIXME comment.
2021-08-03 14:01:05 +03:00
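
An illustrative sketch of the blocky / non-blocky distinction (the variants
and the helper name are simplified assumptions, not the exact RelishTag
definition):

    struct RelTag; // placeholder for (spcnode, dbnode, relnode, forknum)

    enum RelishTag {
        Relation(RelTag),                        // blocky: 8k pages, size is tracked
        Slru { name: &'static str, segno: u32 }, // e.g. a "pg_clog" segment; also blocky
        TwoPhase { xid: u32 },                   // non-blocky: stored as one blob
        ControlFile,                             // non-blocky
    }

    impl RelishTag {
        // Blocky relishes are accessed one 8k page at a time and the
        // repository tracks their size; non-blocky ones are stored whole.
        fn is_blocky(&self) -> bool {
            matches!(self, RelishTag::Relation(_) | RelishTag::Slru { .. })
        }
    }
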
Heikki Linnakangas
f0030ae003 Handle SLRU ZERO records directly by storing an all-zeros page image.
It's simpler than storing the original WAL record.
2021-08-03 13:59:51 +03:00
Heikki Linnakangas
acc0f41985 Don't try to launch duplicate WAL redo thread if tenant already exists.
The codepath for the tenant_create command first launched the WAL redo
thread, and then called branches::create_repo(), which checked whether
the tenant's directory already exists. That's problematic, because
launching the WAL redo thread will run initdb if the directory doesn't
already exist. Race condition: if the tenant already exists, it will
already have a WAL redo thread running, and the old and new WAL redo
threads might try to run initdb at the same time, causing all kinds of
weird failures.

The test_pageserver_api test was failing 100% repeatably on my laptop
because of this. I'm not sure why this doesn't occur on the CI:

    Jul 31 18:05:48.877 INFO running initdb in "./tenants/5227e4eb90894775ac6b8a8c76f24b2e/wal-redo-datadir", location: pageserver::walredo, pageserver/src/walredo.rs:483
    thread 'WAL redo thread' panicked at 'initdb failed: The files belonging to this database system will be owned by user "heikki".
    This user must also own the server process.

    The database cluster will be initialized with locale "C".
    The default database encoding has accordingly been set to "SQL_ASCII".
    The default text search configuration will be set to "english".

    Data page checksums are disabled.

    creating directory ./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir ... ok
    creating subdirectories ... ok
    selecting dynamic shared memory implementation ... posix
    selecting default max_connections ... 100
    selecting default shared_buffers ... 128MB
    selecting default time zone ... Europe/Helsinki
    creating configuration files ... ok
    running bootstrap script ...
    stderr:
    2021-07-31 15:05:48.875 GMT [282569] LOG:  could not open configuration file "/home/heikki/git-sandbox/zenith/test_output/test_tenant_list/repo/./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir/postgresql.conf": No such file or directory
    2021-07-31 15:05:48.875 GMT [282569] FATAL:  configuration file "/home/heikki/git-sandbox/zenith/test_output/test_tenant_list/repo/./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir/postgresql.conf" contains errors
    child process exited with exit code 1
    initdb: removing data directory "./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir"
2021-07-31 18:13:21 +03:00
Alexey Kondratov
bd7d811921 Add libseccomp-dev as a dep to Dockerfile 2021-07-25 17:46:47 +03:00
anastasia
14b6796915 Send pgdata subdirs with basebackup. Fix for 1e6267a. 2021-07-25 17:46:47 +03:00
Max Sharnoff
3f4815efa2 Correct LeSer doc: "Big Endian" -> "Little Endian" (#362) 2021-07-23 12:38:37 -07:00
anastasia
ec03848d2f Fix pageserver.log destination for zenith init.
The problem was caused by merge conflict in 767590b
2021-07-23 16:22:01 +03:00
anastasia
1e6267a35f Get rid of snapshot directory + related code cleanup and refactoring.
- Add new subdir postgres_ffi/samples/ for config file samples.
- Don't copy wal to the new branch on zenith init or zenith branch.
- Import_timeline_wal on zenith init.
2021-07-23 13:21:45 +03:00
Heikki Linnakangas
47824c5fca Remove page server interactive mode.
It was pretty cool, but no one used it, and it had gotten badly out of
date. The main interesting thing with it was to see some basic metrics
on the fly, while the page server is running, but the metrics collection
had been broken for a long time, too. Best to just remove it.
2021-07-23 12:21:21 +03:00
Dmitry Rodionov
767590bbd5 support tenants
This patch adds support for tenants, touching mostly the pageserver.
The directory layout on disk is changed to contain a new layer of indirection:
the path to a particular repository now has the structure <pageserver workdir>/tenants/<tenant
id>. The tenant id has the same format as a timeline id and is included in
pageserver commands when needed. New commands are also available in the
pageserver: tenant_list and tenant_create. This is also reflected in the CLI.
During init a default tenant is created and its id is saved in the CLI config,
so subsequent commands can use it without extra options. The tenant id is also included in
the compute postgres configuration, so it can be passed via ServerInfo to the
safekeeper and in the connection string to the pageserver.
For more info see docs/multitenancy.md.
2021-07-22 20:54:20 +03:00
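
For illustration, the resulting on-disk layout (combining this commit message
with the compute-side comment in the diff below) looks roughly like this; the
id and branch values are placeholders:

    <pageserver workdir>/
        tenants/
            <tenant id>/            # same format as a timeline id
                ...                 # that tenant's repository

    <cli workdir>/
        pgdatadirs/
            tenants/
                <tenant id>/
                    <branch name>/  # compute node data directory
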
Stas Kelvich
d210ba5fdb Update README.md 2021-07-22 20:33:34 +03:00
Dmitry Ivanov
8b656bad5f Add a missing [cfg(test)]
We don't always need to compile tests.
2021-07-22 16:46:27 +03:00
Dmitry Ivanov
97329d4906 Add a test for EOF in walkeeper's background thread
It would be nice to have a proper Timeline mock api,
but this time we'll get by with what we have.
2021-07-22 12:12:55 +03:00
Dmitry Ivanov
6a3b9b1d46 Fix accidental busyloop in walkeeper's background thread
It used to be the case that walkeeper's background thread
failed to recognize the end of stream (EOF) signaled by the
`Ok(None)` result of `FeMessage::read`.
2021-07-22 12:12:55 +03:00
anastasia
c913404739 Redirect log to pageserver.log during zenith init.
Add new module logger.rs that contains shared code to init logging
2021-07-21 18:56:34 +03:00
anastasia
8e42af9b1d Remove unused 'identify_system' pageserver query 2021-07-21 18:55:41 +03:00
Arseny Sher
fe17188464 Alternative way to truncate behind-the-vcl part of log.
This is important to do before bumping the epoch.
2021-07-21 17:27:05 +03:00
Arseny Sher
51b50f5cf5 Fix truncating the wal after VCL. 2021-07-21 17:27:05 +03:00
Arseny Sher
9e3fe2b4d4 Truncate the non-matching part of the log.
ref #296
2021-07-21 17:27:05 +03:00
Arseny Sher
eb1618f2ed TLA+ specification of proposer-acceptor consensus protocol.
And .cfg file for running TLC.

ref #293
2021-07-21 17:27:05 +03:00
Stas Kelvich
791312824d set superuser name in python tests too 2021-07-21 17:22:22 +03:00
Stas Kelvich
a17b2a4364 reflect postgres superuser changes in pageserver->compute connstring 2021-07-21 17:22:22 +03:00
sharnoff
c4b2bf7ebd Use 'zenith_admin' as superuser name in initdb 2021-07-21 17:22:22 +03:00
Konstantin Knizhnik
0723d49e0b Object push (#276)
* Introducing common enum ObjectVal for all values

* Rewrite push mechanism to use raw object copy

* Fix history unit test

* Add skip_nonrel_objects functions for history unit tests
2021-07-21 00:41:57 +03:00
Eric Seppanen
ad79ca05e9 suppress nullptr warnings on auto-generated bindgen unit tests
Hopefully, this will be addressed upstream before too long; see
rust-bindgen issue #1651.
2021-07-20 20:12:15 +03:00
Heikki Linnakangas
325dd41277 Remove unused constructor function.
This was failing to compile with rustc nightly version, because the
datatype of 'fullPageWrites' was changed. See discussion at
https://github.com/zenithdb/zenith/issues/207#issuecomment-881478570.
But since the function is actually unused, let's just remove it.
2021-07-20 16:01:37 +03:00
106 changed files with 10064 additions and 3802 deletions


@@ -7,7 +7,7 @@ executors:
zenith-build-executor:
resource_class: xlarge
docker:
- image: cimg/rust:1.51.0
- image: cimg/rust:1.52.1
jobs:
@@ -229,7 +229,7 @@ jobs:
when: always
command: |
du -sh /tmp/test_output/*
find /tmp/test_output -type f ! -name "pg.log" ! -name "pageserver.log" ! -name "wal_acceptor.log" -delete
find /tmp/test_output -type f ! -name "pg.log" ! -name "pageserver.log" ! -name "wal_acceptor.log" ! -name "regression.diffs" -delete
du -sh /tmp/test_output/*
- store_artifacts:
path: /tmp/test_output

Cargo.lock generated

@@ -82,6 +82,30 @@ version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
[[package]]
name = "aversion"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "41992ab8cfcc3026ef9abceffe0c2b0479c043183fc23825e30d22baab6df334"
dependencies = [
"aversion-macros",
"byteorder",
"serde",
"serde_cbor",
"thiserror",
]
[[package]]
name = "aversion-macros"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5ba5785f953985aa0caca927ba4005880f3b4f53de87f134e810ae3549f744d2"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "aws-creds"
version = "0.26.0"
@@ -107,6 +131,12 @@ dependencies = [
"anyhow",
]
[[package]]
name = "base64"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3441f0f7b02788e948e47f457ca01f1d7e6d92c693bc132c22b087d3141c03ff"
[[package]]
name = "base64"
version = "0.13.0"
@@ -160,6 +190,18 @@ dependencies = [
"generic-array",
]
[[package]]
name = "bookfile"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efa3e2086414e1bbecbc10730f265e5b079ab4ea0b830e7219a70dab6471e753"
dependencies = [
"aversion",
"byteorder",
"serde",
"thiserror",
]
[[package]]
name = "boxfnonce"
version = "0.1.1"
@@ -187,12 +229,6 @@ dependencies = [
"serde",
]
[[package]]
name = "cassowary"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df8670b8c7b9dae1793364eafadf7239c40d669904660c5960d74cfd80b46a53"
[[package]]
name = "cc"
version = "1.0.69"
@@ -268,6 +304,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"bytes",
"hex",
"lazy_static",
"nix",
"pageserver",
@@ -645,6 +682,12 @@ dependencies = [
"tracing",
]
[[package]]
name = "half"
version = "1.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62aca2aba2d62b4a7f5b33f3712cb1b0692779a56fb510499d5c0aa594daeaf3"
[[package]]
name = "hashbrown"
version = "0.9.1"
@@ -660,6 +703,15 @@ version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab5ef0d4909ef3724cc8cce6ccc8572c5c817592e9285f5464f8e86f8bd3726e"
[[package]]
name = "heck"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d621efb26863f0e9924c6ac577e8275e5e6b77455db64ffa6c65c904e9e132c"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "hermit-abi"
version = "0.1.19"
@@ -674,6 +726,9 @@ name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
dependencies = [
"serde",
]
[[package]]
name = "hex-literal"
@@ -828,6 +883,20 @@ dependencies = [
"wasm-bindgen",
]
[[package]]
name = "jsonwebtoken"
version = "7.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "afabcc15e437a6484fc4f12d0fd63068fe457bf93f1c148d3d9649c60b103f32"
dependencies = [
"base64 0.12.3",
"pem",
"ring",
"serde",
"serde_json",
"simple_asn1",
]
[[package]]
name = "lazy_static"
version = "1.4.0"
@@ -1021,6 +1090,17 @@ dependencies = [
"winapi",
]
[[package]]
name = "num-bigint"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "090c7f9998ee0ff65aa5b723e4009f7b217707f1fb5ea551329cc4d6231fb304"
dependencies = [
"autocfg",
"num-integer",
"num-traits",
]
[[package]]
name = "num-integer"
version = "0.1.44"
@@ -1050,12 +1130,6 @@ dependencies = [
"libc",
]
[[package]]
name = "numtoa"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8f8bdf33df195859076e54ab11ee78a1b208382d3a26ec40d142ffc1ecc49ef"
[[package]]
name = "once_cell"
version = "1.8.0"
@@ -1116,6 +1190,7 @@ name = "pageserver"
version = "0.1.0"
dependencies = [
"anyhow",
"bookfile",
"byteorder",
"bytes",
"chrono",
@@ -1136,6 +1211,7 @@ dependencies = [
"regex",
"rocksdb",
"rust-s3",
"scopeguard",
"serde",
"serde_json",
"slog",
@@ -1144,14 +1220,13 @@ dependencies = [
"slog-stdlog",
"slog-term",
"tar",
"termion",
"thiserror",
"tokio",
"tokio-stream",
"toml",
"tui",
"walkdir",
"workspace_hack",
"zenith_metrics",
"zenith_utils",
]
@@ -1186,6 +1261,17 @@ version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "19b17cddbe7ec3f8bc800887bab5e717348c95ea2ca0b1bf0837fb964dc67099"
[[package]]
name = "pem"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd56cbd21fea48d0c440b41cd69c589faacade08c992d9a54e471b79d0fd13eb"
dependencies = [
"base64 0.13.0",
"once_cell",
"regex",
]
[[package]]
name = "percent-encoding"
version = "2.1.0"
@@ -1248,7 +1334,7 @@ version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff3e0f70d32e20923cabf2df02913be7c1842d4c772db8065c00fcfdd1d1bff3"
dependencies = [
"base64",
"base64 0.13.0",
"byteorder",
"bytes",
"fallible-iterator",
@@ -1265,7 +1351,7 @@ name = "postgres-protocol"
version = "0.6.1"
source = "git+https://github.com/zenithdb/rust-postgres.git?rev=9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858#9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858"
dependencies = [
"base64",
"base64 0.13.0",
"byteorder",
"bytes",
"fallible-iterator",
@@ -1317,6 +1403,7 @@ dependencies = [
"regex",
"thiserror",
"workspace_hack",
"zenith_utils",
]
[[package]]
@@ -1325,6 +1412,30 @@ version = "0.2.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac74c624d6b2d21f425f752262f42188365d7b8ff1aff74c82e45136510a4857"
[[package]]
name = "proc-macro-error"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c"
dependencies = [
"proc-macro-error-attr",
"proc-macro2",
"quote",
"syn",
"version_check",
]
[[package]]
name = "proc-macro-error-attr"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869"
dependencies = [
"proc-macro2",
"quote",
"version_check",
]
[[package]]
name = "proc-macro-hack"
version = "0.5.19"
@@ -1346,6 +1457,27 @@ dependencies = [
"unicode-xid",
]
[[package]]
name = "prometheus"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5986aa8d62380092d2f50f8b1cdba9cb9b6731ffd4b25b51fd126b6c3e05b99c"
dependencies = [
"cfg-if 1.0.0",
"fnv",
"lazy_static",
"memchr",
"parking_lot",
"protobuf",
"thiserror",
]
[[package]]
name = "protobuf"
version = "2.24.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db50e77ae196458ccd3dc58a31ea1a90b0698ab1b7928d89f644c25d72070267"
[[package]]
name = "proxy"
version = "0.1.0"
@@ -1430,15 +1562,6 @@ dependencies = [
"bitflags",
]
[[package]]
name = "redox_termios"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8440d8acb4fd3d277125b4bd01a6f38aee8d814b3b5fc09b3f2b825d37d3fe8f"
dependencies = [
"redox_syscall",
]
[[package]]
name = "redox_users"
version = "0.4.0"
@@ -1481,7 +1604,7 @@ version = "0.11.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "246e9f61b9bb77df069a947682be06e31ac43ea37862e244a69f177694ea6d22"
dependencies = [
"base64",
"base64 0.13.0",
"bytes",
"encoding_rs",
"futures-core",
@@ -1510,6 +1633,21 @@ dependencies = [
"winreg",
]
[[package]]
name = "ring"
version = "0.16.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3053cf52e236a3ed746dfc745aa9cacf1b791d846bdaf412f60a8d7d6e17c8fc"
dependencies = [
"cc",
"libc",
"once_cell",
"spin",
"untrusted",
"web-sys",
"winapi",
]
[[package]]
name = "rocksdb"
version = "0.16.0"
@@ -1540,7 +1678,7 @@ dependencies = [
"async-trait",
"aws-creds",
"aws-region",
"base64",
"base64 0.13.0",
"cfg-if 1.0.0",
"chrono",
"futures",
@@ -1673,6 +1811,16 @@ dependencies = [
"xml-rs",
]
[[package]]
name = "serde_cbor"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e18acfa2f90e8b735b2836ab8d538de304cbb6729a7360729ea5a895d15a622"
dependencies = [
"half",
"serde",
]
[[package]]
name = "serde_derive"
version = "1.0.126"
@@ -1735,6 +1883,17 @@ dependencies = [
"libc",
]
[[package]]
name = "simple_asn1"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "692ca13de57ce0613a363c8c2f1de925adebc81b04c923ac60c5488bb44abe4b"
dependencies = [
"chrono",
"num-bigint",
"num-traits",
]
[[package]]
name = "siphasher"
version = "0.3.5"
@@ -1806,6 +1965,21 @@ version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fe0f37c9e8f3c5a4a66ad655a93c74daac4ad00c441533bf5c6e7990bb42604e"
[[package]]
name = "snapfile"
version = "0.1.0"
dependencies = [
"anyhow",
"aversion",
"bookfile",
"hex",
"rand",
"serde",
"structopt",
"tempfile",
"zenith_utils",
]
[[package]]
name = "socket2"
version = "0.4.0"
@@ -1816,6 +1990,12 @@ dependencies = [
"winapi",
]
[[package]]
name = "spin"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e63cff320ae2c57904679ba7cb63280a3dc4613885beafb148ee7bf9aa9042d"
[[package]]
name = "stringprep"
version = "0.1.2"
@@ -1832,6 +2012,30 @@ version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ea5119cdb4c55b55d432abb513a0429384878c15dde60cc77b1c99de1a95a6a"
[[package]]
name = "structopt"
version = "0.3.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69b041cdcb67226aca307e6e7be44c8806423d83e018bd662360a93dabce4d71"
dependencies = [
"clap",
"lazy_static",
"structopt-derive",
]
[[package]]
name = "structopt-derive"
version = "0.4.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7813934aecf5f51a54775e00068c237de98489463968231a51746bbbc03f9c10"
dependencies = [
"heck",
"proc-macro-error",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "subtle"
version = "2.4.1"
@@ -1900,18 +2104,6 @@ dependencies = [
"winapi-util",
]
[[package]]
name = "termion"
version = "1.5.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "077185e2eac69c3f8379a4298e1e07cd36beb962290d4a51199acf0fdc10607e"
dependencies = [
"libc",
"numtoa",
"redox_syscall",
"redox_termios",
]
[[package]]
name = "textwrap"
version = "0.11.0"
@@ -2128,19 +2320,6 @@ version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "59547bce71d9c38b83d9c0e92b6066c4253371f15005def0c30d9657f50c7642"
[[package]]
name = "tui"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ced152a8e9295a5b168adc254074525c17ac4a83c90b2716274cc38118bddc9"
dependencies = [
"bitflags",
"cassowary",
"termion",
"unicode-segmentation",
"unicode-width",
]
[[package]]
name = "typenum"
version = "1.13.0"
@@ -2183,6 +2362,12 @@ version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ccb82d61f80a663efe1f787a51b16b5a51e3314d6ac365b08639f52387b33f3"
[[package]]
name = "untrusted"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a"
[[package]]
name = "url"
version = "2.2.2"
@@ -2229,6 +2414,7 @@ name = "walkeeper"
version = "0.1.0"
dependencies = [
"anyhow",
"bincode",
"byteorder",
"bytes",
"clap",
@@ -2453,6 +2639,14 @@ dependencies = [
"zenith_utils",
]
[[package]]
name = "zenith_metrics"
version = "0.1.0"
dependencies = [
"lazy_static",
"prometheus",
]
[[package]]
name = "zenith_utils"
version = "0.1.0"
@@ -2461,11 +2655,17 @@ dependencies = [
"bincode",
"byteorder",
"bytes",
"hex",
"hex-literal",
"hyper",
"jsonwebtoken",
"lazy_static",
"log",
"postgres",
"rand",
"serde",
"thiserror",
"tokio",
"workspace_hack",
"zenith_metrics",
]


@@ -1,13 +1,15 @@
[workspace]
members = [
"pageserver",
"walkeeper",
"zenith",
"control_plane",
"pageserver",
"postgres_ffi",
"zenith_utils",
"proxy",
"walkeeper",
"workspace_hack",
"proxy"
"zenith",
"zenith_metrics",
"zenith_utils",
"snapfile",
]
[profile.release]


@@ -22,7 +22,7 @@
#
FROM alpine:3.13 as pg-build
RUN apk add --update clang llvm compiler-rt compiler-rt-static lld musl-dev binutils \
make bison flex readline-dev zlib-dev perl linux-headers
make bison flex readline-dev zlib-dev perl linux-headers libseccomp-dev
WORKDIR zenith
COPY ./vendor/postgres vendor/postgres
COPY ./Makefile Makefile
@@ -78,7 +78,7 @@ RUN cargo build --release
# out how to statically link rocksdb or avoid it at all).
#
FROM alpine:3.13
RUN apk add --update openssl build-base
RUN apk add --update openssl build-base libseccomp-dev
RUN apk --no-cache --update --repository https://dl-cdn.alpinelinux.org/alpine/edge/testing add rocksdb
COPY --from=build /zenith/target/release/pageserver /usr/local/bin
COPY --from=build /zenith/target/release/wal_acceptor /usr/local/bin


@@ -51,6 +51,8 @@ postgres: postgres-configure
$(MAKE) -C tmp_install/build MAKELEVEL=0 install
+@echo "Compiling contrib/zenith"
$(MAKE) -C tmp_install/build/contrib/zenith install
+@echo "Compiling contrib/zenith_test_utils"
$(MAKE) -C tmp_install/build/contrib/zenith_test_utils install
postgres-clean:
$(MAKE) -C tmp_install/build MAKELEVEL=0 clean


@@ -24,7 +24,7 @@ pip install pytest psycopg2
2. Build zenith and patched postgres
```sh
git clone --recursive https://github.com/libzenith/zenith.git
git clone --recursive https://github.com/zenithdb/zenith.git
cd zenith
make -j5
```
@@ -34,8 +34,7 @@ make -j5
# Create repository in .zenith with proper paths to binaries and data
# Later that would be responsibility of a package install script
> ./target/debug/zenith init
<...>
new zenith repository was created in .zenith
pageserver init succeeded
# start pageserver
> ./target/debug/zenith start
@@ -87,7 +86,7 @@ waiting for server to start.... done
# but all modifications would not affect data in original postgres
> psql -p55433 -h 127.0.0.1 postgres
postgres=# select * from t;
key | value
key | value
-----+-------
1 | 1
(1 row)
@@ -159,9 +158,3 @@ Integration tests, written in Python using the `pytest` framework.
`test_runner/zenith_regress`:
Quick way to add new SQL regression test to integration tests set.
`/integration_tests`:
Another pack of integration tests. Written in Rust.
[Rust]: https://www.rust-lang.org/learn/get-started


@@ -19,6 +19,7 @@ anyhow = "1.0"
bytes = "1.0.1"
nix = "0.20"
url = "2.2.2"
hex = { version = "0.4.3", features = ["serde"] }
pageserver = { path = "../pageserver" }
walkeeper = { path = "../walkeeper" }


@@ -7,7 +7,7 @@ use std::sync::Arc;
use std::time::Duration;
use std::{collections::BTreeMap, path::PathBuf};
use std::{
fs::{self, OpenOptions},
fs::{self, File, OpenOptions},
io::Read,
};
@@ -15,10 +15,11 @@ use anyhow::{Context, Result};
use lazy_static::lazy_static;
use regex::Regex;
use zenith_utils::connstring::connection_host_port;
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
use zenith_utils::zid::ZTimelineId;
use crate::local_env::LocalEnv;
use pageserver::ZTimelineId;
use crate::storage::PageServerNode;
//
@@ -27,27 +28,36 @@ use crate::storage::PageServerNode;
pub struct ComputeControlPlane {
base_port: u16,
pageserver: Arc<PageServerNode>,
pub nodes: BTreeMap<String, Arc<PostgresNode>>,
pub nodes: BTreeMap<(ZTenantId, String), Arc<PostgresNode>>,
env: LocalEnv,
}
impl ComputeControlPlane {
// Load current nodes with ports from data directories on disk
// Directory structure has the following layout:
// pgdatadirs
// |- tenants
// | |- <tenant_id>
// | | |- <branch name>
pub fn load(env: LocalEnv) -> Result<ComputeControlPlane> {
// TODO: since pageserver do not have config file yet we believe here that
// it is running on default port. Change that when pageserver will have config.
let pageserver = Arc::new(PageServerNode::from_env(&env));
let mut nodes = BTreeMap::default();
let pgdatadirspath = &env.pg_data_dirs_path();
let nodes: Result<BTreeMap<_, _>> = fs::read_dir(&pgdatadirspath)
for tenant_dir in fs::read_dir(&pgdatadirspath)
.with_context(|| format!("failed to list {}", pgdatadirspath.display()))?
.into_iter()
.map(|f| {
PostgresNode::from_dir_entry(f?, &env, &pageserver)
.map(|node| (node.name.clone(), Arc::new(node)))
})
.collect();
let nodes = nodes?;
{
let tenant_dir = tenant_dir?;
for timeline_dir in fs::read_dir(tenant_dir.path())
.with_context(|| format!("failed to list {}", tenant_dir.path().display()))?
{
let node = PostgresNode::from_dir_entry(timeline_dir?, &env, &pageserver)?;
nodes.insert((node.tenantid, node.name.clone()), Arc::new(node));
}
}
Ok(ComputeControlPlane {
base_port: 55431,
@@ -75,44 +85,40 @@ impl ComputeControlPlane {
}
}
/// Connect to a page server, get base backup, and untar it to initialize a
/// new data directory
pub fn new_from_page_server(
pub fn new_node(
&mut self,
is_test: bool,
timelineid: ZTimelineId,
name: &str,
tenantid: ZTenantId,
branch_name: &str,
config_only: bool,
) -> Result<Arc<PostgresNode>> {
let timeline_id = self
.pageserver
.branch_get_by_name(&tenantid, branch_name)?
.timeline_id;
let node = Arc::new(PostgresNode {
name: name.to_owned(),
name: branch_name.to_owned(),
address: SocketAddr::new("127.0.0.1".parse().unwrap(), self.get_port()),
env: self.env.clone(),
pageserver: Arc::clone(&self.pageserver),
is_test,
timelineid,
is_test: false,
timelineid: timeline_id,
tenantid,
});
node.init_from_page_server()?;
self.nodes.insert(node.name.clone(), Arc::clone(&node));
Ok(node)
}
pub fn new_node(&mut self, branch_name: &str) -> Result<Arc<PostgresNode>> {
let timeline_id = self.pageserver.branch_get_by_name(branch_name)?.timeline_id;
let node = self.new_from_page_server(false, timeline_id, branch_name)?;
node.init_from_page_server(self.env.auth_type, config_only)?;
self.nodes
.insert((tenantid, node.name.clone()), Arc::clone(&node));
// Configure the node to stream WAL directly to the pageserver
node.append_conf(
"postgresql.conf",
format!(
concat!(
"shared_preload_libraries = zenith\n",
"synchronous_standby_names = 'pageserver'\n", // TODO: add a new function arg?
"zenith.callmemaybe_connstring = '{}'\n", // FIXME escaping
),
node.connstr()
node.connstr(),
)
.as_str(),
)?;
@@ -123,6 +129,7 @@ impl ComputeControlPlane {
///////////////////////////////////////////////////////////////////////////////
#[derive(Debug)]
pub struct PostgresNode {
pub address: SocketAddr,
name: String,
@@ -130,6 +137,7 @@ pub struct PostgresNode {
pageserver: Arc<PageServerNode>,
is_test: bool,
pub timelineid: ZTimelineId,
pub tenantid: ZTenantId,
}
impl PostgresNode {
@@ -149,6 +157,8 @@ impl PostgresNode {
static ref CONF_PORT_RE: Regex = Regex::new(r"(?m)^\s*port\s*=\s*(\d+)\s*$").unwrap();
static ref CONF_TIMELINE_RE: Regex =
Regex::new(r"(?m)^\s*zenith.zenith_timeline\s*=\s*'(\w+)'\s*$").unwrap();
static ref CONF_TENANT_RE: Regex =
Regex::new(r"(?m)^\s*zenith.zenith_tenant\s*=\s*'(\w+)'\s*$").unwrap();
}
// parse data directory name
@@ -196,6 +206,22 @@ impl PostgresNode {
.parse()
.with_context(|| err_msg)?;
// parse tenant
let err_msg = format!(
"failed to find tenant definition in config file {}",
cfg_path.to_str().unwrap()
);
let tenantid = CONF_TENANT_RE
.captures(config.as_str())
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 1"))?
.iter()
.last()
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 2"))?
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 3"))?
.as_str()
.parse()
.with_context(|| err_msg)?;
// ok now
Ok(PostgresNode {
address: SocketAddr::new("127.0.0.1".parse().unwrap(), port),
@@ -204,47 +230,19 @@ impl PostgresNode {
pageserver: Arc::clone(pageserver),
is_test: false,
timelineid,
tenantid,
})
}
// Connect to a page server, get base backup, and untar it to initialize a
// new data directory
pub fn init_from_page_server(&self) -> Result<()> {
pub fn do_basebackup(&self) -> Result<()> {
let pgdata = self.pgdata();
println!(
"Extracting base backup to create postgres instance: path={} port={}",
pgdata.display(),
self.address.port()
);
// initialize data directory
if self.is_test {
fs::remove_dir_all(&pgdata).ok();
}
let sql = format!("basebackup {}", self.timelineid);
let sql = format!("basebackup {} {}", self.tenantid, self.timelineid);
let mut client = self
.pageserver
.page_server_psql_client()
.with_context(|| "connecting to page server failed")?;
fs::create_dir_all(&pgdata)
.with_context(|| format!("could not create data directory {}", pgdata.display()))?;
fs::set_permissions(pgdata.as_path(), fs::Permissions::from_mode(0o700)).with_context(
|| {
format!(
"could not set permissions in data directory {}",
pgdata.display()
)
},
)?;
// FIXME: The compute node should be able to stream the WAL it needs from the WAL safekeepers or archive.
// But that's not implemented yet. For now, 'pg_wal' is included in the base backup tarball that
// we receive from the Page Server, so we don't need to create the empty 'pg_wal' directory here.
//fs::create_dir_all(pgdata.join("pg_wal"))?;
let mut copyreader = client
.copy_out(sql.as_str())
.with_context(|| "page server 'basebackup' command failed")?;
@@ -260,6 +258,45 @@ impl PostgresNode {
ar.unpack(&pgdata)
.with_context(|| "extracting page backup failed")?;
Ok(())
}
// Connect to a page server, get base backup, and untar it to initialize a
// new data directory
pub fn init_from_page_server(&self, auth_type: AuthType, config_only: bool) -> Result<()> {
let pgdata = self.pgdata();
println!(
"Extracting base backup to create postgres instance: path={} port={}",
pgdata.display(),
self.address.port()
);
// initialize data directory
if self.is_test {
fs::remove_dir_all(&pgdata).ok();
}
fs::create_dir_all(&pgdata)
.with_context(|| format!("could not create data directory {}", pgdata.display()))?;
fs::set_permissions(pgdata.as_path(), fs::Permissions::from_mode(0o700)).with_context(
|| {
format!(
"could not set permissions in data directory {}",
pgdata.display()
)
},
)?;
if config_only {
//Just create an empty config file
File::create(self.pgdata().join("postgresql.conf").to_str().unwrap())?;
} else {
self.do_basebackup()?;
fs::create_dir_all(self.pgdata().join("pg_wal"))?;
fs::create_dir_all(self.pgdata().join("pg_wal").join("archive_status"))?;
}
// wal_log_hints is mandatory when running against pageserver (see gh issue#192)
// TODO: is it possible to check wal_log_hints at pageserver side via XLOG_PARAMETER_CHANGE?
self.append_conf(
@@ -284,30 +321,39 @@ impl PostgresNode {
// Never clean up old WAL. TODO: We should use a replication
// slot or something proper, to prevent the compute node
// from removing WAL that hasn't been streamed to the safekeepr or
// page server yet. But this will do for now.
// page server yet. (gh issue #349)
self.append_conf("postgresql.conf", "wal_keep_size='10TB'\n")?;
// Connect it to the page server.
// set up authentication
let password = if let AuthType::ZenithJWT = auth_type {
"$ZENITH_AUTH_TOKEN"
} else {
""
};
// Configure that node to take pages from pageserver
let (host, port) = connection_host_port(&self.pageserver.connection_config());
let (host, port) = connection_host_port(&self.pageserver.connection_config);
self.append_conf(
"postgresql.conf",
&format!(
"shared_preload_libraries = zenith \n\
zenith.page_server_connstring = 'host={} port={}'\n\
zenith.zenith_timeline='{}'\n",
host, port, self.timelineid
),
format!(
concat!(
"shared_preload_libraries = zenith\n",
// $ZENITH_AUTH_TOKEN will be replaced with value from environment variable during compute pg startup
// it is done this way because otherwise user will be able to retrieve the value using SHOW command or pg_settings
"zenith.page_server_connstring = 'host={} port={} password={}'\n",
"zenith.zenith_timeline='{}'\n",
"zenith.zenith_tenant='{}'\n",
),
host, port, password, self.timelineid, self.tenantid,
)
.as_str(),
)?;
fs::create_dir_all(self.pgdata().join("pg_wal"))?;
fs::create_dir_all(self.pgdata().join("pg_wal").join("archive_status"))?;
Ok(())
}
pub fn pgdata(&self) -> PathBuf {
self.env.pg_data_dir(&self.name)
self.env.pg_data_dir(&self.tenantid, &self.name)
}
pub fn status(&self) -> &str {
@@ -331,45 +377,88 @@ impl PostgresNode {
Ok(())
}
fn pg_ctl(&self, args: &[&str]) -> Result<()> {
fn pg_ctl(&self, args: &[&str], auth_token: &Option<String>) -> Result<()> {
let pg_ctl_path = self.env.pg_bin_dir().join("pg_ctl");
let mut cmd = Command::new(pg_ctl_path);
cmd.args(
[
&[
"-D",
self.pgdata().to_str().unwrap(),
"-l",
self.pgdata().join("pg.log").to_str().unwrap(),
"-w", //wait till pg_ctl actually does what was asked
],
args,
]
.concat(),
)
.env_clear()
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap());
if let Some(token) = auth_token {
cmd.env("ZENITH_AUTH_TOKEN", token);
}
let pg_ctl = cmd.status().with_context(|| "pg_ctl failed")?;
let pg_ctl = Command::new(pg_ctl_path)
.args(
[
&[
"-D",
self.pgdata().to_str().unwrap(),
"-l",
self.pgdata().join("pg.log").to_str().unwrap(),
"-w", //wait till pg_ctl actually does what was asked
],
args,
]
.concat(),
)
.env_clear()
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.status()
.with_context(|| "pg_ctl failed")?;
if !pg_ctl.success() {
anyhow::bail!("pg_ctl failed");
}
Ok(())
}
pub fn start(&self) -> Result<()> {
pub fn start(&self, auth_token: &Option<String>) -> Result<()> {
// Bail if the node already running.
if self.status() == "running" {
anyhow::bail!("The node is already running");
}
// 1. We always start compute node from scratch, so
// if old dir exists, preserve config files and drop the directory
// XXX Now we only use 'postgresql.conf'.
// If we will need 'pg_hba.conf', support it here too
let postgresql_conf_path = self.pgdata().join("postgresql.conf");
let postgresql_conf = fs::read(postgresql_conf_path.clone()).with_context(|| {
format!(
"failed to read config file in {}",
postgresql_conf_path.to_str().unwrap()
)
})?;
println!(
"Destroying postgres data directory '{}'",
self.pgdata().to_str().unwrap()
);
fs::remove_dir_all(&self.pgdata())?;
// 2. Create new node
self.init_from_page_server(self.env.auth_type, false)?;
// 3. Bring back config files
if let Ok(mut file) = OpenOptions::new()
.append(false)
.write(true)
.open(&postgresql_conf_path)
{
file.write_all(&postgresql_conf)?;
file.sync_all()?;
}
// 4. Finally start the compute node postgres
println!("Starting postgres node at '{}'", self.connstr());
self.pg_ctl(&["start"])
self.pg_ctl(&["start"], auth_token)
}
pub fn restart(&self) -> Result<()> {
self.pg_ctl(&["restart"])
pub fn restart(&self, auth_token: &Option<String>) -> Result<()> {
self.pg_ctl(&["restart"], auth_token)
}
pub fn stop(&self, destroy: bool) -> Result<()> {
self.pg_ctl(&["-m", "immediate", "stop"])?;
self.pg_ctl(&["-m", "immediate", "stop"], &None)?;
if destroy {
println!(
"Destroying postgres data directory '{}'",
@@ -382,10 +471,11 @@ impl PostgresNode {
pub fn connstr(&self) -> String {
format!(
"host={} port={} user={}",
"host={} port={} user={} dbname={}",
self.address.ip(),
self.address.port(),
self.whoami()
"zenith_admin",
"postgres"
)
}


@@ -4,19 +4,24 @@
// Now it also provides init method which acts like a stub for proper installation
// script which will use local paths.
//
use anyhow::{anyhow, Result};
use anyhow::{anyhow, Context, Result};
use hex;
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::PathBuf;
use std::process::{Command, Stdio};
use std::{collections::BTreeMap, env};
use url::Url;
use zenith_utils::auth::{encode_from_key_path, Claims, Scope};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
pub type Remotes = BTreeMap<String, String>;
//
// This data structures represent deserialized zenith CLI config
//
#[derive(Serialize, Deserialize, Clone)]
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct LocalEnv {
// Pageserver connection strings
pub pageserver_connstring: String,
@@ -33,6 +38,22 @@ pub struct LocalEnv {
// Path to pageserver binary. Empty for remote pageserver.
pub zenith_distrib_dir: Option<PathBuf>,
// keeping tenant id in config to reduce copy paste when running zenith locally with single tenant
#[serde(with = "hex")]
pub tenantid: ZTenantId,
// Repository format, 'rocksdb' or 'layered' or None for default
pub repository_format: Option<String>,
// jwt auth token used for communication with pageserver
pub auth_token: String,
// used to determine which auth type is used
pub auth_type: AuthType,
// used to issue tokens during e.g pg start
pub private_key_path: PathBuf,
pub remotes: Remotes,
}
@@ -54,11 +75,13 @@ impl LocalEnv {
}
pub fn pg_data_dirs_path(&self) -> PathBuf {
self.base_data_dir.join("pgdatadirs")
self.base_data_dir.join("pgdatadirs").join("tenants")
}
pub fn pg_data_dir(&self, name: &str) -> PathBuf {
self.pg_data_dirs_path().join(name)
pub fn pg_data_dir(&self, tenantid: &ZTenantId, branch_name: &str) -> PathBuf {
self.pg_data_dirs_path()
.join(tenantid.to_string())
.join(branch_name)
}
// TODO: move pageserver files into ./pageserver
@@ -77,7 +100,12 @@ fn base_path() -> PathBuf {
//
// Initialize a new Zenith repository
//
pub fn init(remote_pageserver: Option<&str>) -> Result<()> {
pub fn init(
remote_pageserver: Option<&str>,
tenantid: ZTenantId,
auth_type: AuthType,
repository_format: Option<&str>,
) -> Result<()> {
// check if config already exists
let base_path = base_path();
if base_path.exists() {
@@ -86,6 +114,7 @@ pub fn init(remote_pageserver: Option<&str>) -> Result<()> {
base_path.to_str().unwrap()
);
}
fs::create_dir(&base_path)?;
// ok, now check that expected binaries are present
@@ -102,8 +131,43 @@ pub fn init(remote_pageserver: Option<&str>) -> Result<()> {
anyhow::bail!("Can't find postgres binary at {:?}", pg_distrib_dir);
}
fs::create_dir(&base_path)?;
fs::create_dir(base_path.join("pgdatadirs"))?;
// generate keys for jwt
// openssl genrsa -out private_key.pem 2048
let private_key_path = base_path.join("auth_private_key.pem");
let keygen_output = Command::new("openssl")
.arg("genrsa")
.args(&["-out", private_key_path.to_str().unwrap()])
.arg("2048")
.stdout(Stdio::null())
.output()
.with_context(|| "failed to generate auth private key")?;
if !keygen_output.status.success() {
anyhow::bail!(
"openssl failed: '{}'",
String::from_utf8_lossy(&keygen_output.stderr)
);
}
let public_key_path = base_path.join("auth_public_key.pem");
// openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
let keygen_output = Command::new("openssl")
.arg("rsa")
.args(&["-in", private_key_path.to_str().unwrap()])
.arg("-pubout")
.args(&["-outform", "PEM"])
.args(&["-out", public_key_path.to_str().unwrap()])
.stdout(Stdio::null())
.output()
.with_context(|| "failed to generate auth private key")?;
if !keygen_output.status.success() {
anyhow::bail!(
"openssl failed: '{}'",
String::from_utf8_lossy(&keygen_output.stderr)
);
}
let auth_token =
encode_from_key_path(&Claims::new(None, Scope::PageServerApi), &private_key_path)?;
let conf = if let Some(addr) = remote_pageserver {
// check that addr is parsable
@@ -115,6 +179,11 @@ pub fn init(remote_pageserver: Option<&str>) -> Result<()> {
zenith_distrib_dir: None,
base_data_dir: base_path,
remotes: BTreeMap::default(),
tenantid,
repository_format: repository_format.map(|x| x.into()),
auth_token,
auth_type,
private_key_path,
}
} else {
// Find zenith binaries.
@@ -129,9 +198,16 @@ pub fn init(remote_pageserver: Option<&str>) -> Result<()> {
zenith_distrib_dir: Some(zenith_distrib_dir),
base_data_dir: base_path,
remotes: BTreeMap::default(),
tenantid,
repository_format: repository_format.map(|x| x.into()),
auth_token,
auth_type,
private_key_path,
}
};
fs::create_dir_all(conf.pg_data_dirs_path())?;
let toml = toml::to_string_pretty(&conf)?;
fs::write(conf.base_data_dir.join("config"), toml)?;

View File

@@ -9,6 +9,8 @@ use anyhow::{anyhow, bail, Result};
use nix::sys::signal::{kill, Signal};
use nix::unistd::Pid;
use postgres::{Config, NoTls};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
use crate::local_env::LocalEnv;
use crate::read_pidfile;
@@ -20,44 +22,61 @@ use zenith_utils::connstring::connection_address;
//
// Used in CLI and tests.
//
#[derive(Debug)]
pub struct PageServerNode {
pub kill_on_exit: bool,
pub connection_config: Option<Config>,
pub connection_config: Config,
pub env: LocalEnv,
}
impl PageServerNode {
pub fn from_env(env: &LocalEnv) -> PageServerNode {
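// When JWT auth is enabled, the CLI's auth token is sent to the pageserver
// as the libpq password; otherwise the password is left empty.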
let password = if matches!(env.auth_type, AuthType::ZenithJWT) {
&env.auth_token
} else {
""
};
PageServerNode {
kill_on_exit: false,
connection_config: None, // default
connection_config: Self::default_config(password), // default
env: env.clone(),
}
}
fn default_config() -> Config {
"postgresql://no_user@localhost:64000/no_db"
fn default_config(password: &str) -> Config {
format!("postgresql://no_user:{}@localhost:64000/no_db", password)
.parse()
.unwrap()
}
pub fn connection_config(&self) -> Config {
match &self.connection_config {
Some(config) => config.clone(),
None => Self::default_config(),
}
}
pub fn init(&self) -> Result<()> {
pub fn init(
&self,
create_tenant: Option<&str>,
enable_auth: bool,
repository_format: Option<&str>,
) -> Result<()> {
let mut cmd = Command::new(self.env.pageserver_bin()?);
let mut args = vec![
"--init",
"-D",
self.env.base_data_dir.to_str().unwrap(),
"--postgres-distrib",
self.env.pg_distrib_dir.to_str().unwrap(),
];
if enable_auth {
args.extend(&["--auth-validation-public-key-path", "auth_public_key.pem"]);
args.extend(&["--auth-type", "ZenithJWT"]);
}
if let Some(repo_format) = repository_format {
args.extend(&["--repository-format", repo_format]);
}
create_tenant.map(|tenantid| args.extend(&["--create-tenant", tenantid]));
let status = cmd
.args(&[
"--init",
"-D",
self.env.base_data_dir.to_str().unwrap(),
"--postgres-distrib",
self.env.pg_distrib_dir.to_str().unwrap(),
])
.args(args)
.env_clear()
.env("RUST_BACKTRACE", "1")
.status()
@@ -81,7 +100,7 @@ impl PageServerNode {
pub fn start(&self) -> Result<()> {
println!(
"Starting pageserver at '{}' in {}",
connection_address(&self.connection_config()),
connection_address(&self.connection_config),
self.repo_path().display()
);
@@ -101,18 +120,21 @@ impl PageServerNode {
// It takes a while for the page server to start up. Wait until it is
// open for business.
for retries in 1..15 {
let client = self.page_server_psql_client();
if client.is_ok() {
break;
} else {
println!("Pageserver not responding yet, retrying ({})...", retries);
thread::sleep(Duration::from_secs(1));
match self.page_server_psql_client() {
Ok(_) => {
println!("Pageserver started");
return Ok(());
}
Err(err) => {
println!(
"Pageserver not responding yet, err {} retrying ({})...",
err, retries
);
thread::sleep(Duration::from_secs(1));
}
}
}
println!("Pageserver started");
Ok(())
bail!("pageserver failed to start");
}
pub fn stop(&self) -> Result<()> {
@@ -123,7 +145,7 @@ impl PageServerNode {
}
// wait for pageserver stop
let address = connection_address(&self.connection_config());
let address = connection_address(&self.connection_config);
for _ in 0..5 {
let stream = TcpStream::connect(&address);
thread::sleep(Duration::from_secs(1));
@@ -138,19 +160,40 @@ impl PageServerNode {
}
pub fn page_server_psql(&self, sql: &str) -> Vec<postgres::SimpleQueryMessage> {
let mut client = self.connection_config().connect(NoTls).unwrap();
let mut client = self.connection_config.connect(NoTls).unwrap();
println!("Pageserver query: '{}'", sql);
client.simple_query(sql).unwrap()
}
pub fn page_server_psql_client(&self) -> Result<postgres::Client, postgres::Error> {
self.connection_config().connect(NoTls)
self.connection_config.connect(NoTls)
}
pub fn branches_list(&self) -> Result<Vec<BranchInfo>> {
pub fn tenants_list(&self) -> Result<Vec<String>> {
let mut client = self.page_server_psql_client()?;
let query_result = client.simple_query("branch_list")?;
let query_result = client.simple_query("tenant_list")?;
let tenants_json = query_result
.first()
.map(|msg| match msg {
postgres::SimpleQueryMessage::Row(row) => row.get(0),
_ => None,
})
.flatten()
.ok_or_else(|| anyhow!("missing tenants"))?;
Ok(serde_json::from_str(tenants_json)?)
}
pub fn tenant_create(&self, tenantid: &ZTenantId) -> Result<()> {
let mut client = self.page_server_psql_client()?;
client.simple_query(format!("tenant_create {}", tenantid).as_str())?;
Ok(())
}
pub fn branches_list(&self, tenantid: &ZTenantId) -> Result<Vec<BranchInfo>> {
let mut client = self.page_server_psql_client()?;
let query_result = client.simple_query(&format!("branch_list {}", tenantid))?;
let branches_json = query_result
.first()
.map(|msg| match msg {
@@ -160,14 +203,19 @@ impl PageServerNode {
.flatten()
.ok_or_else(|| anyhow!("missing branches"))?;
let res: Vec<BranchInfo> = serde_json::from_str(branches_json)?;
Ok(res)
Ok(serde_json::from_str(branches_json)?)
}
pub fn branch_create(&self, name: &str, startpoint: &str) -> Result<BranchInfo> {
pub fn branch_create(
&self,
branch_name: &str,
startpoint: &str,
tenantid: &ZTenantId,
) -> Result<BranchInfo> {
let mut client = self.page_server_psql_client()?;
let query_result =
client.simple_query(format!("branch_create {} {}", name, startpoint).as_str())?;
let query_result = client.simple_query(
format!("branch_create {} {} {}", tenantid, branch_name, startpoint).as_str(),
)?;
let branch_json = query_result
.first()
@@ -190,8 +238,12 @@ impl PageServerNode {
}
// TODO: make this a separate request type and avoid loading all the branches
pub fn branch_get_by_name(&self, name: &str) -> Result<BranchInfo> {
let branch_infos = self.branches_list()?;
pub fn branch_get_by_name(
&self,
tenantid: &ZTenantId,
branch_name: &str,
) -> Result<BranchInfo> {
let branch_infos = self.branches_list(tenantid)?;
let branche_by_name: Result<HashMap<String, BranchInfo>> = branch_infos
.into_iter()
.map(|branch_info| Ok((branch_info.name.clone(), branch_info)))
@@ -199,8 +251,8 @@ impl PageServerNode {
let branche_by_name = branche_by_name?;
let branch = branche_by_name
.get(name)
.ok_or_else(|| anyhow!("Branch {} not found", name))?;
.get(branch_name)
.ok_or_else(|| anyhow!("Branch {} not found", branch_name))?;
Ok(branch.clone())
}

30
docs/authentication.md Normal file
View File

@@ -0,0 +1,30 @@
## Authentication
### Overview
Authentication currently uses JWT tokens for communication between compute and pageserver and between the CLI and pageserver. The JWT token is signed with RSA keys. The CLI generates a key pair during `zenith init`, using the following openssl commands:
```bash
openssl genrsa -out private_key.pem 2048
openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
```
The CLI also generates a signed token and saves it in the config for later access to the pageserver. Authentication is currently optional. The pageserver has two config variables, `auth_validation_public_key_path` and `auth_type`: when the auth type is present and set to `ZenithJWT`, the pageserver requires authentication for incoming connections. The JWT itself is passed in the password field of the connection string. One caveat for psql: it silently truncates passwords to 100 characters, so to pass a JWT correctly via psql you have to either use the PGPASSWORD environment variable or store the password in the psql config file.
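For example, a minimal sketch of working around that caveat (the token value is a placeholder; host, port, user and database come from the default local pageserver connection string used elsewhere in this changeset):
```bash
# Pass the JWT through PGPASSWORD so psql's 100-character password
# truncation does not mangle it.
export PGPASSWORD="<JWT token from the zenith CLI config>"
psql "host=localhost port=64000 user=no_user dbname=no_db"
```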
There is currently no authentication between compute and safekeepers, because this communication layer is under heavy refactoring; support for authentication will be added there after the refactoring. For now the safekeeper supports a "hardcoded" token, passed via an environment variable, so that it can use the callmemaybe command on the pageserver.
Compute uses a token passed via an environment variable to communicate with the pageserver, and in the future with the safekeeper too.
JWT authentication currently supports two scopes: tenant and pageserverapi. The tenant scope is intended for tenant-related API calls, e.g. create_branch; a compute launched for a particular tenant also uses this scope. The pageserverapi scope is intended to be used by the console to manage the pageserver. For now there is only one management operation: create tenant.
Examples of token generation in Python:
```python
# generate pageserverapi token
management_token = jwt.encode({"scope": "pageserverapi"}, auth_keys.priv, algorithm="RS256")
# generate tenant token
tenant_token = jwt.encode({"scope": "tenant", "tenant_id": ps.initial_tenant}, auth_keys.priv, algorithm="RS256")
```
Utility functions for working with JWTs in Rust are located in `zenith_utils/src/auth.rs`.
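As a rough sketch, issuing a token from Rust might look like the following, reusing the helpers the CLI calls during `zenith init` (`Claims`, `Scope`, `encode_from_key_path`); the module path, signatures and key path here are assumptions and may differ from the actual code:
```rust
use std::path::Path;

use zenith_utils::auth::{encode_from_key_path, Claims, Scope};

/// Sketch only: sign a pageserver-management token with the private key
/// generated by `zenith init`. The key path below is illustrative.
fn issue_management_token() -> anyhow::Result<String> {
    let private_key_path = Path::new(".zenith/auth_private_key.pem");
    // No tenant id is needed for the pageserverapi scope; a tenant-scoped
    // token would carry Some(tenantid) and the tenant scope instead.
    let claims = Claims::new(None, Scope::PageServerApi);
    let token = encode_from_key_path(&claims, private_key_path)?;
    Ok(token)
}
```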

59
docs/multitenancy.md Normal file
View File

@@ -0,0 +1,59 @@
## Multitenancy
### Overview
Zenith supports multitenancy: one pageserver can serve multiple tenants at once. Tenants are managed via the zenith CLI. During pageserver setup a tenant can be created with `zenith init --create-tenant`. Tenants can also be added to the system on the fly, without a pageserver restart, using `zenith tenant create`. Tenants use random identifiers that can be represented as 32-character hexadecimal strings, and `zenith tenant create` accepts the desired tenant id as an optional argument. The concept of timelines/branches works independently per tenant.
### Tenants in other commands
By default `zenith init` creates a new tenant on the pageserver. The newly created tenant's id is saved to the CLI config, so other commands can use it automatically when no explicit `--tenantid=<tenantid>` argument is provided. The tenantid therefore appears mostly in the internal pageserver interface, whose commands take a tenantid argument to indicate which tenant an operation applies to. The CLI also supports creating new tenants.
Example CLI commands:
```sh
zenith tenant list
zenith tenant create # generates new id
zenith tenant create ee6016ec31116c1b7c33dfdfca38892f
zenith pg create main # default tenant from zenith init
zenith pg create main --tenantid=ee6016ec31116c1b7c33dfdfca38892f
zenith branch --tenantid=ee6016ec31116c1b7c33dfdfca38892f
```
### Data layout
On the pageserver, tenants introduce one level of indirection, so the data directory is structured the following way:
```
<pageserver working directory>
├── pageserver.log
├── pageserver.pid
├── pageserver.toml
└── tenants
├── 537cffa58a4fa557e49e19951b5a9d6b
├── de182bc61fb11a5a6b390a8aed3a804a
└── ee6016ec31116c1b7c33dfdfca38891f
```
WAL redo activity, timelines, and snapshots are managed independently for each tenant.
The local environment (used, for example, in tests) also gains a new level of indirection for tenants. It affects the `pgdatadirs` directory, which now contains a `tenants` subdirectory, so the structure looks like this:
```
pgdatadirs
└── tenants
├── de182bc61fb11a5a6b390a8aed3a804a
│ └── main
└── ee6016ec31116c1b7c33dfdfca38892f
└── main
```
### Changes to postgres
The tenant id is passed to postgres via a GUC, the same way as the timeline. The tenant id is added to the commands issued to the pageserver, namely pagestream and callmemaybe. The tenant id also exists in the ServerInfo structure; this is needed to pass the value to the WAL receiver so that it can forward it to the pageserver.
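As a sketch, the compute node's postgresql.conf might carry these values through GUCs like the following (the GUC names are illustrative assumptions, not taken from this changeset; the tenant id is the example used above):
```
# postgresql.conf sketch; GUC names are assumptions for illustration
zenith.zenith_tenant = 'ee6016ec31116c1b7c33dfdfca38892f'
zenith.zenith_timeline = '<timeline id>'
```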
### Safety
For now a particular tenant can appear only on a particular pageserver. The set of WAL acceptors is also pinned to a particular (tenantid, timeline) pair, so there can be only one writer for any given (tenantid, timeline).

View File

@@ -0,0 +1,25 @@
version: "3"
services:
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
# ports:
# - "9090:9090"
# TODO: find a proper portable solution
network_mode: "host"
grafana:
image: grafana/grafana:latest
volumes:
- ./grafana.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
- GF_AUTH_DISABLE_LOGIN_FORM=true
# ports:
# - "3000:3000"
# TODO: find a proper portable solution
network_mode: "host"

12
monitoring/grafana.yaml Normal file
View File

@@ -0,0 +1,12 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
orgId: 1
url: http://localhost:9090
basicAuth: false
isDefault: false
version: 1
editable: false

View File

@@ -0,0 +1,5 @@
scrape_configs:
- job_name: 'default'
scrape_interval: 10s
static_configs:
- targets: ['localhost:9898']

View File

@@ -7,6 +7,7 @@ edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bookfile = "^0.3"
chrono = "0.4.19"
rand = "0.8.3"
regex = "1.4.5"
@@ -21,12 +22,10 @@ slog-term = "2.8.0"
slog = "2.7.0"
log = "0.4.14"
clap = "2.33.0"
termion = "1.5.6"
tui = "0.14.0"
daemonize = "0.4.1"
rust-s3 = { version = "0.27.0-rc4", features = ["no-verify-ssl"] }
tokio = { version = "1.3.0", features = ["full"] }
tokio-stream = { version = "0.1.4" }
tokio = { version = "1.5.0", features = ["full"] }
tokio-stream = { version = "0.1.5" }
postgres-types = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres-protocol = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
@@ -43,7 +42,9 @@ serde = { version = "1.0", features = ["derive"] }
serde_json = "1"
fs_extra = "1.2.0"
toml = "0.5"
scopeguard = "1.1.0"
postgres_ffi = { path = "../postgres_ffi" }
zenith_metrics = { path = "../zenith_metrics" }
zenith_utils = { path = "../zenith_utils" }
workspace_hack = { path = "../workspace_hack" }

View File

@@ -4,22 +4,22 @@
//! TODO: this module has nothing to do with PostgreSQL pg_basebackup.
//! It could use a better name.
//!
//! Stateless Postgres compute node is launched by sending tarball which contains non-relational data (multixacts, clog, filenodemaps, twophase files)
//! and generate pg_control and dummy segment of WAL. This module is responsible for creation of such tarball from snapshot directory and
//! data stored in object storage.
//! Stateless Postgres compute node is launched by sending a tarball
//! which contains non-relational data (multixacts, clog, filenodemaps, twophase files),
//! generated pg_control and dummy segment of WAL.
//! This module is responsible for creation of such tarball
//! from data stored in object storage.
//!
use crate::ZTimelineId;
use bytes::{BufMut, BytesMut};
use log::*;
use std::io;
use std::io::Write;
use std::sync::Arc;
use std::time::SystemTime;
use tar::{Builder, Header};
use walkdir::WalkDir;
use tar::{Builder, EntryType, Header};
use crate::object_key::*;
use crate::relish::*;
use crate::repository::Timeline;
use postgres_ffi::relfile_utils::*;
use postgres_ffi::xlog_utils::*;
use postgres_ffi::*;
use zenith_utils::lsn::Lsn;
@@ -32,90 +32,60 @@ pub struct Basebackup<'a> {
timeline: &'a Arc<dyn Timeline>,
lsn: Lsn,
prev_record_lsn: Lsn,
snappath: String,
slru_buf: [u8; pg_constants::SLRU_SEG_SIZE],
slru_segno: u32,
slru_path: &'static str,
}
impl<'a> Basebackup<'a> {
pub fn new(
write: &'a mut dyn Write,
timelineid: ZTimelineId,
timeline: &'a Arc<dyn Timeline>,
lsn: Lsn,
prev_record_lsn: Lsn,
snapshot_lsn: Lsn,
) -> Basebackup<'a> {
Basebackup {
ar: Builder::new(write),
timeline,
lsn,
prev_record_lsn,
snappath: format!("timelines/{}/snapshots/{:016X}", timelineid, snapshot_lsn.0),
slru_path: "",
slru_segno: u32::MAX,
slru_buf: [0u8; pg_constants::SLRU_SEG_SIZE],
}
}
pub fn send_tarball(&mut self) -> anyhow::Result<()> {
debug!("sending tarball of snapshot in {}", self.snappath);
for entry in WalkDir::new(&self.snappath) {
let entry = entry?;
let fullpath = entry.path();
let relpath = entry.path().strip_prefix(&self.snappath).unwrap();
// Create pgdata subdirs structure
for dir in pg_constants::PGDATA_SUBDIRS.iter() {
info!("send subdir {:?}", *dir);
let header = new_tar_header_dir(*dir)?;
self.ar.append(&header, &mut io::empty())?;
}
if relpath.to_str().unwrap() == "" {
continue;
}
if entry.file_type().is_dir() {
trace!(
"sending dir {} as {}",
fullpath.display(),
relpath.display()
);
self.ar.append_dir(relpath, fullpath)?;
} else if entry.file_type().is_symlink() {
error!("ignoring symlink in snapshot dir");
} else if entry.file_type().is_file() {
if !is_rel_file_path(relpath.to_str().unwrap()) {
if entry.file_name() != "pg_filenode.map" // these files will be generated from object storage
&& !relpath.starts_with("pg_xact/")
&& !relpath.starts_with("pg_multixact/")
{
trace!("sending {}", relpath.display());
self.ar.append_path_with_name(fullpath, relpath)?;
}
} else {
// relation pages are loaded on demand and should not be included in tarball
trace!("not sending {}", relpath.display());
}
// Send empty config files.
for filepath in pg_constants::PGDATA_SPECIAL_FILES.iter() {
if *filepath == "pg_hba.conf" {
let data = pg_constants::PG_HBA.as_bytes();
let header = new_tar_header(&filepath, data.len() as u64)?;
self.ar.append(&header, &data[..])?;
} else {
error!("unknown file type: {}", fullpath.display());
let header = new_tar_header(&filepath, 0)?;
self.ar.append(&header, &mut io::empty())?;
}
}
// Generate non-relational files.
// Iteration is in sorted order: all objects of the same type are grouped and traversed
// in key-ascending order. For example, all pg_xact records precede pg_multixact records and are sorted by block number.
// This makes it easy to construct SLRU segments (32 blocks each).
// Gather non-relational files from object storage pages.
for obj in self.timeline.list_nonrels(self.lsn)? {
match obj {
ObjectTag::Clog(slru) => self.add_slru_segment("pg_xact", &obj, slru.blknum)?,
ObjectTag::MultiXactMembers(slru) => {
self.add_slru_segment("pg_multixact/members", &obj, slru.blknum)?
RelishTag::Slru { slru, segno } => {
self.add_slru_segment(slru, segno)?;
}
ObjectTag::MultiXactOffsets(slru) => {
self.add_slru_segment("pg_multixact/offsets", &obj, slru.blknum)?
RelishTag::FileNodeMap { spcnode, dbnode } => {
self.add_relmap_file(spcnode, dbnode)?;
}
RelishTag::TwoPhase { xid } => {
self.add_twophase_file(xid)?;
}
ObjectTag::FileNodeMap(db) => self.add_relmap_file(&obj, &db)?,
ObjectTag::TwoPhase(prepare) => self.add_twophase_file(&obj, prepare.xid)?,
_ => {}
}
}
self.finish_slru_segment()?; // write last non-completed SLRU segment (if any)
// Generate pg_control and bootstrap WAL segment.
self.add_pgcontrol_file()?;
self.ar.finish()?;
debug!("all tarred up!");
@@ -123,70 +93,81 @@ impl<'a> Basebackup<'a> {
}
//
// Generate SLRU segment files from repository. Path identifies SLRU kind (pg_xact, pg_multixact/members, ...).
// Initially pass an empty string.
// Generate SLRU segment files from repository.
//
fn add_slru_segment(
&mut self,
path: &'static str,
tag: &ObjectTag,
page: u32,
) -> anyhow::Result<()> {
let img = self
fn add_slru_segment(&mut self, slru: SlruKind, segno: u32) -> anyhow::Result<()> {
let seg_size = self
.timeline
.get_page_at_lsn_nowait(*tag, self.lsn, false)?;
// Zero length image indicates truncated segment: just skip it
if !img.is_empty() {
assert!(img.len() == pg_constants::BLCKSZ as usize);
let segno = page / pg_constants::SLRU_PAGES_PER_SEGMENT;
if self.slru_path != "" && (self.slru_segno != segno || self.slru_path != path) {
// Switch to new segment: save old one
let segname = format!("{}/{:>04X}", self.slru_path, self.slru_segno);
let header = new_tar_header(&segname, pg_constants::SLRU_SEG_SIZE as u64)?;
self.ar.append(&header, &self.slru_buf[..])?;
self.slru_buf = [0u8; pg_constants::SLRU_SEG_SIZE]; // reinitialize segment buffer
}
self.slru_segno = segno;
self.slru_path = path;
let offs_start = (page % pg_constants::SLRU_PAGES_PER_SEGMENT) as usize
* pg_constants::BLCKSZ as usize;
let offs_end = offs_start + pg_constants::BLCKSZ as usize;
self.slru_buf[offs_start..offs_end].copy_from_slice(&img);
}
Ok(())
}
.get_relish_size(RelishTag::Slru { slru, segno }, self.lsn)?;
//
// We flush SLRU segments to the tarball once they are completed.
// This method is used to flush the last (possibly incomplete) segment.
//
fn finish_slru_segment(&mut self) -> anyhow::Result<()> {
if self.slru_path != "" {
// i.e. there is some incomplete segment
let segname = format!("{}/{:>04X}", self.slru_path, self.slru_segno);
let header = new_tar_header(&segname, pg_constants::SLRU_SEG_SIZE as u64)?;
self.ar.append(&header, &self.slru_buf[..])?;
if seg_size == None {
trace!(
"SLRU segment {}/{:>04X} was truncated",
slru.to_str(),
segno
);
return Ok(());
}
let nblocks = seg_size.unwrap();
let mut slru_buf: Vec<u8> =
Vec::with_capacity(nblocks as usize * pg_constants::BLCKSZ as usize);
for blknum in 0..nblocks {
let img = self.timeline.get_page_at_lsn_nowait(
RelishTag::Slru { slru, segno },
blknum,
self.lsn,
)?;
assert!(img.len() == pg_constants::BLCKSZ as usize);
slru_buf.extend_from_slice(&img);
}
let segname = format!("{}/{:>04X}", slru.to_str(), segno);
let header = new_tar_header(&segname, slru_buf.len() as u64)?;
self.ar.append(&header, slru_buf.as_slice())?;
trace!("Added to basebackup slru {} relsize {}", segname, nblocks);
Ok(())
}
//
// Extract pg_filenode.map files from repository
// Along with them also send PG_VERSION for each database.
//
fn add_relmap_file(&mut self, tag: &ObjectTag, db: &DatabaseTag) -> anyhow::Result<()> {
let img = self
.timeline
.get_page_at_lsn_nowait(*tag, self.lsn, false)?;
info!("add_relmap_file {:?}", db);
let path = if db.spcnode == pg_constants::GLOBALTABLESPACE_OID {
fn add_relmap_file(&mut self, spcnode: u32, dbnode: u32) -> anyhow::Result<()> {
let img = self.timeline.get_page_at_lsn_nowait(
RelishTag::FileNodeMap { spcnode, dbnode },
0,
self.lsn,
)?;
let path = if spcnode == pg_constants::GLOBALTABLESPACE_OID {
let dst_path = "PG_VERSION";
let version_bytes = pg_constants::PG_MAJORVERSION.as_bytes();
let header = new_tar_header(&dst_path, version_bytes.len() as u64)?;
self.ar.append(&header, &version_bytes[..])?;
let dst_path = format!("global/PG_VERSION");
let header = new_tar_header(&dst_path, version_bytes.len() as u64)?;
self.ar.append(&header, &version_bytes[..])?;
String::from("global/pg_filenode.map") // filenode map for global tablespace
} else {
// User defined tablespaces are not supported
assert!(db.spcnode == pg_constants::DEFAULTTABLESPACE_OID);
let src_path = format!("{}/base/1/PG_VERSION", self.snappath);
let dst_path = format!("base/{}/PG_VERSION", db.dbnode);
self.ar.append_path_with_name(&src_path, &dst_path)?;
format!("base/{}/pg_filenode.map", db.dbnode)
assert!(spcnode == pg_constants::DEFAULTTABLESPACE_OID);
// Append dir path for each database
let path = format!("base/{}", dbnode);
let header = new_tar_header_dir(&path)?;
self.ar.append(&header, &mut io::empty())?;
let dst_path = format!("base/{}/PG_VERSION", dbnode);
let version_bytes = pg_constants::PG_MAJORVERSION.as_bytes();
let header = new_tar_header(&dst_path, version_bytes.len() as u64)?;
self.ar.append(&header, &version_bytes[..])?;
format!("base/{}/pg_filenode.map", dbnode)
};
assert!(img.len() == 512);
let header = new_tar_header(&path, img.len() as u64)?;
@@ -197,14 +178,11 @@ impl<'a> Basebackup<'a> {
//
// Extract twophase state files
//
fn add_twophase_file(&mut self, tag: &ObjectTag, xid: TransactionId) -> anyhow::Result<()> {
// Include in tarball two-phase files only of in-progress transactions
if self.timeline.get_tx_status(xid, self.lsn)?
== pg_constants::TRANSACTION_STATUS_IN_PROGRESS
fn add_twophase_file(&mut self, xid: TransactionId) -> anyhow::Result<()> {
if let Ok(img) =
self.timeline
.get_page_at_lsn_nowait(RelishTag::TwoPhase { xid }, 0, self.lsn)
{
let img = self
.timeline
.get_page_at_lsn_nowait(*tag, self.lsn, false)?;
let mut buf = BytesMut::new();
buf.extend_from_slice(&img[..]);
let crc = crc32c::crc32c(&img[..]);
@@ -217,15 +195,16 @@ impl<'a> Basebackup<'a> {
}
//
// Add generated pg_control file
// Add generated pg_control file and bootstrap WAL segment.
// Also send zenith.signal file with extra bootstrap data.
//
fn add_pgcontrol_file(&mut self) -> anyhow::Result<()> {
let checkpoint_bytes =
self.timeline
.get_page_at_lsn_nowait(ObjectTag::Checkpoint, self.lsn, false)?;
.get_page_at_lsn_nowait(RelishTag::Checkpoint, 0, self.lsn)?;
let pg_control_bytes =
self.timeline
.get_page_at_lsn_nowait(ObjectTag::ControlFile, self.lsn, false)?;
.get_page_at_lsn_nowait(RelishTag::ControlFile, 0, self.lsn)?;
let mut pg_control = ControlFileData::decode(&pg_control_bytes)?;
let mut checkpoint = CheckPoint::decode(&checkpoint_bytes)?;
@@ -239,12 +218,13 @@ impl<'a> Basebackup<'a> {
checkpoint.redo = self.lsn.0 + self.lsn.calc_padding(8u32);
//reset some fields we don't want to preserve
//TODO Check this.
//We may need to determine the value from twophase data.
checkpoint.oldestActiveXid = 0;
//save new values in pg_control
pg_control.checkPoint = checkpoint_lsn;
pg_control.checkPointCopy = checkpoint;
info!("pg_control.state = {}", pg_control.state);
pg_control.state = pg_constants::DB_SHUTDOWNED;
// add zenith.signal file
@@ -267,65 +247,12 @@ impl<'a> Basebackup<'a> {
let wal_file_path = format!("pg_wal/{}", wal_file_name);
let header = new_tar_header(&wal_file_path, pg_constants::WAL_SEGMENT_SIZE as u64)?;
let wal_seg = generate_wal_segment(&pg_control);
assert!(wal_seg.len() == pg_constants::WAL_SEGMENT_SIZE);
self.ar.append(&header, &wal_seg[..])?;
Ok(())
}
}
///
/// Parse a path, relative to the root of PostgreSQL data directory, as
/// a PostgreSQL relation data file.
///
fn parse_rel_file_path(path: &str) -> Result<(), FilePathError> {
/*
* Relation data files can be in one of the following directories:
*
* global/
* shared relations
*
* base/<db oid>/
* regular relations, default tablespace
*
* pg_tblspc/<tblspc oid>/<tblspc version>/
* within a non-default tablespace (the name of the directory
* depends on version)
*
* And the relation data files themselves have a filename like:
*
* <oid>.<segment number>
*/
if let Some(fname) = path.strip_prefix("global/") {
let (_relnode, _forknum, _segno) = parse_relfilename(fname)?;
Ok(())
} else if let Some(dbpath) = path.strip_prefix("base/") {
let mut s = dbpath.split('/');
let dbnode_str = s.next().ok_or(FilePathError::InvalidFileName)?;
let _dbnode = dbnode_str.parse::<u32>()?;
let fname = s.next().ok_or(FilePathError::InvalidFileName)?;
if s.next().is_some() {
return Err(FilePathError::InvalidFileName);
};
let (_relnode, _forknum, _segno) = parse_relfilename(fname)?;
Ok(())
} else if path.strip_prefix("pg_tblspc/").is_some() {
// TODO
error!("tablespaces not implemented yet");
Err(FilePathError::InvalidFileName)
} else {
Err(FilePathError::InvalidFileName)
}
}
//
// Check if it is a relational file
//
fn is_rel_file_path(path: &str) -> bool {
parse_rel_file_path(path).is_ok()
}
//
// Create new tarball entry header
//
@@ -344,3 +271,20 @@ fn new_tar_header(path: &str, size: u64) -> anyhow::Result<Header> {
header.set_cksum();
Ok(header)
}
fn new_tar_header_dir(path: &str) -> anyhow::Result<Header> {
let mut header = Header::new_gnu();
header.set_size(0);
header.set_path(path)?;
header.set_mode(0o755); // drwxr-xr-x
header.set_entry_type(EntryType::dir());
header.set_mtime(
// use current time as last modified time
SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs(),
);
header.set_cksum();
Ok(header)
}

View File

@@ -6,38 +6,42 @@ use log::*;
use serde::{Deserialize, Serialize};
use std::{
env,
fs::{File, OpenOptions},
io,
net::TcpListener,
path::{Path, PathBuf},
process::exit,
str::FromStr,
sync::Arc,
thread,
time::Duration,
};
use zenith_utils::{auth::JwtAuth, postgres_backend::AuthType};
use anyhow::{Context, Result};
use anyhow::{ensure, Result};
use clap::{App, Arg, ArgMatches};
use daemonize::Daemonize;
use slog::{Drain, FnValue};
use pageserver::{branches, page_cache, page_service, tui, PageServerConf};
use pageserver::{branches, logger, page_cache, page_service, PageServerConf, RepositoryFormat};
use zenith_utils::http_endpoint;
const DEFAULT_LISTEN_ADDR: &str = "127.0.0.1:64000";
const DEFAULT_HTTP_ENDPOINT_ADDR: &str = "127.0.0.1:9898";
const DEFAULT_GC_HORIZON: u64 = 64 * 1024 * 1024;
const DEFAULT_GC_PERIOD: Duration = Duration::from_secs(100);
const DEFAULT_GC_PERIOD: Duration = Duration::from_secs(10);
const DEFAULT_WAL_REDOERS: usize = 1;
const DEFAULT_SUPERUSER: &str = "zenith_admin";
/// String arguments that can be declared via CLI or config file
#[derive(Serialize, Deserialize)]
struct CfgFileParams {
listen_addr: Option<String>,
http_endpoint_addr: Option<String>,
gc_horizon: Option<String>,
gc_period: Option<String>,
wal_redoers: Option<String>,
pg_distrib_dir: Option<String>,
auth_validation_public_key_path: Option<String>,
auth_type: Option<String>,
repository_format: Option<String>,
}
impl CfgFileParams {
@@ -49,10 +53,13 @@ impl CfgFileParams {
Self {
listen_addr: get_arg("listen"),
http_endpoint_addr: get_arg("http_endpoint"),
gc_horizon: get_arg("gc_horizon"),
gc_period: get_arg("gc_period"),
wal_redoers: get_arg("wal_redoers"),
pg_distrib_dir: get_arg("postgres-distrib"),
auth_validation_public_key_path: get_arg("auth-validation-public-key-path"),
auth_type: get_arg("auth-type"),
repository_format: get_arg("repository-format"),
}
}
@@ -61,20 +68,32 @@ impl CfgFileParams {
// TODO cleaner way to do this
Self {
listen_addr: self.listen_addr.or(other.listen_addr),
http_endpoint_addr: self.http_endpoint_addr.or(other.http_endpoint_addr),
gc_horizon: self.gc_horizon.or(other.gc_horizon),
gc_period: self.gc_period.or(other.gc_period),
wal_redoers: self.wal_redoers.or(other.wal_redoers),
pg_distrib_dir: self.pg_distrib_dir.or(other.pg_distrib_dir),
auth_validation_public_key_path: self
.auth_validation_public_key_path
.or(other.auth_validation_public_key_path),
auth_type: self.auth_type.or(other.auth_type),
repository_format: self.repository_format.or(other.repository_format),
}
}
/// Create a PageServerConf from these string parameters
fn try_into_config(&self) -> Result<PageServerConf> {
let workdir = PathBuf::from(".");
let listen_addr = match self.listen_addr.as_ref() {
Some(addr) => addr.clone(),
None => DEFAULT_LISTEN_ADDR.to_owned(),
};
let http_endpoint_addr = match self.http_endpoint_addr.as_ref() {
Some(addr) => addr.clone(),
None => DEFAULT_HTTP_ENDPOINT_ADDR.to_owned(),
};
let gc_horizon: u64 = match self.gc_horizon.as_ref() {
Some(horizon_str) => horizon_str.parse()?,
None => DEFAULT_GC_HORIZON,
@@ -84,32 +103,67 @@ impl CfgFileParams {
None => DEFAULT_GC_PERIOD,
};
let wal_redoers = match self.wal_redoers.as_ref() {
Some(wal_redoers_str) => wal_redoers_str.parse::<usize>()?,
None => DEFAULT_WAL_REDOERS,
};
let pg_distrib_dir = match self.pg_distrib_dir.as_ref() {
Some(pg_distrib_dir_str) => PathBuf::from(pg_distrib_dir_str),
None => env::current_dir()?.join("tmp_install"),
};
let auth_validation_public_key_path = self
.auth_validation_public_key_path
.as_ref()
.map(PathBuf::from);
let auth_type = self
.auth_type
.as_ref()
.map_or(Ok(AuthType::Trust), |auth_type| {
AuthType::from_str(&auth_type)
})?;
if !pg_distrib_dir.join("bin/postgres").exists() {
anyhow::bail!("Can't find postgres binary at {:?}", pg_distrib_dir);
}
if auth_type == AuthType::ZenithJWT {
ensure!(
auth_validation_public_key_path.is_some(),
"Missing auth_validation_public_key_path when auth_type is ZenithJWT"
);
let path_ref = auth_validation_public_key_path.as_ref().unwrap();
ensure!(
path_ref.exists(),
format!("Can't find auth_validation_public_key at {:?}", path_ref)
);
}
let repository_format = match self.repository_format.as_ref() {
Some(repo_format_str) if repo_format_str == "rocksdb" => RepositoryFormat::RocksDb,
Some(repo_format_str) if repo_format_str == "layered" => RepositoryFormat::Layered,
Some(repo_format_str) => anyhow::bail!(
"invalid --repository-format '{}', must be 'rocksdb' or 'layered'",
repo_format_str
),
None => RepositoryFormat::Layered, // default
};
Ok(PageServerConf {
daemonize: false,
interactive: false,
materialize: false,
listen_addr,
http_endpoint_addr,
gc_horizon,
gc_period,
wal_redoers,
workdir: PathBuf::from("."),
superuser: String::from(DEFAULT_SUPERUSER),
workdir,
pg_distrib_dir,
auth_validation_public_key_path,
auth_type,
repository_format,
})
}
}
@@ -124,19 +178,6 @@ fn main() -> Result<()> {
.takes_value(true)
.help("listen for incoming page requests on ip:port (default: 127.0.0.1:5430)"),
)
.arg(
Arg::with_name("interactive")
.short("i")
.long("interactive")
.takes_value(false)
.help("Interactive mode"),
)
.arg(
Arg::with_name("materialize")
.long("materialize")
.takes_value(false)
.help("Materialize pages constructed by get_page_at"),
)
.arg(
Arg::with_name("daemonize")
.short("d")
@@ -162,12 +203,6 @@ fn main() -> Result<()> {
.takes_value(true)
.help("Interval between garbage collector iterations"),
)
.arg(
Arg::with_name("wal_redoers")
.long("wal_redoers")
.takes_value(true)
.help("Number of wal-redo postgres instances"),
)
.arg(
Arg::with_name("workdir")
.short("D")
@@ -181,6 +216,31 @@ fn main() -> Result<()> {
.takes_value(true)
.help("Postgres distribution directory"),
)
.arg(
Arg::with_name("create-tenant")
.long("create-tenant")
.takes_value(true)
.help("Create tenant during init")
.requires("init"),
)
.arg(
Arg::with_name("auth-validation-public-key-path")
.long("auth-validation-public-key-path")
.takes_value(true)
.help("Path to public key used to validate jwt signature"),
)
.arg(
Arg::with_name("auth-type")
.long("auth-type")
.takes_value(true)
.help("Authentication scheme type. One of: Trust, MD5, ZenithJWT"),
)
.arg(
Arg::with_name("repository-format")
.long("repository-format")
.takes_value(true)
.help("Which repository implementation to use, 'rocksdb' or 'layered'"),
)
.get_matches();
let workdir = Path::new(arg_matches.value_of("workdir").unwrap_or(".zenith"));
@@ -189,6 +249,8 @@ fn main() -> Result<()> {
let args_params = CfgFileParams::from_args(&arg_matches);
let init = arg_matches.is_present("init");
let create_tenant = arg_matches.value_of("create-tenant");
let params = if init {
// We're initializing the repo, so there's no config file yet
args_params
@@ -199,20 +261,16 @@ fn main() -> Result<()> {
args_params.or(file_params)
};
// Set CWD to workdir for non-daemon modes
env::set_current_dir(&workdir)?;
// Ensure the config is valid, even if just init-ing
let mut conf = params.try_into_config()?;
conf.daemonize = arg_matches.is_present("daemonize");
conf.interactive = arg_matches.is_present("interactive");
conf.materialize = arg_matches.is_present("materialize");
if init && (conf.daemonize || conf.interactive) {
eprintln!("--daemonize and --interactive may not be used with --init");
exit(1);
}
if conf.daemonize && conf.interactive {
eprintln!("--daemonize is not allowed with --interactive: choose one");
if init && conf.daemonize {
eprintln!("--daemonize cannot be used with --init");
exit(1);
}
@@ -223,53 +281,26 @@ fn main() -> Result<()> {
// Create repo and exit if init was requested
if init {
branches::init_repo(conf, &workdir)?;
branches::init_pageserver(conf, create_tenant)?;
// write the config file
let cfg_file_contents = toml::to_string_pretty(&params)?;
// TODO support enable-auth flag
std::fs::write(&cfg_file_path, cfg_file_contents)?;
return Ok(());
}
// Set CWD to workdir for non-daemon modes
env::set_current_dir(&workdir)?;
start_pageserver(conf)
}
fn start_pageserver(conf: &'static PageServerConf) -> Result<()> {
let log_filename = "pageserver.log";
// Don't open the same file for output multiple times;
// the different fds could overwrite each other's output.
let log_file = OpenOptions::new()
.create(true)
.append(true)
.open(&log_filename)
.with_context(|| format!("failed to open {:?}", &log_filename))?;
// Initialize logger
let logger_file = log_file.try_clone().unwrap();
let _scope_guard = init_logging(&conf, logger_file)?;
let (_scope_guard, log_file) = logger::init_logging(&conf, "pageserver.log")?;
let _log_guard = slog_stdlog::init()?;
// Note: this `info!(...)` macro comes from `log` crate
info!("standard logging redirected to slog");
let tui_thread = if conf.interactive {
// Initialize the UI
Some(
thread::Builder::new()
.name("UI thread".into())
.spawn(|| {
let _ = tui::ui_main();
})
.unwrap(),
)
} else {
None
};
// TODO: Check that it looks like a valid repository before going further
if conf.daemonize {
@@ -292,6 +323,11 @@ fn start_pageserver(conf: &'static PageServerConf) -> Result<()> {
}
}
// Spawn a new thread for the http endpoint
thread::Builder::new()
.name("Metrics thread".into())
.spawn(move || http_endpoint::thread_main(conf.http_endpoint_addr.clone()))?;
// Check that we can bind to address before starting threads to simplify shutdown
// sequence if port is occupied.
info!("Starting pageserver on {}", conf.listen_addr);
@@ -300,72 +336,27 @@ fn start_pageserver(conf: &'static PageServerConf) -> Result<()> {
// Initialize page cache, this will spawn walredo_thread
page_cache::init(conf);
// initialize authentication for incoming connections
let auth = match &conf.auth_type {
AuthType::Trust | AuthType::MD5 => Arc::new(None),
AuthType::ZenithJWT => {
// unwrap is ok because check is performed when creating config, so path is set and file exists
let key_path = conf.auth_validation_public_key_path.as_ref().unwrap();
Arc::new(Some(JwtAuth::from_key_path(key_path)?))
}
};
info!("Using auth: {:#?}", conf.auth_type);
// Spawn a thread to listen for connections. It will spawn further threads
// for each connection.
let page_service_thread = thread::Builder::new()
.name("Page Service thread".into())
.spawn(move || page_service::thread_main(conf, pageserver_listener))?;
.spawn(move || {
page_service::thread_main(conf, auth, pageserver_listener, conf.auth_type)
})?;
if let Some(tui_thread) = tui_thread {
// The TUI thread exits when the user asks to Quit.
tui_thread.join().unwrap();
} else {
page_service_thread
.join()
.expect("Page service thread has panicked")?
}
page_service_thread
.join()
.expect("Page service thread has panicked")?;
Ok(())
}
fn init_logging(
conf: &PageServerConf,
log_file: File,
) -> Result<slog_scope::GlobalLoggerGuard, io::Error> {
if conf.interactive {
Ok(tui::init_logging())
} else if conf.daemonize {
let decorator = slog_term::PlainSyncDecorator::new(log_file);
let drain = slog_term::FullFormat::new(decorator).build();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
false
});
let drain = std::sync::Mutex::new(drain).fuse();
let logger = slog::Logger::root(
drain,
slog::o!(
"location" =>
FnValue(move |record| {
format!("{}, {}:{}",
record.module(),
record.file(),
record.line()
)
}
)
),
);
Ok(slog_scope::set_global_logger(logger))
} else {
let decorator = slog_term::TermDecorator::new().build();
let drain = slog_term::FullFormat::new(decorator).build().fuse();
let drain = slog_async::Async::new(drain).chan_size(1000).build().fuse();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver")
{
return true;
}
false
})
.fuse();
let logger = slog::Logger::root(drain, slog::o!());
Ok(slog_scope::set_global_logger(logger))
}
}

View File

@@ -4,25 +4,27 @@
// TODO: move all paths construction to conf impl
//
use anyhow::{anyhow, bail, Context, Result};
use fs::File;
use postgres_ffi::{pg_constants, xlog_utils, ControlFileData};
use rand::Rng;
use anyhow::{bail, ensure, Context, Result};
use postgres_ffi::ControlFileData;
use serde::{Deserialize, Serialize};
use std::env;
use std::io::{Read, Write};
use std::{
collections::HashMap,
fs, io,
path::{Path, PathBuf},
fs,
path::Path,
process::{Command, Stdio},
str::FromStr,
sync::Arc,
};
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use log::*;
use zenith_utils::lsn::Lsn;
use crate::logger;
use crate::object_repository::ObjectRepository;
use crate::page_cache;
use crate::restore_local_repo;
use crate::{repository::Repository, PageServerConf, ZTimelineId};
use crate::walredo::WalRedoManager;
use crate::{repository::Repository, PageServerConf, RepositoryFormat};
#[derive(Serialize, Deserialize, Clone)]
pub struct BranchInfo {
@@ -39,57 +41,51 @@ pub struct PointInTime {
pub lsn: Lsn,
}
pub fn init_repo(conf: &'static PageServerConf, repo_dir: &Path) -> Result<()> {
pub fn init_pageserver(conf: &'static PageServerConf, create_tenant: Option<&str>) -> Result<()> {
// Initialize logger
let (_scope_guard, _log_file) = logger::init_logging(&conf, "pageserver.log")?;
let _log_guard = slog_stdlog::init()?;
if let Some(tenantid) = create_tenant {
let tenantid = ZTenantId::from_str(tenantid)?;
println!("initializing tenantid {}", tenantid);
create_repo(
conf,
tenantid,
Arc::new(crate::walredo::DummyRedoManager {}),
)
.with_context(|| "failed to create repo")?;
}
fs::create_dir_all(conf.tenants_path())?;
println!("pageserver init succeeded");
Ok(())
}
pub fn create_repo(
conf: &'static PageServerConf,
tenantid: ZTenantId,
wal_redo_manager: Arc<dyn WalRedoManager + Send + Sync>,
) -> Result<Arc<dyn Repository>> {
let repo_dir = conf.tenant_path(&tenantid);
if repo_dir.exists() {
bail!("repo for {} already exists", tenantid)
}
// top-level dir may exist if we are creating it through CLI
fs::create_dir_all(repo_dir)
fs::create_dir_all(&repo_dir)
.with_context(|| format!("could not create directory {}", repo_dir.display()))?;
env::set_current_dir(repo_dir)?;
// Note: this `info!(...)` macro comes from `log` crate
info!("standard logging redirected to slog");
fs::create_dir(std::path::Path::new("timelines"))?;
fs::create_dir(std::path::Path::new("refs"))?;
fs::create_dir(std::path::Path::new("refs").join("branches"))?;
fs::create_dir(std::path::Path::new("refs").join("tags"))?;
fs::create_dir(conf.timelines_path(&tenantid))?;
fs::create_dir_all(conf.branches_path(&tenantid))?;
fs::create_dir_all(conf.tags_path(&tenantid))?;
println!("created directory structure in {}", repo_dir.display());
info!("created directory structure in {}", repo_dir.display());
// Run initdb
//
// We create the cluster temporarily in a "tmp" directory inside the repository,
// and move it to the right location from there.
let tmppath = std::path::Path::new("tmp");
print!("running initdb... ");
io::stdout().flush()?;
let initdb_path = conf.pg_bin_dir().join("initdb");
let initdb_otput = Command::new(initdb_path)
.args(&["-D", tmppath.to_str().unwrap()])
.arg("--no-instructions")
.env_clear()
.env("LD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.stdout(Stdio::null())
.output()
.with_context(|| "failed to execute initdb")?;
if !initdb_otput.status.success() {
anyhow::bail!(
"initdb failed: '{}'",
String::from_utf8_lossy(&initdb_otput.stderr)
);
}
println!("initdb succeeded");
// Read control file to extract the LSN and system id
let controlfile_path = tmppath.join("global").join("pg_control");
let controlfile = ControlFileData::decode(&fs::read(controlfile_path)?)?;
// let systemid = controlfile.system_identifier;
let lsn = controlfile.checkPoint;
let lsnstr = format!("{:016X}", lsn);
// Bootstrap the repository by loading the newly-initdb'd cluster into 'main' branch.
let tli = create_timeline(conf, None)?;
let timelinedir = conf.timeline_path(tli);
let tli = create_timeline(conf, None, &tenantid)?;
// We don't use page_cache here, because we don't want to spawn the WAL redo thread during
// repository initialization.
@@ -100,54 +96,129 @@ pub fn init_repo(conf: &'static PageServerConf, repo_dir: &Path) -> Result<()> {
// and we failed to run initdb again in the same directory. This has been solved for the
// rapid init+start case now, but the general race condition remains if you restart the
// server quickly.
let storage = crate::rocksdb_storage::RocksObjectStore::create(conf)?;
//let storage = crate::inmem_storage::InmemObjectStore::create(conf)?;
let repo: Arc<dyn Repository + Sync + Send> =
match conf.repository_format {
RepositoryFormat::Layered => Arc::new(
crate::layered_repository::LayeredRepository::new(conf, wal_redo_manager, tenantid),
),
RepositoryFormat::RocksDb => {
let obj_store = crate::rocksdb_storage::RocksObjectStore::create(conf, &tenantid)?;
let repo = crate::object_repository::ObjectRepository::new(
conf,
std::sync::Arc::new(storage),
std::sync::Arc::new(crate::walredo::DummyRedoManager {}),
);
let timeline = repo.create_empty_timeline(tli, Lsn(lsn))?;
Arc::new(ObjectRepository::new(
conf,
Arc::new(obj_store),
wal_redo_manager,
tenantid,
))
}
};
restore_local_repo::import_timeline_from_postgres_datadir(&tmppath, &*timeline, Lsn(lsn))?;
// Load data into pageserver
// TODO To implement zenith import we need to
// move data loading out of create_repo()
bootstrap_timeline(conf, tenantid, tli, &*repo)?;
// Move the initial WAL file
fs::rename(
tmppath.join("pg_wal").join("000000010000000000000001"),
timelinedir
.join("wal")
.join("000000010000000000000001.partial"),
)?;
println!("created initial timeline {}", tli);
Ok(repo)
}
let data = tli.to_string();
fs::write(conf.branch_path("main"), data)?;
println!("created main branch");
// Returns checkpoint LSN from controlfile
fn get_lsn_from_controlfile(path: &Path) -> Result<Lsn> {
// Read control file to extract the LSN
let controlfile_path = path.join("global").join("pg_control");
let controlfile = ControlFileData::decode(&fs::read(controlfile_path)?)?;
let lsn = controlfile.checkPoint;
// Remove pg_wal
fs::remove_dir_all(tmppath.join("pg_wal"))?;
Ok(Lsn(lsn))
}
// Move the data directory as an initial base backup.
// FIXME: It would be enough to only copy the non-relation files here, the relation
// data was already loaded into the repository.
let target = timelinedir.join("snapshots").join(&lsnstr);
fs::rename(tmppath, &target)?;
// Create the cluster temporarily in the given initdbpath directory inside the repository
// to get bootstrap data for timeline initialization.
//
fn run_initdb(conf: &'static PageServerConf, initdbpath: &Path) -> Result<()> {
info!("running initdb... ");
println!(
"new zenith repository was created in {}",
repo_dir.display()
);
let initdb_path = conf.pg_bin_dir().join("initdb");
let initdb_output = Command::new(initdb_path)
.args(&["-D", initdbpath.to_str().unwrap()])
.args(&["-U", &conf.superuser])
.arg("--no-instructions")
.env_clear()
.env("LD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.stdout(Stdio::null())
.output()
.with_context(|| "failed to execute initdb")?;
if !initdb_output.status.success() {
anyhow::bail!(
"initdb failed: '{}'",
String::from_utf8_lossy(&initdb_output.stderr)
);
}
info!("initdb succeeded");
Ok(())
}
pub(crate) fn get_branches(conf: &PageServerConf) -> Result<Vec<BranchInfo>> {
let repo = page_cache::get_repository();
//
// - run initdb to initialize a temporary instance and get bootstrap data
// - after initialization completes, remove the temp dir.
//
fn bootstrap_timeline(
conf: &'static PageServerConf,
tenantid: ZTenantId,
tli: ZTimelineId,
repo: &dyn Repository,
) -> Result<()> {
let initdb_path = conf.tenant_path(&tenantid).join("tmp");
// Init a temporary repo to get bootstrap data
run_initdb(conf, &initdb_path)?;
let pgdata_path = initdb_path;
let lsn = get_lsn_from_controlfile(&pgdata_path)?;
info!("bootstrap_timeline {:?} at lsn {}", pgdata_path, lsn);
let timeline = repo.create_empty_timeline(tli, lsn)?;
restore_local_repo::import_timeline_from_postgres_datadir(&pgdata_path, &*timeline, lsn)?;
let wal_dir = pgdata_path.join("pg_wal");
restore_local_repo::import_timeline_wal(&wal_dir, &*timeline, timeline.get_last_record_lsn())?;
println!(
"created initial timeline {} timeline.lsn {}",
tli,
timeline.get_last_record_lsn()
);
let data = tli.to_string();
fs::write(conf.branch_path("main", &tenantid), data)?;
println!("created main branch");
// Remove temp dir. We don't need it anymore
fs::remove_dir_all(pgdata_path)?;
Ok(())
}
pub(crate) fn get_tenants(conf: &PageServerConf) -> Result<Vec<String>> {
let tenants_dir = conf.tenants_path();
std::fs::read_dir(&tenants_dir)?
.map(|dir_entry_res| {
let dir_entry = dir_entry_res?;
ensure!(dir_entry.file_type()?.is_dir());
Ok(dir_entry.file_name().to_str().unwrap().to_owned())
})
.collect()
}
pub(crate) fn get_branches(conf: &PageServerConf, tenantid: &ZTenantId) -> Result<Vec<BranchInfo>> {
let repo = page_cache::get_repository_for_tenant(tenantid)?;
// Each branch has a corresponding record (text file) in the refs/branches
// with timeline_id.
let branches_dir = std::path::Path::new("refs").join("branches");
let branches_dir = conf.branches_path(tenantid);
std::fs::read_dir(&branches_dir)?
.map(|dir_entry_res| {
@@ -160,7 +231,7 @@ pub(crate) fn get_branches(conf: &PageServerConf) -> Result<Vec<BranchInfo>> {
.map(|timeline| timeline.get_last_valid_lsn())
.ok();
let ancestor_path = conf.ancestor_path(timeline_id);
let ancestor_path = conf.ancestor_path(&timeline_id, tenantid);
let mut ancestor_id: Option<String> = None;
let mut ancestor_lsn: Option<String> = None;
@@ -193,41 +264,19 @@ pub(crate) fn get_branches(conf: &PageServerConf) -> Result<Vec<BranchInfo>> {
.collect()
}
pub(crate) fn get_system_id(conf: &PageServerConf) -> Result<u64> {
// let branches = get_branches();
let branches_dir = std::path::Path::new("refs").join("branches");
let branches = std::fs::read_dir(&branches_dir)?
.map(|dir_entry_res| {
let dir_entry = dir_entry_res?;
let name = dir_entry.file_name().to_str().unwrap().to_string();
let timeline_id = std::fs::read_to_string(dir_entry.path())?.parse::<ZTimelineId>()?;
Ok((name, timeline_id))
})
.collect::<Result<HashMap<String, ZTimelineId>>>()?;
let main_tli = branches
.get("main")
.ok_or_else(|| anyhow!("Branch main not found"))?;
let (_, main_snap_dir) = find_latest_snapshot(conf, *main_tli)?;
let controlfile_path = main_snap_dir.join("global").join("pg_control");
let controlfile = ControlFileData::decode(&fs::read(controlfile_path)?)?;
Ok(controlfile.system_identifier)
}
pub(crate) fn create_branch(
conf: &PageServerConf,
branchname: &str,
startpoint_str: &str,
tenantid: &ZTenantId,
) -> Result<BranchInfo> {
let repo = page_cache::get_repository();
let repo = page_cache::get_repository_for_tenant(tenantid)?;
if conf.branch_path(&branchname).exists() {
if conf.branch_path(branchname, tenantid).exists() {
anyhow::bail!("branch {} already exists", branchname);
}
let mut startpoint = parse_point_in_time(conf, startpoint_str)?;
let mut startpoint = parse_point_in_time(conf, startpoint_str, tenantid)?;
if startpoint.lsn == Lsn(0) {
// Find end of WAL on the old timeline
@@ -239,31 +288,16 @@ pub(crate) fn create_branch(
}
// create a new timeline directory for it
let newtli = create_timeline(conf, Some(startpoint))?;
let newtimelinedir = conf.timeline_path(newtli);
let newtli = create_timeline(conf, Some(startpoint), tenantid)?;
// Let the Repository backend do its initialization
repo.branch_timeline(startpoint.timelineid, newtli, startpoint.lsn)?;
// Copy the latest snapshot (TODO: before the startpoint) and all WAL
// TODO: be smarter and avoid the copying...
let (_maxsnapshot, oldsnapshotdir) = find_latest_snapshot(conf, startpoint.timelineid)?;
let copy_opts = fs_extra::dir::CopyOptions::new();
fs_extra::dir::copy(oldsnapshotdir, newtimelinedir.join("snapshots"), &copy_opts)?;
let oldtimelinedir = conf.timeline_path(startpoint.timelineid);
copy_wal(
&oldtimelinedir.join("wal"),
&newtimelinedir.join("wal"),
startpoint.lsn,
pg_constants::WAL_SEGMENT_SIZE,
)?;
// Remember the human-readable branch name for the new timeline.
// FIXME: there's a race condition, if you create a branch with the same
// name concurrently.
let data = newtli.to_string();
fs::write(conf.branch_path(&branchname), data)?;
fs::write(conf.branch_path(&branchname, tenantid), data)?;
Ok(BranchInfo {
name: branchname.to_string(),
@@ -293,7 +327,11 @@ pub(crate) fn create_branch(
// mytag
//
//
fn parse_point_in_time(conf: &PageServerConf, s: &str) -> Result<PointInTime> {
fn parse_point_in_time(
conf: &PageServerConf,
s: &str,
tenantid: &ZTenantId,
) -> Result<PointInTime> {
let mut strings = s.split('@');
let name = strings.next().unwrap();
@@ -308,21 +346,21 @@ fn parse_point_in_time(conf: &PageServerConf, s: &str) -> Result<PointInTime> {
// Check if it's a tag
if lsn.is_none() {
let tagpath = conf.tag_path(name);
let tagpath = conf.tag_path(name, &tenantid);
if tagpath.exists() {
let pointstr = fs::read_to_string(tagpath)?;
return parse_point_in_time(conf, &pointstr);
return parse_point_in_time(conf, &pointstr, &tenantid);
}
}
// Check if it's a branch
// Check if it's branch @ LSN
let branchpath = conf.branch_path(name);
let branchpath = conf.branch_path(name, &tenantid);
if branchpath.exists() {
let pointstr = fs::read_to_string(branchpath)?;
let mut result = parse_point_in_time(conf, &pointstr)?;
let mut result = parse_point_in_time(conf, &pointstr, &tenantid)?;
result.lsn = lsn.unwrap_or(Lsn(0));
return Ok(result);
@@ -331,7 +369,7 @@ fn parse_point_in_time(conf: &PageServerConf, s: &str) -> Result<PointInTime> {
// Check if it's a timelineid
// Check if it's timelineid @ LSN
if let Ok(timelineid) = ZTimelineId::from_str(name) {
let tlipath = conf.timeline_path(timelineid);
let tlipath = conf.timeline_path(&timelineid, &tenantid);
if tlipath.exists() {
return Ok(PointInTime {
timelineid,
@@ -343,16 +381,18 @@ fn parse_point_in_time(conf: &PageServerConf, s: &str) -> Result<PointInTime> {
bail!("could not parse point-in-time {}", s);
}
fn create_timeline(conf: &PageServerConf, ancestor: Option<PointInTime>) -> Result<ZTimelineId> {
fn create_timeline(
conf: &PageServerConf,
ancestor: Option<PointInTime>,
tenantid: &ZTenantId,
) -> Result<ZTimelineId> {
// Create initial timeline
let mut tli_buf = [0u8; 16];
rand::thread_rng().fill(&mut tli_buf);
let timelineid = ZTimelineId::from(tli_buf);
let timelinedir = conf.timeline_path(timelineid);
let timelineid = ZTimelineId::generate();
let timelinedir = conf.timeline_path(&timelineid, tenantid);
fs::create_dir(&timelinedir)?;
fs::create_dir(&timelinedir.join("snapshots"))?;
fs::create_dir(&timelinedir.join("wal"))?;
if let Some(ancestor) = ancestor {
@@ -362,72 +402,3 @@ fn create_timeline(conf: &PageServerConf, ancestor: Option<PointInTime>) -> Resu
Ok(timelineid)
}
///
/// Copy all WAL segments from one directory to another, up to given LSN.
///
/// If the given LSN is in the middle of a segment, the last segment containing it
/// is written out as .partial, and padded with zeros.
///
fn copy_wal(src_dir: &Path, dst_dir: &Path, upto: Lsn, wal_seg_size: usize) -> Result<()> {
let last_segno = upto.segment_number(wal_seg_size);
let last_segoff = upto.segment_offset(wal_seg_size);
for entry in fs::read_dir(src_dir).unwrap().flatten() {
let entry_name = entry.file_name();
let fname = entry_name.to_str().unwrap();
// Check if the filename looks like an xlog file, or a .partial file.
if !xlog_utils::IsXLogFileName(fname) && !xlog_utils::IsPartialXLogFileName(fname) {
continue;
}
let (segno, _tli) = xlog_utils::XLogFromFileName(fname, wal_seg_size as usize);
let copylen;
let mut dst_fname = PathBuf::from(fname);
if segno > last_segno {
// future segment, skip
continue;
} else if segno < last_segno {
copylen = wal_seg_size;
dst_fname.set_extension("");
} else {
copylen = last_segoff;
dst_fname.set_extension("partial");
}
let src_file = File::open(entry.path())?;
let mut dst_file = File::create(dst_dir.join(&dst_fname))?;
std::io::copy(&mut src_file.take(copylen as u64), &mut dst_file)?;
if copylen < wal_seg_size {
std::io::copy(
&mut std::io::repeat(0).take((wal_seg_size - copylen) as u64),
&mut dst_file,
)?;
}
}
Ok(())
}
// Find the latest snapshot for a timeline
fn find_latest_snapshot(conf: &PageServerConf, timeline: ZTimelineId) -> Result<(Lsn, PathBuf)> {
let snapshotsdir = conf.snapshots_path(timeline);
let paths = fs::read_dir(&snapshotsdir)?;
let mut maxsnapshot = Lsn(0);
let mut snapshotdir: Option<PathBuf> = None;
for path in paths {
let path = path?;
let filename = path.file_name().to_str().unwrap().to_owned();
if let Ok(lsn) = Lsn::from_hex(&filename) {
// Keep the directory that belongs to the highest LSN seen so far.
if lsn > maxsnapshot {
maxsnapshot = lsn;
snapshotdir = Some(path.path());
}
}
}
if maxsnapshot == Lsn(0) {
// TODO: check ancestor timeline
anyhow::bail!("no snapshot found in {}", snapshotsdir.display());
}
Ok((maxsnapshot, snapshotdir.unwrap()))
}


@@ -1,345 +0,0 @@
//!
//! An implementation of the ObjectStore interface, backed by BTreeMap
//!
use crate::object_key::*;
use crate::object_store::ObjectStore;
use crate::repository::RelTag;
use crate::PageServerConf;
use crate::ZTimelineId;
use anyhow::{bail, Result};
use serde::{Deserialize, Serialize};
use std::collections::{BTreeMap, HashSet};
use std::fs::File;
use std::io::prelude::*;
use std::ops::Bound::*;
use std::sync::RwLock;
use zenith_utils::bin_ser::BeSer;
use zenith_utils::lsn::Lsn;
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize)]
pub struct StorageKey {
obj_key: ObjectKey,
lsn: Lsn,
}
impl StorageKey {
/// The first key for a given timeline
fn timeline_start(timeline: ZTimelineId) -> Self {
Self {
obj_key: ObjectKey {
timeline,
tag: ObjectTag::FirstTag,
},
lsn: Lsn(0),
}
}
}
pub struct InmemObjectStore {
conf: &'static PageServerConf,
db: RwLock<BTreeMap<StorageKey, Vec<u8>>>,
}
impl ObjectStore for InmemObjectStore {
fn get(&self, key: &ObjectKey, lsn: Lsn) -> Result<Vec<u8>> {
let db = self.db.read().unwrap();
let val = db.get(&StorageKey {
obj_key: key.clone(),
lsn,
});
if let Some(val) = val {
Ok(val.clone())
} else {
bail!("could not find page {:?}", key);
}
}
fn get_next_key(&self, key: &ObjectKey) -> Result<Option<ObjectKey>> {
let search_key = StorageKey {
obj_key: key.clone(),
lsn: Lsn(0),
};
let db = self.db.read().unwrap();
for pair in db.range(&search_key..) {
let key = pair.0;
return Ok(Some(key.obj_key.clone()));
}
Ok(None)
}
fn put(&self, key: &ObjectKey, lsn: Lsn, value: &[u8]) -> Result<()> {
let mut db = self.db.write().unwrap();
db.insert(
StorageKey {
obj_key: key.clone(),
lsn,
},
value.to_vec(),
);
Ok(())
}
fn unlink(&self, key: &ObjectKey, lsn: Lsn) -> Result<()> {
let mut db = self.db.write().unwrap();
db.remove(&StorageKey {
obj_key: key.clone(),
lsn,
});
Ok(())
}
/// Iterate through page versions of given page, starting from the given LSN.
/// The versions are walked in descending LSN order.
fn object_versions<'a>(
&'a self,
key: &ObjectKey,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = (Lsn, Vec<u8>)> + 'a>> {
let from = StorageKey {
obj_key: key.clone(),
lsn: Lsn(0),
};
let till = StorageKey {
obj_key: key.clone(),
lsn,
};
let db = self.db.read().unwrap();
let versions: Vec<(Lsn, Vec<u8>)> = db
.range(from..=till)
.map(|pair| (pair.0.lsn, pair.1.clone()))
.collect();
Ok(Box::new(InmemObjectVersionIter::new(versions)))
}
/// Iterate through all timeline objects
fn list_objects<'a>(
&'a self,
timeline: ZTimelineId,
nonrel_only: bool,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = ObjectTag> + 'a>> {
let curr_key = StorageKey::timeline_start(timeline);
Ok(Box::new(InmemObjectIter {
store: &self,
curr_key,
timeline,
nonrel_only,
lsn,
}))
}
/// Get a list of all distinct relations in given tablespace and database.
///
/// TODO: This implementation is very inefficient, it scans
/// through all entries in the given database. In practice, this
/// is used for CREATE DATABASE, and usually the template database is small.
/// But if it's not, this will be slow.
fn list_rels(
&self,
timelineid: ZTimelineId,
spcnode: u32,
dbnode: u32,
lsn: Lsn,
) -> Result<HashSet<RelTag>> {
// FIXME: This scans everything. Very slow
let mut rels: HashSet<RelTag> = HashSet::new();
let mut search_rel_tag = RelTag {
spcnode,
dbnode,
relnode: 0,
forknum: 0u8,
};
let db = self.db.read().unwrap();
'outer: loop {
let search_key = StorageKey {
obj_key: ObjectKey {
timeline: timelineid,
tag: ObjectTag::RelationMetadata(search_rel_tag),
},
lsn: Lsn(0),
};
for pair in db.range(&search_key..) {
let key = pair.0;
if let ObjectTag::RelationMetadata(rel_tag) = key.obj_key.tag {
if spcnode != 0 && rel_tag.spcnode != spcnode
|| dbnode != 0 && rel_tag.dbnode != dbnode
{
break 'outer;
}
if key.lsn <= lsn {
// visible in this snapshot
rels.insert(rel_tag);
}
search_rel_tag = rel_tag;
// skip to next relation
// FIXME: What if relnode is u32::MAX ?
search_rel_tag.relnode += 1;
continue 'outer;
} else {
// no more relation metadata entries
break 'outer;
}
}
}
Ok(rels)
}
/// Iterate through versions of all objects in a timeline.
///
/// Returns objects in increasing key-version order.
/// Returns all versions up to and including the specified LSN.
fn objects<'a>(
&'a self,
timeline: ZTimelineId,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = Result<(ObjectTag, Lsn, Vec<u8>)>> + 'a>> {
let curr_key = StorageKey::timeline_start(timeline);
Ok(Box::new(InmemObjects {
store: &self,
curr_key,
timeline,
lsn,
}))
}
fn compact(&self) {}
}
impl Drop for InmemObjectStore {
fn drop(&mut self) {
let path = self.conf.workdir.join("objstore.dmp");
let mut f = File::create(path).unwrap();
f.write(&self.db.ser().unwrap()).unwrap();
}
}
impl InmemObjectStore {
pub fn open(conf: &'static PageServerConf) -> Result<InmemObjectStore> {
let path = conf.workdir.join("objstore.dmp");
let mut f = File::open(path)?;
let mut buffer = Vec::new();
// read the whole file
f.read_to_end(&mut buffer)?;
let db = RwLock::new(BTreeMap::des(&buffer)?);
Ok(InmemObjectStore { conf: conf, db })
}
pub fn create(conf: &'static PageServerConf) -> Result<InmemObjectStore> {
Ok(InmemObjectStore {
conf: conf,
db: RwLock::new(BTreeMap::new()),
})
}
}
///
/// Iterator for `object_versions`. Returns all page versions of a given block, in
/// reverse LSN order.
///
struct InmemObjectVersionIter {
versions: Vec<(Lsn, Vec<u8>)>,
curr: usize,
}
impl InmemObjectVersionIter {
fn new(versions: Vec<(Lsn, Vec<u8>)>) -> InmemObjectVersionIter {
let curr = versions.len();
InmemObjectVersionIter { versions, curr }
}
}
impl Iterator for InmemObjectVersionIter {
type Item = (Lsn, Vec<u8>);
fn next(&mut self) -> std::option::Option<Self::Item> {
if self.curr == 0 {
None
} else {
self.curr -= 1;
Some(self.versions[self.curr].clone())
}
}
}
struct InmemObjects<'r> {
store: &'r InmemObjectStore,
curr_key: StorageKey,
timeline: ZTimelineId,
lsn: Lsn,
}
impl<'r> Iterator for InmemObjects<'r> {
// TODO consider returning Box<[u8]>
type Item = Result<(ObjectTag, Lsn, Vec<u8>)>;
fn next(&mut self) -> Option<Self::Item> {
self.next_result().transpose()
}
}
impl<'r> InmemObjects<'r> {
fn next_result(&mut self) -> Result<Option<(ObjectTag, Lsn, Vec<u8>)>> {
let db = self.store.db.read().unwrap();
for pair in db.range((Excluded(&self.curr_key), Unbounded)) {
let key = pair.0;
if key.obj_key.timeline != self.timeline {
return Ok(None);
}
if key.lsn > self.lsn {
// TODO can speed up by seeking iterator
continue;
}
self.curr_key = key.clone();
let value = pair.1.clone();
return Ok(Some((key.obj_key.tag, key.lsn, value)));
}
Ok(None)
}
}
///
/// Iterator for `list_objects`. Returns all objects preceding the specified LSN
///
struct InmemObjectIter<'a> {
store: &'a InmemObjectStore,
curr_key: StorageKey,
timeline: ZTimelineId,
nonrel_only: bool,
lsn: Lsn,
}
impl<'a> Iterator for InmemObjectIter<'a> {
type Item = ObjectTag;
fn next(&mut self) -> std::option::Option<Self::Item> {
let db = self.store.db.read().unwrap();
'outer: loop {
for pair in db.range((Excluded(&self.curr_key), Unbounded)) {
let key = pair.0;
if key.obj_key.timeline != self.timeline {
return None;
}
self.curr_key = key.clone();
self.curr_key.lsn = Lsn(u64::MAX); // next seek should skip all versions
if key.lsn <= self.lsn {
// visible in this snapshot
if self.nonrel_only {
match key.obj_key.tag {
ObjectTag::RelationMetadata(_) => return None,
ObjectTag::RelationBuffer(_) => return None,
_ => return Some(key.obj_key.tag),
}
} else {
return Some(key.obj_key.tag);
}
}
continue 'outer;
}
return None;
}
}
}

File diff suppressed because it is too large


@@ -0,0 +1,298 @@
# Overview
The on-disk format is based on immutable files. The page server
receives a stream of incoming WAL, parses the WAL records to determine
which pages they apply to, and accumulates the incoming changes in
memory. Every now and then, the accumulated changes are written out to
new files.
The files are called "snapshot files". Each snapshot file corresponds
to one 10 MB slice of a PostgreSQL relation fork. The snapshot files
for each timeline are stored in the timeline's subdirectory under
.zenith/tenants/<tenantid>/timelines.
The files are named like this:
rel_<spcnode>_<dbnode>_<relnode>_<forknum>_<segno>_<start LSN>_<end LSN>
For example:
rel_1663_13990_2609_0_10_000000000169C348_0000000001702000
Some non-relation files are also stored in repository. For example,
a CLOG segment would be named like this:
pg_xact_0000_0_00000000198B06B0_00000000198C2550
There is no difference in how the relation and non-relation files are
managed, except that the first part of file names is different.
Internally, the relations and non-relation files that are managed in
the versioned store are together called "relishes".
Each snapshot file contains a full snapshot, that is, a full copy of all
pages in the relation as of the "start LSN". It also contains all WAL
records applicable to the relation between the start and end
LSNs. With this information, the page server can reconstruct any page
version of the relation in the LSN range.
If a file has been dropped, the last snapshot file for it is created
with the _DROPPED suffix, e.g.
rel_1663_13990_2609_0_10_000000000169C348_0000000001702000_DROPPED
In addition to the relations, which use the "rel_*" prefix, we use the
same format for storing various smaller files from the PostgreSQL data
directory. They use different prefixes, and the part of the name before
the LSN range varies. The Zenith source code uses the term "relish" to
mean "a relation, or other file that's treated like a relation in the
storage".
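To make the naming concrete, here is a minimal sketch (illustrative Rust only, not the page server's actual formatting code, which lives in the SnapshotFileName type) that assembles a relation snapshot file name from its parts; the LSNs are printed as 16-digit hex, reproducing the example above.

```rust
/// Illustrative sketch: format a relation snapshot file name following the
/// convention described above.
fn rel_snapshot_file_name(
    spcnode: u32,
    dbnode: u32,
    relnode: u32,
    forknum: u8,
    segno: u32,
    start_lsn: u64,
    end_lsn: u64,
    dropped: bool,
) -> String {
    format!(
        "rel_{}_{}_{}_{}_{}_{:016X}_{:016X}{}",
        spcnode, dbnode, relnode, forknum, segno, start_lsn, end_lsn,
        if dropped { "_DROPPED" } else { "" }
    )
}

fn main() {
    // Reproduces the example file name from the text.
    assert_eq!(
        rel_snapshot_file_name(1663, 13990, 2609, 0, 10, 0x169C348, 0x1702000, false),
        "rel_1663_13990_2609_0_10_000000000169C348_0000000001702000"
    );
}
```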
## Notation used in this document
The full path of a snapshot file looks like this:
.zenith/tenants/941ddc8604413b88b3d208bddf90396c/timelines/4af489b06af8eed9e27a841775616962/rel_1663_13990_2609_0_10_000000000169C348_0000000001702000
For simplicity, the examples below use a simplified notation for the
paths. The tenant ID is left out, the timeline ID is replaced with
the human-readable branch name, and spcnode+dbnode+relnode+forknum+segno
with a human-readable table name. The LSNs are also shorter. For
example, a snapshot file for 'orders' table on 'main' branch, with LSN
range 100-200 would be:
main/orders_100_200
# Creating snapshot files
Let's start with a simple example with a system that contains one
branch called 'main' and two tables, 'orders' and 'customers'. The end
of WAL is currently at LSN 250. In this starting situation, you would
have two files on disk:
main/orders_100_200
main/customers_100_200
In addition to those files, the recent changes between LSN 200 and the
end of WAL at 250 are kept in memory. If the page server crashes, the
latest records between 200-250 need to be re-read from the WAL.
Whenever enough WAL has been accumulated in memory, the page server
writes out the changes in memory into new snapshot files. This process
is called "checkpointing" (not to be confused with the PostgreSQL
checkpoints, that's a different thing). The page server only creates
snapshot files for relations that have been modified since the last
checkpoint. For example, if the current end of WAL is at LSN 450, and
the last checkpoint happened at LSN 400 but there hasn't been any
recent changes to 'customers' table, you would have these files on
disk:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_100_200
If the customers table is modified later, a new file is created for it
at the next checkpoint. The new file will cover the "gap" from the
last snapshot file, so the LSN ranges are always contiguous:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_100_200
main/customers_200_500
## Reading page versions
Whenever a GetPage@LSN request comes in from the compute node, the
page server needs to reconstruct the requested page, as it was at the
requested LSN. To do that, the page server first checks the recent
in-memory layer; if the requested page version is found there, it can
be returned immediately without looking at the files on
disk. Otherwise the page server needs to locate the snapshot file that
contains the requested page version.
For example, if a request comes in for table 'orders' at LSN 250, the
page server would load the 'main/orders_200_300' file into memory, and
reconstruct and return the requested page from it, as it was at
LSN 250. Because the snapshot file consists of a full image of the
relation at the start LSN and the WAL, reconstructing the page
involves replaying any WAL records applicable to the page between LSNs
200-250, starting from the base image at LSN 200.
A request at a file boundary can be satisfied using either file. For
example, if there are two files on disk:
main/orders_100_200
main/orders_200_300
If a request then comes in at LSN 200, either file can serve it. It is
better to use the later file, however, because it contains an already
materialized image of all the pages at LSN 200. With the first file,
you would need to apply the WAL records between 100 and 200 to
reconstruct the requested page.
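As a rough sketch of this lookup rule (a standalone toy model with plain (start, end) LSN pairs instead of the real layer map), the file with the highest start LSN that still covers the request is chosen:

```rust
/// Toy model of the lookup rule described above. Each file is represented by
/// its (start_lsn, end_lsn) range; the real code works on a layer map keyed
/// by segment and LSN.
fn pick_file(files: &[(u64, u64)], request_lsn: u64) -> Option<usize> {
    let mut best: Option<usize> = None;
    for (idx, (start, end)) in files.iter().copied().enumerate() {
        // A file can serve the request if its LSN range covers the request LSN.
        if start <= request_lsn && request_lsn <= end {
            // Prefer the file with the highest start LSN: at a boundary LSN
            // this picks the later file, whose base image needs no WAL replay.
            if best.map_or(true, |b| files[b].0 < start) {
                best = Some(idx);
            }
        }
    }
    best
}

fn main() {
    // main/orders_100_200 and main/orders_200_300 from the example.
    let files = [(100, 200), (200, 300)];
    // LSN 250 is only covered by the second file.
    assert_eq!(pick_file(&files, 250), Some(1));
    // LSN 200 sits on the boundary; the later file is preferred.
    assert_eq!(pick_file(&files, 200), Some(1));
}
```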
# Multiple branches
Imagine that a child branch is created at LSN 250:
@250
----main--+-------------------------->
\
+---child-------------->
Then, the 'orders' table is updated differently on the 'main' and
'child' branches. You now have this situation on disk:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_100_200
child/orders_250_300
child/orders_300_400
Because the 'customers' table hasn't been modified on the child
branch, there is no file for it there. If you request a page for it on
the 'child' branch, the page server will not find any snapshot file
for it in the 'child' directory, so it will recurse to look into the
parent 'main' branch instead.
From the 'child' branch's point of view, the history for each relation
is linear, and the request's LSN identifies unambiguously which file
you need to look at. For example, the history for the 'orders' table
on the 'main' branch consists of these files:
main/orders_100_200
main/orders_200_300
main/orders_300_400
And from the 'child' branch's point of view, it consists of these
files:
main/orders_100_200
main/orders_200_300
child/orders_250_300
child/orders_300_400
The branch metadata includes the point where the child branch was
created, LSN 250. If a page request comes with LSN 275, we read the
page version from the 'child/orders_250_300' file. If the request LSN
is 225, we read it from the 'main/orders_200_300' file instead. The
page versions between 250-300 in the 'main/orders_200_300' file are
ignored when operating on the child branch.
Note: It doesn't make any difference if the child branch is created
when the end of the main branch was at LSN 250, or later when the tip of
the main branch had already moved on. The latter case, creating a
branch at a historic LSN, is how we support PITR in Zenith.
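The branch metadata lookup can be sketched as follows. This is a simplified, hypothetical model in which each timeline records its ancestor and the branch-point LSN; it only models the LSN comparison described above, not the missing-file fallback.

```rust
/// Hypothetical, simplified model: each timeline records its parent timeline
/// and the LSN at which it branched off.
struct TimelineInfo {
    name: &'static str,
    ancestor: Option<(usize, u64)>, // (index of parent timeline, branch LSN)
}

/// Walk up the ancestry until the requested LSN falls after the branch point,
/// i.e. into the history that the current timeline owns itself.
fn resolve_timeline(timelines: &[TimelineInfo], mut current: usize, request_lsn: u64) -> usize {
    while let Some((parent, branch_lsn)) = timelines[current].ancestor {
        if request_lsn <= branch_lsn {
            current = parent;
        } else {
            break;
        }
    }
    current
}

fn main() {
    let timelines = [
        TimelineInfo { name: "main", ancestor: None },
        TimelineInfo { name: "child", ancestor: Some((0, 250)) },
    ];
    // LSN 275 is newer than the branch point: read from the 'child' files.
    assert_eq!(timelines[resolve_timeline(&timelines, 1, 275)].name, "child");
    // LSN 225 is older than the branch point: read from the 'main' files.
    assert_eq!(timelines[resolve_timeline(&timelines, 1, 225)].name, "main");
}
```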
# Garbage collection
In this scheme, we keep creating new snapshot files over time. We also
need a mechanism to remove old files that are no longer needed,
because disk space isn't infinite.
What files are still needed? Currently, the page server supports PITR
and branching from any branch at any LSN that is "recent enough" from
the tip of the branch. "Recent enough" is defined as an LSN horizon,
which by default is 64 MB. (See DEFAULT_GC_HORIZON). For this
example, let's assume that the LSN horizon is 150 units.
Let's look at the single branch scenario again. Imagine that the end
of the branch is LSN 525, so that the GC horizon is currently at
525-150 = 375
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/orders_400_500
main/customers_100_200
We can remove files 'main/orders_100_200' and 'main/orders_200_300',
because the end LSNs of those files are older than GC horizon 375, and
there are more recent snapshot files for the table. 'main/orders_300_400'
and 'main/orders_400_500' are still within the horizon, so they must be
retained. 'main/customers_100_200' is old enough, but it cannot be
removed because there is no newer snapshot file for the table.
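This single-branch rule can be summarized in a small standalone sketch (a toy model; the real GC walks the layer map, and it also has to consider child branches, as discussed next):

```rust
use std::collections::HashMap;

/// Toy model of the single-branch GC rule: a file is removable if its end LSN
/// is older than the horizon *and* a newer snapshot file exists for the same
/// relation. Returns the indexes of removable files.
fn removable(files: &[(&str, u64, u64)], end_of_branch: u64, gc_horizon: u64) -> Vec<usize> {
    let horizon = end_of_branch.saturating_sub(gc_horizon);
    // Newest end LSN per relation, so the last remaining file is never dropped.
    let mut newest: HashMap<&str, u64> = HashMap::new();
    for &(rel, _, end) in files {
        let e = newest.entry(rel).or_insert(0);
        *e = (*e).max(end);
    }
    files
        .iter()
        .enumerate()
        .filter(|&(_, &(rel, _, end))| end < horizon && newest[rel] > end)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    // The example above: end of branch at 525, horizon 150, cutoff 375.
    let files = [
        ("orders", 100, 200),
        ("orders", 200, 300),
        ("orders", 300, 400),
        ("orders", 400, 500),
        ("customers", 100, 200),
    ];
    // Only the first two 'orders' files are removable; 'customers' has no
    // newer file, so it stays even though it is older than the horizon.
    assert_eq!(removable(&files, 525, 150), vec![0, 1]);
}
```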
Things get slightly more complicated with multiple branches. All of
the above still holds, but in addition to recent files we must also
retain older snapshot files that are still needed by child branches.
For example, if a child branch is created at LSN 150, and the 'customers'
table is updated on the branch, you would have these files:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/orders_400_500
main/customers_100_200
child/customers_150_300
In this situation, the 'main/orders_100_200' file cannot be removed,
even though it is older than the GC horizon, because it is still
needed by the child branch. 'main/orders_200_300' can still be
removed. So after garbage collection, these files would remain:
main/orders_100_200
main/orders_300_400
main/orders_400_500
main/customers_100_200
child/customers_150_300
If 'orders' is modified later on the 'child' branch, we will create a
snapshot file for it on the child:
main/orders_100_200
main/orders_300_400
main/orders_400_500
main/customers_100_200
child/customers_150_300
child/orders_150_400
After this, the 'main/orders_100_200' file can be removed. It is no
longer needed by the child branch, because there is a newer snapshot
file there. TODO: This optimization hasn't been implemented! The GC
algorithm will currently keep the file on the 'main' branch anyway, for
as long as the child branch exists.
# TODO: On LSN ranges
In principle, each relation can be checkpointed separately, i.e. the
LSN ranges of the files don't need to line up. So this would be legal:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_150_250
main/customers_250_500
However, the code currently always checkpoints all relations together.
So that situation doesn't arise in practice.
It would also be OK to have overlapping LSN ranges for the same relation:
main/orders_100_200
main/orders_200_300
main/orders_250_350
main/orders_300_400
The code that reads the snapshot files should cope with this, but this
situation doesn't arise either, because the checkpointing code never
does that. It could be useful, however, as a transient state when
garbage collecting around branch points, or explicit recovery
points. For example, if we start with this:
main/orders_100_200
main/orders_200_300
main/orders_300_400
And there is a branch or explicit recovery point at LSN 150, we could
replace 'main/orders_100_200' with 'main/orders_150_150' to keep a
snapshot only at that exact point that's still needed, removing the
other page versions around it. But such compaction has not been
implemented yet.


@@ -0,0 +1,490 @@
//!
//! An in-memory layer stores recently received page versions in memory. The page versions
//! are held in a BTreeMap, and there's another BTreeMap to track the size of the relation.
//!
use crate::layered_repository::page_history::PageHistory;
use crate::layered_repository::storage_layer::{
Layer, PageReconstructData, PageVersion, SegmentTag, RELISH_SEG_SIZE,
};
use crate::layered_repository::{LayeredTimeline, SnapshotLayer};
use crate::repository::WALRecord;
use crate::PageServerConf;
use crate::{ZTenantId, ZTimelineId};
use anyhow::{anyhow, bail, Result};
use bytes::Bytes;
use log::*;
use std::collections::BTreeMap;
use std::ops::Bound::Included;
use std::sync::{Arc, Mutex};
use zenith_utils::lsn::Lsn;
pub struct InMemoryLayer {
conf: &'static PageServerConf,
tenantid: ZTenantId,
timelineid: ZTimelineId,
seg: SegmentTag,
///
/// This layer contains all the changes from 'start_lsn'. The
/// start is inclusive. There is no end LSN; an in-memory layer is only
/// used at the end of a timeline.
///
start_lsn: Lsn,
/// The above fields never change. The parts that do change are in 'inner',
/// and protected by mutex.
inner: Mutex<InMemoryLayerInner>,
}
pub struct InMemoryLayerInner {
/// If this relation was dropped, remember when that happened.
drop_lsn: Option<Lsn>,
///
/// All versions of all pages in the layer are kept here.
/// Indexed by block number.
///
pages: BTreeMap<u32, PageHistory>,
///
/// `segsizes` tracks the size of the segment at different points in time.
///
segsizes: BTreeMap<Lsn, u32>,
}
impl Layer for InMemoryLayer {
fn get_timeline_id(&self) -> ZTimelineId {
return self.timelineid;
}
fn get_seg_tag(&self) -> SegmentTag {
return self.seg;
}
fn get_start_lsn(&self) -> Lsn {
return self.start_lsn;
}
fn get_end_lsn(&self) -> Lsn {
let inner = self.inner.lock().unwrap();
if let Some(drop_lsn) = inner.drop_lsn {
drop_lsn
} else {
Lsn(u64::MAX)
}
}
fn is_dropped(&self) -> bool {
let inner = self.inner.lock().unwrap();
inner.drop_lsn.is_some()
}
/// Look up given page in the cache.
fn get_page_reconstruct_data(
&self,
blknum: u32,
lsn: Lsn,
reconstruct_data: &mut PageReconstructData,
) -> Result<Option<Lsn>> {
// Find the data needed to reconstruct the page, starting from the latest version in this layer.
let mut need_base_image_lsn: Option<Lsn> = Some(lsn);
assert!(self.seg.blknum_in_seg(blknum));
{
let inner = self.inner.lock().unwrap();
let pages = &inner.pages;
// FIXME: this assumes the latest page version is always the right answer.
// How should this work if the requested lsn is in the past? in the future?
let latest_version = pages
.get(&blknum)
.and_then(PageHistory::latest)
.ok_or_else(|| anyhow!("page not found"))?;
let (entry_lsn, entry) = latest_version;
if let Some(img) = &entry.page_image {
reconstruct_data.page_img = Some(img.clone());
need_base_image_lsn = None;
} else if let Some(rec) = &entry.record {
reconstruct_data.records.push(rec.clone());
if rec.will_init {
// This WAL record initializes the page, so no need to go further back
need_base_image_lsn = None;
} else {
need_base_image_lsn = Some(entry_lsn);
}
} else {
// No base image, and no WAL record. Huh?
bail!("no page image or WAL record for requested page");
}
// release lock on self.pages
}
Ok(need_base_image_lsn)
}
/// Get size of the relation at given LSN
fn get_seg_size(&self, lsn: Lsn) -> Result<u32> {
// Scan the BTreeMap backwards, starting from the given entry.
let inner = self.inner.lock().unwrap();
let mut iter = inner.segsizes.range((Included(&Lsn(0)), Included(&lsn)));
if let Some((_entry_lsn, entry)) = iter.next_back() {
let result = *entry;
drop(inner);
trace!("get_seg_size: {} at {} -> {}", self.seg, lsn, result);
Ok(result)
} else {
bail!("No size found for {} at {} in memory", self.seg, lsn);
}
}
/// Does this segment exist at given LSN?
fn get_seg_exists(&self, lsn: Lsn) -> Result<bool> {
let inner = self.inner.lock().unwrap();
// Is the requested LSN after the segment was dropped?
if let Some(drop_lsn) = inner.drop_lsn {
if lsn >= drop_lsn {
return Ok(false);
}
}
// Otherwise, it exists
Ok(true)
}
}
impl InMemoryLayer {
///
/// Create a new, empty, in-memory layer
///
pub fn create(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
seg: SegmentTag,
start_lsn: Lsn,
) -> Result<InMemoryLayer> {
trace!(
"initializing new empty InMemoryLayer for writing {} on timeline {} at {}",
seg,
timelineid,
start_lsn
);
Ok(InMemoryLayer {
conf,
timelineid,
tenantid,
seg,
start_lsn,
inner: Mutex::new(InMemoryLayerInner {
drop_lsn: None,
pages: BTreeMap::new(),
segsizes: BTreeMap::new(),
}),
})
}
// Write operations
/// Remember new page version, as a WAL record over previous version
pub fn put_wal_record(&self, blknum: u32, rec: WALRecord) -> Result<()> {
self.put_page_version(
blknum,
rec.lsn,
PageVersion {
page_image: None,
record: Some(rec),
},
)
}
/// Remember new page version, as a full page image
pub fn put_page_image(&self, blknum: u32, lsn: Lsn, img: Bytes) -> Result<()> {
self.put_page_version(
blknum,
lsn,
PageVersion {
page_image: Some(img),
record: None,
},
)
}
/// Common subroutine of the public put_wal_record() and put_page_image() functions.
/// Adds the page version to the in-memory tree
pub fn put_page_version(&self, blknum: u32, lsn: Lsn, pv: PageVersion) -> Result<()> {
assert!(self.seg.blknum_in_seg(blknum));
trace!(
"put_page_version blk {} of {} at {}/{}",
blknum,
self.seg.rel,
self.timelineid,
lsn
);
let mut inner = self.inner.lock().unwrap();
let page_history = inner
.pages
.entry(blknum)
.or_insert_with(PageHistory::default);
page_history.push(lsn, pv);
// Also update the relation size, if this extended the relation.
if self.seg.rel.is_blocky() {
let newsize = blknum - self.seg.segno * RELISH_SEG_SIZE + 1;
let mut iter = inner.segsizes.range((Included(&Lsn(0)), Included(&lsn)));
let oldsize;
if let Some((_entry_lsn, entry)) = iter.next_back() {
oldsize = *entry;
} else {
oldsize = 0;
//bail!("No old size found for {} at {}", self.tag, lsn);
}
if newsize > oldsize {
trace!(
"enlarging segment {} from {} to {} blocks at {}",
self.seg,
oldsize,
newsize,
lsn
);
inner.segsizes.insert(lsn, newsize);
}
}
Ok(())
}
/// Remember that the relation was truncated at given LSN
pub fn put_truncation(&self, lsn: Lsn, segsize: u32) -> anyhow::Result<()> {
let mut inner = self.inner.lock().unwrap();
let old = inner.segsizes.insert(lsn, segsize);
if old.is_some() {
// We already had an entry for this LSN. That's odd..
warn!("Inserting truncation, but had an entry for the LSN already");
}
Ok(())
}
/// Remember that the segment was dropped at given LSN
pub fn put_unlink(&self, lsn: Lsn) -> anyhow::Result<()> {
let mut inner = self.inner.lock().unwrap();
assert!(inner.drop_lsn.is_none());
inner.drop_lsn = Some(lsn);
info!("dropped segment {} at {}", self.seg, lsn);
Ok(())
}
///
/// Initialize a new InMemoryLayer for writing, by copying the state at the
/// given point in time from the given existing layer.
///
pub fn copy_snapshot(
conf: &'static PageServerConf,
timeline: &LayeredTimeline,
src: &dyn Layer,
timelineid: ZTimelineId,
tenantid: ZTenantId,
lsn: Lsn,
) -> Result<InMemoryLayer> {
trace!(
"initializing new InMemoryLayer for writing {} on timeline {} at {}",
src.get_seg_tag(),
timelineid,
lsn
);
let mut pages = BTreeMap::new();
let mut segsizes = BTreeMap::new();
let seg = src.get_seg_tag();
let startblk;
let size;
if seg.rel.is_blocky() {
size = src.get_seg_size(lsn)?;
segsizes.insert(lsn, size);
startblk = seg.segno * RELISH_SEG_SIZE;
} else {
size = 1;
startblk = 0;
}
for blknum in startblk..(startblk + size) {
let img = timeline.materialize_page(seg, blknum, lsn, src)?;
let pv = PageVersion {
page_image: Some(img),
record: None,
};
let page_history = PageHistory::from_image(lsn, pv);
pages.insert(blknum, page_history);
}
Ok(InMemoryLayer {
conf,
timelineid,
tenantid,
seg: src.get_seg_tag(),
start_lsn: lsn,
inner: Mutex::new(InMemoryLayerInner {
drop_lsn: None,
pages,
segsizes,
}),
})
}
///
/// Write this in-memory layer to disk, as a snapshot layer.
///
/// The cutoff point for the layer that's written to disk is 'end_lsn'.
///
/// Returns new layers that replace this one. Always returns a
/// SnapshotLayer containing the page versions that were written to disk,
/// but if there were page versions newer than 'end_lsn', also return a new
/// in-memory layer containing those page versions. The caller replaces
/// this layer with the returned layers in the layer map.
///
pub fn freeze(
&self,
cutoff_lsn: Lsn,
// This is needed just to call materialize_page()
timeline: &LayeredTimeline,
) -> Result<(Option<Arc<SnapshotLayer>>, Option<Arc<InMemoryLayer>>)> {
info!(
"freezing in memory layer for {} on timeline {} at {}",
self.seg, self.timelineid, cutoff_lsn
);
let inner = self.inner.lock().unwrap();
// Normally, use the cutoff LSN as the end of the frozen layer.
// But if the relation was dropped, we know that there are no
// more changes coming in for it, and in particular we know that
// there are no changes "in flight" for the LSN anymore, so we use
// the drop LSN instead. The drop-LSN could be ahead of the
// caller-specified LSN!
let dropped = inner.drop_lsn.is_some();
let end_lsn = if dropped {
inner.drop_lsn.unwrap()
} else {
cutoff_lsn
};
// Divide all the page versions into old and new at the 'end_lsn' cutoff point.
let mut before_pages = BTreeMap::new();
let mut before_segsizes;
let mut after_segsizes;
let mut after_pages = BTreeMap::new();
if !dropped {
before_segsizes = BTreeMap::new();
after_segsizes = BTreeMap::new();
for (lsn, size) in inner.segsizes.iter() {
if *lsn > end_lsn {
after_segsizes.insert(*lsn, *size);
} else {
before_segsizes.insert(*lsn, *size);
}
}
for (blknum, page_history) in inner.pages.iter() {
let (old, new) = page_history.clone().split_at(end_lsn);
before_pages.insert(*blknum, old);
after_pages.insert(*blknum, new);
}
} else {
before_pages = inner.pages.clone();
before_segsizes = inner.segsizes.clone();
after_segsizes = BTreeMap::new();
after_pages = BTreeMap::new();
}
// we can release the lock now.
drop(inner);
// Write the page versions before the cutoff to disk.
let snapfile = SnapshotLayer::create(
self.conf,
self.timelineid,
self.tenantid,
self.seg,
self.start_lsn,
end_lsn,
dropped,
before_pages,
before_segsizes,
)?;
// If there were any "new" page versions, initialize a new in-memory layer to hold
// them
let new_open = if !after_segsizes.is_empty() || !after_pages.is_empty() {
info!("created new in-mem layer for {} {}-", self.seg, end_lsn);
let new_open = Self::copy_snapshot(
self.conf,
timeline,
&snapfile,
self.timelineid,
self.tenantid,
end_lsn,
)?;
let mut new_inner = new_open.inner.lock().unwrap();
new_inner.pages.append(&mut after_pages);
new_inner.segsizes.append(&mut after_segsizes);
drop(new_inner);
Some(Arc::new(new_open))
} else {
None
};
let new_historic = Some(Arc::new(snapfile));
Ok((new_historic, new_open))
}
/// debugging function to print out the contents of the layer
#[allow(unused)]
pub fn dump(&self) -> String {
let mut result = format!(
"----- inmemory layer for {} {}-> ----\n",
self.seg, self.start_lsn
);
let inner = self.inner.lock().unwrap();
for (k, v) in inner.segsizes.iter() {
result += &format!("{}: {}\n", k, v);
}
for (page_num, page_history) in inner.pages.iter() {
for (lsn, image) in page_history.iter() {
result += &format!(
"blk {} at {}: {}/{}\n",
page_num,
lsn,
image.page_image.is_some(),
image.record.is_some()
);
}
}
result
}
}


@@ -0,0 +1,281 @@
//!
//! The layer map tracks what layers exist for all the relations in a timeline.
//!
//! When the timeline is first accessed, the server lists all snapshot files
//! in the timelines/<timelineid> directory, and populates this map with
//! SnapshotLayers corresponding to each file. When new WAL is received,
//! we create InMemoryLayers to hold the incoming records. Now and then,
//! in the checkpoint() function, the in-memory layers are frozen, forming
//! new snapshot layers and corresponding files are written to disk.
//!
use crate::layered_repository::storage_layer::{Layer, SegmentTag};
use crate::layered_repository::{InMemoryLayer, SnapshotLayer};
use crate::relish::*;
use anyhow::Result;
use log::*;
use std::collections::HashSet;
use std::collections::{BTreeMap, HashMap};
use std::ops::Bound::Included;
use std::sync::Arc;
use zenith_utils::lsn::Lsn;
///
/// LayerMap tracks what layers exist for a timeline. The last layer that is
/// open for writes is always an InMemoryLayer, and is tracked separately
/// because there can be only one for each segment. The older layers,
/// stored on disk, are kept in a BTreeMap keyed by the layer's start LSN.
///
pub struct LayerMap {
segs: HashMap<SegmentTag, SegEntry>,
}
struct SegEntry {
pub open: Option<Arc<InMemoryLayer>>,
pub historic: BTreeMap<Lsn, Arc<SnapshotLayer>>,
}
impl LayerMap {
///
/// Look up using the given segment tag and LSN. This differs from a plain
/// key-value lookup in that if there is any layer that covers the
/// given LSN, or precedes the given LSN, it is returned. In other words,
/// you don't need to know the exact start LSN of the layer.
///
pub fn get(&self, tag: &SegmentTag, lsn: Lsn) -> Option<Arc<dyn Layer>> {
let segentry = self.segs.get(tag)?;
if let Some(open) = &segentry.open {
if open.get_start_lsn() <= lsn {
let x: Arc<dyn Layer> = Arc::clone(&open) as _;
return Some(x);
}
}
if let Some((_k, v)) = segentry
.historic
.range((Included(Lsn(0)), Included(lsn)))
.next_back()
{
let x: Arc<dyn Layer> = Arc::clone(&v) as _;
Some(x)
} else {
None
}
}
///
/// Get the open layer for given segment for writing. Or None if no open
/// layer exists.
///
pub fn get_open(&self, tag: &SegmentTag) -> Option<Arc<InMemoryLayer>> {
let segentry = self.segs.get(tag)?;
if let Some(open) = &segentry.open {
Some(Arc::clone(open))
} else {
None
}
}
///
/// Insert an open in-memory layer
///
pub fn insert_open(&mut self, layer: Arc<InMemoryLayer>) {
let tag = layer.get_seg_tag();
if let Some(segentry) = self.segs.get_mut(&tag) {
if let Some(_old) = &segentry.open {
// FIXME: shouldn't exist, but check
}
segentry.open = Some(layer);
} else {
let segentry = SegEntry {
open: Some(layer),
historic: BTreeMap::new(),
};
self.segs.insert(tag, segentry);
}
}
///
/// Insert an on-disk layer
///
pub fn insert_historic(&mut self, layer: Arc<SnapshotLayer>) {
let tag = layer.get_seg_tag();
let start_lsn = layer.get_start_lsn();
if let Some(segentry) = self.segs.get_mut(&tag) {
segentry.historic.insert(start_lsn, layer);
} else {
let mut historic = BTreeMap::new();
historic.insert(start_lsn, layer);
let segentry = SegEntry {
open: None,
historic,
};
self.segs.insert(tag, segentry);
}
}
///
/// Remove an on-disk layer from the map.
///
/// This should be called when the corresponding file on disk has been deleted.
///
pub fn remove_historic(&mut self, layer: &SnapshotLayer) {
let tag = layer.get_seg_tag();
let start_lsn = layer.get_start_lsn();
if let Some(segentry) = self.segs.get_mut(&tag) {
segentry.historic.remove(&start_lsn);
}
}
pub fn list_rels(&self, spcnode: u32, dbnode: u32) -> Result<HashSet<RelTag>> {
let mut rels: HashSet<RelTag> = HashSet::new();
for (seg, _entry) in self.segs.iter() {
if let RelishTag::Relation(reltag) = seg.rel {
// FIXME: skip if it was dropped before the requested LSN. But there is no
// LSN argument
if (spcnode == 0 || reltag.spcnode == spcnode)
&& (dbnode == 0 || reltag.dbnode == dbnode)
{
rels.insert(reltag);
}
}
}
Ok(rels)
}
pub fn list_nonrels(&self, _lsn: Lsn) -> Result<HashSet<RelishTag>> {
let mut rels: HashSet<RelishTag> = HashSet::new();
// Scan the layer map to find all non-relation relishes in this timeline.
for (seg, _entry) in self.segs.iter() {
// FIXME: skip if it was dropped before the requested LSN.
if let RelishTag::Relation(_) = seg.rel {
} else {
rels.insert(seg.rel);
}
}
Ok(rels)
}
/// Is there a newer layer for given segment?
pub fn newer_layer_exists(&self, seg: SegmentTag, lsn: Lsn) -> bool {
if let Some(segentry) = self.segs.get(&seg) {
if let Some(_open) = &segentry.open {
return true;
}
for (newer_lsn, layer) in segentry
.historic
.range((Included(lsn), Included(Lsn(u64::MAX))))
{
if layer.get_end_lsn() > lsn {
trace!(
"found later layer for {}, {} {}-{}",
seg,
lsn,
newer_lsn,
layer.get_end_lsn()
);
return true;
} else {
trace!("found singleton layer for {}, {} {}", seg, lsn, newer_lsn);
continue;
}
}
}
trace!("no later layer found for {}, {}", seg, lsn);
false
}
pub fn iter_open_layers(&mut self) -> OpenLayerIter {
OpenLayerIter {
last: None,
segiter: self.segs.iter_mut(),
}
}
pub fn iter_historic_layers(&self) -> HistoricLayerIter {
HistoricLayerIter {
segiter: self.segs.iter(),
iter: None,
}
}
}
impl Default for LayerMap {
fn default() -> Self {
LayerMap {
segs: HashMap::new(),
}
}
}
pub struct OpenLayerIter<'a> {
last: Option<&'a mut SegEntry>,
segiter: std::collections::hash_map::IterMut<'a, SegmentTag, SegEntry>,
}
impl<'a> OpenLayerIter<'a> {
pub fn replace(&mut self, replacement: Option<Arc<InMemoryLayer>>) {
let segentry = self.last.as_mut().unwrap();
segentry.open = replacement;
}
pub fn insert_historic(&mut self, new_layer: Arc<SnapshotLayer>) {
let start_lsn = new_layer.get_start_lsn();
let segentry = self.last.as_mut().unwrap();
segentry.historic.insert(start_lsn, new_layer);
}
}
impl<'a> Iterator for OpenLayerIter<'a> {
type Item = Arc<InMemoryLayer>;
fn next(&mut self) -> std::option::Option<<Self as std::iter::Iterator>::Item> {
while let Some((_seg, entry)) = self.segiter.next() {
if let Some(open) = &entry.open {
let op = Arc::clone(&open);
self.last = Some(entry);
return Some(op);
}
}
self.last = None;
None
}
}
pub struct HistoricLayerIter<'a> {
segiter: std::collections::hash_map::Iter<'a, SegmentTag, SegEntry>,
iter: Option<std::collections::btree_map::Iter<'a, Lsn, Arc<SnapshotLayer>>>,
}
impl<'a> Iterator for HistoricLayerIter<'a> {
type Item = Arc<SnapshotLayer>;
fn next(&mut self) -> std::option::Option<<Self as std::iter::Iterator>::Item> {
loop {
if let Some(x) = &mut self.iter {
if let Some(x) = x.next() {
return Some(Arc::clone(&*x.1));
}
}
if let Some(seg) = self.segiter.next() {
self.iter = Some(seg.1.historic.iter());
continue;
} else {
return None;
}
}
}
}


@@ -0,0 +1,94 @@
use super::storage_layer::PageVersion;
use std::collections::VecDeque;
use zenith_utils::lsn::Lsn;
/// A data structure that holds one or more versions of a particular page number.
//
#[derive(Default, Clone)]
pub struct PageHistory {
/// Pages stored in order, from oldest to newest.
pages: VecDeque<(Lsn, PageVersion)>,
}
impl PageHistory {
/// Create a new PageHistory containing a single image.
pub fn from_image(lsn: Lsn, image: PageVersion) -> Self {
let mut pages = VecDeque::new();
pages.push_back((lsn, image));
PageHistory { pages }
}
/// Push a newer page version.
pub fn push(&mut self, lsn: Lsn, page: PageVersion) {
if let Some((back_lsn, _)) = self.pages.back() {
debug_assert_ne!(
back_lsn, &lsn,
"push page at lsn {:?} but one already exists",
lsn
);
debug_assert!(back_lsn < &lsn, "pushed page is older than latest lsn");
}
self.pages.push_back((lsn, page));
}
pub fn latest(&self) -> Option<(Lsn, &PageVersion)> {
self.pages.back().map(|(lsn, page)| (*lsn, page))
}
/// Split a page history at a particular LSN.
///
/// This consumes this PageHistory and returns two new ones.
/// Any changes exactly matching the split LSN will be in the
/// "old" history.
//
// FIXME: Is this necessary? There is some debate whether "splitting"
// layers is the best design.
//
pub fn split_at(self, split_lsn: Lsn) -> (PageHistory, PageHistory) {
let mut old = PageHistory::default();
let mut new = PageHistory::default();
for (lsn, page) in self.pages {
if lsn > split_lsn {
new.push(lsn, page)
} else {
old.push(lsn, page);
}
}
(old, new)
}
pub fn iter(&self) -> impl Iterator<Item = &(Lsn, PageVersion)> {
self.pages.iter()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn page_history() {
fn make_page(b: u8) -> PageVersion {
let image = vec![b; 8192].into();
PageVersion {
page_image: Some(image),
record: None,
}
}
let mut ph = PageHistory::default();
ph.push(10.into(), make_page(1));
ph.push(20.into(), make_page(2));
ph.push(30.into(), make_page(3));
let (latest_lsn, latest_image) = ph.latest().unwrap();
assert_eq!(latest_lsn, 30.into());
assert!(matches!(latest_image, PageVersion { page_image: Some(im), .. } if im[0] == 3));
let mut it = ph.iter();
assert_eq!(it.next().unwrap().0, 10.into());
assert_eq!(it.next().unwrap().0, 20.into());
assert_eq!(it.next().unwrap().0, 30.into());
assert!(it.next().is_none());
}
}


@@ -0,0 +1,567 @@
//!
//! A SnapshotLayer represents one snapshot file on disk. One file holds all page
//! version and size information of one relation, in a range of LSN.
//! The name "snapshot file" is a bit of a misnomer because a snapshot file doesn't
//! contain a snapshot at a specific LSN, but rather all the page versions in a range
//! of LSNs.
//!
//! Currently, a snapshot file contains full information needed to reconstruct any
//! page version in the LSN range, without consulting any other snapshot files. When
//! a new snapshot file is created for writing, the full contents of relation are
//! materialized as it is at the beginning of the LSN range. That can be very expensive,
//! so we should find a way to store differential files. But this keeps the read side
//! of things simple: you can find the correct snapshot file based on RelishTag and
//! timeline+LSN, and once you've located it, you have all the data you need in that
//! file.
//!
//! When a snapshot file needs to be accessed, we slurp the whole file into memory, into
//! the SnapshotLayer struct. See load() and unload() functions.
//!
//! On disk, the snapshot files are stored in timelines/<timelineid> directory.
//! Currently, there are no subdirectories, and each snapshot file for a relation is named like this:
//!
//! rel_<spcnode>_<dbnode>_<relnode>_<forknum>_<segno>_<start LSN>_<end LSN>
//!
//! For example:
//!
//! rel_1663_13990_2609_0_10_000000000169C348_000000000169C349
//!
//! If a relation is dropped, we add a '_DROPPED' to the end of the filename to indicate that.
//! So the above example would become:
//!
//! rel_1663_13990_2609_0_10_000000000169C348_000000000169C349_DROPPED
//!
//! The end LSN indicates when it was dropped in that case, we don't store it in the
//! file contents in any way.
//!
//! A snapshot file is constructed using the 'bookfile' crate. Each file consists of two
//! parts: the page versions and the relation sizes. They are stored as separate chapters.
//!
use crate::layered_repository::page_history::PageHistory;
use crate::layered_repository::storage_layer::{
Layer, PageReconstructData, PageVersion, SegmentTag,
};
use crate::relish::*;
use crate::PageServerConf;
use crate::{ZTenantId, ZTimelineId};
use anyhow::{bail, Result};
use log::*;
use std::collections::BTreeMap;
use std::fmt;
use std::fs;
use std::fs::File;
use std::io::Write;
use std::ops::Bound::Included;
use std::path::PathBuf;
use std::sync::{Arc, Mutex, MutexGuard};
use bookfile::{Book, BookWriter};
use zenith_utils::bin_ser::BeSer;
use zenith_utils::lsn::Lsn;
// Magic constant to identify a Zenith snapshot file
static SNAPSHOT_FILE_MAGIC: u32 = 0x5A616E01;
static PAGE_VERSIONS_CHAPTER: u64 = 1;
static REL_SIZES_CHAPTER: u64 = 2;
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
struct SnapshotFileName {
seg: SegmentTag,
start_lsn: Lsn,
end_lsn: Lsn,
dropped: bool,
}
impl SnapshotFileName {
fn from_str(fname: &str) -> Option<Self> {
// Split the filename into parts
//
// <spcnode>_<dbnode>_<relnode>_<forknum>_<seg>_<start LSN>_<end LSN>
//
// or if it was dropped:
//
// <spcnode>_<dbnode>_<relnode>_<forknum>_<seg>_<start LSN>_<end LSN>_DROPPED
//
let rel;
let mut parts;
if let Some(rest) = fname.strip_prefix("rel_") {
parts = rest.split('_');
rel = RelishTag::Relation(RelTag {
spcnode: parts.next()?.parse::<u32>().ok()?,
dbnode: parts.next()?.parse::<u32>().ok()?,
relnode: parts.next()?.parse::<u32>().ok()?,
forknum: parts.next()?.parse::<u8>().ok()?,
});
} else if let Some(rest) = fname.strip_prefix("pg_xact_") {
parts = rest.split('_');
rel = RelishTag::Slru {
slru: SlruKind::Clog,
segno: u32::from_str_radix(parts.next()?, 16).ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_multixact_members_") {
parts = rest.split('_');
rel = RelishTag::Slru {
slru: SlruKind::MultiXactMembers,
segno: u32::from_str_radix(parts.next()?, 16).ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_multixact_offsets_") {
parts = rest.split('_');
rel = RelishTag::Slru {
slru: SlruKind::MultiXactOffsets,
segno: u32::from_str_radix(parts.next()?, 16).ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_filenodemap_") {
parts = rest.split('_');
rel = RelishTag::FileNodeMap {
spcnode: parts.next()?.parse::<u32>().ok()?,
dbnode: parts.next()?.parse::<u32>().ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_twophase_") {
parts = rest.split('_');
rel = RelishTag::TwoPhase {
xid: parts.next()?.parse::<u32>().ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_control_checkpoint_") {
parts = rest.split('_');
rel = RelishTag::Checkpoint;
} else if let Some(rest) = fname.strip_prefix("pg_control_") {
parts = rest.split('_');
rel = RelishTag::ControlFile;
} else {
return None;
}
let segno = parts.next()?.parse::<u32>().ok()?;
let seg = SegmentTag { rel, segno };
let start_lsn = Lsn::from_hex(parts.next()?).ok()?;
let end_lsn = Lsn::from_hex(parts.next()?).ok()?;
let mut dropped = false;
if let Some(suffix) = parts.next() {
if suffix == "DROPPED" {
dropped = true;
} else {
warn!("unrecognized filename in timeline dir: {}", fname);
return None;
}
}
if parts.next().is_some() {
warn!("unrecognized filename in timeline dir: {}", fname);
return None;
}
Some(SnapshotFileName {
seg,
start_lsn,
end_lsn,
dropped,
})
}
fn to_string(&self) -> String {
let basename = match self.seg.rel {
RelishTag::Relation(reltag) => format!(
"rel_{}_{}_{}_{}",
reltag.spcnode, reltag.dbnode, reltag.relnode, reltag.forknum
),
RelishTag::Slru {
slru: SlruKind::Clog,
segno,
} => format!("pg_xact_{:04X}", segno),
RelishTag::Slru {
slru: SlruKind::MultiXactMembers,
segno,
} => format!("pg_multixact_members_{:04X}", segno),
RelishTag::Slru {
slru: SlruKind::MultiXactOffsets,
segno,
} => format!("pg_multixact_offsets_{:04X}", segno),
RelishTag::FileNodeMap { spcnode, dbnode } => {
format!("pg_filenodemap_{}_{}", spcnode, dbnode)
}
RelishTag::TwoPhase { xid } => format!("pg_twophase_{}", xid),
RelishTag::Checkpoint => format!("pg_control_checkpoint"),
RelishTag::ControlFile => format!("pg_control"),
};
format!(
"{}_{}_{:016X}_{:016X}{}",
basename,
self.seg.segno,
u64::from(self.start_lsn),
u64::from(self.end_lsn),
if self.dropped { "_DROPPED" } else { "" }
)
}
}
impl fmt::Display for SnapshotFileName {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.to_string())
}
}
///
/// SnapshotLayer is the in-memory data structure associated with an
/// on-disk snapshot file. We keep a SnapshotLayer in memory for each
/// file, in the LayerMap. If a layer is in "loaded" state, we have a
/// copy of the file in memory, in 'inner'. Otherwise the struct is
/// just a placeholder for a file that exists on disk, and it needs to
/// be loaded before using it in queries.
///
pub struct SnapshotLayer {
conf: &'static PageServerConf,
pub tenantid: ZTenantId,
pub timelineid: ZTimelineId,
pub seg: SegmentTag,
//
// This entry contains all the changes from 'start_lsn' to 'end_lsn'. The
// start is inclusive, and end is exclusive.
pub start_lsn: Lsn,
pub end_lsn: Lsn,
dropped: bool,
inner: Mutex<SnapshotLayerInner>,
}
pub struct SnapshotLayerInner {
/// If false, the 'page_versions' and 'relsizes' have not been
/// loaded into memory yet.
loaded: bool,
/// All versions of all pages in the file are kept here.
/// Indexed by block number and LSN.
pages: BTreeMap<u32, PageHistory>,
/// `relsizes` tracks the size of the relation at different points in time.
relsizes: BTreeMap<Lsn, u32>,
}
impl Layer for SnapshotLayer {
fn get_timeline_id(&self) -> ZTimelineId {
return self.timelineid;
}
fn get_seg_tag(&self) -> SegmentTag {
return self.seg;
}
fn is_dropped(&self) -> bool {
return self.dropped;
}
fn get_start_lsn(&self) -> Lsn {
return self.start_lsn;
}
fn get_end_lsn(&self) -> Lsn {
return self.end_lsn;
}
/// Look up given page in the cache.
fn get_page_reconstruct_data(
&self,
blknum: u32,
lsn: Lsn,
reconstruct_data: &mut PageReconstructData,
) -> Result<Option<Lsn>> {
/*
// Scan the BTreeMap backwards, starting from the given entry.
let mut need_base_image_lsn: Option<Lsn> = Some(lsn);
{
let inner = self.load()?;
let minkey = (blknum, Lsn(0));
let maxkey = (blknum, lsn);
let mut iter = inner
.page_versions
.range((Included(&minkey), Included(&maxkey)));
while let Some(((_blknum, entry_lsn), entry)) = iter.next_back() {
if let Some(img) = &entry.page_image {
reconstruct_data.page_img = Some(img.clone());
need_base_image_lsn = None;
break;
} else if let Some(rec) = &entry.record {
reconstruct_data.records.push(rec.clone());
if rec.will_init {
// This WAL record initializes the page, so no need to go further back
need_base_image_lsn = None;
break;
} else {
need_base_image_lsn = Some(*entry_lsn);
}
} else {
// No base image, and no WAL record. Huh?
bail!("no page image or WAL record for requested page");
}
}
// release lock on 'inner'
}
Ok(need_base_image_lsn)
*/
todo!()
}
/// Get size of the relation at given LSN
fn get_seg_size(&self, lsn: Lsn) -> Result<u32> {
// Scan the BTreeMap backwards, starting from the given entry.
let inner = self.load()?;
let mut iter = inner.relsizes.range((Included(&Lsn(0)), Included(&lsn)));
if let Some((_entry_lsn, entry)) = iter.next_back() {
let result = *entry;
drop(inner);
trace!("get_seg_size: {} at {} -> {}", self.seg, lsn, result);
Ok(result)
} else {
error!(
"No size found for {} at {} in snapshot layer {} {}-{}",
self.seg, lsn, self.seg, self.start_lsn, self.end_lsn
);
bail!(
"No size found for {} at {} in snapshot layer",
self.seg,
lsn
);
}
}
/// Does this segment exist at given LSN?
fn get_seg_exists(&self, lsn: Lsn) -> Result<bool> {
// Is the requested LSN after the rel was dropped?
if self.dropped && lsn >= self.end_lsn {
return Ok(false);
}
// Otherwise, it exists.
Ok(true)
}
}
impl SnapshotLayer {
fn path(&self) -> PathBuf {
Self::path_for(
self.conf,
self.timelineid,
self.tenantid,
&SnapshotFileName {
seg: self.seg,
start_lsn: self.start_lsn,
end_lsn: self.end_lsn,
dropped: self.dropped,
},
)
}
fn path_for(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
fname: &SnapshotFileName,
) -> PathBuf {
conf.timeline_path(&timelineid, &tenantid)
.join(fname.to_string())
}
/// Create a new snapshot file, using the given btreemaps containing the page versions and
/// relsizes.
///
/// This is used to write the in-memory layer to disk. The in-memory layer uses the same
/// data structure with two btreemaps as we do, so passing the btreemaps is currently
/// expedient.
pub fn create(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
seg: SegmentTag,
start_lsn: Lsn,
end_lsn: Lsn,
dropped: bool,
pages: BTreeMap<u32, PageHistory>,
relsizes: BTreeMap<Lsn, u32>,
) -> Result<SnapshotLayer> {
let snapfile = SnapshotLayer {
conf: conf,
timelineid: timelineid,
tenantid: tenantid,
seg: seg,
start_lsn: start_lsn,
end_lsn,
dropped,
inner: Mutex::new(SnapshotLayerInner {
loaded: true,
pages,
relsizes,
}),
};
/*
let inner = snapfile.inner.lock().unwrap();
// Write the in-memory btreemaps into a file
let path = snapfile.path();
// Note: This overwrites any existing file. There shouldn't be any.
// FIXME: throw an error instead?
let file = File::create(&path)?;
let book = BookWriter::new(file, SNAPSHOT_FILE_MAGIC)?;
// Write out page versions
let mut chapter = book.new_chapter(PAGE_VERSIONS_CHAPTER);
let buf = BTreeMap::ser(&inner.page_versions)?;
chapter.write_all(&buf)?;
let book = chapter.close()?;
// and relsizes to separate chapter
let mut chapter = book.new_chapter(REL_SIZES_CHAPTER);
let buf = BTreeMap::ser(&inner.relsizes)?;
chapter.write_all(&buf)?;
let book = chapter.close()?;
book.close()?;
trace!("saved {}", &path.display());
drop(inner);
Ok(snapfile)
*/
todo!()
}
///
/// Load the contents of the file into memory
///
fn load(&self) -> Result<MutexGuard<SnapshotLayerInner>> {
/*
// quick exit if already loaded
let mut inner = self.inner.lock().unwrap();
if inner.loaded {
return Ok(inner);
}
let path = Self::path_for(
self.conf,
self.timelineid,
self.tenantid,
&SnapshotFileName {
seg: self.seg,
start_lsn: self.start_lsn,
end_lsn: self.end_lsn,
dropped: self.dropped,
},
);
let file = File::open(&path)?;
let book = Book::new(file)?;
let chapter = book.read_chapter(PAGE_VERSIONS_CHAPTER)?;
let page_versions = BTreeMap::des(&chapter)?;
let chapter = book.read_chapter(REL_SIZES_CHAPTER)?;
let relsizes = BTreeMap::des(&chapter)?;
debug!("loaded from {}", &path.display());
*inner = SnapshotLayerInner {
loaded: true,
page_versions,
relsizes,
};
Ok(inner)
*/
todo!()
}
/// Create SnapshotLayers representing all files on disk
///
// TODO: returning an Iterator would be more idiomatic
pub fn list_snapshot_files(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
) -> Result<Vec<Arc<SnapshotLayer>>> {
/*
let path = conf.timeline_path(&timelineid, &tenantid);
let mut snapfiles: Vec<Arc<SnapshotLayer>> = Vec::new();
for direntry in fs::read_dir(path)? {
let fname = direntry?.file_name();
let fname = fname.to_str().unwrap();
if let Some(snapfilename) = SnapshotFileName::from_str(fname) {
let snapfile = SnapshotLayer {
conf,
timelineid,
tenantid,
seg: snapfilename.seg,
start_lsn: snapfilename.start_lsn,
end_lsn: snapfilename.end_lsn,
dropped: snapfilename.dropped,
inner: Mutex::new(SnapshotLayerInner {
loaded: false,
page_versions: BTreeMap::new(),
relsizes: BTreeMap::new(),
}),
};
snapfiles.push(Arc::new(snapfile));
}
}
return Ok(snapfiles);
*/
todo!()
}
pub fn delete(&self) -> Result<()> {
// delete underlying file
fs::remove_file(self.path())?;
Ok(())
}
///
/// Release most of the memory used by this layer. If it's accessed again later,
/// it will need to be loaded back.
///
pub fn unload(&self) -> Result<()> {
/*
let mut inner = self.inner.lock().unwrap();
inner.page_versions = BTreeMap::new();
inner.relsizes = BTreeMap::new();
inner.loaded = false;
Ok(())
*/
todo!()
}
/// debugging function to print out the contents of the layer
#[allow(unused)]
pub fn dump(&self) -> String {
let mut result = format!(
"----- snapshot layer for {} {}-{} ----\n",
self.seg, self.start_lsn, self.end_lsn
);
let inner = self.inner.lock().unwrap();
for (k, v) in inner.relsizes.iter() {
result += &format!("{}: {}\n", k, v);
}
//for (k, v) in inner.page_versions.iter() {
// result += &format!("blk {} at {}: {}/{}\n", k.0, k.1, v.page_image.is_some(), v.record.is_some());
//}
result
}
}


@@ -0,0 +1,128 @@
//!
//! Common traits and structs for layers
//!
use crate::relish::RelishTag;
use crate::repository::WALRecord;
use crate::ZTimelineId;
use anyhow::Result;
use bytes::Bytes;
use serde::{Deserialize, Serialize};
use std::fmt;
use zenith_utils::lsn::Lsn;
// Size of one segment in pages (10 MB)
pub const RELISH_SEG_SIZE: u32 = 10 * 1024 * 1024 / 8192;
///
/// Each relish stored in the repository is divided into fixed-sized "segments",
/// with 10 MB of key-space, or 1280 8k pages each.
///
#[derive(Debug, PartialEq, Eq, PartialOrd, Hash, Ord, Clone, Copy)]
pub struct SegmentTag {
pub rel: RelishTag,
pub segno: u32,
}
impl fmt::Display for SegmentTag {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}.{}", self.rel, self.segno)
}
}
impl SegmentTag {
pub const fn from_blknum(rel: RelishTag, blknum: u32) -> SegmentTag {
SegmentTag {
rel,
segno: blknum / RELISH_SEG_SIZE,
}
}
pub fn blknum_in_seg(&self, blknum: u32) -> bool {
blknum / RELISH_SEG_SIZE == self.segno
}
}
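// Worked example of the segment arithmetic above: with RELISH_SEG_SIZE =
// 10 MB / 8 KB = 1280 pages, block 3000 of a relish falls in segment
// 3000 / 1280 = 2, which covers blocks 2560..=3839. So
// SegmentTag::from_blknum(rel, 3000).segno == 2, and blknum_in_seg(3000)
// is true for that segment, while blknum_in_seg(1279) is false.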
///
/// Represents a version of a page at a specific LSN. The LSN is the key of the
/// entry in the 'page_versions' hash, it is not duplicated here.
///
/// A page version can be stored as a full page image, or as WAL record that needs
/// to be applied over the previous page version to reconstruct this version.
///
/// It's also possible to have both a WAL record and a page image in the same
/// PageVersion. That happens if page version is originally stored as a WAL record
/// but it is later reconstructed by a GetPage@LSN request by performing WAL
/// redo. The get_page_at_lsn() code will store the reconstructed page image next to
/// the WAL record in that case. TODO: That's pretty accidental, not the result
/// of any grand design. If we want to keep reconstructed page versions around, we
/// probably should have a separate buffer cache so that we could control the
/// replacement policy globally. Or if we keep a reconstructed page image, we
/// could throw away the WAL record.
///
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PageVersion {
/// an 8kb page image
pub page_image: Option<Bytes>,
/// WAL record to get from previous page version to this one.
pub record: Option<WALRecord>,
}
///
/// Data needed to reconstruct a page version
///
/// 'page_img' is the old base image of the page to start the WAL replay with.
/// It can be None, if the first WAL record initializes the page (will_init)
/// 'records' contains the records to apply over the base image.
///
pub struct PageReconstructData {
pub records: Vec<WALRecord>,
pub page_img: Option<Bytes>,
}
///
/// A Layer holds all page versions for one segment of a relish, in a range of LSNs.
/// There are two kinds of layers, in-memory and snapshot layers. In-memory
/// layers are used to ingest incoming WAL, and provide fast access
/// to the recent page versions. Snapshot layers are stored on disk, and
/// are immutable. This trait presents the common functionality of
/// in-memory and snapshot layers.
///
/// Each layer contains a full snapshot of the segment at the start
/// LSN. In addition to that, it contains WAL (or more page images)
/// needed to reconstruct any page version up to the end LSN.
///
pub trait Layer: Send + Sync {
// These functions identify the relish segment and the LSN range
// that this Layer holds.
fn get_timeline_id(&self) -> ZTimelineId;
fn get_seg_tag(&self) -> SegmentTag;
fn get_start_lsn(&self) -> Lsn;
fn get_end_lsn(&self) -> Lsn;
fn is_dropped(&self) -> bool;
///
/// Return data needed to reconstruct given page at LSN.
///
/// It is up to the caller to collect more data from previous layer and
/// perform WAL redo, if necessary.
///
/// If this returns Some, the returned data is not complete. The caller needs
/// to continue with the returned 'lsn'.
///
/// Note that the 'blknum' is the offset of the page from the beginning
/// of the *relish*, not the beginning of the segment. The requested
/// 'blknum' must be covered by this segment.
fn get_page_reconstruct_data(
&self,
blknum: u32,
lsn: Lsn,
reconstruct_data: &mut PageReconstructData,
) -> Result<Option<Lsn>>;
// Functions that correspond to the Timeline trait functions.
fn get_seg_size(&self, lsn: Lsn) -> Result<u32>;
fn get_seg_exists(&self, lsn: Lsn) -> Result<bool>;
}
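The comment on get_page_reconstruct_data above implies a particular calling pattern: the caller keeps descending into older layers until the data is complete. A minimal sketch of that loop, assuming a hypothetical layer_for(seg, lsn) helper in place of the real LayerMap lookup, with request_lsn, seg and blknum in scope:
// Walk from the requested LSN towards older layers, collecting WAL records
// and eventually a base image, then hand the result to WAL redo.
let mut data = PageReconstructData { records: Vec::new(), page_img: None };
let mut cont_lsn = request_lsn;
loop {
    let layer = layer_for(seg, cont_lsn); // hypothetical lookup helper
    match layer.get_page_reconstruct_data(blknum, cont_lsn, &mut data)? {
        Some(older_lsn) => cont_lsn = older_lsn, // not complete yet; continue in an older layer
        None => break,                           // base image (or will_init record) reached
    }
}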


@@ -1,37 +1,46 @@
use serde::{Deserialize, Serialize};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use std::fmt;
use std::path::PathBuf;
use std::str::FromStr;
use std::time::Duration;
use lazy_static::lazy_static;
use zenith_metrics::{register_int_gauge_vec, IntGaugeVec};
pub mod basebackup;
pub mod branches;
pub mod inmem_storage;
pub mod layered_repository;
pub mod logger;
pub mod object_key;
pub mod object_repository;
pub mod object_store;
pub mod page_cache;
pub mod page_service;
pub mod relish;
pub mod repository;
pub mod restore_local_repo;
pub mod rocksdb_storage;
pub mod tui;
pub mod tui_event;
mod tui_logger;
pub mod waldecoder;
pub mod walreceiver;
pub mod walredo;
lazy_static! {
static ref LIVE_CONNECTIONS_COUNT: IntGaugeVec = register_int_gauge_vec!(
"pageserver_live_connections_count",
"Number of live network connections",
&["pageserver_connection_kind"]
)
.expect("failed to define a metric");
}
#[derive(Debug, Clone)]
pub struct PageServerConf {
pub daemonize: bool,
pub interactive: bool,
pub materialize: bool,
pub listen_addr: String,
pub http_endpoint_addr: String,
pub gc_horizon: u64,
pub gc_period: Duration,
pub wal_redoers: usize,
pub superuser: String,
// Repository directory, relative to current working directory.
// Normally, the page server changes the current working directory
@@ -42,6 +51,18 @@ pub struct PageServerConf {
pub workdir: PathBuf,
pub pg_distrib_dir: PathBuf,
pub auth_type: AuthType,
pub auth_validation_public_key_path: Option<PathBuf>,
pub repository_format: RepositoryFormat,
}
#[derive(Debug, Clone, PartialEq)]
pub enum RepositoryFormat {
Layered,
RocksDb,
}
impl PageServerConf {
@@ -49,24 +70,44 @@ impl PageServerConf {
// Repository paths, relative to workdir.
//
fn tag_path(&self, name: &str) -> PathBuf {
self.workdir.join("refs").join("tags").join(name)
fn tenants_path(&self) -> PathBuf {
self.workdir.join("tenants")
}
fn branch_path(&self, name: &str) -> PathBuf {
self.workdir.join("refs").join("branches").join(name)
fn tenant_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenants_path().join(tenantid.to_string())
}
fn timeline_path(&self, timelineid: ZTimelineId) -> PathBuf {
self.workdir.join("timelines").join(timelineid.to_string())
fn tags_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenant_path(tenantid).join("refs").join("tags")
}
fn snapshots_path(&self, timelineid: ZTimelineId) -> PathBuf {
self.timeline_path(timelineid).join("snapshots")
fn tag_path(&self, tag_name: &str, tenantid: &ZTenantId) -> PathBuf {
self.tags_path(tenantid).join(tag_name)
}
fn ancestor_path(&self, timelineid: ZTimelineId) -> PathBuf {
self.timeline_path(timelineid).join("ancestor")
fn branches_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenant_path(tenantid).join("refs").join("branches")
}
fn branch_path(&self, branch_name: &str, tenantid: &ZTenantId) -> PathBuf {
self.branches_path(tenantid).join(branch_name)
}
fn timelines_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenant_path(tenantid).join("timelines")
}
fn timeline_path(&self, timelineid: &ZTimelineId, tenantid: &ZTenantId) -> PathBuf {
self.timelines_path(tenantid).join(timelineid.to_string())
}
fn ancestor_path(&self, timelineid: &ZTimelineId, tenantid: &ZTenantId) -> PathBuf {
self.timeline_path(timelineid, tenantid).join("ancestor")
}
fn wal_dir_path(&self, timelineid: &ZTimelineId, tenantid: &ZTenantId) -> PathBuf {
self.timeline_path(timelineid, tenantid).join("wal")
}
//
@@ -81,64 +122,3 @@ impl PageServerConf {
self.pg_distrib_dir.join("lib")
}
}
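Taken together, the path helpers above imply the following per-tenant directory layout under the page server workdir (an illustration derived from the functions, with placeholder IDs):
// <workdir>/tenants/<tenantid>/refs/tags/<tag_name>
// <workdir>/tenants/<tenantid>/refs/branches/<branch_name>
// <workdir>/tenants/<tenantid>/timelines/<timelineid>/ancestor
// <workdir>/tenants/<tenantid>/timelines/<timelineid>/wal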
/// Zenith Timeline ID is a 128-bit random ID.
///
/// Zenith timeline IDs are different from PostgreSQL timeline
/// IDs. They serve a similar purpose though: they differentiate
/// between different "histories" of the same cluster. However,
/// PostgreSQL timeline IDs are a bit cumbersome, because they are only
/// 32-bits wide, and they must be in ascending order in any given
/// timeline history. Those limitations mean that we cannot generate a
/// new PostgreSQL timeline ID by just generating a random number. And
/// that in turn is problematic for the "pull/push" workflow, where you
/// have a local copy of a zenith repository, and you periodically sync
/// the local changes with a remote server. When you work "detached"
/// from the remote server, you cannot create a PostgreSQL timeline ID
/// that's guaranteed to be different from all existing timelines in
/// the remote server. For example, if two people are having a clone of
/// the repository on their laptops, and they both create a new branch
/// with different name. What timeline ID would they assign to their
/// branches? If they pick the same one, and later try to push the
/// branches to the same remote server, they will get mixed up.
///
/// To avoid those issues, Zenith has its own concept of timelines that
/// is separate from PostgreSQL timelines, and doesn't have those
/// limitations. A zenith timeline is identified by a 128-bit ID, which
/// is usually printed out as a hex string.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
pub struct ZTimelineId([u8; 16]);
impl FromStr for ZTimelineId {
type Err = hex::FromHexError;
fn from_str(s: &str) -> Result<ZTimelineId, Self::Err> {
let timelineid = hex::decode(s)?;
let mut buf: [u8; 16] = [0u8; 16];
buf.copy_from_slice(timelineid.as_slice());
Ok(ZTimelineId(buf))
}
}
impl ZTimelineId {
pub fn from(b: [u8; 16]) -> ZTimelineId {
ZTimelineId(b)
}
pub fn get_from_buf(buf: &mut dyn bytes::Buf) -> ZTimelineId {
let mut arr = [0u8; 16];
buf.copy_to_slice(&mut arr);
ZTimelineId::from(arr)
}
pub fn as_arr(&self) -> [u8; 16] {
self.0
}
}
impl fmt::Display for ZTimelineId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str(&hex::encode(self.0))
}
}
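As a hedged usage sketch (not from the patch): a ZTimelineId round-trips through its 32-character hex form, since FromStr hex-decodes into the 16-byte array and Display hex-encodes it again. Note that from_str as written panics if the input decodes to anything other than 16 bytes, because copy_from_slice requires equal lengths.
// Illustrative only:
let id: ZTimelineId = "11223344556677881122334455667788".parse()?;
assert_eq!(id.to_string(), "11223344556677881122334455667788");
assert_eq!(id.as_arr().len(), 16);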

pageserver/src/logger.rs (new file, 45 lines)

@@ -0,0 +1,45 @@
use crate::PageServerConf;
use anyhow::{Context, Result};
use slog::{Drain, FnValue};
use std::fs::{File, OpenOptions};
pub fn init_logging(
_conf: &PageServerConf,
log_filename: &str,
) -> Result<(slog_scope::GlobalLoggerGuard, File)> {
// Don't open the same file for output multiple times;
// the different fds could overwrite each other's output.
let log_file = OpenOptions::new()
.create(true)
.append(true)
.open(&log_filename)
.with_context(|| format!("failed to open {:?}", &log_filename))?;
let logger_file = log_file.try_clone().unwrap();
let decorator = slog_term::PlainSyncDecorator::new(logger_file);
let drain = slog_term::FullFormat::new(decorator).build();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
false
});
let drain = std::sync::Mutex::new(drain).fuse();
let logger = slog::Logger::root(
drain,
slog::o!(
"location" =>
FnValue(move |record| {
format!("{}, {}:{}",
record.module(),
record.file(),
record.line()
)
}
)
),
);
Ok((slog_scope::set_global_logger(logger), log_file))
}
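A minimal usage sketch (assumed, not shown in the patch): the returned GlobalLoggerGuard must be kept alive for the lifetime of the process, because dropping it resets the global slog logger.
// Keep the guard alive; dropping it tears down the global logger.
let (_log_guard, _log_file) = init_logging(&conf, "pageserver.log")?;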


@@ -1,84 +1,49 @@
use crate::repository::{BufferTag, RelTag};
use crate::waldecoder::TransactionId;
use crate::ZTimelineId;
//!
//! Common structs shared by object_repository.rs and object_store.rs.
//!
use crate::relish::RelishTag;
use serde::{Deserialize, Serialize};
use zenith_utils::zid::ZTimelineId;
///
/// ObjectKey is the key type used to identify objects stored in an object
/// repository. It is shared between object_repository.rs and object_store.rs.
/// It is mostly opaque to ObjectStore, it just stores and retrieves objects
/// using the key given by the caller.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
///
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ObjectKey {
pub timeline: ZTimelineId,
pub tag: ObjectTag,
}
///
/// Non-relation transaction status files (clog (a.k.a. pg_xact) and pg_multixact)
/// in Postgres are handled by SLRU (Simple LRU) buffer, hence the name.
///
/// These files are global for a postgres instance.
///
/// These files are divided into segments, which are divided into pages
/// of the same BLCKSZ as used for relation files.
///
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct SlruBufferTag {
pub blknum: u32,
}
///
/// Special type of Postgres files: pg_filenode.map is needed to map
/// catalog table OIDs to filenode numbers, which define filename.
///
/// Each database has a map file for its local mapped catalogs,
/// and there is a separate map file for shared catalogs.
///
/// These files have an atypical size of 512 bytes.
///
/// See PostgreSQL relmapper.c for details.
///
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct DatabaseTag {
pub spcnode: u32,
pub dbnode: u32,
}
///
/// Non-relation files that keep state for prepared transactions.
/// Unlike other files these are not divided into pages.
///
/// See PostgreSQL twophase.c for details.
///
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct PrepareTag {
pub xid: TransactionId,
}
/// ObjectTag is a part of ObjectKey that is specific to the type of
/// the stored object.
///
/// NB: the order of the enum values is significant! In particular,
/// rocksdb_storage.rs assumes that TimelineMetadataTag is first
///
/// Buffer is the kind of object that is accessible by the public
/// get_page_at_lsn() / put_page_image() / put_wal_record() functions in
/// the repository.rs interface. The rest are internal objects stored in
/// the key-value store, to store various metadata. They're not directly
/// accessible outside object_repository.rs
///
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum ObjectTag {
// dummy tag preceding all other keys
FirstTag,
// Metadata about a timeline. Not versioned.
TimelineMetadataTag,
// Special entry that represents PostgreSQL checkpoint.
// We use it to track fields needed to restore controlfile checkpoint.
Checkpoint,
// Various types of non-relation files.
// We need them to bootstrap compute node.
ControlFile,
Clog(SlruBufferTag),
MultiXactMembers(SlruBufferTag),
MultiXactOffsets(SlruBufferTag),
FileNodeMap(DatabaseTag),
TwoPhase(PrepareTag),
// put relations at the end of enum to allow efficient iterations through non-rel objects
RelationMetadata(RelTag),
RelationBuffer(BufferTag),
// These objects store metadata about one relish. Currently it's used
// just to track the relish's size. It's not used for non-blocky relishes
// at all.
RelationMetadata(RelishTag),
// These are the pages exposed in the public Repository/Timeline interface.
Buffer(RelishTag, u32),
}
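For illustration only (not part of the diff): a key for block 42 of an ordinary relation would be built roughly like this, assuming `timelineid` and a RelTag value `rel` are in scope.
// Hypothetical construction of a key for one relation page:
let key = ObjectKey {
    timeline: timelineid,
    tag: ObjectTag::Buffer(RelishTag::Relation(rel), 42),
};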

File diff suppressed because it is too large.


@@ -1,12 +1,12 @@
//! Low-level key-value storage abstraction.
//!
use crate::object_key::*;
use crate::repository::RelTag;
use crate::ZTimelineId;
use crate::relish::*;
use anyhow::Result;
use std::collections::HashSet;
use std::iter::Iterator;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTimelineId;
///
/// Low-level storage abstraction.
@@ -69,6 +69,11 @@ pub trait ObjectStore: Send + Sync {
lsn: Lsn,
) -> Result<HashSet<RelTag>>;
/// Iterate through non-rel relishes
///
/// This is used to prepare tarball for new node startup.
fn list_nonrels<'a>(&'a self, timelineid: ZTimelineId, lsn: Lsn) -> Result<HashSet<RelishTag>>;
/// Iterate through object tags. If nonrel_only, only non-relational data is iterated.
///
/// This is used to implement GC and preparing tarball for new node startup
@@ -76,7 +81,6 @@ pub trait ObjectStore: Send + Sync {
fn list_objects<'a>(
&'a self,
timelineid: ZTimelineId,
nonrel_only: bool,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = ObjectTag> + 'a>>;


@@ -1,37 +1,91 @@
//! This module acts as a switchboard to access different repositories managed by this
//! page server. Currently, a Page Server can only manage one repository, so there
//! isn't much here. If we implement multi-tenancy, this will probably be changed into
//! a hash map, keyed by the tenant ID.
//! page server.
use crate::branches;
use crate::layered_repository::LayeredRepository;
use crate::object_repository::ObjectRepository;
use crate::repository::Repository;
use crate::rocksdb_storage::RocksObjectStore;
//use crate::inmem_storage::InmemObjectStore;
use crate::walredo::PostgresRedoManager;
use crate::PageServerConf;
use crate::{PageServerConf, RepositoryFormat};
use anyhow::{anyhow, bail, Result};
use lazy_static::lazy_static;
use log::info;
use std::collections::HashMap;
use std::fs;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use zenith_utils::zid::ZTenantId;
lazy_static! {
pub static ref REPOSITORY: Mutex<Option<Arc<dyn Repository>>> = Mutex::new(None);
pub static ref REPOSITORY: Mutex<HashMap<ZTenantId, Arc<dyn Repository>>> =
Mutex::new(HashMap::new());
}
pub fn init(conf: &'static PageServerConf) {
let mut m = REPOSITORY.lock().unwrap();
let obj_store = RocksObjectStore::open(conf).unwrap();
//let obj_store = InmemObjectStore::open(conf).unwrap();
for dir_entry in fs::read_dir(conf.tenants_path()).unwrap() {
let tenantid =
ZTenantId::from_str(dir_entry.unwrap().file_name().to_str().unwrap()).unwrap();
// Set up a WAL redo manager, for applying WAL records.
let walredo_mgr = PostgresRedoManager::new(conf);
// Set up a WAL redo manager, for applying WAL records.
let walredo_mgr = PostgresRedoManager::new(conf, tenantid);
// we have already changed current dir to the repository.
let repo = ObjectRepository::new(conf, Arc::new(obj_store), Arc::new(walredo_mgr));
// Set up an object repository, for actual data storage.
let repo: Arc<dyn Repository + Sync + Send> = match conf.repository_format {
RepositoryFormat::Layered => {
let repo = Arc::new(LayeredRepository::new(
conf,
Arc::new(walredo_mgr),
tenantid,
));
LayeredRepository::launch_checkpointer_thread(conf, repo.clone());
repo
}
RepositoryFormat::RocksDb => {
let obj_store = RocksObjectStore::open(conf, &tenantid).unwrap();
*m = Some(Arc::new(repo));
Arc::new(ObjectRepository::new(
conf,
Arc::new(obj_store),
Arc::new(walredo_mgr),
tenantid,
))
}
};
info!("initialized storage for tenant: {}", &tenantid);
m.insert(tenantid, repo);
}
}
pub fn get_repository() -> Arc<dyn Repository> {
pub fn create_repository_for_tenant(
conf: &'static PageServerConf,
tenantid: ZTenantId,
) -> Result<()> {
let mut m = REPOSITORY.lock().unwrap();
// First check that the tenant doesn't exist already
if m.get(&tenantid).is_some() {
bail!("tenant {} already exists", tenantid);
}
let wal_redo_manager = Arc::new(PostgresRedoManager::new(conf, tenantid));
let repo = branches::create_repo(conf, tenantid, wal_redo_manager)?;
m.insert(tenantid, repo);
Ok(())
}
pub fn insert_repository_for_tenant(tenantid: ZTenantId, repo: Arc<dyn Repository>) {
let o = &mut REPOSITORY.lock().unwrap();
o.insert(tenantid, repo);
}
pub fn get_repository_for_tenant(tenantid: &ZTenantId) -> Result<Arc<dyn Repository>> {
let o = &REPOSITORY.lock().unwrap();
Arc::clone(o.as_ref().unwrap())
o.get(tenantid)
.map(|repo| Arc::clone(repo))
.ok_or_else(|| anyhow!("repository not found for tenant name {}", tenantid))
}


@@ -10,31 +10,36 @@
// *callmemaybe <zenith timelineid> $url* -- ask pageserver to start walreceiver on $url
//
use anyhow::{anyhow, bail};
use anyhow::{anyhow, bail, ensure, Result};
use bytes::{Buf, BufMut, Bytes, BytesMut};
use lazy_static::lazy_static;
use log::*;
use regex::Regex;
use std::io::Write;
use std::net::TcpListener;
use std::str;
use std::str::FromStr;
use std::sync::Arc;
use std::thread;
use std::{io, net::TcpStream};
use zenith_metrics::{register_histogram_vec, HistogramVec};
use zenith_utils::auth::JwtAuth;
use zenith_utils::auth::{Claims, Scope};
use zenith_utils::postgres_backend::PostgresBackend;
use zenith_utils::postgres_backend::{self, AuthType};
use zenith_utils::pq_proto::{
BeMessage, FeMessage, RowDescriptor, HELLO_WORLD_ROW, SINGLE_COL_ROWDESC,
};
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use zenith_utils::{bin_ser::BeSer, lsn::Lsn};
use crate::basebackup;
use crate::branches;
use crate::object_key::ObjectTag;
use crate::page_cache;
use crate::repository::{BufferTag, RelTag, RelationUpdate, Update};
use crate::restore_local_repo;
use crate::relish::*;
use crate::repository::Modification;
use crate::walreceiver;
use crate::PageServerConf;
use crate::ZTimelineId;
// Wrapped in libpq CopyData
enum PagestreamFeMessage {
@@ -138,34 +143,77 @@ impl PagestreamBeMessage {
///
/// Listens for connections, and launches a new handler thread for each.
///
pub fn thread_main(conf: &'static PageServerConf, listener: TcpListener) -> anyhow::Result<()> {
pub fn thread_main(
conf: &'static PageServerConf,
auth: Arc<Option<JwtAuth>>,
listener: TcpListener,
auth_type: AuthType,
) -> anyhow::Result<()> {
loop {
let (socket, peer_addr) = listener.accept()?;
debug!("accepted connection from {}", peer_addr);
socket.set_nodelay(true).unwrap();
let local_auth = Arc::clone(&auth);
thread::spawn(move || {
if let Err(err) = page_service_conn_main(conf, socket) {
if let Err(err) = page_service_conn_main(conf, local_auth, socket, auth_type) {
error!("error: {}", err);
}
});
}
}
fn page_service_conn_main(conf: &'static PageServerConf, socket: TcpStream) -> anyhow::Result<()> {
let mut conn_handler = PageServerHandler::new(conf);
let mut pgbackend = PostgresBackend::new(socket, AuthType::Trust)?;
fn page_service_conn_main(
conf: &'static PageServerConf,
auth: Arc<Option<JwtAuth>>,
socket: TcpStream,
auth_type: AuthType,
) -> anyhow::Result<()> {
// Immediately increment the gauge, then create a job to decrement it on thread exit.
// One of the pros of `defer!` is that this will *most probably*
// get called, even in presence of panics.
let gauge = crate::LIVE_CONNECTIONS_COUNT.with_label_values(&["page_service"]);
gauge.inc();
scopeguard::defer! {
gauge.dec();
}
let mut conn_handler = PageServerHandler::new(conf, auth);
let pgbackend = PostgresBackend::new(socket, auth_type)?;
pgbackend.run(&mut conn_handler)
}
#[derive(Debug)]
struct PageServerHandler {
conf: &'static PageServerConf,
auth: Arc<Option<JwtAuth>>,
claims: Option<Claims>,
}
const TIME_BUCKETS: &[f64] = &[
0.00001, // 1/100000 s
0.0001, 0.00015, 0.0002, 0.00025, 0.0003, 0.00035, 0.0005, 0.00075, // 1/10000 s
0.001, 0.0025, 0.005, 0.0075, // 1/1000 s
0.01, 0.0125, 0.015, 0.025, 0.05, // 1/100 s
0.1, // 1/10 s
];
lazy_static! {
static ref SMGR_QUERY_TIME: HistogramVec = register_histogram_vec!(
"pageserver_smgr_query_time",
"Time spent on smgr query handling",
&["smgr_query_type"],
TIME_BUCKETS.into()
)
.expect("failed to define a metric");
}
impl PageServerHandler {
pub fn new(conf: &'static PageServerConf) -> Self {
PageServerHandler { conf }
pub fn new(conf: &'static PageServerConf, auth: Arc<Option<JwtAuth>>) -> Self {
PageServerHandler {
conf,
auth,
claims: None,
}
}
fn handle_controlfile(&self, pgb: &mut PostgresBackend) -> io::Result<()> {
@@ -180,9 +228,10 @@ impl PageServerHandler {
&self,
pgb: &mut PostgresBackend,
timelineid: ZTimelineId,
tenantid: ZTenantId,
) -> anyhow::Result<()> {
// Check that the timeline exists
let repository = page_cache::get_repository();
let repository = page_cache::get_repository_for_tenant(&tenantid)?;
let timeline = repository.get_timeline(timelineid).map_err(|_| {
anyhow!(
"client requested pagestream on timeline {} which does not exist in page server",
@@ -205,14 +254,19 @@ impl PageServerHandler {
let response = match zenith_fe_msg {
PagestreamFeMessage::Exists(req) => {
let tag = RelTag {
let rel = RelTag {
spcnode: req.spcnode,
dbnode: req.dbnode,
relnode: req.relnode,
forknum: req.forknum,
};
let tag = RelishTag::Relation(rel);
let exist = timeline.get_rel_exists(tag, req.lsn).unwrap_or(false);
let exist = SMGR_QUERY_TIME
.with_label_values(&["get_rel_exists"])
.observe_closure_duration(|| {
timeline.get_rel_exists(tag, req.lsn).unwrap_or(false)
});
PagestreamBeMessage::Status(PagestreamStatusResponse {
ok: exist,
@@ -220,44 +274,56 @@ impl PageServerHandler {
})
}
PagestreamFeMessage::Nblocks(req) => {
let tag = RelTag {
let rel = RelTag {
spcnode: req.spcnode,
dbnode: req.dbnode,
relnode: req.relnode,
forknum: req.forknum,
};
let tag = RelishTag::Relation(rel);
let n_blocks = timeline.get_rel_size(tag, req.lsn).unwrap_or(0);
let n_blocks = SMGR_QUERY_TIME
.with_label_values(&["get_rel_size"])
.observe_closure_duration(|| {
// Return 0 if relation is not found.
// This is what postgres smgr expects.
timeline
.get_relish_size(tag, req.lsn)
.unwrap_or(Some(0))
.unwrap_or(0)
});
PagestreamBeMessage::Nblocks(PagestreamStatusResponse { ok: true, n_blocks })
}
PagestreamFeMessage::Read(req) => {
let tag = ObjectTag::RelationBuffer(BufferTag {
rel: RelTag {
spcnode: req.spcnode,
dbnode: req.dbnode,
relnode: req.relnode,
forknum: req.forknum,
},
blknum: req.blkno,
});
let read_response = match timeline.get_page_at_lsn(tag, req.lsn) {
Ok(p) => PagestreamReadResponse {
ok: true,
n_blocks: 0,
page: p,
},
Err(e) => {
const ZERO_PAGE: [u8; 8192] = [0; 8192];
error!("get_page_at_lsn: {}", e);
PagestreamReadResponse {
ok: false,
n_blocks: 0,
page: Bytes::from_static(&ZERO_PAGE),
}
}
let rel = RelTag {
spcnode: req.spcnode,
dbnode: req.dbnode,
relnode: req.relnode,
forknum: req.forknum,
};
let tag = RelishTag::Relation(rel);
let read_response = SMGR_QUERY_TIME
.with_label_values(&["get_page_at_lsn"])
.observe_closure_duration(|| {
match timeline.get_page_at_lsn(tag, req.blkno, req.lsn) {
Ok(p) => PagestreamReadResponse {
ok: true,
n_blocks: 0,
page: p,
},
Err(e) => {
const ZERO_PAGE: [u8; 8192] = [0; 8192];
error!("get_page_at_lsn: {}", e);
PagestreamReadResponse {
ok: false,
n_blocks: 0,
page: Bytes::from_static(&ZERO_PAGE),
}
}
}
});
PagestreamBeMessage::Read(read_response)
}
@@ -274,9 +340,10 @@ impl PageServerHandler {
pgb: &mut PostgresBackend,
timelineid: ZTimelineId,
lsn: Option<Lsn>,
tenantid: ZTenantId,
) -> anyhow::Result<()> {
// check that the timeline exists
let repository = page_cache::get_repository();
let repository = page_cache::get_repository_for_tenant(&tenantid)?;
let timeline = repository.get_timeline(timelineid).map_err(|e| {
error!("error fetching timeline: {:?}", e);
anyhow!(
@@ -290,21 +357,15 @@ impl PageServerHandler {
/* Send a tarball of the latest snapshot on the timeline */
// find latest snapshot
let snapshot_lsn =
restore_local_repo::find_latest_snapshot(&self.conf, timelineid).unwrap();
let req_lsn = lsn.unwrap_or_else(|| timeline.get_last_valid_lsn());
{
let mut writer = CopyDataSink { pgb };
let mut basebackup = basebackup::Basebackup::new(
&mut writer,
timelineid,
&timeline,
req_lsn,
timeline.get_prev_record_lsn(),
snapshot_lsn,
);
basebackup.send_tarball()?;
}
@@ -313,9 +374,68 @@ impl PageServerHandler {
Ok(())
}
// When accessing the management API, supply None as the argument.
// When authorizing access to a tenant, pass the corresponding tenant id.
fn check_permission(&self, tenantid: Option<ZTenantId>) -> Result<()> {
if self.auth.is_none() {
// auth is set to Trust, nothing to check so just return ok
return Ok(());
}
// auth is Some, as checked above; when auth is Some,
// claims are always present because of checks during connection init,
// so this expect won't trigger
let claims = self
.claims
.as_ref()
.expect("claims presence already checked");
match (&claims.scope, tenantid) {
(Scope::Tenant, None) => {
bail!("Attempt to access management api with tenant scope. Permission denied")
}
(Scope::Tenant, Some(tenantid)) => {
if claims.tenant_id.unwrap() != tenantid {
bail!("Tenant id mismatch. Permission denied")
}
Ok(())
}
(Scope::PageServerApi, None) => Ok(()), // access to management api for PageServerApi scope
(Scope::PageServerApi, Some(_)) => Ok(()), // access to tenant api using PageServerApi scope
}
}
}
impl postgres_backend::Handler for PageServerHandler {
fn check_auth_jwt(
&mut self,
_pgb: &mut PostgresBackend,
jwt_response: &[u8],
) -> anyhow::Result<()> {
// this unwrap is never triggered, because check_auth_jwt is only called when auth_type is ZenithJWT
// which requires auth to be present
let data = self
.auth
.as_ref()
.as_ref()
.unwrap()
.decode(&str::from_utf8(jwt_response)?)?;
if matches!(data.claims.scope, Scope::Tenant) {
ensure!(
data.claims.tenant_id.is_some(),
"jwt token scope is Tenant, but tenant id is missing"
)
}
info!(
"jwt auth succeeded for scope: {:#?} by tenantid: {:?}",
data.claims.scope, data.claims.tenant_id,
);
self.claims = Some(data.claims);
Ok(())
}
fn process_query(
&mut self,
pgb: &mut PostgresBackend,
@@ -328,84 +448,112 @@ impl postgres_backend::Handler for PageServerHandler {
if query_string.last() == Some(&0) {
query_string.truncate(query_string.len() - 1);
}
let query_string = std::str::from_utf8(&query_string)?;
if query_string.starts_with(b"controlfile") {
if query_string.starts_with("controlfile") {
self.handle_controlfile(pgb)?;
} else if query_string.starts_with(b"pagestream ") {
let (_l, r) = query_string.split_at("pagestream ".len());
let timelineid_str = String::from_utf8(r.to_vec())?;
let timelineid = ZTimelineId::from_str(&timelineid_str)?;
} else if query_string.starts_with("pagestream ") {
let (_, params_raw) = query_string.split_at("pagestream ".len());
let params = params_raw.split(" ").collect::<Vec<_>>();
ensure!(
params.len() == 2,
"invalid param number for pagestream command"
);
let tenantid = ZTenantId::from_str(params[0])?;
let timelineid = ZTimelineId::from_str(params[1])?;
self.handle_pagerequests(pgb, timelineid)?;
} else if query_string.starts_with(b"basebackup ") {
let (_l, r) = query_string.split_at("basebackup ".len());
let r = r.to_vec();
let basebackup_args = String::from(String::from_utf8(r)?.trim_end());
let args: Vec<&str> = basebackup_args.rsplit(' ').collect();
let timelineid_str = args[0];
info!("got basebackup command: \"{}\"", timelineid_str);
let timelineid = ZTimelineId::from_str(&timelineid_str)?;
let lsn = if args.len() > 1 {
Some(Lsn::from_str(args[1])?)
self.check_permission(Some(tenantid))?;
self.handle_pagerequests(pgb, timelineid, tenantid)?;
} else if query_string.starts_with("basebackup ") {
let (_, params_raw) = query_string.split_at("basebackup ".len());
let params = params_raw.split(" ").collect::<Vec<_>>();
ensure!(
params.len() == 2,
"invalid param number for basebackup command"
);
let tenantid = ZTenantId::from_str(params[0])?;
let timelineid = ZTimelineId::from_str(params[1])?;
self.check_permission(Some(tenantid))?;
// TODO are there any tests with lsn option?
let lsn = if params.len() == 3 {
Some(Lsn::from_str(params[2])?)
} else {
None
};
info!(
"got basebackup command. tenantid=\"{}\" timelineid=\"{}\" lsn=\"{:#?}\"",
tenantid, timelineid, lsn
);
// Check that the timeline exists
self.handle_basebackup_request(pgb, timelineid, lsn)?;
self.handle_basebackup_request(pgb, timelineid, lsn, tenantid)?;
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with(b"callmemaybe ") {
let query_str = String::from_utf8(query_string.to_vec())?;
// callmemaybe <zenith timelineid as hex string> <connstr>
} else if query_string.starts_with("callmemaybe ") {
// callmemaybe <zenith tenantid as hex string> <zenith timelineid as hex string> <connstr>
// TODO lazy static
let re = Regex::new(r"^callmemaybe ([[:xdigit:]]+) (.*)$").unwrap();
let re = Regex::new(r"^callmemaybe ([[:xdigit:]]+) ([[:xdigit:]]+) (.*)$").unwrap();
let caps = re
.captures(&query_str)
.ok_or_else(|| anyhow!("invalid callmemaybe: '{}'", query_str))?;
.captures(query_string)
.ok_or_else(|| anyhow!("invalid callmemaybe: '{}'", query_string))?;
let timelineid = ZTimelineId::from_str(caps.get(1).unwrap().as_str())?;
let connstr: String = String::from(caps.get(2).unwrap().as_str());
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let timelineid = ZTimelineId::from_str(caps.get(2).unwrap().as_str())?;
let connstr = caps.get(3).unwrap().as_str().to_owned();
self.check_permission(Some(tenantid))?;
// Check that the timeline exists
let repository = page_cache::get_repository();
let repository = page_cache::get_repository_for_tenant(&tenantid)?;
if repository.get_timeline(timelineid).is_err() {
bail!("client requested callmemaybe on timeline {} which does not exist in page server", timelineid);
}
walreceiver::launch_wal_receiver(&self.conf, timelineid, &connstr);
walreceiver::launch_wal_receiver(&self.conf, timelineid, &connstr, tenantid.to_owned());
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with(b"branch_create ") {
let query_str = String::from_utf8(query_string.to_vec())?;
let err = || anyhow!("invalid branch_create: '{}'", query_str);
} else if query_string.starts_with("branch_create ") {
let err = || anyhow!("invalid branch_create: '{}'", query_string);
// branch_create <branchname> <startpoint>
// branch_create <tenantid> <branchname> <startpoint>
// TODO lazy static
// TODO: escaping, to allow branch names with spaces
let re = Regex::new(r"^branch_create (\S+) ([^\r\n\s;]+)[\r\n\s;]*;?$").unwrap();
let caps = re.captures(&query_str).ok_or_else(err)?;
let re = Regex::new(r"^branch_create ([[:xdigit:]]+) (\S+) ([^\r\n\s;]+)[\r\n\s;]*;?$")
.unwrap();
let caps = re.captures(&query_string).ok_or_else(err)?;
let branchname: String = String::from(caps.get(1).ok_or_else(err)?.as_str());
let startpoint_str: String = String::from(caps.get(2).ok_or_else(err)?.as_str());
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let branchname = caps.get(2).ok_or_else(err)?.as_str().to_owned();
let startpoint_str = caps.get(3).ok_or_else(err)?.as_str().to_owned();
let branch = branches::create_branch(&self.conf, &branchname, &startpoint_str)?;
self.check_permission(Some(tenantid))?;
let branch =
branches::create_branch(&self.conf, &branchname, &startpoint_str, &tenantid)?;
let branch = serde_json::to_vec(&branch)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(&branch)]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with(b"push ") {
let query_str = std::str::from_utf8(&query_string)?;
let mut it = query_str.split(' ');
it.next().unwrap();
let timeline_id: ZTimelineId = it
.next()
.ok_or_else(|| anyhow!("missing timeline id"))?
.parse()?;
} else if query_string.starts_with("push ") {
// push <zenith tenantid as hex string> <zenith timelineid as hex string>
let re = Regex::new(r"^push ([[:xdigit:]]+) ([[:xdigit:]]+)$").unwrap();
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid push: '{}'", query_string))?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let timelineid = ZTimelineId::from_str(caps.get(2).unwrap().as_str())?;
self.check_permission(Some(tenantid))?;
let start_lsn = Lsn(0); // TODO this needs to come from the repo
let timeline =
page_cache::get_repository().create_empty_timeline(timeline_id, start_lsn)?;
let timeline = page_cache::get_repository_for_tenant(&tenantid)?
.create_empty_timeline(timelineid, start_lsn)?;
pgb.write_message(&BeMessage::CopyInResponse)?;
@@ -414,38 +562,14 @@ impl postgres_backend::Handler for PageServerHandler {
while let Some(msg) = pgb.read_message()? {
match msg {
FeMessage::CopyData(bytes) => {
let relation_update = RelationUpdate::des(&bytes)?;
let modification = Modification::des(&bytes)?;
last_lsn = relation_update.lsn;
match relation_update.update {
Update::Page { blknum, img } => {
let tag = ObjectTag::RelationBuffer(BufferTag {
rel: relation_update.rel,
blknum,
});
timeline.put_page_image(tag, relation_update.lsn, img, true)?;
}
Update::WALRecord { blknum, rec } => {
let tag = ObjectTag::RelationBuffer(BufferTag {
rel: relation_update.rel,
blknum,
});
timeline.put_wal_record(tag, rec)?;
}
Update::Truncate { n_blocks } => {
timeline.put_truncation(
relation_update.rel,
relation_update.lsn,
n_blocks,
)?;
}
Update::Unlink => {
todo!()
}
}
last_lsn = modification.lsn;
timeline.put_raw_data(
modification.tag,
modification.lsn,
&modification.data,
)?;
}
FeMessage::CopyDone => {
timeline.advance_last_valid_lsn(last_lsn);
@@ -457,22 +581,25 @@ impl postgres_backend::Handler for PageServerHandler {
}
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with(b"request_push ") {
let query_str = std::str::from_utf8(&query_string)?;
let mut it = query_str.split(' ');
it.next().unwrap();
} else if query_string.starts_with("request_push ") {
// request_push <zenith tenantid as hex string> <zenith timelineid as hex string> <postgres_connection_uri>
let re = Regex::new(r"^request_push ([[:xdigit:]]+) ([[:xdigit:]]+) (.*)$").unwrap();
let timeline_id: ZTimelineId = it
.next()
.ok_or_else(|| anyhow!("missing timeline id"))?
.parse()?;
let timeline = page_cache::get_repository().get_timeline(timeline_id)?;
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid request_push: '{}'", query_string))?;
let postgres_connection_uri =
it.next().ok_or_else(|| anyhow!("missing postgres uri"))?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let timelineid = ZTimelineId::from_str(caps.get(2).unwrap().as_str())?;
let postgres_connection_uri = caps.get(3).unwrap().as_str();
self.check_permission(Some(tenantid))?;
let timeline =
page_cache::get_repository_for_tenant(&tenantid)?.get_timeline(timelineid)?;
let mut conn = postgres::Client::connect(postgres_connection_uri, postgres::NoTls)?;
let mut copy_in = conn.copy_in(format!("push {}", timeline_id.to_string()).as_str())?;
let mut copy_in = conn.copy_in(format!("push {}", timelineid.to_string()).as_str())?;
let history = timeline.history()?;
for update_res in history {
@@ -485,104 +612,100 @@ impl postgres_backend::Handler for PageServerHandler {
copy_in.finish()?;
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with(b"branch_list") {
let branches = crate::branches::get_branches(&self.conf)?;
} else if query_string.starts_with("branch_list ") {
// branch_list <zenith tenantid as hex string>
let re = Regex::new(r"^branch_list ([[:xdigit:]]+)$").unwrap();
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid branch_list: '{}'", query_string))?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let branches = crate::branches::get_branches(&self.conf, &tenantid)?;
let branches_buf = serde_json::to_vec(&branches)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(&branches_buf)]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with(b"status") {
} else if query_string.starts_with("tenant_list") {
let tenants = crate::branches::get_tenants(&self.conf)?;
let tenants_buf = serde_json::to_vec(&tenants)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(&tenants_buf)]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("tenant_create") {
let err = || anyhow!("invalid tenant_create: '{}'", query_string);
// tenant_create <tenantid>
let re = Regex::new(r"^tenant_create ([[:xdigit:]]+)$").unwrap();
let caps = re.captures(&query_string).ok_or_else(err)?;
self.check_permission(None)?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
page_cache::create_repository_for_tenant(&self.conf, tenantid)?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with("status") {
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&HELLO_WORLD_ROW)?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.to_ascii_lowercase().starts_with(b"set ") {
} else if query_string.to_ascii_lowercase().starts_with("set ") {
// important because psycopg2 executes "SET datestyle TO 'ISO'"
// on connect
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string
.to_ascii_lowercase()
.starts_with(b"identify_system")
{
// TODO: match postgres response format for 'identify_system'
let system_id = crate::branches::get_system_id(&self.conf)?.to_string();
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?;
pgb.write_message_noflush(&BeMessage::DataRow(&[Some(system_id.as_bytes())]))?;
pgb.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
} else if query_string.starts_with(b"do_gc ") {
} else if query_string.starts_with("do_gc ") {
// Run GC immediately on given timeline.
// FIXME: This is just for tests. See test_runner/batch_others/test_gc.py.
// This probably should require special authentication or a global flag to
// enable, I don't think we want to or need to allow regular clients to invoke
// GC.
let query_str = std::str::from_utf8(&query_string)?;
let mut it = query_str.split(' ');
it.next().unwrap();
// do_gc <tenant_id> <timeline_id> <gc_horizon>
let re = Regex::new(r"^do_gc ([[:xdigit:]]+)\s([[:xdigit:]]+)($|\s)([[:digit:]]+)?")
.unwrap();
let timeline_id: ZTimelineId = it
.next()
.ok_or_else(|| anyhow!("missing timeline id"))?
.parse()?;
let timeline = page_cache::get_repository().get_timeline(timeline_id)?;
let caps = re
.captures(query_string)
.ok_or_else(|| anyhow!("invalid do_gc: '{}'", query_string))?;
let horizon: u64 = it
.next()
.unwrap_or(&self.conf.gc_horizon.to_string())
.parse()?;
let tenantid = ZTenantId::from_str(caps.get(1).unwrap().as_str())?;
let timelineid = ZTimelineId::from_str(caps.get(2).unwrap().as_str())?;
let gc_horizon: u64 = caps
.get(4)
.map(|h| h.as_str().parse())
.unwrap_or(Ok(self.conf.gc_horizon))?;
let result = timeline.gc_iteration(horizon, true)?;
let repo = page_cache::get_repository_for_tenant(&tenantid)?;
let result = repo.gc_iteration(Some(timelineid), gc_horizon, true)?;
pgb.write_message_noflush(&BeMessage::RowDescription(&[
RowDescriptor {
name: b"n_relations",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor {
name: b"truncated",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor {
name: b"deleted",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor {
name: b"prep_deleted",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor {
name: b"slru_deleted",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor {
name: b"chkp_deleted",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor {
name: b"dropped",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor {
name: b"elapsed",
typoid: 20,
typlen: 8,
..Default::default()
},
RowDescriptor::int8_col(b"n_relations"),
RowDescriptor::int8_col(b"truncated"),
RowDescriptor::int8_col(b"deleted"),
RowDescriptor::int8_col(b"prep_deleted"),
RowDescriptor::int8_col(b"slru_deleted"),
RowDescriptor::int8_col(b"chkp_deleted"),
RowDescriptor::int8_col(b"control_deleted"),
RowDescriptor::int8_col(b"filenodemap_deleted"),
RowDescriptor::int8_col(b"dropped"),
RowDescriptor::int8_col(b"snapshot_relfiles_total"),
RowDescriptor::int8_col(b"snapshot_relfiles_needed_by_cutoff"),
RowDescriptor::int8_col(b"snapshot_relfiles_needed_by_branches"),
RowDescriptor::int8_col(b"snapshot_relfiles_not_updated"),
RowDescriptor::int8_col(b"snapshot_relfiles_removed"),
RowDescriptor::int8_col(b"snapshot_relfiles_dropped"),
RowDescriptor::int8_col(b"snapshot_nonrelfiles_total"),
RowDescriptor::int8_col(b"snapshot_nonrelfiles_needed_by_cutoff"),
RowDescriptor::int8_col(b"snapshot_nonrelfiles_needed_by_branches"),
RowDescriptor::int8_col(b"snapshot_nonrelfiles_not_updated"),
RowDescriptor::int8_col(b"snapshot_nonrelfiles_removed"),
RowDescriptor::int8_col(b"snapshot_nonrelfiles_dropped"),
RowDescriptor::int8_col(b"elapsed"),
]))?
.write_message_noflush(&BeMessage::DataRow(&[
Some(&result.n_relations.to_string().as_bytes()),
@@ -591,7 +714,46 @@ impl postgres_backend::Handler for PageServerHandler {
Some(&result.prep_deleted.to_string().as_bytes()),
Some(&result.slru_deleted.to_string().as_bytes()),
Some(&result.chkp_deleted.to_string().as_bytes()),
Some(&result.control_deleted.to_string().as_bytes()),
Some(&result.filenodemap_deleted.to_string().as_bytes()),
Some(&result.dropped.to_string().as_bytes()),
Some(&result.snapshot_relfiles_total.to_string().as_bytes()),
Some(
&result
.snapshot_relfiles_needed_by_cutoff
.to_string()
.as_bytes(),
),
Some(
&result
.snapshot_relfiles_needed_by_branches
.to_string()
.as_bytes(),
),
Some(&result.snapshot_relfiles_not_updated.to_string().as_bytes()),
Some(&result.snapshot_relfiles_removed.to_string().as_bytes()),
Some(&result.snapshot_relfiles_dropped.to_string().as_bytes()),
Some(&result.snapshot_nonrelfiles_total.to_string().as_bytes()),
Some(
&result
.snapshot_nonrelfiles_needed_by_cutoff
.to_string()
.as_bytes(),
),
Some(
&result
.snapshot_nonrelfiles_needed_by_branches
.to_string()
.as_bytes(),
),
Some(
&result
.snapshot_nonrelfiles_not_updated
.to_string()
.as_bytes(),
),
Some(&result.snapshot_nonrelfiles_removed.to_string().as_bytes()),
Some(&result.snapshot_nonrelfiles_dropped.to_string().as_bytes()),
Some(&result.elapsed.as_millis().to_string().as_bytes()),
]))?
.write_message(&BeMessage::CommandComplete(b"SELECT 1"))?;

pageserver/src/relish.rs (new file, 235 lines)

@@ -0,0 +1,235 @@
//!
//! Zenith stores PostgreSQL relations, and some other files, in the
//! repository. The relations (i.e. tables and indexes) take up most
//! of the space in a typical installation, while the other files are
//! small. We call each relation and other file that is stored in the
//! repository a "relish". It comes from "rel"-ish, as in "kind of a
//! rel", because it covers relations as well as other things that are
//! not relations, but are treated similarly for the purposes of the
//! storage layer.
//!
//! This source file contains the definition of the RelishTag struct,
//! which uniquely identifies a relish.
//!
//! Relishes come in two flavors: blocky and non-blocky. Relations and
//! SLRUs are blocky, that is, they are divided into 8k blocks, and
//! the repository tracks their size. Other relishes are non-blocky:
//! the content of the whole relish is stored as one blob. Block
//! number must be passed as 0 for all operations on a non-blocky
//! relish. The one "block" that you store in a non-blocky relish can
//! have arbitrary size, but it is expected to be small, or you
//! will have performance issues.
//!
//! All relishes are versioned by LSN in the repository.
//!
use serde::{Deserialize, Serialize};
use std::fmt;
use postgres_ffi::relfile_utils::forknumber_to_name;
use postgres_ffi::{Oid, TransactionId};
///
/// RelishTag identifies one relish.
///
#[derive(Debug, Clone, Copy, Hash, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum RelishTag {
// Relations correspond to PostgreSQL relation forks. Each
// PostgreSQL relation fork is considered a separate relish.
Relation(RelTag),
// SLRUs include pg_clog, pg_multixact/members, and
// pg_multixact/offsets. There are other SLRUs in PostgreSQL, but
// they don't need to be stored permanently (e.g. pg_subtrans),
// or we do not support them in zenith yet (pg_commit_ts).
//
// These are currently never requested directly by the compute
// nodes, although in principle that would be possible. However,
// when a new compute node is created, these are included in the
// tarball that we send to the compute node to initialize the
// PostgreSQL data directory.
//
// Each SLRU segment in PostgreSQL is considered a separate
// relish. For example, pg_clog/0000, pg_clog/0001, and so forth.
//
// SLRU segments are divided into blocks, like relations.
Slru { slru: SlruKind, segno: u32 },
// Miscellaneous other files that need to be included in the
// tarball at compute node creation. These are non-blocky, and are
// expected to be small.
//
// FileNodeMap represents PostgreSQL's 'pg_filenode.map'
// files. They are needed to map catalog table OIDs to filenode
// numbers. Usually the mapping is done by looking up a relation's
// 'relfilenode' field in the 'pg_class' system table, but that
// doesn't work for 'pg_class' itself and a few other such system
// relations. See PostgreSQL relmapper.c for details.
//
// Each database has a map file for its local mapped catalogs,
// and there is a separate map file for shared catalogs.
//
// These files are always 512 bytes long (although we don't check
// or care about that in the page server).
//
FileNodeMap { spcnode: Oid, dbnode: Oid },
//
// State files for prepared transactions (e.g pg_twophase/1234)
//
TwoPhase { xid: TransactionId },
// The control file, stored in global/pg_control
ControlFile,
// Special entry that represents PostgreSQL checkpoint. It doesn't
// correspond to any physical file in PostgreSQL, but we use it
// to track fields needed to restore the checkpoint data in the
// control file, when a compute node is created.
Checkpoint,
}
impl RelishTag {
pub const fn is_blocky(&self) -> bool {
match self {
// These relishes work with blocks
RelishTag::Relation(_) | RelishTag::Slru { slru: _, segno: _ } => true,
// and these don't
RelishTag::FileNodeMap {
spcnode: _,
dbnode: _,
}
| RelishTag::TwoPhase { xid: _ }
| RelishTag::ControlFile
| RelishTag::Checkpoint => false,
}
}
// Physical relishes represent files and use
// RelationSizeEntry to track existing and dropped files.
// They can be both blocky and non-blocky.
pub const fn is_physical(&self) -> bool {
match self {
// These relishes represent physical files
RelishTag::Relation(_)
| RelishTag::Slru { .. }
| RelishTag::FileNodeMap { .. }
| RelishTag::TwoPhase { .. } => true,
// and these don't
RelishTag::ControlFile | RelishTag::Checkpoint => false,
}
}
// convenience function to check if this relish is a normal relation.
pub const fn is_relation(&self) -> bool {
if let RelishTag::Relation(_) = self {
true
} else {
false
}
}
}
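A brief illustrative check of the classification helpers above; the OID and xid values are arbitrary, chosen only for the example.
// Not part of the patch; shows how the helpers classify a few tags.
let rel = RelishTag::Relation(RelTag { forknum: 0, spcnode: 1663, dbnode: 12345, relnode: 16384 });
assert!(rel.is_blocky() && rel.is_physical() && rel.is_relation());
assert!(RelishTag::TwoPhase { xid: 1234 }.is_physical());
assert!(!RelishTag::Checkpoint.is_blocky() && !RelishTag::Checkpoint.is_physical());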
///
/// Relation data file segment id throughout the Postgres cluster.
///
/// Every data file in Postgres is uniquely identified by 4 numbers:
/// - relation id / node (`relnode`)
/// - database id (`dbnode`)
/// - tablespace id (`spcnode`), in short this is a unique id of a separate
/// directory to store data files.
/// - forknumber (`forknum`) is used to split different kinds of data of the same relation
/// between some set of files (`relnode`, `relnode_fsm`, `relnode_vm`).
///
/// In native Postgres code `RelFileNode` structure and individual `ForkNumber` value
/// are used for the same purpose.
/// [See more related comments here](https://github.com/postgres/postgres/blob/99c5852e20a0987eca1c38ba0c09329d4076b6a0/src/include/storage/relfilenode.h#L57).
///
#[derive(Debug, PartialEq, Eq, PartialOrd, Hash, Ord, Clone, Copy, Serialize, Deserialize)]
pub struct RelTag {
pub forknum: u8,
pub spcnode: Oid,
pub dbnode: Oid,
pub relnode: Oid,
}
/// Display RelTag in the same format that's used in most PostgreSQL debug messages:
///
/// <spcnode>/<dbnode>/<relnode>[_fsm|_vm|_init]
///
impl fmt::Display for RelTag {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if let Some(forkname) = forknumber_to_name(self.forknum) {
write!(
f,
"{}/{}/{}_{}",
self.spcnode, self.dbnode, self.relnode, forkname
)
} else {
write!(f, "{}/{}/{}", self.spcnode, self.dbnode, self.relnode)
}
}
}
/// Display RelishTag in a human-readable format. For plain relations this is the
/// same RelTag format as above:
///
/// <spcnode>/<dbnode>/<relnode>[_fsm|_vm|_init]
///
impl fmt::Display for RelishTag {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
RelishTag::Relation(rel) => rel.fmt(f),
RelishTag::Slru { slru, segno } => {
// e.g. pg_clog/0001
write!(f, "{}/{:04X}", slru.to_str(), segno)
}
RelishTag::FileNodeMap { spcnode, dbnode } => {
write!(f, "relmapper file for spc {} db {}", spcnode, dbnode)
}
RelishTag::TwoPhase { xid } => {
write!(f, "pg_twophase/{:08X}", xid)
}
RelishTag::ControlFile => {
write!(f, "control file")
}
RelishTag::Checkpoint => {
write!(f, "checkpoint")
}
}
}
}
///
/// Non-relation transaction status files (clog (a.k.a. pg_xact) and
/// pg_multixact) in Postgres are handled by SLRU (Simple LRU) buffer,
/// hence the name.
///
/// These files are global for a postgres instance.
///
/// These files are divided into segments, which are divided into
/// pages of the same BLCKSZ as used for relation files.
///
#[derive(Debug, Clone, Copy, Hash, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum SlruKind {
Clog,
MultiXactMembers,
MultiXactOffsets,
}
impl SlruKind {
pub fn to_str(&self) -> &'static str {
match self {
Self::Clog => "pg_xact",
Self::MultiXactMembers => "pg_multixact/members",
Self::MultiXactOffsets => "pg_multixact/offsets",
}
}
}
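Combined with the Display impl above, an SLRU relish formats as its PostgreSQL directory name plus a four-digit hex segment number; an illustrative check (not from the patch):
// Matches the "{}/{:04X}" formatting in the Display impl for RelishTag::Slru.
let tag = RelishTag::Slru { slru: SlruKind::Clog, segno: 1 };
assert_eq!(tag.to_string(), "pg_xact/0001");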
pub const FIRST_NONREL_RELISH_TAG: RelishTag = RelishTag::Slru {
slru: SlruKind::Clog,
segno: 0,
};


@@ -1,18 +1,15 @@
use crate::object_key::*;
use crate::waldecoder::TransactionId;
use crate::ZTimelineId;
use crate::relish::*;
use anyhow::Result;
use bytes::{Buf, BufMut, Bytes, BytesMut};
use postgres_ffi::nonrelfile_utils::transaction_id_get_status;
use postgres_ffi::pg_constants;
use postgres_ffi::relfile_utils::forknumber_to_name;
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::fmt;
use std::iter::Iterator;
use std::ops::AddAssign;
use std::sync::Arc;
use std::time::Duration;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTimelineId;
///
/// A repository corresponds to one .zenith directory. One repository holds multiple
@@ -31,6 +28,27 @@ pub trait Repository: Send + Sync {
/// Branch a timeline
fn branch_timeline(&self, src: ZTimelineId, dst: ZTimelineId, start_lsn: Lsn) -> Result<()>;
/// Perform one garbage collection iteration.
/// Garbage collection is periodically performed by the GC thread,
/// but it can also be explicitly requested through the page server API.
///
/// 'timelineid' specifies the timeline to GC, or None for all.
/// `horizon` specifies the delta from the last LSN within which all object versions are preserved (the PITR interval).
/// The `compact` parameter is used to force compaction of the storage.
/// Some storage implementations are based on an LSM tree and require periodic merging (compaction).
/// Usually the storage implementation decides itself when compaction should be performed,
/// but for GC tests it may be useful to force compaction right after a GC iteration completes,
/// to make sure that all detected garbage is removed.
/// So right now `compact` is set to true when GC is explicitly requested through the page server API,
/// and set to false in the GC thread, which repeats GC iterations in a loop.
fn gc_iteration(
&self,
timelineid: Option<ZTimelineId>,
horizon: u64,
compact: bool,
) -> Result<GcResult>;
// TODO get timelines?
//fn get_stats(&self) -> RepositoryStats;
}
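For reference, the do_gc handler in page_service.rs shown earlier calls this as a one-off iteration with compaction forced, roughly:
// Explicit GC request for a single timeline, forcing compaction:
let repo = page_cache::get_repository_for_tenant(&tenantid)?;
let result = repo.gc_iteration(Some(timelineid), gc_horizon, true)?;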
@@ -39,39 +57,85 @@ pub trait Repository: Send + Sync {
///
#[derive(Default)]
pub struct GcResult {
// FIXME: These counters make sense for the ObjectRepository. They are not used
// by the LayeredRepository.
pub n_relations: u64,
pub inspected: u64,
pub truncated: u64,
pub deleted: u64,
pub prep_deleted: u64, // 2PC prepare
pub slru_deleted: u64, // SLRU (clog, multixact)
pub chkp_deleted: u64, // Checkpoints
pub prep_deleted: u64, // RelishTag::Twophase
pub slru_deleted: u64, // RelishTag::Slru
pub chkp_deleted: u64, // RelishTag::Checkpoint
pub control_deleted: u64, // RelishTag::ControlFile
pub filenodemap_deleted: u64, // RelishTag::FileNodeMap
pub dropped: u64,
// These are used for the LayeredRepository instead
pub snapshot_relfiles_total: u64,
pub snapshot_relfiles_needed_by_cutoff: u64,
pub snapshot_relfiles_needed_by_branches: u64,
pub snapshot_relfiles_not_updated: u64,
pub snapshot_relfiles_removed: u64, // # of snapshot files removed because they have been made obsolete by newer snapshot files.
pub snapshot_relfiles_dropped: u64, // # of snapshot files removed because the relation was dropped
pub snapshot_nonrelfiles_total: u64,
pub snapshot_nonrelfiles_needed_by_cutoff: u64,
pub snapshot_nonrelfiles_needed_by_branches: u64,
pub snapshot_nonrelfiles_not_updated: u64,
pub snapshot_nonrelfiles_removed: u64, // # of snapshot files removed because they have been made obsolete by newer snapshot files.
pub snapshot_nonrelfiles_dropped: u64, // # of snapshot files removed because the relation was dropped
pub elapsed: Duration,
}
impl AddAssign for GcResult {
fn add_assign(&mut self, other: Self) {
self.n_relations += other.n_relations;
self.truncated += other.truncated;
self.deleted += other.deleted;
self.dropped += other.dropped;
self.snapshot_relfiles_total += other.snapshot_relfiles_total;
self.snapshot_relfiles_needed_by_cutoff += other.snapshot_relfiles_needed_by_cutoff;
self.snapshot_relfiles_needed_by_branches += other.snapshot_relfiles_needed_by_branches;
self.snapshot_relfiles_not_updated += other.snapshot_relfiles_not_updated;
self.snapshot_relfiles_removed += other.snapshot_relfiles_removed;
self.snapshot_relfiles_dropped += other.snapshot_relfiles_dropped;
self.snapshot_nonrelfiles_total += other.snapshot_nonrelfiles_total;
self.snapshot_nonrelfiles_needed_by_cutoff += other.snapshot_nonrelfiles_needed_by_cutoff;
self.snapshot_nonrelfiles_needed_by_branches +=
other.snapshot_nonrelfiles_needed_by_branches;
self.snapshot_nonrelfiles_not_updated += other.snapshot_nonrelfiles_not_updated;
self.snapshot_nonrelfiles_removed += other.snapshot_nonrelfiles_removed;
self.snapshot_nonrelfiles_dropped += other.snapshot_nonrelfiles_dropped;
self.elapsed += other.elapsed;
}
}
pub trait Timeline: Send + Sync {
//------------------------------------------------------------------------------
// Public GET functions
//------------------------------------------------------------------------------
/// Look up given page in the cache.
fn get_page_at_lsn(&self, tag: ObjectTag, lsn: Lsn) -> Result<Bytes>;
fn get_page_at_lsn(&self, tag: RelishTag, blknum: u32, lsn: Lsn) -> Result<Bytes>;
/// Look up given page in the cache.
fn get_page_at_lsn_nowait(&self, tag: ObjectTag, lsn: Lsn, materialize: bool) -> Result<Bytes>;
fn get_page_at_lsn_nowait(&self, tag: RelishTag, blknum: u32, lsn: Lsn) -> Result<Bytes>;
/// Get size of relation
fn get_rel_size(&self, tag: RelTag, lsn: Lsn) -> Result<u32>;
/// Get size of a relish
fn get_relish_size(&self, tag: RelishTag, lsn: Lsn) -> Result<Option<u32>>;
/// Does relation exist?
fn get_rel_exists(&self, tag: RelTag, lsn: Lsn) -> Result<bool>;
fn get_rel_exists(&self, tag: RelishTag, lsn: Lsn) -> Result<bool>;
/// Get a list of all distinct relations in given tablespace and database.
fn list_rels(&self, spcnode: u32, dbnode: u32, lsn: Lsn) -> Result<HashSet<RelTag>>;
/// Get a list of non-relational objects
fn list_nonrels<'a>(&'a self, lsn: Lsn) -> Result<Box<dyn Iterator<Item = ObjectTag> + 'a>>;
fn list_nonrels<'a>(&'a self, lsn: Lsn) -> Result<HashSet<RelishTag>>;
//------------------------------------------------------------------------------
// Public PUT functions, to update the repository with new page versions.
@@ -83,27 +147,28 @@ pub trait Timeline: Send + Sync {
///
/// This will implicitly extend the relation, if the page is beyond the
/// current end-of-file.
fn put_wal_record(&self, tag: ObjectTag, rec: WALRecord) -> Result<()>;
fn put_wal_record(&self, tag: RelishTag, blknum: u32, rec: WALRecord) -> Result<()>;
/// Like put_wal_record, but with ready-made image of the page.
fn put_page_image(
&self,
tag: RelishTag,
blknum: u32,
lsn: Lsn,
img: Bytes,
update_meta: bool,
) -> Result<()>;
/// Truncate relation
fn put_truncation(&self, rel: RelishTag, lsn: Lsn, nblocks: u32) -> Result<()>;
/// Unlink relish.
/// This method is used for marking dropped relations and truncated SLRU segments
fn put_unlink(&self, tag: RelishTag, lsn: Lsn) -> Result<()>;
/// Put raw data
fn put_raw_data(&self, tag: ObjectTag, lsn: Lsn, data: &[u8]) -> Result<()>;
/// Like put_wal_record, but with ready-made image of the page.
fn put_page_image(&self, tag: ObjectTag, lsn: Lsn, img: Bytes, update_meta: bool)
-> Result<()>;
/// Truncate relation
fn put_truncation(&self, rel: RelTag, lsn: Lsn, nblocks: u32) -> Result<()>;
/// Unlink relation. This method is used for marking dropped relations.
fn put_unlink(&self, tag: RelTag, lsn: Lsn) -> Result<()>;
/// Truncate SLRU segment
fn put_slru_truncate(&self, tag: ObjectTag, lsn: Lsn) -> Result<()>;
// Get the object tag greater than or equal to the specified one
fn get_next_tag(&self, tag: ObjectTag) -> Result<Option<ObjectTag>>;
/// Remember that all WAL before the given LSN has been processed.
///
/// The WAL receiver calls this after the put_* functions, to indicate that
@@ -138,49 +203,32 @@ pub trait Timeline: Send + Sync {
/// Relation size is increased implicitly and decreased with Truncate updates.
// TODO ordering guarantee?
fn history<'a>(&'a self) -> Result<Box<dyn History + 'a>>;
/// Perform one garbage collection iteration.
/// Garbage collection is periodically performed by GC thread,
/// but it can be explicitly requested through page server API.
///
/// `horizon` specifies delta from last LSN to preserve all object versions (PITR interval).
/// The `compact` parameter is used to force compaction of the storage.
/// Some storage implementations are based on an LSM tree and require periodic merging (compaction).
/// Usually the storage implementation determines by itself when compaction should be performed.
/// But for GC tests it may be useful to force compaction right after a GC iteration completes,
/// to make sure that all detected garbage is removed.
/// So right now `compact` is set to true when GC is explicitly requested through the page server API,
/// and is set to false in GC threads, which repeat GC iterations in an infinite loop.
fn gc_iteration(&self, horizon: u64, compact: bool) -> Result<GcResult>;
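// For example (illustration only, not part of this change): the background GC thread
// described above would call `gc_iteration(horizon, false)` on each pass of its loop,
// while an explicit request through the page server API would call
// `gc_iteration(horizon, true)` so that all detected garbage is compacted away immediately.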
// Check transaction status
fn get_tx_status(&self, xid: TransactionId, lsn: Lsn) -> anyhow::Result<u8> {
let blknum = xid / pg_constants::CLOG_XACTS_PER_PAGE;
let tag = ObjectTag::Clog(SlruBufferTag { blknum });
let clog_page = self.get_page_at_lsn(tag, lsn)?;
let status = transaction_id_get_status(xid, &clog_page[..]);
Ok(status)
}
}
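// A minimal sketch (not part of this change) of the CLOG addressing used by
// `get_tx_status` above, expressed in terms of the RelishTag::Slru scheme that the
// rest of this change introduces: a transaction id maps to a CLOG page, which in
// turn maps to an SLRU segment and a page within that segment. The helper name is
// hypothetical, and it assumes SlruKind is in scope in this module.
fn clog_location_for_xid(xid: TransactionId) -> (RelishTag, u32) {
    let pageno = xid / pg_constants::CLOG_XACTS_PER_PAGE;
    let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
    let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
    (
        RelishTag::Slru {
            slru: SlruKind::Clog,
            segno,
        },
        rpageno,
    )
}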
pub trait History: Iterator<Item = Result<RelationUpdate>> {
pub trait History: Iterator<Item = Result<Modification>> {
/// The last_valid_lsn at the time of history() call.
fn lsn(&self) -> Lsn;
}
//
// Structure representing any update operation on object storage.
// It is used to copy object storage content in the PUSH method.
//
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct RelationUpdate {
pub rel: RelTag,
pub struct Modification {
pub tag: ObjectTag,
pub lsn: Lsn,
pub update: Update,
pub data: Vec<u8>,
}
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub enum Update {
Page { blknum: u32, img: Bytes },
WALRecord { blknum: u32, rec: WALRecord },
Truncate { n_blocks: u32 },
Unlink,
impl Modification {
pub fn new(entry: (ObjectTag, Lsn, Vec<u8>)) -> Modification {
Modification {
tag: entry.0,
lsn: entry.1,
data: entry.2,
}
}
}
#[derive(Clone)]
@@ -191,76 +239,6 @@ pub struct RepositoryStats {
pub num_getpage_requests: Lsn,
}
///
/// Relation data file segment id throughout the Postgres cluster.
///
/// Every data file in Postgres is uniquely identified by 4 numbers:
/// - relation id / node (`relnode`)
/// - database id (`dbnode`)
/// - tablespace id (`spcnode`), in short this is a unique id of a separate
/// directory to store data files.
/// - forknumber (`forknum`) is used to split different kinds of data of the same relation
/// between some set of files (`relnode`, `relnode_fsm`, `relnode_vm`).
///
/// In native Postgres code `RelFileNode` structure and individual `ForkNumber` value
/// are used for the same purpose.
/// [See more related comments here](https://github.com/postgres/postgres/blob/99c5852e20a0987eca1c38ba0c09329d4076b6a0/src/include/storage/relfilenode.h#L57).
///
#[derive(Debug, PartialEq, Eq, PartialOrd, Hash, Ord, Clone, Copy, Serialize, Deserialize)]
pub struct RelTag {
pub forknum: u8,
pub spcnode: u32,
pub dbnode: u32,
pub relnode: u32,
}
impl RelTag {
pub const ZEROED: Self = Self {
forknum: 0,
spcnode: 0,
dbnode: 0,
relnode: 0,
};
}
/// Display RelTag in the same format that's used in most PostgreSQL debug messages:
///
/// <spcnode>/<dbnode>/<relnode>[_fsm|_vm|_init]
///
impl fmt::Display for RelTag {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if let Some(forkname) = forknumber_to_name(self.forknum) {
write!(
f,
"{}/{}/{}_{}",
self.spcnode, self.dbnode, self.relnode, forkname
)
} else {
write!(f, "{}/{}/{}", self.spcnode, self.dbnode, self.relnode)
}
}
}
///
/// `RelTag` + block number (`blknum`) gives us a unique id of the page in the cluster.
/// This is used as a part of the key inside key-value storage (RocksDB currently).
///
/// In Postgres `BufferTag` structure is used for exactly the same purpose.
/// [See more related comments here](https://github.com/postgres/postgres/blob/99c5852e20a0987eca1c38ba0c09329d4076b6a0/src/include/storage/buf_internals.h#L91).
///
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Serialize, Deserialize)]
pub struct BufferTag {
pub rel: RelTag,
pub blknum: u32,
}
impl BufferTag {
pub const ZEROED: Self = Self {
rel: RelTag::ZEROED,
blknum: 0,
};
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct WALRecord {
pub lsn: Lsn, // LSN at the *end* of the record
@@ -301,39 +279,34 @@ impl WALRecord {
#[cfg(test)]
mod tests {
use super::*;
use crate::layered_repository::LayeredRepository;
use crate::object_repository::ObjectRepository;
use crate::object_repository::{ObjectValue, PageEntry, RelationSizeEntry};
use crate::rocksdb_storage::RocksObjectStore;
use crate::walredo::{WalRedoError, WalRedoManager};
use crate::PageServerConf;
use crate::{PageServerConf, RepositoryFormat};
use postgres_ffi::pg_constants;
use std::fs;
use std::path::PathBuf;
use std::str::FromStr;
use std::time::Duration;
use zenith_utils::bin_ser::BeSer;
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
/// Arbitrary relation tag, for testing.
const TESTREL_A: RelTag = RelTag {
const TESTREL_A: RelishTag = RelishTag::Relation(RelTag {
spcnode: 0,
dbnode: 111,
relnode: 1000,
forknum: 0,
};
const TESTREL_B: RelTag = RelTag {
});
const TESTREL_B: RelishTag = RelishTag::Relation(RelTag {
spcnode: 0,
dbnode: 111,
relnode: 1001,
forknum: 0,
};
/// Convenience function to create a BufferTag for testing.
/// Helps to keep the tests shorter.
#[allow(non_snake_case)]
fn TEST_BUF(blknum: u32) -> ObjectTag {
ObjectTag::RelationBuffer(BufferTag {
rel: TESTREL_A,
blknum,
})
}
});
/// Convenience function to create a page image with given string as the only content
#[allow(non_snake_case)]
@@ -345,93 +318,127 @@ mod tests {
buf.freeze()
}
fn get_test_repo(test_name: &str) -> Result<Box<dyn Repository>> {
static ZERO_PAGE: Bytes = Bytes::from_static(&[0u8; 8192]);
fn get_test_repo(
test_name: &str,
repository_format: RepositoryFormat,
) -> Result<Box<dyn Repository>> {
let repo_dir = PathBuf::from(format!("../tmp_check/test_{}", test_name));
let _ = fs::remove_dir_all(&repo_dir);
fs::create_dir_all(&repo_dir)?;
fs::create_dir_all(&repo_dir.join("timelines"))?;
let conf = PageServerConf {
daemonize: false,
interactive: false,
materialize: false,
gc_horizon: 64 * 1024 * 1024,
gc_period: Duration::from_secs(10),
wal_redoers: 1,
listen_addr: "127.0.0.1:5430".to_string(),
http_endpoint_addr: "127.0.0.1:9898".to_string(),
superuser: "zenith_admin".to_string(),
workdir: repo_dir,
pg_distrib_dir: "".into(),
auth_type: AuthType::Trust,
auth_validation_public_key_path: None,
repository_format,
};
// Make a static copy of the config. This can never be free'd, but that's
// OK in a test.
let conf: &'static PageServerConf = Box::leak(Box::new(conf));
let obj_store = RocksObjectStore::create(conf)?;
let tenantid = ZTenantId::generate();
fs::create_dir_all(conf.tenant_path(&tenantid)).unwrap();
let walredo_mgr = TestRedoManager {};
let repo = ObjectRepository::new(conf, Arc::new(obj_store), Arc::new(walredo_mgr));
let repo: Box<dyn Repository + Sync + Send> = match conf.repository_format {
RepositoryFormat::Layered => Box::new(LayeredRepository::new(
conf,
Arc::new(walredo_mgr),
tenantid,
)),
RepositoryFormat::RocksDb => {
let obj_store = RocksObjectStore::create(conf, &tenantid)?;
Ok(Box::new(repo))
Box::new(ObjectRepository::new(
conf,
Arc::new(obj_store),
Arc::new(walredo_mgr),
tenantid,
))
}
};
Ok(repo)
}
/// Test get_relsize() and truncation.
#[test]
fn test_relsize() -> Result<()> {
fn test_relsize_rocksdb() -> Result<()> {
let repo = get_test_repo("test_relsize_rocksdb", RepositoryFormat::RocksDb)?;
test_relsize(&*repo)
}
#[test]
fn test_relsize_layered() -> Result<()> {
let repo = get_test_repo("test_relsize_layered", RepositoryFormat::Layered)?;
test_relsize(&*repo)
}
fn test_relsize(repo: &dyn Repository) -> Result<()> {
// get_timeline() with non-existent timeline id should fail
//repo.get_timeline("11223344556677881122334455667788");
// Create timeline to work on
let repo = get_test_repo("test_relsize")?;
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
tline.init_valid_lsn(Lsn(1));
tline.put_page_image(TEST_BUF(0), Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TEST_BUF(0), Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TEST_BUF(0), Lsn(3), TEST_IMG("foo blk 0 at 3"), true)?;
tline.put_page_image(TEST_BUF(1), Lsn(4), TEST_IMG("foo blk 1 at 4"), true)?;
tline.put_page_image(TEST_BUF(2), Lsn(5), TEST_IMG("foo blk 2 at 5"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(3), TEST_IMG("foo blk 0 at 3"), true)?;
tline.put_page_image(TESTREL_A, 1, Lsn(4), TEST_IMG("foo blk 1 at 4"), true)?;
tline.put_page_image(TESTREL_A, 2, Lsn(5), TEST_IMG("foo blk 2 at 5"), true)?;
tline.advance_last_valid_lsn(Lsn(5));
// The relation was created at LSN 2, not visible at LSN 1 yet.
assert_eq!(tline.get_rel_exists(TESTREL_A, Lsn(1))?, false);
assert!(tline.get_rel_size(TESTREL_A, Lsn(1)).is_err());
assert!(tline.get_relish_size(TESTREL_A, Lsn(1))?.is_none());
assert_eq!(tline.get_rel_exists(TESTREL_A, Lsn(2))?, true);
assert_eq!(tline.get_rel_size(TESTREL_A, Lsn(2))?, 1);
assert_eq!(tline.get_rel_size(TESTREL_A, Lsn(5))?, 3);
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(2))?.unwrap(), 1);
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(5))?.unwrap(), 3);
// Check page contents at each LSN
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(0), Lsn(2))?,
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(2))?,
TEST_IMG("foo blk 0 at 2")
);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(0), Lsn(3))?,
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(3))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(0), Lsn(4))?,
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(4))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(1), Lsn(4))?,
tline.get_page_at_lsn(TESTREL_A, 1, Lsn(4))?,
TEST_IMG("foo blk 1 at 4")
);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(0), Lsn(5))?,
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(5))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(1), Lsn(5))?,
tline.get_page_at_lsn(TESTREL_A, 1, Lsn(5))?,
TEST_IMG("foo blk 1 at 4")
);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(2), Lsn(5))?,
tline.get_page_at_lsn(TESTREL_A, 2, Lsn(5))?,
TEST_IMG("foo blk 2 at 5")
);
@@ -440,20 +447,20 @@ mod tests {
tline.advance_last_valid_lsn(Lsn(6));
// Check reported size and contents after truncation
assert_eq!(tline.get_rel_size(TESTREL_A, Lsn(6))?, 2);
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(6))?.unwrap(), 2);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(0), Lsn(6))?,
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(6))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(1), Lsn(6))?,
tline.get_page_at_lsn(TESTREL_A, 1, Lsn(6))?,
TEST_IMG("foo blk 1 at 4")
);
// should still see the truncated block with older LSN
assert_eq!(tline.get_rel_size(TESTREL_A, Lsn(5))?, 3);
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(5))?.unwrap(), 3);
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(2), Lsn(5))?,
tline.get_page_at_lsn(TESTREL_A, 2, Lsn(5))?,
TEST_IMG("foo blk 2 at 5")
);
@@ -466,23 +473,33 @@ mod tests {
/// This isn't very interesting with the RocksDb implementation, as we don't pay
/// any attention to Postgres segment boundaries there.
#[test]
fn test_large_rel() -> Result<()> {
let repo = get_test_repo("test_large_rel")?;
fn test_large_rel_rocksdb() -> Result<()> {
let repo = get_test_repo("test_large_rel_rocksdb", RepositoryFormat::RocksDb)?;
test_large_rel(&*repo)
}
#[test]
fn test_large_rel_layered() -> Result<()> {
let repo = get_test_repo("test_large_rel_layered", RepositoryFormat::Layered)?;
test_large_rel(&*repo)
}
fn test_large_rel(repo: &dyn Repository) -> Result<()> {
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
tline.init_valid_lsn(Lsn(1));
let mut lsn = 0;
for i in 0..pg_constants::RELSEG_SIZE + 1 {
let img = TEST_IMG(&format!("foo blk {} at {}", i, Lsn(lsn)));
let mut lsn = 1;
for blknum in 0..pg_constants::RELSEG_SIZE + 1 {
let img = TEST_IMG(&format!("foo blk {} at {}", blknum, Lsn(lsn)));
lsn += 1;
tline.put_page_image(TEST_BUF(i as u32), Lsn(lsn), img, true)?;
tline.put_page_image(TESTREL_A, blknum as u32, Lsn(lsn), img, true)?;
}
tline.advance_last_valid_lsn(Lsn(lsn));
assert_eq!(
tline.get_rel_size(TESTREL_A, Lsn(lsn))?,
tline.get_relish_size(TESTREL_A, Lsn(lsn))?.unwrap(),
pg_constants::RELSEG_SIZE + 1
);
@@ -491,7 +508,7 @@ mod tests {
tline.put_truncation(TESTREL_A, Lsn(lsn), pg_constants::RELSEG_SIZE)?;
tline.advance_last_valid_lsn(Lsn(lsn));
assert_eq!(
tline.get_rel_size(TESTREL_A, Lsn(lsn))?,
tline.get_relish_size(TESTREL_A, Lsn(lsn))?.unwrap(),
pg_constants::RELSEG_SIZE
);
@@ -500,34 +517,71 @@ mod tests {
tline.put_truncation(TESTREL_A, Lsn(lsn), pg_constants::RELSEG_SIZE - 1)?;
tline.advance_last_valid_lsn(Lsn(lsn));
assert_eq!(
tline.get_rel_size(TESTREL_A, Lsn(lsn))?,
tline.get_relish_size(TESTREL_A, Lsn(lsn))?.unwrap(),
pg_constants::RELSEG_SIZE - 1
);
// Truncate to 3000, and then truncate all the way down to 0, one block at a time
// This tests the behavior at segment boundaries
let mut size: i32 = 3000;
while size >= 0 {
lsn += 1;
tline.put_truncation(TESTREL_A, Lsn(lsn), size as u32)?;
tline.advance_last_valid_lsn(Lsn(lsn));
assert_eq!(
tline.get_relish_size(TESTREL_A, Lsn(lsn))?.unwrap(),
size as u32
);
size -= 1;
}
Ok(())
}
fn skip_nonrel_objects<'a>(
snapshot: Box<dyn History + 'a>,
) -> Result<impl Iterator<Item = <dyn History as Iterator>::Item> + 'a> {
Ok(snapshot.skip_while(|r| match r {
Ok(m) => match m.tag {
ObjectTag::RelationMetadata(_) => false,
_ => true,
},
_ => panic!("Iteration error"),
}))
}
#[test]
fn test_branch_rocksdb() -> Result<()> {
let repo = get_test_repo("test_branch_rocksdb", RepositoryFormat::RocksDb)?;
test_branch(&*repo)
}
#[test]
fn test_branch_layered() -> Result<()> {
let repo = get_test_repo("test_branch_layered", RepositoryFormat::Layered)?;
test_branch(&*repo)
}
///
/// Test branch creation
///
#[test]
fn test_branch() -> Result<()> {
let repo = get_test_repo("test_branch")?;
fn test_branch(repo: &dyn Repository) -> Result<()> {
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
// Import initial dummy checkpoint record, otherwise the get_timeline() call
// after branching fails below
tline.put_page_image(RelishTag::Checkpoint, 0, Lsn(1), ZERO_PAGE.clone(), false)?;
// Create a relation on the timeline
tline.init_valid_lsn(Lsn(1));
tline.put_page_image(TEST_BUF(0), Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TEST_BUF(0), Lsn(3), TEST_IMG("foo blk 0 at 3"), true)?;
tline.put_page_image(TEST_BUF(0), Lsn(4), TEST_IMG("foo blk 0 at 4"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(3), TEST_IMG("foo blk 0 at 3"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(4), TEST_IMG("foo blk 0 at 4"), true)?;
// Create another relation
let buftag2 = ObjectTag::RelationBuffer(BufferTag {
rel: TESTREL_B,
blknum: 0,
});
tline.put_page_image(buftag2, Lsn(2), TEST_IMG("foobar blk 0 at 2"), true)?;
tline.put_page_image(TESTREL_B, 0, Lsn(2), TEST_IMG("foobar blk 0 at 2"), true)?;
tline.advance_last_valid_lsn(Lsn(4));
@@ -536,78 +590,110 @@ mod tests {
repo.branch_timeline(timelineid, newtimelineid, Lsn(3))?;
let newtline = repo.get_timeline(newtimelineid)?;
newtline.put_page_image(TEST_BUF(0), Lsn(4), TEST_IMG("bar blk 0 at 4"), true)?;
newtline.put_page_image(TESTREL_A, 0, Lsn(4), TEST_IMG("bar blk 0 at 4"), true)?;
newtline.advance_last_valid_lsn(Lsn(4));
// Check page contents on both branches
assert_eq!(
tline.get_page_at_lsn(TEST_BUF(0), Lsn(4))?,
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(4))?,
TEST_IMG("foo blk 0 at 4")
);
assert_eq!(
newtline.get_page_at_lsn(TEST_BUF(0), Lsn(4))?,
newtline.get_page_at_lsn(TESTREL_A, 0, Lsn(4))?,
TEST_IMG("bar blk 0 at 4")
);
assert_eq!(
newtline.get_page_at_lsn(buftag2, Lsn(4))?,
newtline.get_page_at_lsn(TESTREL_B, 0, Lsn(4))?,
TEST_IMG("foobar blk 0 at 2")
);
assert_eq!(newtline.get_rel_size(TESTREL_B, Lsn(4))?, 1);
assert_eq!(newtline.get_relish_size(TESTREL_B, Lsn(4))?.unwrap(), 1);
Ok(())
}
#[test]
fn test_history() -> Result<()> {
let repo = get_test_repo("test_snapshot")?;
fn test_history_rocksdb() -> Result<()> {
let repo = get_test_repo("test_history_rocksdb", RepositoryFormat::RocksDb)?;
test_history(&*repo)
}
#[test]
// TODO: This doesn't work with the layered storage; the functions needed for push/pull
// functionality haven't been implemented yet.
#[ignore]
fn test_history_layered() -> Result<()> {
let repo = get_test_repo("test_history_layered", RepositoryFormat::Layered)?;
test_history(&*repo)
}
fn test_history(repo: &dyn Repository) -> Result<()> {
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
let mut snapshot = tline.history()?;
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(0));
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(None, snapshot.next().transpose()?);
// add a page and advance the last valid LSN
let rel = TESTREL_A;
let tag = TEST_BUF(1);
tline.put_page_image(tag, Lsn(1), TEST_IMG("blk 1 @ lsn 1"), true)?;
tline.put_page_image(rel, 1, Lsn(1), TEST_IMG("blk 1 @ lsn 1"), true)?;
tline.advance_last_valid_lsn(Lsn(1));
let mut snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(1));
let expected_page = RelationUpdate {
rel: rel,
let expected_page = Modification {
tag: ObjectTag::Buffer(rel, 1),
lsn: Lsn(1),
update: Update::Page {
blknum: 1,
img: TEST_IMG("blk 1 @ lsn 1"),
},
data: ObjectValue::ser(&ObjectValue::Page(PageEntry::Page(TEST_IMG(
"blk 1 @ lsn 1",
))))?,
};
let expected_init_size = Modification {
tag: ObjectTag::RelationMetadata(rel),
lsn: Lsn(1),
data: ObjectValue::ser(&ObjectValue::RelationSize(RelationSizeEntry::Size(2)))?,
};
let expected_trunc_size = Modification {
tag: ObjectTag::RelationMetadata(rel),
lsn: Lsn(2),
data: ObjectValue::ser(&ObjectValue::RelationSize(RelationSizeEntry::Size(0)))?,
};
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(1));
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(
Some(&expected_init_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(Some(&expected_page), snapshot.next().transpose()?.as_ref());
assert_eq!(None, snapshot.next().transpose()?);
// truncate to zero, but don't advance the last valid LSN
tline.put_truncation(rel, Lsn(2), 0)?;
let mut snapshot = tline.history()?;
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(1));
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(
Some(&expected_init_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(Some(&expected_page), snapshot.next().transpose()?.as_ref());
assert_eq!(None, snapshot.next().transpose()?);
// advance the last valid LSN and the truncation should be observable
tline.advance_last_valid_lsn(Lsn(2));
let mut snapshot = tline.history()?;
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(2));
// TODO ordering not guaranteed by API. But currently it returns the
// truncation entry before the block data.
let expected_truncate = RelationUpdate {
rel: rel,
lsn: Lsn(2),
update: Update::Truncate { n_blocks: 0 },
};
assert_eq!(Some(expected_truncate), snapshot.next().transpose()?);
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(
Some(&expected_init_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(
Some(&expected_trunc_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(Some(&expected_page), snapshot.next().transpose()?.as_ref());
assert_eq!(None, snapshot.next().transpose()?);
@@ -620,14 +706,16 @@ mod tests {
impl WalRedoManager for TestRedoManager {
fn request_redo(
&self,
tag: ObjectTag,
rel: RelishTag,
blknum: u32,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
) -> Result<Bytes, WalRedoError> {
let s = format!(
"redo for {:?} to get to {}, with {} and {} records",
tag,
"redo for {} blk {} to get to {}, with {} and {} records",
rel,
blknum,
lsn,
if base_img.is_some() {
"base image"


@@ -3,7 +3,7 @@
//! zenith Timeline.
//!
use log::*;
use std::cmp::{max, min};
use std::cmp::min;
use std::fs;
use std::fs::File;
use std::io::Read;
@@ -14,42 +14,19 @@ use std::path::{Path, PathBuf};
use anyhow::Result;
use bytes::{Buf, Bytes};
use crate::object_key::*;
use crate::relish::*;
use crate::repository::*;
use crate::waldecoder::*;
use crate::PageServerConf;
use crate::ZTimelineId;
use postgres_ffi::relfile_utils::*;
use postgres_ffi::xlog_utils::*;
use postgres_ffi::{pg_constants, CheckPoint, ControlFileData};
use postgres_ffi::{Oid, TransactionId};
use zenith_utils::lsn::Lsn;
const MAX_MBR_BLKNO: u32 =
pg_constants::MAX_MULTIXACT_ID / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
///
/// Find latest snapshot in a timeline's 'snapshots' directory
///
pub fn find_latest_snapshot(_conf: &PageServerConf, timeline: ZTimelineId) -> Result<Lsn> {
let snapshotspath = format!("timelines/{}/snapshots", timeline);
let mut last_snapshot_lsn = Lsn(0);
for direntry in fs::read_dir(&snapshotspath).unwrap() {
let filename = direntry.unwrap().file_name();
if let Ok(lsn) = Lsn::from_filename(&filename) {
last_snapshot_lsn = max(lsn, last_snapshot_lsn);
} else {
error!("unrecognized file in snapshots directory: {:?}", filename);
}
}
if last_snapshot_lsn == Lsn(0) {
error!("could not find valid snapshot in {}", &snapshotspath);
// TODO return error?
}
Ok(last_snapshot_lsn)
}
const ZERO_PAGE: Bytes = Bytes::from_static(&[0u8; 8192]);
///
/// Import all relation data pages from local disk into the repository.
@@ -67,21 +44,21 @@ pub fn import_timeline_from_postgres_datadir(
// These special files appear in the snapshot, but are not needed by the page server
Some("pg_control") => {
import_nonrel_file(timeline, lsn, ObjectTag::ControlFile, &direntry.path())?;
import_nonrel_file(timeline, lsn, RelishTag::ControlFile, &direntry.path())?;
// Extract checkpoint record from pg_control and store is as separate object
let pg_control_bytes =
timeline.get_page_at_lsn_nowait(ObjectTag::ControlFile, lsn, false)?;
timeline.get_page_at_lsn_nowait(RelishTag::ControlFile, 0, lsn)?;
let pg_control = ControlFileData::decode(&pg_control_bytes)?;
let checkpoint_bytes = pg_control.checkPointCopy.encode();
timeline.put_page_image(ObjectTag::Checkpoint, lsn, checkpoint_bytes, false)?;
timeline.put_page_image(RelishTag::Checkpoint, 0, lsn, checkpoint_bytes, false)?;
}
Some("pg_filenode.map") => import_nonrel_file(
timeline,
lsn,
ObjectTag::FileNodeMap(DatabaseTag {
RelishTag::FileNodeMap {
spcnode: pg_constants::GLOBALTABLESPACE_OID,
dbnode: 0,
}),
},
&direntry.path(),
)?,
@@ -101,6 +78,11 @@ pub fn import_timeline_from_postgres_datadir(
for direntry in fs::read_dir(path.join("base"))? {
let direntry = direntry?;
//skip all temporary files
if direntry.file_name().to_str().unwrap() == "pgsql_tmp" {
continue;
}
let dboid = direntry.file_name().to_str().unwrap().parse::<u32>()?;
for direntry in fs::read_dir(direntry.path())? {
@@ -113,10 +95,10 @@ pub fn import_timeline_from_postgres_datadir(
Some("pg_filenode.map") => import_nonrel_file(
timeline,
lsn,
ObjectTag::FileNodeMap(DatabaseTag {
RelishTag::FileNodeMap {
spcnode: pg_constants::DEFAULTTABLESPACE_OID,
dbnode: dboid,
}),
},
&direntry.path(),
)?,
@@ -133,43 +115,24 @@ pub fn import_timeline_from_postgres_datadir(
}
for entry in fs::read_dir(path.join("pg_xact"))? {
let entry = entry?;
import_slru_file(
timeline,
lsn,
|blknum| ObjectTag::Clog(SlruBufferTag { blknum }),
&entry.path(),
)?;
import_slru_file(timeline, lsn, SlruKind::Clog, &entry.path())?;
}
for entry in fs::read_dir(path.join("pg_multixact").join("members"))? {
let entry = entry?;
import_slru_file(
timeline,
lsn,
|blknum| ObjectTag::MultiXactMembers(SlruBufferTag { blknum }),
&entry.path(),
)?;
import_slru_file(timeline, lsn, SlruKind::MultiXactMembers, &entry.path())?;
}
for entry in fs::read_dir(path.join("pg_multixact").join("offsets"))? {
let entry = entry?;
import_slru_file(
timeline,
lsn,
|blknum| ObjectTag::MultiXactOffsets(SlruBufferTag { blknum }),
&entry.path(),
)?;
import_slru_file(timeline, lsn, SlruKind::MultiXactOffsets, &entry.path())?;
}
for entry in fs::read_dir(path.join("pg_twophase"))? {
let entry = entry?;
let xid = u32::from_str_radix(&entry.path().to_str().unwrap(), 16)?;
import_nonrel_file(
timeline,
lsn,
ObjectTag::TwoPhase(PrepareTag { xid }),
&entry.path(),
)?;
import_nonrel_file(timeline, lsn, RelishTag::TwoPhase { xid }, &entry.path())?;
}
// TODO: Scan pg_tblspc
timeline.advance_last_valid_lsn(lsn);
timeline.checkpoint()?;
Ok(())
@@ -200,16 +163,14 @@ fn import_relfile(
let r = file.read_exact(&mut buf);
match r {
Ok(_) => {
let tag = ObjectTag::RelationBuffer(BufferTag {
rel: RelTag {
spcnode: spcoid,
dbnode: dboid,
relnode,
forknum,
},
blknum,
});
timeline.put_page_image(tag, lsn, Bytes::copy_from_slice(&buf), true)?;
let rel = RelTag {
spcnode: spcoid,
dbnode: dboid,
relnode,
forknum,
};
let tag = RelishTag::Relation(rel);
timeline.put_page_image(tag, blknum, lsn, Bytes::copy_from_slice(&buf), true)?;
}
// TODO: UnexpectedEof is expected
@@ -231,10 +192,16 @@ fn import_relfile(
Ok(())
}
///
/// Import a "non-blocky" file into the repository
///
/// This is used for small files like the control file, twophase files etc. that
/// are just slurped into the repository as one blob.
///
fn import_nonrel_file(
timeline: &dyn Timeline,
lsn: Lsn,
tag: ObjectTag,
tag: RelishTag,
path: &Path,
) -> Result<()> {
let mut file = File::open(path)?;
@@ -242,31 +209,34 @@ fn import_nonrel_file(
// read the whole file
file.read_to_end(&mut buffer)?;
timeline.put_page_image(tag, lsn, Bytes::copy_from_slice(&buffer[..]), false)?;
info!("importing non-rel file {}", path.display());
timeline.put_page_image(tag, 0, lsn, Bytes::copy_from_slice(&buffer[..]), false)?;
Ok(())
}
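// For example (illustration only, not part of this change): pg_control is imported
// through this function as RelishTag::ControlFile, and a twophase state file for
// transaction 1234 would be imported as RelishTag::TwoPhase { xid: 1234 }, both
// stored as a single blob at block 0.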
fn import_slru_file(
timeline: &dyn Timeline,
lsn: Lsn,
gen_tag: fn(blknum: u32) -> ObjectTag,
path: &Path,
) -> Result<()> {
// Does it look like a relation file?
///
/// Import an SLRU segment file
///
fn import_slru_file(timeline: &dyn Timeline, lsn: Lsn, slru: SlruKind, path: &Path) -> Result<()> {
// Does it look like an SLRU file?
let mut file = File::open(path)?;
let mut buf: [u8; 8192] = [0u8; 8192];
let segno = u32::from_str_radix(path.file_name().unwrap().to_str().unwrap(), 16)?;
let mut blknum: u32 = segno * pg_constants::SLRU_PAGES_PER_SEGMENT;
info!("importing slru file {}", path.display());
let mut rpageno = 0;
loop {
let r = file.read_exact(&mut buf);
match r {
Ok(_) => {
timeline.put_page_image(
gen_tag(blknum),
RelishTag::Slru { slru, segno },
rpageno,
lsn,
Bytes::copy_from_slice(&buf),
false,
true,
)?;
}
@@ -283,14 +253,16 @@ fn import_slru_file(
}
},
};
blknum += 1;
rpageno += 1;
// TODO: Check that the file isn't unexpectedly large, not larger than SLRU_PAGES_PER_SEGMENT pages
}
Ok(())
}
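// For example (illustration only, not part of this change): a CLOG segment file
// named "0001" gives segno = 1, and its pages are stored one by one as
// RelishTag::Slru { slru: SlruKind::Clog, segno: 1 } with rpageno = 0, 1, 2, ...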
/// Scan PostgreSQL WAL files in given directory, and load all records >= 'startpoint' into
/// the repository.
/// Scan PostgreSQL WAL files in given directory
/// and load all records >= 'startpoint' into the repository.
pub fn import_timeline_wal(walpath: &Path, timeline: &dyn Timeline, startpoint: Lsn) -> Result<()> {
let mut waldecoder = WalStreamDecoder::new(startpoint);
@@ -298,13 +270,15 @@ pub fn import_timeline_wal(walpath: &Path, timeline: &dyn Timeline, startpoint:
let mut offset = startpoint.segment_offset(pg_constants::WAL_SEGMENT_SIZE);
let mut last_lsn = startpoint;
let checkpoint_bytes =
timeline.get_page_at_lsn_nowait(ObjectTag::Checkpoint, startpoint, false)?;
let checkpoint_bytes = timeline.get_page_at_lsn_nowait(RelishTag::Checkpoint, 0, startpoint)?;
let mut checkpoint = CheckPoint::decode(&checkpoint_bytes)?;
loop {
// FIXME: assume postgresql tli 1 for now
let filename = XLogFileName(1, segno, pg_constants::WAL_SEGMENT_SIZE);
let mut buf = Vec::new();
//Read local file
let mut path = walpath.join(&filename);
// It could be a .partial file
@@ -325,12 +299,12 @@ pub fn import_timeline_wal(walpath: &Path, timeline: &dyn Timeline, startpoint:
file.seek(SeekFrom::Start(offset as u64))?;
}
let mut buf = Vec::new();
let nread = file.read_to_end(&mut buf)?;
if nread != pg_constants::WAL_SEGMENT_SIZE - offset as usize {
// Maybe allow this for .partial files?
error!("read only {} bytes from WAL file", nread);
}
waldecoder.feed_bytes(&buf);
let mut nrecords = 0;
@@ -352,19 +326,18 @@ pub fn import_timeline_wal(walpath: &Path, timeline: &dyn Timeline, startpoint:
nrecords += 1;
}
info!(
"imported {} records from WAL file {} up to {}",
nrecords,
path.display(),
last_lsn
);
info!("imported {} records up to {}", nrecords, last_lsn);
segno += 1;
offset = 0;
}
info!("reached end of WAL at {}", last_lsn);
let checkpoint_bytes = checkpoint.encode();
timeline.put_page_image(ObjectTag::Checkpoint, last_lsn, checkpoint_bytes, false)?;
timeline.put_page_image(RelishTag::Checkpoint, 0, last_lsn, checkpoint_bytes, false)?;
timeline.advance_last_valid_lsn(last_lsn);
timeline.checkpoint()?;
Ok(())
}
@@ -384,14 +357,11 @@ pub fn save_decoded_record(
// Iterate through all the blocks that the record modifies, and
// "put" a separate copy of the record for each block.
for blk in decoded.blocks.iter() {
let tag = ObjectTag::RelationBuffer(BufferTag {
rel: RelTag {
spcnode: blk.rnode_spcnode,
dbnode: blk.rnode_dbnode,
relnode: blk.rnode_relnode,
forknum: blk.forknum as u8,
},
blknum: blk.blkno,
let tag = RelishTag::Relation(RelTag {
spcnode: blk.rnode_spcnode,
dbnode: blk.rnode_dbnode,
relnode: blk.rnode_relnode,
forknum: blk.forknum as u8,
});
let rec = WALRecord {
@@ -401,7 +371,7 @@ pub fn save_decoded_record(
main_data_offset: decoded.main_data_offset as u32,
};
timeline.put_wal_record(tag, rec)?;
timeline.put_wal_record(tag, blk.blkno, rec)?;
}
let mut buf = decoded.record.clone();
@@ -424,79 +394,92 @@ pub fn save_decoded_record(
} else if decoded.xl_rmid == pg_constants::RM_TBLSPC_ID {
trace!("XLOG_TBLSPC_CREATE/DROP is not handled yet");
} else if decoded.xl_rmid == pg_constants::RM_CLOG_ID {
let blknum = buf.get_u32_le();
let info = decoded.xl_info & !pg_constants::XLR_INFO_MASK;
let tag = ObjectTag::Clog(SlruBufferTag { blknum });
if info == pg_constants::CLOG_ZEROPAGE {
let rec = WALRecord {
let pageno = buf.get_u32_le();
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
timeline.put_page_image(
RelishTag::Slru {
slru: SlruKind::Clog,
segno,
},
rpageno,
lsn,
will_init: true,
rec: recdata.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
timeline.put_wal_record(tag, rec)?;
ZERO_PAGE,
true,
)?;
} else {
assert!(info == pg_constants::CLOG_TRUNCATE);
checkpoint.oldestXid = buf.get_u32_le();
checkpoint.oldestXidDB = buf.get_u32_le();
trace!(
"RM_CLOG_ID truncate blkno {} oldestXid {} oldestXidDB {}",
blknum,
checkpoint.oldestXid,
checkpoint.oldestXidDB
);
if let Some(ObjectTag::Clog(first_slru_tag)) =
timeline.get_next_tag(ObjectTag::Clog(SlruBufferTag { blknum: 0 }))?
{
for trunc_blknum in first_slru_tag.blknum..=blknum {
let tag = ObjectTag::Clog(SlruBufferTag {
blknum: trunc_blknum,
});
timeline.put_slru_truncate(tag, lsn)?;
}
}
let xlrec = XlClogTruncate::decode(&mut buf);
save_clog_truncate_record(checkpoint, timeline, lsn, &xlrec)?;
}
} else if decoded.xl_rmid == pg_constants::RM_XACT_ID {
let info = decoded.xl_info & pg_constants::XLOG_XACT_OPMASK;
if info == pg_constants::XLOG_XACT_COMMIT
|| info == pg_constants::XLOG_XACT_COMMIT_PREPARED
|| info == pg_constants::XLOG_XACT_ABORT
if info == pg_constants::XLOG_XACT_COMMIT || info == pg_constants::XLOG_XACT_ABORT {
let parsed_xact = XlXactParsedRecord::decode(&mut buf, decoded.xl_xid, decoded.xl_info);
save_xact_record(timeline, lsn, &parsed_xact, decoded)?;
} else if info == pg_constants::XLOG_XACT_COMMIT_PREPARED
|| info == pg_constants::XLOG_XACT_ABORT_PREPARED
{
let parsed_xact = XlXactParsedRecord::decode(&mut buf, decoded.xl_xid, decoded.xl_info);
save_xact_record(timeline, lsn, &parsed_xact, decoded)?;
} else if info == pg_constants::XLOG_XACT_PREPARE {
let rec = WALRecord {
// Remove twophase file. see RemoveTwoPhaseFile() in postgres code
info!(
"unlink twophaseFile for xid {} parsed_xact.xid {} here at {}",
decoded.xl_xid, parsed_xact.xid, lsn
);
timeline.put_unlink(
RelishTag::TwoPhase {
xid: parsed_xact.xid,
},
lsn,
will_init: true,
rec: recdata.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
timeline.put_wal_record(
ObjectTag::TwoPhase(PrepareTag {
)?;
} else if info == pg_constants::XLOG_XACT_PREPARE {
let mut buf = decoded.record.clone();
buf.advance(decoded.main_data_offset);
timeline.put_page_image(
RelishTag::TwoPhase {
xid: decoded.xl_xid,
}),
rec,
},
0,
lsn,
Bytes::copy_from_slice(&buf[..]),
true,
)?;
}
} else if decoded.xl_rmid == pg_constants::RM_MULTIXACT_ID {
let info = decoded.xl_info & pg_constants::XLR_RMGR_INFO_MASK;
if info == pg_constants::XLOG_MULTIXACT_ZERO_OFF_PAGE
|| info == pg_constants::XLOG_MULTIXACT_ZERO_MEM_PAGE
{
let blknum = buf.get_u32_le();
let rec = WALRecord {
if info == pg_constants::XLOG_MULTIXACT_ZERO_OFF_PAGE {
let pageno = buf.get_u32_le();
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
timeline.put_page_image(
RelishTag::Slru {
slru: SlruKind::MultiXactOffsets,
segno,
},
rpageno,
lsn,
will_init: true,
rec: recdata.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
let tag = if info == pg_constants::XLOG_MULTIXACT_ZERO_OFF_PAGE {
ObjectTag::MultiXactOffsets(SlruBufferTag { blknum })
} else {
ObjectTag::MultiXactMembers(SlruBufferTag { blknum })
};
timeline.put_wal_record(tag, rec)?;
ZERO_PAGE,
true,
)?;
} else if info == pg_constants::XLOG_MULTIXACT_ZERO_MEM_PAGE {
let pageno = buf.get_u32_le();
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
timeline.put_page_image(
RelishTag::Slru {
slru: SlruKind::MultiXactMembers,
segno,
},
rpageno,
lsn,
ZERO_PAGE,
true,
)?;
} else if info == pg_constants::XLOG_MULTIXACT_CREATE_ID {
let xlrec = XlMultiXactCreate::decode(&mut buf);
save_multixact_create_record(checkpoint, timeline, lsn, &xlrec, decoded)?;
@@ -560,7 +543,9 @@ fn save_xlog_dbase_create(timeline: &dyn Timeline, lsn: Lsn, rec: &XlCreateDatab
assert_eq!(src_rel.spcnode, src_tablespace_id);
assert_eq!(src_rel.dbnode, src_db_id);
let nblocks = timeline.get_rel_size(src_rel, req_lsn)?;
let nblocks = timeline
.get_relish_size(RelishTag::Relation(src_rel), req_lsn)?
.unwrap_or(0);
let dst_rel = RelTag {
spcnode: tablespace_id,
dbnode: db_id,
@@ -570,41 +555,36 @@ fn save_xlog_dbase_create(timeline: &dyn Timeline, lsn: Lsn, rec: &XlCreateDatab
// Copy content
for blknum in 0..nblocks {
let src_key = ObjectTag::RelationBuffer(BufferTag {
rel: src_rel,
blknum,
});
let dst_key = ObjectTag::RelationBuffer(BufferTag {
rel: dst_rel,
blknum,
});
let content =
timeline.get_page_at_lsn_nowait(RelishTag::Relation(src_rel), blknum, req_lsn)?;
let content = timeline.get_page_at_lsn_nowait(src_key, req_lsn, false)?;
debug!("copying block {} from {} to {}", blknum, src_rel, dst_rel);
debug!("copying block {:?} to {:?}", src_key, dst_key);
timeline.put_page_image(dst_key, lsn, content, true)?;
timeline.put_page_image(RelishTag::Relation(dst_rel), blknum, lsn, content, true)?;
num_blocks_copied += 1;
}
if nblocks == 0 {
// make sure we have some trace of the relation, even if it's empty
timeline.put_truncation(dst_rel, lsn, 0)?;
timeline.put_truncation(RelishTag::Relation(dst_rel), lsn, 0)?;
}
num_rels_copied += 1;
}
// Copy relfilemap
// TODO This implementation is very inefficient -
// it scans all non-rels only to find FileNodeMaps
for tag in timeline.list_nonrels(req_lsn)? {
match tag {
ObjectTag::FileNodeMap(db) => {
if db.spcnode == src_tablespace_id && db.dbnode == src_db_id {
let img = timeline.get_page_at_lsn_nowait(tag, req_lsn, false)?;
let new_tag = ObjectTag::FileNodeMap(DatabaseTag {
RelishTag::FileNodeMap { spcnode, dbnode } => {
if spcnode == src_tablespace_id && dbnode == src_db_id {
let img = timeline.get_page_at_lsn_nowait(tag, 0, req_lsn)?;
let new_tag = RelishTag::FileNodeMap {
spcnode: tablespace_id,
dbnode: db_id,
});
timeline.put_page_image(new_tag, lsn, img, false)?;
};
timeline.put_page_image(new_tag, 0, lsn, img, false)?;
break;
}
}
@@ -633,7 +613,7 @@ fn save_xlog_smgr_truncate(timeline: &dyn Timeline, lsn: Lsn, rec: &XlSmgrTrunca
relnode,
forknum: pg_constants::MAIN_FORKNUM,
};
timeline.put_truncation(rel, lsn, rec.blkno)?;
timeline.put_truncation(RelishTag::Relation(rel), lsn, rec.blkno)?;
}
if (rec.flags & pg_constants::SMGR_TRUNCATE_FSM) != 0 {
let rel = RelTag {
@@ -656,7 +636,7 @@ fn save_xlog_smgr_truncate(timeline: &dyn Timeline, lsn: Lsn, rec: &XlSmgrTrunca
info!("Partial truncation of FSM is not supported");
}
let num_fsm_blocks = 0;
timeline.put_truncation(rel, lsn, num_fsm_blocks)?;
timeline.put_truncation(RelishTag::Relation(rel), lsn, num_fsm_blocks)?;
}
if (rec.flags & pg_constants::SMGR_TRUNCATE_VM) != 0 {
let rel = RelTag {
@@ -675,7 +655,7 @@ fn save_xlog_smgr_truncate(timeline: &dyn Timeline, lsn: Lsn, rec: &XlSmgrTrunca
info!("Partial truncation of VM is not supported");
}
let num_vm_blocks = 0;
timeline.put_truncation(rel, lsn, num_vm_blocks)?;
timeline.put_truncation(RelishTag::Relation(rel), lsn, num_vm_blocks)?;
}
Ok(())
}
@@ -690,38 +670,156 @@ fn save_xact_record(
decoded: &DecodedWALRecord,
) -> Result<()> {
// Record update of CLOG page
let mut blknum = parsed.xid / pg_constants::CLOG_XACTS_PER_PAGE;
let tag = ObjectTag::Clog(SlruBufferTag { blknum });
let mut pageno = parsed.xid / pg_constants::CLOG_XACTS_PER_PAGE;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
let rec = WALRecord {
lsn,
will_init: false,
rec: decoded.record.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
timeline.put_wal_record(tag, rec.clone())?;
timeline.put_wal_record(
RelishTag::Slru {
slru: SlruKind::Clog,
segno,
},
rpageno,
rec.clone(),
)?;
for subxact in &parsed.subxacts {
let subxact_blknum = subxact / pg_constants::CLOG_XACTS_PER_PAGE;
if subxact_blknum != blknum {
blknum = subxact_blknum;
let tag = ObjectTag::Clog(SlruBufferTag { blknum });
timeline.put_wal_record(tag, rec.clone())?;
let subxact_pageno = subxact / pg_constants::CLOG_XACTS_PER_PAGE;
if subxact_pageno != pageno {
pageno = subxact_pageno;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
timeline.put_wal_record(
RelishTag::Slru {
slru: SlruKind::Clog,
segno,
},
rpageno,
rec.clone(),
)?;
}
}
for xnode in &parsed.xnodes {
for forknum in pg_constants::MAIN_FORKNUM..=pg_constants::VISIBILITYMAP_FORKNUM {
let rel_tag = RelTag {
let rel = RelTag {
forknum,
spcnode: xnode.spcnode,
dbnode: xnode.dbnode,
relnode: xnode.relnode,
};
timeline.put_unlink(rel_tag, lsn)?;
timeline.put_unlink(RelishTag::Relation(rel), lsn)?;
}
}
Ok(())
}
// See TransactionIdPrecedes in transam.c
fn transaction_id_precedes(id1: TransactionId, id2: TransactionId) -> bool {
/*
* If either ID is a permanent XID then we can just do unsigned
* comparison. If both are normal, do a modulo-2^32 comparison.
*/
if !(id1 >= pg_constants::FIRST_NORMAL_TRANSACTION_ID)
|| !(id2 >= pg_constants::FIRST_NORMAL_TRANSACTION_ID)
{
return id1 < id2;
}
let diff = id1.wrapping_sub(id2) as i32;
return diff < 0;
}
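// A worked example (illustration only, not part of this change), assuming the usual
// FIRST_NORMAL_TRANSACTION_ID of 3: comparing xid 3 against xid 0xFFFF_FFF0 gives
// 3u32.wrapping_sub(0xFFFF_FFF0) = 19, which is positive as an i32, so 3 does NOT
// precede 0xFFFF_FFF0; that is, an xid that has wrapped around is treated as newer
// than one just below the wraparound point.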
// See CLOGPagePrecedes in clog.c
fn clogpage_precedes(page1: u32, page2: u32) -> bool {
let mut xid1 = page1 * pg_constants::CLOG_XACTS_PER_PAGE;
xid1 += pg_constants::FIRST_NORMAL_TRANSACTION_ID + 1;
let mut xid2 = page2 * pg_constants::CLOG_XACTS_PER_PAGE;
xid2 += pg_constants::FIRST_NORMAL_TRANSACTION_ID + 1;
return transaction_id_precedes(xid1, xid2)
&& transaction_id_precedes(xid1, xid2 + pg_constants::CLOG_XACTS_PER_PAGE - 1);
}
// See SlruMayDeleteSegment() in slru.c
fn slru_may_delete_clogsegment(segpage: u32, cutoff_page: u32) -> bool {
let seg_last_page = segpage + pg_constants::SLRU_PAGES_PER_SEGMENT - 1;
assert_eq!(segpage % pg_constants::SLRU_PAGES_PER_SEGMENT, 0);
return clogpage_precedes(segpage, cutoff_page)
&& clogpage_precedes(seg_last_page, cutoff_page);
}
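// A worked example (illustration only, not part of this change), assuming the usual
// SLRU_PAGES_PER_SEGMENT of 32 and XIDs far from wraparound: with cutoff_page = 100,
// the segment starting at page 64 (last page 95) may be deleted, because both 64 and
// 95 precede the cutoff, while the segment starting at page 96 (last page 127) must
// be kept, because page 127 does not precede the cutoff.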
fn save_clog_truncate_record(
checkpoint: &mut CheckPoint,
timeline: &dyn Timeline,
lsn: Lsn,
xlrec: &XlClogTruncate,
) -> Result<()> {
info!(
"RM_CLOG_ID truncate pageno {} oldestXid {} oldestXidDB {} lsn {}",
xlrec.pageno, xlrec.oldest_xid, xlrec.oldest_xid_db, lsn
);
// Here we treat oldestXid and oldestXidDB
// differently from postgres redo routines.
// In postgres checkpoint.oldestXid lags behind xlrec.oldest_xid
// until checkpoint happens and updates the value.
// Here we can use the most recent value.
// It's just an optimization, though, and can be deleted.
// TODO Figure out if there will be any issues with replicas.
checkpoint.oldestXid = xlrec.oldest_xid;
checkpoint.oldestXidDB = xlrec.oldest_xid_db;
// TODO Handle AdvanceOldestClogXid() or write a comment explaining why we don't need it
let latest_page_number = checkpoint.nextXid.value as u32 / pg_constants::CLOG_XACTS_PER_PAGE;
// Now delete all segments containing pages between xlrec.pageno
// and latest_page_number.
// First, make an important safety check:
// the current endpoint page must not be eligible for removal.
// See SimpleLruTruncate() in slru.c
if clogpage_precedes(latest_page_number, xlrec.pageno) {
info!("could not truncate directory pg_xact apparent wraparound");
return Ok(());
}
// Iterate over SLRU CLOG segments and unlink the segments that we're ready to truncate
// TODO This implementation is very inefficient -
// it scans all non-rels only to find Clog
//
// We cannot pass 'lsn' to the Timeline.list_nonrels(), or it
// will block waiting for the last valid LSN to advance up to
// it. So we use the previous record's LSN in the get calls
// instead.
let req_lsn = min(timeline.get_last_record_lsn(), lsn);
for obj in timeline.list_nonrels(req_lsn)? {
match obj {
RelishTag::Slru { slru, segno } => {
if slru == SlruKind::Clog {
let segpage = segno * pg_constants::SLRU_PAGES_PER_SEGMENT;
if slru_may_delete_clogsegment(segpage, xlrec.pageno) {
timeline.put_unlink(RelishTag::Slru { slru, segno }, lsn)?;
trace!("unlink CLOG segment {:>04X} at lsn {}", segno, lsn);
}
}
}
_ => {}
}
}
Ok(())
}
fn save_multixact_create_record(
checkpoint: &mut CheckPoint,
timeline: &dyn Timeline,
@@ -735,31 +833,47 @@ fn save_multixact_create_record(
rec: decoded.record.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
let blknum = xlrec.mid / pg_constants::MULTIXACT_OFFSETS_PER_PAGE as u32;
let tag = ObjectTag::MultiXactOffsets(SlruBufferTag { blknum });
timeline.put_wal_record(tag, rec.clone())?;
let pageno = xlrec.mid / pg_constants::MULTIXACT_OFFSETS_PER_PAGE as u32;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
timeline.put_wal_record(
RelishTag::Slru {
slru: SlruKind::MultiXactOffsets,
segno,
},
rpageno,
rec.clone(),
)?;
let first_mbr_blkno = xlrec.moff / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
let last_mbr_blkno =
let first_mbr_pageno = xlrec.moff / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
let last_mbr_pageno =
(xlrec.moff + xlrec.nmembers - 1) / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
// The members SLRU can, in contrast to the offsets one, be filled to almost
// the full range at once. So we need to handle wraparound.
let mut blknum = first_mbr_blkno;
let mut pageno = first_mbr_pageno;
loop {
// Update members page
let tag = ObjectTag::MultiXactMembers(SlruBufferTag { blknum });
timeline.put_wal_record(tag, rec.clone())?;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
timeline.put_wal_record(
RelishTag::Slru {
slru: SlruKind::MultiXactMembers,
segno,
},
rpageno,
rec.clone(),
)?;
if blknum == last_mbr_blkno {
if pageno == last_mbr_pageno {
// last block inclusive
break;
}
// handle wraparound
if blknum == MAX_MBR_BLKNO {
blknum = 0;
if pageno == MAX_MBR_BLKNO {
pageno = 0;
} else {
blknum += 1;
pageno += 1;
}
}
if xlrec.mid >= checkpoint.nextMulti {
@@ -775,10 +889,23 @@ fn save_multixact_create_record(
acc
}
});
checkpoint.update_next_xid(max_mbr_xid);
Ok(())
}
#[allow(non_upper_case_globals)]
const MaxMultiXactOffset: u32 = 0xFFFFFFFF;
#[allow(non_snake_case)]
const fn MXOffsetToMemberPage(xid: u32) -> u32 {
xid / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32
}
#[allow(non_snake_case)]
const fn MXOffsetToMemberSegment(xid: u32) -> i32 {
(MXOffsetToMemberPage(xid) / pg_constants::SLRU_PAGES_PER_SEGMENT) as i32
}
fn save_multixact_truncate_record(
checkpoint: &mut CheckPoint,
timeline: &dyn Timeline,
@@ -787,31 +914,35 @@ fn save_multixact_truncate_record(
) -> Result<()> {
checkpoint.oldestMulti = xlrec.end_trunc_off;
checkpoint.oldestMultiDB = xlrec.oldest_multi_db;
let first_off_blkno = xlrec.start_trunc_off / pg_constants::MULTIXACT_OFFSETS_PER_PAGE as u32;
let last_off_blkno = xlrec.end_trunc_off / pg_constants::MULTIXACT_OFFSETS_PER_PAGE as u32;
// PerformMembersTruncation
let maxsegment: i32 = MXOffsetToMemberSegment(MaxMultiXactOffset);
let startsegment: i32 = MXOffsetToMemberSegment(xlrec.start_trunc_memb);
let endsegment: i32 = MXOffsetToMemberSegment(xlrec.end_trunc_memb);
let mut segment: i32 = startsegment;
// Delete all the segments except the last one. The last segment can still
// contain, possibly partially, valid data.
for blknum in first_off_blkno..last_off_blkno {
let tag = ObjectTag::MultiXactOffsets(SlruBufferTag { blknum });
timeline.put_slru_truncate(tag, lsn)?;
}
let first_mbr_blkno = xlrec.start_trunc_memb / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
let last_mbr_blkno = xlrec.end_trunc_memb / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
// The members SLRU can, in contrast to the offsets one, be filled to almost
// the full range at once. So we need to handle wraparound.
let mut blknum = first_mbr_blkno;
// Delete all the segments but the last one. The last segment can still
// contain, possibly partially, valid data.
while blknum != last_mbr_blkno {
let tag = ObjectTag::MultiXactMembers(SlruBufferTag { blknum });
timeline.put_slru_truncate(tag, lsn)?;
// handle wraparound
if blknum == MAX_MBR_BLKNO {
blknum = 0;
while segment != endsegment {
timeline.put_unlink(
RelishTag::Slru {
slru: SlruKind::MultiXactMembers,
segno: segment as u32,
},
lsn,
)?;
/* move to next segment, handling wraparound correctly */
if segment == maxsegment {
segment = 0;
} else {
blknum += 1;
segment += 1;
}
}
// Truncate offsets
// FIXME: this did not handle wraparound correctly
Ok(())
}
@@ -827,10 +958,10 @@ fn save_relmap_record(
rec: decoded.record.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
let tag = ObjectTag::FileNodeMap(DatabaseTag {
let tag = RelishTag::FileNodeMap {
spcnode: xlrec.tsid,
dbnode: xlrec.dbid,
});
timeline.put_wal_record(tag, rec)?;
};
timeline.put_wal_record(tag, 0, rec)?;
Ok(())
}


@@ -3,15 +3,16 @@
//!
use crate::object_key::*;
use crate::object_store::ObjectStore;
use crate::repository::RelTag;
use crate::relish::*;
use crate::PageServerConf;
use crate::ZTimelineId;
use anyhow::{bail, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use zenith_utils::bin_ser::BeSer;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTenantId;
use zenith_utils::zid::ZTimelineId;
#[derive(Debug, Clone, Serialize, Deserialize)]
struct StorageKey {
@@ -143,10 +144,9 @@ impl ObjectStore for RocksObjectStore {
fn list_objects<'a>(
&'a self,
timeline: ZTimelineId,
nonrel_only: bool,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = ObjectTag> + 'a>> {
let iter = RocksObjectIter::new(&self.db, timeline, nonrel_only, lsn)?;
let iter = RocksObjectIter::new(&self.db, timeline, lsn)?;
Ok(Box::new(iter))
}
@@ -178,7 +178,7 @@ impl ObjectStore for RocksObjectStore {
let search_key = StorageKey {
obj_key: ObjectKey {
timeline: timelineid,
tag: ObjectTag::RelationMetadata(search_rel_tag),
tag: ObjectTag::RelationMetadata(RelishTag::Relation(search_rel_tag)),
},
lsn: Lsn(0),
};
@@ -188,7 +188,7 @@ impl ObjectStore for RocksObjectStore {
}
let key = StorageKey::des(iter.key().unwrap())?;
if let ObjectTag::RelationMetadata(rel_tag) = key.obj_key.tag {
if let ObjectTag::RelationMetadata(RelishTag::Relation(rel_tag)) = key.obj_key.tag {
if spcnode != 0 && rel_tag.spcnode != spcnode
|| dbnode != 0 && rel_tag.dbnode != dbnode
{
@@ -211,6 +211,49 @@ impl ObjectStore for RocksObjectStore {
Ok(rels)
}
/// Get a list of all distinct NON-relations in timeline
/// that are visible at given lsn.
///
/// TODO: This implementation is very inefficient, it scans
/// through all non-rel page versions in the system. In practice, this
/// is used when initializing a new compute node, and the non-rel files
/// are never very large nor change very frequently, so this will do for now.
fn list_nonrels(&self, timelineid: ZTimelineId, lsn: Lsn) -> Result<HashSet<RelishTag>> {
let mut rels: HashSet<RelishTag> = HashSet::new();
let search_key = StorageKey {
obj_key: ObjectKey {
timeline: timelineid,
tag: ObjectTag::Buffer(FIRST_NONREL_RELISH_TAG, 0),
},
lsn: Lsn(0),
};
let mut iter = self.db.raw_iterator();
iter.seek(search_key.ser()?);
while iter.valid() {
let key = StorageKey::des(iter.key().unwrap())?;
if key.obj_key.timeline != timelineid {
// reached end of this timeline in the store
break;
}
if let ObjectTag::Buffer(rel_tag, _blknum) = key.obj_key.tag {
if key.lsn <= lsn {
// visible in this snapshot
rels.insert(rel_tag);
}
}
// TODO: we could skip to next relation here like we do in list_rels(),
// but hopefully there are not that many SLRU segments or other non-rel
// entries for it to matter.
iter.next();
}
Ok(rels)
}
/// Iterate through versions of all objects in a timeline.
///
/// Returns objects in increasing key-version order.
@@ -241,26 +284,30 @@ impl ObjectStore for RocksObjectStore {
impl RocksObjectStore {
/// Open a RocksDB database.
pub fn open(conf: &'static PageServerConf) -> Result<RocksObjectStore> {
pub fn open(conf: &'static PageServerConf, tenantid: &ZTenantId) -> Result<RocksObjectStore> {
let opts = Self::get_rocksdb_opts();
let obj_store = Self::new(conf, opts)?;
let obj_store = Self::new(conf, opts, tenantid)?;
Ok(obj_store)
}
/// Create a new, empty RocksDB database.
pub fn create(conf: &'static PageServerConf) -> Result<RocksObjectStore> {
let path = conf.workdir.join("rocksdb-storage");
pub fn create(conf: &'static PageServerConf, tenantid: &ZTenantId) -> Result<RocksObjectStore> {
let path = conf.tenant_path(&tenantid).join("rocksdb-storage");
std::fs::create_dir(&path)?;
let mut opts = Self::get_rocksdb_opts();
opts.create_if_missing(true);
opts.set_error_if_exists(true);
let obj_store = Self::new(conf, opts)?;
let obj_store = Self::new(conf, opts, tenantid)?;
Ok(obj_store)
}
fn new(conf: &'static PageServerConf, mut opts: rocksdb::Options) -> Result<RocksObjectStore> {
let path = conf.workdir.join("rocksdb-storage");
fn new(
conf: &'static PageServerConf,
mut opts: rocksdb::Options,
tenantid: &ZTenantId,
) -> Result<RocksObjectStore> {
let path = conf.tenant_path(&tenantid).join("rocksdb-storage");
let gc = Arc::new(GarbageCollector::new());
let gc_ref = gc.clone();
opts.set_compaction_filter("ttl", move |_level: u32, key: &[u8], _val: &[u8]| {
@@ -382,17 +429,11 @@ impl<'r> RocksObjects<'r> {
struct RocksObjectIter<'a> {
timeline: ZTimelineId,
key: StorageKey,
nonrel_only: bool,
lsn: Lsn,
dbiter: rocksdb::DBRawIterator<'a>,
}
impl<'a> RocksObjectIter<'a> {
fn new(
db: &'a rocksdb::DB,
timeline: ZTimelineId,
nonrel_only: bool,
lsn: Lsn,
) -> Result<RocksObjectIter<'a>> {
fn new(db: &'a rocksdb::DB, timeline: ZTimelineId, lsn: Lsn) -> Result<RocksObjectIter<'a>> {
let key = StorageKey {
obj_key: ObjectKey {
timeline,
@@ -404,7 +445,6 @@ impl<'a> RocksObjectIter<'a> {
Ok(RocksObjectIter {
key,
timeline,
nonrel_only,
lsn,
dbiter,
})
@@ -428,15 +468,7 @@ impl<'a> Iterator for RocksObjectIter<'a> {
self.key.lsn = Lsn(u64::MAX); // next seek should skip all versions
if key.lsn <= self.lsn {
// visible in this snapshot
if self.nonrel_only {
match key.obj_key.tag {
ObjectTag::RelationMetadata(_) => return None,
ObjectTag::RelationBuffer(_) => return None,
_ => return Some(key.obj_key.tag),
}
} else {
return Some(key.obj_key.tag);
}
return Some(key.obj_key.tag);
}
}
}


@@ -1,307 +0,0 @@
use crate::tui_event::{Event, Events};
use crate::tui_logger::TuiLogger;
use crate::tui_logger::TuiLoggerWidget;
use lazy_static::lazy_static;
use std::sync::Arc;
use std::{error::Error, io};
use termion::{event::Key, input::MouseTerminal, raw::IntoRawMode, screen::AlternateScreen};
use tui::backend::TermionBackend;
use tui::buffer::Buffer;
use tui::layout::{Constraint, Direction, Layout, Rect};
use tui::style::{Color, Modifier, Style};
use tui::text::{Span, Spans, Text};
use tui::widgets::{Block, BorderType, Borders, Paragraph, Widget};
use tui::Terminal;
use slog::Drain;
lazy_static! {
pub static ref PAGESERVICE_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
pub static ref WALRECEIVER_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
pub static ref WALREDO_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
pub static ref CATCHALL_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
}
pub fn init_logging() -> slog_scope::GlobalLoggerGuard {
let pageservice_drain =
slog::Filter::new(PAGESERVICE_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver::page_service")
{
return true;
}
false
})
.fuse();
let walredo_drain = slog::Filter::new(WALREDO_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver::walredo")
{
return true;
}
false
})
.fuse();
let walreceiver_drain =
slog::Filter::new(WALRECEIVER_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver::walreceiver")
{
return true;
}
false
})
.fuse();
let catchall_drain = slog::Filter::new(CATCHALL_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver")
{
return true;
}
false
})
.fuse();
let drain = pageservice_drain;
let drain = slog::Duplicate::new(drain, walreceiver_drain).fuse();
let drain = slog::Duplicate::new(drain, walredo_drain).fuse();
let drain = slog::Duplicate::new(drain, catchall_drain).fuse();
let drain = slog_async::Async::new(drain).chan_size(1000).build().fuse();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver")
{
return true;
}
false
})
.fuse();
let logger = slog::Logger::root(drain, slog::o!());
slog_scope::set_global_logger(logger)
}
pub fn ui_main() -> Result<(), Box<dyn Error>> {
// Terminal initialization
let stdout = io::stdout().into_raw_mode()?;
let stdout = MouseTerminal::from(stdout);
let stdout = AlternateScreen::from(stdout);
let backend = TermionBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
// Setup event handlers
let events = Events::new();
loop {
terminal.draw(|f| {
let size = f.size();
// +----------------+----------------+
// | | |
// | top_top_left | top_top_right |
// | | |
// +----------------+----------------|
// | | |
// | top_bot_left | top_left_right |
// | | |
// +----------------+----------------+
// | |
// | bottom |
// | |
// +---------------------------------+
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Percentage(70), Constraint::Percentage(30)].as_ref())
.split(size);
let top_chunk = chunks[0];
let bottom_chunk = chunks[1];
let top_chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)].as_ref())
.split(top_chunk);
let top_left_chunk = top_chunks[0];
let top_right_chunk = top_chunks[1];
let c = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)].as_ref())
.split(top_left_chunk);
let top_top_left_chunk = c[0];
let top_bot_left_chunk = c[1];
let c = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)].as_ref())
.split(top_right_chunk);
let top_top_right_chunk = c[0];
let top_bot_right_chunk = c[1];
f.render_widget(
LogWidget::new(PAGESERVICE_DRAIN.as_ref(), "Page Service"),
top_top_left_chunk,
);
f.render_widget(
LogWidget::new(WALREDO_DRAIN.as_ref(), "WAL Redo"),
top_bot_left_chunk,
);
f.render_widget(
LogWidget::new(WALRECEIVER_DRAIN.as_ref(), "WAL Receiver"),
top_top_right_chunk,
);
f.render_widget(MetricsWidget {}, top_bot_right_chunk);
f.render_widget(
LogWidget::new(CATCHALL_DRAIN.as_ref(), "All Log").show_module(true),
bottom_chunk,
);
})?;
// If the user presses 'q', quit.
// Silence clippy's suggestion to rewrite this as an if-statement: a match
// makes more sense as soon as we handle commands other than 'q'.
#[allow(clippy::single_match)]
#[allow(clippy::collapsible_match)]
if let Event::Input(key) = events.next()? {
match key {
Key::Char('q') => {
break;
}
_ => (),
}
}
}
terminal.show_cursor().unwrap();
terminal.clear().unwrap();
Ok(())
}
#[allow(dead_code)]
struct LogWidget<'a> {
logger: &'a TuiLogger,
title: &'a str,
show_module: bool,
}
impl<'a> LogWidget<'a> {
fn new(logger: &'a TuiLogger, title: &'a str) -> LogWidget<'a> {
LogWidget {
logger,
title,
show_module: false,
}
}
fn show_module(mut self, b: bool) -> LogWidget<'a> {
self.show_module = b;
self
}
}
impl<'a> Widget for LogWidget<'a> {
fn render(self, area: Rect, buf: &mut Buffer) {
let w = TuiLoggerWidget::default(self.logger)
.block(
Block::default()
.borders(Borders::ALL)
.title(self.title)
.border_type(BorderType::Rounded),
)
.show_module(true)
.style_error(Style::default().fg(Color::Red))
.style_warn(Style::default().fg(Color::Yellow))
.style_info(Style::default().fg(Color::Green));
w.render(area, buf);
}
}
// Render a widget to show some metrics
struct MetricsWidget {}
fn _get_metric_u64(title: &str, value: u64) -> Spans {
Spans::from(vec![
Span::styled(format!("{:<20}", title), Style::default()),
Span::raw(": "),
Span::styled(
value.to_string(),
Style::default().add_modifier(Modifier::BOLD),
),
])
}
// This is not used since LSNs were removed from page cache stats.
// Maybe it will be used in the future?
fn _get_metric_str<'a>(title: &str, value: &'a str) -> Spans<'a> {
Spans::from(vec![
Span::styled(format!("{:<20}", title), Style::default()),
Span::raw(": "),
Span::styled(value, Style::default().add_modifier(Modifier::BOLD)),
])
}
impl tui::widgets::Widget for MetricsWidget {
fn render(self, area: Rect, buf: &mut Buffer) {
let block = Block::default()
.borders(Borders::ALL)
.title("Page Cache Metrics")
.border_type(BorderType::Rounded);
let inner_area = block.inner(area);
block.render(area, buf);
#[allow(unused_mut)]
let mut lines: Vec<Spans> = Vec::new();
// FIXME
//let page_cache_stats = crate::page_cache::get_stats();
// This is not used since LSNs were removed from page cache stats.
// Maybe it will be used in the future?
/*
let lsnrange = format!(
"{} - {}",
page_cache_stats.first_valid_lsn, page_cache_stats.last_valid_lsn
);
let last_valid_recordlsn_str = page_cache_stats.last_record_lsn.to_string();
lines.push(get_metric_str("Valid LSN range", &lsnrange));
lines.push(get_metric_str("Last record LSN", &last_valid_recordlsn_str));
*/
/*
lines.push(get_metric_u64(
"# of cache entries",
page_cache_stats.num_entries,
));
lines.push(get_metric_u64(
"# of page images",
page_cache_stats.num_page_images,
));
lines.push(get_metric_u64(
"# of WAL records",
page_cache_stats.num_wal_records,
));
lines.push(get_metric_u64(
"# of GetPage@LSN calls",
page_cache_stats.num_getpage_requests,
));
*/
let text = Text::from(lines);
Paragraph::new(text).render(inner_area, buf);
}
}


@@ -1,96 +0,0 @@
use std::io;
use std::sync::mpsc;
use std::sync::{
atomic::{AtomicBool, Ordering},
Arc,
};
use std::thread;
use std::time::Duration;
use termion::event::Key;
use termion::input::TermRead;
pub enum Event<I> {
Input(I),
Tick,
}
/// A small event handler that wraps termion input and tick events. Each event
/// type is handled in its own thread and returned to a common `Receiver`
#[allow(dead_code)]
pub struct Events {
rx: mpsc::Receiver<Event<Key>>,
input_handle: thread::JoinHandle<()>,
ignore_exit_key: Arc<AtomicBool>,
tick_handle: thread::JoinHandle<()>,
}
#[derive(Debug, Clone, Copy)]
pub struct Config {
pub exit_key: Key,
pub tick_rate: Duration,
}
impl Default for Config {
fn default() -> Config {
Config {
exit_key: Key::Char('q'),
tick_rate: Duration::from_millis(250),
}
}
}
impl Events {
pub fn new() -> Events {
Events::with_config(Config::default())
}
pub fn with_config(config: Config) -> Events {
let (tx, rx) = mpsc::channel();
let ignore_exit_key = Arc::new(AtomicBool::new(false));
let input_handle = {
let tx = tx.clone();
let ignore_exit_key = ignore_exit_key.clone();
thread::spawn(move || {
let stdin = io::stdin();
for evt in stdin.keys() {
// This will panic if stdin returns EOF.
let key = evt.unwrap();
if let Err(err) = tx.send(Event::Input(key)) {
eprintln!("{}", err);
return;
}
if !ignore_exit_key.load(Ordering::Relaxed) && key == config.exit_key {
return;
}
}
})
};
let tick_handle = {
thread::spawn(move || loop {
if tx.send(Event::Tick).is_err() {
break;
}
thread::sleep(config.tick_rate);
})
};
Events {
rx,
input_handle,
ignore_exit_key,
tick_handle,
}
}
pub fn next(&self) -> Result<Event<Key>, mpsc::RecvError> {
self.rx.recv()
}
pub fn disable_exit_key(&mut self) {
self.ignore_exit_key.store(true, Ordering::Relaxed);
}
pub fn enable_exit_key(&mut self) {
self.ignore_exit_key.store(false, Ordering::Relaxed);
}
}


@@ -1,199 +0,0 @@
//
// A TUI Widget that displays log entries
//
// This is heavily inspired by gin66's tui_logger crate at https://github.com/gin66/tui-logger,
// but I wrote this based on the 'slog' module, which simplified things a lot. tui-logger also
// implemented the slog Drain trait, but it had a model of one global buffer for the records.
// With this implementation, each TuiLogger is a separate ring buffer and separate slog Drain.
// Also, I didn't do any of the "hot log" stuff that gin66's implementation had; you can use an
// AsyncDrain to buffer and handle overflow if desired.
//
use chrono::offset::Local;
use chrono::DateTime;
use slog::{Drain, Level, OwnedKVList, Record};
use slog_async::AsyncRecord;
use std::collections::VecDeque;
use std::sync::Mutex;
use std::time::SystemTime;
use tui::buffer::Buffer;
use tui::layout::Rect;
use tui::style::{Modifier, Style};
use tui::text::{Span, Spans};
use tui::widgets::{Block, Paragraph, Widget, Wrap};
// Size of the log ring buffer, in # of records
static BUFFER_SIZE: usize = 1000;
pub struct TuiLogger {
events: Mutex<VecDeque<(SystemTime, AsyncRecord)>>,
}
impl<'a> Default for TuiLogger {
fn default() -> TuiLogger {
TuiLogger {
events: Mutex::new(VecDeque::with_capacity(BUFFER_SIZE)),
}
}
}
impl Drain for TuiLogger {
type Ok = ();
type Err = slog::Error;
fn log(&self, record: &Record, values: &OwnedKVList) -> Result<Self::Ok, Self::Err> {
let mut events = self.events.lock().unwrap();
let now = SystemTime::now();
let asyncrec = AsyncRecord::from(record, values);
events.push_front((now, asyncrec));
if events.len() > BUFFER_SIZE {
events.pop_back();
}
Ok(())
}
}
// TuiLoggerWidget renders a TuiLogger ring buffer
pub struct TuiLoggerWidget<'b> {
block: Option<Block<'b>>,
/// Base style of the widget
style: Style,
/// Level based style
style_error: Option<Style>,
style_warn: Option<Style>,
style_debug: Option<Style>,
style_trace: Option<Style>,
style_info: Option<Style>,
show_module: bool,
logger: &'b TuiLogger,
}
impl<'b> TuiLoggerWidget<'b> {
pub fn default(logger: &'b TuiLogger) -> TuiLoggerWidget<'b> {
TuiLoggerWidget {
block: None,
style: Default::default(),
style_error: None,
style_warn: None,
style_debug: None,
style_trace: None,
style_info: None,
show_module: true,
logger,
}
}
}
impl<'b> TuiLoggerWidget<'b> {
pub fn block(mut self, block: Block<'b>) -> TuiLoggerWidget<'b> {
self.block = Some(block);
self
}
#[allow(unused)]
pub fn style(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style = style;
self
}
pub fn style_error(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_error = Some(style);
self
}
pub fn style_warn(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_warn = Some(style);
self
}
pub fn style_info(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_info = Some(style);
self
}
#[allow(unused)]
pub fn style_trace(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_trace = Some(style);
self
}
#[allow(unused)]
pub fn style_debug(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_debug = Some(style);
self
}
pub fn show_module(mut self, b: bool) -> TuiLoggerWidget<'b> {
self.show_module = b;
self
}
}
impl<'b> Widget for TuiLoggerWidget<'b> {
fn render(mut self, area: Rect, buf: &mut Buffer) {
buf.set_style(area, self.style);
let list_area = match self.block.take() {
Some(b) => {
let inner_area = b.inner(area);
b.render(area, buf);
inner_area
}
None => area,
};
if list_area.width == 0 || list_area.height == 0 {
return;
}
let la_height = list_area.height as usize;
//
// Iterate through the records in the buffer. The records are
// pushed to the front, so the newest records come first.
//
let mut lines: Vec<Spans> = Vec::new();
let style_msg = Style::default().add_modifier(Modifier::BOLD);
{
let events = self.logger.events.lock().unwrap();
for evt in events.iter() {
let (timestamp, rec) = evt;
rec.as_record_values(|rec, _kwlist| {
let mut line: Vec<Span> = Vec::new();
let datetime: DateTime<Local> = timestamp.clone().into();
let ts = format!("{}", datetime.format("%H:%M:%S%.3f "));
line.push(Span::raw(ts));
let (lvl_style, txt, with_loc) = match rec.level() {
Level::Critical => (self.style_error, "CRIT ", true),
Level::Error => (self.style_error, "ERROR", true),
Level::Warning => (self.style_warn, "WARN ", true),
Level::Info => (self.style_info, "INFO ", false),
Level::Debug => (self.style_debug, "DEBUG", true),
Level::Trace => (self.style_trace, "TRACE", true),
};
line.push(Span::styled(txt, lvl_style.unwrap_or_default()));
if self.show_module {
line.push(Span::raw(" "));
line.push(Span::raw(rec.module()));
}
if with_loc {
let loc = format!(" {}:{}", rec.file(), rec.line());
line.push(Span::raw(loc));
}
let msg = format!(" {}", rec.msg());
line.push(Span::styled(msg, style_msg));
lines.push(Spans::from(line));
});
if lines.len() == la_height {
break;
}
}
}
lines.reverse();
let text = tui::text::Text::from(lines);
Paragraph::new(text)
.wrap(Wrap { trim: true })
.render(list_area, buf);
}
}


@@ -3,18 +3,18 @@
//! the record affects, to add the records to the page cache.
//!
use bytes::{Buf, BufMut, Bytes, BytesMut};
use crc32c::*;
use log::*;
use postgres_ffi::pg_constants;
use postgres_ffi::xlog_utils::*;
use postgres_ffi::XLogLongPageHeaderData;
use postgres_ffi::XLogPageHeaderData;
use postgres_ffi::XLogRecord;
use postgres_ffi::{Oid, TransactionId};
use std::cmp::min;
use thiserror::Error;
use zenith_utils::lsn::Lsn;
pub type Oid = u32;
pub type TransactionId = u32;
pub type BlockNumber = u32;
pub type OffsetNumber = u16;
pub type MultiXactId = TransactionId;
@@ -167,6 +167,14 @@ impl WalStreamDecoder {
// XLOG_SWITCH records are special. If we see one, we need to skip
// to the next WAL segment.
let xlogrec = XLogRecord::from_bytes(&mut buf);
let mut crc = crc32c_append(0, &recordbuf[XLOG_RECORD_CRC_OFFS + 4..]);
crc = crc32c_append(crc, &recordbuf[0..XLOG_RECORD_CRC_OFFS]);
if crc != xlogrec.xl_crc {
return Err(WalDecodeError {
msg: "WAL record crc mismatch".into(),
lsn: self.lsn,
});
}
if xlogrec.is_xlog_switch_record() {
trace!("saw xlog switch record at {}", self.lsn);
self.padlen =
@@ -496,6 +504,24 @@ impl XlXactParsedRecord {
}
}
#[repr(C)]
#[derive(Debug)]
pub struct XlClogTruncate {
pub pageno: u32,
pub oldest_xid: TransactionId,
pub oldest_xid_db: Oid,
}
impl XlClogTruncate {
pub fn decode(buf: &mut Bytes) -> XlClogTruncate {
XlClogTruncate {
pageno: buf.get_u32_le(),
oldest_xid: buf.get_u32_le(),
oldest_xid_db: buf.get_u32_le(),
}
}
}
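A quick sketch of the little-endian layout this decoder expects (hypothetical byte values, written as a test-style helper; it relies on the `Bytes` import already at the top of this file):

fn xl_clog_truncate_example() {
    // 12 bytes: pageno = 1, oldest_xid = 100, oldest_xid_db = 5, all little-endian u32s.
    let mut buf = Bytes::from_static(&[1, 0, 0, 0, 100, 0, 0, 0, 5, 0, 0, 0]);
    let rec = XlClogTruncate::decode(&mut buf);
    assert_eq!(rec.pageno, 1);
    assert_eq!(rec.oldest_xid, 100);
    assert_eq!(rec.oldest_xid_db, 5);
}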
#[repr(C)]
#[derive(Debug)]
pub struct MultiXactMember {


@@ -4,12 +4,11 @@
//!
//! We keep one WAL receiver active per timeline.
use crate::object_key::*;
use crate::page_cache;
use crate::relish::*;
use crate::restore_local_repo;
use crate::waldecoder::*;
use crate::PageServerConf;
use crate::ZTimelineId;
use crate::{PageServerConf, RepositoryFormat};
use anyhow::{Error, Result};
use lazy_static::lazy_static;
use log::*;
@@ -25,13 +24,14 @@ use std::collections::HashMap;
use std::fs;
use std::fs::{File, OpenOptions};
use std::io::{Seek, SeekFrom, Write};
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Mutex;
use std::thread;
use std::thread::sleep;
use std::time::{Duration, SystemTime};
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTenantId;
use zenith_utils::zid::ZTimelineId;
//
// We keep one WAL Receiver active per timeline.
@@ -50,6 +50,7 @@ pub fn launch_wal_receiver(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
wal_producer_connstr: &str,
tenantid: ZTenantId,
) {
let mut receivers = WAL_RECEIVERS.lock().unwrap();
@@ -67,7 +68,7 @@ pub fn launch_wal_receiver(
let _walreceiver_thread = thread::Builder::new()
.name("WAL receiver thread".into())
.spawn(move || {
thread_main(conf, timelineid);
thread_main(conf, timelineid, &tenantid);
})
.unwrap();
}
@@ -88,7 +89,7 @@ fn get_wal_producer_connstr(timelineid: ZTimelineId) -> String {
//
// This is the entry point for the WAL receiver thread.
//
fn thread_main(conf: &'static PageServerConf, timelineid: ZTimelineId) {
fn thread_main(conf: &'static PageServerConf, timelineid: ZTimelineId, tenantid: &ZTenantId) {
info!(
"WAL receiver thread started for timeline : '{}'",
timelineid
@@ -102,7 +103,7 @@ fn thread_main(conf: &'static PageServerConf, timelineid: ZTimelineId) {
// Look up the current WAL producer address
let wal_producer_connstr = get_wal_producer_connstr(timelineid);
let res = walreceiver_main(conf, timelineid, &wal_producer_connstr);
let res = walreceiver_main(conf, timelineid, &wal_producer_connstr, tenantid);
if let Err(e) = res {
info!(
@@ -115,9 +116,10 @@ fn thread_main(conf: &'static PageServerConf, timelineid: ZTimelineId) {
}
fn walreceiver_main(
_conf: &PageServerConf,
conf: &PageServerConf,
timelineid: ZTimelineId,
wal_producer_connstr: &str,
tenantid: &ZTenantId,
) -> Result<(), Error> {
// Connect to the database in replication mode.
info!("connecting to {:?}", wal_producer_connstr);
@@ -129,12 +131,21 @@ fn walreceiver_main(
let mut rclient = Client::connect(&connect_cfg, NoTls)?;
info!("connected!");
// Immediately increment the gauge, then create a job to decrement it on thread exit.
// One of the pros of `defer!` is that this will *most probably*
// get called, even in presence of panics.
let gauge = crate::LIVE_CONNECTIONS_COUNT.with_label_values(&["wal_receiver"]);
gauge.inc();
scopeguard::defer! {
gauge.dec();
}
let identify = identify_system(&mut rclient)?;
info!("{:?}", identify);
let end_of_wal = Lsn::from(u64::from(identify.xlogpos));
let mut caught_up = false;
let repository = page_cache::get_repository();
let repository = page_cache::get_repository_for_tenant(tenantid)?;
let timeline = repository.get_timeline(timelineid).unwrap();
//
@@ -169,8 +180,7 @@ fn walreceiver_main(
let mut waldecoder = WalStreamDecoder::new(startpoint);
let checkpoint_bytes =
timeline.get_page_at_lsn_nowait(ObjectTag::Checkpoint, startpoint, false)?;
let checkpoint_bytes = timeline.get_page_at_lsn_nowait(RelishTag::Checkpoint, 0, startpoint)?;
let mut checkpoint = CheckPoint::decode(&checkpoint_bytes)?;
trace!("CheckPoint.nextXid = {}", checkpoint.nextXid.value);
@@ -184,7 +194,14 @@ fn walreceiver_main(
let endlsn = startlsn + data.len() as u64;
let prev_last_rec_lsn = last_rec_lsn;
write_wal_file(startlsn, timelineid, pg_constants::WAL_SEGMENT_SIZE, data)?;
write_wal_file(
conf,
startlsn,
&timelineid,
pg_constants::WAL_SEGMENT_SIZE,
data,
tenantid,
)?;
trace!("received XLogData between {} and {}", startlsn, endlsn);
@@ -207,7 +224,8 @@ fn walreceiver_main(
// Check if checkpoint data was updated by save_decoded_record
if new_checkpoint_bytes != old_checkpoint_bytes {
timeline.put_page_image(
ObjectTag::Checkpoint,
RelishTag::Checkpoint,
0,
lsn,
new_checkpoint_bytes,
false,
@@ -238,13 +256,19 @@ fn walreceiver_main(
{
info!("switched segment {} to {}", prev_last_rec_lsn, last_rec_lsn);
let (oldest_segno, newest_segno) = find_wal_file_range(
timelineid,
conf,
&timelineid,
pg_constants::WAL_SEGMENT_SIZE,
last_rec_lsn,
tenantid,
)?;
if newest_segno - oldest_segno >= 10 {
timeline.checkpoint()?;
// FIXME: The layered repository performs checkpointing in a separate thread, so this
// isn't needed anymore. Remove 'checkpoint' from the Timeline trait altogether?
if conf.repository_format == RepositoryFormat::RocksDb {
timeline.checkpoint()?;
}
// TODO: This is where we could remove WAL older than last_rec_lsn.
//remove_wal_files(timelineid, pg_constants::WAL_SEGMENT_SIZE, last_rec_lsn)?;
@@ -297,16 +321,18 @@ fn walreceiver_main(
}
fn find_wal_file_range(
timeline: ZTimelineId,
conf: &PageServerConf,
timeline: &ZTimelineId,
wal_seg_size: usize,
written_upto: Lsn,
tenant: &ZTenantId,
) -> Result<(u64, u64)> {
let written_upto_segno = written_upto.segment_number(wal_seg_size);
let mut oldest_segno = written_upto_segno;
let mut newest_segno = written_upto_segno;
// Scan the WAL directory and count how many WAL files we could remove
let wal_dir = PathBuf::from(format!("timelines/{}/wal", timeline));
let wal_dir = conf.wal_dir_path(timeline, tenant);
for entry in fs::read_dir(wal_dir)? {
let entry = entry?;
let path = entry.path();
@@ -383,10 +409,12 @@ pub fn identify_system(client: &mut Client) -> Result<IdentifySystem, Error> {
}
fn write_wal_file(
conf: &PageServerConf,
startpos: Lsn,
timeline: ZTimelineId,
timelineid: &ZTimelineId,
wal_seg_size: usize,
buf: &[u8],
tenantid: &ZTenantId,
) -> anyhow::Result<()> {
let mut bytes_left: usize = buf.len();
let mut bytes_written: usize = 0;
@@ -394,7 +422,7 @@ fn write_wal_file(
let mut start_pos = startpos;
const ZERO_BLOCK: &[u8] = &[0u8; XLOG_BLCKSZ];
let wal_dir = PathBuf::from(format!("timelines/{}/wal", timeline));
let wal_dir = conf.wal_dir_path(timelineid, tenantid);
/* Extract WAL location for this block */
let mut xlogoff = start_pos.segment_offset(wal_seg_size);


@@ -1,6 +1,7 @@
//!
//! WAL redo. This service runs PostgreSQL in a special wal_redo mode
//! to apply given WAL records over an old page image and return new page image.
//! to apply given WAL records over an old page image and return new
//! page image.
//!
//! We rely on Postgres to perform WAL redo for us. We launch a
//! postgres process in special "wal redo" mode that's similar to
@@ -12,12 +13,16 @@
//! See src/backend/tcop/zenith_wal_redo.c for the other side of
//! this communication.
//!
//! TODO: Even though the postgres code runs in a separate process,
//! it's not a secure sandbox.
//! The Postgres process is assumed to be secure against malicious WAL
//! records. It achieves this by dropping privileges before replaying
//! any WAL records, so that even if an attacker hijacks the Postgres
//! process, they cannot escape from it.
//!
use byteorder::{ByteOrder, LittleEndian};
use bytes::{Buf, BufMut, Bytes, BytesMut};
use lazy_static::lazy_static;
use log::*;
use serde::{Deserialize, Serialize};
use std::cell::RefCell;
use std::fs;
use std::fs::OpenOptions;
@@ -25,8 +30,6 @@ use std::io::prelude::*;
use std::io::Error;
use std::path::PathBuf;
use std::process::Stdio;
use std::sync::mpsc;
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use std::time::Instant;
@@ -34,11 +37,12 @@ use tokio::io::AsyncBufReadExt;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::process::{ChildStdin, ChildStdout, Command};
use tokio::time::timeout;
use zenith_metrics::{register_histogram, register_int_counter, Histogram, IntCounter};
use zenith_utils::bin_ser::BeSer;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTenantId;
use crate::object_key::*;
use crate::repository::BufferTag;
use crate::relish::*;
use crate::repository::WALRecord;
use crate::waldecoder::XlXactParsedRecord;
use crate::waldecoder::{MultiXactId, XlMultiXactCreate};
@@ -47,6 +51,18 @@ use postgres_ffi::nonrelfile_utils::transaction_id_set_status;
use postgres_ffi::pg_constants;
use postgres_ffi::XLogRecord;
///
/// `RelTag` + block number (`blknum`) gives us a unique id of the page in the cluster.
///
/// In Postgres `BufferTag` structure is used for exactly the same purpose.
/// [See more related comments here](https://github.com/postgres/postgres/blob/99c5852e20a0987eca1c38ba0c09329d4076b6a0/src/include/storage/buf_internals.h#L91).
///
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Serialize, Deserialize)]
pub struct BufferTag {
pub rel: RelTag,
pub blknum: u32,
}
///
/// WAL Redo Manager is responsible for replaying WAL records.
///
@@ -60,7 +76,8 @@ pub trait WalRedoManager: Send + Sync {
/// the records.
fn request_redo(
&self,
tag: ObjectTag,
rel: RelishTag,
blknum: u32,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
@@ -76,7 +93,8 @@ pub struct DummyRedoManager {}
impl crate::walredo::WalRedoManager for DummyRedoManager {
fn request_redo(
&self,
_tag: ObjectTag,
_rel: RelishTag,
_blknum: u32,
_lsn: Lsn,
_base_img: Option<Bytes>,
_records: Vec<WALRecord>,
@@ -87,34 +105,50 @@ impl crate::walredo::WalRedoManager for DummyRedoManager {
static TIMEOUT: Duration = Duration::from_secs(20);
// Metrics collected on WAL redo operations
//
// We collect the time spent in actual WAL redo ('redo'), and time waiting
// for access to the postgres process ('wait') since there is only one for
// each tenant.
lazy_static! {
static ref WAL_REDO_TIME: Histogram =
register_histogram!("pageserver_wal_redo_time", "Time spent on WAL redo")
.expect("failed to define a metric");
static ref WAL_REDO_WAIT_TIME: Histogram = register_histogram!(
"pageserver_wal_redo_wait_time",
"Time spent waiting for access to the WAL redo process"
)
.expect("failed to define a metric");
static ref WAL_REDO_RECORD_COUNTER: IntCounter = register_int_counter!(
"pageserver_wal_records_replayed",
"Number of WAL records replayed"
)
.unwrap();
}
///
/// The implementation consists of two parts: PostgresRedoManager, and
/// PostgresRedoManagerInternal. PostgresRedoManager is the public struct
/// that can be used to send redo requests to the manager.
/// PostgresRedoManagerInternal is used by the manager thread itself.
/// This is the real implementation that uses a Postgres process to
/// perform WAL replay. Only one thread can use the process at a time;
/// access is controlled by the Mutex. In the future, we might want to
/// launch a pool of processes to allow concurrent replay of multiple
/// records.
///
pub struct PostgresRedoManager {
request_tx: Mutex<mpsc::Sender<WalRedoRequest>>,
}
struct PostgresRedoManagerInternal {
tenantid: ZTenantId,
conf: &'static PageServerConf,
request_rx: mpsc::Receiver<WalRedoRequest>,
}
#[derive(Debug)]
struct WalRedoRequestData {
tag: ObjectTag,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
runtime: tokio::runtime::Runtime,
process: Mutex<Option<PostgresRedoProcess>>,
}
#[derive(Debug)]
struct WalRedoRequest {
data: WalRedoRequestData,
response_channel: mpsc::Sender<Result<Bytes, WalRedoError>>,
rel: RelishTag,
blknum: u32,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
}
/// An error happened in WAL redo
@@ -130,37 +164,6 @@ pub enum WalRedoError {
///
/// Public interface of WAL redo manager
///
impl PostgresRedoManager {
///
/// Create a new PostgresRedoManager.
///
/// This launches a new thread to handle the requests.
pub fn new(conf: &'static PageServerConf) -> PostgresRedoManager {
let (tx, rx) = mpsc::channel();
//
// Launch the WAL redo thread
//
// Get mutable references to the values that we need to pass to the
// thread.
let request_rx = rx;
// Currently, the join handle is not saved anywhere and we
// won't try restart the thread if it dies.
let _walredo_thread = std::thread::Builder::new()
.name("WAL redo thread".into())
.spawn(move || {
let mut internal = PostgresRedoManagerInternal { conf, request_rx };
internal.wal_redo_main();
})
.unwrap();
PostgresRedoManager {
request_tx: Mutex::new(tx),
}
}
}
impl WalRedoManager for PostgresRedoManager {
///
/// Request the WAL redo manager to apply some WAL records
@@ -170,32 +173,47 @@ impl WalRedoManager for PostgresRedoManager {
///
fn request_redo(
&self,
tag: ObjectTag,
rel: RelishTag,
blknum: u32,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
) -> Result<Bytes, WalRedoError> {
// Create a channel where to receive the response
let (tx, rx) = mpsc::channel::<Result<Bytes, WalRedoError>>();
let start_time;
let lock_time;
let end_time;
let request = WalRedoRequest {
data: WalRedoRequestData {
tag,
lsn,
base_img,
records,
},
response_channel: tx,
rel,
blknum,
lsn,
base_img,
records,
};
self.request_tx
.lock()
.unwrap()
.send(request)
.expect("could not send WAL redo request");
start_time = Instant::now();
let result = {
let mut process_guard = self.process.lock().unwrap();
lock_time = Instant::now();
rx.recv()
.expect("could not receive response to WAL redo request")
// launch the WAL redo process on first use
if process_guard.is_none() {
let p = self
.runtime
.block_on(PostgresRedoProcess::launch(self.conf, &self.tenantid))?;
*process_guard = Some(p);
}
let process = (*process_guard).as_ref().unwrap();
self.runtime
.block_on(self.handle_apply_request(&process, &request))
};
end_time = Instant::now();
WAL_REDO_WAIT_TIME.observe(lock_time.duration_since(start_time).as_secs_f64());
WAL_REDO_TIME.observe(end_time.duration_since(lock_time).as_secs_f64());
result
}
}
@@ -217,16 +235,11 @@ fn mx_offset_to_member_offset(xid: MultiXactId) -> usize {
+ (xid as u16 % pg_constants::MULTIXACT_MEMBERS_PER_MEMBERGROUP) * 4) as usize
}
///
/// WAL redo thread
///
impl PostgresRedoManagerInternal {
//
// Main entry point for the WAL applicator thread.
//
fn wal_redo_main(&mut self) {
info!("WAL redo thread started");
impl PostgresRedoManager {
///
/// Create a new PostgresRedoManager.
///
pub fn new(conf: &'static PageServerConf, tenantid: ZTenantId) -> PostgresRedoManager {
// We block on waiting for requests on the walredo request channel, but
// use async I/O to communicate with the child process. Initialize the
// runtime for the async part.
@@ -235,66 +248,25 @@ impl PostgresRedoManagerInternal {
.build()
.unwrap();
let processes: Vec<PostgresRedoProcess>;
info!("launching WAL redo postgres process");
let wal_redoers = self.conf.wal_redoers;
processes = (0..wal_redoers)
.map(|i| {
runtime
.block_on(PostgresRedoProcess::launch(self.conf, i))
.unwrap()
})
.collect();
// Loop forever, handling requests as they come.
loop {
let mut requests: Vec<WalRedoRequest> = Vec::new();
requests.push(
self.request_rx
.recv()
.expect("WAL redo request channel was closed"),
);
loop {
let req = self.request_rx.try_recv();
match req {
Ok(req) => requests.push(req),
Err(_) => break,
}
}
let request_data = requests.iter().map(|req| &req.data);
let mut rr = 0; // round robin
let results = runtime.block_on(async {
let futures = request_data.map(|req| {
rr += 1;
self.handle_apply_request(&processes[rr % wal_redoers], &req)
});
let mut results: Vec<Result<Bytes, WalRedoError>> = Vec::new();
for future in futures {
results.push(future.await);
}
results
});
for (result, request) in results.into_iter().zip(requests.iter()) {
let result_ok = result.is_ok();
// Send the result to the requester
let _ = request.response_channel.send(result);
if !result_ok {
error!("wal-redo-postgres failed to apply request {:?}", request);
}
}
// The actual process is launched lazily, on first request.
PostgresRedoManager {
runtime,
tenantid,
conf,
process: Mutex::new(None),
}
}
///
/// Process one request for WAL redo.
///
async fn handle_apply_request(
&self,
process: &PostgresRedoProcess,
request: &WalRedoRequestData,
request: &WalRedoRequest,
) -> Result<Bytes, WalRedoError> {
let tag = request.tag;
let rel = request.rel;
let blknum = request.blknum;
let lsn = request.lsn;
let base_img = request.base_img.clone();
let records = &request.records;
@@ -304,11 +276,13 @@ impl PostgresRedoManagerInternal {
let start = Instant::now();
let apply_result: Result<Bytes, Error>;
if let ObjectTag::RelationBuffer(buf_tag) = tag {
if let RelishTag::Relation(rel) = rel {
// Relational WAL records are applied using wal-redo-postgres
let buf_tag = BufferTag { rel, blknum };
apply_result = process.apply_wal_records(buf_tag, base_img, records).await;
} else {
// Non-relational WAL records we apply ourselves.
// Non-relational WAL records are handled here, with custom code that has the
// same effects as the corresponding Postgres WAL redo function.
const ZERO_PAGE: [u8; 8192] = [0u8; 8192];
let mut page = BytesMut::new();
if let Some(fpi) = base_img {
@@ -322,6 +296,8 @@ impl PostgresRedoManagerInternal {
for record in records {
let mut buf = record.rec.clone();
WAL_REDO_RECORD_COUNTER.inc();
// 1. Parse XLogRecord struct
// FIXME: refactor to avoid code duplication.
let xlogrec = XLogRecord::from_bytes(&mut buf);
@@ -334,25 +310,16 @@ impl PostgresRedoManagerInternal {
buf.advance(skip);
}
if xlogrec.xl_rmid == pg_constants::RM_CLOG_ID {
let info = xlogrec.xl_info & !pg_constants::XLR_INFO_MASK;
if info == pg_constants::CLOG_ZEROPAGE {
// The only operation we need to implement is CLOG_ZEROPAGE
page.copy_from_slice(&ZERO_PAGE);
}
} else if xlogrec.xl_rmid == pg_constants::RM_XACT_ID {
if xlogrec.xl_rmid == pg_constants::RM_XACT_ID {
// Transaction manager stuff
let info = xlogrec.xl_info & pg_constants::XLOG_XACT_OPMASK;
let tag_blknum = match tag {
ObjectTag::Clog(slru) => slru.blknum,
ObjectTag::TwoPhase(_) => {
assert!(info == pg_constants::XLOG_XACT_PREPARE);
trace!("Apply prepare {} record", xlogrec.xl_xid);
page.clear();
page.extend_from_slice(&buf[..]);
continue;
let rec_segno = match rel {
RelishTag::Slru { slru, segno } => {
if slru != SlruKind::Clog {
panic!("Not valid XACT relish tag {:?}", rel);
}
segno
}
_ => panic!("Not valid XACT object tag {:?}", tag),
_ => panic!("Not valid XACT relish tag {:?}", rel),
};
let parsed_xact =
XlXactParsedRecord::decode(&mut buf, xlogrec.xl_xid, xlogrec.xl_info);
@@ -365,9 +332,11 @@ impl PostgresRedoManagerInternal {
&mut page,
);
for subxact in &parsed_xact.subxacts {
let blkno = *subxact as u32 / pg_constants::CLOG_XACTS_PER_PAGE;
let pageno = *subxact as u32 / pg_constants::CLOG_XACTS_PER_PAGE;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
// only update xids on the requested page
if tag_blknum == blkno {
if rec_segno == segno && blknum == rpageno {
transaction_id_set_status(
*subxact,
pg_constants::TRANSACTION_STATUS_SUB_COMMITTED,
@@ -384,9 +353,11 @@ impl PostgresRedoManagerInternal {
&mut page,
);
for subxact in &parsed_xact.subxacts {
let blkno = *subxact as u32 / pg_constants::CLOG_XACTS_PER_PAGE;
let pageno = *subxact as u32 / pg_constants::CLOG_XACTS_PER_PAGE;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
// only update xids on the requested page
if tag_blknum == blkno {
if rec_segno == segno && blknum == rpageno {
transaction_id_set_status(
*subxact,
pg_constants::TRANSACTION_STATUS_ABORTED,
@@ -396,51 +367,59 @@ impl PostgresRedoManagerInternal {
}
}
} else if xlogrec.xl_rmid == pg_constants::RM_MULTIXACT_ID {
// Multiexact operations
// Multixact operations
let info = xlogrec.xl_info & pg_constants::XLR_RMGR_INFO_MASK;
if info == pg_constants::XLOG_MULTIXACT_ZERO_OFF_PAGE
|| info == pg_constants::XLOG_MULTIXACT_ZERO_MEM_PAGE
{
// Just need to zero page
page.copy_from_slice(&ZERO_PAGE);
} else if info == pg_constants::XLOG_MULTIXACT_CREATE_ID {
if info == pg_constants::XLOG_MULTIXACT_CREATE_ID {
let xlrec = XlMultiXactCreate::decode(&mut buf);
if let ObjectTag::MultiXactMembers(slru) = tag {
for i in 0..xlrec.nmembers {
let blkno = i / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
if blkno == slru.blknum {
// update only target block
let offset = xlrec.moff + i;
let memberoff = mx_offset_to_member_offset(offset);
let flagsoff = mx_offset_to_flags_offset(offset);
let bshift = mx_offset_to_flags_bitshift(offset);
let mut flagsval =
LittleEndian::read_u32(&page[flagsoff..flagsoff + 4]);
flagsval &=
!(((1 << pg_constants::MXACT_MEMBER_BITS_PER_XACT) - 1)
if let RelishTag::Slru {
slru,
segno: rec_segno,
} = rel
{
if slru == SlruKind::MultiXactMembers {
for i in 0..xlrec.nmembers {
let pageno =
i / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
if segno == rec_segno && rpageno == blknum {
// update only target block
let offset = xlrec.moff + i;
let memberoff = mx_offset_to_member_offset(offset);
let flagsoff = mx_offset_to_flags_offset(offset);
let bshift = mx_offset_to_flags_bitshift(offset);
let mut flagsval =
LittleEndian::read_u32(&page[flagsoff..flagsoff + 4]);
flagsval &= !(((1
<< pg_constants::MXACT_MEMBER_BITS_PER_XACT)
- 1)
<< bshift);
flagsval |= xlrec.members[i as usize].status << bshift;
LittleEndian::write_u32(
&mut page[flagsoff..flagsoff + 4],
flagsval,
);
LittleEndian::write_u32(
&mut page[memberoff..memberoff + 4],
xlrec.members[i as usize].xid,
);
flagsval |= xlrec.members[i as usize].status << bshift;
LittleEndian::write_u32(
&mut page[flagsoff..flagsoff + 4],
flagsval,
);
LittleEndian::write_u32(
&mut page[memberoff..memberoff + 4],
xlrec.members[i as usize].xid,
);
}
}
} else {
// Multixact offsets SLRU
let offs = (xlrec.mid
% pg_constants::MULTIXACT_OFFSETS_PER_PAGE as u32
* 4) as usize;
LittleEndian::write_u32(&mut page[offs..offs + 4], xlrec.moff);
}
} else {
// Multixact offsets SLRU
let offs = (xlrec.mid % pg_constants::MULTIXACT_OFFSETS_PER_PAGE as u32
* 4) as usize;
LittleEndian::write_u32(&mut page[offs..offs + 4], xlrec.moff);
panic!();
}
} else {
panic!();
}
} else if xlogrec.xl_rmid == pg_constants::RM_RELMAP_ID {
// Ralation map file has size 512 bytes
// Relation map file has size 512 bytes
page.clear();
page.extend_from_slice(&buf[12..]); // skip xl_relmap_update
assert!(page.len() == 512); // size of pg_filenode.map
@@ -475,20 +454,26 @@ impl PostgresRedoManagerInternal {
}
}
///
/// Handle to the Postgres WAL redo process
///
struct PostgresRedoProcess {
stdin: Arc<RefCell<ChildStdin>>,
stdout: Arc<RefCell<ChildStdout>>,
stdin: RefCell<ChildStdin>,
stdout: RefCell<ChildStdout>,
}
impl PostgresRedoProcess {
//
// Start postgres binary in special WAL redo mode.
//
async fn launch(conf: &PageServerConf, id: usize) -> Result<PostgresRedoProcess, Error> {
async fn launch(
conf: &PageServerConf,
tenantid: &ZTenantId,
) -> Result<PostgresRedoProcess, Error> {
// FIXME: We need a dummy Postgres cluster to run the process in. Currently, we
// just create one with a constant name. That fails if you try to launch more than
// one WAL redo manager concurrently.
let datadir = conf.workdir.join(format!("wal-redo-datadir-{}", id));
let datadir = conf.tenant_path(&tenantid).join("wal-redo-datadir");
// Create empty data directory for wal-redo postgres, deleting old one first.
if datadir.exists() {
@@ -568,8 +553,8 @@ impl PostgresRedoProcess {
tokio::spawn(f_stderr);
Ok(PostgresRedoProcess {
stdin: Arc::new(RefCell::new(stdin)),
stdout: Arc::new(RefCell::new(stdout)),
stdin: RefCell::new(stdin),
stdout: RefCell::new(stdout),
})
}
@@ -613,6 +598,8 @@ impl PostgresRedoProcess {
for rec in records.iter() {
let r = rec.clone();
WAL_REDO_RECORD_COUNTER.inc();
stdin
.write_all(&build_apply_record_msg(r.lsn, r.rec))
.await?;


@@ -20,6 +20,7 @@ log = "0.4.14"
memoffset = "0.6.2"
thiserror = "1.0"
workspace_hack = { path = "../workspace_hack" }
zenith_utils = { path = "../zenith_utils" }
[build-dependencies]
bindgen = "0.57"


@@ -0,0 +1,98 @@
# PostgreSQL Client Authentication Configuration File
# ===================================================
#
# Refer to the "Client Authentication" section in the PostgreSQL
# documentation for a complete description of this file. A short
# synopsis follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access. Records take one of these forms:
#
# local DATABASE USER METHOD [OPTIONS]
# host DATABASE USER ADDRESS METHOD [OPTIONS]
# hostssl DATABASE USER ADDRESS METHOD [OPTIONS]
# hostnossl DATABASE USER ADDRESS METHOD [OPTIONS]
# hostgssenc DATABASE USER ADDRESS METHOD [OPTIONS]
# hostnogssenc DATABASE USER ADDRESS METHOD [OPTIONS]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type:
# - "local" is a Unix-domain socket
# - "host" is a TCP/IP socket (encrypted or not)
# - "hostssl" is a TCP/IP socket that is SSL-encrypted
# - "hostnossl" is a TCP/IP socket that is not SSL-encrypted
# - "hostgssenc" is a TCP/IP socket that is GSSAPI-encrypted
# - "hostnogssenc" is a TCP/IP socket that is not GSSAPI-encrypted
#
# DATABASE can be "all", "sameuser", "samerole", "replication", a
# database name, or a comma-separated list thereof. The "all"
# keyword does not match "replication". Access to replication
# must be enabled in a separate record (see example below).
#
# USER can be "all", a user name, a group name prefixed with "+", or a
# comma-separated list thereof. In both the DATABASE and USER fields
# you can also write a file name prefixed with "@" to include names
# from a separate file.
#
# ADDRESS specifies the set of hosts the record matches. It can be a
# host name, or it is made up of an IP address and a CIDR mask that is
# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that
# specifies the number of significant bits in the mask. A host name
# that starts with a dot (.) matches a suffix of the actual host name.
# Alternatively, you can write an IP address and netmask in separate
# columns to specify the set of hosts. Instead of a CIDR-address, you
# can write "samehost" to match any of the server's own IP addresses,
# or "samenet" to match any address in any subnet that the server is
# directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
# Note that "password" sends passwords in clear text; "md5" or
# "scram-sha-256" are preferred since they send encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE. The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted. Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the server receives a
# SIGHUP signal. If you edit the file on a running system, you have to
# SIGHUP the server for the changes to take effect, run "pg_ctl reload",
# or execute "SELECT pg_reload_conf()".
#
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records. In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.
# CAUTION: Configuring the system for local "trust" authentication
# allows any local user to connect as any PostgreSQL user, including
# the database superuser. If you do not trust all your local users,
# use another authentication method.
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust


@@ -1,6 +1,9 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
// suppress warnings on rust 1.53 due to bindgen unit tests.
// https://github.com/rust-lang/rust-bindgen/issues/1651
#![allow(deref_nullptr)]
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
pub mod controlfile_utils;


@@ -46,7 +46,6 @@ pub const SIZE_OF_PAGE_HEADER: u16 = 24;
pub const BITS_PER_HEAPBLOCK: u16 = 2;
pub const HEAPBLOCKS_PER_PAGE: u16 = (BLCKSZ - SIZE_OF_PAGE_HEADER) * 8 / BITS_PER_HEAPBLOCK;
pub const TRANSACTION_STATUS_IN_PROGRESS: u8 = 0x00;
pub const TRANSACTION_STATUS_COMMITTED: u8 = 0x01;
pub const TRANSACTION_STATUS_ABORTED: u8 = 0x02;
pub const TRANSACTION_STATUS_SUB_COMMITTED: u8 = 0x03;
@@ -185,3 +184,41 @@ pub const XLOG_BLCKSZ: usize = 8192;
pub const XLOG_CHECKPOINT_SHUTDOWN: u8 = 0x00;
pub const XLOG_CHECKPOINT_ONLINE: u8 = 0x10;
pub const XLP_LONG_HEADER: u16 = 0x0002;
pub const PG_MAJORVERSION: &'static str = "14";
// List of subdirectories inside pgdata.
// Copied from src/bin/initdb/initdb.c
pub const PGDATA_SUBDIRS: [&'static str; 22] = [
"global",
"pg_wal/archive_status",
"pg_commit_ts",
"pg_dynshmem",
"pg_notify",
"pg_serial",
"pg_snapshots",
"pg_subtrans",
"pg_twophase",
"pg_multixact",
"pg_multixact/members",
"pg_multixact/offsets",
"base",
"base/1",
"pg_replslot",
"pg_tblspc",
"pg_stat",
"pg_stat_tmp",
"pg_xact",
"pg_logical",
"pg_logical/snapshots",
"pg_logical/mappings",
];
pub const PGDATA_SPECIAL_FILES: [&'static str; 4] = [
"pg_hba.conf",
"pg_ident.conf",
"postgresql.conf",
"postgresql.auto.conf",
];
pub static PG_HBA: &'static str = include_str!("../samples/pg_hba.conf");


@@ -332,29 +332,6 @@ impl XLogLongPageHeaderData {
pub const SIZEOF_CHECKPOINT: usize = std::mem::size_of::<CheckPoint>();
impl CheckPoint {
pub fn new(lsn: u64, timeline: u32) -> CheckPoint {
CheckPoint {
redo: lsn,
ThisTimeLineID: timeline,
PrevTimeLineID: timeline,
fullPageWrites: true, // TODO: get actual value of full_page_writes
nextXid: FullTransactionId {
value: pg_constants::FIRST_NORMAL_TRANSACTION_ID as u64,
}, // TODO: handle epoch?
nextOid: pg_constants::FIRST_BOOTSTRAP_OBJECT_ID,
nextMulti: 1,
nextMultiOffset: 0,
oldestXid: pg_constants::FIRST_NORMAL_TRANSACTION_ID,
oldestXidDB: 0,
oldestMulti: 1,
oldestMultiDB: 0,
time: 0,
oldestCommitTsXid: 0,
newestCommitTsXid: 0,
oldestActiveXid: pg_constants::INVALID_TRANSACTION_ID,
}
}
pub fn encode(&self) -> Bytes {
let b: [u8; SIZEOF_CHECKPOINT];
b = unsafe { std::mem::transmute::<CheckPoint, [u8; SIZEOF_CHECKPOINT]>(*self) };
@@ -451,3 +428,73 @@ pub fn generate_wal_segment(pg_control: &ControlFileData) -> Bytes {
seg_buf.resize(pg_constants::WAL_SEGMENT_SIZE, 0);
seg_buf.freeze()
}
#[cfg(test)]
mod tests {
use super::*;
use regex::Regex;
use std::{env, process::Command, str::FromStr};
use zenith_utils::lsn::Lsn;
// Run find_end_of_wal against WAL freshly generated by initdb
// and ensure that it finds the last record correctly
#[test]
pub fn test_find_end_of_wal() {
// 1. Run initdb to generate some WAL
let top_path = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("..");
let data_dir = top_path.join("test_output/test_find_end_of_wal");
let initdb_path = top_path.join("tmp_install/bin/initdb");
let lib_path = top_path.join("tmp_install/lib");
if data_dir.exists() {
fs::remove_dir_all(&data_dir).unwrap();
}
println!("Using initdb from '{}'", initdb_path.display());
println!("Data directory '{}'", data_dir.display());
let initdb_output = Command::new(initdb_path)
.args(&["-D", data_dir.to_str().unwrap()])
.arg("--no-instructions")
.arg("--no-sync")
.env_clear()
.env("LD_LIBRARY_PATH", &lib_path)
.env("DYLD_LIBRARY_PATH", &lib_path)
.output()
.unwrap();
assert!(initdb_output.status.success());
// 2. Pick WAL generated by initdb
let wal_dir = data_dir.join("pg_wal");
let wal_seg_size = 16 * 1024 * 1024;
// 3. Check end_of_wal on non-partial WAL segment (we treat it as fully populated)
let (wal_end, tli) = find_end_of_wal(&wal_dir, wal_seg_size, true);
let wal_end = Lsn(wal_end);
println!("wal_end={}, tli={}", wal_end, tli);
assert_eq!(wal_end, "0/2000000".parse::<Lsn>().unwrap());
// 4. Get the actual end of WAL by pg_waldump
let waldump_path = top_path.join("tmp_install/bin/pg_waldump");
let waldump_output = Command::new(waldump_path)
.arg(wal_dir.join("000000010000000000000001"))
.env_clear()
.env("LD_LIBRARY_PATH", &lib_path)
.env("DYLD_LIBRARY_PATH", &lib_path)
.output()
.unwrap();
let waldump_output = std::str::from_utf8(&waldump_output.stderr).unwrap();
println!("waldump_output = '{}'", &waldump_output);
let re = Regex::new(r"invalid record length at (.+):").unwrap();
let caps = re.captures(&waldump_output).unwrap();
let waldump_wal_end = Lsn::from_str(caps.get(1).unwrap().as_str()).unwrap();
// 5. Rename file to partial to actually find last valid lsn
fs::rename(
wal_dir.join("000000010000000000000001"),
wal_dir.join("000000010000000000000001.partial"),
)
.unwrap();
let (wal_end, tli) = find_end_of_wal(&wal_dir, wal_seg_size, true);
let wal_end = Lsn(wal_end);
println!("wal_end={}, tli={}", wal_end, tli);
assert_eq!(wal_end, waldump_wal_end);
}
}


@@ -7,7 +7,7 @@ use anyhow::bail;
use bytes::Bytes;
use serde::{Deserialize, Serialize};
use zenith_utils::{
postgres_backend::{self, query_from_cstring, PostgresBackend},
postgres_backend::{self, query_from_cstring, AuthType, PostgresBackend},
pq_proto::{BeMessage, SINGLE_COL_ROWDESC},
};
@@ -34,7 +34,7 @@ pub fn thread_main(state: &'static ProxyState, listener: TcpListener) -> anyhow:
pub fn mgmt_conn_main(state: &'static ProxyState, socket: TcpStream) -> anyhow::Result<()> {
let mut conn_handler = MgmtHandler { state };
let mut pgbackend = PostgresBackend::new(socket, postgres_backend::AuthType::Trust)?;
let pgbackend = PostgresBackend::new(socket, AuthType::Trust)?;
pgbackend.run(&mut conn_handler)
}

snapfile/Cargo.toml Normal file

@@ -0,0 +1,21 @@
[package]
name = "snapfile"
version = "0.1.0"
edition = "2018"
[[bin]]
name = "snaptool"
path = "snaptool/main.rs"
[dependencies]
anyhow = "1.0"
aversion = "0.2"
bookfile = "0.3"
serde = { version = "1.0", features = ["derive"] }
rand = "0.8.3"
structopt = "0.3"
zenith_utils = { path = "../zenith_utils" }
hex = "0.4.3"
[dev-dependencies]
tempfile = "3.2"

snapfile/snaptool/main.rs Normal file

@@ -0,0 +1,64 @@
use anyhow::{Context, Result};
use snapfile::{squash, SnapFile};
use std::env::current_dir;
use std::path::PathBuf;
use structopt::StructOpt;
#[derive(StructOpt)]
#[structopt(about = "A tool for manipulating snapshot files")]
enum Params {
Squash(Squash),
Describe(Describe),
}
#[derive(StructOpt)]
struct Squash {
older: PathBuf,
newer: PathBuf,
}
#[derive(StructOpt)]
struct Describe {
file: PathBuf,
}
fn print_errors(error: anyhow::Error) {
let formatted: Vec<_> = error.chain().map(ToString::to_string).collect();
eprintln!("{}", formatted.join(": "));
}
fn main() {
let res = snaptool_main();
if let Err(e) = res {
print_errors(e);
}
}
fn snaptool_main() -> Result<()> {
let params = Params::from_args();
match &params {
Params::Squash(squash_params) => {
let out_dir = current_dir()?;
squash(&squash_params.older, &squash_params.newer, &out_dir).with_context(|| {
format!(
"squash {} {}",
squash_params.older.to_string_lossy(),
squash_params.newer.to_string_lossy()
)
})?;
}
Params::Describe(describe_params) => {
describe(describe_params)
.with_context(|| format!("describe {}", describe_params.file.to_string_lossy()))?;
}
}
Ok(())
}
fn describe(params: &Describe) -> Result<()> {
let mut snap = SnapFile::new(&params.file)?;
let meta = snap.read_meta()?;
println!("{:?}: {:#?}", params.file, meta);
Ok(())
}

snapfile/src/lib.rs Normal file

@@ -0,0 +1,339 @@
//! A file format for storing a snapshot of pages.
#![warn(missing_docs)]
#![forbid(unsafe_code)]
#![warn(clippy::cast_possible_truncation)]
mod page;
mod squash;
mod versioned;
#[doc(inline)]
pub use page::Page;
#[doc(inline)]
pub use squash::squash;
use anyhow::{bail, Context, Result};
use aversion::group::{DataSink, DataSourceExt};
use aversion::util::cbor::CborData;
use bookfile::{Book, BookWriter, ChapterWriter};
use std::ffi::OsString;
use std::fs::File;
use std::io::Write;
use std::ops::AddAssign;
use std::path::{Path, PathBuf};
pub use versioned::{PageIndex, PageLocation, Predecessor, SnapFileMeta};
use zenith_utils::lsn::Lsn;
impl SnapFileMeta {
pub fn new(previous: Option<SnapFileMeta>, timeline: [u8; 16], lsn: Lsn) -> Self {
// Store the metadata of the predecessor snapshot, if there is one.
let predecessor = previous.map(|prev| Predecessor {
timeline: prev.timeline,
lsn: prev.lsn,
});
SnapFileMeta {
timeline,
predecessor,
lsn: lsn.into(),
}
}
fn to_filename(&self) -> OsString {
let timeline_string = hex::encode(self.timeline);
let pred_lsn = match &self.predecessor {
None => 0,
Some(pred) => pred.lsn,
};
format!("{}_{:x}_{:x}.zdb", timeline_string, pred_lsn, self.lsn).into()
}
}
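To make the naming scheme concrete, here is a small in-crate sketch (hypothetical timeline id and LSN; `to_filename` is private, so this only works inside the crate, it reuses the imports at the top of this file, and it assumes the `lsn` field is a plain `u64`, as the tests further down suggest):

fn filename_example() {
    // A snapshot on a made-up timeline, at LSN 0x4d2, with no predecessor.
    let meta = SnapFileMeta::new(None, [0x11u8; 16], Lsn(0x4d2));
    // 32 hex digits of timeline id, then the predecessor LSN (0 here) and
    // this snapshot's LSN, both in hex.
    assert_eq!(
        meta.to_filename().to_string_lossy(),
        "11111111111111111111111111111111_0_4d2.zdb"
    );
}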
impl PageIndex {
/// Retrieve the page's location from the index.
///
/// If the page is not in the index, returns `None`.
fn get_page_location(&self, page_num: u64) -> Option<PageLocation> {
self.map.get(&page_num).copied()
}
fn page_count(&self) -> usize {
self.map.len()
}
}
impl PageLocation {
fn to_offset(&self) -> u64 {
// Counts in units of one page.
self.0 * 8192
}
}
impl AddAssign<u64> for PageLocation {
fn add_assign(&mut self, rhs: u64) {
self.0 += rhs;
}
}
/// A read-only snapshot file.
pub struct SnapFile {
book: Book<File>,
page_index: PageIndex,
}
impl SnapFile {
/// Open a new `SnapFile` for reading.
///
/// This call will validate some of the file's format and read the file's
/// metadata; it may return an error if the file format is invalid.
pub fn new(path: &Path) -> Result<Self> {
let file =
File::open(path).with_context(|| format!("snapfile {}", path.to_string_lossy()))?;
let book = Book::new(file)?;
if book.magic() != versioned::SNAPFILE_MAGIC {
bail!("bad magic number");
}
// Read the page index into memory.
let chapter_reader = book
.chapter_reader(versioned::CHAPTER_PAGE_INDEX)
.context("snapfile missing index chapter")?;
let mut source = CborData::new(chapter_reader);
let page_index: PageIndex = source.expect_message()?;
Ok(SnapFile { book, page_index })
}
/// Read the snapshot metadata.
pub fn read_meta(&mut self) -> Result<SnapFileMeta> {
let chapter_reader = self
.book
.chapter_reader(versioned::CHAPTER_SNAP_META)
.context("snapfile missing meta")?;
let mut source = CborData::new(chapter_reader);
let meta: SnapFileMeta = source.expect_message()?;
Ok(meta)
}
/// Return the number of pages stored in this snapshot.
pub fn page_count(&self) -> usize {
self.page_index.page_count()
}
/// Check if a page exists in this snapshot's index.
///
/// Returns `true` if the given page is stored in this snapshot file,
/// `false` if not.
pub fn has_page(&self, page_num: u64) -> bool {
self.page_index.get_page_location(page_num).is_some()
}
/// Read a page.
///
/// If this returns Ok(None), that means that this file does not store
/// the requested page.
/// This should only fail (returning `Err`) if an IO error occurs.
pub fn read_page(&self, page_num: u64) -> Result<Option<Page>> {
match self.page_index.get_page_location(page_num) {
None => Ok(None),
Some(page_offset) => Ok(Some(self._read_page(page_offset)?)),
}
}
/// Read page data from the file.
///
/// This does the work for read_page and PageIter.
fn _read_page(&self, page_location: PageLocation) -> Result<Page> {
// Compute the true byte offset in the file.
let page_offset = page_location.to_offset();
let chapter_reader = self
.book
.chapter_reader(versioned::CHAPTER_PAGES)
.context("snapfile missing pages chapter")?;
let mut page_data = Page::default();
let bytes_read = chapter_reader.read_at(page_data.as_mut(), page_offset)?;
if bytes_read != 8192 {
bail!("read truncated page");
}
Ok(page_data)
}
/// Iterate over pages.
///
/// This returns an iterator over `(page_num, Page)` pairs, each wrapped in a `Result`.
pub fn all_pages(&self) -> PageIter {
let inner = (&self.page_index.map).into_iter();
PageIter {
snapfile: self,
inner,
}
}
}
/// An iterator over all pages in the snapshot file.
pub struct PageIter<'a> {
snapfile: &'a SnapFile,
inner: std::collections::btree_map::Iter<'a, u64, PageLocation>,
}
impl Iterator for PageIter<'_> {
type Item = Result<(u64, Page)>;
fn next(&mut self) -> Option<Self::Item> {
let (page_num, page_offset) = self.inner.next()?;
let result = self
.snapfile
._read_page(*page_offset)
.map(|page_data| (*page_num, page_data));
Some(result)
}
}
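A short usage sketch of the iterator (hypothetical snapshot path; each item is a `Result<(u64, Page)>`, so per-page read errors surface through `?`; it reuses the `Path` and `Result` imports at the top of this file):

// Sketch only: print every page number stored in a snapshot file.
fn dump_pages(path: &Path) -> Result<()> {
    let snap = SnapFile::new(path)?;
    for item in snap.all_pages() {
        let (page_num, page) = item?;
        println!("page {}: first byte {}", page_num, page.as_ref()[0]);
    }
    Ok(())
}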
/// `SnapWriter` creates a new snapshot file.
///
/// A SnapWriter is created, has pages written into it, and is then closed.
pub struct SnapWriter {
writer: ChapterWriter<File>,
page_index: PageIndex,
meta: SnapFileMeta,
current_offset: PageLocation,
}
impl SnapWriter {
/// Create a new `SnapWriter`.
///
pub fn new(dir: &Path, meta: SnapFileMeta) -> Result<Self> {
let mut path = PathBuf::from(dir);
path.push(meta.to_filename());
let file = File::create(path)?;
let book = BookWriter::new(file, versioned::SNAPFILE_MAGIC)?;
// Write a chapter for the snapshot metadata.
let writer = book.new_chapter(versioned::CHAPTER_SNAP_META);
let mut sink = CborData::new(writer);
sink.write_message(&meta)?;
let book = sink.into_inner().close()?;
// Open a new chapter for raw page data.
let writer = book.new_chapter(versioned::CHAPTER_PAGES);
Ok(SnapWriter {
writer,
page_index: PageIndex::default(),
meta,
current_offset: PageLocation::default(),
})
}
/// Write a page into the snap file.
pub fn write_page<P>(&mut self, page_num: u64, page_data: P) -> Result<()>
where
P: Into<Page>,
{
let page_data: Page = page_data.into();
self.writer.write_all(page_data.as_ref())?;
let prev = self.page_index.map.insert(page_num, self.current_offset);
if prev.is_some() {
panic!("duplicate index for page {}", page_num);
}
self.current_offset += 1;
Ok(())
}
/// Finish writing pages.
///
/// This consumes the `SnapWriter` and completes the snapshot.
pub fn finish(self) -> Result<SnapFileMeta> {
let book = self.writer.close()?;
// Write out a page index and close the book. This will write out any
// necessary file metadata.
// FIXME: these 3 lines could be combined into a single function
// that means "serialize this data structure with this format into this chapter".
let writer = book.new_chapter(versioned::CHAPTER_PAGE_INDEX);
let mut sink = CborData::new(writer);
sink.write_message(&self.page_index)?;
// Close the chapter, then close the book.
sink.into_inner().close()?.close()?;
// Return the snapshot metadata to the caller.
Ok(self.meta)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
use tempfile::TempDir;
const TEST_TIMELINE: [u8; 16] = [99u8; 16];
#[test]
fn snap_two_pages() {
// When `dir` goes out of scope the directory will be unlinked.
let dir = TempDir::new().unwrap();
let snap_meta = {
// Write out a new snapshot file with two pages.
let meta = SnapFileMeta::new(None, TEST_TIMELINE, Lsn(1234));
let mut snap = SnapWriter::new(dir.path(), meta).unwrap();
// Write the pages out of order, because why not?
let page99 = [99u8; 8192];
snap.write_page(99, page99).unwrap();
let page33 = [33u8; 8192];
snap.write_page(33, page33).unwrap();
snap.finish().unwrap()
};
assert_eq!(snap_meta.lsn, 1234);
{
// Read the snapshot file and verify the contents.
let mut path = PathBuf::from(dir.path());
path.push(snap_meta.to_filename());
let mut snap = SnapFile::new(&path).unwrap();
assert_eq!(snap.page_count(), 2);
assert!(!snap.has_page(0));
assert!(snap.has_page(33));
assert!(!snap.has_page(98));
assert!(snap.has_page(99));
assert!(snap.read_page(0).unwrap().is_none());
let page = snap.read_page(33).unwrap().unwrap();
assert_eq!(*page.0, [33u8; 8192]);
let page = snap.read_page(99).unwrap().unwrap();
assert_eq!(*page.0, [99u8; 8192]);
// Make sure the deserialized metadata matches what we think we wrote.
let meta2 = snap.read_meta().unwrap();
assert_eq!(snap_meta, meta2);
}
}
#[test]
fn snap_zero_pages() {
// When `dir` goes out of scope the directory will be unlinked.
let dir = TempDir::new().unwrap();
let snap_meta = {
// Write out a new snapshot file with no pages.
let meta = SnapFileMeta::new(None, TEST_TIMELINE, Lsn(1234));
let snap = SnapWriter::new(dir.path(), meta).unwrap();
snap.finish().unwrap()
};
{
// Read the snapshot file.
let mut path = PathBuf::from(dir.path());
path.push(snap_meta.to_filename());
let snap = SnapFile::new(&path).unwrap();
assert_eq!(snap.page_index.page_count(), 0);
assert!(!snap.has_page(0));
assert!(!snap.has_page(99));
assert!(snap.read_page(0).unwrap().is_none());
assert!(snap.read_page(99).unwrap().is_none());
}
}
}

snapfile/src/page.rs Normal file

@@ -0,0 +1,42 @@
/// A single 8KB page.
pub struct Page(pub Box<[u8; 8192]>);
impl Page {
/// Create a page by copying bytes from another slice.
///
/// This is a copy, not a move. If the caller already has
/// an owned array then `From<[u8; 8192]>` can be used instead.
pub fn copy_slice(x: &[u8; 8192]) -> Self {
Page(Box::new(x.clone()))
}
}
impl Default for Page {
fn default() -> Self {
Page(Box::new([0u8; 8192]))
}
}
impl From<[u8; 8192]> for Page {
fn from(array: [u8; 8192]) -> Self {
Page(Box::new(array))
}
}
impl From<Box<[u8; 8192]>> for Page {
fn from(heap_array: Box<[u8; 8192]>) -> Self {
Page(heap_array)
}
}
impl AsRef<[u8; 8192]> for Page {
fn as_ref(&self) -> &[u8; 8192] {
self.0.as_ref()
}
}
impl AsMut<[u8; 8192]> for Page {
fn as_mut(&mut self) -> &mut [u8; 8192] {
self.0.as_mut()
}
}
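Taken together, the constructors and conversions above give a few ways to build a `Page`; a short illustrative sketch:

// Illustrative only: exercise each way of constructing a Page.
fn page_construction_examples() {
    let zeroed = Page::default();                    // fresh heap-allocated page of zeroes
    let owned: Page = [1u8; 8192].into();            // move an owned array into a Page
    let boxed: Page = Box::new([2u8; 8192]).into();  // reuse an existing heap allocation
    let buf = [3u8; 8192];
    let copied = Page::copy_slice(&buf);             // copy from a borrowed buffer
    assert_eq!(copied.as_ref()[0], 3);
    let _ = (zeroed, owned, boxed);
}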

snapfile/src/squash.rs Normal file

@@ -0,0 +1,100 @@
use crate::{Page, PageIter, SnapFile, SnapFileMeta, SnapWriter};
use anyhow::{bail, Result};
use std::cmp::Ordering;
use std::path::Path;
// A helper struct that holds an iterator, along with the last
// value taken from the iterator.
struct PageStepper<'a> {
it: PageIter<'a>,
pub cache: Option<(u64, Page)>,
}
impl<'a> PageStepper<'a> {
fn new(snapfile: &'a SnapFile) -> Result<Self> {
let mut it = snapfile.all_pages();
let cache = it.next().transpose()?;
Ok(PageStepper { it, cache })
}
/// Read a new page from the iterator, returning the previous page.
fn step(&mut self) -> Result<Option<(u64, Page)>> {
let mut next = self.it.next().transpose()?;
std::mem::swap(&mut self.cache, &mut next);
Ok(next)
}
}
/// Squash two snapshot files into one.
///
/// The resulting snapshot will contain all of the pages from both files.
/// If the same page number is stored in both, it will keep the page from
/// the newer snapshot.
///
/// The name of the resulting file will be automatically generated from
/// the snapshot metadata.
pub fn squash(older: &Path, newer: &Path, out_dir: &Path) -> Result<()> {
let mut snap1 = SnapFile::new(older)?;
let mut snap2 = SnapFile::new(newer)?;
let meta1 = snap1.read_meta()?;
let meta2 = snap2.read_meta()?;
// Check that snap1 is the predecessor of snap2.
match meta2.predecessor {
Some(pred) if pred.timeline == meta1.timeline => {}
_ => {
bail!(
"snap file {:?} is not the predecessor of {:?}",
&older,
&newer,
);
}
}
// The new combined snapshot will have most fields from meta2 (the later
// snapshot), but will have the predecessor from meta1.
let new_meta = SnapFileMeta {
// There is some danger in squashing snapshots across two timelines,
// in that it's easy to get confused about what the history looks like.
// But since we ultimately need to be able to squash our way down to a
// "complete" snapshot (one that contains all pages), cross-timeline
// squashing must be allowed.
timeline: meta2.timeline,
predecessor: meta1.predecessor,
lsn: meta2.lsn,
};
let mut snap_writer = SnapWriter::new(out_dir, new_meta)?;
let mut iter1 = PageStepper::new(&snap1)?;
let mut iter2 = PageStepper::new(&snap2)?;
loop {
let next_page = match (&iter1.cache, &iter2.cache) {
(None, None) => break,
(Some(_), None) => iter1.step()?,
(None, Some(_)) => iter2.step()?,
(Some(x), Some(y)) => {
// If these are two different page numbers, then advance the iterator
// with the numerically lower number.
// If they are the same page number, then store the one from the newer
// snapshot, and discard the other (advancing both iterators).
match x.0.cmp(&y.0) {
Ordering::Less => iter1.step()?,
Ordering::Greater => iter2.step()?,
Ordering::Equal => {
let _ = iter1.step()?;
iter2.step()?
}
}
}
};
// This can't be None: the (None, None) case was already handled inside the
// match statement.
let (page_num, page_data) = next_page.unwrap();
snap_writer.write_page(page_num, page_data)?;
}
snap_writer.finish()?;
Ok(())
}
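A hedged sketch of calling `squash` (the paths below are placeholders for illustration; real callers would derive them from the snapshot filenames and metadata):

// Illustrative call site only; the paths are assumptions, not a real layout.
fn squash_example() -> anyhow::Result<()> {
    let older = std::path::Path::new("snapshots/older.snap");
    let newer = std::path::Path::new("snapshots/newer.snap");
    let out_dir = std::path::Path::new("snapshots/squashed");
    // The merged file is written into out_dir; its name comes from the merged metadata.
    squash(older, newer, out_dir)?;
    Ok(())
}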

snapfile/src/versioned.rs Normal file

@@ -0,0 +1,88 @@
//! Versioned data structures for snapshot files
//!
//! To ensure that future versions of software can read snapshot files,
//! all data structures that are serialized into the snapshot files should
//! live in this module.
//!
//! Once released, versioned data structures should never be modified.
//! Instead, new versions should be created and conversion functions should
//! be defined using the `FromVersion` trait.
use aversion::{assign_message_ids, UpgradeLatest, Versioned};
use serde::{Deserialize, Serialize};
use std::collections::BTreeMap;
// A random constant, to identify this file type.
pub(crate) const SNAPFILE_MAGIC: u32 = 0x7fb8_38a8;
// Constant chapter numbers
// FIXME: the bookfile crate should use something better to index, e.g. strings.
/// Snapshot-specific file metadata
pub(crate) const CHAPTER_SNAP_META: u64 = 1;
/// A packed set of 8KB pages.
pub(crate) const CHAPTER_PAGES: u64 = 2;
/// An index of pages.
pub(crate) const CHAPTER_PAGE_INDEX: u64 = 3;
/// Information about the predecessor snapshot.
///
/// It contains the timeline ID of the predecessor snapshot, and the LSN
/// of that snapshot.
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct Predecessor {
/// This is the ID number of the predecessor timeline.
///
/// This may match the current snapshot's timeline id, but
/// it may not (if the predecessor was the branch point).
pub timeline: [u8; 16],
/// This is the LSN of the predecessor snapshot.
pub lsn: u64,
}
#[derive(Debug, PartialEq, Serialize, Deserialize, Versioned, UpgradeLatest)]
pub struct SnapFileMetaV1 {
/// This is a unique ID number for this timeline.
///
/// This number guarantees that snapshot history is unique.
pub timeline: [u8; 16],
/// Information about the predecessor snapshot.
///
/// If `None`, this snapshot is the start of a new database.
pub predecessor: Option<Predecessor>,
/// This is the last LSN stored in this snapshot.
pub lsn: u64,
}
/// A type alias for the latest version of `SnapFileMeta`.
pub type SnapFileMeta = SnapFileMetaV1;
/// A page location within a file.
///
/// Note: this is an opaque value that may not be the true byte offset;
/// it may be relative to some other location or measured in units other
/// than bytes.
#[derive(Debug, Default, Clone, Copy, Serialize, Deserialize)]
#[serde(transparent)]
pub struct PageLocationV1(pub(crate) u64);
/// A type alias for the latest version of `PageLocation`.
pub type PageLocation = PageLocationV1;
/// An index from page number to offset within the pages chapter.
#[derive(Debug, Default, Serialize, Deserialize, Versioned, UpgradeLatest)]
pub struct PageIndexV1 {
/// A map from page number to file offset.
pub(crate) map: BTreeMap<u64, PageLocationV1>,
}
/// A type alias for the latest version of `PageIndex`.
pub type PageIndex = PageIndexV1;
// Each message gets a unique message id, for tracking by the aversion traits.
assign_message_ids! {
PageIndex: 100,
SnapFileMeta: 101,
}
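To illustrate the versioning rule stated in the module comment, a hypothetical `SnapFileMetaV2` might be added as sketched below. The derive attributes and the aversion `FromVersion` trait are omitted because their exact shapes aren't shown in this diff; a plain `From` impl stands in for the upgrade conversion.

// Sketch only: a hypothetical next version of the snapshot metadata.
pub struct SnapFileMetaV2 {
    pub timeline: [u8; 16],
    pub predecessor: Option<Predecessor>,
    pub lsn: u64,
    /// Example of a newly added field.
    pub page_count: u64,
}

// Upgrade path from V1; the real crate would express this with `FromVersion`
// (per the module comment above) rather than std `From`.
impl From<SnapFileMetaV1> for SnapFileMetaV2 {
    fn from(v1: SnapFileMetaV1) -> Self {
        SnapFileMetaV2 {
            timeline: v1.timeline,
            predecessor: v1.predecessor,
            lsn: v1.lsn,
            page_count: 0, // unknown for files written before V2 existed
        }
    }
}
// The `SnapFileMeta` alias would then advance to point at the latest version.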


@@ -7,6 +7,7 @@ name = "pypi"
pytest = ">=6.0.0"
psycopg2 = "*"
typing-extensions = "*"
pyjwt = {extras = ["crypto"], version = "*"}
[dev-packages]
yapf = "*"

test_runner/Pipfile.lock generated

@@ -1,7 +1,7 @@
{
"_meta": {
"hash": {
"sha256": "4c20c05c20c50cf7e8f78ab461ab23841125345e63e00e2efa7661c165b6b364"
"sha256": "f60a966726bcc19670402ad3fa57396b5dacf0a027544418ceb7cc0d42d94a52"
},
"pipfile-spec": 6,
"requires": {
@@ -24,13 +24,72 @@
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
"version": "==21.2.0"
},
"importlib-metadata": {
"cffi": {
"hashes": [
"sha256:833b26fb89d5de469b24a390e9df088d4e52e4ba33b01dc5e0e4f41b81a16c00",
"sha256:b142cc1dd1342f31ff04bb7d022492b09920cb64fed867cd3ea6f80fe3ebd139"
"sha256:06c54a68935738d206570b20da5ef2b6b6d92b38ef3ec45c5422c0ebaf338d4d",
"sha256:0c0591bee64e438883b0c92a7bed78f6290d40bf02e54c5bf0978eaf36061771",
"sha256:19ca0dbdeda3b2615421d54bef8985f72af6e0c47082a8d26122adac81a95872",
"sha256:22b9c3c320171c108e903d61a3723b51e37aaa8c81255b5e7ce102775bd01e2c",
"sha256:26bb2549b72708c833f5abe62b756176022a7b9a7f689b571e74c8478ead51dc",
"sha256:33791e8a2dc2953f28b8d8d300dde42dd929ac28f974c4b4c6272cb2955cb762",
"sha256:3c8d896becff2fa653dc4438b54a5a25a971d1f4110b32bd3068db3722c80202",
"sha256:4373612d59c404baeb7cbd788a18b2b2a8331abcc84c3ba40051fcd18b17a4d5",
"sha256:487d63e1454627c8e47dd230025780e91869cfba4c753a74fda196a1f6ad6548",
"sha256:48916e459c54c4a70e52745639f1db524542140433599e13911b2f329834276a",
"sha256:4922cd707b25e623b902c86188aca466d3620892db76c0bdd7b99a3d5e61d35f",
"sha256:55af55e32ae468e9946f741a5d51f9896da6b9bf0bbdd326843fec05c730eb20",
"sha256:57e555a9feb4a8460415f1aac331a2dc833b1115284f7ded7278b54afc5bd218",
"sha256:5d4b68e216fc65e9fe4f524c177b54964af043dde734807586cf5435af84045c",
"sha256:64fda793737bc4037521d4899be780534b9aea552eb673b9833b01f945904c2e",
"sha256:6d6169cb3c6c2ad50db5b868db6491a790300ade1ed5d1da29289d73bbe40b56",
"sha256:7bcac9a2b4fdbed2c16fa5681356d7121ecabf041f18d97ed5b8e0dd38a80224",
"sha256:80b06212075346b5546b0417b9f2bf467fea3bfe7352f781ffc05a8ab24ba14a",
"sha256:818014c754cd3dba7229c0f5884396264d51ffb87ec86e927ef0be140bfdb0d2",
"sha256:8eb687582ed7cd8c4bdbff3df6c0da443eb89c3c72e6e5dcdd9c81729712791a",
"sha256:99f27fefe34c37ba9875f224a8f36e31d744d8083e00f520f133cab79ad5e819",
"sha256:9f3e33c28cd39d1b655ed1ba7247133b6f7fc16fa16887b120c0c670e35ce346",
"sha256:a8661b2ce9694ca01c529bfa204dbb144b275a31685a075ce123f12331be790b",
"sha256:a9da7010cec5a12193d1af9872a00888f396aba3dc79186604a09ea3ee7c029e",
"sha256:aedb15f0a5a5949ecb129a82b72b19df97bbbca024081ed2ef88bd5c0a610534",
"sha256:b315d709717a99f4b27b59b021e6207c64620790ca3e0bde636a6c7f14618abb",
"sha256:ba6f2b3f452e150945d58f4badd92310449876c4c954836cfb1803bdd7b422f0",
"sha256:c33d18eb6e6bc36f09d793c0dc58b0211fccc6ae5149b808da4a62660678b156",
"sha256:c9a875ce9d7fe32887784274dd533c57909b7b1dcadcc128a2ac21331a9765dd",
"sha256:c9e005e9bd57bc987764c32a1bee4364c44fdc11a3cc20a40b93b444984f2b87",
"sha256:d2ad4d668a5c0645d281dcd17aff2be3212bc109b33814bbb15c4939f44181cc",
"sha256:d950695ae4381ecd856bcaf2b1e866720e4ab9a1498cba61c602e56630ca7195",
"sha256:e22dcb48709fc51a7b58a927391b23ab37eb3737a98ac4338e2448bef8559b33",
"sha256:e8c6a99be100371dbb046880e7a282152aa5d6127ae01783e37662ef73850d8f",
"sha256:e9dc245e3ac69c92ee4c167fbdd7428ec1956d4e754223124991ef29eb57a09d",
"sha256:eb687a11f0a7a1839719edd80f41e459cc5366857ecbed383ff376c4e3cc6afd",
"sha256:eb9e2a346c5238a30a746893f23a9535e700f8192a68c07c0258e7ece6ff3728",
"sha256:ed38b924ce794e505647f7c331b22a693bee1538fdf46b0222c4717b42f744e7",
"sha256:f0010c6f9d1a4011e429109fda55a225921e3206e7f62a0c22a35344bfd13cca",
"sha256:f0c5d1acbfca6ebdd6b1e3eded8d261affb6ddcf2186205518f1428b8569bb99",
"sha256:f10afb1004f102c7868ebfe91c28f4a712227fe4cb24974350ace1f90e1febbf",
"sha256:f174135f5609428cc6e1b9090f9268f5c8935fddb1b25ccb8255a2d50de6789e",
"sha256:f3ebe6e73c319340830a9b2825d32eb6d8475c1dac020b4f0aa774ee3b898d1c",
"sha256:f627688813d0a4140153ff532537fbe4afea5a3dffce1f9deb7f91f848a832b5",
"sha256:fd4305f86f53dfd8cd3522269ed7fc34856a8ee3709a5e28b2836b2db9d4cd69"
],
"markers": "python_version < '3.8'",
"version": "==4.5.0"
"version": "==1.14.6"
},
"cryptography": {
"hashes": [
"sha256:0f1212a66329c80d68aeeb39b8a16d54ef57071bf22ff4e521657b27372e327d",
"sha256:1e056c28420c072c5e3cb36e2b23ee55e260cb04eee08f702e0edfec3fb51959",
"sha256:240f5c21aef0b73f40bb9f78d2caff73186700bf1bc6b94285699aff98cc16c6",
"sha256:26965837447f9c82f1855e0bc8bc4fb910240b6e0d16a664bb722df3b5b06873",
"sha256:37340614f8a5d2fb9aeea67fd159bfe4f5f4ed535b1090ce8ec428b2f15a11f2",
"sha256:3d10de8116d25649631977cb37da6cbdd2d6fa0e0281d014a5b7d337255ca713",
"sha256:3d8427734c781ea5f1b41d6589c293089704d4759e34597dce91014ac125aad1",
"sha256:7ec5d3b029f5fa2b179325908b9cd93db28ab7b85bb6c1db56b10e0b54235177",
"sha256:8e56e16617872b0957d1c9742a3f94b43533447fd78321514abbe7db216aa250",
"sha256:de4e5f7f68220d92b7637fc99847475b59154b7a1b3868fb7385337af54ac9ca",
"sha256:eb8cc2afe8b05acbd84a43905832ec78e7b3873fb124ca190f574dca7389a87d",
"sha256:ee77aa129f481be46f8d92a1a7db57269a2f23052d5f2433b4621bb457081cc9"
],
"version": "==3.4.7"
},
"iniconfig": {
"hashes": [
@@ -41,11 +100,11 @@
},
"packaging": {
"hashes": [
"sha256:5b327ac1320dc863dca72f4514ecc086f31186744b84a230374cc1fd776feae5",
"sha256:67714da7f7bc052e064859c05c595155bd1ee9f69f76557e21f051443c20947a"
"sha256:7dc96269f53a4ccec5c0670940a4281106dd0bb343f47b7471f779df49c2fbe7",
"sha256:c86254f9220d55e31cc94d69bade760f0847da8000def4dfe1c6b872fd14ff14"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==20.9"
"markers": "python_version >= '3.6'",
"version": "==21.0"
},
"pluggy": {
"hashes": [
@@ -57,18 +116,18 @@
},
"psycopg2": {
"hashes": [
"sha256:03a485bf71498870e38b535c0e6e7162d6ac06a91487edddc3b959894d65f79c",
"sha256:22102cfeb904898254f287b1a77360bf66c636858e7476593acd5267e5c24ff9",
"sha256:8f4c1800e57ad128d20b2e91d222ca238fffd316cef65be781361cdf35e37979",
"sha256:b12073fdf2002e828e5921be2c39ff9c6eab361c5c0bd6c529619fc23677accc",
"sha256:b6f47af317af8110818d255e693cfa80b7f1e435285be09778db7b66efd95789",
"sha256:d549db98fc0e6db41a2aa0d65f7434c4308a9f64012adb209b9e489f26fe87c6",
"sha256:e44e39a46af7c30566b7667fb27e701e652ab0a51e05c263a01d3ff0e223b765",
"sha256:e84c80be7a238d3c9c099b71f6890eaa35fc881146232cce888a88ab1bfb431e",
"sha256:f3d42bd42302293767b84206d9a446abc67ed4a133e4fe04dad8952de06c2091"
"sha256:079d97fc22de90da1d370c90583659a9f9a6ee4007355f5825e5f1c70dffc1fa",
"sha256:2087013c159a73e09713294a44d0c8008204d06326006b7f652bef5ace66eebb",
"sha256:2c992196719fadda59f72d44603ee1a2fdcc67de097eea38d41c7ad9ad246e62",
"sha256:7640e1e4d72444ef012e275e7b53204d7fab341fb22bc76057ede22fe6860b25",
"sha256:7f91312f065df517187134cce8e395ab37f5b601a42446bdc0f0d51773621854",
"sha256:830c8e8dddab6b6716a4bf73a09910c7954a92f40cf1d1e702fb93c8a919cc56",
"sha256:89409d369f4882c47f7ea20c42c5046879ce22c1e4ea20ef3b00a4dfc0a7f188",
"sha256:bf35a25f1aaa8a3781195595577fcbb59934856ee46b4f252f56ad12b8043bcf",
"sha256:de5303a6f1d0a7a34b9d40e4d3bef684ccc44a49bbe3eb85e3c0bffb4a131b7c"
],
"index": "pypi",
"version": "==2.9"
"version": "==2.9.1"
},
"py": {
"hashes": [
@@ -78,6 +137,25 @@
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==1.10.0"
},
"pycparser": {
"hashes": [
"sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0",
"sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==2.20"
},
"pyjwt": {
"extras": [
"crypto"
],
"hashes": [
"sha256:934d73fbba91b0483d3857d1aff50e96b2a892384ee2c17417ed3203f173fca1",
"sha256:fba44e7898bbca160a2b2b501f492824fc8382485d3a6f11ba5d0c1937ce6130"
],
"index": "pypi",
"version": "==2.1.0"
},
"pyparsing": {
"hashes": [
"sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1",
@@ -110,14 +188,6 @@
],
"index": "pypi",
"version": "==3.10.0.0"
},
"zipp": {
"hashes": [
"sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76",
"sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"
],
"markers": "python_version >= '3.6'",
"version": "==3.4.1"
}
},
"develop": {
@@ -129,14 +199,6 @@
"index": "pypi",
"version": "==3.9.2"
},
"importlib-metadata": {
"hashes": [
"sha256:833b26fb89d5de469b24a390e9df088d4e52e4ba33b01dc5e0e4f41b81a16c00",
"sha256:b142cc1dd1342f31ff04bb7d022492b09920cb64fed867cd3ea6f80fe3ebd139"
],
"markers": "python_version < '3.8'",
"version": "==4.5.0"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
@@ -146,32 +208,32 @@
},
"mypy": {
"hashes": [
"sha256:0190fb77e93ce971954c9e54ea61de2802065174e5e990c9d4c1d0f54fbeeca2",
"sha256:0756529da2dd4d53d26096b7969ce0a47997123261a5432b48cc6848a2cb0bd4",
"sha256:2f9fedc1f186697fda191e634ac1d02f03d4c260212ccb018fabbb6d4b03eee8",
"sha256:353aac2ce41ddeaf7599f1c73fed2b75750bef3b44b6ad12985a991bc002a0da",
"sha256:3f12705eabdd274b98f676e3e5a89f247ea86dc1af48a2d5a2b080abac4e1243",
"sha256:4efc67b9b3e2fddbe395700f91d5b8deb5980bfaaccb77b306310bd0b9e002eb",
"sha256:517e7528d1be7e187a5db7f0a3e479747307c1b897d9706b1c662014faba3116",
"sha256:68a098c104ae2b75e946b107ef69dd8398d54cb52ad57580dfb9fc78f7f997f0",
"sha256:746e0b0101b8efec34902810047f26a8c80e1efbb4fc554956d848c05ef85d76",
"sha256:8be7bbd091886bde9fcafed8dd089a766fa76eb223135fe5c9e9798f78023a20",
"sha256:9236c21194fde5df1b4d8ebc2ef2c1f2a5dc7f18bcbea54274937cae2e20a01c",
"sha256:9ef5355eaaf7a23ab157c21a44c614365238a7bdb3552ec3b80c393697d974e1",
"sha256:9f1d74eeb3f58c7bd3f3f92b8f63cb1678466a55e2c4612bf36909105d0724ab",
"sha256:a26d0e53e90815c765f91966442775cf03b8a7514a4e960de7b5320208b07269",
"sha256:ae94c31bb556ddb2310e4f913b706696ccbd43c62d3331cd3511caef466871d2",
"sha256:b5ba1f0d5f9087e03bf5958c28d421a03a4c1ad260bf81556195dffeccd979c4",
"sha256:b5dfcd22c6bab08dfeded8d5b44bdcb68c6f1ab261861e35c470b89074f78a70",
"sha256:cd01c599cf9f897b6b6c6b5d8b182557fb7d99326bcdf5d449a0fbbb4ccee4b9",
"sha256:e89880168c67cf4fde4506b80ee42f1537ad66ad366c101d388b3fd7d7ce2afd",
"sha256:ebe2bc9cb638475f5d39068d2dbe8ae1d605bb8d8d3ff281c695df1670ab3987",
"sha256:f89bfda7f0f66b789792ab64ce0978e4a991a0e4dd6197349d0767b0f1095b21",
"sha256:fc4d63da57ef0e8cd4ab45131f3fe5c286ce7dd7f032650d0fbc239c6190e167",
"sha256:fd634bc17b1e2d6ce716f0e43446d0d61cdadb1efcad5c56ca211c22b246ebc8"
"sha256:088cd9c7904b4ad80bec811053272986611b84221835e079be5bcad029e79dd9",
"sha256:0aadfb2d3935988ec3815952e44058a3100499f5be5b28c34ac9d79f002a4a9a",
"sha256:119bed3832d961f3a880787bf621634ba042cb8dc850a7429f643508eeac97b9",
"sha256:1a85e280d4d217150ce8cb1a6dddffd14e753a4e0c3cf90baabb32cefa41b59e",
"sha256:3c4b8ca36877fc75339253721f69603a9c7fdb5d4d5a95a1a1b899d8b86a4de2",
"sha256:3e382b29f8e0ccf19a2df2b29a167591245df90c0b5a2542249873b5c1d78212",
"sha256:42c266ced41b65ed40a282c575705325fa7991af370036d3f134518336636f5b",
"sha256:53fd2eb27a8ee2892614370896956af2ff61254c275aaee4c230ae771cadd885",
"sha256:704098302473cb31a218f1775a873b376b30b4c18229421e9e9dc8916fd16150",
"sha256:7df1ead20c81371ccd6091fa3e2878559b5c4d4caadaf1a484cf88d93ca06703",
"sha256:866c41f28cee548475f146aa4d39a51cf3b6a84246969f3759cb3e9c742fc072",
"sha256:a155d80ea6cee511a3694b108c4494a39f42de11ee4e61e72bc424c490e46457",
"sha256:adaeee09bfde366d2c13fe6093a7df5df83c9a2ba98638c7d76b010694db760e",
"sha256:b6fb13123aeef4a3abbcfd7e71773ff3ff1526a7d3dc538f3929a49b42be03f0",
"sha256:b94e4b785e304a04ea0828759172a15add27088520dc7e49ceade7834275bedb",
"sha256:c0df2d30ed496a08de5daed2a9ea807d07c21ae0ab23acf541ab88c24b26ab97",
"sha256:c6c2602dffb74867498f86e6129fd52a2770c48b7cd3ece77ada4fa38f94eba8",
"sha256:ceb6e0a6e27fb364fb3853389607cf7eb3a126ad335790fa1e14ed02fba50811",
"sha256:d9dd839eb0dc1bbe866a288ba3c1afc33a202015d2ad83b31e875b5905a079b6",
"sha256:e4dab234478e3bd3ce83bac4193b2ecd9cf94e720ddd95ce69840273bf44f6de",
"sha256:ec4e0cd079db280b6bdabdc807047ff3e199f334050db5cbb91ba3e959a67504",
"sha256:ecd2c3fe726758037234c93df7e98deb257fd15c24c9180dacf1ef829da5f921",
"sha256:ef565033fa5a958e62796867b1df10c40263ea9ded87164d67572834e57a174d"
],
"index": "pypi",
"version": "==0.902"
"version": "==0.910"
},
"mypy-extensions": {
"hashes": [
@@ -204,42 +266,6 @@
"markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
"version": "==0.10.2"
},
"typed-ast": {
"hashes": [
"sha256:01ae5f73431d21eead5015997ab41afa53aa1fbe252f9da060be5dad2c730ace",
"sha256:067a74454df670dcaa4e59349a2e5c81e567d8d65458d480a5b3dfecec08c5ff",
"sha256:0fb71b8c643187d7492c1f8352f2c15b4c4af3f6338f21681d3681b3dc31a266",
"sha256:1b3ead4a96c9101bef08f9f7d1217c096f31667617b58de957f690c92378b528",
"sha256:2068531575a125b87a41802130fa7e29f26c09a2833fea68d9a40cf33902eba6",
"sha256:209596a4ec71d990d71d5e0d312ac935d86930e6eecff6ccc7007fe54d703808",
"sha256:2c726c276d09fc5c414693a2de063f521052d9ea7c240ce553316f70656c84d4",
"sha256:398e44cd480f4d2b7ee8d98385ca104e35c81525dd98c519acff1b79bdaac363",
"sha256:52b1eb8c83f178ab787f3a4283f68258525f8d70f778a2f6dd54d3b5e5fb4341",
"sha256:5feca99c17af94057417d744607b82dd0a664fd5e4ca98061480fd8b14b18d04",
"sha256:7538e495704e2ccda9b234b82423a4038f324f3a10c43bc088a1636180f11a41",
"sha256:760ad187b1041a154f0e4d0f6aae3e40fdb51d6de16e5c99aedadd9246450e9e",
"sha256:777a26c84bea6cd934422ac2e3b78863a37017618b6e5c08f92ef69853e765d3",
"sha256:95431a26309a21874005845c21118c83991c63ea800dd44843e42a916aec5899",
"sha256:9ad2c92ec681e02baf81fdfa056fe0d818645efa9af1f1cd5fd6f1bd2bdfd805",
"sha256:9c6d1a54552b5330bc657b7ef0eae25d00ba7ffe85d9ea8ae6540d2197a3788c",
"sha256:aee0c1256be6c07bd3e1263ff920c325b59849dc95392a05f258bb9b259cf39c",
"sha256:af3d4a73793725138d6b334d9d247ce7e5f084d96284ed23f22ee626a7b88e39",
"sha256:b36b4f3920103a25e1d5d024d155c504080959582b928e91cb608a65c3a49e1a",
"sha256:b9574c6f03f685070d859e75c7f9eeca02d6933273b5e69572e5ff9d5e3931c3",
"sha256:bff6ad71c81b3bba8fa35f0f1921fb24ff4476235a6e94a26ada2e54370e6da7",
"sha256:c190f0899e9f9f8b6b7863debfb739abcb21a5c054f911ca3596d12b8a4c4c7f",
"sha256:c907f561b1e83e93fad565bac5ba9c22d96a54e7ea0267c708bffe863cbe4075",
"sha256:cae53c389825d3b46fb37538441f75d6aecc4174f615d048321b716df2757fb0",
"sha256:dd4a21253f42b8d2b48410cb31fe501d32f8b9fbeb1f55063ad102fe9c425e40",
"sha256:dde816ca9dac1d9c01dd504ea5967821606f02e510438120091b84e852367428",
"sha256:f2362f3cb0f3172c42938946dbc5b7843c2a28aec307c49100c8b38764eb6927",
"sha256:f328adcfebed9f11301eaedfa48e15bdece9b519fb27e6a8c01aa52a17ec31b3",
"sha256:f8afcf15cc511ada719a88e013cec87c11aff7b91f019295eb4530f96fe5ef2f",
"sha256:fb1bbeac803adea29cedd70781399c99138358c26d05fcbd23c13016b7f5ec65"
],
"markers": "python_version < '3.8'",
"version": "==1.4.3"
},
"typing-extensions": {
"hashes": [
"sha256:0ac0f89795dd19de6b97debb0c6af1c70987fd80a2d62d1958f7e56fcc31b497",
@@ -256,14 +282,6 @@
],
"index": "pypi",
"version": "==0.31.0"
},
"zipp": {
"hashes": [
"sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76",
"sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"
],
"markers": "python_version >= '3.6'",
"version": "==3.4.1"
}
}
}


@@ -0,0 +1,117 @@
from contextlib import closing
from pathlib import Path
from uuid import uuid4
from dataclasses import dataclass
import jwt
import psycopg2
from fixtures.zenith_fixtures import Postgres, ZenithCli, ZenithPageserver
import pytest
@pytest.fixture
def pageserver_auth_enabled(zenith_cli: ZenithCli):
with ZenithPageserver(zenith_cli).init(enable_auth=True).start() as ps:
# For convenience in tests, create a branch from the freshly-initialized cluster.
zenith_cli.run(["branch", "empty", "main"])
yield ps
@dataclass
class AuthKeys:
pub: bytes
priv: bytes
def generate_management_token(self):
token = jwt.encode({"scope": "pageserverapi"}, self.priv, algorithm="RS256")
# jwt.encode returns 'bytes' under PyJWT 1.x and 'str' under PyJWT 2.x.
# If it returned 'bytes', convert it to 'str' explicitly.
if isinstance(token, bytes):
token = token.decode()
return token
def generate_tenant_token(self, tenant_id):
token = jwt.encode({"scope": "tenant", "tenant_id": tenant_id}, self.priv, algorithm="RS256")
if isinstance(token, bytes):
token = token.decode()
return token
@pytest.fixture
def auth_keys(repo_dir: str) -> AuthKeys:
# TODO: this should probably be specified in the CLI config and reused by the tests, for a single source of truth
pub = (Path(repo_dir) / 'auth_public_key.pem').read_bytes()
priv = (Path(repo_dir) / 'auth_private_key.pem').read_bytes()
return AuthKeys(pub=pub, priv=priv)
def test_pageserver_auth(pageserver_auth_enabled: ZenithPageserver, auth_keys: AuthKeys):
ps = pageserver_auth_enabled
tenant_token = auth_keys.generate_tenant_token(ps.initial_tenant)
invalid_tenant_token = auth_keys.generate_tenant_token(uuid4().hex)
management_token = auth_keys.generate_management_token()
# this does not invoke the auth check; it only decodes the JWT and checks it for validity
# check both tokens
ps.safe_psql("status", password=tenant_token)
ps.safe_psql("status", password=management_token)
# tenant can create branches
ps.safe_psql(f"branch_create {ps.initial_tenant} new1 main", password=tenant_token)
# console can create branches for tenant
ps.safe_psql(f"branch_create {ps.initial_tenant} new2 main", password=management_token)
# fail to create branch using token with different tenantid
with pytest.raises(psycopg2.DatabaseError, match='Tenant id mismatch. Permission denied'):
ps.safe_psql(f"branch_create {ps.initial_tenant} new2 main", password=invalid_tenant_token)
# create tenant using management token
ps.safe_psql(f"tenant_create {uuid4().hex}", password=management_token)
# fail to create tenant using tenant token
with pytest.raises(psycopg2.DatabaseError, match='Attempt to access management api with tenant scope. Permission denied'):
ps.safe_psql(f"tenant_create {uuid4().hex}", password=tenant_token)
@pytest.mark.parametrize('with_wal_acceptors', [False, True])
def test_compute_auth_to_pageserver(
zenith_cli: ZenithCli,
wa_factory,
pageserver_auth_enabled: ZenithPageserver,
repo_dir: str,
with_wal_acceptors: bool,
auth_keys: AuthKeys,
):
ps = pageserver_auth_enabled
# since we are in the process of refactoring the protocols between compute,
# safekeeper, and pageserver, use a hardcoded management token in the safekeeper
management_token = auth_keys.generate_management_token()
branch = f"test_compute_auth_to_pageserver{with_wal_acceptors}"
zenith_cli.run(["branch", branch, "empty"])
if with_wal_acceptors:
wa_factory.start_n_new(3, management_token)
with Postgres(
zenith_cli=zenith_cli,
repo_dir=repo_dir,
tenant_id=ps.initial_tenant,
port=55432, # FIXME port distribution is hardcoded in tests and in cli
).create_start(
branch,
wal_acceptors=wa_factory.get_connstrs() if with_wal_acceptors else None,
) as pg:
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
# we rely upon autocommit after each statement
# as waiting for acceptors happens there
cur.execute('CREATE TABLE t(key int primary key, value text)')
cur.execute("INSERT INTO t SELECT generate_series(1,100000), 'payload'")
cur.execute('SELECT sum(key) FROM t')
assert cur.fetchone() == (5000050000, )


@@ -1,10 +1,13 @@
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Create a couple of branches off the main branch, at a historical point in time.
#
def test_branch_behind(zenith_cli, pageserver, postgres, pg_bin):
def test_branch_behind(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin):
# Branch at the point where only 100 rows were inserted
zenith_cli.run(["branch", "test_branch_behind", "empty"])


@@ -1,25 +0,0 @@
from contextlib import closing
import psycopg2.extras
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test insertion of a large number of records
#
# This test is pretty tightly coupled with the current implementation of page version storage
# and garbage collection in object_repository.rs.
#
def test_bulk_insert(zenith_cli, pageserver, postgres, pg_bin):
zenith_cli.run(["branch", "test_bulk_insert", "empty"])
pg = postgres.create_start('test_bulk_insert')
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
cur.execute("create table t(c1 bigint, c2 bigint, c3 bigint, c4 bigint, c5 bigint)")
cur.execute("create index on t(c1)")
cur.execute("create index on t(c2)")
cur.execute("create index on t(c3)")
cur.execute("create index on t(c4)")
cur.execute("create index on t(c5)")
cur.execute("insert into t values (generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000))")
cur.execute("insert into t values (generate_series(1,1000000),random()*1000000,random()*1000000,random()*1000000,random()*1000000)")


@@ -0,0 +1,72 @@
import time
import os
from contextlib import closing
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test compute node start after clog truncation
#
def test_clog_truncate(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin):
# Create a branch for us
zenith_cli.run(["branch", "test_clog_truncate", "empty"])
# set aggressive autovacuum to make sure that truncation will happen
config = [
'autovacuum_max_workers=10', 'autovacuum_vacuum_threshold=0',
'autovacuum_vacuum_insert_threshold=0', 'autovacuum_vacuum_cost_delay=0',
'autovacuum_vacuum_cost_limit=10000', 'autovacuum_naptime =1s',
'autovacuum_freeze_max_age=100000'
]
pg = postgres.create_start('test_clog_truncate', config_lines=config)
print('postgres is running on test_clog_truncate branch')
# Install extension containing function needed for test
pg.safe_psql('CREATE EXTENSION zenith_test_utils')
# Consume many xids to advance clog
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
cur.execute('select test_consume_xids(1000*1000*10);')
print('xids consumed')
# call a checkpoint to trigger TruncateSubtrans
cur.execute('CHECKPOINT;')
# ensure WAL flush
cur.execute('select txid_current()')
print(cur.fetchone())
# wait for autovacuum to truncate the pg_xact
# XXX Is it worth adding a timeout here?
pg_xact_0000_path = os.path.join(pg.pg_xact_dir_path(), '0000')
print("pg_xact_0000_path = " + pg_xact_0000_path)
while os.path.isfile(pg_xact_0000_path):
print("file exists. wait for truncation. " "pg_xact_0000_path = " + pg_xact_0000_path)
time.sleep(5)
# checkpoint to advance latest lsn
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
cur.execute('CHECKPOINT;')
cur.execute('select pg_current_wal_insert_lsn()')
lsn_after_truncation = cur.fetchone()[0]
# create new branch after clog truncation and start a compute node on it
print('create branch at lsn_after_truncation ' + lsn_after_truncation)
zenith_cli.run(
["branch", "test_clog_truncate_new", "test_clog_truncate@" + lsn_after_truncation])
pg2 = postgres.create_start('test_clog_truncate_new')
print('postgres is running on test_clog_truncate_new branch')
# check that new node doesn't contain truncated segment
pg_xact_0000_path_new = os.path.join(pg2.pg_xact_dir_path(), '0000')
print("pg_xact_0000_path_new = " + pg_xact_0000_path_new)
assert os.path.isfile(pg_xact_0000_path_new) is False


@@ -1,12 +1,14 @@
from contextlib import closing
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test starting Postgres with custom options
#
def test_config(zenith_cli, pageserver, postgres, pg_bin):
def test_config(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin):
# Create a branch for us
zenith_cli.run(["branch", "test_config", "empty"])


@@ -1,4 +1,5 @@
from contextlib import closing
from fixtures.zenith_fixtures import ZenithPageserver, PostgresFactory, ZenithCli
pytest_plugins = ("fixtures.zenith_fixtures")
@@ -6,7 +7,12 @@ pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test CREATE DATABASE when there have been relmapper changes
#
def test_createdb(zenith_cli, pageserver, postgres, pg_bin):
def test_createdb(
zenith_cli: ZenithCli,
pageserver: ZenithPageserver,
postgres: PostgresFactory,
pg_bin,
):
zenith_cli.run(["branch", "test_createdb", "empty"])
pg = postgres.create_start('test_createdb')


@@ -1,12 +1,14 @@
from contextlib import closing
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test CREATE USER to check shared catalog restore
#
def test_createuser(zenith_cli, pageserver, postgres, pg_bin):
def test_createuser(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin):
zenith_cli.run(["branch", "test_createuser", "empty"])
pg = postgres.create_start('test_createuser')


@@ -1,4 +1,7 @@
import pytest
from contextlib import closing
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
import psycopg2.extras
pytest_plugins = ("fixtures.zenith_fixtures")
@@ -9,7 +12,12 @@ pytest_plugins = ("fixtures.zenith_fixtures")
# This test is pretty tightly coupled with the current implementation of page version storage
# and garbage collection in object_repository.rs.
#
def test_gc(zenith_cli, pageserver, postgres, pg_bin):
@pytest.mark.skip(reason="""
Current GC test is flaky and overly strict. Since we are migrating to the layered repo format
with a different GC implementation, let's just silence this test for now. This test only
works with the RocksDB implementation.
""")
def test_gc(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin):
zenith_cli.run(["branch", "test_gc", "empty"])
pg = postgres.create_start('test_gc')
@@ -30,7 +38,7 @@ def test_gc(zenith_cli, pageserver, postgres, pg_bin):
# before running the actual tests below, otherwise the counts won't match
# what we expect.
print("Running GC before test")
pscur.execute(f"do_gc {timeline} 0")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print("GC duration {elapsed} ms, relations: {n_relations}, dropped {dropped}, truncated: {truncated}, deleted: {deleted}".format_map(row))
# remember the number of relations
@@ -39,50 +47,50 @@ def test_gc(zenith_cli, pageserver, postgres, pg_bin):
# Insert a row. The first insert will also create a metadata entry for the
# relation, with size == 1 block. Hence, bump up the expected relation count.
n_relations += 1;
n_relations += 1
print("Inserting one row and running GC")
cur.execute("INSERT INTO foo VALUES (1)")
pscur.execute(f"do_gc {timeline} 0")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print("GC duration {elapsed} ms, relations: {n_relations}, dropped {dropped}, truncated: {truncated}, deleted: {deleted}".format_map(row))
assert row['n_relations'] == n_relations
assert row['dropped'] == 0
assert row['truncated'] == 30
assert row['deleted'] == 3
assert row['truncated'] == 31
assert row['deleted'] == 4
# Insert two more rows and run GC.
print("Inserting two more rows and running GC")
cur.execute("INSERT INTO foo VALUES (2)")
cur.execute("INSERT INTO foo VALUES (3)")
pscur.execute(f"do_gc {timeline} 0")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print("GC duration {elapsed} ms, relations: {n_relations}, dropped {dropped}, truncated: {truncated}, deleted: {deleted}".format_map(row))
assert row['n_relations'] == n_relations
assert row['dropped'] == 0
assert row['truncated'] == 30
assert row['deleted'] == 2
assert row['truncated'] == 31
assert row['deleted'] == 4
# Insert one more row. It creates one more page version, but doesn't affect the
# relation size.
print("Inserting one more row")
cur.execute("INSERT INTO foo VALUES (3)")
pscur.execute(f"do_gc {timeline} 0")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print("GC duration {elapsed} ms, relations: {n_relations}, dropped {dropped}, truncated: {truncated}, deleted: {deleted}".format_map(row))
assert row['n_relations'] == n_relations
assert row['dropped'] == 0
assert row['truncated'] == 30
assert row['deleted'] == 1
assert row['truncated'] == 31
assert row['deleted'] == 2
# Run GC again, with no changes in the database. Should not remove anything.
pscur.execute(f"do_gc {timeline} 0")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print("GC duration {elapsed} ms, relations: {n_relations}, dropped {dropped}, truncated: {truncated}, deleted: {deleted}".format_map(row))
assert row['n_relations'] == n_relations
assert row['dropped'] == 0
assert row['truncated'] == 30
assert row['truncated'] == 31
assert row['deleted'] == 0
#
@@ -90,7 +98,7 @@ def test_gc(zenith_cli, pageserver, postgres, pg_bin):
#
cur.execute("DROP TABLE foo")
pscur.execute(f"do_gc {timeline} 0")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print("GC duration {elapsed} ms, relations: {n_relations}, dropped {dropped}, truncated: {truncated}, deleted: {deleted}".format_map(row))
# Each relation fork is counted separately, hence 3.


@@ -1,3 +1,5 @@
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
@@ -7,7 +9,7 @@ pytest_plugins = ("fixtures.zenith_fixtures")
# it only checks next_multixact_id field in restored pg_control,
# since we don't have functions to check multixact internals.
#
def test_multixact(pageserver, postgres, pg_bin, zenith_cli, base_dir):
def test_multixact(pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin, zenith_cli, base_dir):
# Create a branch for us
zenith_cli.run(["branch", "test_multixact", "empty"])
pg = postgres.create_start('test_multixact')


@@ -1,4 +1,8 @@
import json
import uuid
import pytest
import psycopg2
from fixtures.zenith_fixtures import ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
@@ -9,14 +13,14 @@ def test_status(pageserver):
]
def test_branch_list(pageserver, zenith_cli):
def test_branch_list(pageserver: ZenithPageserver, zenith_cli):
# Create a branch for us
zenith_cli.run(["branch", "test_branch_list_main", "empty"])
conn = pageserver.connect()
cur = conn.cursor()
cur.execute('branch_list')
cur.execute(f'branch_list {pageserver.initial_tenant}')
branches = json.loads(cur.fetchone()[0])
# Filter out branches created by other tests
branches = [x for x in branches if x['name'].startswith('test_branch_list')]
@@ -32,7 +36,7 @@ def test_branch_list(pageserver, zenith_cli):
zenith_cli.run(['branch', 'test_branch_list_experimental', 'test_branch_list_main'])
zenith_cli.run(['pg', 'create', 'test_branch_list_experimental'])
cur.execute('branch_list')
cur.execute(f'branch_list {pageserver.initial_tenant}')
new_branches = json.loads(cur.fetchone()[0])
# Filter out branches created by other tests
new_branches = [x for x in new_branches if x['name'].startswith('test_branch_list')]
@@ -46,3 +50,27 @@ def test_branch_list(pageserver, zenith_cli):
assert new_branches[1] == branches[0]
conn.close()
def test_tenant_list(pageserver: ZenithPageserver, zenith_cli):
res = zenith_cli.run(["tenant", "list"])
res.check_returncode()
tenants = res.stdout.splitlines()
assert tenants == [pageserver.initial_tenant]
conn = pageserver.connect()
cur = conn.cursor()
# check same tenant cannot be created twice
with pytest.raises(psycopg2.DatabaseError, match=f'tenant {pageserver.initial_tenant} already exists'):
cur.execute(f'tenant_create {pageserver.initial_tenant}')
# create one more tenant
tenant1 = uuid.uuid4().hex
cur.execute(f'tenant_create {tenant1}')
cur.execute('tenant_list')
# compare tenants list
new_tenants = sorted(json.loads(cur.fetchone()[0]))
assert sorted([pageserver.initial_tenant, tenant1]) == new_tenants


@@ -1,8 +1,9 @@
from fixtures.zenith_fixtures import PostgresFactory
pytest_plugins = ("fixtures.zenith_fixtures")
def test_pgbench(pageserver, postgres, pg_bin, zenith_cli):
def test_pgbench(postgres: PostgresFactory, pg_bin, zenith_cli):
# Create a branch for us
zenith_cli.run(["branch", "test_pgbench", "empty"])


@@ -1,4 +1,7 @@
import pytest
from contextlib import closing
from fixtures.zenith_fixtures import ZenithPageserver, PostgresFactory
pytest_plugins = ("fixtures.zenith_fixtures")
@@ -6,10 +9,24 @@ pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test restarting and recreating a postgres instance
#
def test_restart_compute(zenith_cli, pageserver, postgres, pg_bin):
@pytest.mark.parametrize('with_wal_acceptors', [False, True])
def test_restart_compute(
zenith_cli,
pageserver: ZenithPageserver,
postgres: PostgresFactory,
pg_bin,
wa_factory,
with_wal_acceptors: bool,
):
wal_acceptor_connstrs = None
zenith_cli.run(["branch", "test_restart_compute", "empty"])
pg = postgres.create_start('test_restart_compute')
if with_wal_acceptors:
wa_factory.start_n_new(3)
wal_acceptor_connstrs = wa_factory.get_connstrs()
pg = postgres.create_start('test_restart_compute',
wal_acceptors=wal_acceptor_connstrs)
print("postgres is running on 'test_restart_compute' branch")
with closing(pg.connect()) as conn:
@@ -19,7 +36,8 @@ def test_restart_compute(zenith_cli, pageserver, postgres, pg_bin):
cur.execute("INSERT INTO foo VALUES ('bar')")
# Stop and restart the Postgres instance
pg.stop_and_destroy().create_start('test_restart_compute')
pg.stop_and_destroy().create_start('test_restart_compute',
wal_acceptors=wal_acceptor_connstrs)
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
@@ -33,7 +51,8 @@ def test_restart_compute(zenith_cli, pageserver, postgres, pg_bin):
assert cur.fetchone() == (2, )
# Stop, and destroy the Postgres instance. Then recreate and restart it.
pg.stop_and_destroy().create_start('test_restart_compute')
pg.stop_and_destroy().create_start('test_restart_compute',
wal_acceptors=wal_acceptor_connstrs)
with closing(pg.connect()) as conn:
with conn.cursor() as cur:


@@ -1,26 +0,0 @@
from contextlib import closing
import psycopg2.extras
import time
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test insertion of a large number of records
#
# This test is pretty tightly coupled with the current implementation of page version storage
# and garbage collection in object_repository.rs.
#
def test_seq_scan(zenith_cli, pageserver, postgres, pg_bin):
zenith_cli.run(["branch", "test_seq_scan", "empty"])
pg = postgres.create_start('test_seq_scan')
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
cur.execute("create table t(c1 bigint, c2 bigint, c3 bigint, c4 bigint, c5 bigint)")
cur.execute("insert into t values (generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000),generate_series(1,1000000))")
cur.execute("set max_parallel_workers_per_gather=0");
for i in range(100):
start = time.time()
cur.execute("select count(*) from t");
stop = time.time()
print(f'Elapsed time for iterating through 1000000 records is {stop - start}')


@@ -0,0 +1,124 @@
from contextlib import closing
import psycopg2.extras
import time
pytest_plugins = ("fixtures.zenith_fixtures")
def print_gc_result(row):
print("GC duration {elapsed} ms".format_map(row));
print(" REL total: {snapshot_relfiles_total}, needed_by_cutoff {snapshot_relfiles_needed_by_cutoff}, needed_by_branches: {snapshot_relfiles_needed_by_branches}, not_updated: {snapshot_relfiles_not_updated}, removed: {snapshot_relfiles_removed}, dropped: {snapshot_relfiles_dropped}".format_map(row))
print(" NONREL total: {snapshot_nonrelfiles_total}, needed_by_cutoff {snapshot_nonrelfiles_needed_by_cutoff}, needed_by_branches: {snapshot_nonrelfiles_needed_by_branches}, not_updated: {snapshot_nonrelfiles_not_updated}, removed: {snapshot_nonrelfiles_removed}, dropped: {snapshot_nonrelfiles_dropped}".format_map(row))
#
# Test Garbage Collection of old snapshot files
#
# This test is pretty tightly coupled with the current implementation of layered
# storage, in layered_repository.rs.
#
def test_snapfiles_gc(zenith_cli, pageserver, postgres, pg_bin):
zenith_cli.run(["branch", "test_snapfiles_gc", "empty"])
pg = postgres.create_start('test_snapfiles_gc')
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
with closing(pageserver.connect()) as psconn:
with psconn.cursor(cursor_factory = psycopg2.extras.DictCursor) as pscur:
# Get the timeline ID of our branch. We need it for the 'do_gc' command
cur.execute("SHOW zenith.zenith_timeline")
timeline = cur.fetchone()[0]
# Create a test table
cur.execute("CREATE TABLE foo(x integer)")
cur.execute("INSERT INTO foo VALUES (1)")
cur.execute("select relfilenode from pg_class where oid = 'foo'::regclass");
row = cur.fetchone();
print("relfilenode is {}", row[0]);
# Run GC, to clear out any garbage left behind in the catalogs by
# the CREATE TABLE command. We want to have a clean slate with no garbage
# before running the actual tests below, otherwise the counts won't match
# what we expect.
#
# Also run vacuum first to make it less likely that autovacuum or pruning
# kicks in and confuses our numbers.
cur.execute("VACUUM")
# delete the row, to update the Visibility Map. We don't want the VM
# update to confuse our numbers either.
cur.execute("DELETE FROM foo")
print("Running GC before test")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print_gc_result(row);
# remember the number of files
snapshot_relfiles_remain = row['snapshot_relfiles_total'] - row['snapshot_relfiles_removed']
assert snapshot_relfiles_remain > 0
# Insert a row.
print("Inserting one row and running GC")
cur.execute("INSERT INTO foo VALUES (1)")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print_gc_result(row);
assert row['snapshot_relfiles_total'] == snapshot_relfiles_remain + 1
assert row['snapshot_relfiles_removed'] == 1
assert row['snapshot_relfiles_dropped'] == 0
# Insert two more rows and run GC.
# This should create a new snapshot file with the new contents, and
# remove the old one.
print("Inserting two more rows and running GC")
cur.execute("INSERT INTO foo VALUES (2)")
cur.execute("INSERT INTO foo VALUES (3)")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print_gc_result(row);
assert row['snapshot_relfiles_total'] == snapshot_relfiles_remain + 1
assert row['snapshot_relfiles_removed'] == 1
assert row['snapshot_relfiles_dropped'] == 0
# Do it again. Should again create a new snapshot file and remove old one.
print("Inserting two more rows and running GC")
cur.execute("INSERT INTO foo VALUES (2)")
cur.execute("INSERT INTO foo VALUES (3)")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print_gc_result(row);
assert row['snapshot_relfiles_total'] == snapshot_relfiles_remain + 1
assert row['snapshot_relfiles_removed'] == 1
assert row['snapshot_relfiles_dropped'] == 0
# Run GC again, with no changes in the database. Should not remove anything.
print("Run GC again, with nothing to do")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print_gc_result(row);
assert row['snapshot_relfiles_total'] == snapshot_relfiles_remain
assert row['snapshot_relfiles_removed'] == 0
assert row['snapshot_relfiles_dropped'] == 0
#
# Test DROP TABLE checks that relation data and metadata was deleted by GC from object storage
#
print("Drop table and run GC again");
cur.execute("DROP TABLE foo")
pscur.execute(f"do_gc {pageserver.initial_tenant} {timeline} 0")
row = pscur.fetchone()
print_gc_result(row);
# Each relation fork is counted separately, hence 3.
assert row['snapshot_relfiles_dropped'] == 3
# The catalog updates also create new snapshot files of the catalogs, which
# are counted as 'removed'
assert row['snapshot_relfiles_removed'] > 0
# TODO: perhaps we should count catalog and user relations separately,
# to make this kind of testing more robust


@@ -0,0 +1,48 @@
from contextlib import closing
import pytest
from fixtures.zenith_fixtures import (
TenantFactory,
ZenithCli,
PostgresFactory,
)
@pytest.mark.parametrize('with_wal_acceptors', [False, True])
def test_tenants_normal_work(
zenith_cli: ZenithCli,
tenant_factory: TenantFactory,
postgres: PostgresFactory,
wa_factory,
with_wal_acceptors: bool,
):
"""Tests tenants with and without wal acceptors"""
tenant_1 = tenant_factory.create()
tenant_2 = tenant_factory.create()
zenith_cli.run(["branch", f"test_tenants_normal_work_with_wal_acceptors{with_wal_acceptors}", "main", f"--tenantid={tenant_1}"])
zenith_cli.run(["branch", f"test_tenants_normal_work_with_wal_acceptors{with_wal_acceptors}", "main", f"--tenantid={tenant_2}"])
if with_wal_acceptors:
wa_factory.start_n_new(3)
pg_tenant1 = postgres.create_start(
f"test_tenants_normal_work_with_wal_acceptors{with_wal_acceptors}",
tenant_1,
wal_acceptors=wa_factory.get_connstrs() if with_wal_acceptors else None,
)
pg_tenant2 = postgres.create_start(
f"test_tenants_normal_work_with_wal_acceptors{with_wal_acceptors}",
tenant_2,
wal_acceptors=wa_factory.get_connstrs() if with_wal_acceptors else None,
)
for pg in [pg_tenant1, pg_tenant2]:
with closing(pg.connect()) as conn:
with conn.cursor() as cur:
# we rely upon autocommit after each statement
# as waiting for acceptors happens there
cur.execute("CREATE TABLE t(key int primary key, value text)")
cur.execute("INSERT INTO t SELECT generate_series(1,100000), 'payload'")
cur.execute("SELECT sum(key) FROM t")
assert cur.fetchone() == (5000050000,)


@@ -1,10 +1,13 @@
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
#
# Test branching, when a transaction is in prepared state
#
def test_twophase(zenith_cli, pageserver, postgres, pg_bin):
def test_twophase(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin):
zenith_cli.run(["branch", "test_twophase", "empty"])
pg = postgres.create_start('test_twophase', config_lines=['max_prepared_transactions=5'])
@@ -28,8 +31,10 @@ def test_twophase(zenith_cli, pageserver, postgres, pg_bin):
# Create a branch with the transaction in prepared state
zenith_cli.run(["branch", "test_twophase_prepared", "test_twophase"])
pg2 = postgres.create_start('test_twophase_prepared',
config_lines=['max_prepared_transactions=5'])
pg2 = postgres.create_start(
'test_twophase_prepared',
config_lines=['max_prepared_transactions=5'],
)
conn2 = pg2.connect()
cur2 = conn2.cursor()


@@ -4,13 +4,14 @@ import time
from contextlib import closing
from multiprocessing import Process, Value
from fixtures.zenith_fixtures import WalAcceptorFactory, ZenithPageserver, PostgresFactory
pytest_plugins = ("fixtures.zenith_fixtures")
# basic test, write something in setup with wal acceptors, ensure that commits
# succeed and data is written
def test_normal_work(zenith_cli, pageserver, postgres, wa_factory):
def test_normal_work(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, wa_factory):
zenith_cli.run(["branch", "test_wal_acceptors_normal_work", "empty"])
wa_factory.start_n_new(3)
pg = postgres.create_start('test_wal_acceptors_normal_work',
@@ -28,7 +29,7 @@ def test_normal_work(zenith_cli, pageserver, postgres, wa_factory):
# Run page server and multiple acceptors, and multiple compute nodes running
# against different timelines.
def test_many_timelines(zenith_cli, pageserver, postgres, wa_factory):
def test_many_timelines(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, wa_factory):
n_timelines = 2
wa_factory.start_n_new(3)
@@ -60,7 +61,7 @@ def test_many_timelines(zenith_cli, pageserver, postgres, wa_factory):
# Check that dead minority doesn't prevent the commits: execute insert n_inserts
# times, with fault_probability chance of getting a wal acceptor down or up
# along the way. 2 of 3 are always alive, so the work keeps going.
def test_restarts(zenith_cli, pageserver, postgres, wa_factory):
def test_restarts(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, wa_factory: WalAcceptorFactory):
fault_probability = 0.01
n_inserts = 1000
n_acceptors = 3
@@ -101,7 +102,7 @@ def delayed_wal_acceptor_start(wa):
# When majority of acceptors is offline, commits are expected to be frozen
def test_unavailability(zenith_cli, pageserver, postgres, wa_factory):
def test_unavailability(zenith_cli, postgres: PostgresFactory, wa_factory):
wa_factory.start_n_new(2)
zenith_cli.run(["branch", "test_wal_acceptors_unavailability", "empty"])
@@ -171,7 +172,7 @@ def stop_value():
# do inserts while concurrently getting up/down subsets of acceptors
def test_race_conditions(zenith_cli, pageserver, postgres, wa_factory, stop_value):
def test_race_conditions(zenith_cli, pageserver: ZenithPageserver, postgres: PostgresFactory, wa_factory, stop_value):
wa_factory.start_n_new(3)


@@ -1,43 +1,50 @@
import json
import uuid
from fixtures.zenith_fixtures import ZenithCli, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
def helper_compare_branch_list(page_server_cur, zenith_cli):
def helper_compare_branch_list(page_server_cur, zenith_cli, initial_tenant: str):
"""
Compare branches list returned by CLI and directly via API.
Filters out branches created by other tests.
"""
page_server_cur.execute('branch_list')
page_server_cur.execute(f'branch_list {initial_tenant}')
branches_api = sorted(map(lambda b: b['name'], json.loads(page_server_cur.fetchone()[0])))
branches_api = [b for b in branches_api if b.startswith('test_cli_') or b in ('empty', 'main')]
res = zenith_cli.run(["branch"])
assert res.stderr == ''
res.check_returncode()
branches_cli = sorted(map(lambda b: b.split(':')[-1].strip(), res.stdout.strip().split("\n")))
branches_cli = [b for b in branches_cli if b.startswith('test_cli_') or b in ('empty', 'main')]
assert branches_api == branches_cli
res = zenith_cli.run(["branch", f"--tenantid={initial_tenant}"])
res.check_returncode()
branches_cli_with_tenant_arg = sorted(map(lambda b: b.split(':')[-1].strip(), res.stdout.strip().split("\n")))
branches_cli_with_tenant_arg = [b for b in branches_cli_with_tenant_arg if b.startswith('test_cli_') or b in ('empty', 'main')]
assert branches_api == branches_cli == branches_cli_with_tenant_arg
def test_cli_branch_list(pageserver, zenith_cli):
def test_cli_branch_list(pageserver: ZenithPageserver, zenith_cli):
page_server_conn = pageserver.connect()
page_server_cur = page_server_conn.cursor()
# Initial sanity check
helper_compare_branch_list(page_server_cur, zenith_cli)
helper_compare_branch_list(page_server_cur, zenith_cli, pageserver.initial_tenant)
# Create a branch for us
res = zenith_cli.run(["branch", "test_cli_branch_list_main", "main"])
assert res.stderr == ''
helper_compare_branch_list(page_server_cur, zenith_cli)
helper_compare_branch_list(page_server_cur, zenith_cli, pageserver.initial_tenant)
# Create a nested branch
res = zenith_cli.run(["branch", "test_cli_branch_list_nested", "test_cli_branch_list_main"])
assert res.stderr == ''
helper_compare_branch_list(page_server_cur, zenith_cli)
helper_compare_branch_list(page_server_cur, zenith_cli, pageserver.initial_tenant)
# Check that all new branches are visible via CLI
res = zenith_cli.run(["branch"])
@@ -46,3 +53,45 @@ def test_cli_branch_list(pageserver, zenith_cli):
assert 'test_cli_branch_list_main' in branches_cli
assert 'test_cli_branch_list_nested' in branches_cli
def helper_compare_tenant_list(page_server_cur, zenith_cli: ZenithCli):
page_server_cur.execute('tenant_list')
tenants_api = sorted(json.loads(page_server_cur.fetchone()[0]))
res = zenith_cli.run(["tenant", "list"])
assert res.stderr == ''
tenants_cli = sorted(res.stdout.splitlines())
assert tenants_api == tenants_cli
def test_cli_tenant_list(pageserver: ZenithPageserver, zenith_cli: ZenithCli):
page_server_conn = pageserver.connect()
page_server_cur = page_server_conn.cursor()
# Initial sanity check
helper_compare_tenant_list(page_server_cur, zenith_cli)
# Create new tenant
tenant1 = uuid.uuid4().hex
res = zenith_cli.run(["tenant", "create", tenant1])
res.check_returncode()
# check tenant1 appeared
helper_compare_tenant_list(page_server_cur, zenith_cli)
# Create new tenant
tenant2 = uuid.uuid4().hex
res = zenith_cli.run(["tenant", "create", tenant2])
res.check_returncode()
# check tenant2 appeared
helper_compare_tenant_list(page_server_cur, zenith_cli)
res = zenith_cli.run(["tenant", "list"])
res.check_returncode()
tenants = sorted(res.stdout.splitlines())
assert pageserver.initial_tenant in tenants
assert tenant1 in tenants
assert tenant2 in tenants

View File

@@ -1,11 +1,12 @@
import os
from fixtures.utils import mkdir_if_needed
from fixtures.zenith_fixtures import ZenithPageserver, PostgresFactory
pytest_plugins = ("fixtures.zenith_fixtures")
def test_isolation(pageserver, postgres, pg_bin, zenith_cli, test_output_dir, pg_distrib_dir,
def test_isolation(pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin, zenith_cli, test_output_dir, pg_distrib_dir,
base_dir, capsys):
# Create a branch for us

View File

@@ -1,11 +1,12 @@
import os
from fixtures.utils import mkdir_if_needed
from fixtures.zenith_fixtures import PostgresFactory, ZenithPageserver
pytest_plugins = ("fixtures.zenith_fixtures")
def test_pg_regress(pageserver, postgres, pg_bin, zenith_cli, test_output_dir, pg_distrib_dir,
def test_pg_regress(pageserver: ZenithPageserver, postgres: PostgresFactory, pg_bin, zenith_cli, test_output_dir, pg_distrib_dir,
base_dir, capsys):
# Create a branch for us

View File

@@ -1,11 +1,12 @@
import os
from fixtures.utils import mkdir_if_needed
from fixtures.zenith_fixtures import PostgresFactory
pytest_plugins = ("fixtures.zenith_fixtures")
def test_zenith_regress(pageserver, postgres, pg_bin, zenith_cli, test_output_dir, pg_distrib_dir,
def test_zenith_regress(postgres: PostgresFactory, pg_bin, zenith_cli, test_output_dir, pg_distrib_dir,
base_dir, capsys):
# Create a branch for us

View File

@@ -1,13 +1,16 @@
import getpass
import os
import pathlib
import uuid
import psycopg2
import pytest
import shutil
import signal
import subprocess
import time
from contextlib import closing
from pathlib import Path
from dataclasses import dataclass
# Type-related stuff
from psycopg2.extensions import connection as PgConnection
@@ -83,25 +86,32 @@ class PgProtocol:
def __init__(self, host: str, port: int, username: Optional[str] = None):
self.host = host
self.port = port
self.username = username or getpass.getuser()
self.username = username or "zenith_admin"
def connstr(self, *, dbname: str = 'postgres', username: Optional[str] = None) -> str:
def connstr(self, *, dbname: str = 'postgres', username: Optional[str] = None, password: Optional[str] = None) -> str:
"""
Build a libpq connection string for the Postgres instance.
"""
username = username or self.username
return f'host={self.host} port={self.port} user={username} dbname={dbname}'
res = f'host={self.host} port={self.port} user={username} dbname={dbname}'
if not password:
return res
return f'{res} password={password}'
# autocommit=True here by default because that's what we need most of the time
def connect(self, *, autocommit=True, **kwargs: Any) -> PgConnection:
def connect(self, *, autocommit=True, dbname: str = 'postgres', username: Optional[str] = None, password: Optional[str] = None) -> PgConnection:
"""
Connect to the node.
Returns psycopg2's connection object.
This method passes all extra params to connstr.
"""
conn = psycopg2.connect(self.connstr(**kwargs))
conn = psycopg2.connect(self.connstr(
dbname=dbname,
username=username,
password=password,
))
# WARNING: this setting affects *all* tests!
conn.autocommit = autocommit
return conn
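For illustration, a minimal usage sketch of the connstr()/connect() changes above (the port and password values here are made up for the example, not taken from the test suite):

    # Hypothetical values, only to show the resulting libpq string.
    pg = PgProtocol(host='localhost', port=55432)
    print(pg.connstr(dbname='postgres', password='secret'))
    # -> host=localhost port=55432 user=zenith_admin dbname=postgres password=secret
    conn = pg.connect(password='secret')  # autocommit=True by default
    with conn.cursor() as cur:
        cur.execute('SELECT 1')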
@@ -169,17 +179,19 @@ class ZenithPageserver(PgProtocol):
""" An object representing a running pageserver. """
def __init__(self, zenith_cli: ZenithCli):
super().__init__(host='localhost', port=DEFAULT_PAGESERVER_PORT)
self.zenith_cli = zenith_cli
self.running = False
self.initial_tenant = None
def init(self) -> 'ZenithPageserver':
def init(self, enable_auth: bool = False) -> 'ZenithPageserver':
"""
Initialize the repository, i.e. run "zenith init".
Returns self.
"""
self.zenith_cli.run(['init'])
cmd = ['init']
if enable_auth:
cmd.append('--enable-auth')
self.zenith_cli.run(cmd)
return self
def start(self) -> 'ZenithPageserver':
@@ -190,6 +202,8 @@ class ZenithPageserver(PgProtocol):
self.zenith_cli.run(['start'])
self.running = True
# get newly created tenant id
self.initial_tenant = self.zenith_cli.run(['tenant', 'list']).stdout.strip()
return self
def stop(self) -> 'ZenithPageserver':
@@ -204,6 +218,12 @@ class ZenithPageserver(PgProtocol):
return self
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
self.stop()
@zenfixture
def pageserver(zenith_cli: ZenithCli) -> Iterator[ZenithPageserver]:
@@ -232,20 +252,23 @@ def pageserver(zenith_cli: ZenithCli) -> Iterator[ZenithPageserver]:
class Postgres(PgProtocol):
""" An object representing a running postgres daemon. """
def __init__(self, zenith_cli: ZenithCli, repo_dir: str, instance_num: int):
super().__init__(host='localhost', port=55431 + instance_num)
def __init__(self, zenith_cli: ZenithCli, repo_dir: str, tenant_id: str, port: int):
super().__init__(host='localhost', port=port)
self.zenith_cli = zenith_cli
self.instance_num = instance_num
self.running = False
self.repo_dir = repo_dir
self.branch: Optional[str] = None # dubious, see asserts below
# path to conf is <repo_dir>/pgdatadirs/<branch_name>/postgresql.conf
self.tenant_id = tenant_id
# path to conf is <repo_dir>/pgdatadirs/tenants/<tenant_id>/<branch_name>/postgresql.conf
def create(self,
branch: str,
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None) -> 'Postgres':
def create(
self,
branch: str,
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None,
config_only: bool = False,
) -> 'Postgres':
"""
Create the pg data directory.
If wal_acceptors is not None, node will use wal acceptors; config is
@@ -256,7 +279,10 @@ class Postgres(PgProtocol):
if not config_lines:
config_lines = []
self.zenith_cli.run(['pg', 'create', branch])
if config_only:
self.zenith_cli.run(['pg', 'create', '--config-only', branch, f'--tenantid={self.tenant_id}'])
else:
self.zenith_cli.run(['pg', 'create', branch, f'--tenantid={self.tenant_id}'])
self.branch = branch
if wal_acceptors is not None:
self.adjust_for_wal_acceptors(wal_acceptors)
@@ -273,14 +299,19 @@ class Postgres(PgProtocol):
"""
assert self.branch is not None
self.zenith_cli.run(['pg', 'start', self.branch])
self.zenith_cli.run(['pg', 'start', self.branch, f'--tenantid={self.tenant_id}'])
self.running = True
return self
def pg_xact_dir_path(self) -> str:
""" Path to pg_xact dir """
path = pathlib.Path('pgdatadirs') / 'tenants' / self.tenant_id / self.branch / 'pg_xact'
return os.path.join(self.repo_dir, path)
def config_file_path(self) -> str:
""" Path to postgresql.conf """
filename = f'pgdatadirs/{self.branch}/postgresql.conf'
filename = pathlib.Path('pgdatadirs') / 'tenants' / self.tenant_id / self.branch / 'postgresql.conf'
return os.path.join(self.repo_dir, filename)
def adjust_for_wal_acceptors(self, wal_acceptors: str) -> 'Postgres':
@@ -326,7 +357,7 @@ class Postgres(PgProtocol):
if self.running:
assert self.branch is not None
self.zenith_cli.run(['pg', 'stop', self.branch])
self.zenith_cli.run(['pg', 'stop', self.branch, f'--tenantid={self.tenant_id}'])
self.running = False
return self
@@ -338,42 +369,120 @@ class Postgres(PgProtocol):
"""
assert self.branch is not None
self.zenith_cli.run(['pg', 'stop', '--destroy', self.branch])
assert self.tenant_id is not None
self.zenith_cli.run(['pg', 'stop', '--destroy', self.branch, f'--tenantid={self.tenant_id}'])
return self
def create_start(self,
branch: str,
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None) -> 'Postgres':
def create_start(
self,
branch: str,
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None,
) -> 'Postgres':
"""
Create a Postgres instance, then start it.
Create a Postgres instance, apply config
and then start it.
Returns self.
"""
self.create(branch, wal_acceptors, config_lines).start()
self.create(
branch=branch,
wal_acceptors=wal_acceptors,
config_lines=config_lines,
config_only=True,
).start()
return self
def __enter__(self):
return self
def __exit__(self, exc_type, exc, tb):
self.stop()
class PostgresFactory:
""" An object representing multiple running postgres daemons. """
def __init__(self, zenith_cli: ZenithCli, repo_dir: str):
def __init__(self, zenith_cli: ZenithCli, repo_dir: str, initial_tenant: str, base_port: int = 55431):
self.zenith_cli = zenith_cli
self.repo_dir = repo_dir
self.num_instances = 0
self.instances: List[Postgres] = []
self.initial_tenant: str = initial_tenant
self.base_port = base_port
def create_start(self,
branch: str = "main",
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None) -> Postgres:
def create_start(
self,
branch: str = "main",
tenant_id: Optional[str] = None,
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None
) -> Postgres:
pg = Postgres(
zenith_cli=self.zenith_cli,
repo_dir=self.repo_dir,
tenant_id=tenant_id or self.initial_tenant,
port=self.base_port + self.num_instances + 1,
)
pg = Postgres(self.zenith_cli, self.repo_dir, self.num_instances + 1)
self.num_instances += 1
self.instances.append(pg)
return pg.create_start(branch, wal_acceptors, config_lines)
return pg.create_start(
branch=branch,
wal_acceptors=wal_acceptors,
config_lines=config_lines,
)
def create(
self,
branch: str = "main",
tenant_id: Optional[str] = None,
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None
) -> Postgres:
pg = Postgres(
zenith_cli=self.zenith_cli,
repo_dir=self.repo_dir,
tenant_id=tenant_id or self.initial_tenant,
port=self.base_port + self.num_instances + 1,
)
self.num_instances += 1
self.instances.append(pg)
return pg.create(
branch=branch,
wal_acceptors=wal_acceptors,
config_lines=config_lines,
)
def config(
self,
branch: str = "main",
tenant_id: Optional[str] = None,
wal_acceptors: Optional[str] = None,
config_lines: Optional[List[str]] = None
) -> Postgres:
pg = Postgres(
zenith_cli=self.zenith_cli,
repo_dir=self.repo_dir,
tenant_id=tenant_id or self.initial_tenant,
port=self.base_port + self.num_instances + 1,
)
self.num_instances += 1
self.instances.append(pg)
return pg.config(
branch=branch,
wal_acceptors=wal_acceptors,
config_lines=config_lines,
)
def stop_all(self) -> 'PostgresFactory':
for pg in self.instances:
@@ -381,10 +490,14 @@ class PostgresFactory:
return self
@zenfixture
def initial_tenant(pageserver: ZenithPageserver):
return pageserver.initial_tenant
@zenfixture
def postgres(zenith_cli: ZenithCli, repo_dir: str) -> Iterator[PostgresFactory]:
pgfactory = PostgresFactory(zenith_cli, repo_dir)
def postgres(zenith_cli: ZenithCli, initial_tenant: str, repo_dir: str) -> Iterator[PostgresFactory]:
pgfactory = PostgresFactory(zenith_cli, repo_dir, initial_tenant=initial_tenant)
yield pgfactory
@@ -453,25 +566,27 @@ def pg_bin(test_output_dir: str, pg_distrib_dir: str) -> PgBin:
return PgBin(test_output_dir, pg_distrib_dir)
def read_pid(path):
def read_pid(path: Path):
""" Read content of file into number """
return int(Path(path).read_text())
return int(path.read_text())
@dataclass
class WalAcceptor:
""" An object representing a running wal acceptor daemon. """
def __init__(self, wa_binpath, data_dir, port, num):
self.wa_binpath = wa_binpath
self.data_dir = data_dir
self.port = port
self.num = num # identifier for logging
bin_path: Path
data_dir: Path
port: int
num: int # identifier for logging
auth_token: Optional[str] = None
def start(self) -> 'WalAcceptor':
# create data directory if not exists
Path(self.data_dir).mkdir(parents=True, exist_ok=True)
self.data_dir.mkdir(parents=True, exist_ok=True)
self.pidfile.unlink(missing_ok=True)
cmd = [self.wa_binpath]
cmd.extend(["-D", self.data_dir])
cmd = [str(self.bin_path)]
cmd.extend(["-D", str(self.data_dir)])
cmd.extend(["-l", "localhost:{}".format(self.port)])
cmd.append("--daemonize")
cmd.append("--no-sync")
@@ -479,62 +594,80 @@ class WalAcceptor:
cmd.extend(["--pageserver", "localhost:{}".format(DEFAULT_PAGESERVER_PORT)])
cmd.extend(["--recall", "1 second"])
print('Running command "{}"'.format(' '.join(cmd)))
subprocess.run(cmd, check=True)
env = {'PAGESERVER_AUTH_TOKEN': self.auth_token} if self.auth_token else None
subprocess.run(cmd, check=True, env=env)
return self
# wait for the wal acceptor to start by checking that the pid is readable
for _ in range(3):
pid = self.get_pid()
if pid is not None:
return self
time.sleep(0.5)
raise RuntimeError("cannot get wal acceptor pid")
@property
def pidfile(self) -> Path:
return self.data_dir / "wal_acceptor.pid"
def get_pid(self) -> Optional[int]:
if not self.pidfile.exists():
return None
try:
pid = read_pid(self.pidfile)
except ValueError:
return None
return pid
def stop(self) -> 'WalAcceptor':
print('Stopping wal acceptor {}'.format(self.num))
pidfile_path = os.path.join(self.data_dir, "wal_acceptor.pid")
try:
pid = read_pid(pidfile_path)
try:
os.kill(pid, signal.SIGTERM)
except Exception:
pass # pidfile might be obsolete
# TODO: cleanup pid file on exit in wal acceptor
return self
# for _ in range(5):
# print('waiting wal acceptor {} (pid {}) to stop...', self.num, pid)
# try:
# read_pid(pidfile_path)
# except FileNotFoundError:
# return # done
# time.sleep(1)
# raise Exception('Failed to wait for wal acceptor {} shutdown'.format(self.num))
except FileNotFoundError:
pid = self.get_pid()
if pid is None:
print("Wal acceptor {} is not running".format(self.num))
return self
try:
os.kill(pid, signal.SIGTERM)
except Exception:
# TODO: cleanup pid file on exit in wal acceptor
pass # pidfile might be obsolete
return self
class WalAcceptorFactory:
""" An object representing multiple running wal acceptors. """
def __init__(self, zenith_binpath, data_dir):
self.wa_binpath = os.path.join(zenith_binpath, 'wal_acceptor')
def __init__(self, zenith_binpath: Path, data_dir: Path):
self.wa_binpath = zenith_binpath / 'wal_acceptor'
self.data_dir = data_dir
self.instances = []
self.instances: List[WalAcceptor] = []
self.initial_port = 54321
def start_new(self) -> WalAcceptor:
def start_new(self, auth_token: Optional[str] = None) -> WalAcceptor:
"""
Start new wal acceptor.
"""
wa_num = len(self.instances)
wa = WalAcceptor(self.wa_binpath,
os.path.join(self.data_dir, "wal_acceptor_{}".format(wa_num)),
self.initial_port + wa_num, wa_num)
wa = WalAcceptor(
self.wa_binpath,
self.data_dir / "wal_acceptor_{}".format(wa_num),
self.initial_port + wa_num,
wa_num,
auth_token,
)
wa.start()
self.instances.append(wa)
return wa
def start_n_new(self, n: int) -> None:
def start_n_new(self, n: int, auth_token: Optional[str] = None) -> None:
"""
Start n new wal acceptors.
"""
for _ in range(n):
self.start_new()
self.start_new(auth_token)
def stop_all(self) -> 'WalAcceptorFactory':
for wa in self.instances:
@@ -549,7 +682,7 @@ class WalAcceptorFactory:
@zenfixture
def wa_factory(zenith_binpath: str, repo_dir: str) -> Iterator[WalAcceptorFactory]:
""" Gives WalAcceptorFactory providing wal acceptors. """
wafactory = WalAcceptorFactory(zenith_binpath, os.path.join(repo_dir, "wal_acceptors"))
wafactory = WalAcceptorFactory(Path(zenith_binpath), Path(repo_dir) / "wal_acceptors")
yield wafactory
# After the yield comes any cleanup code we need.
print('Starting wal acceptors cleanup')
@@ -636,3 +769,20 @@ def pg_distrib_dir(base_dir: str) -> str:
if not os.path.exists(os.path.join(pg_dir, 'bin/postgres')):
raise Exception('postgres not found at "{}"'.format(pg_dir))
return pg_dir
class TenantFactory:
def __init__(self, cli: ZenithCli):
self.cli = cli
def create(self, tenant_id: Optional[str] = None):
if tenant_id is None:
tenant_id = uuid.uuid4().hex
res = self.cli.run(['tenant', 'create', tenant_id])
res.check_returncode()
return tenant_id
@zenfixture
def tenant_factory(zenith_cli: ZenithCli):
return TenantFactory(zenith_cli)
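A minimal sketch of how the new tenant_factory fixture could be combined with the postgres fixture above (the test name and SQL are illustrative only, and it assumes a freshly created tenant already has a 'main' branch to start a compute node on):

    def test_two_tenants_sketch(tenant_factory, postgres):
        # Create an extra tenant and start a compute node on its 'main' branch.
        tenant = tenant_factory.create()  # fresh hex tenant id
        pg = postgres.create_start('main', tenant_id=tenant)
        cur = pg.connect().cursor()
        cur.execute('SELECT 1')
        assert cur.fetchone() == (1, )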

View File

@@ -8,6 +8,7 @@ edition = "2018"
[dependencies]
regex = "1.4.5"
bincode = "1.3"
bytes = "1.0.1"
byteorder = "1.4.3"
fs2 = "0.4.3"

View File

@@ -1,24 +1,64 @@
# WAL safekeeper
# WAL service
Also known as the WAL service, WAL keeper or WAL acceptor.
The WAL safekeeper acts as a holding area and redistribution center
for recently generated WAL. The primary Postgres server streams the
WAL to the WAL safekeeper, and treats it like a (synchronous)
The zenith WAL service acts as a holding area and redistribution
center for recently generated WAL. The primary Postgres server streams
the WAL to the WAL safekeeper, and treats it like a (synchronous)
replica. A replication slot is used in the primary to prevent the
primary from discarding WAL that hasn't been streamed to the
safekeeper yet.
primary from discarding WAL that hasn't been streamed to the WAL
service yet.
The primary connects to the WAL safekeeper, so it works in a "push"
+--------------+ +------------------+
| | WAL | |
| Compute node | ----------> | WAL Service |
| | | |
+--------------+ +------------------+
|
|
| WAL
|
|
V
+--------------+
| |
| Pageservers |
| |
+--------------+
The WAL service consists of multiple WAL safekeepers that all store a
copy of the WAL. A WAL record is considered durable when the majority
of safekeepers have received and stored the WAL to local disk. A
consensus algorithm based on Paxos is used to manage the quorum.
+-------------------------------------------+
| WAL Service |
| |
| |
| +------------+ |
| | safekeeper | |
| +------------+ |
| |
| +------------+ |
| | safekeeper | |
| +------------+ |
| |
| +------------+ |
| | safekeeper | |
| +------------+ |
| |
+-------------------------------------------+
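As a rough illustration of the majority rule described above (not code from the repository; the per-safekeeper flush positions are made up), the durable position is simply the quorum-th highest flush LSN:

    def committed_lsn(flush_lsns):
        """Highest LSN stored by a majority of the safekeepers."""
        quorum = len(flush_lsns) // 2 + 1
        # The quorum-th largest flush position is held by at least `quorum` nodes.
        return sorted(flush_lsns, reverse=True)[quorum - 1]

    # Three safekeepers that have flushed up to these (hypothetical) LSNs:
    assert committed_lsn([0x30, 0x10, 0x20]) == 0x20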
The primary connects to the WAL safekeepers, so it works in a "push"
fashion. That's different from how streaming replication usually
works, where the replica initiates the connection. To do that, there
is a component called the "WAL proposer". The WAL proposer is a
background worker that runs in the primary Postgres server. It
connects to the WAL safekeeper, and
sends all the WAL. (PostgreSQL's archive_commands works in the
"push" style, but it operates on a WAL segment granularity. If
PostgreSQL had a push style API for streaming, WAL propose could be
implemented using it.)
connects to the WAL safekeeper, and sends all the WAL. (PostgreSQL's
archive_command works in the "push" style, but it operates on WAL
segment granularity. If PostgreSQL had a push-style API for streaming,
the WAL proposer could be implemented using it.)
The Page Server connects to the WAL safekeeper, using the same
streaming replication protocol that's used between Postgres primary
@@ -33,4 +73,17 @@ safekeepers. The Paxos and crash recovery algorithm ensures that only
one primary node can be actively streaming WAL to the quorum of
safekeepers.
See README.md for a more detailed desription of the consensus protocol.
See README_PROTO.md for a more detailed description of the consensus
protocol. spec/ contains a TLA+ specification of it.
# Terminology
WAL service - The service as a whole that ensures that WAL is stored durably.
WAL safekeeper - One node that participates in the quorum. All the safekeepers
together form the WAL service.
WAL acceptor, WAL proposer - In the context of the consensus algorithm, the Postgres
compute node is also known as the WAL proposer, and the safekeeper is also known
as the acceptor. Those are the standard terms in the Paxos algorithm.

View File

@@ -0,0 +1,34 @@
\* MV CONSTANT declarations
CONSTANT NULL = NULL
CONSTANTS
p1 = p1
p2 = p2
p3 = p3
a1 = a1
a2 = a2
a3 = a3
\* MV CONSTANT definitions
CONSTANT
proposers = {p1, p2}
acceptors = {a1, a2, a3}
\* SYMMETRY definition
SYMMETRY perms
\* CONSTANT definitions
CONSTANT
max_term = 3
CONSTANT
max_entries = 3
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
TypeOk
ElectionSafety
LogIsMonotonic
LogSafety
CommittedNotOverwritten
CHECK_DEADLOCK FALSE

View File

@@ -0,0 +1,363 @@
---- MODULE ProposerAcceptorConsensus ----
\* Differences from current implementation:
\* - unified not-globally-unique epoch & term (node_id)
\* Simplifications:
\* - instant message delivery
\* - feedback is not modeled separately, commit_lsn is updated directly
EXTENDS Integers, Sequences, FiniteSets, TLC
VARIABLES
prop_state, \* prop_state[p] is state of proposer p
acc_state, \* acc_state[a] is state of acceptor a
commit_lsns \* map of acceptor -> commit_lsn
CONSTANT
acceptors,
proposers,
max_entries, \* model constraint: max log entries acceptor/proposer can hold
max_term \* model constraint: max allowed term
CONSTANT NULL
ASSUME max_entries \in Nat /\ max_term \in Nat
\* For specifying symmetry set in manual cfg file, see
\* https://github.com/tlaplus/tlaplus/issues/404
perms == Permutations(proposers) \union Permutations(acceptors)
\********************************************************************************
\* Helpers
\********************************************************************************
Maximum(S) ==
(*************************************************************************)
(* If S is a set of numbers, then this defines Maximum(S) to be the *)
(* maximum of those numbers, or -1 if S is empty. *)
(*************************************************************************)
IF S = {} THEN -1
ELSE CHOOSE n \in S : \A m \in S : n \geq m
\* minimum of numbers in the set, error if set is empty
Minimum(S) ==
CHOOSE min \in S : \A n \in S : min <= n
\* Min of two numbers
Min(a, b) == IF a < b THEN a ELSE b
\* Set of values of function f. XXX is there such a builtin?
FValues(f) == {f[a] : a \in DOMAIN f}
\* Sort of 0 for functions
EmptyF == [x \in {} |-> 42]
IsEmptyF(f) == DOMAIN f = {}
\* Next entry proposer p will push to acceptor a or NULL.
NextEntry(p, a) ==
IF Len(prop_state[p].wal) >= prop_state[p].next_send_lsn[a] THEN
CHOOSE r \in FValues(prop_state[p].wal) : r.lsn = prop_state[p].next_send_lsn[a]
ELSE
NULL
\*****************
NumAccs == Cardinality(acceptors)
\* does acc_set form the quorum?
Quorum(acc_set) == Cardinality(acc_set) >= (NumAccs \div 2 + 1)
\* all quorums of acceptors
Quorums == {subset \in SUBSET acceptors: Quorum(subset)}
\* flush_lsn of acceptor a.
FlushLsn(a) == Len(acc_state[a].wal)
\********************************************************************************
\* Type assertion
\********************************************************************************
\* Defining sets of all possible tuples and using them in TypeOk in the usual
\* all-tuples constructor is not practical because such definitions force
\* TLC to enumerate them, while they are horribly enormous
\* (TLC screams "Attempted to construct a set with too many elements").
\* So instead check types manually.
TypeOk ==
/\ \A p \in proposers:
/\ DOMAIN prop_state[p] = {"state", "term", "votes", "donor_epoch", "vcl", "wal", "next_send_lsn"}
\* in "campaign" the proposer sends RequestVote and waits for acks;
\* in "leader" it has been elected
/\ prop_state[p].state \in {"campaign", "leader"}
\* 0..max_term should actually be Nat in the unbounded model, but TLC won't
\* swallow it
/\ prop_state[p].term \in 0..max_term
\* votes received
/\ \A voter \in DOMAIN prop_state[p].votes:
/\ voter \in acceptors
/\ prop_state[p].votes[voter] \in [epoch: 0..max_term, flush_lsn: 0..max_entries]
/\ prop_state[p].donor_epoch \in 0..max_term
\* wal is sequence of just <lsn, epoch of author> records
/\ \A i \in DOMAIN prop_state[p].wal:
prop_state[p].wal[i] \in [lsn: 1..max_entries, epoch: 1..max_term]
\* Following the implementation, we skew the original Aurora meaning of this;
\* here it is the lsn of the highest definitely committed record, as set by the
\* proposer when it is elected; it doesn't change after that
/\ prop_state[p].vcl \in 0..max_entries
\* map of acceptor -> next lsn to send
/\ \A a \in DOMAIN prop_state[p].next_send_lsn:
/\ a \in acceptors
/\ prop_state[p].next_send_lsn[a] \in 1..(max_entries + 1)
/\ \A a \in acceptors:
/\ DOMAIN acc_state[a] = {"term", "epoch", "wal"}
/\ acc_state[a].term \in 0..max_term
/\ acc_state[a].epoch \in 0..max_term
/\ \A i \in DOMAIN acc_state[a].wal:
acc_state[a].wal[i] \in [lsn: 1..max_entries, epoch: 1..max_term]
/\ \A a \in DOMAIN commit_lsns:
/\ a \in acceptors
/\ commit_lsns[a] \in 0..max_entries
\********************************************************************************
\* Initial
\********************************************************************************
Init ==
/\ prop_state = [p \in proposers |-> [
state |-> "campaign",
term |-> 1,
votes |-> EmptyF,
donor_epoch |-> 0,
vcl |-> 0,
wal |-> << >>,
next_send_lsn |-> EmptyF
]]
/\ acc_state = [a \in acceptors |-> [
\* there will be no leader in this term; 1 is the first real one
term |-> 0,
epoch |-> 0,
wal |-> << >>
]]
/\ commit_lsns = [a \in acceptors |-> 0]
\********************************************************************************
\* Actions
\********************************************************************************
\* Proposer loses all state.
\* For simplicity (and to reduce state space), we assume it immediately gets
\* the current state from a quorum q of acceptors, determining the term it will
\* request votes for.
RestartProposer(p, q) ==
/\ Quorum(q)
/\ LET
new_term == Maximum({acc_state[a].term : a \in q}) + 1
IN
/\ new_term <= max_term
/\ prop_state' = [prop_state EXCEPT ![p].state = "campaign",
![p].term = new_term,
![p].votes = EmptyF,
![p].donor_epoch = 0,
![p].vcl = 0,
![p].wal = << >>,
![p].next_send_lsn = EmptyF]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* Acceptor a immediately votes for proposer p.
Vote(p, a) ==
/\ prop_state[p].state = "campaign"
/\ acc_state[a].term < prop_state[p].term \* main voting condition
/\ acc_state' = [acc_state EXCEPT ![a].term = prop_state[p].term]
/\ LET
vote == [epoch |-> acc_state[a].epoch, flush_lsn |-> FlushLsn(a)]
IN
prop_state' = [prop_state EXCEPT ![p].votes = prop_state[p].votes @@ (a :> vote)]
/\ UNCHANGED <<commit_lsns>>
\* Proposer p gets elected.
BecomeLeader(p) ==
/\ prop_state[p].state = "campaign"
/\ Quorum(DOMAIN prop_state[p].votes)
/\ LET
max_epoch == Maximum({v.epoch : v \in FValues(prop_state[p].votes)})
max_epoch_votes == {v \in FValues(prop_state[p].votes) : v.epoch = max_epoch}
donor == CHOOSE dv \in DOMAIN prop_state[p].votes :
/\ prop_state[p].votes[dv].epoch = max_epoch
/\ \A v \in max_epoch_votes:
prop_state[p].votes[dv].flush_lsn >= v.flush_lsn
max_vote == prop_state[p].votes[donor]
\* Establish lsn to stream from for voters.
\* At some point it seemed like we could regard the log as correct and only
\* append to it if it is in the max_epoch; however, TLC showed that's not
\* the case; we must always stream from the first non-matching record.
next_send_lsn == [voter \in DOMAIN prop_state[p].votes |-> 1]
IN
\* we fetch the log from the most advanced node (this is a separate
\* roundtrip); make sure the node is still on the same term as us
/\ acc_state[donor].term = prop_state[p].term
/\ prop_state' = [prop_state EXCEPT ![p].state = "leader",
\* fetch the log from donor
![p].wal = acc_state[donor].wal,
![p].donor_epoch = max_epoch,
![p].vcl = max_vote.flush_lsn,
![p].next_send_lsn = next_send_lsn]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* acceptor a learns about elected proposer p's term.
UpdateTerm(p, a) ==
/\ prop_state[p].state = "leader"
/\ acc_state[a].term < prop_state[p].term
/\ acc_state' = [acc_state EXCEPT ![a].term = prop_state[p].term]
/\ UNCHANGED <<prop_state, commit_lsns>>
\* Acceptor a which didn't participate in voting connects to elected proposer p
\* and p sets the streaming point
HandshakeWithLeader(p, a) ==
/\ prop_state[p].state = "leader"
/\ acc_state[a].term = prop_state[p].term
/\ a \notin DOMAIN prop_state[p].next_send_lsn
/\ LET
next_send_lsn == prop_state[p].next_send_lsn @@ (a :> 1)
IN
prop_state' = [prop_state EXCEPT ![p].next_send_lsn = next_send_lsn]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* Append new log entry to elected proposer
NewEntry(p) ==
/\ prop_state[p].state = "leader"
/\ Len(prop_state[p].wal) < max_entries \* model constraint
/\ LET
new_lsn == IF Len(prop_state[p].wal) = 0 THEN
prop_state[p].vcl + 1
ELSE
\* lsn of last record + 1
prop_state[p].wal[Len(prop_state[p].wal)].lsn + 1
new_entry == [lsn |-> new_lsn, epoch |-> prop_state[p].term]
IN
/\ prop_state' = [prop_state EXCEPT ![p].wal = Append(prop_state[p].wal, new_entry)]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* Write entry new_e to log wal, rolling back all higher entries if the existing
\* entry at that lsn is different.
\* If bump_epoch is TRUE, it means we got a record with lsn=vcl and are going to
\* update the epoch. Truncate the log in this case as well, as we might have a
\* correct <= vcl part and some outdated entries behind it which we want to purge
\* before declaring ourselves recovered. Another way to accomplish this (in a
\* previous commit) is to wait for the first entry from the new epoch before
\* bumping it.
WriteEntry(wal, new_e, bump_epoch) ==
(new_e.lsn :> new_e) @@
\* If wal has entry with such lsn and it is different, truncate all higher log.
IF \/ (new_e.lsn \in DOMAIN wal /\ wal[new_e.lsn] /= new_e)
\/ bump_epoch THEN
SelectSeq(wal, LAMBDA e: e.lsn < new_e.lsn)
ELSE
wal
\* Try to transfer entry from elected proposer p to acceptor a
TransferEntry(p, a) ==
/\ prop_state[p].state = "leader"
/\ prop_state[p].term = acc_state[a].term
/\ a \in DOMAIN prop_state[p].next_send_lsn
/\ LET
next_e == NextEntry(p, a)
IN
/\ next_e /= NULL
/\ LET
\* Consider bumping the epoch if getting this entry recovers the acceptor,
\* that is, we reach the first record at or past the VCL.
new_epoch ==
IF /\ acc_state[a].epoch < prop_state[p].term
/\ next_e.lsn >= prop_state[p].vcl
THEN
prop_state[p].term
ELSE
acc_state[a].epoch
\* Also check whether this entry allows us to advance commit_lsn and,
\* if so, bump it where possible. Modeling this as a separate action
\* significantly bloats the state space (5m vs 15m on max_entries=3,
\* max_term=3), so act immediately.
entry_owners == {o \in acceptors:
/\ o /= a
\* only recovered acceptors advance commit_lsn
/\ acc_state[o].epoch = prop_state[p].term
/\ next_e \in FValues(acc_state[o].wal)} \cup {a}
IN
/\ acc_state' = [acc_state EXCEPT ![a].wal = WriteEntry(acc_state[a].wal, next_e, new_epoch /= acc_state[a].epoch),
![a].epoch = new_epoch]
/\ prop_state' = [prop_state EXCEPT ![p].next_send_lsn[a] =
prop_state[p].next_send_lsn[a] + 1]
/\ commit_lsns' = IF /\ new_epoch = prop_state[p].term
/\ Quorum(entry_owners)
THEN
[acc \in acceptors |->
IF /\ acc \in entry_owners
/\ next_e.lsn > commit_lsns[acc]
THEN
next_e.lsn
ELSE
commit_lsns[acc]]
ELSE
commit_lsns
\*******************************************************************************
\* Final spec
\*******************************************************************************
Next ==
\/ \E q \in Quorums: \E p \in proposers: RestartProposer(p, q)
\/ \E p \in proposers: \E a \in acceptors: Vote(p, a)
\/ \E p \in proposers: BecomeLeader(p)
\/ \E p \in proposers: \E a \in acceptors: UpdateTerm(p, a)
\/ \E p \in proposers: \E a \in acceptors: HandshakeWithLeader(p, a)
\/ \E p \in proposers: NewEntry(p)
\/ \E p \in proposers: \E a \in acceptors: TransferEntry(p, a)
Spec == Init /\ [][Next]_<<prop_state, acc_state, commit_lsns>>
\********************************************************************************
\* Invariants
\********************************************************************************
\* we don't track history, but this property is fairly convincing anyway
ElectionSafety ==
\A p1, p2 \in proposers:
(/\ prop_state[p1].state = "leader"
/\ prop_state[p2].state = "leader"
/\ prop_state[p1].term = prop_state[p2].term) => (p1 = p2)
LogIsMonotonic ==
\A a \in acceptors:
\A i \in DOMAIN acc_state[a].wal: \A j \in DOMAIN acc_state[a].wal:
(i > j) => (/\ acc_state[a].wal[i].lsn > acc_state[a].wal[j].lsn
/\ acc_state[a].wal[i].epoch >= acc_state[a].wal[j].epoch)
\* Main invariant: log under commit_lsn must match everywhere.
LogSafety ==
\A a1 \in acceptors: \A a2 \in acceptors:
LET
common_len == Min(commit_lsns[a1], commit_lsns[a2])
IN
SubSeq(acc_state[a1].wal, 1, common_len) = SubSeq(acc_state[a2].wal, 1, common_len)
\* The next record we are going to push to an acceptor must never overwrite a
\* different committed record.
CommittedNotOverwritten ==
\A p \in proposers: \A a \in acceptors:
(/\ prop_state[p].state = "leader"
/\ prop_state[p].term = acc_state[a].term
/\ a \in DOMAIN prop_state[p].next_send_lsn) =>
LET
next_e == NextEntry(p, a)
IN
(next_e /= NULL) =>
((commit_lsns[a] >= next_e.lsn) => (acc_state[a].wal[next_e.lsn] = next_e))
====
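To make the LogSafety invariant above concrete outside of TLA+, here is a small Python paraphrase (a sketch only; representing each WAL as a plain list of entries is an assumption for illustration):

    def log_safety(wals, commit_lsns):
        """wals: acceptor -> list of entries; commit_lsns: acceptor -> int."""
        for a1 in wals:
            for a2 in wals:
                common = min(commit_lsns[a1], commit_lsns[a2])
                # Logs must agree on the prefix both acceptors consider committed.
                if wals[a1][:common] != wals[a2][:common]:
                    return False
        return True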

View File

@@ -6,9 +6,9 @@ use clap::{App, Arg};
use daemonize::Daemonize;
use log::*;
use slog::Drain;
use std::io;
use std::path::{Path, PathBuf};
use std::thread;
use std::{env, io};
use std::{fs::File, fs::OpenOptions};
use walkeeper::s3_offload;
@@ -74,6 +74,7 @@ fn main() -> Result<()> {
listen_addr: "localhost:5454".to_string(),
ttl: None,
recall_period: None,
pageserver_auth_token: env::var("PAGESERVER_AUTH_TOKEN").ok(),
};
if let Some(dir) = arg_matches.value_of("datadir") {

View File

@@ -16,6 +16,8 @@ pub struct WalAcceptorConf {
pub no_sync: bool,
pub listen_addr: String,
pub pageserver_addr: Option<String>,
// TODO (create issue) this is temporary, until protocol between PG<->SK<->PS rework
pub pageserver_auth_token: Option<String>,
pub ttl: Option<Duration>,
pub recall_period: Option<Duration>,
}

View File

@@ -1,29 +1,33 @@
//! This implements the Safekeeper protocol.
//! This implements the Safekeeper protocol, picking up immediately after the "START_WAL_PUSH" message
//!
//! FIXME: better description needed here
use anyhow::{bail, Result};
use bincode::config::Options;
use bytes::{Buf, Bytes};
use log::*;
use postgres::{Client, Config, NoTls};
use serde::{Deserialize, Serialize};
use std::cmp::{max, min};
use std::fs::{self, File, OpenOptions};
use std::io::{BufReader, Read, Seek, SeekFrom, Write};
use std::net::{SocketAddr, TcpStream};
use std::io::{Read, Seek, SeekFrom, Write};
use std::net::SocketAddr;
use std::str;
use std::sync::Arc;
use std::thread;
use std::thread::sleep;
use zenith_utils::bin_ser::LeSer;
use zenith_utils::bin_ser::{self, le_coder, LeSer};
use zenith_utils::connstring::connection_host_port;
use zenith_utils::lsn::Lsn;
use zenith_utils::postgres_backend::PostgresBackend;
use zenith_utils::pq_proto::{BeMessage, FeMessage, SystemId};
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use crate::replication::HotStandbyFeedback;
use crate::send_wal::SendWalHandler;
use crate::timeline::{Timeline, TimelineTools};
use crate::WalAcceptorConf;
use pageserver::ZTimelineId;
use postgres_ffi::xlog_utils::{TimeLineID, XLogFileName, MAX_SEND_SIZE, XLOG_BLCKSZ};
use zenith_utils::pq_proto::SystemId;
pub const SK_MAGIC: u32 = 0xcafeceefu32;
pub const SK_FORMAT_VERSION: u32 = 1;
@@ -52,6 +56,7 @@ pub struct ServerInfo {
pub wal_end: Lsn,
pub timeline: TimeLineID,
pub wal_seg_size: u32,
pub tenant_id: ZTenantId,
}
/// Vote request sent from proposer to safekeepers
@@ -101,6 +106,7 @@ impl SafeKeeperInfo {
wal_end: Lsn(0),
timeline: 0,
wal_seg_size: 0,
tenant_id: ZTenantId::from([0u8; 16]),
},
commit_lsn: Lsn(0), /* part of WAL acknowledged by quorum */
flush_lsn: Lsn(0), /* locally flushed part of WAL */
@@ -132,35 +138,34 @@ struct SafeKeeperResponse {
hs_feedback: HotStandbyFeedback,
}
#[derive(Debug)]
pub struct ReceiveWalConn {
pub timeline: Option<Arc<Timeline>>,
/// Postgres connection, buffered input
pub stream_in: BufReader<TcpStream>,
/// Postgres connection, output
pub stream_out: TcpStream,
/// The cached result of socket.peer_addr()
pub peer_addr: SocketAddr,
/// wal acceptor configuration
pub conf: WalAcceptorConf,
pub struct ReceiveWalConn<'pg> {
/// Postgres connection
pg_backend: &'pg mut PostgresBackend,
/// The cached result of `pg_backend.socket().peer_addr()` (roughly)
peer_addr: SocketAddr,
}
///
/// Periodically request the pageserver to call back.
/// If the pageserver already has a replication channel, it will just ignore this request.
///
fn request_callback(conf: WalAcceptorConf, timelineid: ZTimelineId) {
fn request_callback(conf: WalAcceptorConf, timelineid: ZTimelineId, tenantid: ZTenantId) {
let ps_addr = conf.pageserver_addr.unwrap();
let ps_connstr = format!("postgresql://no_user@{}/no_db", ps_addr);
let ps_connstr = format!(
"postgresql://no_user:{}@{}/no_db",
&conf.pageserver_auth_token.unwrap_or_default(),
ps_addr
);
// use Config parsing because SockAddr parsing doesn't allow using host names instead of IP addresses
let me_connstr = format!("postgresql://no_user@{}/no_db", conf.listen_addr);
let me_conf: Config = me_connstr.parse().unwrap();
let (host, port) = connection_host_port(&me_conf);
let callme = format!(
"callmemaybe {} host={} port={} options='-c ztimelineid={}'",
timelineid, host, port, timelineid
"callmemaybe {} {} host={} port={} options='-c ztimelineid={}'",
tenantid, timelineid, host, port, timelineid,
);
loop {
info!(
"requesting page server to connect to us: start {} {}",
@@ -183,37 +188,64 @@ fn request_callback(conf: WalAcceptorConf, timelineid: ZTimelineId) {
}
}
impl ReceiveWalConn {
pub fn new(socket: TcpStream, conf: WalAcceptorConf) -> Result<ReceiveWalConn> {
let peer_addr = socket.peer_addr()?;
let conn = ReceiveWalConn {
timeline: None,
stream_in: BufReader::new(socket.try_clone()?),
stream_out: socket,
impl<'pg> ReceiveWalConn<'pg> {
pub fn new(pg: &'pg mut PostgresBackend) -> Result<ReceiveWalConn<'pg>> {
let peer_addr = pg.get_peer_addr()?;
Ok(ReceiveWalConn {
pg_backend: pg,
peer_addr,
conf,
};
Ok(conn)
})
}
fn read_req<T: LeSer>(&mut self) -> Result<T> {
// As the trait bound implies, this always encodes little-endian.
Ok(T::des_from(&mut self.stream_in)?)
// Read and extract the bytes of a `CopyData` message from the postgres instance
fn read_msg_bytes(&mut self) -> Result<Bytes> {
match self.pg_backend.read_message()? {
Some(FeMessage::CopyData(bytes)) => Ok(bytes),
Some(msg) => bail!("expected `CopyData` message, found {:?}", msg),
None => bail!("connection closed unexpectedly"),
}
}
// Read the result of a `CopyData` message sent from the postgres instance
//
// As the trait bound implies, this always encodes little-endian.
fn read_msg<T: LeSer>(&mut self) -> Result<T> {
let data = self.read_msg_bytes()?;
// Taken directly from `LeSer::des`:
let value = le_coder()
.reject_trailing_bytes()
.deserialize(&data)
.or(Err(bin_ser::DeserializeError::BadInput))?;
Ok(value)
}
// Writes the value into a `CopyData` message sent to the postgres instance
fn write_msg<T: LeSer>(&mut self, value: &T) -> Result<()> {
let mut buf = Vec::new();
value.ser_into(&mut buf)?;
self.pg_backend.write_message(&BeMessage::CopyData(&buf))?;
Ok(())
}
/// Receive WAL from wal_proposer
pub fn run(&mut self) -> Result<()> {
pub fn run(&mut self, swh: &mut SendWalHandler) -> Result<()> {
let mut this_timeline: Option<Arc<Timeline>> = None;
// Notify the libpq client that it's allowed to send `CopyData` messages
self.pg_backend
.write_message(&BeMessage::CopyBothResponse)?;
// Receive information about server
let server_info = self.read_req::<ServerInfo>()?;
let server_info = self.read_msg::<ServerInfo>()?;
info!(
"Start handshake with wal_proposer {} sysid {} timeline {}",
self.peer_addr, server_info.system_id, server_info.timeline_id,
"Start handshake with wal_proposer {} sysid {} timeline {} tenant {}",
self.peer_addr, server_info.system_id, server_info.timeline_id, server_info.tenant_id,
);
// FIXME: also check that the system identifier matches
self.timeline.set(server_info.timeline_id)?;
self.timeline.get().load_control_file(&self.conf)?;
this_timeline.set(server_info.timeline_id)?;
this_timeline.get().load_control_file(&swh.conf)?;
let mut my_info = self.timeline.get().get_info();
let mut my_info = this_timeline.get().get_info();
/* Check protocol compatibility */
if server_info.protocol_version != SK_PROTOCOL_VERSION {
@@ -239,19 +271,24 @@ impl ReceiveWalConn {
my_info.server.node_id = node_id;
/* Calculate WAL end based on local data */
let (flush_lsn, timeline) = self.timeline.find_end_of_wal(&self.conf.data_dir, true);
let (flush_lsn, timeline_id) = this_timeline.find_end_of_wal(&swh.conf.data_dir, true);
my_info.flush_lsn = flush_lsn;
my_info.server.timeline = timeline;
my_info.server.timeline = timeline_id;
info!(
"find_end_of_wal in {:?}: timeline={} flush_lsn={}",
&swh.conf.data_dir, timeline_id, flush_lsn
);
/* Report my identifier to proposer */
my_info.ser_into(&mut self.stream_out)?;
self.write_msg(&my_info)?;
/* Wait for vote request */
let prop = self.read_req::<RequestVote>()?;
let prop = self.read_msg::<RequestVote>()?;
/* This is a Paxos check which should ensure that only one master can perform commits */
if prop.node_id < my_info.server.node_id {
/* Send my node-id to inform the proposer that its candidate was rejected */
my_info.server.node_id.ser_into(&mut self.stream_out)?;
self.write_msg(&my_info.server.node_id)?;
bail!(
"Reject connection attempt with term {} because my term is {}",
prop.node_id.term,
@@ -259,29 +296,30 @@ impl ReceiveWalConn {
);
}
my_info.server.node_id = prop.node_id;
self.timeline.get().set_info(&my_info);
this_timeline.get().set_info(&my_info);
/* Need to persist our vote first */
self.timeline.get().save_control_file(true)?;
this_timeline.get().save_control_file(true)?;
let mut flushed_restart_lsn = Lsn(0);
let wal_seg_size = server_info.wal_seg_size as usize;
/* Acknowledge the proposed candidate by returning it to the proposer */
prop.node_id.ser_into(&mut self.stream_out)?;
self.write_msg(&prop.node_id)?;
if self.conf.pageserver_addr.is_some() {
if swh.conf.pageserver_addr.is_some() {
// Need to establish replication channel with page server.
// As replication in postgres is initiated by the receiver, we should use the callme mechanism
let conf = self.conf.clone();
let timelineid = self.timeline.get().timelineid;
let conf = swh.conf.clone();
let timelineid = this_timeline.get().timelineid;
let tenantid = server_info.tenant_id;
thread::spawn(move || {
request_callback(conf, timelineid);
request_callback(conf, timelineid, tenantid);
});
}
info!(
"Start streaming from timeline {} address {:?}",
server_info.timeline_id, self.peer_addr,
"Start streaming from timeline {} tenant {} address {:?} flush_lsn={}",
server_info.timeline_id, server_info.tenant_id, self.peer_addr, my_info.flush_lsn
);
// Main loop
@@ -289,7 +327,10 @@ impl ReceiveWalConn {
let mut sync_control_file = false;
/* Receive message header */
let req = self.read_req::<SafeKeeperRequest>()?;
let msg_bytes = self.read_msg_bytes()?;
let mut msg_reader = msg_bytes.reader();
let req = SafeKeeperRequest::des_from(&mut msg_reader)?;
if req.sender_id != my_info.server.node_id {
bail!("Sender NodeId is changed");
}
@@ -307,12 +348,20 @@ impl ReceiveWalConn {
rec_size, start_pos, end_pos,
);
/* Receive message body */
let mut inbuf = vec![0u8; rec_size];
self.stream_in.read_exact(&mut inbuf)?;
/* Receive message body (from the rest of the message) */
let mut buf = Vec::with_capacity(rec_size);
msg_reader.read_to_end(&mut buf)?;
assert_eq!(buf.len(), rec_size);
/* Save message in file */
self.write_wal_file(start_pos, timeline, wal_seg_size, &inbuf)?;
Self::write_wal_file(
swh,
start_pos,
timeline_id,
this_timeline.get(),
wal_seg_size,
&buf,
)?;
my_info.restart_lsn = req.restart_lsn;
my_info.commit_lsn = req.commit_lsn;
@@ -337,7 +386,7 @@ impl ReceiveWalConn {
* when restart_lsn delta exceeds WAL segment size.
*/
sync_control_file |= flushed_restart_lsn + (wal_seg_size as u64) < my_info.restart_lsn;
self.timeline.get().save_control_file(sync_control_file)?;
this_timeline.get().save_control_file(sync_control_file)?;
if sync_control_file {
flushed_restart_lsn = my_info.restart_lsn;
@@ -348,25 +397,27 @@ impl ReceiveWalConn {
let resp = SafeKeeperResponse {
epoch: my_info.epoch,
flush_lsn: end_pos,
hs_feedback: self.timeline.get().get_hs_feedback(),
hs_feedback: this_timeline.get().get_hs_feedback(),
};
resp.ser_into(&mut self.stream_out)?;
self.write_msg(&resp)?;
/*
* Ping wal sender that new data is available.
* FlushLSN (end_pos) can be smaller than commitLSN in case we are a catching-up safekeeper.
*/
self.timeline
this_timeline
.get()
.notify_wal_senders(min(req.commit_lsn, end_pos));
}
Ok(())
}
fn write_wal_file(
&self,
swh: &SendWalHandler,
startpos: Lsn,
timeline: TimeLineID,
timeline_id: TimeLineID,
timeline: &Arc<Timeline>,
wal_seg_size: usize,
buf: &[u8],
) -> Result<()> {
@@ -394,16 +445,16 @@ impl ReceiveWalConn {
/* Open file */
let segno = start_pos.segment_number(wal_seg_size);
let wal_file_name = XLogFileName(timeline, segno, wal_seg_size);
let wal_file_path = self
let wal_file_name = XLogFileName(timeline_id, segno, wal_seg_size);
let wal_file_path = swh
.conf
.data_dir
.join(self.timeline.get().timelineid.to_string())
.join(timeline.timelineid.to_string())
.join(wal_file_name.clone());
let wal_file_partial_path = self
let wal_file_partial_path = swh
.conf
.data_dir
.join(self.timeline.get().timelineid.to_string())
.join(timeline.timelineid.to_string())
.join(wal_file_name.clone() + ".partial");
{
@@ -441,7 +492,7 @@ impl ReceiveWalConn {
wal_file.write_all(&buf[bytes_written..(bytes_written + bytes_to_write)])?;
// Flush the file, if not prohibited
if !self.conf.no_sync {
if !swh.conf.no_sync {
wal_file.sync_all()?;
}
}

View File

@@ -1,5 +1,5 @@
//! This module implements the streaming side of replication protocol, starting
//! with the "START REPLICATION" message.
//! with the "START_REPLICATION" message.
use crate::send_wal::SendWalHandler;
use crate::timeline::{Timeline, TimelineTools};
@@ -42,6 +42,19 @@ pub struct ReplicationConn {
stream_in: Option<BufReader<TcpStream>>,
}
// TODO: move this to crate::timeline when there's more users
// TODO: design a proper Timeline mock api
trait HsFeedbackSubscriber {
fn add_hs_feedback(&self, _feedback: HotStandbyFeedback) {}
}
impl HsFeedbackSubscriber for Arc<Timeline> {
#[inline(always)]
fn add_hs_feedback(&self, feedback: HotStandbyFeedback) {
Timeline::add_hs_feedback(self, feedback);
}
}
impl ReplicationConn {
/// Create a new `ReplicationConn`
pub fn new(pgb: &mut PostgresBackend) -> Self {
@@ -51,23 +64,26 @@ impl ReplicationConn {
}
/// Handle incoming messages from the network.
///
/// This is spawned into the background by `handle_start_replication`.
///
fn background_thread(mut stream_in: impl Read, timeline: Arc<Timeline>) -> Result<()> {
fn background_thread(
mut stream_in: impl Read,
subscriber: impl HsFeedbackSubscriber,
) -> Result<()> {
// Wait for replica's feedback.
// We only handle `CopyData` messages. Anything else is ignored.
loop {
match FeMessage::read(&mut stream_in)? {
Some(FeMessage::CopyData(m)) => {
while let Some(msg) = FeMessage::read(&mut stream_in)? {
match msg {
FeMessage::CopyData(m) => {
let feedback = HotStandbyFeedback::des(&m)?;
timeline.add_hs_feedback(feedback)
subscriber.add_hs_feedback(feedback);
}
msg => {
_ => {
// We only handle `CopyData` messages. Anything else is ignored.
info!("unexpected message {:?}", msg);
}
}
}
Ok(())
}
/// Helper function that parses a pair of LSNs.
@@ -132,7 +148,7 @@ impl ReplicationConn {
break;
}
}
let (wal_end, timeline) = swh.timeline.find_end_of_wal(&swh.conf.data_dir, false);
let (wal_end, timeline) = swh.timeline.find_end_of_wal(&swh.conf.data_dir, true);
if start_pos == Lsn(0) {
start_pos = wal_end;
}
@@ -212,3 +228,18 @@ impl ReplicationConn {
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
// A no-op impl for tests
impl HsFeedbackSubscriber for () {}
#[test]
fn test_replication_conn_background_thread_eof() {
// Test that background_thread recognizes EOF
let stream: &[u8] = &[];
ReplicationConn::background_thread(stream, ()).unwrap();
}
}

View File

@@ -2,17 +2,18 @@
//! pageserver/any other consumer.
//!
use crate::receive_wal::ReceiveWalConn;
use crate::replication::ReplicationConn;
use crate::timeline::{Timeline, TimelineTools};
use crate::WalAcceptorConf;
use anyhow::{bail, Result};
use bytes::Bytes;
use pageserver::ZTimelineId;
use std::str::FromStr;
use std::sync::Arc;
use zenith_utils::postgres_backend;
use zenith_utils::postgres_backend::PostgresBackend;
use zenith_utils::pq_proto::{BeMessage, FeStartupMessage, RowDescriptor};
use zenith_utils::zid::ZTimelineId;
/// Handler for streaming WAL from acceptor
pub struct SendWalHandler {
@@ -45,6 +46,9 @@ impl postgres_backend::Handler for SendWalHandler {
} else if query_string.starts_with(b"START_REPLICATION") {
ReplicationConn::new(pgb).run(self, pgb, &query_string)?;
Ok(())
} else if query_string.starts_with(b"START_WAL_PUSH") {
ReceiveWalConn::new(pgb)?.run(self)?;
Ok(())
} else {
bail!("Unexpected command {:?}", query_string);
}

View File

@@ -5,7 +5,6 @@ use anyhow::{bail, Result};
use fs2::FileExt;
use lazy_static::lazy_static;
use log::*;
use pageserver::ZTimelineId;
use postgres_ffi::xlog_utils::{find_end_of_wal, TimeLineID};
use std::cmp::{max, min};
use std::collections::HashMap;
@@ -15,6 +14,7 @@ use std::path::Path;
use std::sync::{Arc, Condvar, Mutex};
use zenith_utils::bin_ser::LeSer;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTimelineId;
use crate::receive_wal::{SafeKeeperInfo, CONTROL_FILE_NAME, SK_FORMAT_VERSION, SK_MAGIC};
use crate::replication::{HotStandbyFeedback, END_REPLICATION_MARKER};
@@ -232,7 +232,8 @@ impl TimelineTools for Option<Arc<Timeline>> {
/// Find last WAL record. If "precise" is false then just locate last partial segment
fn find_end_of_wal(&self, data_dir: &Path, precise: bool) -> (Lsn, TimeLineID) {
let seg_size = self.get().get_info().server.wal_seg_size as usize;
let (lsn, timeline) = find_end_of_wal(data_dir, seg_size, precise);
let wal_dir = data_dir.join(format!("{}", self.get().timelineid));
let (lsn, timeline) = find_end_of_wal(&wal_dir, seg_size, precise);
(Lsn(lsn), timeline)
}
}

View File

@@ -4,11 +4,9 @@
//!
use anyhow::Result;
use log::*;
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::thread;
use crate::receive_wal::ReceiveWalConn;
use crate::send_wal::SendWalHandler;
use crate::WalAcceptorConf;
use zenith_utils::postgres_backend::{AuthType, PostgresBackend};
@@ -37,35 +35,15 @@ pub fn thread_main(conf: WalAcceptorConf) -> Result<()> {
}
}
/// This is run by main_loop, inside a background thread.
/// This is run by `thread_main` above, inside a background thread.
///
fn handle_socket(mut socket: TcpStream, conf: WalAcceptorConf) -> Result<()> {
fn handle_socket(socket: TcpStream, conf: WalAcceptorConf) -> Result<()> {
socket.set_nodelay(true)?;
// Peek at the incoming data to see what protocol is being sent.
let peeked = peek_u32(&mut socket)?;
if peeked == 0 {
// Consume the 4 bytes we peeked at. This protocol begins after them.
socket.read_exact(&mut [0u8; 4])?;
ReceiveWalConn::new(socket, conf)?.run()?; // internal protocol between wal_proposer and wal_acceptor
} else {
let mut conn_handler = SendWalHandler::new(conf);
let mut pgbackend = PostgresBackend::new(socket, AuthType::Trust)?;
// libpq replication protocol between wal_acceptor and replicas/pagers
pgbackend.run(&mut conn_handler)?;
}
let mut conn_handler = SendWalHandler::new(conf);
let pgbackend = PostgresBackend::new(socket, AuthType::Trust)?;
// libpq replication protocol between wal_acceptor and replicas/pagers
pgbackend.run(&mut conn_handler)?;
Ok(())
}
/// Fetch the first 4 bytes from the network (big endian), without consuming them.
///
/// This is used to help determine what protocol the peer is using.
fn peek_u32(stream: &mut TcpStream) -> Result<u32> {
let mut buf = [0u8; 4];
loop {
let num_bytes = stream.peek(&mut buf)?;
if num_bytes == 4 {
return Ok(u32::from_be_bytes(buf));
}
}
}

View File

@@ -12,7 +12,7 @@ anyhow = "1.0"
serde_json = "1"
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
# FIXME: 'pageserver' is needed for ZTimelineId. Refactor
# FIXME: 'pageserver' is needed for BranchInfo. Refactor
pageserver = { path = "../pageserver" }
control_plane = { path = "../control_plane" }
postgres_ffi = { path = "../postgres_ffi" }

View File

@@ -8,8 +8,11 @@ use std::collections::btree_map::Entry;
use std::collections::HashMap;
use std::process::exit;
use std::str::FromStr;
use zenith_utils::auth::{encode_from_key_path, Claims, Scope};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use pageserver::{branches::BranchInfo, ZTimelineId};
use pageserver::branches::BranchInfo;
use zenith_utils::lsn::Lsn;
///
@@ -33,9 +36,15 @@ fn main() -> Result<()> {
let timeline_arg = Arg::with_name("timeline")
.short("n")
.index(1)
.help("timeline name")
.help("Timeline name")
.required(true);
let tenantid_arg = Arg::with_name("tenantid")
.long("tenantid")
.help("Tenant id. Represented as a hexadecimal string 32 symbols length")
.takes_value(true)
.required(false);
let matches = App::new("Zenith CLI")
.setting(AppSettings::ArgRequiredElseHelp)
.subcommand(
@@ -46,13 +55,33 @@ fn main() -> Result<()> {
.long("remote-pageserver")
.required(false)
.value_name("pageserver-url"),
)
.arg(
Arg::with_name("enable-auth")
.long("enable-auth")
.takes_value(false)
.help("Enable authentication using ZenithJWT")
)
.arg(
Arg::with_name("repository-format")
.long("repository-format")
.takes_value(false)
.value_name("repository-format")
.help("Choose repository format, 'layered' or 'rocksdb'")
),
)
.subcommand(
SubCommand::with_name("branch")
.about("Create a new branch")
.arg(Arg::with_name("branchname").required(false).index(1))
.arg(Arg::with_name("start-point").required(false).index(2)),
.arg(Arg::with_name("start-point").required(false).index(2))
.arg(tenantid_arg.clone()),
).subcommand(
SubCommand::with_name("tenant")
.setting(AppSettings::ArgRequiredElseHelp)
.about("Manage tenants")
.subcommand(SubCommand::with_name("list"))
.subcommand(SubCommand::with_name("create").arg(Arg::with_name("tenantid").required(false).index(1)))
)
.subcommand(SubCommand::with_name("status"))
.subcommand(SubCommand::with_name("start").about("Start local pageserver"))
@@ -62,12 +91,23 @@ fn main() -> Result<()> {
SubCommand::with_name("pg")
.setting(AppSettings::ArgRequiredElseHelp)
.about("Manage postgres instances")
.subcommand(SubCommand::with_name("list"))
.subcommand(SubCommand::with_name("create").arg(timeline_arg.clone()))
.subcommand(SubCommand::with_name("start").arg(timeline_arg.clone()))
.subcommand(SubCommand::with_name("list").arg(tenantid_arg.clone()))
.subcommand(SubCommand::with_name("create")
.about("Create a postgres compute node")
.arg(timeline_arg.clone()).arg(tenantid_arg.clone())
.arg(
Arg::with_name("config-only")
.help("Don't do basebackup, create compute node with only config files")
.long("config-only")
.required(false)
))
.subcommand(SubCommand::with_name("start")
.about("Start a postrges compute node.\n This command actually creates new node from scrath, but preserves existing config files")
.arg(timeline_arg.clone()).arg(tenantid_arg.clone()))
.subcommand(
SubCommand::with_name("stop")
.arg(timeline_arg.clone())
.arg(tenantid_arg.clone())
.arg(
Arg::with_name("destroy")
.help("Also delete data directory (now optional, should be default in future)")
@@ -100,9 +140,17 @@ fn main() -> Result<()> {
.get_matches();
// Create config file
if let ("init", Some(sub_args)) = matches.subcommand() {
let pageserver_uri = sub_args.value_of("pageserver-url");
local_env::init(pageserver_uri).with_context(|| "Failed to create config file")?;
if let ("init", Some(init_match)) = matches.subcommand() {
let tenantid = ZTenantId::generate();
let pageserver_uri = init_match.value_of("pageserver-url");
let auth_type = if init_match.is_present("enable-auth") {
AuthType::ZenithJWT
} else {
AuthType::Trust
};
let repository_format = init_match.value_of("repository-format");
local_env::init(pageserver_uri, tenantid, auth_type, repository_format)
.with_context(|| "Failed to create config file")?;
}
// all other commands would need config
@@ -115,13 +163,23 @@ fn main() -> Result<()> {
};
match matches.subcommand() {
("init", Some(_)) => {
("init", Some(init_match)) => {
let pageserver = PageServerNode::from_env(&env);
if let Err(e) = pageserver.init() {
if let Err(e) = pageserver.init(
Some(&env.tenantid.to_string()),
init_match.is_present("enable-auth"),
init_match.value_of("repository-format"),
) {
eprintln!("pageserver init failed: {}", e);
exit(1);
}
}
("tenant", Some(args)) => {
if let Err(e) = handle_tenant(args, &env) {
eprintln!("tenant command failed: {}", e);
exit(1);
}
}
("branch", Some(sub_args)) => {
if let Err(e) = handle_branch(sub_args, &env) {
@@ -308,9 +366,12 @@ fn print_branch(
/// Returns a map of timeline IDs to branch_name@lsn strings.
/// Connects to the pageserver to query this information.
fn get_branch_infos(env: &local_env::LocalEnv) -> Result<HashMap<ZTimelineId, BranchInfo>> {
fn get_branch_infos(
env: &local_env::LocalEnv,
tenantid: &ZTenantId,
) -> Result<HashMap<ZTimelineId, BranchInfo>> {
let page_server = PageServerNode::from_env(env);
let branch_infos: Vec<BranchInfo> = page_server.branches_list()?;
let branch_infos: Vec<BranchInfo> = page_server.branches_list(tenantid)?;
let branch_infos: HashMap<ZTimelineId, BranchInfo> = branch_infos
.into_iter()
.map(|branch_info| (branch_info.timeline_id, branch_info))
@@ -319,23 +380,51 @@ fn get_branch_infos(env: &local_env::LocalEnv) -> Result<HashMap<ZTimelineId, Br
Ok(branch_infos)
}
fn handle_tenant(tenant_match: &ArgMatches, env: &local_env::LocalEnv) -> Result<()> {
let pageserver = PageServerNode::from_env(&env);
match tenant_match.subcommand() {
("list", Some(_)) => {
for tenant in pageserver.tenants_list()? {
println!("{}", tenant);
}
}
("create", Some(create_match)) => {
let tenantid = match create_match.value_of("tenantid") {
Some(tenantid) => ZTenantId::from_str(tenantid)?,
None => ZTenantId::generate(),
};
println!("using tenant id {}", tenantid);
pageserver.tenant_create(&tenantid)?;
println!("tenant successfully created on the pageserver");
}
_ => {}
}
Ok(())
}
fn handle_branch(branch_match: &ArgMatches, env: &local_env::LocalEnv) -> Result<()> {
let pageserver = PageServerNode::from_env(&env);
if let Some(branchname) = branch_match.value_of("branchname") {
if let Some(startpoint_str) = branch_match.value_of("start-point") {
let branch = pageserver.branch_create(branchname, startpoint_str)?;
println!(
"Created branch '{}' at {:?}",
branch.name,
branch.latest_valid_lsn.unwrap_or(Lsn(0))
);
} else {
bail!("Missing start-point");
}
let startpoint_str = branch_match
.value_of("start-point")
.ok_or(anyhow!("Missing start-point"))?;
let tenantid: ZTenantId = branch_match
.value_of("tenantid")
.map_or(Ok(env.tenantid), |value| value.parse())?;
let branch = pageserver.branch_create(branchname, startpoint_str, &tenantid)?;
println!(
"Created branch '{}' at {:?} for tenant: {}",
branch.name,
branch.latest_valid_lsn.unwrap_or(Lsn(0)),
tenantid,
);
} else {
// No arguments, list branches
let branches = pageserver.branches_list()?;
let tenantid: ZTenantId = branch_match
.value_of("tenantid")
.map_or(Ok(env.tenantid), |value| value.parse())?;
// No arguments, list branches for tenant
let branches = pageserver.branches_list(&tenantid)?;
print_branches_tree(branches)?;
}
@@ -346,14 +435,21 @@ fn handle_pg(pg_match: &ArgMatches, env: &local_env::LocalEnv) -> Result<()> {
let mut cplane = ComputeControlPlane::load(env.clone())?;
match pg_match.subcommand() {
("list", Some(_sub_m)) => {
let branch_infos = get_branch_infos(env).unwrap_or_else(|e| {
("list", Some(list_match)) => {
let tenantid: ZTenantId = list_match
.value_of("tenantid")
.map_or(Ok(env.tenantid), |value| value.parse())?;
let branch_infos = get_branch_infos(env, &tenantid).unwrap_or_else(|e| {
eprintln!("Failed to load branch info: {}", e);
HashMap::new()
});
println!("BRANCH\tADDRESS\t\tLSN\t\tSTATUS");
for (timeline_name, node) in cplane.nodes.iter() {
for ((_, timeline_name), node) in cplane
.nodes
.iter()
.filter(|((node_tenantid, _), _)| node_tenantid == &tenantid)
{
println!(
"{}\t{}\t{}\t{}",
timeline_name,
@@ -368,30 +464,48 @@ fn handle_pg(pg_match: &ArgMatches, env: &local_env::LocalEnv) -> Result<()> {
);
}
}
("create", Some(sub_m)) => {
let timeline_name = sub_m.value_of("timeline").unwrap_or("main");
cplane.new_node(timeline_name)?;
}
("start", Some(sub_m)) => {
let timeline_name = sub_m.value_of("timeline").unwrap_or("main");
("create", Some(create_match)) => {
let tenantid: ZTenantId = create_match
.value_of("tenantid")
.map_or(Ok(env.tenantid), |value| value.parse())?;
let timeline_name = create_match.value_of("timeline").unwrap_or("main");
let config_only = create_match.is_present("config-only");
let node = cplane.nodes.get(timeline_name);
cplane.new_node(tenantid, timeline_name, config_only)?;
}
("start", Some(start_match)) => {
let tenantid: ZTenantId = start_match
.value_of("tenantid")
.map_or(Ok(env.tenantid), |value| value.parse())?;
let timeline_name = start_match.value_of("timeline").unwrap_or("main");
let node = cplane.nodes.get(&(tenantid, timeline_name.to_owned()));
let auth_token = if matches!(env.auth_type, AuthType::ZenithJWT) {
let claims = Claims::new(Some(tenantid), Scope::Tenant);
Some(encode_from_key_path(&claims, &env.private_key_path)?)
} else {
None
};
println!("Starting postgres on timeline {}...", timeline_name);
if let Some(node) = node {
node.start()?;
node.start(&auth_token)?;
} else {
let node = cplane.new_node(timeline_name)?;
node.start()?;
let node = cplane.new_node(tenantid, timeline_name, false)?;
node.start(&auth_token)?;
}
}
("stop", Some(sub_m)) => {
let timeline_name = sub_m.value_of("timeline").unwrap_or("main");
let destroy = sub_m.is_present("destroy");
("stop", Some(stop_match)) => {
let timeline_name = stop_match.value_of("timeline").unwrap_or("main");
let destroy = stop_match.is_present("destroy");
let tenantid: ZTenantId = stop_match
.value_of("tenantid")
.map_or(Ok(env.tenantid), |value| value.parse())?;
let node = cplane
.nodes
.get(timeline_name)
.get(&(tenantid, timeline_name.to_owned()))
.ok_or_else(|| anyhow!("postgres {} not found", timeline_name))?;
node.stop(destroy)?;
}

12
zenith_metrics/Cargo.toml Normal file

@@ -0,0 +1,12 @@
[package]
name = "zenith_metrics"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
prometheus = "0.12"
[dev-dependencies]
lazy_static = "1.4.0"

16
zenith_metrics/src/lib.rs Normal file

@@ -0,0 +1,16 @@
//! We re-export these items from the prometheus crate to
//! make sure that we use the same dependency version everywhere.
//! Otherwise, we might not see all metrics registered via
//! the default registry.
pub use prometheus::gather;
pub use prometheus::{exponential_buckets, linear_buckets};
pub use prometheus::{register_histogram, Histogram};
pub use prometheus::{register_histogram_vec, HistogramVec};
pub use prometheus::{register_int_counter, IntCounter};
pub use prometheus::{register_int_counter_vec, IntCounterVec};
pub use prometheus::{register_int_gauge, IntGauge};
pub use prometheus::{register_int_gauge_vec, IntGaugeVec};
pub use prometheus::{Encoder, TextEncoder};
mod wrappers;
pub use wrappers::{CountedReader, CountedWriter};
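Editorial note: the diff itself does not show a consumer of these re-exports, so the following is a minimal sketch of how a crate might register and expose a counter through them. The metric name is hypothetical, and lazy_static is assumed to be a dependency of the consuming crate.

use zenith_metrics::{gather, register_int_counter, Encoder, IntCounter, TextEncoder};

lazy_static::lazy_static! {
    // Hypothetical metric; registered against prometheus's default registry.
    static ref REQUESTS: IntCounter = register_int_counter!(
        "requests_total",
        "Number of requests served"
    )
    .unwrap();
}

fn render_metrics() -> String {
    REQUESTS.inc();
    // Because every crate uses the same re-exported prometheus version,
    // gather() sees everything registered against the default registry.
    let mut buf = Vec::new();
    TextEncoder::new().encode(&gather(), &mut buf).unwrap();
    String::from_utf8(buf).unwrap()
}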

211
zenith_metrics/src/wrappers.rs Normal file

@@ -0,0 +1,211 @@
use std::io::{Read, Result, Write};
/// A wrapper for an object implementing [Read](std::io::Read)
/// which allows a closure to observe the number of bytes read.
/// This is useful in conjunction with metrics (e.g. [IntCounter](crate::IntCounter)).
///
/// Example:
///
/// ```
/// # use std::io::{Result, Read};
/// # use zenith_metrics::{register_int_counter, IntCounter};
/// # use zenith_metrics::CountedReader;
/// #
/// # lazy_static::lazy_static! {
/// # static ref INT_COUNTER: IntCounter = register_int_counter!(
/// # "int_counter",
/// # "let's count something!"
/// # ).unwrap();
/// # }
/// #
/// fn do_some_reads(stream: impl Read, count: usize) -> Result<Vec<u8>> {
/// let mut reader = CountedReader::new(stream, |cnt| {
/// // bump a counter each time we do a read
/// INT_COUNTER.inc_by(cnt as u64);
/// });
///
/// let mut proto_header = [0; 8];
/// reader.read_exact(&mut proto_header)?;
/// assert!(&proto_header == b"deadbeef");
///
/// let mut payload = vec![0; count];
/// reader.read_exact(&mut payload)?;
/// Ok(payload)
/// }
/// ```
///
/// NB: rapid concurrent bumping of an atomic counter might incur
/// a performance penalty. Please make sure to amortize the number
/// of atomic operations by either using [BufReader](std::io::BufReader)
/// or choosing a non-atomic (thread-local) counter.
pub struct CountedReader<'a, T> {
reader: T,
update_counter: Box<dyn FnMut(usize) + Sync + Send + 'a>,
}
impl<'a, T> CountedReader<'a, T> {
pub fn new(reader: T, update_counter: impl FnMut(usize) + Sync + Send + 'a) -> Self {
Self {
reader,
update_counter: Box::new(update_counter),
}
}
/// Get an immutable reference to the underlying [Read](std::io::Read) implementor
pub fn inner(&self) -> &T {
&self.reader
}
/// Get a mutable reference to the underlying [Read](std::io::Read) implementor
pub fn inner_mut(&mut self) -> &mut T {
&mut self.reader
}
/// Consume the wrapper and return the underlying [Read](std::io::Read) implementor
pub fn into_inner(self) -> T {
self.reader
}
}
impl<T: Read> Read for CountedReader<'_, T> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
let count = self.reader.read(buf)?;
(self.update_counter)(count);
Ok(count)
}
}
/// A wrapper for an object implementing [Write](std::io::Write)
/// which allows a closure to observe the number of bytes written.
/// This is useful in conjunction with metrics (e.g. [IntCounter](crate::IntCounter)).
///
/// Example:
///
/// ```
/// # use std::io::{Result, Write};
/// # use zenith_metrics::{register_int_counter, IntCounter};
/// # use zenith_metrics::CountedWriter;
/// #
/// # lazy_static::lazy_static! {
/// # static ref INT_COUNTER: IntCounter = register_int_counter!(
/// # "int_counter",
/// # "let's count something!"
/// # ).unwrap();
/// # }
/// #
/// fn do_some_writes(stream: impl Write, payload: &[u8]) -> Result<()> {
/// let mut writer = CountedWriter::new(stream, |cnt| {
/// // bump a counter each time we do a write
/// INT_COUNTER.inc_by(cnt as u64);
/// });
///
/// let proto_header = b"deadbeef";
/// writer.write_all(proto_header)?;
/// writer.write_all(payload)
/// }
/// ```
///
/// NB: rapid concurrent bumping of an atomic counter might incur
/// a performance penalty. Please make sure to amortize the number
/// of atomic operations by either using [BufWriter](std::io::BufWriter)
/// or choosing a non-atomic (thread-local) counter.
pub struct CountedWriter<'a, T> {
writer: T,
update_counter: Box<dyn FnMut(usize) + Sync + Send + 'a>,
}
impl<'a, T> CountedWriter<'a, T> {
pub fn new(writer: T, update_counter: impl FnMut(usize) + Sync + Send + 'a) -> Self {
Self {
writer,
update_counter: Box::new(update_counter),
}
}
/// Get an immutable reference to the underlying [Write](std::io::Write) implementor
pub fn inner(&self) -> &T {
&self.writer
}
/// Get a mutable reference to the underlying [Write](std::io::Write) implementor
pub fn inner_mut(&mut self) -> &mut T {
&mut self.writer
}
/// Consume the wrapper and return the underlying [Write](std::io::Write) implementor
pub fn into_inner(self) -> T {
self.writer
}
}
impl<T: Write> Write for CountedWriter<'_, T> {
fn write(&mut self, buf: &[u8]) -> Result<usize> {
let count = self.writer.write(buf)?;
(self.update_counter)(count);
Ok(count)
}
fn flush(&mut self) -> Result<()> {
self.writer.flush()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_counted_reader() {
let stream = [0; 16];
let mut total = 0;
let mut reader = CountedReader::new(stream.as_ref(), |cnt| {
total += cnt;
});
let mut buffer = [0; 8];
reader.read_exact(&mut buffer).unwrap();
reader.read_exact(&mut buffer).unwrap();
drop(reader);
assert_eq!(total, stream.len());
}
#[test]
fn test_counted_writer() {
let mut stream = [0; 16];
let mut total = 0;
let mut writer = CountedWriter::new(stream.as_mut(), |cnt| {
total += cnt;
});
let buffer = [0; 8];
writer.write_all(&buffer).unwrap();
writer.write_all(&buffer).unwrap();
drop(writer);
assert_eq!(total, stream.len());
}
// This mimics the constraints of std::thread::spawn
fn assert_send_sync(_x: impl Sync + Send + 'static) {}
#[test]
fn test_send_sync_counted_reader() {
let stream: &[u8] = &[];
let mut reader = CountedReader::new(stream, |_| {});
assert_send_sync(move || {
reader.read_exact(&mut []).unwrap();
});
}
#[test]
fn test_send_sync_counted_writer() {
let stream = Vec::<u8>::new();
let mut writer = CountedWriter::new(stream, |_| {});
assert_send_sync(move || {
writer.write_all(&[]).unwrap();
});
}
}
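Editorial note: the NB in the doc comments above suggests amortizing counter updates via buffering. A minimal sketch of that pattern follows (the helper name is hypothetical): wrap the counted stream in a BufReader so callers can issue many small reads while the closure fires only once per underlying, larger read.

use std::io::{BufReader, Read};
use zenith_metrics::CountedReader;

fn buffered_counted_reader<'a, R: Read>(
    stream: R,
    on_read: impl FnMut(usize) + Sync + Send + 'a,
) -> BufReader<CountedReader<'a, R>> {
    // BufReader performs large reads against the CountedReader, so the
    // on_read closure (and any atomic counter behind it) is bumped rarely.
    BufReader::new(CountedReader::new(stream, on_read))
}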

View File

@@ -6,15 +6,22 @@ edition = "2018"
[dependencies]
anyhow = "1.0"
bytes = "1.0.1"
byteorder = "1.4.3"
log = "0.4.14"
serde = { version = "1.0", features = ["derive"] }
bincode = "1.3"
thiserror = "1.0"
byteorder = "1.4.3"
bytes = "1.0.1"
hyper = { version = "0.14.7", features = ["full"] }
lazy_static = "1.4.0"
log = "0.4.14"
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
serde = { version = "1.0", features = ["derive"] }
thiserror = "1.0"
tokio = { version = "1.5.0", features = ["full"] }
zenith_metrics = { path = "../zenith_metrics" }
workspace_hack = { path = "../workspace_hack" }
rand = "0.8.3"
jsonwebtoken = "7"
hex = { version = "0.4.3", features = ["serde"] }
[dev-dependencies]
hex-literal = "0.3"

104
zenith_utils/src/auth.rs Normal file

@@ -0,0 +1,104 @@
// For details about authentication see docs/authentication.md
// TODO: there are two issues with the jsonwebtoken library for our use case, both of which will be resolved in the next release.
// The first one is that there is no way to disable the expiration claim, but it can be excluded from validation, so we use that as a workaround for now.
// Relevant issue: https://github.com/Keats/jsonwebtoken/issues/190
// The second one is that we wanted to use ed25519 keys, but they are not supported until the next version either, so we go with RSA keys for now.
// Relevant issue: https://github.com/Keats/jsonwebtoken/issues/162
use hex::{self, FromHex};
use serde::de::Error;
use serde::{self, Deserializer, Serializer};
use std::{fs, path::PathBuf};
use anyhow::Result;
use jsonwebtoken::{
decode, encode, Algorithm, DecodingKey, EncodingKey, Header, TokenData, Validation,
};
use serde::{Deserialize, Serialize};
use crate::zid::ZTenantId;
const JWT_ALGORITHM: Algorithm = Algorithm::RS256;
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum Scope {
Tenant,
PageServerApi,
}
pub fn to_hex_option<S>(value: &Option<ZTenantId>, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match value {
Some(tid) => hex::serialize(tid, serializer),
None => Option::serialize(value, serializer),
}
}
fn from_hex_option<'de, D>(deserializer: D) -> Result<Option<ZTenantId>, D::Error>
where
D: Deserializer<'de>,
{
let opt: Option<String> = Option::deserialize(deserializer)?;
match opt {
Some(tid) => Ok(Some(ZTenantId::from_hex(tid).map_err(Error::custom)?)),
None => Ok(None),
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct Claims {
// This custom serialize_with/deserialize_with pair is needed because Option is not transparent to serde.
// The clearest option would be serde(with = "hex"), but that does not work here; for details see https://github.com/serde-rs/serde/issues/1301
#[serde(
default,
skip_serializing_if = "Option::is_none",
serialize_with = "to_hex_option",
deserialize_with = "from_hex_option"
)]
pub tenant_id: Option<ZTenantId>,
pub scope: Scope,
}
impl Claims {
pub fn new(tenant_id: Option<ZTenantId>, scope: Scope) -> Self {
Self { tenant_id, scope }
}
}
#[derive(Debug)]
pub struct JwtAuth {
decoding_key: DecodingKey<'static>,
validation: Validation,
}
impl JwtAuth {
pub fn new<'a>(decoding_key: DecodingKey<'a>) -> Self {
Self {
decoding_key: decoding_key.into_static(),
validation: Validation {
algorithms: vec![JWT_ALGORITHM],
validate_exp: false,
..Default::default()
},
}
}
pub fn from_key_path(key_path: &PathBuf) -> Result<Self> {
let public_key = fs::read_to_string(key_path)?;
Ok(Self::new(DecodingKey::from_rsa_pem(public_key.as_bytes())?))
}
pub fn decode(&self, token: &str) -> Result<TokenData<Claims>> {
Ok(decode(token, &self.decoding_key, &self.validation)?)
}
}
// This function is used only for testing purposes in the CLI, e.g. to generate tokens during init.
pub fn encode_from_key_path(claims: &Claims, key_path: &PathBuf) -> Result<String> {
let key_data = fs::read_to_string(key_path)?;
let key = EncodingKey::from_rsa_pem(&key_data.as_bytes())?;
Ok(encode(&Header::new(JWT_ALGORITHM), claims, &key)?)
}
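Editorial note: a minimal sketch of the intended token round trip, assuming this module is exposed as zenith_utils::auth and that ZTenantId lives in zenith_utils::zid (as the `use crate::zid::ZTenantId` above suggests).

use std::path::PathBuf;

use anyhow::Result;
use zenith_utils::auth::{encode_from_key_path, Claims, JwtAuth, Scope};
use zenith_utils::zid::ZTenantId;

// Issue a tenant-scoped token with the private key (as the CLI does for `pg start`)
// and validate it against the matching public key (as the pageserver would).
fn token_round_trip(private_key: &PathBuf, public_key: &PathBuf) -> Result<()> {
    let claims = Claims::new(Some(ZTenantId::generate()), Scope::Tenant);
    let token = encode_from_key_path(&claims, private_key)?;

    let auth = JwtAuth::from_key_path(public_key)?;
    let decoded = auth.decode(&token)?;
    assert!(matches!(decoded.claims.scope, Scope::Tenant));
    Ok(())
}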

Some files were not shown because too many files have changed in this diff.