Compare commits

458 Commits

Author SHA1 Message Date
Alexey Kondratov
b1df043794 Do cargo audit in CI 2021-08-16 19:06:43 +03:00
Heikki Linnakangas
7ee8de3725 Add metrics to WAL redo.
Track the time spent on replaying WAL records by the special Postgres
process, the time spent waiting for access to the Postgres process (since
there is only one per tenant), and the number of records replayed.
2021-08-16 15:49:17 +03:00
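The commit above describes three metrics. As a rough illustration of how such counters could be registered with the `prometheus` crate, here is a minimal sketch; the metric names and the `replay_records` helper are assumptions, not the pageserver's actual code:

```rust
use lazy_static::lazy_static;
use prometheus::{register_histogram, register_int_counter, Histogram, IntCounter};

lazy_static! {
    // Wall-clock time spent replaying records inside the Postgres WAL-redo process.
    static ref WAL_REDO_SECONDS: Histogram =
        register_histogram!("wal_redo_seconds", "Time spent in WAL redo").unwrap();
    // Time spent waiting for the single per-tenant redo process to become free.
    static ref WAL_REDO_WAIT_SECONDS: Histogram =
        register_histogram!("wal_redo_wait_seconds", "Time waiting for WAL redo").unwrap();
    // Total number of WAL records replayed.
    static ref WAL_REDO_RECORDS: IntCounter =
        register_int_counter!("wal_redo_records_total", "Number of WAL records replayed").unwrap();
}

fn replay_records(records: &[Vec<u8>]) {
    let wait_timer = WAL_REDO_WAIT_SECONDS.start_timer();
    // ... acquire the per-tenant redo process here ...
    wait_timer.observe_duration();

    let _redo_timer = WAL_REDO_SECONDS.start_timer(); // records its duration when dropped
    WAL_REDO_RECORDS.inc_by(records.len() as u64);
    // ... feed `records` to the redo process and read back the page image ...
}

fn main() {
    replay_records(&[vec![0u8; 8]]);
}
```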
Heikki Linnakangas
047a05efb2 Minor formatting and comment fixes. 2021-08-16 15:48:59 +03:00
Dmitry Rodionov
0c4ab80eac try to be more intelligent in WalAcceptor.start, added a bunch of typing sugar to wal acceptor fixtures 2021-08-16 14:27:44 +03:00
Heikki Linnakangas
2450f82de5 Introduce a new "layered" repository implementation.
This replaces the RocksDB based implementation with an approach using
"snapshot files" on disk, and in-memory btreemaps to hold the recent
changes.

This makes the repository implementation a configuration option. You can
choose 'layered' or 'rocksdb' with "zenith init --repository-format=<format>".
The unit tests have been refactored to exercise both implementations.
'layered' is now the default.

Push/pull is not implemented. The 'test_history_inmemory' test has been
commented out accordingly. It's not clear how we will implement that
functionality; probably by copying the snapshot files directly.
2021-08-16 10:06:48 +03:00
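A hedged sketch of what the repository-format switch could look like; the `RepositoryFormat` enum and the parsing helper below are illustrative, not the actual pageserver code:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum RepositoryFormat {
    Layered, // snapshot files on disk plus in-memory BTreeMaps (the new default)
    RocksDb, // the previous RocksDB-based implementation
}

// Parse the value given to `zenith init --repository-format=<format>`.
fn parse_repository_format(arg: &str) -> Result<RepositoryFormat, String> {
    match arg {
        "layered" => Ok(RepositoryFormat::Layered),
        "rocksdb" => Ok(RepositoryFormat::RocksDb),
        other => Err(format!("unknown repository format: {}", other)),
    }
}

fn main() {
    assert_eq!(parse_repository_format("layered"), Ok(RepositoryFormat::Layered));
    assert_eq!(parse_repository_format("rocksdb"), Ok(RepositoryFormat::RocksDb));
    assert!(parse_repository_format("csv").is_err());
}
```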
Max Sharnoff
5eb1738e8b Rework walkeeper protocol to use libpq (#366)
Most of the work here was done on the postgres side. There's more
information in the commit message there.
 (see: 04cfa326a5)

On the WAL acceptor side, we're now expecting 'START_WAL_PUSH' to
initialize the WAL keeper protocol. Everything else is mostly the same,
with the only real difference being that protocol messages are now
discrete CopyData messages sent over the postgres protocol.

For the sake of documentation, the full set of these messages is:

  <- recv: START_WAL_PUSH query
  <- recv: server info from postgres   (type `ServerInfo`)
  -> send: walkeeper info              (type `SafeKeeperInfo`)
  <- recv: vote info                   (type `RequestVote`)

  if node id mismatch:
    -> send: self node id (type `NodeId`); exit

  -> send: confirm vote (with node id) (type `NodeId`)

  loop:
    <- recv: info and maybe WAL block  (type `SafeKeeperRequest` + bytes)
         (break loop if done)
    -> send: confirm receipt           (type `SafeKeeperResponse`)
2021-08-13 11:25:16 -07:00
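Sketched in Rust, the acceptor-side state machine for this exchange might look roughly like the code below; the `Msg`/`Reply` enums and the `recv`/`send` closures are simplified stand-ins for the real message types and the CopyData framing:

```rust
// Self-contained sketch of the safekeeper side of one START_WAL_PUSH session.
// Real messages carry many more fields; `recv`/`send` stand in for CopyData framing.
#[derive(Debug)]
enum Msg {
    ServerInfo,                           // <- server info from postgres
    RequestVote { node_id: u64 },         // <- vote info
    Request { done: bool, wal: Vec<u8> }, // <- info and maybe a WAL block
}

enum Reply {
    SafeKeeperInfo, // -> walkeeper info
    NodeId(u64),    // -> confirm vote (or announce own id on mismatch)
    Response,       // -> confirm receipt
}

fn run_session(my_node_id: u64, mut recv: impl FnMut() -> Msg, mut send: impl FnMut(Reply)) {
    match recv() {
        Msg::ServerInfo => send(Reply::SafeKeeperInfo),
        other => panic!("expected ServerInfo, got {:?}", other),
    }
    match recv() {
        Msg::RequestVote { node_id } => {
            send(Reply::NodeId(my_node_id));
            if node_id != my_node_id {
                return; // node id mismatch: exit after sending our own id
            }
        }
        other => panic!("expected RequestVote, got {:?}", other),
    }
    loop {
        match recv() {
            Msg::Request { done: true, .. } => break,
            Msg::Request { wal, .. } => {
                let _ = wal; // ... append the WAL block to the local segment here ...
                send(Reply::Response);
            }
            other => panic!("unexpected message: {:?}", other),
        }
    }
}
```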
Heikki Linnakangas
6e22a8f709 Refactor WAL redo to not use a separate thread.
My main motivation is to make it easier to attribute time spent in WAL
redo to the request that needed the WAL redo. With this patch, the WAL
redo is performed by the requester thread, so it shows up in stack traces
and in 'perf' report as part of the requester's call stack. This is also
slightly simpler (less lines of code) and should be a bit faster too.
2021-08-13 17:23:36 +03:00
Heikki Linnakangas
f8de71eab0 Update vendor/postgres to fix race condition leading to CRC errors.
Fixes https://github.com/zenithdb/zenith/issues/413
2021-08-13 14:02:26 +03:00
Heikki Linnakangas
8517d9696d Move gc_iteration() function to Repository trait.
The upcoming layered storage implementation handles GC as a
repository-wide operation because it needs to pay attention to the branch
points of all timelines.
2021-08-12 23:46:01 +03:00
Heikki Linnakangas
97f9021c88 Fix JWT token encoding issue in test.
On my laptop, the server was receiving the token as a string with extra
b'...' escaping, e.g as "b'eyJ0....0ifQA'" instead of just "eyJ0....0ifQA".
That was causing the test to fail.

I'm using Python 3.9, while the CI is using Python 3.8. I suspect that's
why. My version of pyjwt might be different too.

See also https://github.com/jpadilla/pyjwt/issues/391.
2021-08-12 20:46:14 +03:00
Heikki Linnakangas
0a92b31496 If a pg_regress test fails in CI, save regression.diffs 2021-08-12 18:39:23 +03:00
anastasia
6c3726913f Introduce check for physical relishes.
They represent files and use RelationSizeEntry to track existing and dropped files.
They can be both blocky and non-blocky.
get_relish_size() and get_rel_exists() functions work with physical relishes, not only with blocky ones.
2021-08-12 14:42:21 +03:00
anastasia
1bfade8adc Issue #330. Use put_unlink for twophase relishes.
Follow PostgreSQL logic: remove Twophase files when a prepared transaction is committed/aborted.

Always store Twophase segments as materialized page images (no wal records).
2021-08-12 14:42:21 +03:00
anastasia
4eebe22fbb cargo fmt 2021-08-12 14:42:21 +03:00
Heikki Linnakangas
20d5e757ca Remove now-unused get_next_tag function.
The only caller was removed by commit c99a211b01.
2021-08-11 22:16:38 +03:00
Heikki Linnakangas
70cb399d59 Add convenience function to create a RowDescriptor message for an int8 col.
Makes the code to construct a result set a bit more terse and readable.
2021-08-11 20:17:33 +03:00
Dmitry Rodionov
ce5333656f Introduce authentication v0.1.
Current state of authentication:
The page server validates the JWT token passed as a password during the
connection phase; later, when performing an action such as branch creation,
the tenant parameter of the operation is validated to match the one
submitted in the token. To allow access from the console there is a
dedicated scope, PageServerApi, which allows access to all tenants. See
PageServerHandler::check_permission for the access validation code.

Because we are in the middle of refactoring the communication layer
involving the WAL proposer protocol and safekeeper<->pageserver, the
safekeeper does not yet check the token passed from compute, and uses a
"hardcoded" token passed via an environment variable to communicate with
the pageserver.

Compute postgres now takes the token from an environment variable and
passes it as the password field in the pageserver connection. It is not
passed through settings, because then the user would be able to retrieve
it using pg_settings or SHOW ..

I've added a basic test in test_auth.py. After we add authentication to
the remaining network paths, we should probably enable it by default and
switch all existing tests to use it.
2021-08-11 20:05:54 +03:00
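A minimal sketch of the kind of check described above, assuming a simplified claims shape; the real PageServerHandler::check_permission differs in detail:

```rust
// Simplified model of the token claims: either a single-tenant scope, or the
// console-facing PageServerApi scope that may touch any tenant.
#[derive(Debug, PartialEq)]
enum Scope {
    Tenant,
    PageServerApi,
}

struct Claims {
    scope: Scope,
    tenant_id: Option<String>,
}

fn check_permission(claims: &Claims, requested_tenant: Option<&str>) -> Result<(), String> {
    match claims.scope {
        // Console scope: allowed to operate on any tenant.
        Scope::PageServerApi => Ok(()),
        // Tenant scope: the tenant in the request must match the one in the token.
        Scope::Tenant => match (claims.tenant_id.as_deref(), requested_tenant) {
            (Some(t), Some(r)) if t == r => Ok(()),
            _ => Err("tenant id mismatch or missing".to_string()),
        },
    }
}
```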
Arseny Sher
5f0fd093d7 Revert "Walkeeper safe info (#408)"
Temporarily revert commit 0ee2e16b17, as it leads to a
safekeeper state deserialization failure. Let's sort that out and get it back.
2021-08-11 16:26:35 +03:00
Konstantin Knizhnik
0ee2e16b17 Walkeeper safe info (#408)
* Align prev record CRC on 8-bytes boundary

* Update safekeeper in-memory status on receiving a message from WAL proposer
2021-08-11 09:14:05 +03:00
Konstantin Knizhnik
b607f0fd8e Align prev record CRC on 8-bytes boundary (#407) 2021-08-11 08:56:37 +03:00
anastasia
c99a211b01 Fix CLOG truncate handling in case of wraparound. 2021-08-11 05:49:24 +03:00
anastasia
949ac54401 Add test of clog (pg_xact) truncation 2021-08-11 05:49:24 +03:00
anastasia
e406811375 Fixes for handling SLRU relishes:
replace get_tx_status() with self.get_tx_is_in_progress() to handle xacts in truncated SLRU segments correctly
2021-08-11 05:49:24 +03:00
anastasia
590ace104a Fixes for handling SLRU relishes:
- don't return ZERO_PAGE from get_page_at_lsn_nowait() for truncated SLRU segments;
2021-08-11 05:49:24 +03:00
anastasia
e475f82ff1 Rename get_rel_size() to get_relish_size(). Don't bail if the relish is not found; just return None and let the caller decide how to handle it 2021-08-11 05:49:24 +03:00
anastasia
a368642790 cargo fmt 2021-08-10 14:26:52 +03:00
anastasia
8c7983797b Remove unused SLRUTruncate ObjectValue 2021-08-10 14:26:32 +03:00
anastasia
5dd9a66f9e Move postgres backend messages to trace level 2021-08-10 14:26:28 +03:00
anastasia
cc877f1980 Add unit test for find_end_of_wal().
Based on a previous attempt to add the same test by @lubennikovaav.
Now the WAL files are generated by the initdb command.
2021-08-10 12:30:21 +03:00
anastasia
a5d57ca10b list_nonrels() returns elements in arbitrary order.
Remove incorrect comments that say otherwise.
2021-08-06 15:23:46 +03:00
Konstantin Knizhnik
3ca3394170 [refer #395] Check WAL record CRC in waldecoder (#396) 2021-08-05 16:57:57 +03:00
Heikki Linnakangas
e59e0ae2dc Clarify the terms "WAL service", "safekeeper", "proposer" 2021-08-05 10:27:56 +03:00
Stas Kelvich
ec07acfb12 fix typo in run_initdb() 2021-08-04 23:57:17 +03:00
Stas Kelvich
fa04096733 cargo fmt pass 2021-08-04 23:51:02 +03:00
Dmitry Ivanov
754892402c Enable full feature set for hyper in zenith_utils
Server functionality requires not only the "server" feature flag, but
also either "http1" or "http2" (or both). To make things simpler
(and prevent analogous problems), enable all features.
2021-08-04 21:41:17 +03:00
Stas Kelvich
02b9be488b Disable GC test.
The current GC test is flaky and overly strict. Since we are migrating to the layered repo format
with a different GC implementation, let's just silence this test for now.
2021-08-04 18:33:33 +03:00
Arseny Sher
cc3ac2b74c Allow safekeeper to stream till real end of wal.
Otherwise it prematurely terminates, e.g. in test_compute_restart.

ref #388
2021-08-04 18:03:43 +03:00
Arseny Sher
1dc2ae6968 Point vendor/postgres to main. 2021-08-04 14:21:01 +03:00
Stas Kelvich
04ae63a5c4 use proper postgres version 2021-08-04 14:15:07 +03:00
Arseny Sher
b77fade7b8 Look up wal directory properly in all find_end_of_wal callers.
ref #388
2021-08-04 14:15:07 +03:00
Stas Kelvich
56565c0f58 look up WAL in right directory 2021-08-04 14:15:07 +03:00
Dmitry Ivanov
ed634ec320 Extract message processing function from PostgresBackend's event loop
This patch has been extracted from #348, where it became unnecessary
after we had decided that we didn't want to measure anything inside
PostgresBackend.

IMO the change is good enough to make its way into the codebase,
even though it brings nothing "new" to the code.
2021-08-04 10:49:02 +03:00
Alexey Kondratov
bcaa59c0b9 Test compute restart with AND without safekeepers 2021-08-04 00:05:19 +03:00
Dmitry Ivanov
cb1b4a12a6 Add some prometheus metrics to pageserver
The metrics are served by an http endpoint, which
is meant to be spawned in a new thread.

In the future the endpoint will provide more APIs,
but for the time being, we won't bother with proper routing.
2021-08-03 21:42:24 +03:00
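As an illustration of the "metrics endpoint in its own thread" idea, here is a hedged sketch that serves the Prometheus text format over a bare TcpListener; the actual pageserver endpoint uses a proper HTTP stack and different names:

```rust
use std::io::Write;
use std::net::TcpListener;
use std::thread;

use prometheus::{Encoder, TextEncoder};

// Spawn a thread that answers every connection on `addr` with the current
// metrics in the Prometheus text exposition format. No routing: one endpoint only.
fn spawn_metrics_endpoint(addr: &str) -> std::io::Result<thread::JoinHandle<()>> {
    let listener = TcpListener::bind(addr)?;
    Ok(thread::spawn(move || {
        for mut stream in listener.incoming().flatten() {
            let mut body = Vec::new();
            TextEncoder::new()
                .encode(&prometheus::gather(), &mut body)
                .expect("failed to encode metrics");
            let _ = write!(
                stream,
                "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n",
                body.len()
            );
            let _ = stream.write_all(&body);
        }
    }))
}

fn main() -> std::io::Result<()> {
    let handle = spawn_metrics_endpoint("127.0.0.1:9898")?;
    handle.join().expect("metrics thread panicked");
    Ok(())
}
```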
Heikki Linnakangas
9ff122835f Refactor ObjectTags, introducing a new concept called "relish"
This clarifies - I hope - the abstractions between Repository and
ObjectRepository. The ObjectTag struct was a mix of objects that could
be accessed directly through the public Timeline interface, and also
objects that were created and used internally by the ObjectRepository
implementation and not supposed to be accessed directly by the
callers.  With the RelishTag separate from ObjectTag, the distinction
is more clear: RelishTag is used in the public interface, and
ObjectTag is used internally between object_repository.rs and
object_store.rs, and it contains the internal metadata object types.

One awkward thing with the ObjectTag struct was that the Repository
implementation had to distinguish between ObjectTags for relations,
and track the size of the relation, while others were used to store
"blobs".  With the RelishTags, some relishes are considered
"non-blocky", and the Repository implementation is expected to track
their sizes, while others are stored as blobs. I'm not 100% happy with
how RelishTag captures that either: it just knows that some relish
kinds are blocky and some non-blocky, and there's an is_block()
function to check that.  But this does enable size-tracking for SLRUs,
allowing us to treat them more like relations.

This changes the way SLRUs are stored in the repository. Each SLRU
segment, e.g. "pg_clog/0000" or "pg_clog/0001", is now handled as a
separate relish.  This removes the need for the SLRU-specific
put_slru_truncate() function in the Timeline trait. SLRU truncation is
now handled by calling put_unlink() on the segment. This is more in
line with how PostgreSQL stores SLRUs and handles their truncation.

The SLRUs are "blocky", so they are accessed one 8k page at a time,
and repository tracks their size. I considered an alternative design
where we would treat each SLRU segment as non-blocky, and just store
the whole file as one blob. Each SLRU segment is up to 256 kB in size,
which isn't that large, so that might've worked fine, too. One reason
I didn't do that is that it seems better to have the WAL redo
routines be as close as possible to the PostgreSQL routines. It
doesn't matter much in the repository, though; we have to track the
size for relations anyway, so there's not much difference in whether
we also do it for SLRUs.

While working on this, I noticed that the CLOG and MultiXact redo code
did not handle wraparound correctly. We need to fix that, but for now,
I just commented them out with a FIXME comment.
2021-08-03 14:01:05 +03:00
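A condensed sketch of the blocky/non-blocky distinction; the real RelishTag has more variants and fields, and the commit message calls the predicate is_block(), so treat this purely as illustration:

```rust
// "Blocky" relishes are accessed one 8k page at a time and have their size
// tracked by the repository; non-blocky ones are stored as a single blob.
#[derive(Debug, Clone, PartialEq)]
enum RelishTag {
    Relation { spcnode: u32, dbnode: u32, relnode: u32, forknum: u8 },
    Slru { kind: SlruKind, segno: u32 }, // e.g. pg_clog/0000: one relish per segment
    TwoPhase { xid: u32 },               // stored as a materialized blob per prepared xact
}

#[derive(Debug, Clone, PartialEq)]
enum SlruKind {
    Clog,
    MultiXactMembers,
    MultiXactOffsets,
}

impl RelishTag {
    // Called is_block() in the commit message above.
    fn is_blocky(&self) -> bool {
        matches!(self, RelishTag::Relation { .. } | RelishTag::Slru { .. })
    }
}

fn main() {
    let clog_seg = RelishTag::Slru { kind: SlruKind::Clog, segno: 0 };
    assert!(clog_seg.is_blocky());
    assert!(!RelishTag::TwoPhase { xid: 1234 }.is_blocky());
}
```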
Heikki Linnakangas
f0030ae003 Handle SLRU ZERO records directly by storing an all-zeros page image.
It's simpler than storing the original WAL record.
2021-08-03 13:59:51 +03:00
Heikki Linnakangas
acc0f41985 Don't try to launch duplicate WAL redo thread if tenant already exists.
The codepath for tenant_create command first launched the WAL redo
thread, and then called branches::create_repo() which checked if the
tenant's directory already exists. That's problematic, because
launching the WAL redo thread will run initdb if the directory doesn't
already exist. Race condition: If the tenant already exists, it will
have a WAL redo thread already running, and the old and new WAL redo
thread might try to run initdb at the same time, causing all kinds of
weird failures.

The test_pageserver_api test was failing 100% repeatably on my laptop
because of this. I'm not sure why this doesn't occur on the CI:

    Jul 31 18:05:48.877 INFO running initdb in "./tenants/5227e4eb90894775ac6b8a8c76f24b2e/wal-redo-datadir", location: pageserver::walredo, pageserver/src/walredo.rs:483
    thread 'WAL redo thread' panicked at 'initdb failed: The files belonging to this database system will be owned by user "heikki".
    This user must also own the server process.

    The database cluster will be initialized with locale "C".
    The default database encoding has accordingly been set to "SQL_ASCII".
    The default text search configuration will be set to "english".

    Data page checksums are disabled.

    creating directory ./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir ... ok
    creating subdirectories ... ok
    selecting dynamic shared memory implementation ... posix
    selecting default max_connections ... 100
    selecting default shared_buffers ... 128MB
    selecting default time zone ... Europe/Helsinki
    creating configuration files ... ok
    running bootstrap script ...
    stderr:
    2021-07-31 15:05:48.875 GMT [282569] LOG:  could not open configuration file "/home/heikki/git-sandbox/zenith/test_output/test_tenant_list/repo/./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir/postgresql.conf": No such file or directory
    2021-07-31 15:05:48.875 GMT [282569] FATAL:  configuration file "/home/heikki/git-sandbox/zenith/test_output/test_tenant_list/repo/./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir/postgresql.conf" contains errors
    child process exited with exit code 1
    initdb: removing data directory "./tenants/0305b1326f3ea33add0929d516da7cb6/wal-redo-datadir"
2021-07-31 18:13:21 +03:00
Alexey Kondratov
bd7d811921 Add libseccomp-dev as a dep to Dockerfile 2021-07-25 17:46:47 +03:00
anastasia
14b6796915 Send pgdata subdirs with basebackup. Fix for 1e6267a. 2021-07-25 17:46:47 +03:00
Max Sharnoff
3f4815efa2 Correct LeSer doc: "Big Endian" -> "Little Endian" (#362) 2021-07-23 12:38:37 -07:00
anastasia
ec03848d2f Fix pageserver.log destination for zenith init.
The problem was caused by merge conflict in 767590b
2021-07-23 16:22:01 +03:00
anastasia
1e6267a35f Get rid of snapshot directory + related code cleanup and refactoring.
- Add new subdir postgres_ffi/samples/ for config file samples.
- Don't copy wal to the new branch on zenith init or zenith branch.
- Import_timeline_wal on zenith init.
2021-07-23 13:21:45 +03:00
Heikki Linnakangas
47824c5fca Remove page server interactive mode.
It was pretty cool, but no one used it, and it had gotten badly out of
date. The main interesting thing with it was to see some basic metrics
on the fly, while the page server is running, but the metrics collection
had been broken for a long time, too. Best to just remove it.
2021-07-23 12:21:21 +03:00
Dmitry Rodionov
767590bbd5 support tenants
This patch adds support for tenants. It mostly touches the pageserver.
The directory layout on disk is changed to contain a new layer of
indirection: the path to a particular repository now has the structure
<pageserver workdir>/tenants/<tenant id>. A tenant id has the same format
as a timeline id and is included in pageserver commands where needed. New
commands are also available in the pageserver: tenant_list and
tenant_create. This is also reflected in the CLI.
During init a default tenant is created and its id is saved in the CLI
config, so subsequent commands can use it without extra options. The
tenant id is also included in the compute postgres configuration, so it
can be passed via ServerInfo to the safekeeper and in the connection
string to the pageserver.
For more info see docs/multitenancy.md.
2021-07-22 20:54:20 +03:00
Stas Kelvich
d210ba5fdb Update README.md 2021-07-22 20:33:34 +03:00
Dmitry Ivanov
8b656bad5f Add a missing [cfg(test)]
We don't always need to compile tests.
2021-07-22 16:46:27 +03:00
Dmitry Ivanov
97329d4906 Add a test for EOF in walkeeper's background thread
It would be nice to have a proper Timeline mock api,
but this time we'll get by with what we have.
2021-07-22 12:12:55 +03:00
Dmitry Ivanov
6a3b9b1d46 Fix accidental busyloop in walkeeper's background thread
It used to be the case that walkeeper's background thread
failed to recognize the end of stream (EOF) signaled by the
`Ok(None)` result of `FeMessage::read`.
2021-07-22 12:12:55 +03:00
anastasia
c913404739 Redirect log to pageserver.log during zenith init.
Add new module logger.rs that contains shared code to init logging
2021-07-21 18:56:34 +03:00
anastasia
8e42af9b1d Remove unused 'identify_system' pageserver query 2021-07-21 18:55:41 +03:00
Arseny Sher
fe17188464 Alternative way to truncate behind-the-vcl part of log.
This is important to do before bumping the epoch.
2021-07-21 17:27:05 +03:00
Arseny Sher
51b50f5cf5 Fix truncating the wal after VCL. 2021-07-21 17:27:05 +03:00
Arseny Sher
9e3fe2b4d4 Truncate the non-matching part of the log.
ref #296
2021-07-21 17:27:05 +03:00
Arseny Sher
eb1618f2ed TLA+ specification of proposer-acceptor consensus protocol.
And .cfg file for running TLC.

ref #293
2021-07-21 17:27:05 +03:00
Stas Kelvich
791312824d set superuser name in python tests too 2021-07-21 17:22:22 +03:00
Stas Kelvich
a17b2a4364 reflect postgres superuser changes in pageserver->compute connstring 2021-07-21 17:22:22 +03:00
sharnoff
c4b2bf7ebd Use 'zenith_admin' as superuser name in initdb 2021-07-21 17:22:22 +03:00
Konstantin Knizhnik
0723d49e0b Object push (#276)
* Introducing common enum ObjectVal for all values

* Rewrite push mechanism to use raw object copy

* Fix history unit test

* Add skip_nonrel_objects functions for history unit tests
2021-07-21 00:41:57 +03:00
Eric Seppanen
ad79ca05e9 suppress nullptr warnings on auto-generated bindgen unit tests
Hopefully, this will be addressed upstream before too long; see
rust-bindgen issue #1651.
2021-07-20 20:12:15 +03:00
Heikki Linnakangas
325dd41277 Remove unused constructor function.
This was failing to compile with a rustc nightly version, because the
datatype of 'fullPageWrites' was changed. See discussion at
https://github.com/zenithdb/zenith/issues/207#issuecomment-881478570.
But since the function is actually unused, let's just remove it.
2021-07-20 16:01:37 +03:00
sharnoff
7c96c638aa Fix particular typos: s/cofig/config/g 2021-07-20 10:32:59 +03:00
Konstantin Knizhnik
9838c71a47 Explicit compact (#341)
* Do not perform compaction of RocksDB storage on each GC iteration

* Increase GC timeout to let GC tests pass

* Add comment to gc_iteration
2021-07-19 16:49:12 +03:00
Stas Kelvich
79d9314ba6 terminate socket explicitly 2021-07-19 14:52:41 +03:00
Stas Kelvich
2b33894e7b few more review fixes 2021-07-19 14:52:41 +03:00
Stas Kelvich
a118557331 review fixes 2021-07-19 14:52:41 +03:00
Stas Kelvich
8ec234ba78 fix tokio features set for proxy standalone build 2021-07-19 14:52:41 +03:00
Stas Kelvich
70926adaba allow to specify redirect_uri in proxy 2021-07-19 14:52:41 +03:00
Stas Kelvich
560f088f05 fix query_string parsing for mgmt callback 2021-07-19 14:52:41 +03:00
Stas Kelvich
aa404b60fe change mgmt json format; add cli flags 2021-07-19 14:52:41 +03:00
Stas Kelvich
1b6d99db7c unfreeze client session upon callback 2021-07-19 14:52:41 +03:00
Stas Kelvich
605b90c6c7 do an actual proxy pass 2021-07-19 14:52:41 +03:00
Stas Kelvich
6f747893be quick mgmt api 2021-07-19 14:52:41 +03:00
Stas Kelvich
dab34c3dd6 distinguish between new and old users 2021-07-19 14:52:41 +03:00
Stas Kelvich
bf45bef284 md5 auth for postgres_backend.rs 2021-07-19 14:52:41 +03:00
Konstantin Knizhnik
d55095ab21 [refer #331] Move initialization of checkpoint object into import_timeline_from_postgres_datadir 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
a048e0c7c1 Revert assert check for advancing prev_record_lsn, because when creating a branch at an older LSN we cannot provide a proper xl_prev value 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
9436c4161f Adding comment to prev_record_lsn update 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
e74b06d999 Pass prev_record_ptr through zenith.signal file to compute node 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
f6705b7a7d Fix TimestampTz type to i64 to be compatible with Postgres 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
386403dcd1 Hold CLOG for prepared transactions 2021-07-16 18:43:07 +03:00
Heikki Linnakangas
46e613f423 Fix typos 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
56da87cadc Merge with main branch 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
dcabe694ee Fix usage of put_page_image in save_xlog_dbase_create 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
842419b91f Do not update relation metadata in get_page_at_lsn 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
3cded20662 Refactoring after Heikki's review 2021-07-16 18:43:07 +03:00
Konstantin Knizhnik
eb0a56eb22 Replay non-relational WAL records on page server 2021-07-16 18:43:07 +03:00
Dmitry Rodionov
8a541147e2 run cargo generate-lockfile
It removes remaining issues with running cargo audit. There was one
error and one warning:

Crate:         tokio
Version:       1.5.0
Title:         Task dropped in wrong thread when aborting `LocalSet` task
Date:          2021-07-07
ID:            RUSTSEC-2021-0072
URL:           https://rustsec.org/advisories/RUSTSEC-2021-0072
Solution:      Upgrade to >=1.5.1, <1.6.0 OR >=1.6.3, <1.7.0 OR >=1.7.2, <1.8.0 OR >=1.8.1

Crate:         cpuid-bool
Version:       0.1.2
Warning:       unmaintained
Title:         `cpuid-bool` has been renamed to `cpufeatures`
Date:          2021-05-06
ID:            RUSTSEC-2021-0064
URL:           https://rustsec.org/advisories/RUSTSEC-2021-0064
2021-07-16 15:04:56 +03:00
Dmitry Rodionov
ed0fcfa9b7 replace parse_duration crate because of unpatched known vulnerability
resolves #87
2021-07-16 14:30:27 +03:00
Heikki Linnakangas
c5509b05de Revert accidental change to vendor/postgres.
I accidentally changed it in befefe8d84.
2021-07-16 12:37:10 +03:00
Heikki Linnakangas
befefe8d84 Run 'cargo fmt'.
Fixes a few formatting discrepancies that had crept in recently.
2021-07-14 22:03:14 +03:00
Konstantin Knizhnik
ad92b66eed Fix TimestampTz type to i64 to be compatible with Postgres 2021-07-14 15:55:12 +03:00
Heikki Linnakangas
d119f2bcce Add unit test for branch creation.
This is pretty similar to the python 'test_branch_behind' test, but I
find it useful to have a small unit test for it too.
2021-07-13 09:54:27 +03:00
Dmitry Ivanov
038dc6c629 Symlink Pipfile (& Pipfile.lock) at the top level
When executed, pipenv shell creates a fresh Pipfile if none
is found in the current directory. This is confusing,
hence the patch to symlink it at the top level, which
is a good starting point for various commands.
2021-07-12 21:30:52 +03:00
Heikki Linnakangas
bfc27bee5e Revert the fix to allegedly inaccurate comment.
I misread the code. It does indeed only call checkpoint() every 10
segments. Revert that change, but keep the rest of the comment fixes.
2021-07-10 18:53:47 +03:00
Heikki Linnakangas
19528de03e Fix inaccurate comment on how often we checkpoint the repository. 2021-07-10 18:30:47 +03:00
Konstantin Knizhnik
3e69c41a47 Add XLOG_HEAP_OPMASK to pg_contants 2021-07-10 10:09:56 +03:00
Konstantin Knizhnik
97681acfcf Replace XLR_RMGR_INFO_MASK with XLOG_HEAP_OPMASK 2021-07-10 10:09:56 +03:00
Konstantin Knizhnik
baf8800b96 Fix incorrect mask in waldecoder 2021-07-10 10:09:56 +03:00
Konstantin Knizhnik
577af8a459 Store page image with the same LSN as replaced WAL record 2021-07-09 17:24:25 +03:00
Dmitry Rodionov
75e717fe86 allow both domains and ip addresses in connection options for
pageserver and wal keeper. Also updated PageServerNode definition in
control plane to account for that. resolves #303
2021-07-09 16:46:21 +03:00
Stas Kelvich
4987d5ee1f reduce logging in wal_acceptor 2021-07-09 16:45:48 +03:00
anastasia
462b8801d2 Fix list_rels visibility check: some relations after initdb may have lsn equal to last_valid_lsn() 2021-07-09 15:25:53 +03:00
Dmitry Ivanov
2712eaee15 [postgres] Enable seccomp bpf 2021-07-09 14:59:45 +03:00
Eric Seppanen
96f4ddd243 cargo fmt 2021-07-05 21:50:42 -07:00
Heikki Linnakangas
3386ce6f35 Introduce common enum ObjectVal for all values stored in ObjectStore.
Based on Konstantin's original patch (PR #275), but I introduced helper
functions for serializing/deserializing the different kinds of
ObjectValues, which made it more pleasant to use, as the deserialization
checks are now performed in the helper functions.
2021-06-30 12:50:35 +03:00
Heikki Linnakangas
9c856ecf43 Update TODO comment now that we handle relation drops
Commit ced338fd20 neglected this comment. In passing, also move around
the comments explaining the two kinds of things we store per page.
2021-06-30 12:50:32 +03:00
anastasia
d4329887b3 remove restore_s3.rs that uses obsolete data format 2021-06-29 18:12:32 +03:00
Heikki Linnakangas
ced338fd20 Handle relation DROPs in page server.
Add back code to parse transaction commit and abort records, and in
particular the list of dropped relations in them. Add 'put_unlink'
function to the Timeline trait and implementation. We had the code to
handle dropped relations in the GC code and elsewhere in ObjectRepository
already, but there was nothing to create the RelationSizeEntry::Unlink
tombstone entries until now. Also add a test to check that GC correctly
removes all page versions of a dropped relation.

Implements https://github.com/zenithdb/zenith/issues/232, except for the
"orphaned" rels.

Reviewed-by: Konstantin Knizhnik
2021-06-29 00:27:10 +03:00
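A hedged sketch of the tombstone-based drop at the trait level; the type and method shapes below echo the commit message but are simplified for illustration:

```rust
// Illustrative slice of the Timeline interface: put_unlink() records a
// tombstone so that GC can later remove all page versions of the relation.
trait Timeline {
    fn put_page_image(&self, rel: RelTag, blknum: u32, lsn: Lsn, img: Vec<u8>);
    // Mark `rel` as dropped at `lsn`. Readers at older LSNs still see it;
    // GC uses the tombstone to drop every page version once it is safe.
    fn put_unlink(&self, rel: RelTag, lsn: Lsn);
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct RelTag {
    spcnode: u32,
    dbnode: u32,
    relnode: u32,
    forknum: u8,
}

#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Lsn(u64);

// When decoding a commit/abort record, unlink every relation dropped in it.
fn apply_dropped_rels(timeline: &dyn Timeline, dropped: &[RelTag], lsn: Lsn) {
    for rel in dropped {
        timeline.put_unlink(*rel, lsn);
    }
}
```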
Heikki Linnakangas
44c35722d8 Remove a bunch of dead code
Some of these were related to handling various WAL records that are not
related to any relations, like pg_multixact updates. These should have
been removed in the revert commit 6a9c036ac1, but I missed them.

Also, we didn't do anything with commit/abort records. We will start
parsing commit/abort records in the next commit, but it seems better to
add that from a clean slate.

Reviewed-by: Konstantin Knizhnik
2021-06-29 00:26:53 +03:00
Heikki Linnakangas
ec44f4b299 Add test for Garbage Collection.
This exposes a command in the page server to run GC immediately on a given
timeline. It's just for testing purposes.
2021-06-28 17:07:28 +03:00
Heikki Linnakangas
a31bba19b0 Compact RocksDB database after garbage collection.
Without this step, the page versions won't actually be removed, they're
just marked for deletion on the next RocksDB "merge" or "compact"
operation.

Author: Konstantin Knizhnik
2021-06-28 16:53:26 +03:00
Heikki Linnakangas
0dd46061a8 Improve logging and comments in GC.
- Print the number of dropped relations, and the number of relations
  encountered overall.

- If a block has only one page version, the latest one, don't count it as
  a "truncated" version history. Only count pages for which we actually
  removed some old versions.

- Change "last" to "latest" in variable names and comments. "Last" could
  be interpreted as "oldest", but here it means "newest".

- Add a comment noting that the GC code depends on get_page_at_lsn_nowait
  to store the materialized page version in the repository.
2021-06-28 16:53:13 +03:00
Heikki Linnakangas
cb2ddf06d0 Refactor code to do one iteration of GC to separate function. 2021-06-28 16:49:21 +03:00
Dmitry Ivanov
eb7388e3e8 Add debug info to release builds
This is useful for profiling and, to some extent, debugging.
Besides, debug info should not affect performance.
2021-06-28 14:21:30 +03:00
Stas Kelvich
b314311f49 fix dockerfile with new init cli keys 2021-06-27 16:14:24 +03:00
Heikki Linnakangas
183a3022a5 cargo fmt 2021-06-25 11:47:59 +03:00
Konstantin Knizhnik
226204094a Fix recall parameter handling in walkeeper 2021-06-25 09:43:55 +03:00
Heikki Linnakangas
4f1b22a2c8 Use ObjectTag enum instead of special fork number to store metadata objects.
Extracted from Konstantin's larger PR:
https://github.com/zenithdb/zenith/pull/268
2021-06-22 21:34:31 +03:00
Dmitry Ivanov
257ade0688 Extract PostgreSQL connection logic into PgProtocol
This patch aims to:

* Unify connection & querying logic of ZenithPagerserver and Postgres.
* Mitigate changes to transaction machinery introduced in `psycopg2 >= 2.9`.

Now it's possible to acquire db connection using the corresponding
method:

```python
pg = postgres.create_start('main')
conn = pg.connect()
...
conn.close()
```

This pattern can be further improved with the help of `closing`:

```python
from contextlib import closing

pg = postgres.create_start('main')

with closing(pg.connect()) as conn:
    ...
```

All connections produced by this method will have autocommit
enabled by default.
2021-06-17 20:19:04 +03:00
Dmitry Ivanov
43ece6e2a2 Fix test_runner's fixtures for python 3.6
Apparently, Literal type is only available since 3.8.
2021-06-17 20:19:04 +03:00
Arseny Sher
f923464b93 Remove pq_protocol.rs.
I forgot to do that in b2f51026aa.
2021-06-16 18:52:36 +03:00
Arseny Sher
11efafb05b Don't capture pytest output in CI.
Otherwise it is hard to see what is going on until the testing is done.
2021-06-16 14:45:43 +03:00
Arseny Sher
7c5532303e Preserve wal acceptor logs in CI.
And generally make removal of everything-but-logs a bit simpler, with files
staying in place.

Also renames postgres log from 'log' to 'pg.log'.
2021-06-16 14:45:43 +03:00
Eric Seppanen
39c1d4896c add a clippy shell script
The clippy maintainers have not provided an easy way for projects to
configure the set of lints they would like enabled/disabled. It's
particularly bad for projects using workspaces, which can easily lead to
duplicated clippy annotations for every crate, library, binary, etc.

Add a shell script that runs clippy, with a few unhelpful lints
disabled:
new_without_default
manual_range_contains
comparison_chain

If you save this in your path under the name "cargo-zclippy" (or
whatever name you like), then you can run it as "cargo zclippy" from the
shell prompt. If your text editor has rust-analyzer integration, you can
also use this new command as a replacement for "cargo check" or "cargo
clippy" and see clippy warnings and errors right in the editor.
2021-06-15 10:52:11 -07:00
Eric Seppanen
d2d5a01522 minor clippy fixes 2021-06-15 10:52:11 -07:00
Arseny Sher
36d6c401bf bump vendor/postgres 2021-06-15 15:14:27 +03:00
Arseny Sher
37b0236e9a Move wal acceptor tests to python.
Includes fixtures for wal acceptors and associated setup.

Nothing really new here, but surprisingly this caught some issues in
walproposer.

ref #182
2021-06-15 15:14:27 +03:00
Patrick Insinger
cc169a6896 pageserver - config file
To simplify cloud ops, allow configuration via file.
TOML is used as the config format, and the file is stored in the working
directory.
Arguments used at initialization are saved in the config file.
Config file params may be overridden by CLI arguments.
2021-06-14 09:40:22 -07:00
Patrick Insinger
77366b7a76 pageserver - remove env variables
Use CLI args instead of environment variables to parameterize the
working directory and postgres distribution.

Before this change, there was a mixture of environment variables and CLI
arguments that needed to be set. Moving to a single input simplifies
cloud configuration management.
2021-06-14 09:40:22 -07:00
Konstantin Knizhnik
9af04b3b8d Do not cache dropped relation metadata 2021-06-14 12:38:29 +03:00
Konstantin Knizhnik
9b7b311815 New version of postgres 2021-06-14 12:38:29 +03:00
Konstantin Knizhnik
cb4f5e911c Fix problem with rel_meta.get 2021-06-14 12:38:29 +03:00
Konstantin Knizhnik
6403f1745c Replace rel_meta.range() with rel_meta.get() 2021-06-14 12:38:29 +03:00
Konstantin Knizhnik
6d7942ece4 Use RwLock instead of mutex for relation metadata cache 2021-06-14 12:38:29 +03:00
Konstantin Knizhnik
09b2c66cf6 Cache relation size 2021-06-14 12:38:29 +03:00
Stas Kelvich
5c70b52f4a remove --systemid flag from wal_acceptor 2021-06-14 11:58:53 +03:00
Stas Kelvich
19602dc88a add wal_acceptor binary in Dockerfile 2021-06-14 11:58:53 +03:00
Stas Kelvich
2b66049b21 add wal_acceptor binary to docker image 2021-06-14 11:58:53 +03:00
Stas Kelvich
c3011359ab remove --systemid from walkeeper 2021-06-14 11:58:53 +03:00
Stas Kelvich
da117f431d run & init from dockerfile 2021-06-14 11:58:53 +03:00
Stas Kelvich
78aad4fe3f Show initdb stderr for unsuccessful runs 2021-06-14 11:58:53 +03:00
Heikki Linnakangas
1c5be12515 Fix compilation with Rust versions < 1.50
The bool::then function was added in Rust 1.50. I'm still using 1.48 on
my laptop. We haven't decided what Rust version we will require
(https://github.com/zenithdb/zenith/issues/138), and I'll probably need
to upgrade sooner or later, but this will do for now.
2021-06-11 10:37:25 +03:00
Dmitry Ivanov
96c7594d29 Enable some kind of gradual typing in test_runner (#222)
It's not realistic to enable full-blown type checks
within test_runner's codebase, since the number of
warnings revealed by mypy is overwhelming.

Tests are supposed to be easy to use, so we can't
cripple everybody's workflow for the sake of imaginary benefit.

Ultimately, the purpose of this attempt is three-fold:

* Facilitate code navigation when paired with python-language-server.
* Make method signatures apparent to a fellow programmer.
* Occasionally catch some obvious type errors.
2021-06-10 22:53:15 +03:00
Eric Seppanen
7a3794ef18 tui_event: unwrap result of stdin.keys()
Clear a clippy warning about manual flatten.

This isn't good error handling, but panicking is probably better than
spinning forever if stdin returns EOF.
2021-06-10 12:04:52 -07:00
Stas Kelvich
bf56ea8c43 Locate postgres binary and libs for 'postgres --wal-redo'
based on POSTGRES_DISTRIB_DIR.
2021-06-09 20:17:27 +03:00
Dmitry Ivanov
bb1446e33a Change behavior of ComputeControlPlane::new_node() (#235)
Previously, transaction commit could happen regardless of whether
pageserver has caught up or not. This patch aims to fix that.

There are two notable changes:

1. ComputeControlPlane::new_node() now sets the
`synchronous_standby_names = 'pageserver'` parameter to delay
transaction commit until pageserver acting as a standby has
fetched and ack'd a relevant portion of WAL.

2. pageserver now has to:
    - Specify the `application_name = pageserver` which matches the
    one in `synchronous_standby_names`.
    - Properly reply with the ack'd LSNs.

This means that some tests don't need sleeps anymore.

TODO: We should probably make this behavior configurable.

Fixes #187.
2021-06-09 11:24:55 +03:00
anastasia
0969574d48 Use bindgen for various xlog structures and checkpoint.
Implement encode/decode methods for them.

Some methods are unused now. This is a preparatory commit for nonrel_wal
2021-06-09 01:00:42 +03:00
anastasia
05a681be2c add createuser test to test shared catalog restore 2021-06-09 00:31:09 +03:00
anastasia
507c1fbbac bump vendor/postgres 2021-06-08 17:46:43 +03:00
Arseny Sher
b2f51026aa Consolidate PG proto parsing-deparsing and backend code.
Now postgres_backend communicates with the client, passing queries to the
provided handler; we have two currently, for wal_acceptor and pageserver.

Now BytesMut is again used for writing data to avoid manual message length
calculation.

ref #118
2021-06-08 17:31:40 +03:00
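The split described here boils down to "one backend, many handlers". A stripped-down Rust sketch, with names and signatures that are assumptions rather than the real postgres_backend API:

```rust
// The backend owns the socket and the protocol framing; each service
// (wal_acceptor, pageserver) only implements a query handler.
trait Handler {
    fn process_query(&mut self, backend: &mut PostgresBackend, query: &str) -> Result<(), String>;
}

struct PostgresBackend {
    // socket, read/write buffers, auth state, ...
}

impl PostgresBackend {
    fn run(&mut self, handler: &mut dyn Handler) -> Result<(), String> {
        // Simplified event loop: read frontend messages and hand queries to the
        // handler, which writes its response back through the backend.
        while let Some(query) = self.read_query()? {
            handler.process_query(self, &query)?;
        }
        Ok(())
    }

    fn read_query(&mut self) -> Result<Option<String>, String> {
        Ok(None) // placeholder: real code parses FeMessage frames from the socket
    }
}
```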
anastasia
2b0193e6bf implement from_bytes for XLogPageHeader structs 2021-06-08 13:08:57 +03:00
anastasia
c31a5e2c8f move XLogPageHeader structs to xlog_utils 2021-06-08 13:08:57 +03:00
anastasia
d85d67a6f1 use constants defined in xlog_utils for waldecoder 2021-06-08 13:08:57 +03:00
anastasia
9b8e82c6cf Fix makefile for out-of-tree compiling of contrib/zenith 2021-06-07 19:24:20 +03:00
Heikki Linnakangas
434374ebb4 Turn encode/decode into methods
Like in PR #208
2021-06-04 23:05:30 +03:00
Heikki Linnakangas
a7ae552851 Use rust memoffset crate to replace C offsetof().
Cherry-picked from Eric's PR #208
2021-06-04 23:05:28 +03:00
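For context, a tiny example of what the memoffset crate provides; the struct is a made-up stand-in, not the actual pg_control layout:

```rust
use memoffset::offset_of;

// A stand-in struct; the real use case is computing field offsets inside
// PostgreSQL's control file structs without relying on a C offsetof().
#[repr(C)]
struct Example {
    a: u32,
    b: u64,
    c: u16,
}

fn main() {
    // offset_of! evaluates to the byte offset of the field within the struct.
    println!("offset of b = {}", offset_of!(Example, b));
    println!("offset of c = {}", offset_of!(Example, c));
}
```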
Heikki Linnakangas
8b5a061c8e Add comments on the unsafe use of transmute in encode/decode_pg_control
Note the unsafety of the unsafe block, with a link to the ongoing
discussion. This doesn't try to solve the problem, but let's at least
document the status quo.
2021-06-04 23:05:26 +03:00
Heikki Linnakangas
8147aa7e93 Use u8 slice instead of Bytes in function argument.
Bytes is handy, but in decode_pg_control's case it's just complicating
things. Also, pass ControlFileData by ref to encode_pg_control().
2021-06-04 23:05:20 +03:00
Heikki Linnakangas
d18cc8a3a8 Update 'postgres_ffi' module's readme file and comments.
Explain the purpose of the 'postgres_ffi' module, explain what the
PostgreSQL control file is, and some other minor cleanup.
2021-06-04 23:05:11 +03:00
Heikki Linnakangas
762e9859d6 Move functions for reading/writing control file to separate source file.
To follow the precedent of xlog_utils.rs and relfile_utils.rs.
2021-06-04 23:05:05 +03:00
Heikki Linnakangas
924261f7db Remove unused ControlFile::new() constructor.
It has never been used, AFAICS.
2021-06-04 23:05:02 +03:00
Konstantin Knizhnik
063429aade Implement GC for new object_store API (#229)
* Implement GC for new object_store API

* Add comments for GC

* Revert postgres module version reference
2021-06-04 20:11:56 +03:00
anastasia
445e88f50b Minor fix: preserve log of regression tests in circleci 2021-06-03 15:54:59 +03:00
Patrick Insinger
47694ea4f5 zenith push 2021-06-02 17:20:49 -04:00
Patrick Insinger
3364a8d442 pageserver - timeline history api 2021-06-02 16:20:26 -04:00
Dmitry Ivanov
f2243d7459 Merge pull request #219 from zenithdb/tidy-up-tests
Tidy up pytest-based tests
2021-06-01 22:06:13 +03:00
Dmitry Ivanov
244fcffc50 Fix typos found by codespell 2021-06-01 21:43:26 +03:00
Dmitry Ivanov
8c3c9c3394 Update README.md 2021-06-01 21:31:29 +03:00
Dmitry Ivanov
00ce635da9 Reformat tests using yapf 2021-06-01 21:09:09 +03:00
Dmitry Ivanov
7d5f7462c1 Tidy up pytest-based tests 2021-06-01 21:09:09 +03:00
Alexey Kondratov
bed75f800f Bump vendor/postgres to use rebased on top of REL_14_BETA1 version 2021-06-01 20:04:22 +03:00
anastasia
0e423d481e Update rustdoc comments and README for pageserver crate 2021-06-01 19:38:42 +03:00
anastasia
0c74f6fa4e Update README about source tree layout 2021-06-01 19:38:42 +03:00
Heikki Linnakangas
fc01fae9b4 Remove leftover references to safekeeper_proxy.
We don't use it anymore. The WAL proposer is now a background worker that
runs as part of the primary Postgres server.
2021-06-01 18:50:24 +03:00
Heikki Linnakangas
558a2214bc Fix comment 2021-06-01 18:28:01 +03:00
Heikki Linnakangas
31815bccc0 Remove obsolete script.
It was used to launch the page server a long time ago. Doesn't work,
and isn't needed anymore.
2021-06-01 18:27:54 +03:00
Stas Kelvich
e12cab7c17 fix dockerfile comments 2021-06-01 17:07:10 +03:00
Stas Kelvich
002cd8ed5b Dockerfile for pageserver. 2021-06-01 16:08:32 +03:00
Stas Kelvich
588a030803 Clean up control_plane's Cargo.toml 2021-06-01 16:08:32 +03:00
Stas Kelvich
0b9bbdc4ec Remove libxml2 and curl from postgres dependencies.
As lazyrestore was removed from our postgres main branch.
2021-06-01 16:08:32 +03:00
Heikki Linnakangas
922715a923 Use Timeline::get_last_record_lsn() to find end-of-WAL.
That's less hacky than scanning the WAL directory.
2021-06-01 09:37:54 +03:00
Heikki Linnakangas
ac60b68d50 Handle VM and FSM truncation WAL records in the page server.
Fixes issue #190.

Original patch by Konstantin Knizhnik.
2021-05-31 23:36:17 +03:00
Konstantin Knizhnik
1aceea1bdd Shutdown socket in ReplicationConn 2021-05-31 21:37:07 +03:00
Konstantin Knizhnik
e0cc4dee4f [refer #182] Make walkeeper periodically send callme requests to pageserver 2021-05-31 21:37:07 +03:00
Arseny Sher
fd20101e5c Configure pipenv for python tests. 2021-05-31 16:43:01 +03:00
anastasia
5a73a6fdfc add -w flag to wait till pg_ctl actually finishes what was asked 2021-05-28 20:33:16 +03:00
Stas Kelvich
4608b1ec70 Set wal_log_hints=on
That is mandatory to correctly maintain the visibility map (see issue #192).
It also makes sense to check that wal_log_hints is enabled on the pageserver side,
but for now let's just check that the tests pass with this on.
2021-05-28 11:38:46 +03:00
anastasia
ccb2eea7fd Add test_isolation that runs pg_isolation_regress for zenith 2021-05-28 11:38:46 +03:00
Heikki Linnakangas
6b615cbde1 Remove Copy marker from large ServerInfo struct.
We don't want to encourage passing it by value. Doesn't matter much in
practice, but let's be tidy.

Per discussion at https://github.com/zenithdb/zenith/pull/195#issuecomment-849897327
2021-05-27 23:16:54 +03:00
Heikki Linnakangas
adc0e04205 Misc cleanup of the 'zenith_regress' tests
- Remove serial_schedule. As was alluded to in the README, it's really
  quite pointless.
- Remove unused PORT/HOST variables
- Fix typos
2021-05-27 23:12:02 +03:00
Heikki Linnakangas
78e3edf2e9 Add CONTRIBUTING.md with some ground rules for submitting PRs.
Let's try to ratchet up our code quality, but at the same time keep the
bureaucracy low.
2021-05-27 23:07:37 +03:00
Heikki Linnakangas
4b6563a55d Little cleanup on save_decoded_record function
Pass DecodedWALRecord by reference, seems nicer, and probably marginally
faster.
2021-05-27 20:50:45 +03:00
Heikki Linnakangas
34f4207501 Refactoring of the Repository/Timeline stuff
- All timelines are now stored in the same rocksdb repository. The GET
  functions have been taught to follow the ancestors.

- Change the way relation size is stored. Instead of inserting "tombstone"
  entries for blocks that are truncated away, store relation size as
  separate key-value entry for each relation

- Add an abstraction for the key-value store: ObjectStore. It allows
  swapping RocksDB with some other key-value store easily. Perhaps we
  will write our own storage implementation using that interface, or
  perhaps we'll need a different abstraction, but this is a small
  improvement over status quo in any case.

- Garbage Collection is broken and commented out. It's not clear where and
  how it should be implemented.
2021-05-27 20:07:50 +03:00
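A hedged sketch of such a key-value abstraction; the method set is illustrative rather than the actual ObjectStore trait:

```rust
// Minimal key-value interface that a RocksDB-backed (or any other) store could
// implement; the repository layer above never sees RocksDB types directly.
trait ObjectStore {
    fn put(&self, key: &[u8], value: &[u8]) -> Result<(), String>;
    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>, String>;
    // Iterate over all keys with a given prefix, e.g. all page versions of one relation.
    fn scan_prefix(&self, prefix: &[u8]) -> Result<Vec<(Vec<u8>, Vec<u8>)>, String>;
}

// Illustration of the "relation size as its own key-value entry" idea: the size
// lives under a dedicated key per relation instead of per-block tombstones.
fn rel_size_key(rel_id: u64) -> Vec<u8> {
    let mut key = b"relsize:".to_vec();
    key.extend_from_slice(&rel_id.to_be_bytes());
    key
}
```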
Heikki Linnakangas
d1d2d5ce69 Make multixact test more robust
There was no guarantee that the SELECT FOR KEY SHARE queries actually
run in parallel. With unlucky timing, one query might finish before
the next one starts, so that the server doesn't need to create a
multixact. I got a failure like that on the CI:

    batch_others/test_multixact.py:56: in test_multixact
        assert(int(next_multixact_id) > int(next_multixact_id_old))
    E   AssertionError: assert 1 > 1
    E    +  where 1 = int('1')
    E    +  and   1 = int('1')

This could be reproduced by adding a random sleep in the runQuery
function, to make each query run at different times.

To fix, keep the transactions open after running the queries, so that
they will surely be open concurrently. With that, we can run the
queries serially, and don't need the 'multiprocessing' module anymore.

Fixes https://github.com/zenithdb/zenith/issues/196
2021-05-27 20:00:52 +03:00
Heikki Linnakangas
60ea26316e Add LICENSE and COPYRIGHT files.
This declares the software as Apache 2.0 licensed, and notes that the
vendor/postgres submodule is licensed under the PostgreSQL license.
2021-05-27 15:33:08 +03:00
Heikki Linnakangas
cffc979058 Fix a couple of typos in comments. 2021-05-27 14:32:04 +03:00
Heikki Linnakangas
1af6607fc3 Add a test for restarting and recreating compute node.
This is working; let's keep it that way. This also adds test coverage for
the 'zenith pg stop --destroy' option that was added in commit 6ad6e5bd.
2021-05-27 12:59:45 +03:00
Heikki Linnakangas
cb6e2d9ddb Minor refactoring and cleanup of the Timeline interface.
Move `save_decoded_record` out of the Timeline trait. The storage
implementation shouldn't need to know how to decode records.

Also move put_create_database() out of the Timeline trait. Add a new
`list_rels` function to Timeline to support it, instead.

Rename `get_relsize` to `get_rel_size`, and `get_relsize_exists` to
`get_rel_exists`. Seems nicer.
2021-05-27 09:44:46 +03:00
Alexey Kondratov
1ccf82f932 More info about branches tree drawing routines
Debt for the #144
2021-05-26 12:05:13 +03:00
Alexey Kondratov
b1a424dfa9 Add more info about borrowed from Postgres structures (RelTag and BufferTag) 2021-05-26 12:05:13 +03:00
Eric Seppanen
263acef1cc README updates
Add build dependencies and other local packages needed (Ubuntu only).

Fix some weird formatting of psql commands due to `sh` syntax
highlighting.

Improve test directions, so pytest doesn't scan the whole tree.

Drop description of the integration_tests directory since it's on its
way out.
2021-05-25 16:41:15 -07:00
Eric Seppanen
7c73afc1af switch repository types to serde
Derive Serialize+Deserialize for RelTag, BufferTag, CacheKey. Replace
handwritten pack/unpack functions with ser, des from
zenith_utils::bin_ser (which uses the bincode crate).

There are some ugly hybrids in walredo.rs, but those functions are
already doing a lot of questionable manual byte-twiddling, so hopefully
the weirdness will go away when we get better postgres protocol
wrappers.
2021-05-25 14:56:19 -07:00
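A small self-contained sketch of the pattern, assuming serde (with the derive feature) and bincode as dependencies; the fields here are a stand-in for the real tags, and bincode is called directly in place of zenith_utils::bin_ser:

```rust
use serde::{Deserialize, Serialize};

// Stand-in for the real tags: derive instead of hand-written pack/unpack.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct BufferTag {
    spcnode: u32,
    dbnode: u32,
    relnode: u32,
    forknum: u8,
    blknum: u32,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let tag = BufferTag { spcnode: 1663, dbnode: 13008, relnode: 16384, forknum: 0, blknum: 42 };

    // bincode does the byte-level encoding that the removed pack/unpack code used to do.
    let bytes = bincode::serialize(&tag)?;
    let back: BufferTag = bincode::deserialize(&bytes)?;
    assert_eq!(tag, back);
    Ok(())
}
```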
Heikki Linnakangas
e8f0a9bb80 Add test for prepared transactions. 2021-05-25 11:11:32 +03:00
Eric Seppanen
6f9175ca2d cargo fmt 2021-05-24 17:28:56 -07:00
Heikki Linnakangas
69fa10ff86 Fix rocksdb get_relsize() implementation to work with historic LSNs. 2021-05-24 17:12:18 +03:00
Heikki Linnakangas
d5fe515363 Implement "checkpointing" in the page server.
- Previously, we checked on first use of a timeline, whether there is
  a snapshot and WAL for the timeline, and loaded it all into the
  (rocksdb) repository. That's a waste of effort if we had done that
  earlier already, and stopped and restarted the server. Track the
  last LSN that we have loaded into the repository, and only load the
  recent missing WAL after that.

- When you create a new zenith repository with "zenith init",
  immediately load the initial empty postgres cluster into the rocksdb
  repository. Previously, we only did that on the first connection. This
  way, we don't need any "load from filesystem" codepath during normal
  operation, we can assume that the repository for a timeline is always
  up to date. (We might still want to use the functionality to import an
  existing PostgreSQL data directory into the repository in the future,
  as a separate Import feature, but not today.)
2021-05-24 17:02:05 +03:00
Heikki Linnakangas
6a9c036ac1 Revert all changes related to storing and restoring non-rel data in page server
This includes the following commits:

35a1c3d521 Specify right LSN in test_createdb.py
d95e1da742 Fix issue with propagation of CREATE DATABASE to the branch
8465738aa5 [refer #167] Fix handling of pg_filenode.map files in page server
86056abd0e Fix merge conflict: set initial WAL position to second segment because of pg_resetwal
2bf2dd1d88 Add nonrelfile_utils.rs file
20b6279beb Fix restoring non-relational data during compute node startup
06f96f9600 Do not transfer WAL to computation nodes: use pg_resetwal for node startup

As well as some older changes related to storing CLOG and MultiXact data as
"pseudorelation" in the page server.

With this revert, we go back to the situation where, when you create a
new compute node, we ship *all* the WAL from the beginning of time to
the compute node. Obviously we need a better solution, like the code
that this reverts. But per discussion with Konstantin and Stas, this
stuff was still half-baked, and it's better for it to live in a branch
for now, until it's more complete and has gone through some review.
2021-05-24 16:05:45 +03:00
anastasia
6f9a582973 increase wait_lsn timeout to make tests more stable 2021-05-24 15:29:16 +03:00
anastasia
a0e23e6f3f Debug Timed out while waiting for WAL record problem 2021-05-24 15:29:16 +03:00
anastasia
84508d4f68 fix replay of nextMulti and nextMultiOffset fields 2021-05-24 15:17:35 +03:00
anastasia
fb230dcf32 Add test_multixact to check that we replay multixact and advance next_multixact_id correctly 2021-05-24 15:17:35 +03:00
Eric Seppanen
4aabc9a682 easy clippy cleanups
Various things that clippy complains about, and are really easy to
fix.
2021-05-23 13:17:15 -07:00
Heikki Linnakangas
0fe81b2993 Update postgres submodule 2021-05-22 00:30:44 +03:00
Eric Seppanen
eb1f1a347d rust builds: disable incremental builds
For a CI build, storing incremental build data just makes the cache
bigger, for minimal gain.

Also, for Rust < 1.52.1 there are incremental compilation bugs. CircleCI
is currently building on 1.51.

This only affects the debug build; incremental compilation isn't used on
the release build.
2021-05-21 09:17:11 -07:00
Eric Seppanen
064aa44a06 fix cargo install to honor build_type
Also, bump the cache tag to invalidate the previous (incorrect) outputs.
2021-05-21 09:17:11 -07:00
Eric Seppanen
d6ee61b5cf circleci: build & test debug + release
Add parameters to specify which kind of build; run a debug and release
variant for each job.

Eventually this will be too many jobs, but for now this is a nice start.

Also, bump the cache string to "v02" so we don't mix up our cache output
with other branches.
2021-05-21 09:17:11 -07:00
Eric Seppanen
4b78a16b82 break circleci build into two jobs
This is a prerequisite to multiple Rust build jobs, e.g. MSRV, debug vs
release, etc.
2021-05-21 09:17:11 -07:00
Konstantin Knizhnik
c093ee5e4b Fix start_node_with_wal_proposer test 2021-05-21 14:07:16 +03:00
Konstantin Knizhnik
7685372cae Fix warning in test_wal_acceptor 2021-05-21 13:04:56 +03:00
Konstantin Knizhnik
ce54133ec4 Use zenith smgr for global relations 2021-05-21 12:46:17 +03:00
Konstantin Knizhnik
610e14a7fc Use new version of postgres 2021-05-21 12:36:24 +03:00
Konstantin Knizhnik
35a1c3d521 Specify right LSN in test_createdb.py 2021-05-21 12:20:38 +03:00
Heikki Linnakangas
22b7e74c83 Add test for following relmapper files at CREATE DATABASE 2021-05-21 12:13:47 +03:00
Konstantin Knizhnik
d95e1da742 Fix issue with propagation of CREATE DATABASE to the branch 2021-05-21 12:06:46 +03:00
Stas Kelvich
40d047c146 Remove double "zenith" mention in CLI header 2021-05-21 00:26:31 +03:00
Stas Kelvich
42f3dd47d2 update README.md according to recent CLI changes 2021-05-21 00:26:31 +03:00
Stas Kelvich
c2b2ab974c Hide initdb output from "zenith init" command 2021-05-21 00:26:31 +03:00
Stas Kelvich
6ad6e5bd84 Add --destroy flag to "pg stop" CLI command 2021-05-21 00:26:31 +03:00
Stas Kelvich
d534aeb9e1 Properly propagate control plane errors to CLI.
That allows showing a decent error whenever we try to start an already
started postgres.
2021-05-21 00:26:31 +03:00
Stas Kelvich
d45839879c Bind to socket earlier during pageserver init.
That allows printing a reasonable error message instead of panicking if
the address is already in use.
2021-05-21 00:26:31 +03:00
Stas Kelvich
1f6ca23db6 Be slightly more vocal about pageserver start/stop 2021-05-21 00:26:31 +03:00
Heikki Linnakangas
2127a65e27 Tidy up the code to launch WAL redo process a little bit
- if removing the old datadir fails, throw an error
- obey PageServerConf.workdir
2021-05-20 19:29:00 +03:00
Heikki Linnakangas
ecf2d181c4 Tidy up the code to create PageServerConf
Parse all the command line options before calling "zenith init" and
changing current working dir. The rest of the options don't make any
difference if we're initializing a new repository, but it seems strange
and error-prone to parse some arguments at different times.
2021-05-20 19:28:57 +03:00
Alexey Kondratov
c1bfa32771 Follow up for issue #144, better graphics in branches tree 2021-05-20 19:25:25 +03:00
Konstantin Knizhnik
8465738aa5 [refer #167] Fix handling of pg_filenode.map files in page server 2021-05-20 19:16:16 +03:00
Konstantin Knizhnik
87d7ce816d Add pause in pageserver stop method to avoid address already in use errors 2021-05-20 17:42:07 +03:00
anastasia
f38c2e620e Add test_zenith_regress.py that runs pg_regress styled tests from test_runner/zenith_regress.
TODO: remove similar tests from vendor/postgres testset
2021-05-20 17:24:39 +03:00
Konstantin Knizhnik
86056abd0e Fix merge conflict: set initial WAL position to second segment because of pg_resetwal 2021-05-20 15:26:39 +03:00
Konstantin Knizhnik
2bf2dd1d88 Add nonrelfile_utils.rs file 2021-05-20 14:48:55 +03:00
Konstantin Knizhnik
874d82fd4c Fix tests in lsn.rs after changing wal_seg_size type 2021-05-20 14:45:09 +03:00
Konstantin Knizhnik
3645133700 Fix conflicts with main branch 2021-05-20 14:39:27 +03:00
Konstantin Knizhnik
20b6279beb Fix restoring non-relational data during compute node startup 2021-05-20 14:14:52 +03:00
Konstantin Knizhnik
06f96f9600 Do not transfer WAL to computation nodes: use pg_resetwal for node startup 2021-05-20 14:13:47 +03:00
Alexey Kondratov
b5f60f3874 Issue #144: Refactor errors handling during branches tree printing 2021-05-20 12:49:04 +03:00
Alexey Kondratov
0ec56cd21f Issue #144: Branching output of zenith branch
* Add ancestor_id to pg_list->branch_list output of pageserver.
* Display branching point (LSN) for each non-root branch.
* Add tests for `zenith branch`.
2021-05-20 12:49:04 +03:00
Heikki Linnakangas
600e1a0080 Pass PageServerConf as static ref.
It's created once early in server startup, after parsing the
command-line options, and never modified afterwards. To simplify
things, pass it around as static ref, instead of making copies in all
the different structs. We still pass around a reference to it, rather
than putting it in a global variable, to allow unit testing with
different configs in the same process.
2021-05-20 09:11:36 +03:00
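One common way to obtain such a &'static reference after parsing the command line is to leak a single heap allocation at startup; a minimal sketch with made-up PageServerConf fields:

```rust
// Illustrative config struct; the real PageServerConf has many more fields.
#[derive(Debug)]
struct PageServerConf {
    listen_addr: String,
    workdir: std::path::PathBuf,
}

fn main() {
    // Built once from CLI args, then leaked so every thread can hold a plain
    // `&'static PageServerConf` without cloning or global mutable state.
    let conf: &'static PageServerConf = Box::leak(Box::new(PageServerConf {
        listen_addr: "127.0.0.1:64000".to_string(),
        workdir: std::path::PathBuf::from("."),
    }));

    std::thread::spawn(move || {
        println!("page service listening on {}", conf.listen_addr);
    })
    .join()
    .unwrap();

    println!("workdir: {}", conf.workdir.display());
}
```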
Eric Seppanen
9c94a34ae7 cargo fmt 2021-05-19 15:26:21 -07:00
Stas Kelvich
9c0ac251df Describe BeMessage::ErrorResponse format in comments 2021-05-20 00:37:46 +03:00
Stas Kelvich
872ed24408 Manually handle all error throwing for branch and init commands 2021-05-20 00:37:46 +03:00
Stas Kelvich
2f25d17e11 Set more error fields to satisfy rust-postgres parser 2021-05-20 00:37:46 +03:00
Stas Kelvich
8faa6fa392 Accept semicolon right after branch_create command 2021-05-20 00:37:46 +03:00
Stas Kelvich
4d5a41301d Support returning errors from page service 2021-05-20 00:37:46 +03:00
Eric Seppanen
4c35b22626 Remove FIXME about buffer pools
If I'm not going to do anything about it soon, it's not worth keeping
this comment.
2021-05-19 14:36:41 -07:00
Eric Seppanen
9fe3b73e13 walkeeper replication: remove the lock from the send stream.
I originally thought there would be multiple threads sending here, but
that's not currently the case, so remove the lock.
2021-05-19 14:36:41 -07:00
Eric Seppanen
e0146304e6 timeline: make SharedState and some constructors private
This was pointed out in code review: no need for these to be public.
2021-05-19 14:36:41 -07:00
Eric Seppanen
fbb04c592a wal_service: change error message at thread exit
Because many errors are propagated to this point, use a better message
than "socket error".
2021-05-19 14:36:41 -07:00
Eric Seppanen
8f43d7637c wal_service: move code around some more
Move ReceiveWalConn into its own file. Shuffle constants around so they
are close to the protocol they're associated with, or move them into
postgres_ffi if they seem to be global constants.
2021-05-19 14:36:41 -07:00
Eric Seppanen
cf30303d8f extract protocol peek code; rename Connection -> ReceiveWalConn
It may be more robust to use the TcpStream::peek function, so do all
protocol peeking before creating the protocol object. This reveals the
next cleanup step: rename Connection, since it's no longer the parent of
SendWalConn. Now we peek at the first bytes and choose which kind of
connection object to create.
2021-05-19 14:36:41 -07:00
Eric Seppanen
1ec157653e bin_ser: expand serialize error type, add serialized_size 2021-05-19 14:36:41 -07:00
Eric Seppanen
858ca3a4ce bin_ser: simplify ser_into_slice
The conversion of &mut [u8] into Write is a little tricky.

Also, remove an unused generic parameter.
2021-05-19 14:36:41 -07:00
Eric Seppanen
d744ddee7c bin_ser: preserve IO errors on deserialization
We're starting to deserialize directly from the TcpStream now, which
means that a socket error gets logged as "deserialize error". That's not
very helpful; preserve the io::Error so it can be logged.
2021-05-19 14:36:41 -07:00
Eric Seppanen
3296b7d770 wal_service: permit I/O errors while reading control file
I'm not sure why ignoring this error is a good idea, but the
test_embedded_wal_proposer test fails if we propagate the error upward.
2021-05-19 14:36:41 -07:00
Eric Seppanen
2148ae78ab wal_service: remove manual output buffering
Serialize objects directly to the stream. This allows us to remove a
bunch of buffer management code, along with the NewSerializer trait that
was a temporary bridge between the old code and the new.
2021-05-19 14:36:41 -07:00
Eric Seppanen
78dcf2207e replace manual deserialization with serde + BeSer
This struct is a little awkward, because in other places it is
serialized/deserialized as little-endian, but here it's big-endian.
2021-05-19 14:36:41 -07:00
Eric Seppanen
74b78608d9 split timeline code into a separate file 2021-05-19 14:36:41 -07:00
Eric Seppanen
a11558b84f break wal_service into multiple files
+ misc cleanups
2021-05-19 14:36:41 -07:00
Eric Seppanen
513696a485 break wal_service into multiple pieces
The pieces are:
base Connection
SendWal
ReplicationHandler

There are lots of other changes here:
- Put the replication reader in a background thread; this gets rid
  of some hacks with nonblocking mode.
- Stop manually buffering input data; use BufReader instead.
- Use BytesMut a lot less; use Read/Write traits where possible.
2021-05-19 14:36:41 -07:00
Eric Seppanen
cedc2eb5c2 wal_service: add BufReader
If we try to read a few bytes at a time, we will perform a lot more
syscalls than necessary. Wrap the socket in a BufReader, which will
buffer bytes as needed.
2021-05-19 14:36:41 -07:00
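
To illustrate the change, a minimal sketch of wrapping the socket (the function and the message framing here are invented for illustration):

    use std::io::{BufReader, Read};
    use std::net::TcpStream;

    fn read_message_length(stream: TcpStream) -> std::io::Result<u32> {
        // Wrap the socket once; subsequent small reads are served from the
        // in-memory buffer instead of costing one syscall each.
        let mut reader = BufReader::new(stream);
        let mut len = [0u8; 4];
        reader.read_exact(&mut len)?;
        Ok(u32::from_be_bytes(len))
    }
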
Heikki Linnakangas
e3e593f571 Don't send spurious ReadyForQuery messages in extended query protocol.
libpq tolerates and ignores them, but the Rust postgres client gets
confused by them in certain states. This explained the strange failure
I saw with the Copy Out protocol. I'm not sure what the condition was
exactly, but somehow the rust client got confused if it received a
ReadyForQuery message that it was not expecting.

Fixes https://github.com/zenithdb/zenith/issues/148.
2021-05-19 22:31:28 +03:00
Stas Kelvich
c12e393e74 move cargo_bin_dir() to integration_tests crate as it is used only there 2021-05-19 22:17:48 +03:00
Stas Kelvich
d59cb2ca7a clean up some leftovers after 746f66731 2021-05-19 22:17:48 +03:00
Stas Kelvich
58f34a8d76 Rework pg subcommand in CLI.
1. Create data directory on start
2. Remove distinct pg names, now pg name == branch name.
2021-05-19 22:17:48 +03:00
Stas Kelvich
31462f4b71 Reuse page_server.branches_list in CLI code 2021-05-19 22:17:48 +03:00
Heikki Linnakangas
538f903861 Optimize parse_relfilename() function.
Compiling a Regex is very expensive, so let's not do it on every
invocation. This was consuming a big fraction of the time in creating
a new base backup at "zenith pg create". This commit brings down the
time to run "zenith pg create" on a freshly created repository from
about 2 seconds to 1 second.

It's not worth spending much effort on optimizing things at this stage
in general, but might as well pick low-hanging fruit like this.
2021-05-19 14:08:37 +03:00
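
The usual Rust fix is to compile the Regex once and reuse it across calls. A hedged sketch of that pattern (the regex, names, and return value below are assumptions for illustration, not the actual parse_relfilename code):

    use lazy_static::lazy_static;
    use regex::Regex;

    lazy_static! {
        // Compiled once on first use, then shared by every caller.
        static ref RELFILE_RE: Regex =
            Regex::new(r"^(\d+)(_(vm|fsm|init))?(\.(\d+))?$").unwrap();
    }

    /// Return the relfilenode number if `name` looks like a relation file name.
    fn parse_relnode(name: &str) -> Option<u32> {
        RELFILE_RE.captures(name)?.get(1)?.as_str().parse().ok()
    }
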
Heikki Linnakangas
e6a7241c3a Simplify construction of rocksdb keys and values.
I'm going nuts with the pattern:

    let k = iter.key().unwrap();
    buf.clear();
    buf.extend_from_slice(&k);
    let key = CacheKey::unpack(&mut buf);

Introduce helper functions to convert a CacheKey into BytesMut, and
from [u8] into CacheKey. Reduces the boilerplate code a lot.

The helper functions create a new BytesMut on each call, whereas the old
code could reuse a single BytesMut, so this could be a bit slower. I
haven't tried measuring it, but at least it's not immediately noticeable,
and readability is much more important at this point. We can optimize
later.
2021-05-19 12:33:38 +03:00
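
A hedged sketch of the helper shape described above, with a made-up two-field key (the real CacheKey, and its pack/unpack, have more to them):

    use bytes::{Buf, BufMut, BytesMut};

    struct CacheKey {
        blknum: u32,
        lsn: u64,
    }

    impl CacheKey {
        fn pack(&self, buf: &mut BytesMut) {
            buf.put_u32(self.blknum);
            buf.put_u64(self.lsn);
        }

        fn unpack(buf: &mut BytesMut) -> CacheKey {
            CacheKey {
                blknum: buf.get_u32(),
                lsn: buf.get_u64(),
            }
        }

        /// Helper: pack into a fresh buffer, e.g. to use as a RocksDB key.
        fn to_bytes(&self) -> BytesMut {
            let mut buf = BytesMut::new();
            self.pack(&mut buf);
            buf
        }

        /// Helper: rebuild a key from a raw slice returned by an iterator.
        fn from_slice(slice: &[u8]) -> CacheKey {
            CacheKey::unpack(&mut BytesMut::from(slice))
        }
    }
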
Stas Kelvich
709b778904 Show help in CLI when no arguments provided 2021-05-19 12:32:57 +03:00
Heikki Linnakangas
aa8debf4e8 Add test for a relation that's larger than 1 GB.
This isn't very exciting with the current RocksDB implementation, because
it doesn't care about the PostgreSQL 1 GB segment boundaries at all.
But I think we will care about this in the future, and more tests are
generally better anyway.
2021-05-19 09:22:17 +03:00
Heikki Linnakangas
1912546e52 Change the meaning of PageServerConf.workdir
Commit 746f667311 added the 'workdir' field and the get_*_path()
functions, with the idea that we cd into the directory at page server
startup, so that the get_*_path() functions can always return paths
relative to '.', but 'workdir' shows the original path to it. Change it
so that 'conf.workdir' is always set to '.', too, and the get_*_path()
functions include 'workdir' in the returned paths. Why? Because that
allows writing unit tests without changing the current directory.

When I was working on commit 97992226d3, I initially wrote the test so
that it changed the current working directory, just like commit 746f667311
did. But that was problematic, when I tried to add another unit test that
*also* wants to change the current working dir, because they could then
not run concurrently. In fact, they could not even run serially, unless
the current directory was carefully reset after the test. So it is better
to avoid changing the current directory in tests.
2021-05-19 08:49:16 +03:00
Heikki Linnakangas
a6178c135f Fix starting page server in non-daemonize mode.
Commit 746f667311 moved the "chdir" earlier in the startup sequence,
before daemonizing. But it forgot to remove a corresponding chdir call
later in the sequence when not in daemonize mode. As a result, if you
tried to start the pageserver without the --daemonize option, it always
failed with "No such file or directory" error.
2021-05-19 08:49:09 +03:00
Heikki Linnakangas
2ff16da6af Remove redundant cargo tests.
We have coverage for these things in the python tests; we don't need both.

test_redo_cases() was a pretty simple case that created a couple of
tables and inserted into them. We don't have another test exactly like
that, but there is enough similar stuff in the test_branch_behind and
test_pgbench tests to cover it.

test_regress() and pgbench() are redundant with the test_pg_regress and
test_pgbench python tests.

test_pageserver_two_timelines() is similar enough to the test_branch_behind
test that we don't need it. And many other tests create branches, too.
2021-05-19 08:35:12 +03:00
Stas Kelvich
21ea70c8f5 Fix test_acceptors_restarts.
In 746f667 I "optimized" wal_acceptor tests by setting the "--pageserver"
flag only on one of the wal_acceptors, which obviously hangs the system if
that wal_acceptor is down. And test_acceptors_restarts does exactly this.

Set "--pageserver" on all wal_acceptors as it was before.
2021-05-18 21:32:16 +03:00
Heikki Linnakangas
2b2d24433a Don't panic on zenith-specific "fork numbers".
These can be printed as part of log messages, if you display a RelTag
that contains one.
2021-05-18 20:49:59 +03:00
Heikki Linnakangas
66bced0f36 Fix leftover comment about async I/O 2021-05-18 20:47:35 +03:00
anastasia
9ba7bc2695 update vendor/postgres 2021-05-18 14:48:46 +03:00
Heikki Linnakangas
8624bddc79 Rearrange "use" lines, mostly by cargo fmt. 2021-05-18 09:48:42 +03:00
Stas Kelvich
45b1495f37 store remotes in main config file 2021-05-18 01:04:00 -04:00
Patrick Insinger
23be5021f8 Remote CLI command 2021-05-18 01:04:00 -04:00
Patrick Insinger
f954d5c501 pageserver - separate pagestream messages 2021-05-17 17:17:08 -04:00
Heikki Linnakangas
ab2f0ad1a8 Fix and reorganize python tests.
- The 'pageserver' fixture now sets up the repository and starts up
  the Page Server automatically. In other words, the 'pageserver'
  fixture provides a Page Server that's up and running and ready to
  use in tests.

- The 'pageserver' fixture now also creates a branch called 'empty',
  right after initializing the repository. By convention, all the
  tests start by creating a new branch off 'empty' for the test. This
  allows running all the tests against the same Page Server
  concurrently.  (I haven't tested that though. pytest doesn't
  provide an option to run tests in parallel but there are extensions
  for that.)

- Remove the 'zen_simple' fixture. Now that 'pageserver' provides
  a server that's up and running, it's pretty simple to use the
  'pageserver' and 'postgres' fixtures directly.

- Don't assume host name or ports in the tests. They now use the
  fields in the fixtures for that. That allows assigning the ports
  dynamically, making it possible to run multiple page servers in
  parallel, or running the tests in parallel with another page
  server. This commit still hard codes the Page Server's port in the
  fixture, though, so more work is needed to actually make it
  possible.

- I made some changes to the 'postgres' fixture in commit 532918e13d,
  which broke the other tests. Fix them.

- Divide the tests into two "batches" of roughly equal runtime, which
  can be run in parallel

- Merge the 'test_file' and 'test_filter' options in CircleCI config
  into one 'test_selection' option, for simplicity.
2021-05-17 20:44:00 +03:00
Heikki Linnakangas
52fbcbde0a Add junit.xml file to pytest output, to show more info in CircleCI 2021-05-17 20:44:00 +03:00
Heikki Linnakangas
e602807476 Be more lenient with branch names.
Notably, the "foo@0/12345678" syntax was not allowed, because '/' is not
a word character.
2021-05-17 20:44:00 +03:00
Eric Seppanen
398d522d88 cargo fmt 2021-05-17 09:29:58 -07:00
Stas Kelvich
746f667311 Refactor CLI and CLI<->pageserver interfaces to support remote pageserver
This patch started as an effort to support the CLI working against a remote
pageserver, but turned into a pretty big refactoring.

* CLI now does not look into repository files directly. New commands
'branch_create' and 'identify_system' were introduced into page_service to
support that.
* Branch management that was scattered between local_env and
zenith/main.rs is moved into pageserver/branches.rs. That code might fit better
in the Repository/Timeline impl, but I'll leave that for a different patch.
* All tests-related code from local_env went into integration_tests/src/lib.rs as an
extension to PostgresNode trait.
* Paths-generating functions were concentrated around corresponding config
types (LocalEnv and PageserverConf).
2021-05-17 19:17:51 +03:00
Patrick Insinger
53ea6702bd zenith - pg list relax pageserver dependency 2021-05-17 11:14:01 -04:00
Heikki Linnakangas
952424b78c Move save_decoded_record() function to Repository trait.
The function doesn't depend on the implementation of the Repository, it
only calls the public interface functions.
2021-05-17 15:16:28 +03:00
Stas Kelvich
d737c40eec copy safekeeper README from older C version to current Rust version 2021-05-17 11:43:18 +03:00
Heikki Linnakangas
532918e13d Fix branch creation at a point other than end-of-WAL
When creating a new branch, we copied all WAL from the source timeline
to the new one, and it was being picked up and digested into the
repository on first use of the timeline. Fix by copying the WAL only
up to the branch's starting point.

We should probably move the branch-creation code from the CLI to page
server itself - that's what I was starting to hack on when I noticed this
bug - but let's fix this first.

Add a regression test. To test multiple branches, enhance the python
test fixture to manage multiple running Postgres instances. Also, for
convenience, add a function to the postgres fixture to open a connection
to the server with psycopg2.
2021-05-17 10:09:34 +03:00
Heikki Linnakangas
b266c28345 Use common Lsn datatype in a few more places
This isn't just cosmetic, this also fixes one bug: the code in
parse_point_in_time() function used str::parse::<u64>() to parse the
parts of the LSN string (e.g. 0/1A2B3C4D). That's wrong, because the
LSN consists of hex digits, not base-10.
2021-05-17 10:07:42 +03:00
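
The base-10 vs. hex distinction is easy to get wrong; a hedged sketch of hex-aware parsing (illustrative only, not the actual FromStr impl for Lsn):

    /// Parse an LSN string like "0/1A2B3C4D" into a u64.
    fn parse_lsn(s: &str) -> Option<u64> {
        let mut parts = s.splitn(2, '/');
        // str::parse::<u64>() assumes base-10 and would reject or misread
        // these digits; they are hexadecimal.
        let hi = u64::from_str_radix(parts.next()?, 16).ok()?;
        let lo = u64::from_str_radix(parts.next()?, 16).ok()?;
        Some((hi << 32) | lo)
    }

For example, parse_lsn("0/1A2B3C4D") yields Some(0x1A2B3C4D), while a base-10 parse of "1A2B3C4D" simply fails.
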
Konstantin Knizhnik
04dc698d4b Add support of twophase transactions 2021-05-16 00:03:20 +03:00
Heikki Linnakangas
6b11b4250e Fix compilation with older rust version.
Commit 9ece1e863d used `slice.fill`, which isn't available until Rust
v1.50.0. I have 1.48.0 installed, so it was failing to compile for me.

We haven't really standardized on any particular Rust version, and if
there's a good feature we need in a recent version, let's bump up the
minimum requirement. But this is simple enough to work around.
2021-05-15 01:42:33 +03:00
Konstantin Knizhnik
15d1c1f8bf Update submodule version 2021-05-14 17:15:14 +03:00
Konstantin Knizhnik
9ece1e863d Compute and restore pg_xact, pg_multixact and pg_filenode.map files 2021-05-14 16:35:09 +03:00
anastasia
2870150365 bump vendor/postgres 2021-05-14 13:55:02 +03:00
anastasia
7b281900f9 Add a function to change postgresql.conf in python tests. Add test_config as an example 2021-05-14 13:55:02 +03:00
Heikki Linnakangas
97992226d3 Add some unit tests for the Repository/Timeline interface. 2021-05-14 12:44:52 +03:00
Heikki Linnakangas
270356ec38 Refactor WalRedoManager for easier testing.
Turn WalRedoManager into an abstract trait, so that it can be easily
mocked in unit tests.

One change here is that the WAL redo manager is no longer tied to a
specific zenith timeline. It didn't do anything with that information
aside from using it in the dummy datadir's name. We could use any
random string for that purpose, it's just to prevent two WAL redo
managers from stepping over each other. But this commit actually
changes things so that all timelines use the same WAL redo manager, so
that's not necessary. We will probably want to maintain a pool of WAL
redo processes in the future, but for now let's keep it simple.

In the passing, fix some comments.
2021-05-14 12:44:49 +03:00
Heikki Linnakangas
c2db828481 Create RocksDB databases under correct path.
We used to create them under .zenith/.zenith/<timelineid>. The double
.zenith was clearly not intentional. Change it to
.zenith/timelines/<timelineid>.

Fixes https://github.com/zenithdb/zenith/issues/127
2021-05-14 12:44:44 +03:00
Eric Seppanen
71e93faed7 fix endian typos in BeSer
Cut/paste error: BeSer was using the little-endian config in two places.

Add better unit tests so this can't happen again.
2021-05-13 19:04:17 -07:00
Eric Seppanen
54d52e07db .gitignore integration_tests/.zenith
It's a bit annoying that the .zenith state can show up in multiple
places, but since this is how the regression tests run if you launch
them from the git root directory, ignore this one too.
2021-05-13 13:47:22 -07:00
Heikki Linnakangas
4dccdb33ab Fix comment formatting.
The module comment should use "//!" instead of "///". Otherwise, it is
considered to apply to the *next* thing, in this case the "use" statement
that follows, not the file as a whole. "cargo fmt" revealed this by insisting
on moving the "use crate::pg_constants" line to before the comment.
2021-05-13 22:06:05 +03:00
anastasia
38c4b6f02f Move postgres code related to zenith pageserver to contrib/zenith.
- vendor/postgres changes
- Respective changes in RUST code: upload shared library, use new GUC names.
- Add contrib build to Makefile.
2021-05-13 16:23:21 +03:00
Eric Seppanen
6ff3f1b9fd don't open log files multiple times
Multiple fds writing to the same file doesn't work. One fd will
overwrite the output of the other fd. We were opening log files three
times (stdout, stderr, and slog).

The symptoms can be seen when the program panics; the final file will
have truncated or lost messages. After this change, all messages are
preserved. If panicking and logging are concurrent (and they definitely
can be), some of the messages may be interleaved in slightly
inconvenient ways.

File::try_clone() is essentially `dup` underneath, meaning the two will
share the same file offset.
2021-05-13 00:32:39 -07:00
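
A hedged sketch of the "open once, clone the handle" approach (the logging setup itself is invented for illustration):

    use std::fs::{File, OpenOptions};
    use std::io;

    fn open_log_handles(path: &str) -> io::Result<(File, File)> {
        // Open the log file exactly once...
        let log = OpenOptions::new().create(true).append(true).open(path)?;
        // ...and dup the fd for the second consumer. Both handles share one
        // file offset, so they append after each other instead of
        // overwriting each other's output.
        let log_clone = log.try_clone()?;
        Ok((log, log_clone))
    }
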
Patrick Insinger
4c5e23d014 pageserver - fix ParameterStatus write call 2021-05-12 20:59:04 -04:00
Patrick Insinger
99d80aba52 use pageserver for pg list command 2021-05-12 12:34:03 +03:00
Konstantin Knizhnik
2f2dff4c8d Merge with main branch 2021-05-12 10:46:01 +03:00
Konstantin Knizhnik
22e7fcbf2d Handle visibility map updates in WAL redo 2021-05-12 10:38:43 +03:00
Patrick Insinger
372617a4f5 test_runner - pgrep remove -c arg
macOS doesn't support it
2021-05-11 17:52:22 -04:00
Patrick Insinger
49d1921a28 page_server - add python api tests 2021-05-11 14:16:22 -04:00
Patrick Insinger
d8e509d29e page_service - use anyhow for error handling 2021-05-11 14:11:10 -04:00
Patrick Insinger
d5bfe84d9e cargo fmt 2021-05-11 12:35:09 -04:00
Arseny Sher
8fff26ad49 Make Repository API return abstract dyn Timeline.
+ minor cargo fmt cleanup
2021-05-11 15:27:23 +03:00
Heikki Linnakangas
5f4e32f505 Require valid WAL streaming point.
If timeline doesn't have a valid "last valid LSN", refuse WAL streaming.
The previous behavior was to start streaming from the very beginning of
time. That was needed to support bootstrapping the page server with no
data at all (see commit bd606ab37a), but we no longer do that.
2021-05-11 11:12:14 +03:00
Heikki Linnakangas
fb71c85a79 Implement std::fmt::Display for RelTag, for debug messages. 2021-05-11 10:55:51 +03:00
Heikki Linnakangas
ff76226a35 Remove obsolete mgmt-console.
It has served its purpose. A new management console is in the works. The
old code is available in git history if we need anything from it.
2021-05-11 10:54:41 +03:00
Eric Seppanen
6e748147b6 test_runner: fix relative import syntax
Somehow I never learned this part correctly: relative imports use the
"from . import file" syntax for a file sitting in the same directory.

This error wasn't terribly obvious, but the Pylance linter is yelling at
me so I'll fix it now before anyone else notices.
2021-05-11 00:09:39 -07:00
Eric Seppanen
e5df42feef add workspace_hack dependency to zenith_utils
I didn't think this mattered, but it does: if you add a dependency to
zenith_utils, but forget to request a feature you need, the crate will
build from the workspace root, but not by itself.

It's probably better to pull in the whole dependency tree.

This leaves one problem unsolved: the missing feature above will now be
a latent bug. If that feature gets removed later by other crates, and
then the workspace_hack Cargo.toml is updated, this missing feature will
become a build failure.
2021-05-10 18:21:45 -07:00
Eric Seppanen
73647e5715 wal_service: fix NodeId order/endian issues
Add fixes suggested in code review.

In a previous commit, I changed the NodeId field order and types to try
to preserve the exact serialization that was happening. Unfortunately,
that serialization was incorrect and the original struct was mostly
correct.

Change uuid to be a [u8; 16] as it was intended to be a byte array; that
will clearly indicate to serde serializers that no endian swaps will
ever be needed.
2021-05-10 16:21:05 -07:00
Eric Seppanen
95db33f3f9 wal_service: comment cleanup 2021-05-10 16:21:05 -07:00
Eric Seppanen
bace19ffbe wal_service: switch to Lsn type
Replace XLogRecPtr with Lsn in wal_service.rs .

This removes the last use of XLogSegmentOffset and XLByteToSeg, so
delete them. (replaced by Lsn::segment_offset and Lsn::segment_number.)
2021-05-10 16:21:05 -07:00
Eric Seppanen
60d66267a9 add serde support to Lsn type
A serialized Lsn and a serialized u64 should be identical.
2021-05-10 16:21:05 -07:00
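
One way to guarantee that property is a transparent newtype; a hedged sketch (the derive list is an assumption):

    use serde::{Deserialize, Serialize};

    /// `#[serde(transparent)]` makes the newtype serialize exactly like the
    /// inner u64, so the on-the-wire bytes do not change.
    #[derive(Serialize, Deserialize, Clone, Copy, PartialEq, Eq, Debug)]
    #[serde(transparent)]
    pub struct Lsn(pub u64);
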
Eric Seppanen
294320e6a8 wal_service: drop repr(C)
The C memory representation is only needed if we want to guarantee the
same memory layout as some other program. Since we're using serde to
serialize these data structures, we can let the compiler do what it
wants.
2021-05-10 16:21:05 -07:00
Eric Seppanen
28b4d9abb3 wal_service: use anyhow for error handling
We may eventually want precise error types for some of this, but
anyhow::Error is a lot easier than trying to force io::Error.
2021-05-10 16:21:05 -07:00
Eric Seppanen
8d8bc304c1 work around NodeId endian issues
Instead of playing games during serialize/deserialize, just treat
NodeId::term as an 8-byte array instead of a u64.
2021-05-10 16:21:05 -07:00
Eric Seppanen
4788248e11 wal_service: remove manual serialization code
Commit to serde for serialization of data structures.
2021-05-10 16:21:05 -07:00
Eric Seppanen
0cbb3798da try using serde to do all the serialization in wal_service
This version validates on every call that our result is exactly the same
as the previous result.

NodeId is a strange corner case: one field is serialized little-endian
and one field is serialized big-endian. Hopefully we can fix that in the
future.
2021-05-10 16:21:05 -07:00
Eric Seppanen
36c12247b9 add bin_ser module
This module adds two traits that implement bincode-based serialization.
BeSer implements methods for big-endian encoding/decoding.
LeSer implements methods for little-endian encoding/decoding.

Right now, the BeSer and LeSer methods have the same names, meaning you
can't `use` them both at the same time. This is intended to be a safety
mechanism: mixing big-endian and little-endian encoding in the same file
is error-prone. There are ways around this, but the easiest fix is to
put the big-endian code and little-endian code in different files or
submodules.
2021-05-10 16:21:05 -07:00
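
A hedged sketch of what such a trait can look like on top of bincode (the method names and exact options below are assumptions, not necessarily the real bin_ser API; LeSer would be the same with with_little_endian()):

    use bincode::Options;
    use serde::{de::DeserializeOwned, Serialize};

    /// Big-endian bincode helpers.
    pub trait BeSer: Serialize + DeserializeOwned + Sized {
        fn ser(&self) -> bincode::Result<Vec<u8>> {
            bincode::DefaultOptions::new()
                .with_big_endian()
                .with_fixint_encoding()
                .serialize(self)
        }

        fn des(buf: &[u8]) -> bincode::Result<Self> {
            bincode::DefaultOptions::new()
                .with_big_endian()
                .with_fixint_encoding()
                .deserialize(buf)
        }
    }

    // Blanket impl: any serde-capable type gets the helpers.
    impl<T: Serialize + DeserializeOwned> BeSer for T {}
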
Eric Seppanen
1767208563 remove tokio-postgres from dependencies 2021-05-10 15:24:55 -07:00
Eric Seppanen
d25656797c switch pageserver to blocking postgres interface 2021-05-10 15:24:55 -07:00
Eric Seppanen
6c825dcbaa switch walkeeper over to new postgres blocking interface
This is a big async -> sync conversion. Most of it is a pretty
straightforward conversion of removing `async` and `.await` and swapping
in the right std modules.

I didn't find a thread-blocking version of `Notify` so I wrote one, and
then realized that there was already a Mutex being used there, so I
deleted my Notify and just used Condvar instead.

There is one part that seems odd to me: in `handle_start_replication`
there is a place where the previous code was doing a non-blocking read;
there is no TcpStream::try_read() so I fell back on manually flipping
the socket to non-blocking mode and then back again. This seems pretty
gross, but I'm not sure exactly what to replace this with: a background
thread? Extract the fd and run select() on it to first test if it's
readable?
2021-05-10 15:24:55 -07:00
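
The manual mode-flipping mentioned above looks roughly like this (a sketch, not the actual handle_start_replication code):

    use std::io::{ErrorKind, Read};
    use std::net::TcpStream;

    /// Poll for input without blocking, by temporarily switching the socket
    /// to non-blocking mode and back.
    fn try_read(stream: &mut TcpStream, buf: &mut [u8]) -> std::io::Result<Option<usize>> {
        stream.set_nonblocking(true)?;
        let result = match stream.read(buf) {
            Ok(n) => Ok(Some(n)),
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => Ok(None),
            Err(e) => Err(e),
        };
        stream.set_nonblocking(false)?;
        result
    }
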
Eric Seppanen
4b46693c81 adapt to new upstream tokio-postgres replication interface
Switch over to a newer version of rust-postgres PR752. A few
minor changes are required:
- PgLsn::UNDEFINED -> PgLsn::from(0)
- PgTimestamp -> SystemTime
2021-05-10 15:24:55 -07:00
Eric Seppanen
8952066ecb circleci: Save the postgres logs as artifacts 2021-05-09 22:20:58 -07:00
Eric Seppanen
d26b76fe7c cargo fmt 2021-05-07 13:11:44 -07:00
Eric Seppanen
df5a55c445 add workspace_hack crate
Our builds can be a little inconsistent, because Cargo doesn't deal well
with workspaces where there are multiple crates which have different
dependencies that select different features. As a workaround, copy what
other big rust projects do: add a workspace_hack crate.

This crate just pins down a set of dependencies and features that
satisfies all of the workspace crates.

The benefits are:
- running `cargo build` from one of the workspace subdirectories now
  works without rebuilding anything.
- running `cargo install` works (without rebuilding anything).
- making small dependency changes is much less likely to trigger large
  dependency rebuilds.
2021-05-07 13:08:31 -07:00
Heikki Linnakangas
e5e5c3e067 Tidy up the parse_relfilename function.
A few things that Eric commented on at PR #96:

- Use thiserror to simplify the implementation of FilePathError
- Add unit tests
- Fix a few complaints from clippy
2021-05-07 11:01:34 +03:00
Heikki Linnakangas
b7575582b8 Add comments to the Repository/Timeline traits.
Let's try to have comments on every public function. This doesn't quite
get us there yet, but close.
2021-05-06 23:02:11 +03:00
Heikki Linnakangas
77fd24b950 Fix a few clippy warnings.
By either accepting clippy's suggestion, or by adding an 'allow'
directive to silence it.
2021-05-06 21:57:13 +03:00
Heikki Linnakangas
61af9bb889 Move a few functions that have been copy-pasted around to shared module. 2021-05-06 21:57:10 +03:00
Heikki Linnakangas
a68f60415b Change a few remaining functions to use the Lsn datatype for LSNs. 2021-05-06 21:57:07 +03:00
Heikki Linnakangas
e7ca580922 Improve comments. 2021-05-06 21:57:04 +03:00
Heikki Linnakangas
33d126ecbe Tidy up usage of a few constants from PostgreSQL headers. 2021-05-06 21:57:01 +03:00
anastasia
15db0d1d6f refactor walreciever and restore_local_repo 2021-05-06 12:58:08 +03:00
Heikki Linnakangas
29f122009a Don't restart WAL streaming in the middle of a record.
I think this was changed inadvertently by commit 2c308da4d2. Change it
back.

Fixes https://github.com/zenithdb/zenith/issues/98
2021-05-06 11:34:28 +03:00
Heikki Linnakangas
bf0a0cb55d Remove unused struct 2021-05-05 20:14:09 +03:00
Heikki Linnakangas
0fe5abadf5 Remove dead code around tracking first valid LSN.
We should track the range of LSNs that are valid in a GetPage@LSN request
somehow, but currently this is just dead code. Remove, until we get around
to actually implement it.

https://github.com/zenithdb/zenith/issues/95 tracks that.
2021-05-05 17:29:10 +03:00
anastasia
1591f058c6 implement Debug for Lsn type 2021-05-05 16:38:32 +03:00
Heikki Linnakangas
efa4ecaa7c Reduce the duration of the pgbench test.
We'll want to run it for longer when doing benchmarking, but 5 seconds
should be enough to tease out any basic bugs.
2021-05-05 15:47:13 +03:00
Heikki Linnakangas
8e57c2e413 Provide more context to a panic.
I just bumped into this panic, but couldn't reproduce. Not sure what
happened, but let's provide more context.
2021-05-05 15:47:11 +03:00
Heikki Linnakangas
4dd63821bd Improve trace log messages in page server 2021-05-05 10:39:28 +03:00
Heikki Linnakangas
eeec1a3dcb Refactor the way truncations are handled.
Currently, truncation is implemented in the RocksDB repository by storing
a special sentinel entry for each page that was truncated away. Hide that
implementation detail better in the abstract Repository interface, so
that caller doesn't need to construct the special sentinel WAL record.

While we're at it, refactor the CacheEntryContent struct to an enum.
2021-05-05 10:39:28 +03:00
Heikki Linnakangas
b484b896b6 Refactor the functionality of page_cache.rs.
This moves things around:

- The PageCache is split into two structs: Repository and Timeline. A
  Repository holds multiple Timelines. In order to get a page version,
  you must first get a reference to the Repository, then the Timeline
  in the repository, and finally call the get_page_at_lsn() function
  on the Timeline object. This sounds complicated, but because each
  connection from a compute node, and each WAL receiver, only deals
  with one timeline at a time, the callers can get the reference to
  the Timeline object once and hold onto it. The Timeline corresponds
  most closely to the old PageCache object.

- Repository and Timeline are now abstract traits, so that we can
  support multiple implementations. I don't actually expect us to have
  multiple implementations for long. We have the RocksDB
  implementation now, but as soon as we have a different
  implementation that's usable, I expect that we will retire the
  RocksDB implementation. But I think this abstraction works as good
  documentation in any case: it's now easier to see what the interface
  for storing and loading pages from the repository is, by looking at
  the Repository/Timeline traits. The abstract traits are in
  repository.rs, and the RocksDB implementation of them is in
  repository/rocksdb.rs.

- page_cache.rs is now a "switchboard" to get a handle to the
  repository. Currently, the page server can only handle one
  repository at a time, so there isn't much there, but in the future
  we might do multi-tenancy there.
2021-05-05 10:37:36 +03:00
Heikki Linnakangas
e5413be5fa Update 'postgres' submodule to latest version. 2021-05-05 00:37:28 +03:00
Eric Seppanen
b9c0d22045 circleci: shrink python tracebacks
Mostly we're not testing python code, so verbose python tracebacks are
unhelpful. Add --tb=short to the pytest args to cut down on the noise.

To override this during testing, set the "extra_params" parameter on the
circleci job to "--tb=auto" or "--tb=long".
2021-05-04 12:43:36 -07:00
Eric Seppanen
2e0d45d092 Switch to upstream rust-s3
The local fork of rust-s3 has some code to support Google Cloud, but
that PR no longer applies upstream, and will need significant changes
before it can be re-submitted.

In the meantime, we might as well just use the most similar upstream
release. The benefit of switching is that it fixes a feature-resolution
bug that was causing us to build 24 more crates than needed (mostly
async-std and its dependencies).
2021-05-04 12:02:00 -07:00
Eric Seppanen
86932c20eb circleci: disable imperfect match on the rust cache
The cache keeps growing as stale packages accumulate; until we can
figure out a better strategy, just start over every time Cargo.lock
changes.
2021-05-03 23:53:59 -07:00
Eric Seppanen
f5b45a172c circleci: flush caches
The rust cache is growing dramatically. Change the cache key to start
over.

The weird "v98" was something I'd intended to reset before landing the
circleci config. Do the sane thing and start over at v01. The intent is
that we just increment the number each time something gets broken.
2021-05-03 23:51:10 -07:00
Eric Seppanen
e6a0987182 python fixtures: enable "zenith pageserver stop"
Replaces "killall pageserver", which doesn't work if you don't have the
psmisc package installed.
2021-05-03 23:32:06 -07:00
Eric Seppanen
aa64391265 fix clippy warning about redundant clone 2021-05-03 23:20:51 -07:00
Eric Seppanen
aac913f9dc use nix kill instead of spawning a process
Since we are now calling the syscall directly, read_pidfile can now
parse an integer.

We also verify the pid is >= 1, because calling kill on 0 or negative
values goes straight to crazytown.
2021-05-03 23:20:51 -07:00
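
A hedged sketch of the direct-syscall version (the function shape and error handling are illustrative):

    use anyhow::ensure;
    use nix::sys::signal::{kill, Signal};
    use nix::unistd::Pid;

    fn stop_process(pid: i32) -> anyhow::Result<()> {
        // kill(0) signals the whole process group, and negative pids have
        // their own special meanings, so insist on a real pid.
        ensure!(pid >= 1, "invalid pid read from pidfile: {}", pid);
        kill(Pid::from_raw(pid), Signal::SIGTERM)?;
        Ok(())
    }
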
Eric Seppanen
4e2e5bb4e6 implement "zenith pageserver stop"
Fixes #89.
2021-05-03 19:54:13 -07:00
Eric Seppanen
3e15a5c325 test_runner fixtures: feedback from review
I forgot to add these fixes before merging:
- typo in Postgres doc-comment
- add 'wal_acceptor' to the list of conflicting processes.
2021-05-03 18:46:50 -07:00
Eric Seppanen
ce646ea845 use tokio::try_join instead of futures::try_join
We don't use the `futures` crate much. Remove one of only two references
to it (tokio has the identical macro).
2021-05-03 18:46:10 -07:00
Eric Seppanen
effcabb590 circleci: do builds on a bigger container.
default(medium): 2 CPUs, 4GB RAM.
xlarge: 8 CPUs, 16GB RAM.

Some build jobs are getting killed with signal 9. I'm guessing that this
is probably an OOM condition...
2021-05-03 14:29:29 -07:00
Eric Seppanen
a08dfb1c2c gitignore .zenith only in git root
I found I had a few other .zenith directories hanging around in odd
places. I doubt we intended those directories to collect in multiple
locations, so only hide the one in the git root directory.
2021-05-03 14:07:46 -07:00
Eric Seppanen
a3818dee58 pin dependencies to versions
If there isn't any version specified for a dependency crate, Cargo may
choose a newer version. This could happen when Cargo.lock is updated
("cargo update") but can also happen unexpectedly when adding or
changing other dependencies. This can allow API-breaking changes to be
picked up, breaking the build.

To prevent this, specify versions for all dependencies. Cargo is still
allowed to pick newer versions that are (hopefully) non-breaking, by
analyzing the semver version number.

There are two special cases here:

1. serde_derive::{Serialize, Deserialize} isn't really used any more. It
was only a separate crate in the past because of compiler limitations.
Nowadays, people turn on the "derive" feature of the serde crate and
use serde::{Serialize, Deserialize}.

2. parse_duration is unmaintained and has an open security issue. (gh
iss. 87) That issue probably isn't critical for us because of where we
use that crate, but it's probably still better to pin the version so we
can't get hit with an API-breaking change at an awkward time.
2021-05-03 14:02:10 -07:00
Eric Seppanen
219cbe2d9c pytest: improve documentation and protect against wrong versions
It's quite hard to get python2 to exit gracefully when the code was
intended for python3, because the interpreter will SyntaxError before
running a single line of code. Thankfully, the pytest developers put a
version check in their .ini config, so that should gracefully handle
both wrong-pytest-version and wrong-python-version.

Also document the woes of trying to run the pytest version shipped by
e.g. Debian or Ubuntu.
2021-05-03 11:31:32 -07:00
Eric Seppanen
129f85f652 circleci: shallow clone the postgres repo
Fetching the postgres submodule is one of the more expensive steps of
the build. Doing a shallow clone ("--depth 1") should save some time and
a lot of network bandwidth.
2021-05-03 11:31:32 -07:00
Eric Seppanen
790f1b05c6 Add circleCI build & test jobs
This does the postgres & rust builds, caching the results, and preserves
its outputs in a "workspace" for downstream test jobs (which can run in
parallel).

Pytest jobs are parameterized, so adding new pytest-based tests requires
only adding a new job to the "workflows" section at the end.

This could use some optimization:
- The "apt-get install" step is quite slow.
- The rust build step will always happen, even if only unrelated changes
  are present (e.g. modified a python test file)
- Saving/restoring the rust cache (/target) is very slow (it contains
  1.3GB of data)
- Saving the workspace is very slow.
- The "install" step is ugly; postgres and rust artifacts could take a
  much better form.
2021-05-03 11:31:32 -07:00
Eric Seppanen
37cd662ab2 add pytest integration tests
Use pytest to manage background services, paths, and environment
variables.

Benefits:
- Tests are a little easier to write.
- Cleanup is more reliable. You can CTRL-C a test and it will still shut
  down gracefully. If you manually start a conflicting process, the test
  fixtures will detect this and abort at startup.
- Don't need to worry about remembering '--test-threads=1'
- Output of sub-processes can be captured to files.
- Test fixtures configure everything to operate under a single test
  output directory, making it easier to capture logs in CI.
- Detects all the necessary paths if run from the git root, but can also
  run from arbitrary paths by setting environment variables.

There is also a deliberately broken test (test_broken.py) that can be
used to test whether the test fixtures properly clean up after
themselves. It won't run by default; the comment at the top explains how
to enable it.
2021-05-03 11:31:32 -07:00
Eric Seppanen
277a4d4582 allow zenith to run using arbitrary paths
Remove the check that enforces running from the git root directory.
Discover the zenith binary path from current_exe().
Look for postgres in $POSTGRES_BIN or $CWD/tmp_install.
2021-05-03 11:31:32 -07:00
anastasia
1cdeba9db7 [issue #18] log module name and position in the file 2021-05-03 15:17:51 +03:00
Eric Seppanen
7d104e5660 update dependencies
Running 'cargo update' happens to synchronize a few transitive
dependencies, allowing us to build slightly fewer crates.
2021-05-02 16:01:18 -07:00
Eric Seppanen
49530145d8 cargo fmt 2021-05-02 11:03:58 -07:00
Arseny Sher
da96965897 Remove assert(is_ok) before unwrap.
It only hides the error.
2021-05-02 17:19:09 +03:00
Stas Kelvich
3762b53986 show branch name in "zenith pg list" 2021-05-01 03:32:48 +03:00
Konstantin Knizhnik
9ad99152b8 Merge pull request #84 from zenithdb/embedded_wal_proposer
Enable wal proposer test
2021-04-30 19:50:27 +03:00
Konstantin Knizhnik
651a8139f5 Fix bug in transaction_id_set_status_bit 2021-04-30 19:24:00 +03:00
Konstantin Knizhnik
f82c3eb5e2 Enable wal proposer test 2021-04-30 15:18:32 +03:00
Konstantin Knizhnik
eea6f0898e Restore CLOG from snapshot 2021-04-30 14:22:47 +03:00
Heikki Linnakangas
086c0ad829 Remove unused 'apply_pending' field. 2021-04-30 12:44:06 +03:00
Eric Seppanen
6c7ea82a61 Disable test_embedded_wal_proposer without compiler warning 2021-04-29 15:10:04 -07:00
Eric Seppanen
b77597bd99 remove old Cargo.lock files
When using a cargo workspace (defined by the root Cargo.toml), there is
one shared Cargo.lock file at the root.
2021-04-29 10:31:01 -07:00
Konstantin Knizhnik
68aa2febc9 Disable test_embedded_wal_proposer test 2021-04-29 19:27:17 +03:00
anastasia
1369145e83 code cleanup 2021-04-29 18:41:42 +03:00
anastasia
b49164a1d4 cargo fmt 2021-04-29 18:41:42 +03:00
anastasia
e7b112aacc Refactor pg_constants. Move them to postgres_ffi/ 2021-04-29 18:41:42 +03:00
Konstantin Knizhnik
f491a22d85 Add test for embedded WAL acceptor 2021-04-29 14:41:13 +03:00
Konstantin Knizhnik
26115818b7 Test for embedded wal acceptor 2021-04-29 10:48:56 +03:00
Heikki Linnakangas
158d1bbbb4 Don't create temp dir under current dir in 'zenith init'.
Previously, 'zenith init' would initialize a PostgreSQL cluster with
"initdb -D tmp", creating the temp cluster under current directory.
It moves the 'tmp' directory under the correct snapshot directory in
the zenith repository after that, but if something goes wrong in initdb,
or in the steps that follow, it could leave behind the 'tmp' directory
under current dir. Better to create the temporary directory under the
repository directory to begin with, as ".zenith/tmp".
2021-04-29 09:02:51 +03:00
Eric Seppanen
6a43b293ad fix a few CLI typos 2021-04-28 14:32:58 -07:00
Eric Seppanen
69df9f10ed limit github actions to main branch 2021-04-28 12:30:06 -07:00
Eric Seppanen
61aee52a90 split github jobs
- Move notifications to a separate job, run only on push.
- Build and test will execute on [pull_request, push].
- Use actions-rs/toolchain@v1 to get the rust toolchain.
- Add matrix hook to allow multiple toolchain versions in the future
  (now set to [stable]).
- Run all the cargo tests, not just test_pageserver
2021-04-28 12:14:03 -07:00
Eric Seppanen
975b2d12dc cargo fmt 2021-04-28 10:01:58 -07:00
anastasia
ab61ce2267 Fix merge conflict, add more comments to test_acceptors_unavailability 2021-04-28 17:24:31 +03:00
Konstantin Knizhnik
14168c7aa7 Increase downtime timeout to avoid address already in use error and fix checking for elapsed time 2021-04-28 17:24:31 +03:00
anastasia
7a8501d12f [issue #73] fix race in test_acceptors_unavailability test 2021-04-28 17:24:31 +03:00
anastasia
34d55b09a3 [issue #73] fix wal_acceptor merge problem caused by 3fea78d6 2021-04-28 17:24:31 +03:00
Heikki Linnakangas
41a3772e90 Replace pgbuild.sh with a Makefile
This allows building both Zenith and PostgreSQL in one command. The
command is 'make'.

Reviewed-by: Arseny Sher <sher-ars@yandex.ru>
2021-04-28 16:54:45 +03:00
Konstantin Knizhnik
bbec5a13bd Extract appname from startup packet 2021-04-28 15:26:08 +03:00
anastasia
421d586953 code cleanup for XLogRecord decoding 2021-04-28 13:56:27 +03:00
anastasia
ef37eb96b9 refactor XLogRecord reading 2021-04-28 13:56:27 +03:00
anastasia
d311f708b6 handle subtrans in COMMIT/ABORT records 2021-04-28 13:56:27 +03:00
Heikki Linnakangas
c7f54af1f1 Refactor page_cache <-> walredo interface.
Make the caller of request_redo() responsible for gathering the WAL records
to redo, and for storing the reconstructed page image back in the page
cache. This leaves the WAL redo manager purely responsible for dealing with
the postgres child process, removing its dependency on the PageCache.
2021-04-27 21:43:56 +03:00
Heikki Linnakangas
44a85d9176 Put back 'pgbuild.sh', which was removed accidentally.
Oops, I deleted it accidentally in commit 96beffb3c5. Put it back.
2021-04-27 15:33:38 +03:00
Heikki Linnakangas
96beffb3c5 Add tests for the Lsn::fetch_max function. 2021-04-27 13:43:39 +03:00
Heikki Linnakangas
cff671c1bd Remove duplicated LSN fields from the page cache.
Having multiple copies of the same values is a source of confusion.
Commit da9bf5dc63 fixed one race condition caused by that, for example.
See also discussion at
https://github.com/zenithdb/zenith/issues/57#issuecomment-824393470

This changes SeqWait.advance() to return the old number, and not panic if
you try to move the value backwards. The caller should check for that and
act accordingly.
2021-04-27 10:32:39 +03:00
Eric Seppanen
4acdcbe90f clippy cleanup #3
Fix issues raised by clippy. Mostly trivial ones, though some allow
4-5 lines of code to be reduced to 1.
2021-04-26 12:35:35 -07:00
Eric Seppanen
fdf6829de5 cargo fmt 2021-04-26 09:36:22 -07:00
anastasia
b361558a8a fix typo in transaction replay code 2021-04-26 18:35:26 +03:00
Konstantin Knizhnik
c59830fd01 Do not restart wal-redo-postgres 2021-04-26 17:57:29 +03:00
Konstantin Knizhnik
636194406f Dump log files in case of regress_tests failure 2021-04-26 17:04:26 +03:00
Konstantin Knizhnik
3b09a74f58 Implement offloading of old WAL files to S3 in walkeeper 2021-04-26 16:23:00 +03:00
Heikki Linnakangas
f617115467 Remove obsolete comment on async usage in the page cache 2021-04-26 14:12:57 +03:00
Heikki Linnakangas
4f529b7d4a Remove unused function. 2021-04-26 13:54:06 +03:00
Heikki Linnakangas
bc652e965e Save old 'async' version of SeqWait, in case we need it later.
It is currently unused, and is not built as part of 'cargo build', but
seems like a shame to throw it away completely.
2021-04-26 13:30:10 +03:00
Heikki Linnakangas
3b9e7fc5e6 Use explicit threads.
Remove 'async' usage as much as feasible. Async code is harder to debug,
and mixing async and non-async code is a recipe for confusion and bugs.

There are a couple of exceptions:

- The code in walredo.rs, which needs to read and write to the child
  process simultaneously, still uses async. It's more convenient there.
  The 'async' usage is carefully limited to just the functions that
  communicate with the child process.

- Code in walreceiver.rs that uses tokio-postgres to do streaming
  replication. We have to use async there, because tokio-postgres is
  async. Most rust-postgres functionality has non-async wrappers, but
  not the new replication client code. The async usage is very limited
  here, too: we use just block_on to call the tokio-postgres functions.

The code in 'page_service.rs' now launches a dedicated thread for each
connection.

This replaces tokio::sync::channel with std::sync::mpsc in
'seqwait.rs', to make that non-async. It's not a drop-in replacement,
though: std::sync::mpsc doesn't support multiple consumers, so we cannot
share a channel between multiple waiters. So this removes the code to
check if an existing channel can be reused, and creates a new one for
each waiter. That created another problem: BTreeMap cannot hold
duplicates, so I replaced that with BinaryHeap.

Similarly, the tokio::{mpsc, oneshot} channels used between WAL redo
manager and PageCache are replaced with std::sync::mpsc. (There is no
separate 'oneshot' channel in the standard library.)

Fixes github issue #58, and coincidentally also issue #66.
2021-04-26 13:07:51 +03:00
Konstantin Knizhnik
5292b502f3 Check regression test exit status 2021-04-26 11:06:31 +03:00
Konstantin Knizhnik
abcecc992e [refer #67] Replace File.write with File.write_all 2021-04-26 09:30:03 +03:00
Eric Seppanen
96b6f350a7 add test cases for Lsn math and AtomicLsn 2021-04-25 19:37:02 -07:00
Eric Seppanen
648755a25e add Lsn::block_offset, remaining_in_block, calc_padding
Replace open-coded math with member fns.
2021-04-25 19:37:02 -07:00
Eric Seppanen
1c775bdcac Drop LSNs from PageCacheStats
There's no clear way to sum LSNs across timelines, so just remove them
for now.
2021-04-25 19:37:02 -07:00
Eric Seppanen
07d0241076 add AtomicLsn
AtomicLsn is a wrapper around AtomicU64 that has load() and store()
members that are cheap (on x86, anyway) and can be safely used in any
context.

This commit uses AtomicLsn in the page cache, and fixes up some
downstream code that manually implemented LSN formatting.

There's also a bugfix to the logging in wait_lsn, which prints the
wrong lsn value.
2021-04-25 19:37:02 -07:00
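
A hedged sketch of the wrapper described above (the orderings and the Lsn newtype shown here are assumptions):

    use std::sync::atomic::{AtomicU64, Ordering};

    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    pub struct Lsn(pub u64);

    /// AtomicU64 with an Lsn-typed interface; load/store are plain atomic
    /// ops, so they are cheap and safe to call from any context.
    pub struct AtomicLsn(AtomicU64);

    impl AtomicLsn {
        pub fn new(lsn: Lsn) -> AtomicLsn {
            AtomicLsn(AtomicU64::new(lsn.0))
        }

        pub fn load(&self) -> Lsn {
            Lsn(self.0.load(Ordering::Acquire))
        }

        pub fn store(&self, lsn: Lsn) {
            self.0.store(lsn.0, Ordering::Release);
        }
    }
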
Eric Seppanen
d760446053 remove Lsn::sub in favor of sub_checked
There is only one place doing subtraction, and it had a manually
implemented check.
2021-04-25 19:37:02 -07:00
Eric Seppanen
01e239afa3 apply Lsn type everywhere
Use the `Lsn` type everywhere that I can find u64 being used to
represent an LSN.
2021-04-25 19:37:02 -07:00
Eric Seppanen
f62ce4bcf7 make seqwait generic
SeqWait can use any type that is Ord + Debug + Copy. Debug is not
strictly necessary, but allows us to keep the panic message if a caller
wants the sequence number to go backwards.
2021-04-25 19:37:02 -07:00
Eric Seppanen
3d3eb0ed16 add Lsn type
This type is a zero-cost wrapper for a u64, meant to help code
communicate with precision what that value means.

It implements Display and Debug. Display "{}" will format as
"1234ABCD:5678CDEF" while Debug will format as Lsn{1234567890}.
2021-04-25 19:37:02 -07:00
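
A hedged sketch matching the formats described above (zero-padding and other details are assumptions):

    use std::fmt;

    /// Zero-cost wrapper around a u64 WAL position.
    #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
    pub struct Lsn(pub u64);

    impl fmt::Display for Lsn {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            // e.g. "1234ABCD:5678CDEF": high and low 32 bits in hex.
            write!(f, "{:X}:{:X}", self.0 >> 32, self.0 & 0xffff_ffff)
        }
    }

    impl fmt::Debug for Lsn {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            // e.g. "Lsn{1234567890}": the raw u64 value.
            write!(f, "Lsn{{{}}}", self.0)
        }
    }
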
Konstantin Knizhnik
da9bf5dc63 Store atomic last_valid_lsn after seqwait_lsn.advance 2021-04-25 14:11:31 +03:00
Eric Seppanen
1cb9b5523b cargo fmt 2021-04-24 16:03:44 -07:00
Konstantin Knizhnik
968cd8f20c Do not delete versions in GC 2021-04-24 23:52:50 +03:00
Konstantin Knizhnik
3e007b0eb9 Do not delete versions in GC 2021-04-24 22:32:22 +03:00
Heikki Linnakangas
5e0cc89de8 Re-group functions in page_cache.rs, and add comments. 2021-04-24 17:54:31 +03:00
Heikki Linnakangas
0fc05569e0 Improve comments in page_cache.rs.
Explain the mix of async and other functions in the page cache.
2021-04-24 17:54:28 +03:00
Heikki Linnakangas
021462da3e Refactor put_wal_record() so that it doesn't need to be marked 'async'.
It was only marked as async because it calls relsize_get(), but
relsize_get() will in fact never block when it's called with the max
LSN value, like put_wal_record() does. Refactor to avoid marking
put_wal_record() as 'async'.
2021-04-24 17:54:26 +03:00
Heikki Linnakangas
93d7d2ae2a Refactor pagecache <-> Wal redo communication
After the rocksdb patch (commit 6aa38d3f7d), the CacheEntry struct was
used only momentarily in the communication between the page_cache and
the walredo modules. It was in fact not stored in any cache anymore.
For clarity, refactor the communication.

There is now a WalRedoManager struct, with a `request_redo` function,
that can be used to request WAL replay of a particular page. It sends
a request to a queue like before, but the queue has been replaced with
tokio::sync::mpsc. Previously, the resulting page image was stored
directly in the CacheEntry, and the requestor was notified using a
condition variable. Now, the requestor includes a 'oneshot' channel in
the request, and the WAL redo manager sends the response there.
2021-04-24 12:24:04 +03:00
Eric Seppanen
fe79082e29 require documentation in seqwait.rs 2021-04-23 15:01:22 -07:00
Eric Seppanen
6dfe196c40 add .zenith to .gitignore 2021-04-23 14:19:24 -07:00
Eric Seppanen
8beaf76c85 SeqWait: don't do wakeups under the lock
Clippy pointed out that `drop(waiters)` didn't do anything, because
there was a misplaced ";" causing `waiters` to be a unit type `()`.

This change makes it do what was intended: the lock should be dropped
first, then the wakeups should be processed.
2021-04-23 14:16:34 -07:00
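
The trap looks roughly like this (illustrative only, not the actual seqwait.rs code):

    use std::sync::Mutex;

    fn wake_all(shared: &Mutex<Vec<String>>) {
        let mut guard = shared.lock().unwrap();

        // Buggy form: the trailing ';' inside the block makes `waiters` the
        // unit type `()`, so a later `drop(waiters)` releases nothing:
        //
        //     let waiters = { std::mem::take(&mut *guard); };
        //
        // Intended form: take the waiters out, release the lock, and only
        // then do the (potentially slow) wakeups.
        let waiters = std::mem::take(&mut *guard);
        drop(guard);

        for waiter in waiters {
            println!("waking {}", waiter);
        }
    }
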
Konstantin Knizhnik
499b4f7eba Log garbage collection statistics 2021-04-23 18:02:58 +03:00
Konstantin Knizhnik
52ee3a2bac Support CREATE DATABASE command 2021-04-23 17:03:56 +03:00
177 changed files with 21945 additions and 20789 deletions

.circleci/config.yml (new file, 288 lines)

@@ -0,0 +1,288 @@
version: 2.1
orbs:
python: circleci/python@1.4.0
executors:
zenith-build-executor:
resource_class: xlarge
docker:
- image: cimg/rust:1.51.0
jobs:
# A job to build postgres
build-postgres:
executor: zenith-build-executor
steps:
# Checkout the git repo (circleci doesn't have a flag to enable submodules here)
- checkout
# Grab the postgres git revision to build a cache key.
# Note this works even though the submodule hasn't been checkout out yet.
- run:
name: Get postgres cache key
command: |
git rev-parse HEAD:vendor/postgres > /tmp/cache-key-postgres
- restore_cache:
name: Restore postgres cache
keys:
# Restore ONLY if the rev key matches exactly
- v03-postgres-cache-{{ checksum "/tmp/cache-key-postgres" }}
# FIXME We could cache our own docker container, instead of installing packages every time.
- run:
name: apt install dependencies
command: |
if [ ! -e tmp_install/bin/postgres ]; then
sudo apt update
sudo apt install build-essential libreadline-dev zlib1g-dev flex bison libseccomp-dev
fi
# Build postgres if the restore_cache didn't find a build.
# `make` can't figure out whether the cache is valid, since
# it only compares file timestamps.
- run:
name: build postgres
command: |
if [ ! -e tmp_install/bin/postgres ]; then
# "depth 1" saves some time by not cloning the whole repo
git submodule update --init --depth 1
make postgres
fi
- save_cache:
name: Save postgres cache
key: v03-postgres-cache-{{ checksum "/tmp/cache-key-postgres" }}
paths:
- tmp_install
# A job to build zenith rust code
build-zenith:
executor: zenith-build-executor
parameters:
build_type:
type: enum
enum: ["debug", "release"]
steps:
- run:
name: apt install dependencies
command: |
sudo apt update
sudo apt install libssl-dev clang
# Checkout the git repo (without submodules)
- checkout
# Grab the postgres git revision to build a cache key.
# Note this works even though the submodule hasn't been checkout out yet.
- run:
name: Get postgres cache key
command: |
git rev-parse HEAD:vendor/postgres > /tmp/cache-key-postgres
- restore_cache:
name: Restore postgres cache
keys:
# Restore ONLY if the rev key matches exactly
- v03-postgres-cache-{{ checksum "/tmp/cache-key-postgres" }}
- restore_cache:
name: Restore rust cache
keys:
# Require an exact match. While an out of date cache might speed up the build,
# there's no way to clean out old packages, so the cache grows every time something
# changes.
- v03-rust-cache-deps-<< parameters.build_type >>-{{ checksum "Cargo.lock" }}
# Build the rust code, including test binaries
- run:
name: Rust build << parameters.build_type >>
command: |
export CARGO_INCREMENTAL=0
BUILD_TYPE="<< parameters.build_type >>"
if [[ $BUILD_TYPE == "debug" ]]; then
echo "Build in debug mode"
cargo build --bins --tests
elif [[ $BUILD_TYPE == "release" ]]; then
echo "Build in release mode"
cargo build --release --bins --tests
fi
- save_cache:
name: Save rust cache
key: v03-rust-cache-deps-<< parameters.build_type >>-{{ checksum "Cargo.lock" }}
paths:
- ~/.cargo/registry
- ~/.cargo/git
- target
# Run rust unit tests
- run: cargo test
# Install the rust binaries, for use by test jobs
# `--locked` is required; otherwise, `cargo install` will ignore Cargo.lock.
# FIXME: this is a really silly way to install; maybe we should just output
# a tarball as an artifact? Or a .deb package?
- run:
name: cargo install
command: |
export CARGO_INCREMENTAL=0
BUILD_TYPE="<< parameters.build_type >>"
if [[ $BUILD_TYPE == "debug" ]]; then
echo "Install debug mode"
CARGO_FLAGS="--debug"
elif [[ $BUILD_TYPE == "release" ]]; then
echo "Install release mode"
# The default is release mode; there is no --release flag.
CARGO_FLAGS=""
fi
cargo install $CARGO_FLAGS --locked --root /tmp/zenith --path pageserver
cargo install $CARGO_FLAGS --locked --root /tmp/zenith --path walkeeper
cargo install $CARGO_FLAGS --locked --root /tmp/zenith --path zenith
# Install the postgres binaries, for use by test jobs
# FIXME: this is a silly way to do "install"; maybe just output a standard
# postgres package, whatever the favored form is (tarball? .deb package?)
# Note that pg_regress needs some build artifacts that probably aren't
# in the usual package...?
- run:
name: postgres install
command: |
cp -a tmp_install /tmp/zenith/pg_install
# Save the rust output binaries for other jobs in this workflow.
- persist_to_workspace:
root: /tmp/zenith
paths:
- "*"
check-zenith:
executor: zenith-build-executor
steps:
- checkout
- restore_cache:
name: Restore rust cache
keys:
- v03-rust-cache-deps-debug-{{ checksum "Cargo.lock" }}
# Look for security vulnerabilities in the advisory db
# https://github.com/rustsec/advisory-db
- run:
name: Cargo audit
command: |
cargo install cargo-audit
cargo audit
run-pytest:
#description: "Run pytest"
executor: python/default
parameters:
# pytest args to specify the tests to run.
#
# This can be a test file name, e.g. 'test_pgbench.py, or a subdirectory,
# or '-k foobar' to run tests containing string 'foobar'. See pytest man page
# section SPECIFYING TESTS / SELECTING TESTS for details.
#
# Select the type of Rust build. Must be "release" or "debug".
build_type:
type: string
default: "debug"
# This parameter is required, to prevent the mistake of running all tests in one job.
test_selection:
type: string
default: ""
# Arbitrary parameters to pytest. For example "-s" to prevent capturing stdout/stderr
extra_params:
type: string
default: ""
needs_postgres_source:
type: boolean
default: false
steps:
- attach_workspace:
at: /tmp/zenith
- checkout
- when:
condition: << parameters.needs_postgres_source >>
steps:
- run: git submodule update --init --depth 1
- run:
name: Install pipenv & deps
working_directory: test_runner
command: |
pip install pipenv
pipenv install
- run:
name: Run pytest
working_directory: test_runner
environment:
- ZENITH_BIN: /tmp/zenith/bin
- POSTGRES_DISTRIB_DIR: /tmp/zenith/pg_install
- TEST_OUTPUT: /tmp/test_output
command: |
TEST_SELECTION="<< parameters.test_selection >>"
EXTRA_PARAMS="<< parameters.extra_params >>"
if [ -z "$TEST_SELECTION" ]; then
echo "test_selection must be set"
exit 1
fi
# Run the tests.
#
# The junit.xml file allows CircleCI to display more fine-grained test information
# in its "Tests" tab in the results page.
# -s prevents pytest from capturing output, which helps to see
# what's going on if the test hangs
# --verbose prints name of each test (helpful when there are
# multiple tests in one file)
# -rA prints summary in the end
pipenv run pytest --junitxml=$TEST_OUTPUT/junit.xml --tb=short -s --verbose -rA $TEST_SELECTION $EXTRA_PARAMS
- run:
# CircleCI artifacts are preserved one file at a time, so skipping
# this step isn't a good idea. If you want to extract the
# pageserver state, perhaps a tarball would be a better idea.
name: Delete all data but logs
when: always
command: |
du -sh /tmp/test_output/*
find /tmp/test_output -type f ! -name "pg.log" ! -name "pageserver.log" ! -name "wal_acceptor.log" ! -name "regression.diffs" -delete
du -sh /tmp/test_output/*
- store_artifacts:
path: /tmp/test_output
# The store_test_results step tells CircleCI where to find the junit.xml file.
- store_test_results:
path: /tmp/test_output
workflows:
build_and_test:
jobs:
- build-postgres
- build-zenith:
name: build-zenith-<< matrix.build_type >>
matrix:
parameters:
build_type: ["debug", "release"]
requires:
- build-postgres
- check-zenith:
requires:
- build-zenith-debug
- run-pytest:
name: pg_regress tests << matrix.build_type >>
matrix:
parameters:
build_type: ["debug", "release"]
test_selection: batch_pg_regress
needs_postgres_source: true
requires:
- build-zenith-<< matrix.build_type >>
- run-pytest:
name: other tests << matrix.build_type >>
matrix:
parameters:
build_type: ["debug", "release"]
test_selection: batch_others
requires:
- build-zenith-<< matrix.build_type >>

.dockerignore (new file, 13 lines)

@@ -0,0 +1,13 @@
**/.git/
**/__pycache__
**/.pytest_cache
/target
/tmp_check
/tmp_install
/tmp_check_cli
/test_output
/.vscode
/.zenith
/integration_tests/.zenith
/Dockerfile

.github/workflows/notifications.yml (new vendored file, 45 lines)

@@ -0,0 +1,45 @@
name: Send Notifications
on:
push:
branches: [ main ]
jobs:
send-notifications:
timeout-minutes: 30
name: send commit notifications
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
submodules: true
fetch-depth: 2
- name: Form variables for notification message
id: git_info_grab
run: |
git_stat=$(git show --stat=50)
git_stat="${git_stat//'%'/'%25'}"
git_stat="${git_stat//$'\n'/'%0A'}"
git_stat="${git_stat//$'\r'/'%0D'}"
git_stat="${git_stat// /}" # space -> 'Space En', as github tends to eat ordinary spaces
echo "::set-output name=git_stat::$git_stat"
echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
echo "##[set-output name=git_branch;]$(echo ${GITHUB_REF#refs/heads/})"
- name: Send notification
uses: appleboy/telegram-action@master
with:
to: ${{ secrets.TELEGRAM_TO }}
token: ${{ secrets.TELEGRAM_TOKEN }}
format: markdown
args: |
*@${{ github.actor }} pushed to* [${{ github.repository }}:${{steps.git_info_grab.outputs.git_branch}}](github.com/${{ github.repository }}/commit/${{steps.git_info_grab.outputs.sha_short }})
```
${{ steps.git_info_grab.outputs.git_stat }}
```


@@ -1,50 +1,41 @@
name: regression check
name: Build and Test
on: [push]
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
regression-check:
strategy:
matrix:
# If we want to duplicate this job for different
# Rust toolchains (e.g. nightly or 1.37.0), add them here.
rust_toolchain: [stable]
os: [ubuntu-latest]
timeout-minutes: 30
name: run regression test suite
runs-on: ubuntu-latest
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@v2
with:
submodules: true
fetch-depth: 2
- name: Form variables for notification message
id: git_info_grab
run: |
git_stat=$(git show --stat=50)
git_stat="${git_stat//'%'/'%25'}"
git_stat="${git_stat//$'\n'/'%0A'}"
git_stat="${git_stat//$'\r'/'%0D'}"
git_stat="${git_stat// /}" # space -> 'Space En', as github tends to eat ordinary spaces
echo "::set-output name=git_stat::$git_stat"
echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
echo "##[set-output name=git_branch;]$(echo ${GITHUB_REF#refs/heads/})"
- name: Send notification
uses: appleboy/telegram-action@master
- name: install rust toolchain ${{ matrix.rust_toolchain }}
uses: actions-rs/toolchain@v1
with:
to: ${{ secrets.TELEGRAM_TO }}
token: ${{ secrets.TELEGRAM_TOKEN }}
format: markdown
args: |
*@${{ github.actor }} pushed to* [${{ github.repository }}:${{steps.git_info_grab.outputs.git_branch}}](github.com/${{ github.repository }}/commit/${{steps.git_info_grab.outputs.sha_short }})
```
${{ steps.git_info_grab.outputs.git_stat }}
```
profile: minimal
toolchain: ${{ matrix.rust_toolchain }}
override: true
- name: Install postgres dependencies
run: |
sudo apt update
sudo apt install build-essential libreadline-dev zlib1g-dev flex bison libxml2-dev libcurl4-openssl-dev
sudo apt install build-essential libreadline-dev zlib1g-dev flex bison libseccomp-dev
- name: Set pg revision for caching
id: pg_ver
@@ -61,11 +52,7 @@ jobs:
- name: Build postgres
if: steps.cache_pg.outputs.cache-hit != 'true'
run: |
./pgbuild.sh
- name: Install rust
run: |
sudo apt install -y cargo
make postgres
- name: Cache cargo deps
id: cache_cargo
@@ -77,10 +64,10 @@ jobs:
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Build
- name: Run cargo build
run: |
cargo build
cargo build --workspace --bins --examples --tests
- name: Run test
- name: Run cargo test
run: |
cargo test --test test_pageserver -- --nocapture --test-threads=1
cargo test -- --nocapture --test-threads=1

.gitignore vendored

@@ -2,4 +2,8 @@
/tmp_check
/tmp_install
/tmp_check_cli
__pycache__/
test_output/
.vscode
/.zenith
/integration_tests/.zenith

CONTRIBUTING.md Normal file

@@ -0,0 +1,31 @@
# How to contribute
Howdy! Usual good software engineering practices apply. Write
tests. Write comments. Follow standard Rust coding practices where
possible. Use 'cargo fmt' and 'clippy' to tidy up formatting.
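For reference, a typical pre-push check looks something like this (a sketch; these particular flags are a suggestion, not a project requirement):
```sh
cargo fmt --all          # format the whole workspace
cargo clippy --workspace # lint; fix or justify any warnings
cargo test               # make sure the tests still pass before pushing
```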
There are soft spots in the code, which could use cleanup,
refactoring, additional comments, and so forth. Let's try to raise the
bar, and clean things up as we go. Try to leave code in a better shape
than it was before.
## Submitting changes
1. Make a PR for every change.
Even seemingly trivial patches can break things in surprising ways.
Use of common sense is OK. If you're only fixing a typo in a comment,
it's probably fine to just push it. But if in doubt, open a PR.
2. Get at least one +1 on your PR before you push.
For simple patches, it will only take a minute for someone to review
it.
3. Always keep the CI green.
Do not push if the CI failed on your PR, even if you think it's not
your patch's fault. If something else has broken the CI, help to fix
the root cause before pushing.
*Happy Hacking!*

COPYRIGHT Normal file

@@ -0,0 +1,20 @@
This software is licensed under the Apache 2.0 License:
----------------------------------------------------------------------------
Copyright 2021 Zenith Labs, Inc
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
----------------------------------------------------------------------------
The PostgreSQL submodule in vendor/postgres is licensed under the
PostgreSQL license. See vendor/postgres/COPYRIGHT.

Cargo.lock generated

File diff suppressed because it is too large


@@ -1,10 +1,17 @@
[workspace]
members = [
"integration_tests",
"pageserver",
"walkeeper",
"zenith",
"control_plane",
"pageserver",
"postgres_ffi",
"proxy",
"walkeeper",
"workspace_hack",
"zenith",
"zenith_metrics",
"zenith_utils",
]
[profile.release]
# This is useful for profiling and, to some extent, debug.
# Besides, debug info should not affect the performance.
debug = true

Dockerfile Normal file

@@ -0,0 +1,95 @@
#
# Docker image for console integration testing.
#
# We may also reuse it in CI to unify the installation process, and as a general binary-building
# tool for production servers.
#
# Dynamic linking is used for librocksdb and libstdc++ because librocksdb-sys calls
# bindgen with the "dynamic" feature flag. This also prevents usage of the dockerhub alpine-rust
# images, which are statically linked and have guards against any dlopen. I would
# prefer all static binaries, so we may change the way librocksdb-sys builds, or wait until
# we have our own storage and drop the rocksdb dependency.
#
# Cargo-chef is used to separate building the dependencies from building the main binaries. This
# way `docker build` will download and install dependencies only if there are changes to
# our Cargo.toml files.
#
#
# Build postgres separately -- this layer will be rebuilt only if one of
# the mentioned paths changes.
#
FROM alpine:3.13 as pg-build
RUN apk add --update clang llvm compiler-rt compiler-rt-static lld musl-dev binutils \
make bison flex readline-dev zlib-dev perl linux-headers libseccomp-dev
WORKDIR zenith
COPY ./vendor/postgres vendor/postgres
COPY ./Makefile Makefile
# Build using clang and lld
RUN CC='clang' LD='lld' CFLAGS='-fuse-ld=lld --rtlib=compiler-rt' make postgres -j4
#
# Calculate cargo dependencies.
# This will always run, but it only generates recipe.json with the list of dependencies,
# without installing them.
#
FROM alpine:20210212 as cargo-deps-inspect
RUN apk add --update rust cargo
RUN cargo install cargo-chef
WORKDIR zenith
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
#
# Build cargo dependencies.
# This temp container is rebuilt only if recipe.json has changed.
#
FROM alpine:20210212 as deps-build
RUN apk add --update rust cargo openssl-dev clang build-base
# rust-rocksdb can be built against the system-wide rocksdb -- that saves about
# 10 minutes during the build. The rocksdb apk package is in testing now, but we use it
# anyway. In case of any trouble we can download and build rocksdb here manually
# (to cache it as a docker layer).
RUN apk --no-cache --update --repository https://dl-cdn.alpinelinux.org/alpine/edge/testing add rocksdb-dev
WORKDIR zenith
COPY --from=pg-build /zenith/tmp_install/include/postgresql/server tmp_install/include/postgresql/server
COPY --from=cargo-deps-inspect /root/.cargo/bin/cargo-chef /root/.cargo/bin/
COPY --from=cargo-deps-inspect /zenith/recipe.json recipe.json
RUN ROCKSDB_LIB_DIR=/usr/lib/ cargo chef cook --release --recipe-path recipe.json
#
# Build zenith binaries
#
FROM alpine:20210212 as build
RUN apk add --update rust cargo openssl-dev clang build-base
RUN apk --no-cache --update --repository https://dl-cdn.alpinelinux.org/alpine/edge/testing add rocksdb-dev
WORKDIR zenith
COPY . .
# Copy cached dependencies
COPY --from=pg-build /zenith/tmp_install/include/postgresql/server tmp_install/include/postgresql/server
COPY --from=deps-build /zenith/target target
COPY --from=deps-build /root/.cargo /root/.cargo
RUN cargo build --release
#
# Copy binaries to the resulting image.
# build-base is here to provide libstdc++ (it also brings in gcc, but leave it this way until we figure
# out how to statically link rocksdb or avoid it altogether).
#
FROM alpine:3.13
RUN apk add --update openssl build-base libseccomp-dev
RUN apk --no-cache --update --repository https://dl-cdn.alpinelinux.org/alpine/edge/testing add rocksdb
COPY --from=build /zenith/target/release/pageserver /usr/local/bin
COPY --from=build /zenith/target/release/wal_acceptor /usr/local/bin
COPY --from=build /zenith/target/release/proxy /usr/local/bin
COPY --from=pg-build /zenith/tmp_install /usr/local
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN addgroup zenith && adduser -h /data -D -G zenith zenith
VOLUME ["/data"]
WORKDIR /data
USER zenith
EXPOSE 6400
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["pageserver"]

LICENSE Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Makefile Normal file

@@ -0,0 +1,70 @@
# Seccomp BPF is only available for Linux
UNAME_S := $(shell uname -s)
ifeq ($(UNAME_S),Linux)
SECCOMP = --with-libseccomp
else
SECCOMP =
endif
#
# Top level Makefile to build Zenith and PostgreSQL
#
all: zenith postgres
# We don't want to run 'cargo build' in parallel with the postgres build,
# because interleaving cargo build output with postgres build output looks
# confusing. Also, 'cargo build' is parallel on its own, so it would be too
# much parallelism. (Recursive invocation of postgres target still gets any
# '-j' flag from the command line, so 'make -j' is still useful.)
.NOTPARALLEL:
### Zenith Rust bits
#
# The 'postgres_ffi' depends on the Postgres headers.
zenith: postgres-headers
cargo build
### PostgreSQL parts
tmp_install/build/config.status:
+@echo "Configuring postgres build"
mkdir -p tmp_install/build
(cd tmp_install/build && \
../../vendor/postgres/configure CFLAGS='-O0 -g3 $(CFLAGS)' \
--enable-cassert \
--enable-debug \
--enable-depend \
$(SECCOMP) \
--prefix=$(abspath tmp_install) > configure.log)
# nicer alias for running 'configure'
postgres-configure: tmp_install/build/config.status
# Install the PostgreSQL header files into tmp_install/include
postgres-headers: postgres-configure
+@echo "Installing PostgreSQL headers"
$(MAKE) -C tmp_install/build/src/include MAKELEVEL=0 install
# Compile and install PostgreSQL and contrib/zenith
postgres: postgres-configure
+@echo "Compiling PostgreSQL"
$(MAKE) -C tmp_install/build MAKELEVEL=0 install
+@echo "Compiling contrib/zenith"
$(MAKE) -C tmp_install/build/contrib/zenith install
+@echo "Compiling contrib/zenith_test_utils"
$(MAKE) -C tmp_install/build/contrib/zenith_test_utils install
postgres-clean:
$(MAKE) -C tmp_install/build MAKELEVEL=0 clean
# This doesn't remove the effects of 'configure'.
clean:
cd tmp_install/build && ${MAKE} clean
cargo clean
# This removes everything
distclean:
rm -rf tmp_install
cargo clean
.PHONY: postgres-configure postgres postgres-headers zenith

Pipfile Symbolic link

@@ -0,0 +1 @@
./test_runner/Pipfile

Pipfile.lock generated Symbolic link

@@ -0,0 +1 @@
./test_runner/Pipfile.lock

README.md

@@ -4,89 +4,157 @@ Zenith substitutes PostgreSQL storage layer and redistributes data across a clus
## Running local installation
1. Build zenith and patched postgres
```sh
git clone --recursive https://github.com/libzenith/zenith.git
cd zenith
./pgbuild.sh # builds postgres and installs it to ./tmp_install
cargo build
1. Install build dependencies and other useful packages
On Ubuntu or Debian this set of packages should be sufficient to build the code:
```text
apt install build-essential libtool libreadline-dev zlib1g-dev flex bison libseccomp-dev \
libssl-dev clang
```
2. Start pageserver and postggres on top of it (should be called from repo root):
[Rust] 1.48 or later is also required.
To run the `psql` client, install the `postgresql-client` package or modify `PATH` and `LD_LIBRARY_PATH` to include `tmp_install/bin` and `tmp_install/lib`, respectively.
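For example, from the repository root (a sketch assuming the default `./tmp_install` layout):
```sh
export PATH="$PWD/tmp_install/bin:$PATH"
export LD_LIBRARY_PATH="$PWD/tmp_install/lib:$LD_LIBRARY_PATH"
```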
To run the integration tests (not required to use the code), install
Python (3.6 or higher), and install python3 packages with `pip` (called `pip3` on some systems):
```
pip install pytest psycopg2
```
2. Build zenith and patched postgres
```sh
# Create ~/.zenith with proper paths to binaries and data
git clone --recursive https://github.com/zenithdb/zenith.git
cd zenith
make -j5
```
3. Start pageserver and postgres on top of it (should be called from repo root):
```sh
# Create repository in .zenith with proper paths to binaries and data
# Later that would be responsibility of a package install script
>./target/debug/zenith init
> ./target/debug/zenith init
pageserver init succeeded
# start pageserver
> ./target/debug/zenith pageserver start
Starting pageserver at '127.0.0.1:64000'
> ./target/debug/zenith start
Starting pageserver at '127.0.0.1:64000' in .zenith
Pageserver started
# create and configure postgres data dir
> ./target/debug/zenith pg create
Creating new postgres: path=/Users/user/code/zenith/tmp_check_cli/compute/pg1 port=55432
Database initialized
# start postgres on top on the pageserver
> ./target/debug/zenith pg start main
Starting postgres node at 'host=127.0.0.1 port=55432 user=stas'
waiting for server to start.... done
# start it
> ./target/debug/zenith pg start pg1
# look up status and connection info
> ./target/debug/zenith pg list
NODE ADDRESS STATUS
pg1 127.0.0.1:55432 running
# check list of running postgres instances
> ./target/debug/zenith pg list
BRANCH ADDRESS LSN STATUS
main 127.0.0.1:55432 0/1609610 running
```
3. Now it is possible to connect to postgres and run some queries:
```
4. Now it is possible to connect to postgres and run some queries:
```text
> psql -p55432 -h 127.0.0.1 postgres
postgres=# CREATE TABLE t(key int primary key, value text);
CREATE TABLE
postgres=# insert into t values(1,1);
INSERT 0 1
postgres=# select * from t;
key | value
key | value
-----+-------
1 | 1
(1 row)
```
5. And create branches and run postgres on them:
```sh
# create branch named migration_check
> ./target/debug/zenith branch migration_check main
Created branch 'migration_check' at 0/1609610
# check branches tree
> ./target/debug/zenith branch
main
┗━ @0/1609610: migration_check
# start postgres on that branch
> ./target/debug/zenith pg start migration_check
Starting postgres node at 'host=127.0.0.1 port=55433 user=stas'
waiting for server to start.... done
# this new postgres instance will have all the data from 'main' postgres,
# but all modifications would not affect data in original postgres
> psql -p55433 -h 127.0.0.1 postgres
postgres=# select * from t;
key | value
-----+-------
1 | 1
(1 row)
postgres=# insert into t values(2,2);
INSERT 0 1
```
## Running tests
```sh
git clone --recursive https://github.com/libzenith/zenith.git
./pgbuild.sh # builds postgres and installs it to ./tmp_install
cargo test -- --test-threads=1
make # builds also postgres and installs it to ./tmp_install
cd test_runner
pytest
```
## Documentation
Now we use README files to cover the design ideas and overall architecture of each module,
plus rustdoc-style documentation comments.
To view your documentation in a browser, try running `cargo doc --no-deps --open`
## Source tree layout
/walkeeper:
`/control_plane`:
WAL safekeeper. Written in Rust.
Local control plane.
Functions to start, configure and stop pageserver and postgres instances running as a local processes.
Intended to be used in integration tests and in CLI tools for local installations.
/pageserver:
`/zenith`:
Main entry point for the 'zenith' CLI utility.
TODO: Doesn't it belong to control_plane?
`/postgres_ffi`:
Utility functions for interacting with PostgreSQL file formats.
Misc constants, copied from PostgreSQL headers.
`/zenith_utils`:
Helpers that are shared between other crates in this repository.
`/walkeeper`:
WAL safekeeper (also known as WAL acceptor). Written in Rust.
`/pageserver`:
Page Server. Written in Rust.
Depends on the modified 'postgres' binary for WAL redo.
/integration_tests:
Tests with different combinations of a Postgres compute node, WAL safekeeper and Page Server.
/mgmt-console:
Web UI to launch (modified) Postgres servers, using S3 as the backing store. Written in Python.
This is somewhat outdated, as it doesn't use the WAL safekeeper or Page Servers.
/vendor/postgres:
`/vendor/postgres`:
PostgreSQL source tree, with the modifications needed for Zenith.
/vendor/postgres/src/bin/safekeeper:
`/vendor/postgres/contrib/zenith`:
Extension (safekeeper_proxy) that runs in the compute node, and connects to the WAL safekeepers
and streams the WAL
PostgreSQL extension that implements storage manager API and network communications with remote page server.
`/test_runner`:
Integration tests, written in Python using the `pytest` framework.
`test_runner/zenith_regress`:
Quick way to add new SQL regression test to integration tests set.


@@ -9,19 +9,20 @@ edition = "2018"
[dependencies]
rand = "0.8.3"
tar = "0.4.33"
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
serde = ""
serde_derive = ""
toml = ""
lazy_static = ""
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1"
toml = "0.5"
lazy_static = "1.4"
regex = "1"
anyhow = "1.0"
hex = "0.4.3"
bytes = "1.0.1"
fs_extra = "1.2.0"
nix = "0.20"
url = "2.2.2"
hex = { version = "0.4.3", features = ["serde"] }
pageserver = { path = "../pageserver" }
walkeeper = { path = "../walkeeper" }
postgres_ffi = { path = "../postgres_ffi" }
zenith_utils = { path = "../zenith_utils" }
workspace_hack = { path = "../workspace_hack" }


@@ -1,23 +1,26 @@
use std::fs::{self, File, OpenOptions};
use std::io::{Read, Write};
use std::io::Write;
use std::net::SocketAddr;
use std::net::TcpStream;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;
use std::process::Command;
use std::sync::Arc;
use std::time::Duration;
use std::{collections::BTreeMap, path::PathBuf};
use std::{
fs::{self, OpenOptions},
io::Read,
};
use anyhow::{Context, Result};
use lazy_static::lazy_static;
use regex::Regex;
use postgres::{Client, NoTls};
use zenith_utils::connstring::connection_host_port;
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
use zenith_utils::zid::ZTimelineId;
use crate::local_env::LocalEnv;
use crate::storage::{PageServerNode, WalProposerNode};
use pageserver::{zenith_repo_dir, ZTimelineId};
use crate::storage::PageServerNode;
//
// ComputeControlPlane
@@ -25,27 +28,36 @@ use pageserver::{zenith_repo_dir, ZTimelineId};
pub struct ComputeControlPlane {
base_port: u16,
pageserver: Arc<PageServerNode>,
pub nodes: BTreeMap<String, Arc<PostgresNode>>,
pub nodes: BTreeMap<(ZTenantId, String), Arc<PostgresNode>>,
env: LocalEnv,
}
impl ComputeControlPlane {
// Load current nodes with ports from data directories on disk
// Directory structure has the following layout:
// pgdatadirs
// |- tenants
// | |- <tenant_id>
// | | |- <branch name>
pub fn load(env: LocalEnv) -> Result<ComputeControlPlane> {
// TODO: since the pageserver does not have a config file yet, we assume here that
// it is running on the default port. Change that when the pageserver gets a config.
let pageserver = Arc::new(PageServerNode::from_env(&env));
let pgdatadirspath = env.repo_path.join("pgdatadirs");
let nodes: Result<BTreeMap<_, _>> = fs::read_dir(&pgdatadirspath)
let mut nodes = BTreeMap::default();
let pgdatadirspath = &env.pg_data_dirs_path();
for tenant_dir in fs::read_dir(&pgdatadirspath)
.with_context(|| format!("failed to list {}", pgdatadirspath.display()))?
.into_iter()
.map(|f| {
PostgresNode::from_dir_entry(f?, &env, &pageserver)
.map(|node| (node.name.clone(), Arc::new(node)))
})
.collect();
let nodes = nodes?;
{
let tenant_dir = tenant_dir?;
for timeline_dir in fs::read_dir(tenant_dir.path())
.with_context(|| format!("failed to list {}", tenant_dir.path().display()))?
{
let node = PostgresNode::from_dir_entry(timeline_dir?, &env, &pageserver)?;
nodes.insert((node.tenantid, node.name.clone()), Arc::new(node));
}
}
Ok(ComputeControlPlane {
base_port: 55431,
@@ -79,65 +91,49 @@ impl ComputeControlPlane {
&mut self,
is_test: bool,
timelineid: ZTimelineId,
name: &str,
tenantid: ZTenantId,
) -> Result<Arc<PostgresNode>> {
let node_id = self.nodes.len() as u32 + 1;
let node = Arc::new(PostgresNode {
name: format!("pg{}", node_id),
name: name.to_owned(),
address: SocketAddr::new("127.0.0.1".parse().unwrap(), self.get_port()),
env: self.env.clone(),
pageserver: Arc::clone(&self.pageserver),
is_test,
timelineid,
tenantid,
});
node.init_from_page_server()?;
self.nodes.insert(node.name.clone(), Arc::clone(&node));
node.init_from_page_server(self.env.auth_type)?;
self.nodes
.insert((tenantid, node.name.clone()), Arc::clone(&node));
Ok(node)
}
pub fn new_test_node(&mut self, timelineid: ZTimelineId) -> Arc<PostgresNode> {
let node = self.new_from_page_server(true, timelineid);
assert!(node.is_ok());
let node = node.unwrap();
pub fn new_node(
&mut self,
tenantid: ZTenantId,
branch_name: &str,
) -> Result<Arc<PostgresNode>> {
let timeline_id = self
.pageserver
.branch_get_by_name(&tenantid, branch_name)?
.timeline_id;
let node = self.new_from_page_server(false, timeline_id, branch_name, tenantid)?;
// Configure the node to stream WAL directly to the pageserver
node.append_conf(
"postgresql.conf",
format!(
"callmemaybe_connstring = '{}'\n", // FIXME escaping
node.connstr()
concat!(
"shared_preload_libraries = zenith\n",
"synchronous_standby_names = 'pageserver'\n", // TODO: add a new function arg?
"zenith.callmemaybe_connstring = '{}'\n", // FIXME escaping
),
node.connstr(),
)
.as_str(),
);
node
}
pub fn new_test_master_node(&mut self, timelineid: ZTimelineId) -> Arc<PostgresNode> {
let node = self.new_from_page_server(true, timelineid).unwrap();
node.append_conf(
"postgresql.conf",
"synchronous_standby_names = 'safekeeper_proxy'\n",
);
node
}
pub fn new_node(&mut self, timelineid: ZTimelineId) -> Result<Arc<PostgresNode>> {
let node = self.new_from_page_server(false, timelineid).unwrap();
// Configure the node to stream WAL directly to the pageserver
node.append_conf(
"postgresql.conf",
format!(
"callmemaybe_connstring = '{}'\n", // FIXME escaping
node.connstr()
)
.as_str(),
);
)?;
Ok(node)
}
@@ -145,13 +141,15 @@ impl ComputeControlPlane {
///////////////////////////////////////////////////////////////////////////////
#[derive(Debug)]
pub struct PostgresNode {
pub address: SocketAddr,
name: String,
pub env: LocalEnv,
pageserver: Arc<PageServerNode>,
is_test: bool,
timelineid: ZTimelineId,
pub timelineid: ZTimelineId,
pub tenantid: ZTenantId,
}
impl PostgresNode {
@@ -169,6 +167,10 @@ impl PostgresNode {
lazy_static! {
static ref CONF_PORT_RE: Regex = Regex::new(r"(?m)^\s*port\s*=\s*(\d+)\s*$").unwrap();
static ref CONF_TIMELINE_RE: Regex =
Regex::new(r"(?m)^\s*zenith.zenith_timeline\s*=\s*'(\w+)'\s*$").unwrap();
static ref CONF_TENANT_RE: Regex =
Regex::new(r"(?m)^\s*zenith.zenith_tenant\s*=\s*'(\w+)'\s*$").unwrap();
}
// parse data directory name
@@ -184,6 +186,7 @@ impl PostgresNode {
)
})?;
// parse port
let err_msg = format!(
"failed to find port definition in config file {}",
cfg_path.to_str().unwrap()
@@ -199,11 +202,37 @@ impl PostgresNode {
.parse()
.with_context(|| err_msg)?;
// FIXME: What timeline is this server on? Would have to parse the postgresql.conf
// file for that, too. It's currently not needed for anything, but it would be
// nice to list the timeline in "zenith pg list"
let timelineid_buf = [0u8; 16];
let timelineid = ZTimelineId::from(timelineid_buf);
// parse timeline
let err_msg = format!(
"failed to find timeline definition in config file {}",
cfg_path.to_str().unwrap()
);
let timelineid: ZTimelineId = CONF_TIMELINE_RE
.captures(config.as_str())
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 1"))?
.iter()
.last()
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 2"))?
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 3"))?
.as_str()
.parse()
.with_context(|| err_msg)?;
// parse tenant
let err_msg = format!(
"failed to find tenant definition in config file {}",
cfg_path.to_str().unwrap()
);
let tenantid = CONF_TENANT_RE
.captures(config.as_str())
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 1"))?
.iter()
.last()
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 2"))?
.ok_or_else(|| anyhow::Error::msg(err_msg.clone() + " 3"))?
.as_str()
.parse()
.with_context(|| err_msg)?;
// ok now
Ok(PostgresNode {
@@ -213,12 +242,13 @@ impl PostgresNode {
pageserver: Arc::clone(pageserver),
is_test: false,
timelineid,
tenantid,
})
}
// Connect to a page server, get base backup, and untar it to initialize a
// new data directory
pub fn init_from_page_server(&self) -> Result<()> {
pub fn init_from_page_server(&self, auth_type: AuthType) -> Result<()> {
let pgdata = self.pgdata();
println!(
@@ -232,7 +262,7 @@ impl PostgresNode {
fs::remove_dir_all(&pgdata).ok();
}
let sql = format!("basebackup {}", self.timelineid);
let sql = format!("basebackup {} {}", self.tenantid, self.timelineid);
let mut client = self
.pageserver
.page_server_psql_client()
@@ -249,11 +279,6 @@ impl PostgresNode {
},
)?;
// FIXME: The compute node should be able to stream the WAL it needs from the WAL safekeepers or archive.
// But that's not implemented yet. For now, 'pg_wal' is included in the base backup tarball that
// we receive from the Page Server, so we don't need to create the empty 'pg_wal' directory here.
//fs::create_dir_all(pgdata.join("pg_wal"))?;
let mut copyreader = client
.copy_out(sql.as_str())
.with_context(|| "page server 'basebackup' command failed")?;
@@ -269,50 +294,67 @@ impl PostgresNode {
ar.unpack(&pgdata)
.with_context(|| "extracting page backup failed")?;
// listen for selected port
// wal_log_hints is mandatory when running against pageserver (see gh issue#192)
// TODO: is it possible to check wal_log_hints at pageserver side via XLOG_PARAMETER_CHANGE?
self.append_conf(
"postgresql.conf",
&format!(
"max_wal_senders = 10\n\
wal_log_hints = on\n\
max_replication_slots = 10\n\
hot_standby = on\n\
shared_buffers = 1MB\n\
fsync = off\n\
fsync = off\n\
max_connections = 100\n\
wal_sender_timeout = 0\n\
wal_sender_timeout = 0\n\
wal_level = replica\n\
listen_addresses = '{address}'\n\
port = {port}\n",
address = self.address.ip(),
port = self.address.port()
),
);
)?;
// Never clean up old WAL. TODO: We should use a replication
// slot or something proper, to prevent the compute node
// from removing WAL that hasn't been streamed to the safekeepr or
// page server yet. But this will do for now.
self.append_conf("postgresql.conf", "wal_keep_size='10TB'\n");
// page server yet. (gh issue #349)
self.append_conf("postgresql.conf", "wal_keep_size='10TB'\n")?;
// Connect it to the page server.
// set up authentication
let password = if let AuthType::ZenithJWT = auth_type {
"$ZENITH_AUTH_TOKEN"
} else {
""
};
// Configure that node to take pages from pageserver
let (host, port) = connection_host_port(&self.pageserver.connection_config);
self.append_conf(
"postgresql.conf",
&format!(
"page_server_connstring = 'host={} port={}'\n\
zenith_timeline='{}'\n",
self.pageserver.address().ip(),
self.pageserver.address().port(),
self.timelineid
),
);
format!(
concat!(
"shared_preload_libraries = zenith\n",
// $ZENITH_AUTH_TOKEN will be replaced with the value from the environment variable during compute pg startup.
// It is done this way because otherwise the user would be able to retrieve the value using the SHOW command or pg_settings.
"zenith.page_server_connstring = 'host={} port={} password={}'\n",
"zenith.zenith_timeline='{}'\n",
"zenith.zenith_tenant='{}'\n",
),
host, port, password, self.timelineid, self.tenantid,
)
.as_str(),
)?;
fs::create_dir_all(self.pgdata().join("pg_wal"))?;
fs::create_dir_all(self.pgdata().join("pg_wal").join("archive_status"))?;
Ok(())
}
fn pgdata(&self) -> PathBuf {
self.env.repo_path.join("pgdatadirs").join(&self.name)
pub fn pgdata(&self) -> PathBuf {
self.env.pg_data_dir(&self.tenantid, &self.name)
}
pub fn status(&self) -> &str {
@@ -328,61 +370,73 @@ impl PostgresNode {
}
}
pub fn append_conf(&self, config: &str, opts: &str) {
pub fn append_conf(&self, config: &str, opts: &str) -> Result<()> {
OpenOptions::new()
.append(true)
.open(self.pgdata().join(config).to_str().unwrap())
.unwrap()
.write_all(opts.as_bytes())
.unwrap();
.open(self.pgdata().join(config).to_str().unwrap())?
.write_all(opts.as_bytes())?;
Ok(())
}
fn pg_ctl(&self, args: &[&str]) -> Result<()> {
fn pg_ctl(&self, args: &[&str], auth_token: &Option<String>) -> Result<()> {
let pg_ctl_path = self.env.pg_bin_dir().join("pg_ctl");
let mut cmd = Command::new(pg_ctl_path);
cmd.args(
[
&[
"-D",
self.pgdata().to_str().unwrap(),
"-l",
self.pgdata().join("pg.log").to_str().unwrap(),
"-w", //wait till pg_ctl actually does what was asked
],
args,
]
.concat(),
)
.env_clear()
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap());
if let Some(token) = auth_token {
cmd.env("ZENITH_AUTH_TOKEN", token);
}
let pg_ctl = cmd.status().with_context(|| "pg_ctl failed")?;
let pg_ctl = Command::new(pg_ctl_path)
.args(
[
&[
"-D",
self.pgdata().to_str().unwrap(),
"-l",
self.pgdata().join("log").to_str().unwrap(),
],
args,
]
.concat(),
)
.env_clear()
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.status()
.with_context(|| "pg_ctl failed")?;
if !pg_ctl.success() {
anyhow::bail!("pg_ctl failed");
}
Ok(())
}
pub fn start(&self) -> Result<()> {
pub fn start(&self, auth_token: &Option<String>) -> Result<()> {
println!("Starting postgres node at '{}'", self.connstr());
self.pg_ctl(&["start"])
self.pg_ctl(&["start"], auth_token)
}
pub fn restart(&self) -> Result<()> {
self.pg_ctl(&["restart"])
pub fn restart(&self, auth_token: &Option<String>) -> Result<()> {
self.pg_ctl(&["restart"], auth_token)
}
pub fn stop(&self) -> Result<()> {
self.pg_ctl(&["-m", "immediate", "stop"])
pub fn stop(&self, destroy: bool) -> Result<()> {
self.pg_ctl(&["-m", "immediate", "stop"], &None)?;
if destroy {
println!(
"Destroying postgres data directory '{}'",
self.pgdata().to_str().unwrap()
);
fs::remove_dir_all(&self.pgdata())?;
}
Ok(())
}
pub fn connstr(&self) -> String {
format!(
"host={} port={} user={}",
"host={} port={} user={} dbname={}",
self.address.ip(),
self.address.port(),
self.whoami()
"zenith_admin",
"postgres"
)
}
@@ -398,131 +452,6 @@ impl PostgresNode {
String::from_utf8(output.stdout).unwrap().trim().to_string()
}
fn dump_log_file(&self) {
if let Ok(mut file) = File::open(self.env.repo_path.join("pageserver.log")) {
let mut buffer = String::new();
file.read_to_string(&mut buffer).unwrap();
println!("--------------- Dump pageserver.log:\n{}", buffer);
}
}
pub fn safe_psql(&self, db: &str, sql: &str) -> Vec<tokio_postgres::Row> {
let connstring = format!(
"host={} port={} dbname={} user={}",
self.address.ip(),
self.address.port(),
db,
self.whoami()
);
let mut client = Client::connect(connstring.as_str(), NoTls).unwrap();
println!("Running {}", sql);
let result = client.query(sql, &[]);
if result.is_err() {
self.dump_log_file();
}
result.unwrap()
}
pub fn open_psql(&self, db: &str) -> Client {
let connstring = format!(
"host={} port={} dbname={} user={}",
self.address.ip(),
self.address.port(),
db,
self.whoami()
);
Client::connect(connstring.as_str(), NoTls).unwrap()
}
pub fn start_proxy(&self, wal_acceptors: &str) -> WalProposerNode {
let proxy_path = self.env.pg_bin_dir().join("safekeeper_proxy");
match Command::new(proxy_path.as_path())
.args(&["--ztimelineid", &self.timelineid.to_string()])
.args(&["-s", wal_acceptors])
.args(&["-h", &self.address.ip().to_string()])
.args(&["-p", &self.address.port().to_string()])
.arg("-v")
.stderr(
OpenOptions::new()
.create(true)
.append(true)
.open(self.pgdata().join("safekeeper_proxy.log"))
.unwrap(),
)
.spawn()
{
Ok(child) => WalProposerNode { pid: child.id() },
Err(e) => panic!("Failed to launch {:?}: {}", proxy_path, e),
}
}
pub fn pg_regress(&self) {
self.safe_psql("postgres", "CREATE DATABASE regression");
let data_dir = zenith_repo_dir();
let regress_run_path = data_dir.join("regress");
fs::create_dir_all(&regress_run_path).unwrap();
fs::create_dir_all(regress_run_path.join("testtablespace")).unwrap();
std::env::set_current_dir(regress_run_path).unwrap();
let regress_build_path =
Path::new(env!("CARGO_MANIFEST_DIR")).join("../tmp_install/build/src/test/regress");
let regress_src_path =
Path::new(env!("CARGO_MANIFEST_DIR")).join("../vendor/postgres/src/test/regress");
let _regress_check = Command::new(regress_build_path.join("pg_regress"))
.args(&[
"--bindir=''",
"--use-existing",
format!("--bindir={}", self.env.pg_bin_dir().to_str().unwrap()).as_str(),
format!("--dlpath={}", regress_build_path.to_str().unwrap()).as_str(),
format!(
"--schedule={}",
regress_src_path.join("parallel_schedule").to_str().unwrap()
)
.as_str(),
format!("--inputdir={}", regress_src_path.to_str().unwrap()).as_str(),
])
.env_clear()
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("PGPORT", self.address.port().to_string())
.env("PGUSER", self.whoami())
.env("PGHOST", self.address.ip().to_string())
.status()
.expect("pg_regress failed");
}
pub fn pg_bench(&self, clients: u32, seconds: u32) {
let port = self.address.port().to_string();
let clients = clients.to_string();
let seconds = seconds.to_string();
let _pg_bench_init = Command::new(self.env.pg_bin_dir().join("pgbench"))
.args(&["-i", "-p", port.as_str(), "postgres"])
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.status()
.expect("pgbench -i");
let _pg_bench_run = Command::new(self.env.pg_bin_dir().join("pgbench"))
.args(&[
"-p",
port.as_str(),
"-T",
seconds.as_str(),
"-P",
"1",
"-c",
clients.as_str(),
"-M",
"prepared",
"postgres",
])
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.status()
.expect("pgbench run");
}
}
impl Drop for PostgresNode {
@@ -531,7 +460,7 @@ impl Drop for PostgresNode {
// and checking it here. But let's just clean datadirs on start.
fn drop(&mut self) {
if self.is_test {
let _ = self.stop();
let _ = self.stop(true);
}
}
}


@@ -1,12 +1,31 @@
//
// Local control plane.
//
// Can start, cofigure and stop postgres instances running as a local processes.
// Can start, configure and stop postgres instances running as local processes.
//
// Intended to be used in integration tests and in CLI tools for
// local installations.
//
use anyhow::{anyhow, bail, Context, Result};
use std::fs;
use std::path::Path;
pub mod compute;
pub mod local_env;
pub mod storage;
/// Read a PID file
///
/// We expect a file that contains a single integer.
/// We return an i32 for compatibility with libc and nix.
pub fn read_pidfile(pidfile: &Path) -> Result<i32> {
let pid_str = fs::read_to_string(pidfile)
.with_context(|| format!("failed to read pidfile {:?}", pidfile))?;
let pid: i32 = pid_str
.parse()
.map_err(|_| anyhow!("failed to parse pidfile {:?}", pidfile))?;
if pid < 1 {
bail!("pidfile {:?} contained bad value '{}'", pidfile, pid);
}
Ok(pid)
}


@@ -4,34 +4,30 @@
// Now it also provides init method which acts like a stub for proper installation
// script which will use local paths.
//
use anyhow::Context;
use bytes::Bytes;
use rand::Rng;
use std::env;
use anyhow::{anyhow, Context, Result};
use hex;
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::process::{Command, Stdio};
use std::{collections::BTreeMap, env};
use url::Url;
use zenith_utils::auth::{encode_from_key_path, Claims, Scope};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
use anyhow::Result;
use serde_derive::{Deserialize, Serialize};
use pageserver::zenith_repo_dir;
use pageserver::ZTimelineId;
use postgres_ffi::xlog_utils;
pub type Remotes = BTreeMap<String, String>;
//
// This data structure represents deserialized zenith config, which should be
// located in ~/.zenith
// This data structure represents the deserialized zenith CLI config
//
// TODO: should we also support ZENITH_CONF env var?
//
#[derive(Serialize, Deserialize, Clone)]
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct LocalEnv {
// Path to the Repository. Here page server and compute nodes will create and store their data.
pub repo_path: PathBuf,
// Pageserver connection strings
pub pageserver_connstring: String,
// System identifier, from the PostgreSQL control file
pub systemid: u64,
// Base directory for both pageserver and compute nodes
pub base_data_dir: PathBuf,
// Path to postgres distribution. It's expected that "bin", "include",
// "lib", "share" from postgres distribution are there. If at some point
@@ -39,191 +35,189 @@ pub struct LocalEnv {
// to four separate paths and match OS-specific installation layout.
pub pg_distrib_dir: PathBuf,
// Path to pageserver binary.
pub zenith_distrib_dir: PathBuf,
// Path to pageserver binary. Empty for remote pageserver.
pub zenith_distrib_dir: Option<PathBuf>,
// keeping tenant id in config to reduce copy paste when running zenith locally with single tenant
#[serde(with = "hex")]
pub tenantid: ZTenantId,
// Repository format, 'rocksdb' or 'layered' or None for default
pub repository_format: Option<String>,
// jwt auth token used for communication with pageserver
pub auth_token: String,
// used to determine which auth type is used
pub auth_type: AuthType,
// used to issue tokens during e.g pg start
pub private_key_path: PathBuf,
pub remotes: Remotes,
}
impl LocalEnv {
// postgres installation
// postgres installation paths
pub fn pg_bin_dir(&self) -> PathBuf {
self.pg_distrib_dir.join("bin")
}
pub fn pg_lib_dir(&self) -> PathBuf {
self.pg_distrib_dir.join("lib")
}
pub fn pageserver_bin(&self) -> Result<PathBuf> {
Ok(self
.zenith_distrib_dir
.as_ref()
.ok_or_else(|| anyhow!("Can not manage remote pageserver"))?
.join("pageserver"))
}
pub fn pg_data_dirs_path(&self) -> PathBuf {
self.base_data_dir.join("pgdatadirs").join("tenants")
}
pub fn pg_data_dir(&self, tenantid: &ZTenantId, branch_name: &str) -> PathBuf {
self.pg_data_dirs_path()
.join(tenantid.to_string())
.join(branch_name)
}
// TODO: move pageserver files into ./pageserver
pub fn pageserver_data_dir(&self) -> PathBuf {
self.base_data_dir.clone()
}
}
fn base_path() -> PathBuf {
match std::env::var_os("ZENITH_REPO_DIR") {
Some(val) => PathBuf::from(val.to_str().unwrap()),
None => ".zenith".into(),
}
}
//
// Initialize a new Zenith repository
//
pub fn init() -> Result<()> {
pub fn init(
remote_pageserver: Option<&str>,
tenantid: ZTenantId,
auth_type: AuthType,
repository_format: Option<&str>,
) -> Result<()> {
// check if config already exists
let repo_path = zenith_repo_dir();
if repo_path.exists() {
let base_path = base_path();
if base_path.exists() {
anyhow::bail!(
"{} already exists. Perhaps already initialized?",
repo_path.to_str().unwrap()
);
}
// Now we can run init only from crate directory, so check that current dir is our crate.
// Use 'pageserver/Cargo.toml' existence as evidendce.
let cargo_path = env::current_dir()?;
if !cargo_path.join("pageserver/Cargo.toml").exists() {
anyhow::bail!(
"Current dirrectory does not look like a zenith repo. \
Please, run 'init' from zenith repo root."
base_path.to_str().unwrap()
);
}
fs::create_dir(&base_path)?;
// ok, now check that expected binaries are present
// check postgres
let pg_distrib_dir = cargo_path.join("tmp_install");
let pg_path = pg_distrib_dir.join("bin/postgres");
if !pg_path.exists() {
anyhow::bail!(
"Can't find postres binary at {}. \
Perhaps './pgbuild.sh' is needed to build it first.",
pg_path.to_str().unwrap()
);
}
// check pageserver
let zenith_distrib_dir = cargo_path.join("target/debug/");
let pageserver_path = zenith_distrib_dir.join("pageserver");
if !pageserver_path.exists() {
anyhow::bail!(
"Can't find pageserver binary at {}. Please build it.",
pageserver_path.to_str().unwrap()
);
}
// ok, we are good to go
let mut conf = LocalEnv {
repo_path,
pg_distrib_dir,
zenith_distrib_dir,
systemid: 0,
// Find postgres binaries. Follow POSTGRES_DISTRIB_DIR if set, otherwise look in "tmp_install".
let pg_distrib_dir: PathBuf = {
if let Some(postgres_bin) = env::var_os("POSTGRES_DISTRIB_DIR") {
postgres_bin.into()
} else {
let cwd = env::current_dir()?;
cwd.join("tmp_install")
}
};
init_repo(&mut conf)?;
Ok(())
}
pub fn init_repo(local_env: &mut LocalEnv) -> Result<()> {
let repopath = &local_env.repo_path;
fs::create_dir(&repopath)
.with_context(|| format!("could not create directory {}", repopath.display()))?;
fs::create_dir(repopath.join("pgdatadirs"))?;
fs::create_dir(repopath.join("timelines"))?;
fs::create_dir(repopath.join("refs"))?;
fs::create_dir(repopath.join("refs").join("branches"))?;
fs::create_dir(repopath.join("refs").join("tags"))?;
println!("created directory structure in {}", repopath.display());
// Create initial timeline
let tli = create_timeline(&local_env, None)?;
let timelinedir = repopath.join("timelines").join(tli.to_string());
println!("created initial timeline {}", timelinedir.display());
// Run initdb
//
// FIXME: we create it temporarily in "tmp" directory, and move it into
// the repository. Use "tempdir()" or something? Or just create it directly
// in the repo?
let initdb_path = local_env.pg_bin_dir().join("initdb");
let initdb = Command::new(initdb_path)
.args(&["-D", "tmp"])
.arg("--no-instructions")
.env_clear()
.env("LD_LIBRARY_PATH", local_env.pg_lib_dir().to_str().unwrap())
.env(
"DYLD_LIBRARY_PATH",
local_env.pg_lib_dir().to_str().unwrap(),
)
.stdout(Stdio::null())
.status()
.with_context(|| "failed to execute initdb")?;
if !initdb.success() {
anyhow::bail!("initdb failed");
if !pg_distrib_dir.join("bin/postgres").exists() {
anyhow::bail!("Can't find postgres binary at {:?}", pg_distrib_dir);
}
println!("initdb succeeded");
// Read control file to extract the LSN and system id
let controlfile =
postgres_ffi::decode_pg_control(Bytes::from(fs::read("tmp/global/pg_control")?))?;
let systemid = controlfile.system_identifier;
let lsn = controlfile.checkPoint;
let lsnstr = format!("{:016X}", lsn);
// generate keys for jwt
// openssl genrsa -out private_key.pem 2048
let private_key_path = base_path.join("auth_private_key.pem");
let keygen_output = Command::new("openssl")
.arg("genrsa")
.args(&["-out", private_key_path.to_str().unwrap()])
.arg("2048")
.stdout(Stdio::null())
.output()
.with_context(|| "failed to generate auth private key")?;
if !keygen_output.status.success() {
anyhow::bail!(
"openssl failed: '{}'",
String::from_utf8_lossy(&keygen_output.stderr)
);
}
// Move the initial WAL file
fs::rename(
"tmp/pg_wal/000000010000000000000001",
timelinedir
.join("wal")
.join("000000010000000000000001.partial"),
)?;
println!("moved initial WAL file");
let public_key_path = base_path.join("auth_public_key.pem");
// openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
let keygen_output = Command::new("openssl")
.arg("rsa")
.args(&["-in", private_key_path.to_str().unwrap()])
.arg("-pubout")
.args(&["-outform", "PEM"])
.args(&["-out", public_key_path.to_str().unwrap()])
.stdout(Stdio::null())
.output()
.with_context(|| "failed to generate auth private key")?;
if !keygen_output.status.success() {
anyhow::bail!(
"openssl failed: '{}'",
String::from_utf8_lossy(&keygen_output.stderr)
);
}
// Remove pg_wal
fs::remove_dir_all("tmp/pg_wal")?;
println!("removed tmp/pg_wal");
let auth_token =
encode_from_key_path(&Claims::new(None, Scope::PageServerApi), &private_key_path)?;
force_crash_recovery(&PathBuf::from("tmp"))?;
println!("updated pg_control");
let conf = if let Some(addr) = remote_pageserver {
// check that addr is parsable
let _uri = Url::parse(addr).map_err(|e| anyhow!("{}: {}", addr, e))?;
let target = timelinedir.join("snapshots").join(&lsnstr);
fs::rename("tmp", &target)?;
println!("moved 'tmp' to {}", target.display());
LocalEnv {
pageserver_connstring: format!("postgresql://{}/", addr),
pg_distrib_dir,
zenith_distrib_dir: None,
base_data_dir: base_path,
remotes: BTreeMap::default(),
tenantid,
repository_format: repository_format.map(|x| x.into()),
auth_token,
auth_type,
private_key_path,
}
} else {
// Find zenith binaries.
let zenith_distrib_dir = env::current_exe()?.parent().unwrap().to_owned();
if !zenith_distrib_dir.join("pageserver").exists() {
anyhow::bail!("Can't find pageserver binary.",);
}
// Create 'main' branch to refer to the initial timeline
let data = tli.to_string();
fs::write(repopath.join("refs").join("branches").join("main"), data)?;
println!("created main branch");
LocalEnv {
pageserver_connstring: "postgresql://127.0.0.1:6400".to_string(),
pg_distrib_dir,
zenith_distrib_dir: Some(zenith_distrib_dir),
base_data_dir: base_path,
remotes: BTreeMap::default(),
tenantid,
repository_format: repository_format.map(|x| x.into()),
auth_token,
auth_type,
private_key_path,
}
};
// Also update the system id in the LocalEnv
local_env.systemid = systemid;
fs::create_dir_all(conf.pg_data_dirs_path())?;
// write config
let toml = toml::to_string(&local_env)?;
fs::write(repopath.join("config"), toml)?;
println!(
"new zenith repository was created in {}",
repopath.display()
);
let toml = toml::to_string_pretty(&conf)?;
fs::write(conf.base_data_dir.join("config"), toml)?;
Ok(())
}
// If control file says the cluster was shut down cleanly, modify it, to mark
// it as crashed. That forces crash recovery when you start the cluster.
//
// FIXME:
// We currently do this to the initial snapshot in "zenith init". It would
// be more natural to do this when the snapshot is restored instead, but we
// currently don't have any code to create new snapshots, so it doesn't matter
// Or better yet, use a less hacky way of putting the cluster into recovery.
// Perhaps create a backup label file in the data directory when it's restored.
fn force_crash_recovery(datadir: &Path) -> Result<()> {
// Read in the control file
let controlfilepath = datadir.to_path_buf().join("global").join("pg_control");
let mut controlfile =
postgres_ffi::decode_pg_control(Bytes::from(fs::read(controlfilepath.as_path())?))?;
controlfile.state = postgres_ffi::DBState_DB_IN_PRODUCTION;
fs::write(
controlfilepath.as_path(),
postgres_ffi::encode_pg_control(controlfile),
)?;
Ok(())
}
@@ -231,159 +225,18 @@ pub fn load_config(repopath: &Path) -> Result<LocalEnv> {
// Locate and load config
pub fn load_config() -> Result<LocalEnv> {
let repopath = base_path();
// check that config file is present
if !repopath.exists() {
anyhow::bail!(
"Zenith config is not found in {}. You need to run 'zenith init' first",
repopath.display()
);
}
// TODO: check that it looks like a zenith repository
// load and parse file
let config = fs::read_to_string(repopath.join("config"))?;
toml::from_str(config.as_str()).map_err(|e| e.into())
}
// local env for tests
pub fn test_env(testname: &str) -> LocalEnv {
fs::create_dir_all("../tmp_check").expect("could not create directory ../tmp_check");
let repo_path = Path::new(env!("CARGO_MANIFEST_DIR"))
.join("../tmp_check/")
.join(testname);
// Remove remnants of old test repo
let _ = fs::remove_dir_all(&repo_path);
let mut local_env = LocalEnv {
repo_path,
pg_distrib_dir: Path::new(env!("CARGO_MANIFEST_DIR")).join("../tmp_install"),
zenith_distrib_dir: cargo_bin_dir(),
systemid: 0,
};
init_repo(&mut local_env).expect("could not initialize zenith repository");
local_env
}
// Find the directory where the binaries were put (i.e. target/debug/)
pub fn cargo_bin_dir() -> PathBuf {
let mut pathbuf = std::env::current_exe().unwrap();
pathbuf.pop();
if pathbuf.ends_with("deps") {
pathbuf.pop();
}
pathbuf
}
#[derive(Debug, Clone, Copy)]
pub struct PointInTime {
pub timelineid: ZTimelineId,
pub lsn: u64,
}
fn create_timeline(local_env: &LocalEnv, ancestor: Option<PointInTime>) -> Result<ZTimelineId> {
let repopath = &local_env.repo_path;
// Create initial timeline
let mut tli_buf = [0u8; 16];
rand::thread_rng().fill(&mut tli_buf);
let timelineid = ZTimelineId::from(tli_buf);
let timelinedir = repopath.join("timelines").join(timelineid.to_string());
fs::create_dir(&timelinedir)?;
fs::create_dir(&timelinedir.join("snapshots"))?;
fs::create_dir(&timelinedir.join("wal"))?;
if let Some(ancestor) = ancestor {
let data = format!(
"{}@{:X}/{:X}",
ancestor.timelineid,
ancestor.lsn >> 32,
ancestor.lsn & 0xffffffff
);
fs::write(timelinedir.join("ancestor"), data)?;
}
Ok(timelineid)
}
// Parse an LSN in the format used in filenames
//
// For example: 00000000015D3DD8
//
fn parse_lsn(s: &str) -> std::result::Result<u64, std::num::ParseIntError> {
u64::from_str_radix(s, 16)
}
// Create a new branch in the repository (for the "zenith branch" subcommand)
pub fn create_branch(
local_env: &LocalEnv,
branchname: &str,
startpoint: PointInTime,
) -> Result<()> {
let repopath = &local_env.repo_path;
// create a new timeline for it
let newtli = create_timeline(local_env, Some(startpoint))?;
let newtimelinedir = repopath.join("timelines").join(newtli.to_string());
let data = newtli.to_string();
fs::write(
repopath.join("refs").join("branches").join(branchname),
data,
)?;
// Copy the latest snapshot (TODO: before the startpoint) and all WAL
// TODO: be smarter and avoid the copying...
let (_maxsnapshot, oldsnapshotdir) = find_latest_snapshot(local_env, startpoint.timelineid)?;
let copy_opts = fs_extra::dir::CopyOptions::new();
fs_extra::dir::copy(oldsnapshotdir, newtimelinedir.join("snapshots"), &copy_opts)?;
let oldtimelinedir = repopath
.join("timelines")
.join(startpoint.timelineid.to_string());
let mut copy_opts = fs_extra::dir::CopyOptions::new();
copy_opts.content_only = true;
fs_extra::dir::copy(
oldtimelinedir.join("wal"),
newtimelinedir.join("wal"),
&copy_opts,
)?;
Ok(())
}
// Save config. We use that to change set of remotes from CLI itself.
pub fn save_config(conf: &LocalEnv) -> Result<()> {
let config_path = base_path().join("config");
let conf_str = toml::to_string_pretty(conf)?;
fs::write(config_path, conf_str)?;
Ok(())
}
// Find the end of valid WAL in a wal directory
pub fn find_end_of_wal(local_env: &LocalEnv, timeline: ZTimelineId) -> Result<u64> {
let repopath = &local_env.repo_path;
let waldir = repopath
.join("timelines")
.join(timeline.to_string())
.join("wal");
let (lsn, _tli) = xlog_utils::find_end_of_wal(&waldir, 16 * 1024 * 1024, true);
Ok(lsn)
}
// Find the latest snapshot for a timeline
fn find_latest_snapshot(local_env: &LocalEnv, timeline: ZTimelineId) -> Result<(u64, PathBuf)> {
let repopath = &local_env.repo_path;
let snapshotsdir = repopath
.join("timelines")
.join(timeline.to_string())
.join("snapshots");
let paths = fs::read_dir(&snapshotsdir)?;
let mut maxsnapshot: u64 = 0;
let mut snapshotdir: Option<PathBuf> = None;
for path in paths {
let path = path?;
let filename = path.file_name().to_str().unwrap().to_owned();
if let Ok(lsn) = parse_lsn(&filename) {
if lsn > maxsnapshot {
maxsnapshot = lsn;
snapshotdir = Some(path.path());
}
}
}
if maxsnapshot == 0 {
// TODO: check ancestor timeline
anyhow::bail!("no snapshot found in {}", snapshotsdir.display());
}
Ok((maxsnapshot, snapshotdir.unwrap()))
}


@@ -1,194 +1,112 @@
use std::collections::HashMap;
use std::fs;
use std::io;
use std::net::SocketAddr;
use std::net::TcpStream;
use std::path::{Path, PathBuf};
use std::process::Command;
use std::str::FromStr;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use anyhow::{anyhow, bail, Result};
use nix::sys::signal::{kill, Signal};
use nix::unistd::Pid;
use postgres::{Config, NoTls};
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
use crate::local_env::LocalEnv;
use pageserver::ZTimelineId;
//
// Collection of several example deployments useful for tests.
//
// I'm intendedly modelling storage and compute control planes as a separate entities
// as it is closer to the actual setup.
//
pub struct TestStorageControlPlane {
pub wal_acceptors: Vec<WalAcceptorNode>,
pub pageserver: Arc<PageServerNode>,
pub test_done: AtomicBool,
pub repopath: PathBuf,
}
impl TestStorageControlPlane {
// Peek into the repository, to grab the timeline ID of given branch
pub fn get_branch_timeline(&self, branchname: &str) -> ZTimelineId {
let branchpath = self.repopath.join("refs/branches/".to_owned() + branchname);
ZTimelineId::from_str(&(fs::read_to_string(&branchpath).unwrap())).unwrap()
}
// postgres <-> page_server
//
// Initialize a new repository and configure a page server to run in it
//
pub fn one_page_server(local_env: &LocalEnv) -> TestStorageControlPlane {
let repopath = local_env.repo_path.clone();
let pserver = Arc::new(PageServerNode {
env: local_env.clone(),
kill_on_exit: true,
listen_address: None,
});
pserver.start().unwrap();
TestStorageControlPlane {
wal_acceptors: Vec::new(),
pageserver: pserver,
test_done: AtomicBool::new(false),
repopath,
}
}
pub fn one_page_server_no_start(local_env: &LocalEnv) -> TestStorageControlPlane {
let repopath = local_env.repo_path.clone();
let pserver = Arc::new(PageServerNode {
env: local_env.clone(),
kill_on_exit: true,
listen_address: None,
});
TestStorageControlPlane {
wal_acceptors: Vec::new(),
pageserver: pserver,
test_done: AtomicBool::new(false),
repopath,
}
}
// postgres <-> {wal_acceptor1, wal_acceptor2, ...}
pub fn fault_tolerant(local_env: &LocalEnv, redundancy: usize) -> TestStorageControlPlane {
let repopath = local_env.repo_path.clone();
let mut cplane = TestStorageControlPlane {
wal_acceptors: Vec::new(),
pageserver: Arc::new(PageServerNode {
env: local_env.clone(),
kill_on_exit: true,
listen_address: None,
}),
test_done: AtomicBool::new(false),
repopath,
};
cplane.pageserver.start().unwrap();
const WAL_ACCEPTOR_PORT: usize = 54321;
for i in 0..redundancy {
let wal_acceptor = WalAcceptorNode {
listen: format!("127.0.0.1:{}", WAL_ACCEPTOR_PORT + i)
.parse()
.unwrap(),
data_dir: local_env.repo_path.join(format!("wal_acceptor_{}", i)),
env: local_env.clone(),
};
wal_acceptor.init();
wal_acceptor.start();
cplane.wal_acceptors.push(wal_acceptor);
}
cplane
}
pub fn stop(&self) {
for wa in self.wal_acceptors.iter() {
let _ = wa.stop();
}
self.test_done.store(true, Ordering::Relaxed);
}
pub fn get_wal_acceptor_conn_info(&self) -> String {
self.wal_acceptors
.iter()
.map(|wa| wa.listen.to_string())
.collect::<Vec<String>>()
.join(",")
}
pub fn is_running(&self) -> bool {
self.test_done.load(Ordering::Relaxed)
}
}
impl Drop for TestStorageControlPlane {
fn drop(&mut self) {
self.stop();
}
}
use crate::read_pidfile;
use pageserver::branches::BranchInfo;
use zenith_utils::connstring::connection_address;
//
// Control routines for pageserver.
//
// Used in CLI and tests.
//
#[derive(Debug)]
pub struct PageServerNode {
pub kill_on_exit: bool,
pub connection_config: Config,
pub env: LocalEnv,
}
impl PageServerNode {
pub fn from_env(env: &LocalEnv) -> PageServerNode {
let password = if matches!(env.auth_type, AuthType::ZenithJWT) {
&env.auth_token
} else {
""
};
PageServerNode {
kill_on_exit: false,
connection_config: Self::default_config(password), // default
env: env.clone(),
}
}
fn default_config(password: &str) -> Config {
format!("postgresql://no_user:{}@localhost:64000/no_db", password)
.parse()
.unwrap()
}
pub fn init(&self, create_tenant: Option<&str>, enable_auth: bool, repository_format: Option<&str>) -> Result<()> {
let mut cmd = Command::new(self.env.pageserver_bin()?);
let mut args = vec![
"--init",
"-D",
self.env.base_data_dir.to_str().unwrap(),
"--postgres-distrib",
self.env.pg_distrib_dir.to_str().unwrap(),
];
if enable_auth {
args.extend(&["--auth-validation-public-key-path", "auth_public_key.pem"]);
args.extend(&["--auth-type", "ZenithJWT"]);
}
if let Some(repo_format) = repository_format {
args.extend(&["--repository-format", repo_format]);
}
create_tenant.map(|tenantid| args.extend(&["--create-tenant", tenantid]));
let status = cmd
.args(args)
.env_clear()
.env("RUST_BACKTRACE", "1")
.status()
.expect("pageserver init failed");
if status.success() {
Ok(())
} else {
Err(anyhow!("pageserver init failed"))
}
}
pub fn repo_path(&self) -> PathBuf {
self.env.pageserver_data_dir()
}
pub fn pid_file(&self) -> PathBuf {
self.repo_path().join("pageserver.pid")
}
pub fn start(&self) -> Result<()> {
println!(
"Starting pageserver at '{}' in {}",
connection_address(&self.connection_config),
self.repo_path().display()
);
let mut cmd = Command::new(self.env.pageserver_bin()?);
cmd.args(&["-D", self.repo_path().to_str().unwrap()])
.arg("-d")
.env_clear()
.env("RUST_BACKTRACE", "1")
.env("ZENITH_REPO_DIR", self.repo_path())
.env("PATH", self.env.pg_bin_dir().to_str().unwrap()) // needs postgres-wal-redo binary
.env("LD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", self.env.pg_lib_dir().to_str().unwrap());
if !cmd.status()?.success() {
bail!(
"Pageserver failed to start. See '{}' for details.",
self.repo_path().join("pageserver.log").display()
);
@@ -197,74 +115,141 @@ impl PageServerNode {
// It takes a while for the page server to start up. Wait until it is
// open for business.
for retries in 1..15 {
match self.page_server_psql_client() {
Ok(_) => {
println!("Pageserver started");
return Ok(());
}
Err(err) => {
println!(
"Pageserver not responding yet, err {} retrying ({})...",
err, retries
);
thread::sleep(Duration::from_secs(1));
}
}
}
bail!("pageserver failed to start");
}
pub fn stop(&self) -> Result<()> {
let pid = read_pidfile(&self.pid_file())?;
let pid = Pid::from_raw(pid);
if kill(pid, Signal::SIGTERM).is_err() {
bail!("Failed to kill pageserver with pid {}", pid);
}
// wait for pageserver stop
let address = connection_address(&self.connection_config);
for _ in 0..5 {
let stream = TcpStream::connect(&address);
thread::sleep(Duration::from_secs(1));
if let Err(_e) = stream {
println!("Pageserver stopped");
return Ok(());
}
println!("Stopping pageserver on {}", address);
}
bail!("Failed to stop pageserver with pid {}", pid);
}
pub fn page_server_psql(&self, sql: &str) -> Vec<postgres::SimpleQueryMessage> {
let mut client = self.connection_config.connect(NoTls).unwrap();
println!("Pageserver query: '{}'", sql);
client.simple_query(sql).unwrap()
}
pub fn page_server_psql_client(&self) -> Result<postgres::Client, postgres::Error> {
self.connection_config.connect(NoTls)
}
pub fn tenants_list(&self) -> Result<Vec<String>> {
let mut client = self.page_server_psql_client()?;
let query_result = client.simple_query("tenant_list")?;
let tenants_json = query_result
.first()
.map(|msg| match msg {
postgres::SimpleQueryMessage::Row(row) => row.get(0),
_ => None,
})
.flatten()
.ok_or_else(|| anyhow!("missing tenants"))?;
Ok(serde_json::from_str(tenants_json)?)
}
pub fn tenant_create(&self, tenantid: &ZTenantId) -> Result<()> {
let mut client = self.page_server_psql_client()?;
client.simple_query(format!("tenant_create {}", tenantid).as_str())?;
Ok(())
}
pub fn branches_list(&self, tenantid: &ZTenantId) -> Result<Vec<BranchInfo>> {
let mut client = self.page_server_psql_client()?;
let query_result = client.simple_query(&format!("branch_list {}", tenantid))?;
let branches_json = query_result
.first()
.map(|msg| match msg {
postgres::SimpleQueryMessage::Row(row) => row.get(0),
_ => None,
})
.flatten()
.ok_or_else(|| anyhow!("missing branches"))?;
Ok(serde_json::from_str(branches_json)?)
}
pub fn branch_create(
&self,
branch_name: &str,
startpoint: &str,
tenantid: &ZTenantId,
) -> Result<BranchInfo> {
let mut client = self.page_server_psql_client()?;
let query_result = client.simple_query(
format!("branch_create {} {} {}", tenantid, branch_name, startpoint).as_str(),
)?;
let branch_json = query_result
.first()
.map(|msg| match msg {
postgres::SimpleQueryMessage::Row(row) => row.get(0),
_ => None,
})
.flatten()
.ok_or_else(|| anyhow!("missing branch"))?;
let res: BranchInfo = serde_json::from_str(branch_json).map_err(|e| {
anyhow!(
"failed to parse branch_create response: {}: {}",
branch_json,
e
)
})?;
Ok(res)
}
// TODO: make this a separate request type and avoid loading all the branches
pub fn branch_get_by_name(
&self,
tenantid: &ZTenantId,
branch_name: &str,
) -> Result<BranchInfo> {
let branch_infos = self.branches_list(tenantid)?;
let branche_by_name: Result<HashMap<String, BranchInfo>> = branch_infos
.into_iter()
.map(|branch_info| Ok((branch_info.name.clone(), branch_info)))
.collect();
let branche_by_name = branche_by_name?;
let branch = branche_by_name
.get(branch_name)
.ok_or_else(|| anyhow!("Branch {} not found", branch_name))?;
Ok(branch.clone())
}
}
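Taken together, a test or CLI command might drive these control routines roughly as in the sketch below. This is illustrative only: it uses just the methods defined above, assumes a LocalEnv and tenant id already exist, and simplifies error handling.

```rust
// A sketch; LocalEnv, ZTenantId, Result and anyhow are already imported at the top of this file.
fn pageserver_smoke_test(env: &LocalEnv, tenantid: &ZTenantId) -> Result<()> {
    let pageserver = PageServerNode::from_env(env);

    // Initialize the pageserver data directory for this tenant,
    // with auth disabled and the default repository format.
    let tenantid_str = tenantid.to_string();
    pageserver.init(Some(tenantid_str.as_str()), false, None)?;
    pageserver.start()?;

    // Create a branch off 'main' (startpoint syntax assumed) and list the result.
    let branch = pageserver.branch_create("experimental", "main", tenantid)?;
    println!("created branch {}", branch.name);
    for b in pageserver.branches_list(tenantid)? {
        println!("branch: {}", b.name);
    }

    pageserver.stop()
}
```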
@@ -275,106 +260,3 @@ impl Drop for PageServerNode {
}
}
}
//
// Control routines for WalAcceptor.
//
// Now used only in test setups.
//
pub struct WalAcceptorNode {
listen: SocketAddr,
data_dir: PathBuf,
env: LocalEnv,
}
impl WalAcceptorNode {
pub fn init(&self) {
if self.data_dir.exists() {
fs::remove_dir_all(self.data_dir.clone()).unwrap();
}
fs::create_dir_all(self.data_dir.clone()).unwrap();
}
pub fn start(&self) {
println!(
"Starting wal_acceptor in {} listening '{}'",
self.data_dir.to_str().unwrap(),
self.listen
);
let status = Command::new(self.env.zenith_distrib_dir.join("wal_acceptor"))
.args(&["-D", self.data_dir.to_str().unwrap()])
.args(&["-l", self.listen.to_string().as_str()])
.args(&["--systemid", &self.env.systemid.to_string()])
// Tell page server it can receive WAL from this WAL safekeeper
// FIXME: If there are multiple safekeepers, they will all inform
// the page server. Only the last "notification" will stay in effect.
// So it's pretty random which safekeeper the page server will connect to
.args(&["--pageserver", "127.0.0.1:64000"])
.arg("-d")
.arg("-n")
.status()
.expect("failed to start wal_acceptor");
if !status.success() {
panic!("wal_acceptor start failed");
}
}
pub fn stop(&self) -> std::result::Result<(), io::Error> {
println!("Stopping wal acceptor on {}", self.listen);
let pidfile = self.data_dir.join("wal_acceptor.pid");
let pid = read_pidfile(&pidfile)?;
// Ignores any failures when running this command
let _status = Command::new("kill")
.arg(pid)
.env_clear()
.status()
.expect("failed to execute kill");
Ok(())
}
}
impl Drop for WalAcceptorNode {
fn drop(&mut self) {
self.stop().unwrap();
}
}
///////////////////////////////////////////////////////////////////////////////
pub struct WalProposerNode {
pub pid: u32,
}
impl WalProposerNode {
pub fn stop(&self) {
let status = Command::new("kill")
.arg(self.pid.to_string())
.env_clear()
.status()
.expect("failed to execute kill");
if !status.success() {
panic!("kill start failed");
}
}
}
impl Drop for WalProposerNode {
fn drop(&mut self) {
self.stop();
}
}
/// Read a PID file
///
/// This should contain an unsigned integer, but we return it as a String
/// because our callers only want to pass it back into a subcommand.
fn read_pidfile(pidfile: &Path) -> std::result::Result<String, io::Error> {
fs::read_to_string(pidfile).map_err(|err| {
eprintln!("failed to read pidfile {:?}: {:?}", pidfile, err);
err
})
}

docker-entrypoint.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/sh
if [ "$1" = 'pageserver' ]; then
if [ ! -d "/data/timelines" ]; then
echo "Initializing pageserver data directory"
pageserver --init -D /data --postgres-distrib /usr/local
fi
echo "Staring pageserver at 0.0.0.0:6400"
pageserver -l 0.0.0.0:6400 -D /data
else
"$@"
fi

docs/authentication.md Normal file

@@ -0,0 +1,30 @@
## Authentication
### Overview
Authentication currently uses JWT tokens for communication between compute and pageserver and between the CLI and pageserver. The JWT is signed with an RSA key pair, which the CLI generates during `zenith init` using the following openssl commands:
```bash
openssl genrsa -out private_key.pem 2048
openssl rsa -in private_key.pem -pubout -outform PEM -out public_key.pem
```
The CLI also generates a signed token and saves it in its config for later access to the pageserver. Authentication is currently optional. The pageserver has two config variables, `auth_validation_public_key_path` and `auth_type`: when `auth_type` is present and set to `ZenithJWT`, the pageserver requires authentication for connections. The JWT itself is passed in the password field of the connection string. One caveat for psql: it silently truncates passwords to 100 characters, so to pass a JWT via psql correctly you have to either use the PGPASSWORD environment variable or store the password in the psql config file.
There is currently no authentication between compute and safekeepers, because that communication layer is under heavy refactoring; support for authentication will be added there once the refactoring is done. For now the safekeeper supports a "hardcoded" token passed via an environment variable, so that it can use the callmemaybe command in the pageserver.
Compute uses a token passed via an environment variable to communicate with the pageserver, and in the future with the safekeeper too.
JWT authentication currently supports two scopes: tenant and pageserverapi. The tenant scope is intended for tenant-related API calls, e.g. create_branch; a compute launched for a particular tenant also uses this scope. The pageserverapi scope is intended to be used by the console to manage the pageserver. For now there is only one management operation: create tenant.
Examples for token generation in python:
```python
# generate pageserverapi token
management_token = jwt.encode({"scope": "pageserverapi"}, auth_keys.priv, algorithm="RS256")
# generate tenant token
tenant_token = jwt.encode({"scope": "tenant", "tenant_id": ps.initial_tenant}, auth_keys.priv, algorithm="RS256")
```
Utility functions for working with JWTs in Rust are located in zenith_utils/src/auth.rs.
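For illustration, here is a minimal Rust sketch of signing an equivalent token with the `jsonwebtoken` crate. The actual helpers live in zenith_utils/src/auth.rs; the claim struct below is an assumption modelled on the Python example above, not the real type.
```rust
use jsonwebtoken::{encode, Algorithm, EncodingKey, Header};
use serde::Serialize;

// Hypothetical claim shape mirroring the Python example above.
#[derive(Serialize)]
struct Claims {
    scope: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    tenant_id: Option<String>,
}

// Sign a token with the RSA private key generated by `zenith init`.
fn sign_token(
    private_key_pem: &[u8],
    scope: &str,
    tenant_id: Option<String>,
) -> jsonwebtoken::errors::Result<String> {
    let claims = Claims {
        scope: scope.to_string(),
        tenant_id,
    };
    // RS256 matches the RSA key pair generated by the openssl commands above.
    encode(
        &Header::new(Algorithm::RS256),
        &claims,
        &EncodingKey::from_rsa_pem(private_key_pem)?,
    )
}
```
For example, `sign_token(&std::fs::read("auth_private_key.pem")?, "pageserverapi", None)` would produce a management token analogous to `management_token` in the Python snippet.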

docs/multitenancy.md Normal file

@@ -0,0 +1,59 @@
## Multitenancy
### Overview
Zenith supports multitenancy: one pageserver can serve multiple tenants at once. Tenants can be managed via the zenith CLI. During pageserver setup a tenant can be created with `zenith init --create-tenant`. Tenants can also be added on the fly, without a pageserver restart, using `zenith tenant create`. Tenants use random identifiers that can be represented as 32-character hexadecimal strings, and `zenith tenant create` accepts the desired tenant id as an optional argument. The concept of timelines/branches works independently per tenant.
### Tenants in other commands
By default `zenith init` creates a new tenant on the pageserver. The newly created tenant's id is saved to the CLI config, so other commands can use it automatically when no explicit `--tenantid=<tenantid>` argument is provided. The tenantid therefore appears more often in the internal pageserver interface, whose commands take a tenantid argument to identify which tenant an operation applies to. The CLI supports creation of new tenants.
Examples for cli:
```sh
zenith tenant list
zenith tenant create // generates new id
zenith tenant create ee6016ec31116c1b7c33dfdfca38892f
zenith pg create main // default tenant from zenith init
zenith pg create main --tenantid=ee6016ec31116c1b7c33dfdfca38892f
zenith branch --tenantid=ee6016ec31116c1b7c33dfdfca38892f
```
### Data layout
On the pageserver, tenants introduce one level of indirection, so the data directory is structured the following way:
```
<pageserver working directory>
├── pageserver.log
├── pageserver.pid
├── pageserver.toml
└── tenants
├── 537cffa58a4fa557e49e19951b5a9d6b
├── de182bc61fb11a5a6b390a8aed3a804a
└── ee6016ec31116c1b7c33dfdfca38891f
```
WAL redo activity, timelines, and snapshots are managed independently for each tenant.
The local environment used, for example, in tests also gets a new level of indirection for tenants: the `pgdatadirs` directory now contains a `tenants` subdirectory, so the structure looks the following way (a small path-composition sketch follows the tree):
```
pgdatadirs
└── tenants
├── de182bc61fb11a5a6b390a8aed3a804a
│ └── main
└── ee6016ec31116c1b7c33dfdfca38892f
└── main
```
### Changes to postgres
The tenant id is passed to postgres via a GUC, the same way as the timeline. The tenant id is added to commands issued to the pageserver, namely pagestream and callmemaybe. The tenant id also exists in the ServerInfo structure; this is needed to pass the value to the WAL receiver so it can forward it to the pageserver.
### Safety
For now a particular tenant can only appear on a particular pageserver. The set of WAL acceptors is also pinned to a particular (tenantid, timeline) pair, so there can be only one writer for a given (tenantid, timeline).


@@ -1 +0,0 @@
tmp_check/


@@ -1,17 +0,0 @@
[package]
name = "integration_tests"
version = "0.1.0"
authors = ["Stas Kelvich <stas@zenith.tech>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
lazy_static = "1.4.0"
rand = "0.8.3"
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
pageserver = { path = "../pageserver" }
walkeeper = { path = "../walkeeper" }
control_plane = { path = "../control_plane" }


@@ -1,11 +0,0 @@
// test node resettlement to an empty datadir
// TODO
/*
#[test]
fn test_resettlement() {}
// test seq scan of everythin after restart
#[test]
fn test_cold_seqscan() {}
*/


@@ -1,8 +0,0 @@
// TODO
/*
#[test]
fn test_actions() {}
#[test]
fn test_regress() {}
*/


@@ -1,148 +0,0 @@
// mod control_plane;
use control_plane::compute::ComputeControlPlane;
use control_plane::local_env;
use control_plane::local_env::PointInTime;
use control_plane::storage::TestStorageControlPlane;
// XXX: force all redo at the end
// -- restart + seqscan won't read deleted stuff
// -- pageserver api endpoint to check all rels
#[test]
fn test_redo_cases() {
let local_env = local_env::test_env("test_redo_cases");
// Start pageserver that reads WAL directly from that postgres
let storage_cplane = TestStorageControlPlane::one_page_server(&local_env);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
// start postgres
let maintli = storage_cplane.get_branch_timeline("main");
let node = compute_cplane.new_test_node(maintli);
node.start().unwrap();
// check basic work with table
node.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
node.safe_psql(
"postgres",
"INSERT INTO t SELECT generate_series(1,100000), 'payload'",
);
let count: i64 = node
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 5000050000);
// check 'create table as'
node.safe_psql("postgres", "CREATE TABLE t2 AS SELECT * FROM t");
let count: i64 = node
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 5000050000);
}
// Runs pg_regress on a compute node
#[test]
fn test_regress() {
let local_env = local_env::test_env("test_regress");
// Start pageserver that reads WAL directly from that postgres
let storage_cplane = TestStorageControlPlane::one_page_server(&local_env);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
// start postgres
let maintli = storage_cplane.get_branch_timeline("main");
let node = compute_cplane.new_test_node(maintli);
node.start().unwrap();
node.pg_regress();
}
// Runs pg_bench on a compute node
#[test]
fn pgbench() {
let local_env = local_env::test_env("pgbench");
// Start pageserver that reads WAL directly from that postgres
let storage_cplane = TestStorageControlPlane::one_page_server(&local_env);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
// start postgres
let maintli = storage_cplane.get_branch_timeline("main");
let node = compute_cplane.new_test_node(maintli);
node.start().unwrap();
node.pg_bench(10, 100);
}
// Run two postgres instances on one pageserver, on different timelines
#[test]
fn test_pageserver_two_timelines() {
let local_env = local_env::test_env("test_pageserver_two_timelines");
// Start pageserver that reads WAL directly from that postgres
let storage_cplane = TestStorageControlPlane::one_page_server(&local_env);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
let maintli = storage_cplane.get_branch_timeline("main");
// Create new branch at the end of 'main'
let startpoint = local_env::find_end_of_wal(&local_env, maintli).unwrap();
local_env::create_branch(
&local_env,
"experimental",
PointInTime {
timelineid: maintli,
lsn: startpoint,
},
)
.unwrap();
let experimentaltli = storage_cplane.get_branch_timeline("experimental");
// Launch postgres instances on both branches
let node1 = compute_cplane.new_test_node(maintli);
let node2 = compute_cplane.new_test_node(experimentaltli);
node1.start().unwrap();
node2.start().unwrap();
// check node1
node1.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
node1.safe_psql(
"postgres",
"INSERT INTO t SELECT generate_series(1,100000), 'payload'",
);
let count: i64 = node1
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 5000050000);
// check node2
node2.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
node2.safe_psql(
"postgres",
"INSERT INTO t SELECT generate_series(100000,200000), 'payload'",
);
let count: i64 = node2
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 15000150000);
}


@@ -1,308 +0,0 @@
// Restart acceptors one by one while compute is under the load.
use control_plane::compute::ComputeControlPlane;
use control_plane::local_env;
use control_plane::local_env::PointInTime;
use control_plane::storage::TestStorageControlPlane;
use pageserver::ZTimelineId;
use rand::Rng;
use std::sync::Arc;
use std::time::SystemTime;
use std::{thread, time};
#[test]
fn test_acceptors_normal_work() {
let local_env = local_env::test_env("test_acceptors_normal_work");
const REDUNDANCY: usize = 3;
let storage_cplane = TestStorageControlPlane::fault_tolerant(&local_env, REDUNDANCY);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
let wal_acceptors = storage_cplane.get_wal_acceptor_conn_info();
// start postgres
let maintli = storage_cplane.get_branch_timeline("main");
let node = compute_cplane.new_test_master_node(maintli);
node.start().unwrap();
// start proxy
let _proxy = node.start_proxy(&wal_acceptors);
// check basic work with table
node.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
node.safe_psql(
"postgres",
"INSERT INTO t SELECT generate_series(1,100000), 'payload'",
);
let count: i64 = node
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 5000050000);
// check wal files equality
}
// Run page server and multiple safekeepers, and multiple compute nodes running
// against different timelines.
#[test]
fn test_many_timelines() {
// Initialize a new repository, and set up WAL safekeepers and page server.
const REDUNDANCY: usize = 3;
const N_TIMELINES: usize = 5;
let local_env = local_env::test_env("test_many_timelines");
let storage_cplane = TestStorageControlPlane::fault_tolerant(&local_env, REDUNDANCY);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
let wal_acceptors = storage_cplane.get_wal_acceptor_conn_info();
// Create branches
let mut timelines: Vec<ZTimelineId> = Vec::new();
let maintli = storage_cplane.get_branch_timeline("main"); // main branch
timelines.push(maintli);
let startpoint = local_env::find_end_of_wal(&local_env, maintli).unwrap();
for i in 1..N_TIMELINES {
// additional branches
let branchname = format!("experimental{}", i);
local_env::create_branch(
&local_env,
&branchname,
PointInTime {
timelineid: maintli,
lsn: startpoint,
},
)
.unwrap();
let tli = storage_cplane.get_branch_timeline(&branchname);
timelines.push(tli);
}
// start postgres on each timeline
let mut nodes = Vec::new();
for tli in timelines {
let node = compute_cplane.new_test_node(tli);
nodes.push(node.clone());
node.start().unwrap();
node.start_proxy(&wal_acceptors);
}
// create schema
for node in &nodes {
node.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
}
// Populate data
for node in &nodes {
node.safe_psql(
"postgres",
"INSERT INTO t SELECT generate_series(1,100000), 'payload'",
);
}
// Check data
for node in &nodes {
let count: i64 = node
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 5000050000);
}
}
// Majority is always alive
#[test]
fn test_acceptors_restarts() {
let local_env = local_env::test_env("test_acceptors_restarts");
// Start pageserver that reads WAL directly from that postgres
const REDUNDANCY: usize = 3;
const FAULT_PROBABILITY: f32 = 0.01;
let storage_cplane = TestStorageControlPlane::fault_tolerant(&local_env, REDUNDANCY);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
let wal_acceptors = storage_cplane.get_wal_acceptor_conn_info();
let mut rng = rand::thread_rng();
// start postgres
let maintli = storage_cplane.get_branch_timeline("main");
let node = compute_cplane.new_test_master_node(maintli);
node.start().unwrap();
// start proxy
let _proxy = node.start_proxy(&wal_acceptors);
let mut failed_node: Option<usize> = None;
// check basic work with table
node.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
let mut psql = node.open_psql("postgres");
for i in 1..=1000 {
psql.execute("INSERT INTO t values ($1, 'payload')", &[&i])
.unwrap();
let prob: f32 = rng.gen();
if prob <= FAULT_PROBABILITY {
if let Some(node) = failed_node {
storage_cplane.wal_acceptors[node].start();
failed_node = None;
} else {
let node: usize = rng.gen_range(0..REDUNDANCY);
failed_node = Some(node);
storage_cplane.wal_acceptors[node].stop().unwrap();
}
}
}
let count: i64 = node
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 500500);
}
fn start_acceptor(cplane: &Arc<TestStorageControlPlane>, no: usize) {
let cp = cplane.clone();
thread::spawn(move || {
thread::sleep(time::Duration::from_secs(1));
cp.wal_acceptors[no].start();
});
}
// Stop majority of acceptors while compute is under the load. Boot
// them again and check that nothing was losed. Repeat.
// N_CRASHES env var
#[test]
fn test_acceptors_unavailability() {
let local_env = local_env::test_env("test_acceptors_unavailability");
// Start pageserver that reads WAL directly from that postgres
const REDUNDANCY: usize = 2;
let storage_cplane = TestStorageControlPlane::fault_tolerant(&local_env, REDUNDANCY);
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
let wal_acceptors = storage_cplane.get_wal_acceptor_conn_info();
// start postgres
let maintli = storage_cplane.get_branch_timeline("main");
let node = compute_cplane.new_test_master_node(maintli);
node.start().unwrap();
// start proxy
let _proxy = node.start_proxy(&wal_acceptors);
// check basic work with table
node.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
let mut psql = node.open_psql("postgres");
psql.execute("INSERT INTO t values (1, 'payload')", &[])
.unwrap();
storage_cplane.wal_acceptors[0].stop().unwrap();
let cp = Arc::new(storage_cplane);
start_acceptor(&cp, 0);
let now = SystemTime::now();
psql.execute("INSERT INTO t values (2, 'payload')", &[])
.unwrap();
assert!(now.elapsed().unwrap().as_secs() > 1);
psql.execute("INSERT INTO t values (3, 'payload')", &[])
.unwrap();
cp.wal_acceptors[1].stop().unwrap();
start_acceptor(&cp, 1);
psql.execute("INSERT INTO t values (4, 'payload')", &[])
.unwrap();
assert!(now.elapsed().unwrap().as_secs() > 2);
psql.execute("INSERT INTO t values (5, 'payload')", &[])
.unwrap();
let count: i64 = node
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 15);
}
fn simulate_failures(cplane: Arc<TestStorageControlPlane>) {
let mut rng = rand::thread_rng();
let n_acceptors = cplane.wal_acceptors.len();
let failure_period = time::Duration::from_secs(1);
while cplane.is_running() {
thread::sleep(failure_period);
let mask: u32 = rng.gen_range(0..(1 << n_acceptors));
for i in 0..n_acceptors {
if (mask & (1 << i)) != 0 {
cplane.wal_acceptors[i].stop().unwrap();
}
}
thread::sleep(failure_period);
for i in 0..n_acceptors {
if (mask & (1 << i)) != 0 {
cplane.wal_acceptors[i].start();
}
}
}
}
// Race condition test
#[test]
fn test_race_conditions() {
let local_env = local_env::test_env("test_race_conditions");
// Start pageserver that reads WAL directly from that postgres
const REDUNDANCY: usize = 3;
let storage_cplane = Arc::new(TestStorageControlPlane::fault_tolerant(
&local_env, REDUNDANCY,
));
let mut compute_cplane = ComputeControlPlane::local(&local_env, &storage_cplane.pageserver);
let wal_acceptors = storage_cplane.get_wal_acceptor_conn_info();
// start postgres
let maintli = storage_cplane.get_branch_timeline("main");
let node = compute_cplane.new_test_master_node(maintli);
node.start().unwrap();
// start proxy
let _proxy = node.start_proxy(&wal_acceptors);
// check basic work with table
node.safe_psql(
"postgres",
"CREATE TABLE t(key int primary key, value text)",
);
let cp = storage_cplane.clone();
let failures_thread = thread::spawn(move || {
simulate_failures(cp);
});
let mut psql = node.open_psql("postgres");
for i in 1..=1000 {
psql.execute("INSERT INTO t values ($1, 'payload')", &[&i])
.unwrap();
}
let count: i64 = node
.safe_psql("postgres", "SELECT sum(key) FROM t")
.first()
.unwrap()
.get(0);
println!("sum = {}", count);
assert_eq!(count, 500500);
storage_cplane.stop();
failures_thread.join().unwrap();
}


@@ -1,23 +0,0 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
# dependencies
/node_modules
/.pnp
.pnp.js
# testing
/coverage
# production
/build
# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*


@@ -1,55 +0,0 @@
Mock implementation of a management console.
See demo-howto.txt for usage.
Building and Installation
-------------------------
To compile Postgres:
sudo apt build-dep postgresql
sudo apt install bison flex libz-dev libssl-dev
sudo apt install ccache
sudo apt install libcurl4-openssl-dev libxml2-dev
For the webapp:
# NOTE: This requires at least version 1.1.0 of python3-flask. That's not
# available in Debian Buster, need at least Bullseye.
sudo apt install python3 python3-flask python3-pip npm webpack
pip3 install Flask-BasicAuth
pip3 install boto3
git clone and compile and install patched version of Postgres:
git clone https://github.com/libzenith/postgres.git
cd postgres
git checkout zenith-experiments
./configure --enable-debug --enable-cassert --with-openssl --prefix=/home/heikki/pgsql-install --with-libxml CC="ccache gcc" CFLAGS="-O0"
make -j4 -s install
Get the webapp:
cd ~
git clone https://github.com/libzenith/zenith-mgmt-console.git
cd zenith-mgmt-console
mkdir pgdatadirs
openssl req -new -x509 -days 365 -nodes -text -out server.crt \
-keyout server.key -subj "/CN=zenith-demo"
For Mock S3 server (unless you want to test against a real cloud service):
sudo apt install python3-tornado
cd ~/zenith-mgmt-console
git clone https://github.com/hlinnaka/ms3.git
Compile & run it:
npm install
webpack # compile React app
BASIC_AUTH_PASSWORD=<password> ./launch-local.sh
You can view the contents of the S3 bucket with browser:
http://<server>/list_bucket


@@ -1,340 +0,0 @@
from flask import request
from flask_basicauth import BasicAuth
from flask import render_template
from subprocess import PIPE, STDOUT, run, Popen
import html
import os
import re
import shutil
import logging
import time
import boto3
from boto3.session import Session
from botocore.client import Config
from botocore.handlers import set_list_objects_encoding_type_url
from flask import Flask
import waldump
app = Flask(__name__)
app.config['BASIC_AUTH_USERNAME'] = 'zenith'
app.config['BASIC_AUTH_PASSWORD'] = os.getenv('BASIC_AUTH_PASSWORD')
app.config['BASIC_AUTH_FORCE'] = True
basic_auth = BasicAuth(app)
# S3 configuration:
ENDPOINT = os.getenv('S3_ENDPOINT', 'https://localhost:9000')
ACCESS_KEY = os.getenv('S3_ACCESSKEY', 'minioadmin')
SECRET = os.getenv('S3_SECRET', '')
BUCKET = os.getenv('S3_BUCKET', 'foobucket')
print("Using bucket at " + ENDPOINT);
#boto3.set_stream_logger('botocore', logging.DEBUG)
session = Session(aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET,
region_name=os.getenv('S3_REGION', 'auto'))
# needed for google cloud?
session.events.unregister('before-parameter-build.s3.ListObjects',
set_list_objects_encoding_type_url)
s3resource = session.resource('s3',
endpoint_url=ENDPOINT,
verify=False,
config=Config(signature_version='s3v4'))
s3bucket = s3resource.Bucket(BUCKET)
s3_client = boto3.client('s3',
endpoint_url=ENDPOINT,
verify=False,
config=Config(signature_version='s3v4'),
aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET)
@app.route("/")
def index():
return render_template("index.html")
@app.route("/api/waldump")
def render_waldump():
return render_template("waldump.html")
@app.route('/api/fetch_wal')
def fetch_wal():
return waldump.fetch_wal(request, s3bucket);
@app.route("/api/server_status")
def server_status():
dirs = os.listdir("pgdatadirs")
dirs.sort()
primary = None
standbys = []
for dirname in dirs:
result = run("pg_ctl status -D pgdatadirs/" + dirname, stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=True)
srv = {
'datadir': dirname,
'status': result.stdout,
'port': None
}
if dirname == 'primary':
primary = srv;
primary['port'] = 5432;
else:
standby_match = re.search('standby_([0-9]+)', dirname)
if standby_match:
srv['port'] = int(standby_match.group(1))
standbys.append(srv);
return {'primary': primary, 'standbys': standbys}
@app.route('/api/list_bucket')
def list_bucket():
response = 'cloud bucket contents:<br>\n'
for file in s3bucket.objects.all():
response = response + html.escape(file.key) + '<br>\n'
return response
def walpos_str(walpos):
return '{:X}/{:X}'.format(walpos >> 32, walpos & 0xFFFFFFFF)
@app.route('/api/bucket_summary')
def bucket_summary():
nonrelimages = []
minwal = int(0)
maxwal = int(0)
minseqwal = int(0)
maxseqwal = int(0)
for file in s3bucket.objects.all():
path = file.key
match = re.search('nonreldata/nonrel_([0-9A-F]+).tar', path)
if match:
walpos = int(match.group(1), 16)
nonrelimages.append(walpos_str(walpos))
match = re.search('nonreldata/nonrel_([0-9A-F]+)-([0-9A-F]+)', path)
if match:
endwal = int(match.group(2), 16)
if endwal > maxwal:
maxwal = endwal
match = re.search('walarchive/([0-9A-F]{8})([0-9A-F]{8})([0-9A-F]{8})', path)
if match:
tli = int(match.group(1), 16)
logno = int(match.group(2), 16)
segno = int(match.group(3), 16)
# FIXME: this assumes default 16 MB wal segment size
logsegno = logno * (0x100000000 / (16*1024*1024)) + segno
seqwal = int((logsegno + 1) * (16*1024*1024))
if seqwal > maxseqwal:
maxseqwal = seqwal;
if minseqwal == 0 or seqwal < minseqwal:
minseqwal = seqwal;
return {
'nonrelimages': nonrelimages,
'minwal': walpos_str(minwal),
'maxwal': walpos_str(maxwal),
'minseqwal': walpos_str(minseqwal),
'maxseqwal': walpos_str(maxseqwal)
}
def print_cmd_result(cmd_result):
return print_cmd_result_ex(cmd_result.args, cmd_result.returncode, cmd_result.stdout)
def print_cmd_result_ex(cmd, returncode, stdout):
res = ''
res += 'ran command:\n' + str(cmd) + '\n'
res += 'It returned code ' + str(returncode) + '\n'
res += '\n'
res += 'stdout/stderr:\n'
res += stdout
return res
@app.route('/api/init_primary', methods=['GET', 'POST'])
def init_primary():
initdb_result = run("initdb -D pgdatadirs/primary --username=zenith --pwfile=pg-password.txt", stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=True)
if initdb_result.returncode != 0:
return print_cmd_result(initdb_result)
# Append archive_mode and archive_command and port to postgresql.conf
f=open("pgdatadirs/primary/postgresql.conf", "a+")
f.write("listen_addresses='*'\n")
f.write("archive_mode=on\n")
f.write("archive_command='zenith_push --archive-wal-path=%p --archive-wal-fname=%f'\n")
f.write("ssl=on\n")
f.close()
f=open("pgdatadirs/primary/pg_hba.conf", "a+")
f.write("# allow SSL connections with password from anywhere\n")
f.write("hostssl all all 0.0.0.0/0 md5\n")
f.write("hostssl all all ::0/0 md5\n")
f.close()
shutil.copyfile("server.crt", "pgdatadirs/primary/server.crt")
shutil.copyfile("server.key", "pgdatadirs/primary/server.key")
os.chmod("pgdatadirs/primary/server.key", 0o0600)
start_proc = Popen(args=["pg_ctl", "start", "-D", "pgdatadirs/primary", "-l", "pgdatadirs/primary/log"], stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=False, start_new_session=True, close_fds=True)
start_rc = start_proc.wait()
start_stdout, start_stderr = start_proc.communicate()
responsestr = print_cmd_result(initdb_result) + '\n'
responsestr += print_cmd_result_ex(start_proc.args, start_rc, start_stdout)
return responsestr
@app.route('/api/zenith_push', methods=['GET', 'POST'])
def zenith_push():
# Stop the primary if it's running
stop_result = run(args=["pg_ctl", "stop", "-D", "pgdatadirs/primary"], stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=False, start_new_session=True, close_fds=True)
# Call zenith_push
push_result = run("zenith_push -D pgdatadirs/primary", stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=True)
# Restart the primary
start_proc = Popen(args=["pg_ctl", "start", "-D", "pgdatadirs/primary", "-l", "pgdatadirs/primary/log"], stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=False, start_new_session=True, close_fds=True)
start_rc = start_proc.wait()
start_stdout, start_stderr = start_proc.communicate()
responsestr = print_cmd_result(stop_result) + '\n'
responsestr += print_cmd_result(push_result) + '\n'
responsestr += print_cmd_result_ex(start_proc.args, start_rc, start_stdout) + '\n'
return responsestr
@app.route('/api/create_standby', methods=['GET', 'POST'])
def create_standby():
walpos = request.form.get('walpos')
if not walpos:
return 'no walpos'
dirs = os.listdir("pgdatadirs")
last_port = 5432
for dirname in dirs:
standby_match = re.search('standby_([0-9]+)', dirname)
if standby_match:
port = int(standby_match.group(1))
if port > last_port:
last_port = port
standby_port = last_port + 1
standby_dir = "pgdatadirs/standby_" + str(standby_port)
# Call zenith_restore
restore_result = run(["zenith_restore", "--end=" + walpos, "-D", standby_dir], stdout=PIPE, stderr=STDOUT, encoding='latin1')
responsestr = print_cmd_result(restore_result)
if restore_result.returncode == 0:
# Append hot_standby and port to postgresql.conf
f=open(standby_dir + "/postgresql.conf", "a+")
f.write("hot_standby=on\n")
f.write("port=" + str(standby_port) + "\n")
f.close()
start_proc = Popen(args=["pg_ctl", "start", "-D", standby_dir, "-l", standby_dir + "/log"], stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=False, start_new_session=True, close_fds=True)
start_rc = start_proc.wait()
start_stdout, start_stderr = start_proc.communicate()
responsestr += '\n\n' + print_cmd_result_ex(start_proc.args, start_rc, start_stdout)
return responsestr
@app.route('/api/destroy_server', methods=['GET', 'POST'])
def destroy_primary():
datadir = request.form.get('datadir')
# Check that the datadir parameter doesn't contain anything funny.
if not re.match("^[A-Za-z0-9_-]+$", datadir):
raise Exception('invalid datadir: ' + datadir)
# Stop the server if it's running
stop_result = run(args=["pg_ctl", "stop", "-m", "immediate", "-D", "pgdatadirs/" + datadir], stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=False, start_new_session=True, close_fds=True)
shutil.rmtree('pgdatadirs/' + datadir, ignore_errors=True)
responsestr = print_cmd_result(stop_result) + '\n'
responsestr += 'Deleted datadir ' + datadir + '.\n'
return responsestr
@app.route('/api/restore_primary', methods=['GET', 'POST'])
def restore_primary():
# Call zenith_restore
restore_result = run(["zenith_restore", "-D", "pgdatadirs/primary"], stdout=PIPE, stderr=STDOUT, encoding='latin1')
responsestr = print_cmd_result(restore_result)
# Append restore_command to postgresql.conf, so that it can find the last raw WAL segments
f=open("pgdatadirs/primary/postgresql.conf", "a+")
f.write("listen_addresses='*'\n")
f.write("restore_command='zenith_restore --archive-wal-path=%p --archive-wal-fname=%f'\n")
f.write("ssl=on\n")
f.close()
if restore_result.returncode == 0:
start_proc = Popen(args=["pg_ctl", "start", "-D", "pgdatadirs/primary", "-l", "pgdatadirs/primary/log"], stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=False, start_new_session=True, close_fds=True)
start_rc = start_proc.wait()
start_stdout, start_stderr = start_proc.communicate()
responsestr += print_cmd_result_ex(start_proc.args, start_rc, start_stdout)
return responsestr
@app.route('/api/slicedice', methods=['GET', 'POST'])
def run_slicedice():
result = run("zenith_slicedice", stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=True)
responsestr = print_cmd_result(result)
return responsestr
@app.route('/api/reset_demo', methods=['POST'])
def reset_all():
result = run("pkill -9 postgres", stdout=PIPE, stderr=STDOUT, universal_newlines=True, shell=True)
dirs = os.listdir("pgdatadirs")
for dirname in dirs:
shutil.rmtree('pgdatadirs/' + dirname)
for file in s3bucket.objects.all():
s3_client.delete_object(Bucket = BUCKET, Key = file.key)
responsestr = print_cmd_result(result) + '\n'
responsestr += '''
Deleted all Postgres datadirs.
Deleted all files in object storage bucket.
'''
return responsestr
if __name__ == '__main__':
app.run()


@@ -1,3 +0,0 @@
module.exports = {
presets: ["@babel/preset-env", "@babel/preset-react"],
};


@@ -1,67 +0,0 @@
Mock implementation of a management console.
This isn't very different from a "normal" PostgreSQL installation with
a base backup and WAL archive. The main user-visible difference is
that when you create a standby server, we don't restore the whole data
directory, but only the "non-relation" files. Relation files are
restored on demand, when they're accessed the first time. That makes
the "create standby" operation is very fast, but with some delay when
you connect and start running queries instead. Most visible if you
have a large database. (However, see note below about large databases)
Note: lots of things are broken/unsafe. Things will fail if a table is
larger than 1 GB. Or if there are more than 1000 files in the cloud
bucket.
How to use this demo:
1. If there are any leftovers from previous runs, reset by clicking
the RESET DEMO button. This kills and deletes all Postgres servers,
and empties the cloud storage bucket
2. Create primary server by clicking on the "Init primary" button
3. Push a base image of the primary to cloud storage, by clicking the
"push base image" button. (This takes about 30 seconds, be
patient)
4. Connect to primary with psql, and create a test table with a little data.
psql postgres -p5432 -U zenith -h<host>
create table mytable (i int4);
insert into mytable values (1);
select pg_switch_wal();
The Postgres password is the same as for the management console.
5. Now that there's a new WAL segment in the archive, we can "slice &
dice" it. Click on the "Slice & dice" button.
6. Perform more updates on the primary, to generate more WAL.
insert into mytable values (2); select pg_switch_wal();
insert into mytable values (3); select pg_switch_wal();
insert into mytable values (4); select pg_switch_wal();
insert into mytable values (5); select pg_switch_wal();
7. Slice & Dice the WAL again.
8. Now you can create read-only standby servers at any point in the
WAL. Type a WAL position in the text box (or use the slider), and
click "Create new standby". The first standby is created at port 5433,
the second at port 5434, and so forth.
9. Connect to the standby with "psql -p 5433". Note that it takes a
few seconds until the connection is established. That's because the
standby has to restore the basic system catalogs, like pg_database and
pg_authid from the backup. After connecting, you can do "\d" to list
tables, this will also take a few seconds, as more catalog tables are
restored from backup. Subsequent commands will be faster.
Run queries in the standby:
select * from mytable;
the result depends on the LSN that you picked when you created the server.


@@ -1,463 +0,0 @@
import React, { useState, useEffect } from 'react';
import ReactDOM from 'react-dom';
import Loader from "react-loader-spinner";
import { Router, Route, Link, IndexRoute, hashHistory, browserHistory } from 'react-router';
function ServerStatus(props) {
const datadir = props.server.datadir;
const status = props.server.status;
const port = props.server.port;
return (
<div>
<h2>{ datadir == 'primary' ? 'Primary' : datadir }</h2>
status: <div className='status'>{status}</div><br/>
to connect: <span className='shellcommand'>psql -h { window.location.hostname } -p { port } -U zenith postgres</span><br/>
</div>
);
}
function StandbyList(props) {
const bucketSummary = props.bucketSummary;
const standbys = props.standbys;
const maxwalpos = bucketSummary.maxwal ? walpos_to_int(bucketSummary.maxwal) : 0;
const [walposInput, setWalposInput] = useState({ src: 'text', value: '0/0'});
// find earliest base image
const minwalpos = bucketSummary.nonrelimages ? bucketSummary.nonrelimages.reduce((minpos, imgpos_str, index, array) => {
const imgpos = walpos_to_int(imgpos_str);
return (minpos == 0 || imgpos < minpos) ? imgpos : minpos;
}, 0) : 0;
const can_create_standby = minwalpos > 0 && maxwalpos > 0 && maxwalpos >= minwalpos;
var walpos_valid = true;
function create_standby() {
const formdata = new FormData();
formdata.append("walpos", walposStr);
props.startOperation('Creating new standby at ' + walposStr + '...',
fetch("/api/create_standby", { method: 'POST', body: formdata }));
}
function destroy_standby(datadir) {
const formdata = new FormData();
formdata.append("datadir", datadir);
props.startOperation('Destroying ' + datadir + '...',
fetch("/api/destroy_server", { method: 'POST', body: formdata }));
}
const handleSliderChange = (event) => {
setWalposInput({ src: 'slider', value: event.target.value });
}
const handleWalposChange = (event) => {
setWalposInput({ src: 'text', value: event.target.value });
}
var sliderValue;
var walposStr;
if (walposInput.src == 'text')
{
const walpos = walpos_to_int(walposInput.value);
if (walpos >= minwalpos && walpos <= maxwalpos)
walpos_valid = true;
else
walpos_valid = false;
sliderValue = Math.round((walpos - minwalpos) / (maxwalpos - minwalpos) * 100);
walposStr = walposInput.value;
}
else
{
const slider = walposInput.value;
const new_walpos = minwalpos + slider / 100 * (maxwalpos - minwalpos);
console.log('minwalpos: '+ minwalpos);
console.log('maxwalpos: '+ maxwalpos);
walposStr = int_to_walpos(Math.round(new_walpos));
walpos_valid = true;
console.log(walposStr);
}
var standbystatus = ''
if (standbys)
{
standbystatus =
<div>
{
standbys.length > 0 ?
standbys.map((server) =>
<>
<ServerStatus key={ 'status_' + server.datadir} server={server}/>
<button key={ 'destroy_' + server.datadir} onClick={e => destroy_standby(server.datadir)}>Destroy standby</button>
</>
) : "no standby servers"
}
</div>
}
return (
<div>
<h2>Standbys</h2>
<button onClick={create_standby} disabled={!can_create_standby || !walpos_valid}>Create new Standby</button> at LSN
<input type="text" id="walpos_input" value={ walposStr } onChange={handleWalposChange} disabled={!can_create_standby}/>
<input type="range" id="walpos_slider" min="0" max="100" steps="1" value={sliderValue} onChange={handleSliderChange} disabled={!can_create_standby}/>
<br/>
{ standbystatus }
</div>
);
}
function ServerList(props) {
const primary = props.serverStatus ? props.serverStatus.primary : null;
const standbys = props.serverStatus ? props.serverStatus.standbys : [];
const bucketSummary = props.bucketSummary;
var primarystatus = '';
function destroy_primary() {
const formdata = new FormData();
formdata.append("datadir", 'primary');
props.startOperation('Destroying primary...',
fetch("/api/destroy_server", { method: 'POST', body: formdata }));
}
function restore_primary() {
props.startOperation('Restoring primary...',
fetch("/api/restore_primary", { method: 'POST' }));
}
if (primary)
{
primarystatus =
<div>
<ServerStatus server={primary}/>
<button onClick={destroy_primary}>Destroy primary</button>
</div>
}
else
{
primarystatus =
<div>
no primary server<br/>
<button onClick={restore_primary}>Restore primary</button>
</div>
}
return (
<>
{ primarystatus }
<StandbyList standbys={standbys} startOperation={props.startOperation} bucketSummary={props.bucketSummary}/>
<p className="todo">
Should we list the WAL safekeeper nodes here? Or are they part of the Storage? Or not visible to users at all?
</p>
</>
);
}
function BucketSummary(props) {
const bucketSummary = props.bucketSummary;
const startOperation = props.startOperation;
function slicedice() {
startOperation('Slicing sequential WAL to per-relation WAL...',
fetch("/api/slicedice", { method: 'POST' }));
}
if (!bucketSummary.nonrelimages)
{
return <>loading...</>
}
return (
<div>
<div>Base images at following WAL positions:
<ul>
{bucketSummary.nonrelimages.map((img) => (
<li key={img}>{img}</li>
))}
</ul>
</div>
Sliced WAL is available up to { bucketSummary.maxwal }<br/>
Raw WAL is available up to { bucketSummary.maxseqwal }<br/>
<br/>
<button onClick={slicedice}>Slice & Dice WAL</button>
<p className="todo">
Currently, the slicing or "sharding" of the WAL needs to be triggered manually, by clicking the above button.
<br/>
TODO: make it a continuous process that runs in the WAL safekeepers, or in the Page Servers, or as a standalone service.
</p>
</div>
);
}
function ProgressIndicator()
{
return (
<div>
<Loader
type="Puff"
color="#00BFFF"
height={100}
width={100}
/>
</div>
)
}
function walpos_to_int(walpos)
{
const [hi, lo] = walpos.split('/');
// weight the high word so this stays consistent with int_to_walpos() below
return parseInt(hi, 16) * 0x100000000 + parseInt(lo, 16);
}
function int_to_walpos(x)
{
console.log('converting ' + x);
return (Math.floor((x / 0x100000000)).toString(16) + '/' + (x % 0x100000000).toString(16)).toUpperCase();
}
function OperationStatus(props) {
const lastOperation = props.lastOperation;
const inProgress = props.inProgress;
const operationResult = props.operationResult;
if (lastOperation)
{
return (
<div><h2>Last operation:</h2>
<div>{lastOperation} { (!inProgress && lastOperation) ? 'done!' : '' }</div>
<div className='result'>
{inProgress ? <ProgressIndicator/> : <pre>{operationResult}</pre>}
</div>
</div>
);
}
else
return '';
}
function ActionButtons(props) {
const startOperation = props.startOperation;
const bucketSummary = props.bucketSummary;
function reset_demo() {
startOperation('resetting everything...',
fetch("/api/reset_demo", { method: 'POST' }));
}
function init_primary() {
startOperation('Initializing new primary...',
fetch("/api/init_primary", { method: 'POST' }));
}
function zenith_push() {
startOperation('Pushing new base image...',
fetch("/api/zenith_push", { method: 'POST' }));
}
return (
<div>
<p className="todo">
RESET DEMO deletes everything in the storage bucket, and stops and destroys all servers. This resets the whole demo environment to the initial state.
</p>
<button onClick={reset_demo}>RESET DEMO</button>
<p className="todo">
Init Primary runs initdb to create a new primary server. Click this after Resetting the demo.
</p>
<button onClick={init_primary}>Init primary</button>
<p className="todo">
Push Base Image stops the primary, copies the current state of the primary to the storage bucket as a new base backup, and restarts the primary.
<br/>
TODO: This should be handled by a continuous background process, probably running in the storage nodes. And without having to shut down the cluster, of course.
</p>
<button onClick={zenith_push}>Push base image</button>
</div>
);
}
function Sidenav(props)
{
const toPage = (page) => (event) => {
//event.preventDefault()
props.switchPage(page);
};
return (
<div>
<h3 className="sidenav-item">Menu</h3>
<a href="#servers" onClick={toPage('servers')} className="sidenav-item">Servers</a>
<a href="#storage" onClick={toPage('storage')} className="sidenav-item">Storage</a>
<a href="#snapshots" onClick={toPage('snapshots')} className="sidenav-item">Snapshots</a>
<a href="#demo" onClick={toPage('demo')} className="sidenav-item">Demo</a>
<a href="#import" onClick={toPage('import')} className="sidenav-item">Import / Export</a>
<a href="#jobs" onClick={toPage('jobs')} className="sidenav-item">Jobs</a>
</div>
);
}
function App()
{
const [page, setPage] = useState('servers');
const [serverStatus, setServerStatus] = useState({});
const [bucketSummary, setBucketSummary] = useState({});
const [lastOperation, setLastOperation] = useState('');
const [inProgress, setInProgress] = useState('');
const [operationResult, setOperationResult] = useState('');
useEffect(() => {
reloadStatus();
}, []);
function startOperation(operation, promise)
{
promise.then(result => result.text()).then(resultText => {
operationFinished(resultText);
});
setLastOperation(operation);
setInProgress(true);
setOperationResult('');
}
function operationFinished(result)
{
setInProgress(false);
setOperationResult(result);
reloadStatus();
}
function clearOperation()
{
setLastOperation('')
setInProgress('');
setOperationResult('');
console.log("cleared");
}
function reloadStatus()
{
fetch('/api/server_status').then(res => res.json()).then(data => {
setServerStatus(data);
});
fetch('/api/bucket_summary').then(res => res.json()).then(data => {
setBucketSummary(data);
});
}
const content = () => {
console.log(page);
if (page === 'servers') {
return (
<>
<h1>Server status</h1>
<ServerList startOperation={ startOperation }
serverStatus={ serverStatus }
bucketSummary={ bucketSummary }/>
</>
);
} else if (page === 'storage') {
return (
<>
<h1>Storage bucket status</h1>
<BucketSummary startOperation={ startOperation }
bucketSummary={ bucketSummary }/>
</>
);
} else if (page === 'snapshots') {
return (
<>
<h1>Snapshots</h1>
<p className="todo">
In Zenith, snapshots are just specific points (LSNs) in the WAL history, with a label. A snapshot prevents garbage collecting old data that's still needed to reconstruct the database at that LSN.
</p>
<p className="todo">
TODO:
<ul>
<li>List existing snapshots</li>
<li>Create new snapshot manually, from current state or from a given LSN</li>
<li>Drill into the WAL stream to see what has happened. Provide tools for e.g. finding the point where a table was dropped</li>
<li>Create snapshots automatically based on events in the WAL, like if you call pg_create_restore_point() in the primary</li>
<li>Launch new reader instance at a snapshot</li>
<li>Export snapshot</li>
<li>Rollback cluster to a snapshot</li>
</ul>
</p>
</>
);
} else if (page === 'demo') {
return (
<>
<h1>Misc actions</h1>
<ActionButtons startOperation={ startOperation }
bucketSummary={ bucketSummary }/>
</>
);
} else if (page === 'import') {
return (
<>
<h1>Import & Export tools</h1>
<p className="TODO">TODO:
<ul>
<li>Initialize database from existing backup (pg_basebackup, WAL-G, pgbackrest)</li>
<li>Initialize from a pg_dump or other SQL script</li>
<li>Launch batch job to import data files from S3</li>
<li>Launch batch job to export database with pg_dump to S3</li>
</ul>
These jobs can be run against reader processing nodes. We can even
spawn a new reader node dedicated to a job, and destroy it when the job is done.
</p>
</>
);
} else if (page === 'jobs') {
return (
<>
<h1>Batch jobs</h1>
<p className="TODO">TODO:
<ul>
<li>List running jobs launched from Import & Export tools</li>
<li>List other batch jobs launched by the user</li>
<li>Launch new batch jobs</li>
</ul>
</p>
</>
);
}
}
function switchPage(page)
{
console.log("topage " + page);
setPage(page)
clearOperation();
};
return (
<div className="row">
<div className="sidenav">
<Sidenav switchPage={switchPage} className="column"/>
</div>
<div className="column">
<div>
{ content() }
</div>
<OperationStatus lastOperation={ lastOperation }
inProgress = { inProgress }
operationResult = { operationResult }/>
</div>
</div>
);
}
ReactDOM.render(<App/>, document.getElementById('reactApp'));
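The console above shuttles WAL positions around as "hi/lo" hex strings (walpos_to_int / int_to_walpos). A hedged Rust sketch of the same conversion, assuming the usual encoding of the high and low 32 bits (the pageserver's zenith_utils::lsn::Lsn presumably handles this with proper error handling):

fn lsn_from_str(s: &str) -> Option<u64> {
    // "1/6F1A2B8" -> (1 << 32) | 0x6F1A2B8
    let (hi, lo) = s.split_once('/')?;
    Some((u64::from_str_radix(hi, 16).ok()? << 32) | u64::from_str_radix(lo, 16).ok()?)
}

fn lsn_to_str(lsn: u64) -> String {
    format!("{:X}/{:X}", lsn >> 32, lsn & 0xffff_ffff)
}

fn main() {
    let lsn = lsn_from_str("1/6F1A2B8").unwrap();
    assert_eq!(lsn_to_str(lsn), "1/6F1A2B8");
}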

View File

@@ -1,105 +0,0 @@
import React, { useState, useEffect } from 'react';
import ReactDOM from 'react-dom';
import Loader from "react-loader-spinner";
function walpos_to_int(walpos)
{
const [hi, lo] = walpos.split('/');
// weight the high word of the 'hi/lo' WAL position string
return parseInt(hi, 16) * 0x100000000 + parseInt(lo, 16);
}
const palette = [
"#003f5c",
"#2f4b7c",
"#665191",
"#a05195",
"#d45087",
"#f95d6a",
"#ff7c43",
"#ffa600"];
function WalRecord(props)
{
const firstwalpos = props.firstwalpos;
const endwalpos = props.endwalpos;
const record = props.record;
const index = props.index;
const xidmap = props.xidmap;
const startpos = walpos_to_int(record.start)
const endpos = walpos_to_int(record.end)
const scale = 1000 / (16*1024*1024)
const startx = (startpos - firstwalpos) * scale;
const endx = (endpos - firstwalpos) * scale;
const xidindex = xidmap[record.xid];
const color = palette[index % palette.length];
const y = 5 + (xidindex) * 20 + (index % 2) * 2;
return (
<line x1={ startx } y1={y} x2={endx} y2={y} stroke={ color } strokeWidth="5">
<title>
start: { record.start } end: { record.end }
</title>
</line>
)
}
function WalFile(props)
{
const walContent = props.walContent;
const firstwalpos = props.firstwalpos;
const xidmap = props.xidmap;
return <svg width="1000" height="200">
{
walContent.records ?
walContent.records.map((record, index) =>
<WalRecord key={record.start} firstwalpos={firstwalpos} record={record} index={index} xidmap={xidmap}/>
) : "no records"
}
</svg>
}
function WalDumpApp()
{
const [walContent, setWalContent] = useState({});
const filename = '00000001000000000000000C';
useEffect(() => {
fetch('/fetch_wal?filename='+filename).then(res => res.json()).then(data => {
setWalContent(data);
});
}, []);
var firstwalpos = 0;
var endwalpos = 0;
var numxids = 0;
var xidmap = {};
if (walContent.records && walContent.records.length > 0)
{
firstwalpos = walpos_to_int(walContent.records[0].start);
endwalpos = firstwalpos + 16*1024*1024;
walContent.records.forEach(rec => {
if (!xidmap[rec.xid])
{
xidmap[rec.xid] = ++numxids;
}
});
}
return (
<>
<h2>{filename}</h2>
<WalFile walContent={walContent} firstwalpos={firstwalpos} endwalpos={endwalpos} xidmap={xidmap}/>
</>
);
}
console.log('hey there');
ReactDOM.render(<WalDumpApp/>, document.getElementById('waldump'));

View File

@@ -1,9 +0,0 @@
#!/bin/bash
#
# NOTE: You must set the following environment variables before running this:
# BASIC_AUTH_PASSWORD - basic http auth password
# S3_ACCESSKEY
# S3_SECRET
S3_ENDPOINT=https://storage.googleapis.com S3_BUCKET=zenith-testbucket PATH=/home/heikki/pgsql-install/bin:$PATH flask run --host=0.0.0.0

View File

@@ -1,8 +0,0 @@
#!/bin/bash
#
# NOTE: You should set the BASIC_AUTH_PASSWORD environment variable before running this script
# Launch S3 server
(cd ms3 && python3 -m ms3.app --listen-address=localhost) &
FLASK_ENV=development S3_REGION=auto S3_ENDPOINT=http://localhost:9009 S3_BUCKET=zenith-testbucket PATH=/home/heikki/pgsql.fsmfork/bin:$PATH flask run --host=0.0.0.0

File diff suppressed because it is too large.

View File

@@ -1,27 +0,0 @@
{
"name": "starter-kit",
"version": "1.1.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"build": "webpack",
"start": "python app.py"
},
"author": "",
"license": "ISC",
"dependencies": {
"react": "^17.0.1",
"react-dom": "^17.0.1",
"react-loader-spinner": "^4.0.0",
"react-router": "^5.2.0"
},
"devDependencies": {
"@babel/core": "^7.13.1",
"@babel/preset-env": "^7.13.5",
"@babel/preset-react": "^7.12.13",
"babel-loader": "^8.2.2",
"webpack": "^5.24.2",
"webpack-cli": "^4.5.0"
}
}

View File

@@ -1,58 +0,0 @@
<head>
<style>
.status {
font-family: monospace;
background-color: lightgrey;
}
.shellcommand {
font-family: monospace;
background-color: lightgrey;
}
.result {
font-family: monospace;
background-color: lightgrey;
padding: 10px;
}
.todo {font-style: italic;}
h1 {color: blue;}
.column {
float: left;
width: 50%;
padding: 10px;
}
/* Clear floats after the columns */
.row:after {
content: "";
display: table;
clear: both;
}
.sidenav {
float: left;
width: 150px;
padding: 10px;
background-color: pink;
}
.sidenav-item {
padding:10px 0px;
border:none;
display:block;
}
</style>
</head>
<body>
<div id="reactApp"></div>
<!-- Attach React components -->
<script type="text/javascript" src="{{ url_for('static', filename='app_bundle.js') }}"></script>
</body>

View File

@@ -1,46 +0,0 @@
<head>
<style>
.status {
font-family: monospace;
background-color: lightgrey;
}
.shellcommand {
font-family: monospace;
background-color: lightgrey;
}
.result {
font-family: monospace;
background-color: lightgrey;
padding: 10px;
}
h1 {color: blue;}
p {color: red;}
* {
box-sizing: border-box;
}
.row {
display: flex;
}
/* Create two columns that sit next to each other */
.column1 {
flex: 30%;
padding: 10px;
}
.column2 {
flex: 70%;
padding: 10px;
}
</style>
</head>
<body>
<div id="waldump"></div>
<!-- Attach React components -->
<script type="text/javascript" src="{{ url_for('static', filename='waldump_bundle.js') }}"></script>
</body>

View File

@@ -1,25 +0,0 @@
#
# This file contains work-in-progress code to visualize WAL contents.
#
# This is the API endpoint that calls a 'zenith_wal_to_json' executable,
# which is a hacked version of pg_waldump that prints information about the
# records in JSON format. The code in js/waldump.js displays it.
#
import os
import re
from subprocess import PIPE, STDOUT, run, Popen
def fetch_wal(request, s3bucket):
filename = request.args.get('filename')
if not re.match("^[A-Za-z0-9_]+$", filename):
raise Exception('invalid WAL filename: ' + filename)
# FIXME: this downloads the WAL file to current dir. Use a temp dir? Pipe?
s3bucket.download_file('walarchive/' + filename, filename)
result = run("zenith_wal_to_json " + filename, stdout=PIPE, universal_newlines=True, shell=True)
os.unlink(filename);
return result.stdout

View File

@@ -1,27 +0,0 @@
var webpack = require('webpack');
module.exports = {
entry: {
app: './js/app.js',
waldump: './js/waldump.js'
},
output: {
filename: "[name]_bundle.js",
path: __dirname + '/static'
},
module: {
rules: [
{
test: /\.js?$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader',
options: {
presets: ['@babel/preset-env']
}
}
}
]
},
plugins: [
]
};

View File

@@ -1,179 +0,0 @@
#zenith.py
import click
import testgres
import os
from testgres import PostgresNode
from tabulate import tabulate
zenith_base_dir = '/home/anastasia/zenith/basedir'
@click.group()
def main():
"""Run the Zenith CLI."""
@click.group()
def pg():
"""Db operations
NOTE: 'database' here means one postgresql node
"""
@click.command(name='create')
@click.option('--name', required=True)
@click.option('-s', '--storage-name', help='Name of the storage',
default='zenith-local',
show_default=True)
@click.option('--snapshot', help='init from the snapshot. Snap is a name or URL')
@click.option('--no-start', is_flag=True, help='Do not start created node',
default=False, show_default=True)
def pg_create(name, storage_name, snapshot, no_start):
"""Initialize the database"""
node = PostgresNode()
base_dir = os.path.join(zenith_base_dir, 'pg', name)
node = testgres.get_new_node(name, base_dir=base_dir)
# TODO skip init, instead of that link node with storage or upload it from snapshot
node.init()
if(no_start==False):
node.start()
@click.command(name='start')
@click.option('--name', required=True)
@click.option('--snapshot')
@click.option('--read-only', is_flag=True, help='Start read-only node', show_default=True)
def pg_start(name, snapshot, read_only):
"""Start the database"""
node = PostgresNode()
base_dir = os.path.join(zenith_base_dir, 'pg', name)
node = testgres.get_new_node(name, base_dir=base_dir)
# TODO pass snapshot as a parameter
node.start()
@click.command(name='stop')
@click.option('--name', required=True)
def pg_stop(name):
"""Stop the database"""
node = PostgresNode()
base_dir = os.path.join(zenith_base_dir, 'pg', name)
node = testgres.get_new_node(name, base_dir=base_dir)
node.stop()
@click.command(name='destroy')
@click.option('--name', required=True)
def pg_destroy(name):
"""Drop the database"""
node = PostgresNode()
base_dir = os.path.join(zenith_base_dir, 'pg', name)
node = testgres.get_new_node(name, base_dir=base_dir)
node.cleanup()
@click.command(name='list')
def pg_list():
"""List existing databases"""
dirs = os.listdir(os.path.join(zenith_base_dir, 'pg'))
path={}
status={}
data=[]
for dirname in dirs:
path[dirname] = os.path.join(zenith_base_dir, 'pg', dirname)
fname = os.path.join( path[dirname], 'data/postmaster.pid')
try:
f = open(fname,'r')
status[dirname] = f.readlines()[-1]
except OSError as err:
status[dirname]='inactive'
data.append([dirname , status[dirname], path[dirname]])
print(tabulate(data, headers=['Name', 'Status', 'Path']))
pg.add_command(pg_create)
pg.add_command(pg_destroy)
pg.add_command(pg_start)
pg.add_command(pg_stop)
pg.add_command(pg_list)
@click.group()
def storage():
"""Storage operations"""
@click.command(name='attach')
@click.option('--name')
def storage_attach(name):
"""Attach the storage"""
@click.command(name='detach')
@click.option('--name')
@click.option('--force', is_flag=True, show_default=True)
def storage_detach(name):
"""Detach the storage"""
@click.command(name='list')
def storage_list():
"""List existing storages"""
storage.add_command(storage_attach)
storage.add_command(storage_detach)
storage.add_command(storage_list)
@click.group()
def snapshot():
"""Snapshot operations"""
@click.command(name='create')
def snapshot_create():
"""Create new snapshot"""
@click.command(name='destroy')
def snapshot_destroy():
"""Destroy the snapshot"""
@click.command(name='pull')
def snapshot_pull():
"""Pull remote snapshot"""
@click.command(name='push')
def snapshot_push():
"""Push snapshot to remote"""
@click.command(name='import')
def snapshot_import():
"""Convert given format to zenith snapshot"""
@click.command(name='export')
def snapshot_export():
"""Convert zenith snapshot to PostgreSQL compatible format"""
snapshot.add_command(snapshot_create)
snapshot.add_command(snapshot_destroy)
snapshot.add_command(snapshot_pull)
snapshot.add_command(snapshot_push)
snapshot.add_command(snapshot_import)
snapshot.add_command(snapshot_export)
@click.group()
def wal():
"""WAL operations"""
@click.command()
def wallist(name="list"):
"""List WAL files"""
wal.add_command(wallist)
@click.command()
def console():
"""Open web console"""
main.add_command(pg)
main.add_command(storage)
main.add_command(snapshot)
main.add_command(wal)
main.add_command(console)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,25 @@
version: "3"
services:
prometheus:
container_name: prometheus
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
# ports:
# - "9090:9090"
# TODO: find a proper portable solution
network_mode: "host"
grafana:
image: grafana/grafana:latest
volumes:
- ./grafana.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
environment:
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
- GF_AUTH_DISABLE_LOGIN_FORM=true
# ports:
# - "3000:3000"
# TODO: find a proper portable solution
network_mode: "host"

monitoring/grafana.yaml Normal file (12 lines)
View File

@@ -0,0 +1,12 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
orgId: 1
url: http://localhost:9090
basicAuth: false
isDefault: false
version: 1
editable: false

View File

@@ -0,0 +1,5 @@
scrape_configs:
- job_name: 'default'
scrape_interval: 10s
static_configs:
- targets: ['localhost:9898']
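The scrape target localhost:9898 matches the pageserver's default HTTP endpoint address introduced further down in this diff. As a rough, hedged sketch of how such metrics are produced on the Rust side (the real ones go through the zenith_metrics crate; the counter name here is made up), the prometheus crate renders whatever is registered into the text format Prometheus scrapes:

use prometheus::{register_int_counter, Encoder, TextEncoder};

fn main() {
    // A made-up counter, registered in the default registry.
    let requests = register_int_counter!("demo_requests_total", "example counter").unwrap();
    requests.inc();

    // Render the default registry in Prometheus text exposition format,
    // i.e. what an HTTP /metrics endpoint would serve.
    let mut buf = Vec::new();
    TextEncoder::new()
        .encode(&prometheus::gather(), &mut buf)
        .unwrap();
    print!("{}", String::from_utf8(buf).unwrap());
}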

pageserver/Cargo.lock generated (2373 lines)

File diff suppressed because it is too large.

View File

@@ -7,11 +7,11 @@ edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bookfile = "^0.3"
chrono = "0.4.19"
crossbeam-channel = "0.5.0"
rand = "0.8.3"
regex = "1.4.5"
bytes = "1.0.1"
bytes = { version = "1.0.1", features = ['serde'] }
byteorder = "1.4.3"
futures = "0.3.13"
lazy_static = "1.4.0"
@@ -22,24 +22,29 @@ slog-term = "2.8.0"
slog = "2.7.0"
log = "0.4.14"
clap = "2.33.0"
termion = "1.5.6"
tui = "0.14.0"
daemonize = "0.4.1"
rust-s3 = { git = "https://github.com/hlinnaka/rust-s3", rev="7f15a24ec7daa0a5d9516da706212745f9042818", features = ["no-verify-ssl"] }
tokio = { version = "1.3.0", features = ["full"] }
tokio-stream = { version = "0.1.4" }
tokio-postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
postgres-types = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
postgres-protocol = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="a0d067b66447951d1276a53fb09886539c3fa094" }
rocksdb = "0.16.0"
rust-s3 = { version = "0.27.0-rc4", features = ["no-verify-ssl"] }
tokio = { version = "1.5.0", features = ["full"] }
tokio-stream = { version = "0.1.5" }
postgres-types = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres-protocol = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
postgres = { git = "https://github.com/zenithdb/rust-postgres.git", rev="9eb0dbfbeb6a6c1b79099b9f7ae4a8c021877858" }
# by default rust-rocksdb tries to build a lot of compression algos. Use lz4 only for now as it is the simplest dependency.
rocksdb = { version = "0.16.0", features = ["lz4"], default-features = false }
anyhow = "1.0"
crc32c = "0.6.0"
walkdir = "2"
thiserror = "1.0"
hex = "0.4.3"
tar = "0.4.33"
parse_duration = "*"
humantime = "2.1.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1"
fs_extra = "1.2.0"
toml = "0.5"
scopeguard = "1.1.0"
postgres_ffi = { path = "../postgres_ffi" }
zenith_metrics = { path = "../zenith_metrics" }
zenith_utils = { path = "../zenith_utils" }
workspace_hack = { path = "../workspace_hack" }

View File

@@ -1,82 +1,4 @@
Page Server
===========
How to test
-----------
1. Compile and install Postgres from this repository (there are
modifications, so vanilla Postgres won't do)
./configure --prefix=/home/heikki/zenith-install
2. Compile the page server
cd pageserver
cargo build
3. Create another "dummy" cluster that will be used by the page server when it applies
the WAL records. (shouldn't really need this, getting rid of it is a TODO):
/home/heikki/zenith-install/bin/initdb -D /data/zenith-dummy
4. Initialize and start a new postgres cluster
/home/heikki/zenith-install/bin/initdb -D /data/zenith-test-db --username=postgres
/home/heikki/zenith-install/bin/postgres -D /data/zenith-test-db
5. In another terminal, start the page server.
PGDATA=/data/zenith-dummy PATH=/home/heikki/zenith-install/bin:$PATH ./target/debug/pageserver
It should connect to the postgres instance using streaming replication, and print something
like this:
$ PGDATA=/data/zenith-dummy PATH=/home/heikki/zenith-install/bin:$PATH ./target/debug/pageserver
Starting WAL receiver
connecting...
Starting page server on 127.0.0.1:5430
connected!
page cache is empty
6. You can now open another terminal and issue DDL commands. Generated WAL records will
be streamed to the page server and attached to the blocks they apply to in its
page cache
$ psql postgres -U postgres
psql (14devel)
Type "help" for help.
postgres=# create table mydata (i int4);
CREATE TABLE
postgres=# insert into mydata select g from generate_series(1,100) g;
INSERT 0 100
postgres=#
7. The GetPage@LSN interface to the compute nodes isn't working yet, but to simulate
that, the page server generates a test GetPage@LSN call every 5 seconds on a random
block that's in the page cache. In a few seconds, you should see output from that:
testing GetPage@LSN for block 0
WAL record at LSN 23584576 initializes the page
2021-03-19 11:03:13.791 EET [11439] LOG: applied WAL record at 0/167DF40
2021-03-19 11:03:13.791 EET [11439] LOG: applied WAL record at 0/167DF80
2021-03-19 11:03:13.791 EET [11439] LOG: applied WAL record at 0/167DFC0
2021-03-19 11:03:13.791 EET [11439] LOG: applied WAL record at 0/167E018
2021-03-19 11:03:13.791 EET [11439] LOG: applied WAL record at 0/167E058
2021-03-19 11:03:13.791 EET [11439] LOG: applied WAL record at 0/167E098
2021-03-19 11:03:13.791 EET [11439] LOG: applied WAL record at 0/167E0D8
2021-03-19 11:03:13.792 EET [11439] LOG: applied WAL record at 0/167E118
2021-03-19 11:03:13.792 EET [11439] LOG: applied WAL record at 0/167E158
2021-03-19 11:03:13.792 EET [11439] LOG: applied WAL record at 0/167E198
applied 10 WAL records to produce page image at LSN 18446744073709547246
Architecture
============
## Page server architecture
The Page Server is responsible for all operations on a number of
"chunks" of relation data. A chunk corresponds to a PostgreSQL
@@ -84,8 +6,10 @@ relation segment (i.e. one max. 1 GB file in the data directory), but
it holds all the different versions of every page in the segment that
are still needed by the system.
Determining which chunk each Page Server holds is handled elsewhere. (TODO:
currently, there is only one Page Server which holds all chunks)
Currently we do not specifically organize data in chunks.
All page images and corresponding WAL records are stored as entries in a key-value store,
where the StorageKey is a zenith_timeline_id + BufferTag + LSN.
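A hypothetical sketch of that key layout; the real types live in the pageserver crates, and the fields and types below are simplified stand-ins, not the actual definitions:

// Entries sort by (timeline, page, version), so all versions of one page
// on one timeline end up next to each other in the key space.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct StorageKey {
    timeline_id: u128,  // stand-in for zenith_timeline_id
    buf_tag: BufferTag, // which relation fork and block the entry belongs to
    lsn: u64,           // WAL position, i.e. the version of the page
}

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct BufferTag {
    rel_node: u32,
    fork_num: u8,
    block_num: u32,
}

fn main() {
    let key = StorageKey {
        timeline_id: 1,
        buf_tag: BufferTag { rel_node: 16384, fork_num: 0, block_num: 0 },
        lsn: 0x0167_DF40,
    };
    println!("{:?}", key);
}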
The Page Server has a few different duties:
@@ -154,11 +78,33 @@ and stores them to the page cache.
Page Cache
----------
The Page Cache is a data structure, to hold all the different page versions.
It is accessed by all the other threads, to perform their duties.
The Page Cache is a switchboard to access different Repositories.
Currently, the page cache is implemented fully in-memory. TODO: Store it
on disk. Define a file format.
#### Repository
A Repository corresponds to one .zenith directory and manages Timelines.
#### Timeline
A Timeline is the page cache workhorse: it accepts page changes
and serves get_page_at_lsn() and get_rel_size() requests.
Note: this has nothing to do with a PostgreSQL WAL timeline.
#### Branch
We can create a branch at a certain LSN.
Each Branch lives in a corresponding timeline and has an ancestor.
To get a full snapshot of the data at a certain moment, we need to traverse the timeline and its ancestors.
#### ObjectRepository
ObjectRepository implements Repository and has an associated ObjectStore and WAL redo service.
#### ObjectStore
ObjectStore is a key-value store interface for page images and WAL records.
Currently it has one implementation: RocksDB.
#### WAL redo service
The WAL redo service runs PostgreSQL in a special wal_redo mode
to apply the given WAL records over an old page image and return a new page image.
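Very roughly, and with simplified names and signatures (the real traits differ and return Result types), these pieces relate like this:

use std::sync::Arc;

type Lsn = u64;      // stand-in for zenith_utils::lsn::Lsn
type Page = Vec<u8>; // stand-in for an 8 KB page image

// One Repository per .zenith directory; it hands out Timelines.
trait Repository {
    fn get_timeline(&self, timeline_id: u128) -> Option<Arc<dyn Timeline>>;
}

// A Timeline accepts page changes and answers versioned read requests.
trait Timeline {
    fn get_page_at_lsn(&self, rel_node: u32, block_num: u32, lsn: Lsn) -> Page;
    fn get_rel_size(&self, rel_node: u32, lsn: Lsn) -> u32;
}

// The WAL redo service replays WAL records on top of an old page image.
trait WalRedoManager {
    fn apply_wal_records(&self, base_image: Option<Page>, records: &[Vec<u8>]) -> Page;
}

An ObjectRepository is then one Repository implementation backed by an ObjectStore (currently RocksDB) plus a WalRedoManager.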
TODO: Garbage Collection / Compaction
@@ -177,3 +123,7 @@ The backup service is responsible for periodically pushing the chunks to S3.
TODO: How/when do we restore from S3? Whenever we get a GetPage@LSN request for
a chunk we don't currently have? Or when an external Control Plane tells us?
TODO: Sharding
--------------------
We should be able to run multiple Page Servers that handle sharded data.

View File

@@ -1,41 +0,0 @@
//
// Triggers postgres build if there is no postgres binary present at
// 'REPO_ROOT/tmp_install/bin/postgres'.
//
// I can see a lot of disadvantages with such automation, and the main
// advantage here is the ability to build everything and run integration tests
// in a bare repo by running 'cargo test'.
//
// We can intercept whether it is a debug or release build and run the
// corresponding pg build. But it seems like overkill for now.
//
// Problem #1 -- the language server in my editor likes calling 'cargo build'
// by itself. So if I delete the tmp_install directory, it magically reappears
// after some time. During this compilation 'cargo build' may whine about
// "waiting for file lock on build directory".
//
// Problem #2 -- cargo build would run this only if something is changed in
// the crate.
//
// And generally speaking, postgres is not a build dependency for the pageserver,
// just for integration tests. So let's not mix that. I'll leave this file in
// place for some time, just in case anybody starts doing the same.
//
// use std::path::Path;
// use std::process::{Command};
fn main() {
// // build postgres if it is not built yet
// if !Path::new("../tmp_install/bin/postgres").exists() {
// let make_res = Command::new("make")
// .arg("postgres")
// .env_clear()
// .status()
// .expect("failed to execute 'make postgres'");
// if !make_res.success() {
// panic!("postgres build failed");
// }
// }
}

View File

@@ -1,62 +0,0 @@
#!/bin/sh
#
# Set up a simple Compute Node + Page Server combination locally.
#
# NOTE: This doesn't clean up between invocations. You'll need to manually:
#
# - Kill any previous 'postgres' and 'pageserver' processes
# - Clear the S3 bucket
# - Remove the 'zenith-pgdata' directory
set -e
# Set up some config.
#
# CHANGE THESE ACCORDING TO YOUR S3 INSTALLATION
export S3_REGION=auto
export S3_ENDPOINT=https://localhost:9000
export S3_ACCESSKEY=minioadmin
export S3_SECRET=pikkunen
export S3_BUCKET=zenith-testbucket
COMPUTE_NODE_PGDATA=zenith-pgdata
# 1. Initialize a cluster.
initdb -D $COMPUTE_NODE_PGDATA -U zenith
echo "port=65432" >> $COMPUTE_NODE_PGDATA/postgresql.conf
echo "log_connections=on" >> $COMPUTE_NODE_PGDATA/postgresql.conf
# Use a small shared_buffers, so that we hit the Page Server more
# easily.
echo "shared_buffers = 1MB" >> $COMPUTE_NODE_PGDATA/postgresql.conf
# TODO: page server should use a replication slot, or some other mechanism
# to make sure that the primary doesn't lose data that the page server still
# needs. (The WAL safekeepers should ensure that)
echo "wal_keep_size=10GB" >> $COMPUTE_NODE_PGDATA/postgresql.conf
# Tell the Postgres server how to connect to the Page Server
echo "page_server_connstring='host=localhost port=5430'" >> $COMPUTE_NODE_PGDATA/postgresql.conf
# 2. Run zenith_push to push a base backup of the database to an S3 bucket. The
# Page Server will read it from there
zenith_push -D $COMPUTE_NODE_PGDATA
# 3. Launch page server
rm -rf /tmp/pgdata-dummy
initdb -N -D /tmp/pgdata-dummy
PGDATA=/tmp/pgdata-dummy ./target/debug/pageserver &
# 4. Start up the Postgres server
postgres -D $COMPUTE_NODE_PGDATA &
echo "ALL SET! You can now connect to Postgres with something like:"
echo ""
echo 'psql "dbname=postgres host=localhost user=zenith port=65432"'

View File

@@ -1,202 +1,290 @@
//!
//! Generate a tarball with files needed to bootstrap ComputeNode.
//!
//! TODO: this module has nothing to do with PostgreSQL pg_basebackup.
//! It could use a better name.
//!
//! Stateless Postgres compute node is launched by sending a tarball
//! which contains non-relational data (multixacts, clog, filenodemaps, twophase files),
//! generated pg_control and dummy segment of WAL.
//! This module is responsible for creation of such tarball
//! from data stored in object storage.
//!
use bytes::{BufMut, BytesMut};
use log::*;
use regex::Regex;
use std::fmt;
use std::io;
use std::io::Write;
use tar::Builder;
use walkdir::WalkDir;
use std::sync::Arc;
use std::time::SystemTime;
use tar::{Builder, EntryType, Header};
use crate::ZTimelineId;
use crate::relish::*;
use crate::repository::Timeline;
use postgres_ffi::xlog_utils::*;
use postgres_ffi::*;
use zenith_utils::lsn::Lsn;
pub fn send_snapshot_tarball(
write: &mut dyn Write,
timelineid: ZTimelineId,
snapshotlsn: u64,
) -> Result<(), std::io::Error> {
let mut ar = Builder::new(write);
/// This is a short-lived object that exists only for the duration of tarball creation,
/// created mostly to avoid passing a lot of parameters between the various functions
/// used for constructing the tarball.
pub struct Basebackup<'a> {
ar: Builder<&'a mut dyn Write>,
timeline: &'a Arc<dyn Timeline>,
lsn: Lsn,
prev_record_lsn: Lsn,
}
let snappath = format!("timelines/{}/snapshots/{:016X}", timelineid, snapshotlsn);
let walpath = format!("timelines/{}/wal", timelineid);
impl<'a> Basebackup<'a> {
pub fn new(
write: &'a mut dyn Write,
timeline: &'a Arc<dyn Timeline>,
lsn: Lsn,
prev_record_lsn: Lsn,
) -> Basebackup<'a> {
Basebackup {
ar: Builder::new(write),
timeline,
lsn,
prev_record_lsn,
}
}
debug!("sending tarball of snapshot in {}", snappath);
//ar.append_dir_all("", &snappath)?;
for entry in WalkDir::new(&snappath) {
let entry = entry?;
let fullpath = entry.path();
let relpath = entry.path().strip_prefix(&snappath).unwrap();
if relpath.to_str().unwrap() == "" {
continue;
pub fn send_tarball(&mut self) -> anyhow::Result<()> {
// Create pgdata subdirs structure
for dir in pg_constants::PGDATA_SUBDIRS.iter() {
info!("send subdir {:?}", *dir);
let header = new_tar_header_dir(*dir)?;
self.ar.append(&header, &mut io::empty())?;
}
if entry.file_type().is_dir() {
trace!(
"sending dir {} as {}",
fullpath.display(),
relpath.display()
);
ar.append_dir(relpath, fullpath)?;
} else if entry.file_type().is_symlink() {
error!("ignoring symlink in snapshot dir");
} else if entry.file_type().is_file() {
// Shared catalogs are exempt
if relpath.starts_with("global/") {
trace!("sending shared catalog {}", relpath.display());
ar.append_path_with_name(fullpath, relpath)?;
} else if !is_rel_file_path(relpath.to_str().unwrap()) {
trace!("sending {}", relpath.display());
ar.append_path_with_name(fullpath, relpath)?;
// Send empty config files.
for filepath in pg_constants::PGDATA_SPECIAL_FILES.iter() {
if *filepath == "pg_hba.conf" {
let data = pg_constants::PG_HBA.as_bytes();
let header = new_tar_header(&filepath, data.len() as u64)?;
self.ar.append(&header, &data[..])?;
} else {
trace!("not sending {}", relpath.display());
// FIXME: send all files for now
ar.append_path_with_name(fullpath, relpath)?;
let header = new_tar_header(&filepath, 0)?;
self.ar.append(&header, &mut io::empty())?;
}
}
// Gather non-relational files from object storage pages.
for obj in self.timeline.list_nonrels(self.lsn)? {
match obj {
RelishTag::Slru { slru, segno } => {
self.add_slru_segment(slru, segno)?;
}
RelishTag::FileNodeMap { spcnode, dbnode } => {
self.add_relmap_file(spcnode, dbnode)?;
}
RelishTag::TwoPhase { xid } => {
self.add_twophase_file(xid)?;
}
_ => {}
}
}
// Generate pg_control and bootstrap WAL segment.
self.add_pgcontrol_file()?;
self.ar.finish()?;
debug!("all tarred up!");
Ok(())
}
//
// Generate SLRU segment files from repository.
//
fn add_slru_segment(&mut self, slru: SlruKind, segno: u32) -> anyhow::Result<()> {
let seg_size = self
.timeline
.get_relish_size(RelishTag::Slru { slru, segno }, self.lsn)?;
if seg_size == None {
trace!(
"SLRU segment {}/{:>04X} was truncated",
slru.to_str(),
segno
);
return Ok(());
}
let nblocks = seg_size.unwrap();
let mut slru_buf: Vec<u8> =
Vec::with_capacity(nblocks as usize * pg_constants::BLCKSZ as usize);
for blknum in 0..nblocks {
let img = self.timeline.get_page_at_lsn_nowait(
RelishTag::Slru { slru, segno },
blknum,
self.lsn,
)?;
assert!(img.len() == pg_constants::BLCKSZ as usize);
slru_buf.extend_from_slice(&img);
}
let segname = format!("{}/{:>04X}", slru.to_str(), segno);
let header = new_tar_header(&segname, slru_buf.len() as u64)?;
self.ar.append(&header, slru_buf.as_slice())?;
trace!("Added to basebackup slru {} relsize {}", segname, nblocks);
Ok(())
}
//
// Extract pg_filenode.map files from repository
// Along with them also send PG_VERSION for each database.
//
fn add_relmap_file(&mut self, spcnode: u32, dbnode: u32) -> anyhow::Result<()> {
let img = self.timeline.get_page_at_lsn_nowait(
RelishTag::FileNodeMap { spcnode, dbnode },
0,
self.lsn,
)?;
let path = if spcnode == pg_constants::GLOBALTABLESPACE_OID {
let dst_path = "PG_VERSION";
let version_bytes = pg_constants::PG_MAJORVERSION.as_bytes();
let header = new_tar_header(&dst_path, version_bytes.len() as u64)?;
self.ar.append(&header, &version_bytes[..])?;
let dst_path = format!("global/PG_VERSION");
let header = new_tar_header(&dst_path, version_bytes.len() as u64)?;
self.ar.append(&header, &version_bytes[..])?;
String::from("global/pg_filenode.map") // filenode map for global tablespace
} else {
error!("unknown file type: {}", fullpath.display());
}
}
// User defined tablespaces are not supported
assert!(spcnode == pg_constants::DEFAULTTABLESPACE_OID);
// FIXME: also send all the WAL
for entry in std::fs::read_dir(&walpath)? {
let entry = entry?;
let fullpath = &entry.path();
let relpath = fullpath.strip_prefix(&walpath).unwrap();
// Append dir path for each database
let path = format!("base/{}", dbnode);
let header = new_tar_header_dir(&path)?;
self.ar.append(&header, &mut io::empty())?;
if !entry.path().is_file() {
continue;
}
let dst_path = format!("base/{}/PG_VERSION", dbnode);
let version_bytes = pg_constants::PG_MAJORVERSION.as_bytes();
let header = new_tar_header(&dst_path, version_bytes.len() as u64)?;
self.ar.append(&header, &version_bytes[..])?;
let archive_fname = relpath.to_str().unwrap();
let archive_fname = archive_fname
.strip_suffix(".partial")
.unwrap_or(&archive_fname);
let archive_path = "pg_wal/".to_owned() + archive_fname;
ar.append_path_with_name(fullpath, archive_path)?;
}
ar.finish()?;
debug!("all tarred up!");
Ok(())
}
// formats:
// <oid>
// <oid>_<fork name>
// <oid>.<segment number>
// <oid>_<fork name>.<segment number>
#[derive(Debug)]
struct FilePathError {
msg: String,
}
impl FilePathError {
fn new(msg: &str) -> FilePathError {
FilePathError {
msg: msg.to_string(),
}
}
}
impl From<core::num::ParseIntError> for FilePathError {
fn from(e: core::num::ParseIntError) -> Self {
return FilePathError {
msg: format!("invalid filename: {}", e),
format!("base/{}/pg_filenode.map", dbnode)
};
}
}
impl fmt::Display for FilePathError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "invalid filename")
}
}
fn forkname_to_forknum(forkname: Option<&str>) -> Result<u32, FilePathError> {
match forkname {
// "main" is not in filenames, it's implicit if the fork name is not present
None => Ok(0),
Some("fsm") => Ok(1),
Some("vm") => Ok(2),
Some("init") => Ok(3),
Some(_) => Err(FilePathError::new("invalid forkname")),
}
}
fn parse_filename(fname: &str) -> Result<(u32, u32, u32), FilePathError> {
let re = Regex::new(r"^(?P<relnode>\d+)(_(?P<forkname>[a-z]+))?(\.(?P<segno>\d+))?$").unwrap();
let caps = re
.captures(fname)
.ok_or_else(|| FilePathError::new("invalid relation data file name"))?;
let relnode_str = caps.name("relnode").unwrap().as_str();
let relnode = u32::from_str_radix(relnode_str, 10)?;
let forkname_match = caps.name("forkname");
let forkname = if forkname_match.is_none() {
None
} else {
Some(forkname_match.unwrap().as_str())
};
let forknum = forkname_to_forknum(forkname)?;
let segno_match = caps.name("segno");
let segno = if segno_match.is_none() {
0
} else {
u32::from_str_radix(segno_match.unwrap().as_str(), 10)?
};
Ok((relnode, forknum, segno))
}
fn parse_rel_file_path(path: &str) -> Result<(), FilePathError> {
/*
* Relation data files can be in one of the following directories:
*
* global/
* shared relations
*
* base/<db oid>/
* regular relations, default tablespace
*
* pg_tblspc/<tblspc oid>/<tblspc version>/
* within a non-default tablespace (the name of the directory
* depends on version)
*
* And the relation data files themselves have a filename like:
*
* <oid>.<segment number>
*/
if let Some(fname) = path.strip_prefix("global/") {
let (_relnode, _forknum, _segno) = parse_filename(fname)?;
assert!(img.len() == 512);
let header = new_tar_header(&path, img.len() as u64)?;
self.ar.append(&header, &img[..])?;
Ok(())
} else if let Some(dbpath) = path.strip_prefix("base/") {
let mut s = dbpath.split('/');
let dbnode_str = s
.next()
.ok_or_else(|| FilePathError::new("invalid relation data file name"))?;
let _dbnode = u32::from_str_radix(dbnode_str, 10)?;
let fname = s
.next()
.ok_or_else(|| FilePathError::new("invalid relation data file name"))?;
if s.next().is_some() {
return Err(FilePathError::new("invalid relation data file name"));
};
let (_relnode, _forknum, _segno) = parse_filename(fname)?;
}
//
// Extract twophase state files
//
fn add_twophase_file(&mut self, xid: TransactionId) -> anyhow::Result<()> {
if let Ok(img) =
self.timeline
.get_page_at_lsn_nowait(RelishTag::TwoPhase { xid }, 0, self.lsn)
{
let mut buf = BytesMut::new();
buf.extend_from_slice(&img[..]);
let crc = crc32c::crc32c(&img[..]);
buf.put_u32_le(crc);
let path = format!("pg_twophase/{:>08X}", xid);
let header = new_tar_header(&path, buf.len() as u64)?;
self.ar.append(&header, &buf[..])?;
}
Ok(())
}
//
// Add generated pg_control file and bootstrap WAL segment.
// Also send zenith.signal file with extra bootstrap data.
//
fn add_pgcontrol_file(&mut self) -> anyhow::Result<()> {
let checkpoint_bytes =
self.timeline
.get_page_at_lsn_nowait(RelishTag::Checkpoint, 0, self.lsn)?;
let pg_control_bytes =
self.timeline
.get_page_at_lsn_nowait(RelishTag::ControlFile, 0, self.lsn)?;
let mut pg_control = ControlFileData::decode(&pg_control_bytes)?;
let mut checkpoint = CheckPoint::decode(&checkpoint_bytes)?;
// Generate new pg_control and WAL needed for bootstrap
let checkpoint_segno = self.lsn.segment_number(pg_constants::WAL_SEGMENT_SIZE);
let checkpoint_lsn = XLogSegNoOffsetToRecPtr(
checkpoint_segno,
XLOG_SIZE_OF_XLOG_LONG_PHD as u32,
pg_constants::WAL_SEGMENT_SIZE,
);
checkpoint.redo = self.lsn.0 + self.lsn.calc_padding(8u32);
//reset some fields we don't want to preserve
//TODO Check this.
//We may need to determine the value from twophase data.
checkpoint.oldestActiveXid = 0;
//save new values in pg_control
pg_control.checkPoint = checkpoint_lsn;
pg_control.checkPointCopy = checkpoint;
pg_control.state = pg_constants::DB_SHUTDOWNED;
// add zenith.signal file
self.ar.append(
&new_tar_header("zenith.signal", 8)?,
&self.prev_record_lsn.0.to_le_bytes()[..],
)?;
//send pg_control
let pg_control_bytes = pg_control.encode();
let header = new_tar_header("global/pg_control", pg_control_bytes.len() as u64)?;
self.ar.append(&header, &pg_control_bytes[..])?;
//send wal segment
let wal_file_name = XLogFileName(
1, // FIXME: always use Postgres timeline 1
checkpoint_segno,
pg_constants::WAL_SEGMENT_SIZE,
);
let wal_file_path = format!("pg_wal/{}", wal_file_name);
let header = new_tar_header(&wal_file_path, pg_constants::WAL_SEGMENT_SIZE as u64)?;
let wal_seg = generate_wal_segment(&pg_control);
assert!(wal_seg.len() == pg_constants::WAL_SEGMENT_SIZE);
self.ar.append(&header, &wal_seg[..])?;
Ok(())
} else if let Some(_) = path.strip_prefix("pg_tblspc/") {
// TODO
Err(FilePathError::new("tablespaces not supported"))
} else {
Err(FilePathError::new("invalid relation data file name"))
}
}
fn is_rel_file_path(path: &str) -> bool {
parse_rel_file_path(path).is_ok()
//
// Create new tarball entry header
//
fn new_tar_header(path: &str, size: u64) -> anyhow::Result<Header> {
let mut header = Header::new_gnu();
header.set_size(size);
header.set_path(path)?;
header.set_mode(0o600); // -rw-------
header.set_mtime(
// use current time as last modified time
SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs(),
);
header.set_cksum();
Ok(header)
}
fn new_tar_header_dir(path: &str) -> anyhow::Result<Header> {
let mut header = Header::new_gnu();
header.set_size(0);
header.set_path(path)?;
header.set_mode(0o755); // drwxr-xr-x
header.set_entry_type(EntryType::dir());
header.set_mtime(
// use current time as last modified time
SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs(),
);
header.set_cksum();
Ok(header)
}
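The tarball above is assembled with the tar crate's Builder and Header, the same pattern new_tar_header() wraps. A self-contained, hedged illustration of that pattern (the entry name and payload are made up):

use tar::{Builder, Header};

fn main() -> std::io::Result<()> {
    let mut buf: Vec<u8> = Vec::new();
    {
        let mut ar = Builder::new(&mut buf);

        // Build a header by hand, as new_tar_header() does.
        let payload = b"0123456789abcdef"; // stand-in for, e.g., zenith.signal contents
        let mut header = Header::new_gnu();
        header.set_size(payload.len() as u64);
        header.set_path("zenith.signal")?;
        header.set_mode(0o600);
        header.set_cksum();

        ar.append(&header, &payload[..])?;
        ar.finish()?;
    }
    println!("generated a {}-byte tarball", buf.len());
    Ok(())
}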

View File

@@ -3,24 +3,170 @@
//
use log::*;
use parse_duration::parse;
use std::fs::{self, OpenOptions};
use std::io;
use std::path::PathBuf;
use std::process::exit;
use std::thread;
use std::time::Duration;
use serde::{Deserialize, Serialize};
use std::{
env,
net::TcpListener,
path::{Path, PathBuf},
process::exit,
str::FromStr,
sync::Arc,
thread,
time::Duration,
};
use zenith_utils::{auth::JwtAuth, postgres_backend::AuthType};
use anyhow::{Context, Result};
use clap::{App, Arg};
use anyhow::{ensure, Result};
use clap::{App, Arg, ArgMatches};
use daemonize::Daemonize;
use slog::Drain;
use pageserver::{branches, logger, page_cache, page_service, PageServerConf, RepositoryFormat};
use zenith_utils::http_endpoint;
use pageserver::{page_service, tui, zenith_repo_dir, PageServerConf};
const DEFAULT_LISTEN_ADDR: &str = "127.0.0.1:64000";
const DEFAULT_HTTP_ENDPOINT_ADDR: &str = "127.0.0.1:9898";
const DEFAULT_GC_HORIZON: u64 = 64 * 1024 * 1024;
const DEFAULT_GC_PERIOD_SEC: u64 = 10;
const DEFAULT_GC_PERIOD: Duration = Duration::from_secs(10);
const DEFAULT_SUPERUSER: &str = "zenith_admin";
/// String arguments that can be declared via CLI or config file
#[derive(Serialize, Deserialize)]
struct CfgFileParams {
listen_addr: Option<String>,
http_endpoint_addr: Option<String>,
gc_horizon: Option<String>,
gc_period: Option<String>,
pg_distrib_dir: Option<String>,
auth_validation_public_key_path: Option<String>,
auth_type: Option<String>,
repository_format: Option<String>,
}
impl CfgFileParams {
/// Extract string arguments from CLI
fn from_args(arg_matches: &ArgMatches) -> Self {
let get_arg = |arg_name: &str| -> Option<String> {
arg_matches.value_of(arg_name).map(str::to_owned)
};
Self {
listen_addr: get_arg("listen"),
http_endpoint_addr: get_arg("http_endpoint"),
gc_horizon: get_arg("gc_horizon"),
gc_period: get_arg("gc_period"),
pg_distrib_dir: get_arg("postgres-distrib"),
auth_validation_public_key_path: get_arg("auth-validation-public-key-path"),
auth_type: get_arg("auth-type"),
repository_format: get_arg("repository-format"),
}
}
/// Fill missing values in `self` with `other`
fn or(self, other: CfgFileParams) -> Self {
// TODO cleaner way to do this
Self {
listen_addr: self.listen_addr.or(other.listen_addr),
http_endpoint_addr: self.http_endpoint_addr.or(other.http_endpoint_addr),
gc_horizon: self.gc_horizon.or(other.gc_horizon),
gc_period: self.gc_period.or(other.gc_period),
pg_distrib_dir: self.pg_distrib_dir.or(other.pg_distrib_dir),
auth_validation_public_key_path: self
.auth_validation_public_key_path
.or(other.auth_validation_public_key_path),
auth_type: self.auth_type.or(other.auth_type),
repository_format: self.repository_format.or(other.repository_format),
}
}
/// Create a PageServerConf from these string parameters
fn try_into_config(&self) -> Result<PageServerConf> {
let workdir = PathBuf::from(".");
let listen_addr = match self.listen_addr.as_ref() {
Some(addr) => addr.clone(),
None => DEFAULT_LISTEN_ADDR.to_owned(),
};
let http_endpoint_addr = match self.http_endpoint_addr.as_ref() {
Some(addr) => addr.clone(),
None => DEFAULT_HTTP_ENDPOINT_ADDR.to_owned(),
};
let gc_horizon: u64 = match self.gc_horizon.as_ref() {
Some(horizon_str) => horizon_str.parse()?,
None => DEFAULT_GC_HORIZON,
};
let gc_period = match self.gc_period.as_ref() {
Some(period_str) => humantime::parse_duration(period_str)?,
None => DEFAULT_GC_PERIOD,
};
let pg_distrib_dir = match self.pg_distrib_dir.as_ref() {
Some(pg_distrib_dir_str) => PathBuf::from(pg_distrib_dir_str),
None => env::current_dir()?.join("tmp_install"),
};
let auth_validation_public_key_path = self
.auth_validation_public_key_path
.as_ref()
.map(PathBuf::from);
let auth_type = self
.auth_type
.as_ref()
.map_or(Ok(AuthType::Trust), |auth_type| {
AuthType::from_str(&auth_type)
})?;
if !pg_distrib_dir.join("bin/postgres").exists() {
anyhow::bail!("Can't find postgres binary at {:?}", pg_distrib_dir);
}
if auth_type == AuthType::ZenithJWT {
ensure!(
auth_validation_public_key_path.is_some(),
"Missing auth_validation_public_key_path when auth_type is ZenithJWT"
);
let path_ref = auth_validation_public_key_path.as_ref().unwrap();
ensure!(
path_ref.exists(),
format!("Can't find auth_validation_public_key at {:?}", path_ref)
);
}
let repository_format = match self.repository_format.as_ref() {
Some(repo_format_str) if repo_format_str == "rocksdb" => RepositoryFormat::RocksDb,
Some(repo_format_str) if repo_format_str == "layered" => RepositoryFormat::Layered,
Some(repo_format_str) => anyhow::bail!(
"invalid --repository-format '{}', must be 'rocksdb' or 'layered'",
repo_format_str
),
None => RepositoryFormat::Layered, // default
};
Ok(PageServerConf {
daemonize: false,
listen_addr,
http_endpoint_addr,
gc_horizon,
gc_period,
superuser: String::from(DEFAULT_SUPERUSER),
workdir,
pg_distrib_dir,
auth_validation_public_key_path,
auth_type,
repository_format,
})
}
}
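// A hedged sketch, not part of the original file: the pageserver.toml written by
// `--init` is just CfgFileParams serialized as TOML, so it can be read back with
// toml::from_str. The field values below are made up.
#[cfg(test)]
mod cfg_file_params_sketch {
    use super::CfgFileParams;

    #[test]
    fn parses_minimal_pageserver_toml() {
        let sample = r#"
            listen_addr = '127.0.0.1:64000'
            repository_format = 'layered'
        "#;
        let params: CfgFileParams = toml::from_str(sample).expect("valid TOML");
        // Keys that are absent in the file simply stay None.
        assert_eq!(params.listen_addr.as_deref(), Some("127.0.0.1:64000"));
        assert_eq!(params.repository_format.as_deref(), Some("layered"));
        assert!(params.gc_horizon.is_none());
    }
}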
fn main() -> Result<()> {
let arg_matches = App::new("Zenith page server")
@@ -32,13 +178,6 @@ fn main() -> Result<()> {
.takes_value(true)
.help("listen for incoming page requests on ip:port (default: 127.0.0.1:5430)"),
)
.arg(
Arg::with_name("interactive")
.short("i")
.long("interactive")
.takes_value(false)
.help("Interactive mode"),
)
.arg(
Arg::with_name("daemonize")
.short("d")
@@ -46,6 +185,12 @@ fn main() -> Result<()> {
.takes_value(false)
.help("Run in the background"),
)
.arg(
Arg::with_name("init")
.long("init")
.takes_value(false)
.help("Initialize pageserver repo"),
)
.arg(
Arg::with_name("gc_horizon")
.long("gc_horizon")
@@ -58,90 +203,117 @@ fn main() -> Result<()> {
.takes_value(true)
.help("Interval between garbage collector iterations"),
)
.arg(
Arg::with_name("workdir")
.short("D")
.long("workdir")
.takes_value(true)
.help("Working directory for the pageserver"),
)
.arg(
Arg::with_name("postgres-distrib")
.long("postgres-distrib")
.takes_value(true)
.help("Postgres distribution directory"),
)
.arg(
Arg::with_name("create-tenant")
.long("create-tenant")
.takes_value(true)
.help("Create tenant during init")
.requires("init"),
)
.arg(
Arg::with_name("auth-validation-public-key-path")
.long("auth-validation-public-key-path")
.takes_value(true)
.help("Path to public key used to validate jwt signature"),
)
.arg(
Arg::with_name("auth-type")
.long("auth-type")
.takes_value(true)
.help("Authentication scheme type. One of: Trust, MD5, ZenithJWT"),
)
.arg(
Arg::with_name("repository-format")
.long("repository-format")
.takes_value(true)
.help("Which repository implementation to use, 'rocksdb' or 'layered'"),
)
.get_matches();
let mut conf = PageServerConf {
daemonize: false,
interactive: false,
gc_horizon: DEFAULT_GC_HORIZON,
gc_period: Duration::from_secs(DEFAULT_GC_PERIOD_SEC),
listen_addr: "127.0.0.1:5430".parse().unwrap(),
let workdir = Path::new(arg_matches.value_of("workdir").unwrap_or(".zenith"));
let cfg_file_path = workdir.canonicalize()?.join("pageserver.toml");
let args_params = CfgFileParams::from_args(&arg_matches);
let init = arg_matches.is_present("init");
let create_tenant = arg_matches.value_of("create-tenant");
let params = if init {
// We're initializing the repo, so there's no config file yet
args_params
} else {
// Supplement the CLI arguments with the config file
let cfg_file_contents = std::fs::read_to_string(&cfg_file_path)?;
let file_params: CfgFileParams = toml::from_str(&cfg_file_contents)?;
args_params.or(file_params)
};
if arg_matches.is_present("daemonize") {
conf.daemonize = true;
}
// Set CWD to workdir for non-daemon modes
env::set_current_dir(&workdir)?;
if arg_matches.is_present("interactive") {
conf.interactive = true;
}
// Ensure the config is valid, even if just init-ing
let mut conf = params.try_into_config()?;
if conf.daemonize && conf.interactive {
eprintln!("--daemonize is not allowed with --interactive: choose one");
conf.daemonize = arg_matches.is_present("daemonize");
if init && conf.daemonize {
eprintln!("--daemonize cannot be used with --init");
exit(1);
}
if let Some(addr) = arg_matches.value_of("listen") {
conf.listen_addr = addr.parse()?;
// The configuration is all set up now. Turn it into a 'static
// that can be freely stored in structs and passed across threads
// as a ref.
let conf: &'static PageServerConf = Box::leak(Box::new(conf));
// Create repo and exit if init was requested
if init {
branches::init_pageserver(conf, create_tenant)?;
// write the config file
let cfg_file_contents = toml::to_string_pretty(&params)?;
// TODO support enable-auth flag
std::fs::write(&cfg_file_path, cfg_file_contents)?;
return Ok(());
}
if let Some(horizon) = arg_matches.value_of("gc_horizon") {
conf.gc_horizon = horizon.parse()?;
}
if let Some(period) = arg_matches.value_of("gc_period") {
conf.gc_period = parse(period)?;
}
start_pageserver(&conf)
start_pageserver(conf)
}
fn start_pageserver(conf: &PageServerConf) -> Result<()> {
fn start_pageserver(conf: &'static PageServerConf) -> Result<()> {
// Initialize logger
let _scope_guard = init_logging(&conf)?;
let (_scope_guard, log_file) = logger::init_logging(&conf, "pageserver.log")?;
let _log_guard = slog_stdlog::init()?;
// Note: this `info!(...)` macro comes from `log` crate
info!("standard logging redirected to slog");
let tui_thread: Option<thread::JoinHandle<()>>;
if conf.interactive {
// Initialize the UI
tui_thread = Some(
thread::Builder::new()
.name("UI thread".into())
.spawn(|| {
let _ = tui::ui_main();
})
.unwrap(),
);
//threads.push(tui_thread);
} else {
tui_thread = None;
}
// TODO: Check that it looks like a valid repository before going further
if conf.daemonize {
info!("daemonizing...");
let repodir = PathBuf::from(zenith_repo_dir());
// There shouldn't be any logging to stdout/stderr. Redirect it to the main log so
// that we will see any accidental manual fprintf's or backtraces.
let log_filename = repodir.join("pageserver.log");
let stdout = OpenOptions::new()
.create(true)
.append(true)
.open(&log_filename)
.with_context(|| format!("failed to open {:?}", &log_filename))?;
let stderr = OpenOptions::new()
.create(true)
.append(true)
.open(&log_filename)
.with_context(|| format!("failed to open {:?}", &log_filename))?;
let stdout = log_file.try_clone().unwrap();
let stderr = log_file;
let daemonize = Daemonize::new()
.pid_file(repodir.join("pageserver.pid"))
.working_directory(repodir)
.pid_file("pageserver.pid")
.working_directory(".")
.stdout(stdout)
.stderr(stderr);
@@ -149,95 +321,42 @@ fn start_pageserver(conf: &PageServerConf) -> Result<()> {
Ok(_) => info!("Success, daemonized"),
Err(e) => error!("Error, {}", e),
}
} else {
// change into the repository directory. In daemon mode, Daemonize
// does this for us.
let repodir = zenith_repo_dir();
std::env::set_current_dir(&repodir)?;
info!("Changed current directory to repository in {:?}", &repodir);
}
let mut threads = Vec::new();
// Spawn a new thread for the http endpoint
thread::Builder::new()
.name("Metrics thread".into())
.spawn(move || http_endpoint::thread_main(conf.http_endpoint_addr.clone()))?;
// TODO: Check that it looks like a valid repository before going further
// Check that we can bind to address before starting threads to simplify shutdown
// sequence if port is occupied.
info!("Starting pageserver on {}", conf.listen_addr);
let pageserver_listener = TcpListener::bind(conf.listen_addr.clone())?;
// Create directory for wal-redo datadirs
match fs::create_dir("wal-redo") {
Ok(_) => {}
Err(e) => match e.kind() {
io::ErrorKind::AlreadyExists => {}
_ => {
anyhow::bail!("Failed to create wal-redo data directory: {}", e);
}
},
}
// Initialize page cache, this will spawn walredo_thread
page_cache::init(conf);
// GetPage@LSN requests are served by another thread. (It uses async I/O,
// but the code in page_service sets up it own thread pool for that)
let conf_copy = conf.clone();
let page_server_thread = thread::Builder::new()
// initialize authentication for incoming connections
let auth = match &conf.auth_type {
AuthType::Trust | AuthType::MD5 => Arc::new(None),
AuthType::ZenithJWT => {
// unwrap is ok because check is performed when creating config, so path is set and file exists
let key_path = conf.auth_validation_public_key_path.as_ref().unwrap();
Arc::new(Some(JwtAuth::from_key_path(key_path)?))
}
};
info!("Using auth: {:#?}", conf.auth_type);
// Spawn a thread to listen for connections. It will spawn further threads
// for each connection.
let page_service_thread = thread::Builder::new()
.name("Page Service thread".into())
.spawn(move || {
// thread code
page_service::thread_main(&conf_copy);
})
.unwrap();
threads.push(page_server_thread);
page_service::thread_main(conf, auth, pageserver_listener, conf.auth_type)
})?;
page_service_thread
.join()
.expect("Page service thread has panicked")?;
if let Some(tui_thread) = tui_thread {
// The TUI thread exits when the user asks to Quit.
tui_thread.join().unwrap();
} else {
// In non-interactive mode, wait forever.
for t in threads {
t.join().unwrap()
}
}
Ok(())
}
fn init_logging(conf: &PageServerConf) -> Result<slog_scope::GlobalLoggerGuard, io::Error> {
if conf.interactive {
Ok(tui::init_logging())
} else if conf.daemonize {
let log = zenith_repo_dir().join("pageserver.log");
let log_file = OpenOptions::new()
.create(true)
.append(true)
.open(&log)
.map_err(|err| {
// We failed to initialize logging, so we can't log this message with error!
eprintln!("Could not create log file {:?}: {}", log, err);
err
})?;
let decorator = slog_term::PlainSyncDecorator::new(log_file);
let drain = slog_term::CompactFormat::new(decorator).build();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Debug) {
return true;
}
false
});
let drain = std::sync::Mutex::new(drain).fuse();
let logger = slog::Logger::root(drain, slog::o!());
Ok(slog_scope::set_global_logger(logger))
} else {
let decorator = slog_term::TermDecorator::new().build();
let drain = slog_term::FullFormat::new(decorator).build().fuse();
let drain = slog_async::Async::new(drain).chan_size(1000).build().fuse();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver")
{
return true;
}
false
})
.fuse();
let logger = slog::Logger::root(drain, slog::o!());
Ok(slog_scope::set_global_logger(logger))
}
}

pageserver/src/branches.rs

@@ -0,0 +1,404 @@
//!
//! Branch management code
//!
// TODO: move all paths construction to conf impl
//
use anyhow::{bail, ensure, Context, Result};
use postgres_ffi::ControlFileData;
use serde::{Deserialize, Serialize};
use std::{
fs,
path::Path,
process::{Command, Stdio},
str::FromStr,
sync::Arc,
};
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use log::*;
use zenith_utils::lsn::Lsn;
use crate::logger;
use crate::object_repository::ObjectRepository;
use crate::page_cache;
use crate::restore_local_repo;
use crate::walredo::WalRedoManager;
use crate::{repository::Repository, PageServerConf, RepositoryFormat};
#[derive(Serialize, Deserialize, Clone)]
pub struct BranchInfo {
pub name: String,
pub timeline_id: ZTimelineId,
pub latest_valid_lsn: Option<Lsn>,
pub ancestor_id: Option<String>,
pub ancestor_lsn: Option<String>,
}
#[derive(Debug, Clone, Copy)]
pub struct PointInTime {
pub timelineid: ZTimelineId,
pub lsn: Lsn,
}
pub fn init_pageserver(conf: &'static PageServerConf, create_tenant: Option<&str>) -> Result<()> {
// Initialize logger
let (_scope_guard, _log_file) = logger::init_logging(&conf, "pageserver.log")?;
let _log_guard = slog_stdlog::init()?;
if let Some(tenantid) = create_tenant {
let tenantid = ZTenantId::from_str(tenantid)?;
println!("initializing tenantid {}", tenantid);
create_repo(
conf,
tenantid,
Arc::new(crate::walredo::DummyRedoManager {}),
)
.with_context(|| "failed to create repo")?;
}
fs::create_dir_all(conf.tenants_path())?;
println!("pageserver init succeeded");
Ok(())
}
pub fn create_repo(
conf: &'static PageServerConf,
tenantid: ZTenantId,
wal_redo_manager: Arc<dyn WalRedoManager + Send + Sync>,
) -> Result<Arc<dyn Repository>> {
let repo_dir = conf.tenant_path(&tenantid);
if repo_dir.exists() {
bail!("repo for {} already exists", tenantid)
}
// top-level dir may exist if we are creating it through CLI
fs::create_dir_all(&repo_dir)
.with_context(|| format!("could not create directory {}", repo_dir.display()))?;
// Note: this `info!(...)` macro comes from `log` crate
info!("standard logging redirected to slog");
fs::create_dir(conf.timelines_path(&tenantid))?;
fs::create_dir_all(conf.branches_path(&tenantid))?;
fs::create_dir_all(conf.tags_path(&tenantid))?;
info!("created directory structure in {}", repo_dir.display());
let tli = create_timeline(conf, None, &tenantid)?;
// We don't use page_cache here, because we don't want to spawn the WAL redo thread during
// repository initialization.
//
// FIXME: That caused trouble, because the WAL redo thread launched initdb in the background,
// and it kept running even after the "zenith init" had exited. In tests, we started the
// page server immediately after that, so that initdb was still running in the background,
// and we failed to run initdb again in the same directory. This has been solved for the
// rapid init+start case now, but the general race condition remains if you restart the
// server quickly.
let repo: Arc<dyn Repository + Sync + Send> =
match conf.repository_format {
RepositoryFormat::Layered => Arc::new(
crate::layered_repository::LayeredRepository::new(conf, wal_redo_manager, tenantid),
),
RepositoryFormat::RocksDb => {
let obj_store = crate::rocksdb_storage::RocksObjectStore::create(conf, &tenantid)?;
Arc::new(ObjectRepository::new(
conf,
Arc::new(obj_store),
wal_redo_manager,
tenantid,
))
}
};
// Load data into pageserver
// TODO To implement zenith import we need to
// move data loading out of create_repo()
bootstrap_timeline(conf, tenantid, tli, &*repo)?;
Ok(repo)
}
// Returns checkpoint LSN from controlfile
fn get_lsn_from_controlfile(path: &Path) -> Result<Lsn> {
// Read control file to extract the LSN
let controlfile_path = path.join("global").join("pg_control");
let controlfile = ControlFileData::decode(&fs::read(controlfile_path)?)?;
let lsn = controlfile.checkPoint;
Ok(Lsn(lsn))
}
// Create the cluster temporarily in an 'initdbpath' directory inside the repository
// to get bootstrap data for timeline initialization.
//
fn run_initdb(conf: &'static PageServerConf, initdbpath: &Path) -> Result<()> {
info!("running initdb... ");
let initdb_path = conf.pg_bin_dir().join("initdb");
let initdb_output = Command::new(initdb_path)
.args(&["-D", initdbpath.to_str().unwrap()])
.args(&["-U", &conf.superuser])
.arg("--no-instructions")
.env_clear()
.env("LD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.stdout(Stdio::null())
.output()
.with_context(|| "failed to execute initdb")?;
if !initdb_output.status.success() {
anyhow::bail!(
"initdb failed: '{}'",
String::from_utf8_lossy(&initdb_output.stderr)
);
}
info!("initdb succeeded");
Ok(())
}
//
// - run initdb to init temporary instance and get bootstrap data
// - after initialization complete, remove the temp dir.
//
fn bootstrap_timeline(
conf: &'static PageServerConf,
tenantid: ZTenantId,
tli: ZTimelineId,
repo: &dyn Repository,
) -> Result<()> {
let initdb_path = conf.tenant_path(&tenantid).join("tmp");
// Init a temporary repo to get bootstrap data
run_initdb(conf, &initdb_path)?;
let pgdata_path = initdb_path;
let lsn = get_lsn_from_controlfile(&pgdata_path)?;
info!("bootstrap_timeline {:?} at lsn {}", pgdata_path, lsn);
let timeline = repo.create_empty_timeline(tli, lsn)?;
restore_local_repo::import_timeline_from_postgres_datadir(&pgdata_path, &*timeline, lsn)?;
let wal_dir = pgdata_path.join("pg_wal");
restore_local_repo::import_timeline_wal(&wal_dir, &*timeline, timeline.get_last_record_lsn())?;
println!(
"created initial timeline {} timeline.lsn {}",
tli,
timeline.get_last_record_lsn()
);
let data = tli.to_string();
fs::write(conf.branch_path("main", &tenantid), data)?;
println!("created main branch");
// Remove temp dir. We don't need it anymore
fs::remove_dir_all(pgdata_path)?;
Ok(())
}
pub(crate) fn get_tenants(conf: &PageServerConf) -> Result<Vec<String>> {
let tenants_dir = conf.tenants_path();
std::fs::read_dir(&tenants_dir)?
.map(|dir_entry_res| {
let dir_entry = dir_entry_res?;
ensure!(dir_entry.file_type()?.is_dir());
Ok(dir_entry.file_name().to_str().unwrap().to_owned())
})
.collect()
}
pub(crate) fn get_branches(conf: &PageServerConf, tenantid: &ZTenantId) -> Result<Vec<BranchInfo>> {
let repo = page_cache::get_repository_for_tenant(tenantid)?;
// Each branch has a corresponding record (a text file containing its timeline_id)
// in the refs/branches directory.
let branches_dir = conf.branches_path(tenantid);
std::fs::read_dir(&branches_dir)?
.map(|dir_entry_res| {
let dir_entry = dir_entry_res?;
let name = dir_entry.file_name().to_str().unwrap().to_string();
let timeline_id = std::fs::read_to_string(dir_entry.path())?.parse::<ZTimelineId>()?;
let latest_valid_lsn = repo
.get_timeline(timeline_id)
.map(|timeline| timeline.get_last_valid_lsn())
.ok();
let ancestor_path = conf.ancestor_path(&timeline_id, tenantid);
let mut ancestor_id: Option<String> = None;
let mut ancestor_lsn: Option<String> = None;
if ancestor_path.exists() {
let ancestor = std::fs::read_to_string(ancestor_path)?;
let mut strings = ancestor.split('@');
ancestor_id = Some(
strings
.next()
.with_context(|| "wrong branch ancestor point in time format")?
.to_owned(),
);
ancestor_lsn = Some(
strings
.next()
.with_context(|| "wrong branch ancestor point in time format")?
.to_owned(),
);
}
Ok(BranchInfo {
name,
timeline_id,
latest_valid_lsn,
ancestor_id,
ancestor_lsn,
})
})
.collect()
}
pub(crate) fn create_branch(
conf: &PageServerConf,
branchname: &str,
startpoint_str: &str,
tenantid: &ZTenantId,
) -> Result<BranchInfo> {
let repo = page_cache::get_repository_for_tenant(tenantid)?;
if conf.branch_path(branchname, tenantid).exists() {
anyhow::bail!("branch {} already exists", branchname);
}
let mut startpoint = parse_point_in_time(conf, startpoint_str, tenantid)?;
if startpoint.lsn == Lsn(0) {
// Find end of WAL on the old timeline
let end_of_wal = repo
.get_timeline(startpoint.timelineid)?
.get_last_record_lsn();
println!("branching at end of WAL: {}", end_of_wal);
startpoint.lsn = end_of_wal;
}
// create a new timeline directory for it
let newtli = create_timeline(conf, Some(startpoint), tenantid)?;
// Let the Repository backend do its initialization
repo.branch_timeline(startpoint.timelineid, newtli, startpoint.lsn)?;
// Remember the human-readable branch name for the new timeline.
// FIXME: there's a race condition, if you create a branch with the same
// name concurrently.
let data = newtli.to_string();
fs::write(conf.branch_path(&branchname, tenantid), data)?;
Ok(BranchInfo {
name: branchname.to_string(),
timeline_id: newtli,
latest_valid_lsn: Some(startpoint.lsn),
ancestor_id: None,
ancestor_lsn: None,
})
}
//
// Parse user-given string that represents a point-in-time.
//
// We support multiple variants:
//
// Raw timeline id in hex, meaning the end of that timeline:
// bc62e7d612d0e6fe8f99a6dd2f281f9d
//
// A specific LSN on a timeline:
// bc62e7d612d0e6fe8f99a6dd2f281f9d@2/15D3DD8
//
// Same, with a human-friendly branch name:
// main
// main@2/15D3DD8
//
// Human-friendly tag name:
// mytag
//
//
fn parse_point_in_time(
conf: &PageServerConf,
s: &str,
tenantid: &ZTenantId,
) -> Result<PointInTime> {
let mut strings = s.split('@');
let name = strings.next().unwrap();
let lsn: Option<Lsn>;
if let Some(lsnstr) = strings.next() {
lsn = Some(
Lsn::from_str(lsnstr).with_context(|| "invalid LSN in point-in-time specification")?,
);
} else {
lsn = None
}
// Check if it's a tag
if lsn.is_none() {
let tagpath = conf.tag_path(name, &tenantid);
if tagpath.exists() {
let pointstr = fs::read_to_string(tagpath)?;
return parse_point_in_time(conf, &pointstr, &tenantid);
}
}
// Check if it's a branch
// Check if it's branch @ LSN
let branchpath = conf.branch_path(name, &tenantid);
if branchpath.exists() {
let pointstr = fs::read_to_string(branchpath)?;
let mut result = parse_point_in_time(conf, &pointstr, &tenantid)?;
result.lsn = lsn.unwrap_or(Lsn(0));
return Ok(result);
}
// Check if it's a timelineid
// Check if it's timelineid @ LSN
if let Ok(timelineid) = ZTimelineId::from_str(name) {
let tlipath = conf.timeline_path(&timelineid, &tenantid);
if tlipath.exists() {
return Ok(PointInTime {
timelineid,
lsn: lsn.unwrap_or(Lsn(0)),
});
}
}
bail!("could not parse point-in-time {}", s);
}
fn create_timeline(
conf: &PageServerConf,
ancestor: Option<PointInTime>,
tenantid: &ZTenantId,
) -> Result<ZTimelineId> {
// Create initial timeline
let timelineid = ZTimelineId::generate();
let timelinedir = conf.timeline_path(&timelineid, tenantid);
fs::create_dir(&timelinedir)?;
fs::create_dir(&timelinedir.join("wal"))?;
if let Some(ancestor) = ancestor {
let data = format!("{}@{}", ancestor.timelineid, ancestor.lsn);
fs::write(timelinedir.join("ancestor"), data)?;
}
Ok(timelineid)
}

File diff suppressed because it is too large

@@ -0,0 +1,298 @@
# Overview
The on-disk format is based on immutable files. The page server
receives a stream of incoming WAL, parses the WAL records to determine
which pages they apply to, and accumulates the incoming changes in
memory. Every now and then, the accumulated changes are written out to
new files.
The files are called "snapshot files". Each snapshot file corresponds
to one PostgreSQL relation fork. The snapshot files for each timeline
are stored in the timeline's subdirectory under
.zenith/tenants/<tenantid>/timelines.
The files are named like this:
rel_<spcnode>_<dbnode>_<relnode>_<forknum>_<start LSN>_<end LSN>
For example:
rel_1663_13990_2609_0_000000000169C348_0000000001702000
Some non-relation files are also stored in repository. For example,
a CLOG segment would be named like this:
pg_xact_0000_00000000198B06B0_00000000198C2550
There is no difference in how the relation and non-relation files are
managed, except that the first part of file names is different.
Internally, the relations and non-relation files that are managed in
the versioned store are together called "relishes".
Each snapshot file contains a full snapshot, that is, full copy of all
pages in the relation, as of the "start LSN". It also contains all WAL
records applicable to the relation between the start and end
LSNs. With this information, the page server can reconstruct any page
version of the relation in the LSN range.
If a file has been dropped, the last snapshot file for it is created
with the _DROPPED suffix, e.g.
rel_1663_13990_2609_0_000000000169C348_0000000001702000_DROPPED
In addition to the relations, with the "rel_*" prefix, we use the same
format for storing various smaller files from the PostgreSQL data
directory. They use different prefixes, and the naming scheme up to the
LSN range varies. The Zenith source code uses the term "relish" to mean
"a relation, or other file that's treated like a relation in the
storage".
## Notation used in this document
The full path of a snapshot file looks like this:
.zenith/tenants/941ddc8604413b88b3d208bddf90396c/timelines/4af489b06af8eed9e27a841775616962/rel_1663_13990_2609_0_000000000169C348_0000000001702000
For simplicity, the examples below use a simplified notation for the
paths. The tenant ID is left out, the timeline ID is replaced with
the human-readable branch name, and spcnode+dbnode+relnode+forknum with
a human-readable table name. The LSNs are also shorter. For example, a
snapshot file for 'orders' table on 'main' branch, with LSN range
100-200 would be:
main/orders_100_200
# Creating snapshot files
Let's start with a simple example with a system that contains one
branch called 'main' and two tables, 'orders' and 'customers'. The end
of WAL is currently at LSN 250. In this starting situation, you would
have two files on disk:
main/orders_100_200
main/customers_100_200
In addition to those files, the recent changes between LSN 200 and the
end of WAL at 250 are kept in memory. If the page server crashes, the
latest records between 200-250 need to be re-read from the WAL.
Whenever enough WAL has been accumulated in memory, the page server
writes out the changes in memory into new snapshot files. This process
is called "checkpointing" (not to be confused with the PostgreSQL
checkpoints, that's a different thing). The page server only creates
snapshot files for relations that have been modified since the last
checkpoint. For example, if the current end of WAL is at LSN 450, and
the last checkpoint happened at LSN 400 but there haven't been any
recent changes to the 'customers' table, you would have these files on
disk:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_100_200
If the customers table is modified later, a new file is created for it
at the next checkpoint. The new file will cover the "gap" from the
last snapshot file, so the LSN ranges are always contiguous:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_100_200
main/customers_200_500
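To make the bookkeeping concrete, here is a minimal sketch of how those
contiguous ranges fall out: each relation remembers where its last
snapshot file ended, and a checkpoint only creates files for relations
modified since then. The helper and field names below are illustrative,
plain u64 values stand in for LSNs, and the real logic lives in the
layered repository's checkpoint code rather than in a helper like this.

```rust
use std::collections::HashMap;

// Plan the snapshot files a checkpoint would create. Only relations in
// 'modified' get a new file, and each new file starts where that
// relation's previous file ended, so the LSN ranges stay contiguous.
fn plan_checkpoint<'a>(
    last_end_lsn: &HashMap<&'a str, u64>,
    modified: &[&'a str],
    checkpoint_lsn: u64,
) -> Vec<String> {
    modified
        .iter()
        .map(|rel| {
            let start = *last_end_lsn
                .get(rel)
                .expect("sketch assumes every modified relish already has a snapshot file");
            format!("main/{}_{}_{}", rel, start, checkpoint_lsn)
        })
        .collect()
}

fn main() {
    let mut last_end = HashMap::new();
    last_end.insert("orders", 400);
    last_end.insert("customers", 200);
    // Only 'customers' was modified since the last checkpoint; new WAL ends at 500.
    let new_files = plan_checkpoint(&last_end, &["customers"], 500);
    assert_eq!(new_files, vec!["main/customers_200_500".to_string()]);
}
```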
## Reading page versions
Whenever a GetPage@LSN request comes in from the compute node, the
page server needs to reconstruct the requested page, as it was at the
requested LSN. To do that, the page server first checks the recent
in-memory layer; if the requested page version is found there, it can
be returned immediately without looking at the files on
disk. Otherwise the page server needs to locate the snapshot file that
contains the requested page version.
For example, if a request comes in for table 'orders' at LSN 250, the
page server would load the 'main/orders_200_300' file into memory, and
reconstruct and return the requested page from it, as it was at
LSN 250. Because the snapshot file consists of a full image of the
relation at the start LSN and the WAL, reconstructing the page
involves replaying any WAL records applicable to the page between LSNs
200-250, starting from the base image at LSN 200.
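As a rough illustration of that read path, here is a sketch of
reconstructing one page from a single snapshot file: start from the
materialized image taken at the file's start LSN, then replay the WAL
records for that page up to the requested LSN. The `SnapshotFile`,
`WalRecord` and `apply_wal_record` names are made up for this sketch;
in the page server the equivalent work is done by the snapshot layer
together with the WAL redo manager.

```rust
use std::collections::HashMap;

// Hypothetical, simplified types; plain u64 values stand in for LSNs.
struct WalRecord {
    lsn: u64,
    payload: Vec<u8>,
}
struct SnapshotFile {
    start_lsn: u64,
    end_lsn: u64,
    base_images: HashMap<u32, Vec<u8>>, // full page images at start_lsn
    wal: Vec<(u32, WalRecord)>,         // (block number, record), ordered by LSN
}

fn reconstruct_page(file: &SnapshotFile, blknum: u32, lsn: u64) -> Vec<u8> {
    assert!(file.start_lsn <= lsn && lsn <= file.end_lsn);
    // Start from the materialized image at the file's start LSN...
    let mut page = file.base_images[&blknum].clone();
    // ...and replay every WAL record for this block up to the requested LSN.
    for (blk, rec) in &file.wal {
        if *blk == blknum && rec.lsn > file.start_lsn && rec.lsn <= lsn {
            apply_wal_record(&mut page, rec);
        }
    }
    page
}

fn apply_wal_record(_page: &mut Vec<u8>, _rec: &WalRecord) {
    // Stand-in: in the real system this is done by the Postgres WAL redo process.
}
```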
A request at a file boundary can be satisfied using either file. For
example, if there are two files on disk:
main/orders_100_200
main/orders_200_300
and a request comes in with LSN 200, either file can be used for it. It
is better to use the later file, however, because it contains an
already materialized version of all the pages at LSN 200. Using the
first file, you would need to apply any WAL records between 100 and
200 to reconstruct the requested page.
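The "prefer the later file" rule falls out naturally if files are
looked up by their start LSN: take the file with the greatest start LSN
that is less than or equal to the request LSN. A small sketch, using a
plain BTreeMap keyed by start LSN and illustrative file names (the real
lookup is the layer map's range query shown later in this change):

```rust
use std::collections::BTreeMap;

// Pick the file with the greatest start LSN that is <= the request LSN.
// At an exact boundary (request 200, files 100_200 and 200_300) this
// picks the later file, which already holds materialized images at LSN 200.
fn pick_file<'a>(files: &BTreeMap<u64, &'a str>, request_lsn: u64) -> Option<&'a str> {
    files.range(..=request_lsn).next_back().map(|(_, name)| *name)
}

fn main() {
    let mut files = BTreeMap::new();
    files.insert(100, "main/orders_100_200");
    files.insert(200, "main/orders_200_300");
    assert_eq!(pick_file(&files, 200), Some("main/orders_200_300"));
    assert_eq!(pick_file(&files, 150), Some("main/orders_100_200"));
}
```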
# Multiple branches
Imagine that a child branch is created at LSN 250:
@250
----main--+-------------------------->
\
+---child-------------->
Then, the 'orders' table is updated differently on the 'main' and
'child' branches. You now have this situation on disk:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_100_200
child/orders_250_300
child/orders_300_400
Because the 'customers' table hasn't been modified on the child
branch, there is no file for it there. If you request a page for it on
the 'child' branch, the page server will not find any snapshot file
for it in the 'child' directory, so it will recurse to look into the
parent 'main' branch instead.
From the 'child' branch's point of view, the history for each relation
is linear, and the request's LSN identifies unambiguously which file
you need to look at. For example, the history for the 'orders' table
on the 'main' branch consists of these files:
main/orders_100_200
main/orders_200_300
main/orders_300_400
And from the 'child' branch's point of view, it consists of these
files:
main/orders_100_200
main/orders_200_300
child/orders_250_300
child/orders_300_400
The branch metadata includes the point where the child branch was
created, LSN 250. If a page request comes with LSN 275, we read the
page version from the 'child/orders_250_300' file. If the request LSN
is 225, we read it from the 'main/orders_200_300' file instead. The
page versions between 250-300 in the 'main/orders_200_300' file are
ignored when operating on the child branch.
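A sketch of that branch-point decision, using a simplified timeline
struct that only records its ancestor and the branch LSN (the names are
illustrative, and the real resolution happens inside the layered
repository):

```rust
// Simplified model: each timeline knows its ancestor and the LSN at which it branched off.
struct Timeline<'a> {
    name: &'a str,
    ancestor: Option<(&'a Timeline<'a>, u64)>, // (parent timeline, branch point LSN)
}

// Walk up the ancestry until we reach the timeline whose own history covers the LSN.
// A request at LSN 275 on 'child' (branched at 250) is served from the child's own
// files; a request at LSN 225 falls below the branch point and recurses to 'main'.
fn timeline_for_lsn<'a>(mut tli: &'a Timeline<'a>, lsn: u64) -> &'a str {
    while let Some((parent, branch_lsn)) = tli.ancestor {
        if lsn < branch_lsn {
            tli = parent;
        } else {
            break;
        }
    }
    tli.name
}

fn main() {
    let main_branch = Timeline { name: "main", ancestor: None };
    let child = Timeline { name: "child", ancestor: Some((&main_branch, 250)) };
    assert_eq!(timeline_for_lsn(&child, 275), "child");
    assert_eq!(timeline_for_lsn(&child, 225), "main");
}
```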
Note: It doesn't make any difference if the child branch is created
when the end of the main branch was at LSN 250, or later when the tip of
the main branch had already moved on. The latter case, creating a
branch at a historic LSN, is how we support PITR in Zenith.
# Garbage collection
In this scheme, we keep creating new snapshot files over time. We also
need a mechanism to remove old files that are no longer needed,
because disk space isn't infinite.
What files are still needed? Currently, the page server supports PITR
and branching from any branch at any LSN that is "recent enough" from
the tip of the branch. "Recent enough" is defined as an LSN horizon,
which by default is 64 MB. (See DEFAULT_GC_HORIZON). For this
example, let's assume that the LSN horizon is 150 units.
Let's look at the single branch scenario again. Imagine that the end
of the branch is LSN 525, so that the GC horizon is currently at
525-150 = 375
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/orders_400_500
main/customers_100_200
We can remove files 'main/orders_100_200' and 'main/orders_200_300',
because the end LSNs of those files are older than GC horizon 375, and
there are more recent snapshot files for the table. 'main/orders_300_400'
and 'main/orders_400_500' are still within the horizon, so they must be
retained. 'main/customers_100_200' is old enough, but it cannot be
removed because there is no newer snapshot file for the table.
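A sketch of that retention rule for the single-branch case: a file may
be removed only if it is older than the GC horizon and a newer snapshot
file exists for the same relation. Field and function names are
illustrative, and branch points, discussed next, add further conditions
that this sketch ignores.

```rust
// A snapshot file, reduced to the fields the GC decision needs.
struct FileInfo<'a> {
    rel: &'a str, // e.g. "orders"
    end_lsn: u64, // e.g. 300 for "main/orders_200_300"
}

// A file can go only if it has fallen behind the horizon AND there is a
// newer snapshot file for the same relation to fall back to.
fn can_remove(file: &FileInfo, horizon_lsn: u64, all: &[FileInfo]) -> bool {
    let newer_exists = all
        .iter()
        .any(|f| f.rel == file.rel && f.end_lsn > file.end_lsn);
    file.end_lsn < horizon_lsn && newer_exists
}

fn main() {
    let files = [
        FileInfo { rel: "orders", end_lsn: 200 },
        FileInfo { rel: "orders", end_lsn: 300 },
        FileInfo { rel: "orders", end_lsn: 400 },
        FileInfo { rel: "orders", end_lsn: 500 },
        FileInfo { rel: "customers", end_lsn: 200 },
    ];
    let horizon = 375;
    // orders_100_200 and orders_200_300 can go; customers_100_200 must stay.
    assert!(can_remove(&files[0], horizon, &files));
    assert!(can_remove(&files[1], horizon, &files));
    assert!(!can_remove(&files[4], horizon, &files));
}
```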
Things get slightly more complicated with multiple branches. All of
the above still holds, but in addition to recent files we must also
retain older snapshot files that are still needed by child branches.
For example, if a child branch is created at LSN 150, and the 'customers'
table is updated on the branch, you would have these files:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/orders_400_500
main/customers_100_200
child/customers_150_300
In this situation, the 'main/orders_100_200' file cannot be removed,
even though it is older than the GC horizon, because it is still
needed by the child branch. 'main/orders_200_300' can still be
removed. So after garbage collection, these files would remain:
main/orders_100_200
main/orders_300_400
main/orders_400_500
main/customers_100_200
child/customers_150_300
If 'orders' is modified later on the 'child' branch, we will create a
snapshot file for it on the child:
main/orders_100_200
main/orders_300_400
main/orders_400_500
main/customers_100_200
child/customers_150_300
child/orders_150_400
After this, the 'main/orders_100_200' file can be removed. It is no
longer needed by the child branch, because there is a newer snapshot
file there. TODO: This optimization hasn't been implemented! The GC
algorithm will currently keep the file on the 'main' branch anyway, for
as long as the child branch exists.
# TODO: On LSN ranges
In principle, each relation can be checkpointed separately, i.e. the
LSN ranges of the files don't need to line up. So this would be legal:
main/orders_100_200
main/orders_200_300
main/orders_300_400
main/customers_150_250
main/customers_250_500
However, the code currently always checkpoints all relations together.
So that situation doesn't arise in practice.
It would also be OK to have overlapping LSN ranges for the same relation:
main/orders_100_200
main/orders_200_300
main/orders_250_350
main/orders_300_400
The code that reads the snapshot files should cope with this, but this
situation doesn't arise either, because the checkpointing code never
does that. It could be useful, however, as a transient state when
garbage collecting around branch points, or explicit recovery
points. For example, if we start with this:
main/orders_100_200
main/orders_200_300
main/orders_300_400
And there is a branch or explicit recovery point at LSN 150, we could
replace 'main/orders_100_200' with 'main/orders_150_150' to keep a
snapshot only at that exact point that's still needed, removing the
other page versions around it. But such compaction has not been
implemented yet.


@@ -0,0 +1,534 @@
//!
//! An in-memory layer stores recently received page versions in memory. The page versions
//! are held in a BTreeMap, and there's another BTreeMap to track the size of the relation.
//!
use crate::layered_repository::storage_layer::Layer;
use crate::layered_repository::storage_layer::PageVersion;
use crate::layered_repository::SnapshotLayer;
use crate::relish::*;
use crate::repository::WALRecord;
use crate::walredo::WalRedoManager;
use crate::PageServerConf;
use crate::{ZTenantId, ZTimelineId};
use anyhow::{bail, Result};
use bytes::Bytes;
use log::*;
use std::collections::BTreeMap;
use std::ops::Bound::Included;
use std::sync::{Arc, Mutex};
use zenith_utils::lsn::Lsn;
static ZERO_PAGE: Bytes = Bytes::from_static(&[0u8; 8192]);
pub struct InMemoryLayer {
conf: &'static PageServerConf,
tenantid: ZTenantId,
timelineid: ZTimelineId,
rel: RelishTag,
///
/// This layer contains all the changes from 'start_lsn'. The
/// start is inclusive. There is no end LSN; we only use in-memory
/// layer at the end of a timeline.
///
start_lsn: Lsn,
/// The above fields never change. The parts that do change are in 'inner',
/// and protected by mutex.
inner: Mutex<InMemoryLayerInner>,
}
pub struct InMemoryLayerInner {
/// If this relation was dropped, remember when that happened.
drop_lsn: Option<Lsn>,
///
/// All versions of all pages in the layer are are kept here.
/// Indexed by block number and LSN.
///
page_versions: BTreeMap<(u32, Lsn), PageVersion>,
///
/// `relsizes` tracks the size of the relation at different points in time.
///
relsizes: BTreeMap<Lsn, u32>,
}
impl Layer for InMemoryLayer {
fn is_frozen(&self) -> bool {
return false;
}
fn get_timeline_id(&self) -> ZTimelineId {
return self.timelineid;
}
fn get_relish_tag(&self) -> RelishTag {
return self.rel;
}
fn get_start_lsn(&self) -> Lsn {
return self.start_lsn;
}
fn get_end_lsn(&self) -> Lsn {
return Lsn(u64::MAX);
}
fn is_dropped(&self) -> bool {
let inner = self.inner.lock().unwrap();
inner.drop_lsn.is_some()
}
/// Look up given page in the cache.
fn get_page_at_lsn(
&self,
walredo_mgr: &dyn WalRedoManager,
blknum: u32,
lsn: Lsn,
) -> Result<Bytes> {
// Scan the BTreeMap backwards, starting from the given entry.
let mut records: Vec<WALRecord> = Vec::new();
let mut page_img: Option<Bytes> = None;
let mut need_base_image_lsn: Option<Lsn> = Some(lsn);
{
let inner = self.inner.lock().unwrap();
let minkey = (blknum, Lsn(0));
let maxkey = (blknum, lsn);
let mut iter = inner
.page_versions
.range((Included(&minkey), Included(&maxkey)));
while let Some(((_blknum, entry_lsn), entry)) = iter.next_back() {
if let Some(img) = &entry.page_image {
page_img = Some(img.clone());
need_base_image_lsn = None;
break;
} else if let Some(rec) = &entry.record {
records.push(rec.clone());
if rec.will_init {
// This WAL record initializes the page, so no need to go further back
need_base_image_lsn = None;
break;
} else {
need_base_image_lsn = Some(*entry_lsn);
}
} else {
// No base image, and no WAL record. Huh?
bail!("no page image or WAL record for requested page");
}
}
// release lock on 'page_versions'
}
records.reverse();
// If we needed a base image to apply the WAL records against, we should have found it in memory.
if let Some(lsn) = need_base_image_lsn {
if records.is_empty() {
// no records, and no base image. This can happen if PostgreSQL extends a relation
// but never writes the page.
//
// Would be nice to detect that situation better.
warn!("Page {} blk {} at {} not found", self.rel, blknum, lsn);
return Ok(ZERO_PAGE.clone());
}
bail!(
"No base image found for page {} blk {} at {}/{}",
self.rel,
blknum,
self.timelineid,
lsn
);
}
// If we have a page image, and no WAL, we're all set
if records.is_empty() {
if let Some(img) = page_img {
trace!(
"found page image for blk {} in {} at {}/{}, no WAL redo required",
blknum,
self.rel,
self.timelineid,
lsn
);
Ok(img)
} else {
// FIXME: this ought to be an error?
warn!("Page {} blk {} at {} not found", self.rel, blknum, lsn);
Ok(ZERO_PAGE.clone())
}
} else {
// We need to do WAL redo.
//
// If we don't have a base image, then the oldest WAL record better initialize
// the page
if page_img.is_none() && !records.first().unwrap().will_init {
// FIXME: this ought to be an error?
warn!(
"Base image for page {}/{} at {} not found, but got {} WAL records",
self.rel,
blknum,
lsn,
records.len()
);
Ok(ZERO_PAGE.clone())
} else {
if page_img.is_some() {
trace!("found {} WAL records and a base image for blk {} in {} at {}/{}, performing WAL redo", records.len(), blknum, self.rel, self.timelineid, lsn);
} else {
trace!("found {} WAL records that will init the page for blk {} in {} at {}/{}, performing WAL redo", records.len(), blknum, self.rel, self.timelineid, lsn);
}
let img = walredo_mgr.request_redo(self.rel, blknum, lsn, page_img, records)?;
self.put_page_image(blknum, lsn, img.clone())?;
Ok(img)
}
}
}
/// Get size of the relation at given LSN
fn get_relish_size(&self, lsn: Lsn) -> Result<Option<u32>> {
// Scan the BTreeMap backwards, starting from the given entry.
let inner = self.inner.lock().unwrap();
let mut iter = inner.relsizes.range((Included(&Lsn(0)), Included(&lsn)));
if let Some((_entry_lsn, entry)) = iter.next_back() {
let result = *entry;
drop(inner);
trace!("get_relish_size: {} at {} -> {}", self.rel, lsn, result);
Ok(Some(result))
} else {
Ok(None)
}
}
/// Does this relation exist at given LSN?
fn get_rel_exists(&self, lsn: Lsn) -> Result<bool> {
let inner = self.inner.lock().unwrap();
// Is the requested LSN after the rel was dropped?
if let Some(drop_lsn) = inner.drop_lsn {
if lsn >= drop_lsn {
return Ok(false);
}
}
// Otherwise, it exists
Ok(true)
}
// Write operations
/// Common subroutine of the public put_wal_record() and put_page_image() functions.
/// Adds the page version to the in-memory tree
fn put_page_version(&self, blknum: u32, lsn: Lsn, pv: PageVersion) -> Result<()> {
trace!(
"put_page_version blk {} of {} at {}/{}",
blknum,
self.rel,
self.timelineid,
lsn
);
let mut inner = self.inner.lock().unwrap();
let old = inner.page_versions.insert((blknum, lsn), pv);
if old.is_some() {
// We already had an entry for this LSN. That's odd..
warn!(
"Page version of rel {:?} blk {} at {} already exists",
self.rel, blknum, lsn
);
}
// Also update the relation size, if this extended the relation.
if self.rel.is_blocky() {
let mut iter = inner.relsizes.range((Included(&Lsn(0)), Included(&lsn)));
let oldsize;
if let Some((_entry_lsn, entry)) = iter.next_back() {
oldsize = *entry;
} else {
oldsize = 0;
//bail!("No old size found for {} at {}", self.tag, lsn);
}
if blknum >= oldsize {
trace!(
"enlarging relation {} from {} to {} blocks at {}",
self.rel,
oldsize,
blknum + 1,
lsn
);
inner.relsizes.insert(lsn, blknum + 1);
}
}
Ok(())
}
/// Remember that the relation was truncated at given LSN
fn put_truncation(&self, lsn: Lsn, relsize: u32) -> anyhow::Result<()> {
let mut inner = self.inner.lock().unwrap();
let old = inner.relsizes.insert(lsn, relsize);
if old.is_some() {
// We already had an entry for this LSN. That's odd..
warn!("Inserting truncation, but had an entry for the LSN already");
}
Ok(())
}
/// Remember that the relation was dropped at given LSN
fn put_unlink(&self, lsn: Lsn) -> anyhow::Result<()> {
let mut inner = self.inner.lock().unwrap();
assert!(inner.drop_lsn.is_none());
inner.drop_lsn = Some(lsn);
info!("dropped relation {} at {}", self.rel, lsn);
Ok(())
}
///
/// Write the this in-memory layer to disk, as a snapshot layer.
///
/// The cutoff point for the layer that's written to disk is 'end_lsn'.
///
/// Returns new layers that replace this one. Always returns a
/// SnapshotLayer containing the page versions that were written to disk,
/// but if there were page versions newer than 'end_lsn', also return a new
/// in-memory layer containing those page versions. The caller replaces
/// this layer with the returned layers in the layer map.
///
fn freeze(
&self,
cutoff_lsn: Lsn,
walredo_mgr: &dyn WalRedoManager,
) -> Result<Vec<Arc<dyn Layer>>> {
info!(
"freezing in memory layer for {} on timeline {} at {}",
self.rel, self.timelineid, cutoff_lsn
);
let inner = self.inner.lock().unwrap();
// Normally, use the cutoff LSN as the end of the frozen layer.
// But if the relation was dropped, we know that there are no
// more changes coming in for it, and in particular we know that
// there are no changes "in flight" for the LSN anymore, so we use
// the drop LSN instead. The drop-LSN could be ahead of the
// caller-specified LSN!
let dropped = inner.drop_lsn.is_some();
let end_lsn =
if dropped {
inner.drop_lsn.unwrap()
} else {
cutoff_lsn
};
// Divide all the page versions into old and new at the 'end_lsn' cutoff point.
let mut before_page_versions;
let mut before_relsizes;
let mut after_page_versions;
let mut after_relsizes;
if !dropped {
before_relsizes = BTreeMap::new();
after_relsizes = BTreeMap::new();
for (lsn, size) in inner.relsizes.iter() {
if *lsn > end_lsn {
after_relsizes.insert(*lsn, *size);
} else {
before_relsizes.insert(*lsn, *size);
}
}
before_page_versions = BTreeMap::new();
after_page_versions = BTreeMap::new();
for ((blknum, lsn), pv) in inner.page_versions.iter() {
if *lsn > end_lsn {
after_page_versions.insert((*blknum, *lsn), pv.clone());
} else {
before_page_versions.insert((*blknum, *lsn), pv.clone());
}
}
} else {
before_page_versions = inner.page_versions.clone();
before_relsizes = inner.relsizes.clone();
after_relsizes = BTreeMap::new();
after_page_versions = BTreeMap::new();
}
// we can release the lock now.
drop(inner);
// Write the page versions before the cutoff to disk.
let snapfile = SnapshotLayer::create(
self.conf,
self.timelineid,
self.tenantid,
self.rel,
self.start_lsn,
end_lsn,
dropped,
before_page_versions,
before_relsizes,
)?;
let mut result: Vec<Arc<dyn Layer>> = Vec::new();
// If there were any page versions after the cutoff, initialize a new in-memory layer
// to hold them
if !after_relsizes.is_empty() || !after_page_versions.is_empty() {
info!("created new in-mem layer for {} {}-", self.rel, end_lsn);
let new_layer = Self::copy_snapshot(
self.conf,
walredo_mgr,
&snapfile,
self.timelineid,
self.tenantid,
end_lsn,
)?;
let mut new_inner = new_layer.inner.lock().unwrap();
new_inner.page_versions.append(&mut after_page_versions);
new_inner.relsizes.append(&mut after_relsizes);
drop(new_inner);
result.push(Arc::new(new_layer));
}
result.push(Arc::new(snapfile));
Ok(result)
}
fn delete(&self) -> Result<()> {
// Nothing to do. When the reference is dropped, the memory is released.
Ok(())
}
fn unload(&self) -> Result<()> {
// cannot unload in-memory layer. Freeze instead
Ok(())
}
}
impl InMemoryLayer {
///
/// Create a new, empty, in-memory layer
///
pub fn create(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
rel: RelishTag,
start_lsn: Lsn,
) -> Result<InMemoryLayer> {
trace!(
"initializing new empty InMemoryLayer for writing {} on timeline {} at {}",
rel,
timelineid,
start_lsn
);
Ok(InMemoryLayer {
conf,
timelineid,
tenantid,
rel,
start_lsn,
inner: Mutex::new(InMemoryLayerInner {
drop_lsn: None,
page_versions: BTreeMap::new(),
relsizes: BTreeMap::new(),
}),
})
}
///
/// Initialize a new InMemoryLayer by copying the state at the given
/// point in time from the given existing layer.
///
pub fn copy_snapshot(
conf: &'static PageServerConf,
walredo_mgr: &dyn WalRedoManager,
src: &dyn Layer,
timelineid: ZTimelineId,
tenantid: ZTenantId,
lsn: Lsn,
) -> Result<InMemoryLayer> {
trace!(
"initializing new InMemoryLayer for writing {} on timeline {} at {}",
src.get_relish_tag(),
timelineid,
lsn
);
let mut page_versions = BTreeMap::new();
let mut relsizes = BTreeMap::new();
let size;
if src.get_relish_tag().is_blocky() {
if let Some(sz) = src.get_relish_size(lsn)? {
relsizes.insert(lsn, sz);
size = sz;
} else {
bail!("no size found or {} at {}", src.get_relish_tag(), lsn);
}
} else {
size = 1;
}
for blknum in 0..size {
let img = src.get_page_at_lsn(walredo_mgr, blknum, lsn)?;
let pv = PageVersion {
page_image: Some(img),
record: None,
};
page_versions.insert((blknum, lsn), pv);
}
Ok(InMemoryLayer {
conf,
timelineid,
tenantid,
rel: src.get_relish_tag(),
start_lsn: lsn,
inner: Mutex::new(InMemoryLayerInner {
drop_lsn: None,
page_versions: page_versions,
relsizes: relsizes,
}),
})
}
/// debugging function to print out the contents of the layer
#[allow(unused)]
pub fn dump(&self) -> String {
let mut result = format!(
"----- inmemory layer for {} {}-> ----\n",
self.rel, self.start_lsn
);
let inner = self.inner.lock().unwrap();
for (k, v) in inner.relsizes.iter() {
result += &format!("{}: {}\n", k, v);
}
for (k, v) in inner.page_versions.iter() {
result += &format!(
"blk {} at {}: {}/{}\n",
k.0,
k.1,
v.page_image.is_some(),
v.record.is_some()
);
}
result
}
}


@@ -0,0 +1,132 @@
//!
//! The layer map tracks what layers exist for all the relations in a timeline.
//!
//! When the timeline is first accessed, the server lists all the snapshot files
//! in the timelines/<timelineid> directory, and populates this map with
//! SnapshotLayers corresponding to each file. When new WAL is received,
//! we create InMemoryLayers to hold the incoming records. Now and then,
//! in the checkpoint() function, the in-memory layers are frozen, forming
//! new snapshot layers and corresponding files are written to disk.
//!
use crate::layered_repository::storage_layer::Layer;
use crate::relish::*;
use anyhow::Result;
use log::*;
use std::collections::BTreeMap;
use std::collections::HashSet;
use std::ops::Bound::Included;
use std::sync::Arc;
use zenith_utils::lsn::Lsn;
/// LayerMap is a BTreeMap keyed by RelishTag and the layer's start LSN.
/// It provides a couple of convenience functions over a plain BTreeMap
pub struct LayerMap {
pub inner: BTreeMap<(RelishTag, Lsn), Arc<dyn Layer>>,
}
impl LayerMap {
///
/// Look up using the given rel tag and LSN. This differs from a plain
/// key-value lookup in that if there is any layer that covers the
/// given LSN, or precedes the given LSN, it is returned. In other words,
/// you don't need to know the exact start LSN of the layer.
///
pub fn get(&self, tag: RelishTag, lsn: Lsn) -> Option<Arc<dyn Layer>> {
let startkey = (tag, Lsn(0));
let endkey = (tag, lsn);
if let Some((_k, v)) = self
.inner
.range((Included(startkey), Included(endkey)))
.next_back()
{
Some(Arc::clone(v))
} else {
None
}
}
pub fn insert(&mut self, layer: Arc<dyn Layer>) {
let rel = layer.get_relish_tag();
let start_lsn = layer.get_start_lsn();
self.inner.insert((rel, start_lsn), Arc::clone(&layer));
}
pub fn remove(&mut self, layer: &dyn Layer) {
let rel = layer.get_relish_tag();
let start_lsn = layer.get_start_lsn();
self.inner.remove(&(rel, start_lsn));
}
pub fn list_rels(&self, spcnode: u32, dbnode: u32) -> Result<HashSet<RelTag>> {
let mut rels: HashSet<RelTag> = HashSet::new();
// Scan the timeline directory to get all rels in this timeline.
for ((rel, _lsn), _l) in self.inner.iter() {
if let RelishTag::Relation(reltag) = rel {
// FIXME: skip if it was dropped before the requested LSN. But there is no
// LSN argument
if (spcnode == 0 || reltag.spcnode == spcnode)
&& (dbnode == 0 || reltag.dbnode == dbnode)
{
rels.insert(*reltag);
}
}
}
Ok(rels)
}
pub fn list_nonrels(&self, _lsn: Lsn) -> Result<HashSet<RelishTag>> {
let mut rels: HashSet<RelishTag> = HashSet::new();
// Scan the timeline directory to get all rels in this timeline.
for ((rel, _lsn), _l) in self.inner.iter() {
// FIXME: skip if it was dropped before the requested LSN.
if let RelishTag::Relation(_) = rel {
} else {
rels.insert(*rel);
}
}
Ok(rels)
}
/// Is there a newer layer for given relation?
pub fn newer_layer_exists(&self, rel: RelishTag, lsn: Lsn) -> bool {
let startkey = (rel, lsn);
let endkey = (rel, Lsn(u64::MAX));
for ((_rel, newer_lsn), layer) in self.inner.range((Included(startkey), Included(endkey))) {
if layer.get_end_lsn() > lsn {
trace!(
"found later layer for rel {}, {} {}-{}",
rel,
lsn,
newer_lsn,
layer.get_end_lsn()
);
return true;
} else {
trace!(
"found singleton layer for rel {}, {} {}",
rel, lsn, newer_lsn
);
continue;
}
}
trace!("no later layer found for rel {}, {}", rel, lsn);
false
}
}
impl Default for LayerMap {
fn default() -> Self {
LayerMap {
inner: BTreeMap::new(),
}
}
}


@@ -0,0 +1,631 @@
//!
//! A SnapshotLayer represents one snapshot file on disk. One file holds all page
//! versions and size information of one relation, in a range of LSNs.
//! The name "snapshot file" is a bit of a misnomer because a snapshot file doesn't
//! contain a snapshot at a specific LSN, but rather all the page versions in a range
//! of LSNs.
//!
//! Currently, a snapshot file contains full information needed to reconstruct any
//! page version in the LSN range, without consulting any other snapshot files. When
//! a new snapshot file is created for writing, the full contents of relation are
//! materialized as it is at the beginning of the LSN range. That can be very expensive,
//! we should find a way to store differential files. But this keeps the read-side
//! of things simple. You can find the correct snapshot file based on RelishTag and
//! timeline+LSN, and once you've located it, you have all the data you need in that
//! file.
//!
//! When a snapshot file needs to be accessed, we slurp the whole file into memory, into
//! the SnapshotLayer struct. See load() and unload() functions.
//!
//! On disk, the snapshot files are stored in timelines/<timelineid> directory.
//! Currently, there are no subdirectories, and each snapshot file is named like this:
//!
//! <spcnode>_<dbnode>_<relnode>_<forknum>_<start LSN>_<end LSN>
//!
//! For example:
//!
//! 1663_13990_2609_0_000000000169C348_000000000169C349
//!
//! If a relation is dropped, we add a '_DROPPED' to the end of the filename to indicate that.
//! So the above example would become:
//!
//! 1663_13990_2609_0_000000000169C348_000000000169C349_DROPPED
//!
//! In that case, the end LSN indicates when the relish was dropped; the drop LSN is not
//! stored in the file contents in any way.
//!
//! A snapshot file is constructed using the 'bookfile' crate. Each file consists of two
//! parts: the page versions and the relation sizes. They are stored as separate chapters.
//!
use crate::layered_repository::storage_layer::Layer;
use crate::layered_repository::storage_layer::PageVersion;
use crate::layered_repository::storage_layer::ZERO_PAGE;
use crate::relish::*;
use crate::repository::WALRecord;
use crate::walredo::WalRedoManager;
use crate::PageServerConf;
use crate::{ZTenantId, ZTimelineId};
use anyhow::{bail, Result};
use bytes::Bytes;
use log::*;
use std::collections::BTreeMap;
use std::fmt;
use std::fs;
use std::fs::File;
use std::io::Write;
use std::ops::Bound::Included;
use std::path::PathBuf;
use std::sync::{Arc, Mutex, MutexGuard};
use bookfile::{Book, BookWriter};
use zenith_utils::bin_ser::BeSer;
use zenith_utils::lsn::Lsn;
// Magic constant to identify a Zenith snapshot file
static SNAPSHOT_FILE_MAGIC: u32 = 0x5A616E01;
static PAGE_VERSIONS_CHAPTER: u64 = 1;
static REL_SIZES_CHAPTER: u64 = 2;
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
struct SnapshotFileName {
rel: RelishTag,
start_lsn: Lsn,
end_lsn: Lsn,
dropped: bool,
}
impl SnapshotFileName {
fn from_str(fname: &str) -> Option<Self> {
// Split the filename into parts
//
// <spcnode>_<dbnode>_<relnode>_<forknum>_<start LSN>_<end LSN>
//
// or if it was dropped:
//
// <spcnode>_<dbnode>_<relnode>_<forknum>_<start LSN>_<end LSN>_DROPPED
//
let rel;
let mut parts;
if let Some(rest) = fname.strip_prefix("rel_") {
parts = rest.split('_');
rel = RelishTag::Relation(RelTag {
spcnode: parts.next()?.parse::<u32>().ok()?,
dbnode: parts.next()?.parse::<u32>().ok()?,
relnode: parts.next()?.parse::<u32>().ok()?,
forknum: parts.next()?.parse::<u8>().ok()?,
});
} else if let Some(rest) = fname.strip_prefix("pg_xact_") {
parts = rest.split('_');
rel = RelishTag::Slru {
slru: SlruKind::Clog,
segno: u32::from_str_radix(parts.next()?, 16).ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_multixact_members_") {
parts = rest.split('_');
rel = RelishTag::Slru {
slru: SlruKind::MultiXactMembers,
segno: u32::from_str_radix(parts.next()?, 16).ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_multixact_offsets_") {
parts = rest.split('_');
rel = RelishTag::Slru {
slru: SlruKind::MultiXactOffsets,
segno: u32::from_str_radix(parts.next()?, 16).ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_filenodemap_") {
parts = rest.split('_');
rel = RelishTag::FileNodeMap {
spcnode: parts.next()?.parse::<u32>().ok()?,
dbnode: parts.next()?.parse::<u32>().ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_twophase_") {
parts = rest.split('_');
rel = RelishTag::TwoPhase {
xid: parts.next()?.parse::<u32>().ok()?,
};
} else if let Some(rest) = fname.strip_prefix("pg_control_checkpoint_") {
parts = rest.split('_');
rel = RelishTag::Checkpoint;
} else if let Some(rest) = fname.strip_prefix("pg_control_") {
parts = rest.split('_');
rel = RelishTag::ControlFile;
} else {
return None;
}
let start_lsn = Lsn::from_hex(parts.next()?).ok()?;
let end_lsn = Lsn::from_hex(parts.next()?).ok()?;
let mut dropped = false;
if let Some(suffix) = parts.next() {
if suffix == "DROPPED" {
dropped = true;
} else {
warn!("unrecognized filename in timeline dir: {}", fname);
return None;
}
}
if parts.next().is_some() {
warn!("unrecognized filename in timeline dir: {}", fname);
return None;
}
Some(SnapshotFileName {
rel,
start_lsn,
end_lsn,
dropped,
})
}
fn to_string(&self) -> String {
let basename = match self.rel {
RelishTag::Relation(reltag) => format!(
"rel_{}_{}_{}_{}",
reltag.spcnode, reltag.dbnode, reltag.relnode, reltag.forknum
),
RelishTag::Slru {
slru: SlruKind::Clog,
segno,
} => format!("pg_xact_{:04X}", segno),
RelishTag::Slru {
slru: SlruKind::MultiXactMembers,
segno,
} => format!("pg_multixact_members_{:04X}", segno),
RelishTag::Slru {
slru: SlruKind::MultiXactOffsets,
segno,
} => format!("pg_multixact_offsets_{:04X}", segno),
RelishTag::FileNodeMap { spcnode, dbnode } => {
format!("pg_filenodemap_{}_{}", spcnode, dbnode)
}
RelishTag::TwoPhase { xid } => format!("pg_twophase_{}", xid),
RelishTag::Checkpoint => format!("pg_control_checkpoint"),
RelishTag::ControlFile => format!("pg_control"),
};
format!(
"{}_{:016X}_{:016X}{}",
basename,
u64::from(self.start_lsn),
u64::from(self.end_lsn),
if self.dropped { "_DROPPED" } else { "" }
)
}
}
impl fmt::Display for SnapshotFileName {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.to_string())
}
}
///
/// SnapshotLayer is the in-memory data structure associated with an
/// on-disk snapshot file. We keep a SnapshotLayer in memory for each
/// file, in the LayerMap. If a layer is in "loaded" state, we have a
/// copy of the file in memory, in 'inner'. Otherwise the struct is
/// just a placeholder for a file that exists on disk, and it needs to
/// be loaded before using it in queries.
///
pub struct SnapshotLayer {
conf: &'static PageServerConf,
pub tenantid: ZTenantId,
pub timelineid: ZTimelineId,
pub rel: RelishTag,
//
// This entry contains all the changes from 'start_lsn' to 'end_lsn'. The
// start is inclusive, and end is exclusive.
pub start_lsn: Lsn,
pub end_lsn: Lsn,
dropped: bool,
inner: Mutex<SnapshotLayerInner>,
}
pub struct SnapshotLayerInner {
/// If false, the 'page_versions' and 'relsizes' have not been
/// loaded into memory yet.
loaded: bool,
/// All versions of all pages in the file are are kept here.
/// Indexed by block number and LSN.
page_versions: BTreeMap<(u32, Lsn), PageVersion>,
/// `relsizes` tracks the size of the relation at different points in time.
relsizes: BTreeMap<Lsn, u32>,
}
impl Layer for SnapshotLayer {
fn is_frozen(&self) -> bool {
return true;
}
fn get_timeline_id(&self) -> ZTimelineId {
return self.timelineid;
}
fn get_relish_tag(&self) -> RelishTag {
return self.rel;
}
fn is_dropped(&self) -> bool {
return self.dropped;
}
fn get_start_lsn(&self) -> Lsn {
return self.start_lsn;
}
fn get_end_lsn(&self) -> Lsn {
return self.end_lsn;
}
/// Look up given page in the cache.
fn get_page_at_lsn(
&self,
walredo_mgr: &dyn WalRedoManager,
blknum: u32,
lsn: Lsn,
) -> Result<Bytes> {
// Scan the BTreeMap backwards, starting from the given entry.
let mut records: Vec<WALRecord> = Vec::new();
let mut page_img: Option<Bytes> = None;
let mut need_base_image_lsn: Option<Lsn> = Some(lsn);
{
let inner = self.load()?;
let minkey = (blknum, Lsn(0));
let maxkey = (blknum, lsn);
let mut iter = inner
.page_versions
.range((Included(&minkey), Included(&maxkey)));
while let Some(((_blknum, entry_lsn), entry)) = iter.next_back() {
if let Some(img) = &entry.page_image {
page_img = Some(img.clone());
need_base_image_lsn = None;
break;
} else if let Some(rec) = &entry.record {
records.push(rec.clone());
if rec.will_init {
// This WAL record initializes the page, so no need to go further back
need_base_image_lsn = None;
break;
} else {
need_base_image_lsn = Some(*entry_lsn);
}
} else {
// No base image, and no WAL record. Huh?
bail!("no page image or WAL record for requested page");
}
}
// release lock on 'inner'
}
records.reverse();
// If we needed a base image to apply the WAL records against, we should have found it in memory.
if let Some(lsn) = need_base_image_lsn {
if records.is_empty() {
// no records, and no base image. This can happen if PostgreSQL extends a relation
// but never writes the page.
//
// Would be nice to detect that situation better.
warn!("Page {} blk {} at {} not found", self.rel, blknum, lsn);
return Ok(ZERO_PAGE.clone());
}
bail!(
"No base image found for page {} blk {} at {}/{}",
self.rel,
blknum,
self.timelineid,
lsn
);
}
// If we have a page image, and no WAL, we're all set
if records.is_empty() {
if let Some(img) = page_img {
trace!(
"found page image for blk {} in {} at {}/{}, no WAL redo required",
blknum,
self.rel,
self.timelineid,
lsn
);
Ok(img)
} else {
// FIXME: this ought to be an error?
warn!("Page {} blk {} at {} not found", self.rel, blknum, lsn);
Ok(ZERO_PAGE.clone())
}
} else {
// We need to do WAL redo.
//
// If we don't have a base image, then the oldest WAL record better initialize
// the page
if page_img.is_none() && !records.first().unwrap().will_init {
// FIXME: this ought to be an error?
warn!(
"Base image for page {} blk {} at {} not found, but got {} WAL records",
self.rel,
blknum,
lsn,
records.len()
);
Ok(ZERO_PAGE.clone())
} else {
if page_img.is_some() {
trace!("found {} WAL records and a base image for blk {} in {} at {}/{}, performing WAL redo", records.len(), blknum, self.rel, self.timelineid, lsn);
} else {
trace!("found {} WAL records that will init the page for blk {} in {} at {}/{}, performing WAL redo", records.len(), blknum, self.rel, self.timelineid, lsn);
}
let img = walredo_mgr.request_redo(self.rel, blknum, lsn, page_img, records)?;
// FIXME: Should we memoize the page image in memory, so that
// we wouldn't need to reconstruct it again, if it's requested again?
//self.put_page_image(blknum, lsn, img.clone())?;
Ok(img)
}
}
}
/// Get size of the relation at given LSN
fn get_relish_size(&self, lsn: Lsn) -> Result<Option<u32>> {
// Scan the BTreeMap backwards, starting from the given entry.
let inner = self.load()?;
let mut iter = inner.relsizes.range((Included(&Lsn(0)), Included(&lsn)));
if let Some((_entry_lsn, entry)) = iter.next_back() {
let result = *entry;
drop(inner);
trace!("get_relsize: {} at {} -> {}", self.rel, lsn, result);
Ok(Some(result))
} else {
Ok(None)
}
}
/// Does this relation exist at given LSN?
fn get_rel_exists(&self, lsn: Lsn) -> Result<bool> {
// Is the requested LSN after the rel was dropped?
if self.dropped && lsn >= self.end_lsn {
return Ok(false);
}
// Otherwise, it exists.
Ok(true)
}
// Unsupported write operations
fn put_page_version(&self, blknum: u32, lsn: Lsn, _pv: PageVersion) -> Result<()> {
panic!(
"cannot modify historical snapshot layer, rel {} blk {} at {}/{}, {}-{}",
self.rel, blknum, self.timelineid, lsn, self.start_lsn, self.end_lsn
);
}
fn put_truncation(&self, _lsn: Lsn, _relsize: u32) -> anyhow::Result<()> {
bail!("cannot modify historical snapshot layer");
}
fn put_unlink(&self, _lsn: Lsn) -> anyhow::Result<()> {
bail!("cannot modify historical snapshot layer");
}
fn freeze(
&self,
_end_lsn: Lsn,
_walredo_mgr: &dyn WalRedoManager,
) -> Result<Vec<Arc<dyn Layer>>> {
bail!("cannot freeze historical snapshot layer");
}
fn delete(&self) -> Result<()> {
// delete underlying file
fs::remove_file(self.path())?;
Ok(())
}
///
/// Release most of the memory used by this layer. If it's accessed again later,
/// it will need to be loaded back.
///
fn unload(&self) -> Result<()> {
let mut inner = self.inner.lock().unwrap();
inner.page_versions = BTreeMap::new();
inner.relsizes = BTreeMap::new();
inner.loaded = false;
Ok(())
}
}
impl SnapshotLayer {
fn path(&self) -> PathBuf {
Self::path_for(
self.conf,
self.timelineid,
self.tenantid,
&SnapshotFileName {
rel: self.rel,
start_lsn: self.start_lsn,
end_lsn: self.end_lsn,
dropped: self.dropped,
},
)
}
fn path_for(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
fname: &SnapshotFileName,
) -> PathBuf {
conf.timeline_path(&timelineid, &tenantid)
.join(fname.to_string())
}
/// Create a new snapshot file, using the given btreemaps containing the page versions and
/// relsizes.
///
/// This is used to write the in-memory layer to disk. The in-memory layer uses the same
/// data structure with two btreemaps as we do, so passing the btreemaps is currently
/// expedient.
pub fn create(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
rel: RelishTag,
start_lsn: Lsn,
end_lsn: Lsn,
dropped: bool,
page_versions: BTreeMap<(u32, Lsn), PageVersion>,
relsizes: BTreeMap<Lsn, u32>,
) -> Result<SnapshotLayer> {
let snapfile = SnapshotLayer {
conf: conf,
timelineid: timelineid,
tenantid: tenantid,
rel: rel,
start_lsn: start_lsn,
end_lsn,
dropped,
inner: Mutex::new(SnapshotLayerInner {
loaded: true,
page_versions: page_versions,
relsizes: relsizes,
}),
};
let inner = snapfile.inner.lock().unwrap();
// Write the in-memory btreemaps into a file
let path = snapfile.path();
// Note: This overwrites any existing file. There shouldn't be any.
// FIXME: throw an error instead?
let file = File::create(&path)?;
let book = BookWriter::new(file, SNAPSHOT_FILE_MAGIC)?;
// Write out page versions
let mut chapter = book.new_chapter(PAGE_VERSIONS_CHAPTER);
let buf = BTreeMap::ser(&inner.page_versions)?;
chapter.write_all(&buf)?;
let book = chapter.close()?;
// and relsizes to separate chapter
let mut chapter = book.new_chapter(REL_SIZES_CHAPTER);
let buf = BTreeMap::ser(&inner.relsizes)?;
chapter.write_all(&buf)?;
let book = chapter.close()?;
book.close()?;
trace!("saved {}", &path.display());
drop(inner);
Ok(snapfile)
}
///
/// Load the contents of the file into memory
///
fn load(&self) -> Result<MutexGuard<SnapshotLayerInner>> {
// quick exit if already loaded
let mut inner = self.inner.lock().unwrap();
if inner.loaded {
return Ok(inner);
}
let path = Self::path_for(
self.conf,
self.timelineid,
self.tenantid,
&SnapshotFileName {
rel: self.rel,
start_lsn: self.start_lsn,
end_lsn: self.end_lsn,
dropped: self.dropped,
},
);
let file = File::open(&path)?;
let book = Book::new(file)?;
let chapter = book.read_chapter(PAGE_VERSIONS_CHAPTER)?;
let page_versions = BTreeMap::des(&chapter)?;
let chapter = book.read_chapter(REL_SIZES_CHAPTER)?;
let relsizes = BTreeMap::des(&chapter)?;
debug!("loaded from {}", &path.display());
*inner = SnapshotLayerInner {
loaded: true,
page_versions,
relsizes,
};
Ok(inner)
}
/// Create SnapshotLayers representing all snapshot files on disk
///
// TODO: returning an Iterator would be more idiomatic
pub fn list_snapshot_files(
conf: &'static PageServerConf,
timelineid: ZTimelineId,
tenantid: ZTenantId,
) -> Result<Vec<Arc<dyn Layer>>> {
let path = conf.timeline_path(&timelineid, &tenantid);
let mut snapfiles: Vec<Arc<dyn Layer>> = Vec::new();
for direntry in fs::read_dir(path)? {
let fname = direntry?.file_name();
let fname = fname.to_str().unwrap();
if let Some(snapfilename) = SnapshotFileName::from_str(fname) {
let snapfile = SnapshotLayer {
conf,
timelineid,
tenantid,
rel: snapfilename.rel,
start_lsn: snapfilename.start_lsn,
end_lsn: snapfilename.end_lsn,
dropped: snapfilename.dropped,
inner: Mutex::new(SnapshotLayerInner {
loaded: false,
page_versions: BTreeMap::new(),
relsizes: BTreeMap::new(),
}),
};
snapfiles.push(Arc::new(snapfile));
}
}
return Ok(snapfiles);
}
/// debugging function to print out the contents of the layer
#[allow(unused)]
pub fn dump(&self) -> String {
let mut result = format!(
"----- snapshot layer for {} {}-{} ----\n",
self.rel, self.start_lsn, self.end_lsn
);
let inner = self.inner.lock().unwrap();
for (k, v) in inner.relsizes.iter() {
result += &format!("{}: {}\n", k, v);
}
//for (k, v) in inner.page_versions.iter() {
// result += &format!("blk {} at {}: {}/{}\n", k.0, k.1, v.page_image.is_some(), v.record.is_some());
//}
result
}
}


@@ -0,0 +1,123 @@
use crate::relish::RelishTag;
use crate::repository::WALRecord;
use crate::walredo::WalRedoManager;
use crate::ZTimelineId;
use anyhow::Result;
use bytes::Bytes;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use zenith_utils::lsn::Lsn;
pub static ZERO_PAGE: Bytes = Bytes::from_static(&[0u8; 8192]);
///
/// Represents a version of a page at a specific LSN. The LSN is the key of the
/// entry in the 'page_versions' hash, it is not duplicated here.
///
/// A page version can be stored as a full page image, or as WAL record that needs
/// to be applied over the previous page version to reconstruct this version.
///
/// It's also possible to have both a WAL record and a page image in the same
/// PageVersion. That happens if page version is originally stored as a WAL record
/// but it is later reconstructed by a GetPage@LSN request by performing WAL
/// redo. The get_page_at_lsn() code will store the reconstructed page image next to
/// the WAL record in that case. TODO: That's pretty accidental, not the result
/// of any grand design. If we want to keep reconstructed page versions around, we
/// probably should have a separate buffer cache so that we could control the
/// replacement policy globally. Or if we keep a reconstructed page image, we
/// could throw away the WAL record.
///
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PageVersion {
/// an 8kb page image
pub page_image: Option<Bytes>,
/// WAL record to get from previous page version to this one.
pub record: Option<WALRecord>,
}
///
/// A Layer holds all page versions for one relish, in a range of LSNs.
/// There are two kinds of layers, in-memory and snapshot layers. In-memory
/// layers are used to ingest incoming WAL, and provide fast access
/// to the recent page versions. Snapshot layers are stored on disk, and
/// are immutable.
///
/// Each layer contains a full snapshot of the relish at the start
/// LSN. In addition to that, it contains WAL (or more page images)
/// needed to reconstruct any page version up to the end LSN.
///
pub trait Layer: Send + Sync {
// These functions identify the relish and the LSN range that this Layer
// holds.
fn get_timeline_id(&self) -> ZTimelineId;
fn get_relish_tag(&self) -> RelishTag;
fn get_start_lsn(&self) -> Lsn;
fn get_end_lsn(&self) -> Lsn;
fn is_dropped(&self) -> bool;
/// Frozen layers are stored on disk and cannot accept new WAL
/// records, whereas an unfrozen layer can still be modified, but is not
/// durable in case of a crash. Snapshot layers are always frozen, and
/// in-memory layers are always unfrozen.
fn is_frozen(&self) -> bool;
// Functions that correspond to the Timeline trait functions.
fn get_page_at_lsn(
&self,
walredo_mgr: &dyn WalRedoManager,
blknum: u32,
lsn: Lsn,
) -> Result<Bytes>;
fn get_relish_size(&self, lsn: Lsn) -> Result<Option<u32>>;
fn get_rel_exists(&self, lsn: Lsn) -> Result<bool>;
fn put_page_version(&self, blknum: u32, lsn: Lsn, pv: PageVersion) -> Result<()>;
fn put_truncation(&self, lsn: Lsn, relsize: u32) -> anyhow::Result<()>;
fn put_unlink(&self, lsn: Lsn) -> anyhow::Result<()>;
/// Remember new page version, as a WAL record over previous version
fn put_wal_record(&self, blknum: u32, rec: WALRecord) -> Result<()> {
self.put_page_version(
blknum,
rec.lsn,
PageVersion {
page_image: None,
record: Some(rec),
},
)
}
/// Remember new page version, as a full page image
fn put_page_image(&self, blknum: u32, lsn: Lsn, img: Bytes) -> Result<()> {
self.put_page_version(
blknum,
lsn,
PageVersion {
page_image: Some(img),
record: None,
},
)
}
///
/// Split off an immutable layer from existing layer.
///
/// Returns new layers that replace this one.
///
fn freeze(&self, end_lsn: Lsn, walredo_mgr: &dyn WalRedoManager)
-> Result<Vec<Arc<dyn Layer>>>;
/// Permanently delete this layer
fn delete(&self) -> Result<()>;
/// Try to release memory used by this layer. This is currently
/// only used by snapshot layers, to free the copy of the file
/// from memory. (TODO: a smarter, more granular caching scheme
/// would be nice)
fn unload(&self) -> Result<()>;
}
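As a hedged illustration of how the trait above is meant to be used (assuming this sits in the same module, so Layer and the file's imports of WalRedoManager, Lsn, Bytes and Result are in scope):

// Hypothetical sketch: write one full page image into an unfrozen layer via
// the default helper above, then read it back. The walredo manager is only
// needed on the read path, to replay WAL records on top of a base image.
fn write_and_read_back(
    layer: &dyn Layer,
    walredo_mgr: &dyn WalRedoManager,
    blknum: u32,
    lsn: Lsn,
    img: Bytes,
) -> Result<Bytes> {
    assert!(!layer.is_frozen(), "frozen layers do not accept new versions");
    layer.put_page_image(blknum, lsn, img)?;
    layer.get_page_at_lsn(walredo_mgr, blknum, lsn)
}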


@@ -1,95 +1,124 @@
use std::fmt;
use std::net::SocketAddr;
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::{ZTenantId, ZTimelineId};
use std::path::PathBuf;
use std::str::FromStr;
use std::time::Duration;
use lazy_static::lazy_static;
use zenith_metrics::{register_int_gauge_vec, IntGaugeVec};
pub mod basebackup;
pub mod branches;
pub mod layered_repository;
pub mod logger;
pub mod object_key;
pub mod object_repository;
pub mod object_store;
pub mod page_cache;
pub mod page_service;
pub mod pg_constants;
pub mod relish;
pub mod repository;
pub mod restore_local_repo;
pub mod tui;
pub mod tui_event;
mod tui_logger;
pub mod rocksdb_storage;
pub mod waldecoder;
pub mod walreceiver;
pub mod walredo;
lazy_static! {
static ref LIVE_CONNECTIONS_COUNT: IntGaugeVec = register_int_gauge_vec!(
"pageserver_live_connections_count",
"Number of live network connections",
&["pageserver_connection_kind"]
)
.expect("failed to define a metric");
}
#[derive(Debug, Clone)]
pub struct PageServerConf {
pub daemonize: bool,
pub interactive: bool,
pub listen_addr: SocketAddr,
pub listen_addr: String,
pub http_endpoint_addr: String,
pub gc_horizon: u64,
pub gc_period: Duration,
pub superuser: String,
// Repository directory, relative to current working directory.
// Normally, the page server changes the current working directory
// to the repository, and 'workdir' is always '.'. But we don't do
// that during unit testing, because the current directory is global
// to the process but different unit tests work on different
// repositories.
pub workdir: PathBuf,
pub pg_distrib_dir: PathBuf,
pub auth_type: AuthType,
pub auth_validation_public_key_path: Option<PathBuf>,
pub repository_format: RepositoryFormat,
}
/// Zenith Timeline ID is a 128-bit random ID.
///
/// Zenith timeline IDs are different from PostgreSQL timeline
/// IDs. They serve a similar purpose though: they differentiate
/// between different "histories" of the same cluster. However,
/// PostgreSQL timeline IDs are a bit cumbersome, because they are only
/// 32-bits wide, and they must be in ascending order in any given
/// timeline history. Those limitations mean that we cannot generate a
/// new PostgreSQL timeline ID by just generating a random number. And
/// that in turn is problematic for the "pull/push" workflow, where you
/// have a local copy of a zenith repository, and you periodically sync
/// the local changes with a remote server. When you work "detached"
/// from the remote server, you cannot create a PostgreSQL timeline ID
/// that's guaranteed to be different from all existing timelines in
/// the remote server. For example, suppose two people each have a clone of
/// the repository on their laptops, and they both create a new branch
/// with a different name. What timeline ID would they assign to their
/// branches? If they pick the same one and later try to push the
/// branches to the same remote server, the branches will get mixed up.
///
/// To avoid those issues, Zenith has its own concept of timelines that
/// is separate from PostgreSQL timelines, and doesn't have those
/// limitations. A zenith timeline is identified by a 128-bit ID, which
/// is usually printed out as a hex string.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct ZTimelineId([u8; 16]);
impl FromStr for ZTimelineId {
type Err = hex::FromHexError;
fn from_str(s: &str) -> Result<ZTimelineId, Self::Err> {
let timelineid = hex::decode(s)?;
let mut buf: [u8; 16] = [0u8; 16];
buf.copy_from_slice(timelineid.as_slice());
Ok(ZTimelineId(buf))
}
#[derive(Debug, Clone, PartialEq)]
pub enum RepositoryFormat {
Layered,
RocksDb,
}
impl ZTimelineId {
pub fn from(b: [u8; 16]) -> ZTimelineId {
ZTimelineId(b)
impl PageServerConf {
//
// Repository paths, relative to workdir.
//
fn tenants_path(&self) -> PathBuf {
self.workdir.join("tenants")
}
pub fn get_from_buf(buf: &mut dyn bytes::Buf) -> ZTimelineId {
let mut arr = [0u8; 16];
buf.copy_to_slice(&mut arr);
ZTimelineId::from(arr)
fn tenant_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenants_path().join(tenantid.to_string())
}
pub fn as_arr(&self) -> [u8; 16] {
self.0
}
}
impl fmt::Display for ZTimelineId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str(&hex::encode(self.0))
}
}
pub fn zenith_repo_dir() -> PathBuf {
// Find repository path
match std::env::var_os("ZENITH_REPO_DIR") {
Some(val) => PathBuf::from(val.to_str().unwrap()),
None => ".zenith".into(),
fn tags_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenant_path(tenantid).join("refs").join("tags")
}
fn tag_path(&self, tag_name: &str, tenantid: &ZTenantId) -> PathBuf {
self.tags_path(tenantid).join(tag_name)
}
fn branches_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenant_path(tenantid).join("refs").join("branches")
}
fn branch_path(&self, branch_name: &str, tenantid: &ZTenantId) -> PathBuf {
self.branches_path(tenantid).join(branch_name)
}
fn timelines_path(&self, tenantid: &ZTenantId) -> PathBuf {
self.tenant_path(tenantid).join("timelines")
}
fn timeline_path(&self, timelineid: &ZTimelineId, tenantid: &ZTenantId) -> PathBuf {
self.timelines_path(tenantid).join(timelineid.to_string())
}
fn ancestor_path(&self, timelineid: &ZTimelineId, tenantid: &ZTenantId) -> PathBuf {
self.timeline_path(timelineid, tenantid).join("ancestor")
}
fn wal_dir_path(&self, timelineid: &ZTimelineId, tenantid: &ZTenantId) -> PathBuf {
self.timeline_path(timelineid, tenantid).join("wal")
}
//
// Postgres distribution paths
//
pub fn pg_bin_dir(&self) -> PathBuf {
self.pg_distrib_dir.join("bin")
}
pub fn pg_lib_dir(&self) -> PathBuf {
self.pg_distrib_dir.join("lib")
}
}
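The path helpers above compose into a fixed on-disk layout; the following standalone sketch (with a hypothetical workdir and made-up tenant/timeline IDs) spells it out:

use std::path::PathBuf;

fn main() {
    // Assumption: workdir is the repository root, e.g. ".zenith".
    let workdir = PathBuf::from(".zenith");
    let tenantid = "3cd50fc10c3e4dc4b52d4f9d56bf17dd"; // hypothetical tenant ID
    let timelineid = "5b0badbb1e80431cb39382baaa1ed7c5"; // hypothetical timeline ID

    // timeline_path() = <workdir>/tenants/<tenantid>/timelines/<timelineid>
    let timeline_path = workdir
        .join("tenants")
        .join(tenantid)
        .join("timelines")
        .join(timelineid);
    // wal_dir_path() and ancestor_path() hang off the timeline directory;
    // branches and tags live under <tenant>/refs/.
    println!("{}", timeline_path.join("wal").display());
    println!("{}", timeline_path.join("ancestor").display());
}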

pageserver/src/logger.rs Normal file

@@ -0,0 +1,45 @@
use crate::PageServerConf;
use anyhow::{Context, Result};
use slog::{Drain, FnValue};
use std::fs::{File, OpenOptions};
pub fn init_logging(
_conf: &PageServerConf,
log_filename: &str,
) -> Result<(slog_scope::GlobalLoggerGuard, File)> {
// Don't open the same file for output multiple times;
// the different fds could overwrite each other's output.
let log_file = OpenOptions::new()
.create(true)
.append(true)
.open(&log_filename)
.with_context(|| format!("failed to open {:?}", &log_filename))?;
let logger_file = log_file.try_clone().unwrap();
let decorator = slog_term::PlainSyncDecorator::new(logger_file);
let drain = slog_term::FullFormat::new(decorator).build();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
false
});
let drain = std::sync::Mutex::new(drain).fuse();
let logger = slog::Logger::root(
drain,
slog::o!(
"location" =>
FnValue(move |record| {
format!("{}, {}:{}",
record.module(),
record.file(),
record.line()
)
}
)
),
);
Ok((slog_scope::set_global_logger(logger), log_file))
}
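A brief usage sketch for the function above; the log file name is hypothetical, and the important point is that the returned guard must stay alive for logging to keep working:

use anyhow::Result;
use slog_scope::GlobalLoggerGuard;

use crate::logger::init_logging;
use crate::PageServerConf;

// Hypothetical caller: set up logging once at startup and hand the guard back
// to main(), which holds it until shutdown.
fn start_logging(conf: &'static PageServerConf) -> Result<GlobalLoggerGuard> {
    let (scope_guard, _log_file) = init_logging(conf, "pageserver.log")?;
    // Dropping the guard would tear down the global slog logger, so return it.
    Ok(scope_guard)
}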


@@ -0,0 +1,49 @@
//!
//! Common structs shared by object_repository.rs and object_store.rs.
//!
use crate::relish::RelishTag;
use serde::{Deserialize, Serialize};
use zenith_utils::zid::ZTimelineId;
///
/// ObjectKey is the key type used to identify objects stored in an object
/// repository. It is shared between object_repository.rs and object_store.rs.
/// It is mostly opaque to ObjectStore, it just stores and retrieves objects
/// using the key given by the caller.
///
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ObjectKey {
pub timeline: ZTimelineId,
pub tag: ObjectTag,
}
///
/// ObjectTag is a part of ObjectKey that is specific to the type of
/// the stored object.
///
/// NB: the order of the enum values is significant! In particular,
/// rocksdb_storage.rs assumes that TimelineMetadataTag is first
///
/// Buffer is the kind of object that is accessible by the public
/// get_page_at_lsn() / put_page_image() / put_wal_record() functions in
/// the repository.rs interface. The rest are internal objects stored in
/// the key-value store, to store various metadata. They're not directly
/// accessible outside object_repository.rs
///
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum ObjectTag {
// dummy tag preceding all other keys
FirstTag,
// Metadata about a timeline. Not versioned.
TimelineMetadataTag,
// These objects store metadata about one relish. Currently it's used
// just to track the relish's size. It's not used for non-blocky relishes
// at all.
RelationMetadata(RelishTag),
// These are the pages exposed in the public Repository/Timeline interface.
Buffer(RelishTag, u32),
}
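For illustration, a small sketch (assuming it sits in the pageserver crate) of how the two most common keys for one relation are built from these types:

use crate::object_key::{ObjectKey, ObjectTag};
use crate::relish::{RelTag, RelishTag};
use zenith_utils::zid::ZTimelineId;

// Hypothetical helper: the per-relish metadata key and the key for one page.
fn keys_for_block(timeline: ZTimelineId, rel: RelTag, blknum: u32) -> (ObjectKey, ObjectKey) {
    let relish = RelishTag::Relation(rel);
    let meta_key = ObjectKey {
        timeline: timeline.clone(),
        tag: ObjectTag::RelationMetadata(relish),
    };
    let page_key = ObjectKey {
        timeline,
        tag: ObjectTag::Buffer(relish, blknum),
    };
    (meta_key, page_key)
}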

File diff suppressed because it is too large.


@@ -0,0 +1,92 @@
//! Low-level key-value storage abstraction.
//!
use crate::object_key::*;
use crate::relish::*;
use anyhow::Result;
use std::collections::HashSet;
use std::iter::Iterator;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTimelineId;
///
/// Low-level storage abstraction.
///
/// All the data in the repository is stored in a key-value store. This trait
/// abstracts the details of the key-value store.
///
/// A simple key-value store would support just GET and PUT operations with
/// a key, but the upper layer needs slightly more complicated read operations.
///
/// The most frequently used function is 'object_versions'. It is used
/// to look up a page version. It is LSN aware, in that the caller
/// specifies an LSN, and the function returns all values for that
/// block with the same or older LSN.
///
pub trait ObjectStore: Send + Sync {
///
/// Store a value with given key.
///
fn put(&self, key: &ObjectKey, lsn: Lsn, value: &[u8]) -> Result<()>;
/// Read entry with the exact given key.
///
/// This is used for retrieving metadata with a special key that doesn't
/// correspond to any real relation.
fn get(&self, key: &ObjectKey, lsn: Lsn) -> Result<Vec<u8>>;
/// Read key greater or equal than specified
fn get_next_key(&self, key: &ObjectKey) -> Result<Option<ObjectKey>>;
/// Iterate through all page versions of one object.
///
/// Returns all page versions in descending LSN order, along with the LSN
/// of each page version.
fn object_versions<'a>(
&'a self,
key: &ObjectKey,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = (Lsn, Vec<u8>)> + 'a>>;
/// Iterate through versions of all objects in a timeline.
///
/// Returns objects in increasing key-version order.
/// Returns all versions up to and including the specified LSN.
fn objects<'a>(
&'a self,
timeline: ZTimelineId,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = Result<(ObjectTag, Lsn, Vec<u8>)>> + 'a>>;
/// Iterate through all keys with given tablespace and database ID, and LSN <= 'lsn'.
/// Both dbnode and spcnode can be InvalidId (0), which means: get all relations in the tablespace/cluster.
///
/// This is used to implement 'create database'
fn list_rels(
&self,
timelineid: ZTimelineId,
spcnode: u32,
dbnode: u32,
lsn: Lsn,
) -> Result<HashSet<RelTag>>;
/// Iterate through non-rel relishes
///
/// This is used to prepare tarball for new node startup.
fn list_nonrels<'a>(&'a self, timelineid: ZTimelineId, lsn: Lsn) -> Result<HashSet<RelishTag>>;
/// Iterate through object tags. If nonrel_only, then only non-relational data is iterated.
///
/// This is used to implement GC and preparing tarball for new node startup
/// Returns objects in increasing key-version order.
fn list_objects<'a>(
&'a self,
timelineid: ZTimelineId,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = ObjectTag> + 'a>>;
/// Unlink object (used by GC). This method may actually delete the object or just mark it for deletion.
fn unlink(&self, key: &ObjectKey, lsn: Lsn) -> Result<()>;
// Compact storage and remove versions marked for deletion
fn compact(&self);
}
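A minimal sketch of the read path described above, assuming it lives in the pageserver crate:

use anyhow::Result;
use zenith_utils::lsn::Lsn;

use crate::object_key::ObjectKey;
use crate::object_store::ObjectStore;

// Hypothetical helper: find the newest version of one object at or below
// 'lsn'. object_versions() yields versions in descending LSN order, so the
// first item (if any) is the one to use.
fn newest_version_at(
    store: &dyn ObjectStore,
    key: &ObjectKey,
    lsn: Lsn,
) -> Result<Option<(Lsn, Vec<u8>)>> {
    let mut versions = store.object_versions(key, lsn)?;
    Ok(versions.next())
}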


@@ -1,920 +1,91 @@
//
// Page Cache holds all the different page versions and WAL records
//
// The Page Cache is currently using RocksDB for storing WAL records and full page images, keyed by the RelFileNode, block number, and the LSN.
//
//! This module acts as a switchboard to access different repositories managed by this
//! page server.
use crate::restore_local_repo::restore_timeline;
use crate::ZTimelineId;
use crate::{walredo, zenith_repo_dir, PageServerConf};
use anyhow::{bail, Context};
use bytes::{Buf, BufMut, Bytes, BytesMut};
use crossbeam_channel::unbounded;
use crossbeam_channel::{Receiver, Sender};
use crate::branches;
use crate::layered_repository::LayeredRepository;
use crate::object_repository::ObjectRepository;
use crate::repository::Repository;
use crate::rocksdb_storage::RocksObjectStore;
use crate::walredo::PostgresRedoManager;
use crate::{PageServerConf, RepositoryFormat};
use anyhow::{anyhow, bail, Result};
use lazy_static::lazy_static;
use log::*;
use rocksdb;
use std::cmp::min;
use log::info;
use std::collections::HashMap;
use std::sync::atomic::Ordering;
use std::sync::atomic::{AtomicU64};
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;
use std::{convert::TryInto, ops::AddAssign};
use zenith_utils::seqwait::SeqWait;
// Timeout when waiting for the WAL receiver to catch up to an LSN given in a GetPage@LSN call.
static TIMEOUT: Duration = Duration::from_secs(60);
pub struct PageCache {
shared: Mutex<PageCacheShared>,
// RocksDB handle
db: rocksdb::DB,
// Channel for communicating with the WAL redo process here.
pub walredo_sender: Sender<Arc<CacheEntry>>,
pub walredo_receiver: Receiver<Arc<CacheEntry>>,
// Allows .await on the arrival of a particular LSN.
seqwait_lsn: SeqWait,
// Counters, for metrics collection.
pub num_entries: AtomicU64,
pub num_page_images: AtomicU64,
pub num_wal_records: AtomicU64,
pub num_getpage_requests: AtomicU64,
// copies of shared.first/last_valid_lsn fields (copied here so
// that they can be read without acquiring the mutex).
pub first_valid_lsn: AtomicU64,
pub last_valid_lsn: AtomicU64,
pub last_record_lsn: AtomicU64,
}
#[derive(Clone)]
pub struct PageCacheStats {
pub num_entries: u64,
pub num_page_images: u64,
pub num_wal_records: u64,
pub num_getpage_requests: u64,
pub first_valid_lsn: u64,
pub last_valid_lsn: u64,
pub last_record_lsn: u64,
}
impl AddAssign for PageCacheStats {
fn add_assign(&mut self, other: Self) {
*self = Self {
num_entries: self.num_entries + other.num_entries,
num_page_images: self.num_page_images + other.num_page_images,
num_wal_records: self.num_wal_records + other.num_wal_records,
num_getpage_requests: self.num_getpage_requests + other.num_getpage_requests,
first_valid_lsn: self.first_valid_lsn + other.first_valid_lsn,
last_valid_lsn: self.last_valid_lsn + other.last_valid_lsn,
last_record_lsn: self.last_record_lsn + other.last_record_lsn,
}
}
}
//
// Shared data structure, holding page cache and related auxiliary information
//
struct PageCacheShared {
// What page versions do we hold in the cache? If we get GetPage with
// LSN < first_valid_lsn, that's an error because we (no longer) hold that
// page version. If we get a request > last_valid_lsn, we need to wait until
// we receive all the WAL up to the request.
//
// last_record_lsn points to the end of last processed WAL record.
// It can lag behind last_valid_lsn, if the WAL receiver has received some WAL
// after the end of last record, but not the whole next record yet. In the
// page cache, we care about last_valid_lsn, but if the WAL receiver needs to
// restart the streaming, it needs to restart at the end of last record, so
// we track them separately. last_record_lsn should perhaps be in
// walreceiver.rs instead of here, but it seems convenient to keep all three
// values together.
//
first_valid_lsn: u64,
last_valid_lsn: u64,
last_record_lsn: u64,
}
use std::fs;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use zenith_utils::zid::ZTenantId;
lazy_static! {
pub static ref PAGECACHES: Mutex<HashMap<ZTimelineId, Arc<PageCache>>> =
pub static ref REPOSITORY: Mutex<HashMap<ZTenantId, Arc<dyn Repository>>> =
Mutex::new(HashMap::new());
}
// Get Page Cache for given timeline. It is assumed to already exist.
pub fn get_pagecache(_conf: &PageServerConf, timelineid: ZTimelineId) -> Option<Arc<PageCache>> {
let pcaches = PAGECACHES.lock().unwrap();
pub fn init(conf: &'static PageServerConf) {
let mut m = REPOSITORY.lock().unwrap();
match pcaches.get(&timelineid) {
Some(pcache) => Some(pcache.clone()),
None => None,
}
}
for dir_entry in fs::read_dir(conf.tenants_path()).unwrap() {
let tenantid =
ZTenantId::from_str(dir_entry.unwrap().file_name().to_str().unwrap()).unwrap();
pub fn get_or_restore_pagecache(
conf: &PageServerConf,
timelineid: ZTimelineId,
) -> anyhow::Result<Arc<PageCache>> {
let mut pcaches = PAGECACHES.lock().unwrap();
// Set up a WAL redo manager, for applying WAL records.
let walredo_mgr = PostgresRedoManager::new(conf, tenantid);
match pcaches.get(&timelineid) {
Some(pcache) => Ok(pcache.clone()),
None => {
let pcache = init_page_cache(conf, timelineid);
restore_timeline(conf, &pcache, timelineid)?;
let result = Arc::new(pcache);
pcaches.insert(timelineid, result.clone());
// Initialize the WAL redo thread
//
// Now join_handle is not saved anywhere and we won't restart the thread
// if it is dead. We may later stop these threads after some inactivity period
// and restart them on demand.
let conf_copy = conf.clone();
let _walredo_thread = thread::Builder::new()
.name("WAL redo thread".into())
.spawn(move || {
walredo::wal_redo_main(&conf_copy, timelineid);
})
.unwrap();
if conf.gc_horizon != 0 {
let conf_copy = conf.clone();
let _gc_thread = thread::Builder::new()
.name("Garbage collection thread".into())
.spawn(move || {
gc_thread_main(&conf_copy, timelineid);
})
.unwrap();
// Set up an object repository, for actual data storage.
let repo: Arc<dyn Repository + Sync + Send> = match conf.repository_format {
RepositoryFormat::Layered => {
let repo = Arc::new(LayeredRepository::new(
conf,
Arc::new(walredo_mgr),
tenantid,
));
LayeredRepository::launch_checkpointer_thread(conf, repo.clone());
repo
}
Ok(result)
}
}
}
RepositoryFormat::RocksDb => {
let obj_store = RocksObjectStore::open(conf, &tenantid).unwrap();
fn gc_thread_main(conf: &PageServerConf, timelineid: ZTimelineId) {
info!("Garbage collection thread started {}", timelineid);
let pcache = get_pagecache(conf, timelineid).unwrap();
pcache.do_gc(conf).unwrap();
}
fn open_rocksdb(_conf: &PageServerConf, timelineid: ZTimelineId) -> rocksdb::DB {
let path = zenith_repo_dir().join(timelineid.to_string());
let mut opts = rocksdb::Options::default();
opts.create_if_missing(true);
opts.set_use_fsync(true);
opts.set_compression_type(rocksdb::DBCompressionType::Lz4);
opts.create_missing_column_families(true);
rocksdb::DB::open_cf(&opts, &path, &[rocksdb::DEFAULT_COLUMN_FAMILY_NAME]).unwrap()
}
fn init_page_cache(conf: &PageServerConf, timelineid: ZTimelineId) -> PageCache {
// Initialize the channel between the page cache and the WAL applicator
let (s, r) = unbounded();
PageCache {
db: open_rocksdb(&conf, timelineid),
shared: Mutex::new(PageCacheShared {
first_valid_lsn: 0,
last_valid_lsn: 0,
last_record_lsn: 0,
}),
seqwait_lsn: SeqWait::new(0),
walredo_sender: s,
walredo_receiver: r,
num_entries: AtomicU64::new(0),
num_page_images: AtomicU64::new(0),
num_wal_records: AtomicU64::new(0),
num_getpage_requests: AtomicU64::new(0),
first_valid_lsn: AtomicU64::new(0),
last_valid_lsn: AtomicU64::new(0),
last_record_lsn: AtomicU64::new(0),
}
}
//
// We store two kinds of entries in the page cache:
//
// 1. Ready-made images of the block
// 2. WAL records, to be applied on top of the "previous" entry
//
// Some WAL records will initialize the page from scratch. For such records,
// the 'will_init' flag is set. They don't need the previous page image before
// applying. The 'will_init' flag is set for records containing a full-page image,
// and for records with the BKPBLOCK_WILL_INIT flag. These differ from PageImages
// stored directly in the cache entry in that you still need to run the WAL redo
// routine to generate the page image.
//
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
pub struct CacheKey {
pub tag: BufferTag,
pub lsn: u64,
}
impl CacheKey {
pub fn pack(&self, buf: &mut BytesMut) {
self.tag.pack(buf);
buf.put_u64(self.lsn);
}
pub fn unpack(buf: &mut BytesMut) -> CacheKey {
CacheKey {
tag: BufferTag::unpack(buf),
lsn: buf.get_u64(),
}
}
}
pub struct CacheEntry {
pub key: CacheKey,
pub content: Mutex<CacheEntryContent>,
// Condition variable used by the WAL redo service, to wake up
// requester.
//
// FIXME: this takes quite a lot of space. Consider using parking_lot::Condvar
// or something else.
pub walredo_condvar: Condvar,
}
pub struct CacheEntryContent {
pub page_image: Option<Bytes>,
pub wal_record: Option<WALRecord>,
pub apply_pending: bool,
}
impl CacheEntryContent {
pub fn pack(&self, buf: &mut BytesMut) {
if let Some(image) = &self.page_image {
buf.put_u8(1);
buf.put_u16(image.len() as u16);
buf.put_slice(&image[..]);
} else if let Some(rec) = &self.wal_record {
buf.put_u8(0);
rec.pack(buf);
}
}
pub fn unpack(buf: &mut BytesMut) -> CacheEntryContent {
if buf.get_u8() == 1 {
let mut dst = vec![0u8; buf.get_u16() as usize];
buf.copy_to_slice(&mut dst);
CacheEntryContent {
page_image: Some(Bytes::from(dst)),
wal_record: None,
apply_pending: false,
}
} else {
CacheEntryContent {
page_image: None,
wal_record: Some(WALRecord::unpack(buf)),
apply_pending: false,
}
}
}
}
impl CacheEntry {
fn new(key: CacheKey, content: CacheEntryContent) -> CacheEntry {
CacheEntry {
key,
content: Mutex::new(content),
walredo_condvar: Condvar::new(),
}
}
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Hash, Ord, Clone, Copy)]
pub struct RelTag {
pub spcnode: u32,
pub dbnode: u32,
pub relnode: u32,
pub forknum: u8,
}
impl RelTag {
pub fn pack(&self, buf: &mut BytesMut) {
buf.put_u32(self.spcnode);
buf.put_u32(self.dbnode);
buf.put_u32(self.relnode);
buf.put_u32(self.forknum as u32); // encode forknum as u32 to provide compatibility with wal_redo_postgres
}
pub fn unpack(buf: &mut BytesMut) -> RelTag {
RelTag {
spcnode: buf.get_u32(),
dbnode: buf.get_u32(),
relnode: buf.get_u32(),
forknum: buf.get_u32() as u8,
}
}
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
pub struct BufferTag {
pub rel: RelTag,
pub blknum: u32,
}
impl BufferTag {
pub fn pack(&self, buf: &mut BytesMut) {
self.rel.pack(buf);
buf.put_u32(self.blknum);
}
pub fn unpack(buf: &mut BytesMut) -> BufferTag {
BufferTag {
rel: RelTag::unpack(buf),
blknum: buf.get_u32(),
}
}
}
#[derive(Clone)]
pub struct WALRecord {
pub lsn: u64, // LSN at the *end* of the record
pub will_init: bool,
pub truncate: bool,
pub rec: Bytes,
// Remember the offset of main_data in rec,
// so that we don't have to parse the record again.
// If record has no main_data, this offset equals rec.len().
pub main_data_offset: u32,
}
impl WALRecord {
pub fn pack(&self, buf: &mut BytesMut) {
buf.put_u64(self.lsn);
buf.put_u8(self.will_init as u8);
buf.put_u8(self.truncate as u8);
buf.put_u32(self.main_data_offset);
buf.put_u32(self.rec.len() as u32);
buf.put_slice(&self.rec[..]);
}
pub fn unpack(buf: &mut BytesMut) -> WALRecord {
let lsn = buf.get_u64();
let will_init = buf.get_u8() != 0;
let truncate = buf.get_u8() != 0;
let main_data_offset = buf.get_u32();
let mut dst = vec![0u8; buf.get_u32() as usize];
buf.copy_to_slice(&mut dst);
WALRecord {
lsn,
will_init,
truncate,
rec: Bytes::from(dst),
main_data_offset,
}
}
}
// Public interface functions
impl PageCache {
fn do_gc(&self, conf: &PageServerConf) -> anyhow::Result<Bytes> {
let mut minbuf = BytesMut::new();
let mut maxbuf = BytesMut::new();
let cf = self
.db
.cf_handle(rocksdb::DEFAULT_COLUMN_FAMILY_NAME)
.unwrap();
loop {
thread::sleep(conf.gc_period);
let last_lsn = self.get_last_valid_lsn();
if last_lsn > conf.gc_horizon {
let horizon = last_lsn - conf.gc_horizon;
let mut maxkey = CacheKey {
tag: BufferTag {
rel: RelTag {
spcnode: u32::MAX,
dbnode: u32::MAX,
relnode: u32::MAX,
forknum: u8::MAX,
},
blknum: u32::MAX,
},
lsn: u64::MAX,
};
loop {
maxbuf.clear();
maxkey.pack(&mut maxbuf);
let mut iter = self.db.iterator(rocksdb::IteratorMode::From(
&maxbuf[..],
rocksdb::Direction::Reverse,
));
if let Some((k, v)) = iter.next() {
minbuf.clear();
minbuf.extend_from_slice(&v);
let content = CacheEntryContent::unpack(&mut minbuf);
minbuf.clear();
minbuf.extend_from_slice(&k);
let key = CacheKey::unpack(&mut minbuf);
// Construct boundaries for old records cleanup
maxkey.tag = key.tag;
let last_lsn = key.lsn;
maxkey.lsn = min(horizon, last_lsn); // do not remove last version
let mut minkey = maxkey.clone();
minkey.lsn = 0;
// reconstruct most recent page version
if content.wal_record.is_some() {
trace!("Reconstruct most recent page {:?}", key);
// force reconstruction of most recent page version
self.reconstruct_page(key, content)?;
}
maxbuf.clear();
maxkey.pack(&mut maxbuf);
if last_lsn > horizon {
// locate most recent record before horizon
let mut iter = self.db.iterator(rocksdb::IteratorMode::From(
&maxbuf[..],
rocksdb::Direction::Reverse,
));
if let Some((k, v)) = iter.next() {
minbuf.clear();
minbuf.extend_from_slice(&v);
let content = CacheEntryContent::unpack(&mut minbuf);
if content.wal_record.is_some() {
minbuf.clear();
minbuf.extend_from_slice(&k);
let key = CacheKey::unpack(&mut minbuf);
trace!("Reconstruct horizon page {:?}", key);
self.reconstruct_page(key, content)?;
}
}
}
// remove records prior to horizon
minbuf.clear();
minkey.pack(&mut minbuf);
trace!("Delete records in range {:?}..{:?}", minkey, maxkey);
self.db.delete_range_cf(cf, &minbuf[..], &maxbuf[..])?;
maxkey = minkey;
} else {
break;
}
}
}
}
}
fn reconstruct_page(&self, key: CacheKey, content: CacheEntryContent) -> anyhow::Result<Bytes> {
let entry_rc = Arc::new(CacheEntry::new(key.clone(), content));
let mut entry_content = entry_rc.content.lock().unwrap();
entry_content.apply_pending = true;
let s = &self.walredo_sender;
s.send(entry_rc.clone())?;
while entry_content.apply_pending {
entry_content = entry_rc.walredo_condvar.wait(entry_content).unwrap();
}
// We should now have a page image. If we don't, it means that WAL redo
// failed to reconstruct it. WAL redo should've logged that error already.
let page_img = match &entry_content.page_image {
Some(p) => p.clone(),
None => {
error!("could not apply WAL to reconstruct page image for GetPage@LSN request");
bail!("could not apply WAL to reconstruct page image");
Arc::new(ObjectRepository::new(
conf,
Arc::new(obj_store),
Arc::new(walredo_mgr),
tenantid,
))
}
};
self.put_page_image(key.tag, key.lsn, page_img.clone());
Ok(page_img)
}
async fn wait_lsn(&self, lsn: u64) -> anyhow::Result<()> {
self.seqwait_lsn
.wait_for_timeout(lsn, TIMEOUT)
.await
.with_context(|| {
format!(
"Timed out while waiting for WAL record at LSN {:X}/{:X} to arrive",
lsn >> 32,
lsn & 0xffff_ffff
)
})?;
Ok(())
}
//
// GetPage@LSN
//
// Returns an 8k page image
//
pub async fn get_page_at_lsn(&self, tag: BufferTag, req_lsn: u64) -> anyhow::Result<Bytes> {
self.num_getpage_requests.fetch_add(1, Ordering::Relaxed);
let mut lsn = req_lsn;
//When invalid LSN is requested, it means "don't wait, return latest version of the page"
//This is necessary for bootstrap.
if lsn == 0
{
lsn = self.last_valid_lsn.load(Ordering::Acquire);
trace!(
"walreceiver doesn't work yet last_valid_lsn {}, requested {}",
self.last_valid_lsn.load(Ordering::Acquire),
lsn
);
}
else
{
self.wait_lsn(lsn).await?;
}
// Look up cache entry. If it's a page image, return that. If it's a WAL record,
// ask the WAL redo service to reconstruct the page image from the WAL records.
let minkey = CacheKey { tag, lsn: 0 };
let maxkey = CacheKey { tag, lsn };
let mut buf = BytesMut::new();
minkey.pack(&mut buf);
let mut readopts = rocksdb::ReadOptions::default();
readopts.set_iterate_lower_bound(buf.to_vec());
buf.clear();
maxkey.pack(&mut buf);
let mut iter = self.db.iterator_opt(
rocksdb::IteratorMode::From(&buf[..], rocksdb::Direction::Reverse),
readopts,
);
let entry_opt = iter.next();
if entry_opt.is_none() {
static ZERO_PAGE: [u8; 8192] = [0u8; 8192];
return Ok(Bytes::from_static(&ZERO_PAGE));
/* return Err("could not find page image")?; */
}
let (k, v) = entry_opt.unwrap();
buf.clear();
buf.extend_from_slice(&v);
let content = CacheEntryContent::unpack(&mut buf);
let page_img: Bytes;
if let Some(img) = &content.page_image {
page_img = img.clone();
} else if content.wal_record.is_some() {
buf.clear();
buf.extend_from_slice(&k);
let key = CacheKey::unpack(&mut buf);
page_img = self.reconstruct_page(key, content)?;
} else {
// No base image, and no WAL record. Huh?
bail!("no page image or WAL record for requested page");
}
// FIXME: assumes little-endian. Only used for the debugging log though
let page_lsn_hi = u32::from_le_bytes(page_img.get(0..4).unwrap().try_into().unwrap());
let page_lsn_lo = u32::from_le_bytes(page_img.get(4..8).unwrap().try_into().unwrap());
trace!(
"Returning page with LSN {:X}/{:X} for {}/{}/{}.{} blk {}",
page_lsn_hi,
page_lsn_lo,
tag.rel.spcnode,
tag.rel.dbnode,
tag.rel.relnode,
tag.rel.forknum,
tag.blknum
);
Ok(page_img)
}
//
// Collect all the WAL records that are needed to reconstruct a page
// image for the given cache entry.
//
// Returns an old page image (if any), and a vector of WAL records to apply
// over it.
//
pub fn collect_records_for_apply(&self, entry: &CacheEntry) -> (Option<Bytes>, Vec<WALRecord>) {
let minkey = CacheKey {
tag: BufferTag {
rel: entry.key.tag.rel,
blknum: 0,
},
lsn: 0,
};
let mut buf = BytesMut::new();
minkey.pack(&mut buf);
let mut readopts = rocksdb::ReadOptions::default();
readopts.set_iterate_lower_bound(buf.to_vec());
buf.clear();
entry.key.pack(&mut buf);
let iter = self.db.iterator_opt(
rocksdb::IteratorMode::From(&buf[..], rocksdb::Direction::Reverse),
readopts,
);
let mut base_img: Option<Bytes> = None;
let mut records: Vec<WALRecord> = Vec::new();
// Scan backwards, collecting the WAL records, until we hit an
// old page image.
for (_k, v) in iter {
buf.clear();
buf.extend_from_slice(&v);
let content = CacheEntryContent::unpack(&mut buf);
if let Some(img) = &content.page_image {
// We have a base image. No need to dig deeper into the list of
// records
base_img = Some(img.clone());
break;
} else if let Some(rec) = &content.wal_record {
records.push(rec.clone());
// If this WAL record initializes the page, no need to dig deeper.
if rec.will_init {
break;
}
} else {
panic!("no base image and no WAL record on cache entry");
}
}
records.reverse();
(base_img, records)
}
//
// Adds a WAL record to the page cache
//
pub fn put_wal_record(&self, tag: BufferTag, rec: WALRecord) {
let lsn = rec.lsn;
let key = CacheKey { tag, lsn };
let content = CacheEntryContent {
page_image: None,
wal_record: Some(rec),
apply_pending: false,
};
let mut key_buf = BytesMut::new();
key.pack(&mut key_buf);
let mut val_buf = BytesMut::new();
content.pack(&mut val_buf);
let _res = self.db.put(&key_buf[..], &val_buf[..]);
//trace!("put_wal_record lsn: {}", lsn);
self.num_entries.fetch_add(1, Ordering::Relaxed);
self.num_wal_records.fetch_add(1, Ordering::Relaxed);
}
//
// Adds a relation-wide WAL record (like truncate) to the page cache,
// associating it with all pages started with specified block number
//
pub async fn put_rel_wal_record(&self, tag: BufferTag, rec: WALRecord) -> anyhow::Result<()> {
let mut key = CacheKey { tag, lsn: rec.lsn };
let old_rel_size = self.relsize_get(&tag.rel, u64::MAX).await?;
let content = CacheEntryContent {
page_image: None,
wal_record: Some(rec),
apply_pending: false,
};
// set new relation size
trace!("Truncate relation {:?}", tag);
let mut key_buf = BytesMut::new();
let mut val_buf = BytesMut::new();
content.pack(&mut val_buf);
for blknum in tag.blknum..old_rel_size {
key_buf.clear();
key.tag.blknum = blknum;
key.pack(&mut key_buf);
trace!("put_wal_record lsn: {}", key.lsn);
let _res = self.db.put(&key_buf[..], &val_buf[..]);
}
let n = (old_rel_size - tag.blknum) as u64;
self.num_entries.fetch_add(n, Ordering::Relaxed);
self.num_wal_records.fetch_add(n, Ordering::Relaxed);
Ok(())
}
//
// Memorize a full image of a page version
//
pub fn put_page_image(&self, tag: BufferTag, lsn: u64, img: Bytes) {
let key = CacheKey { tag, lsn };
let content = CacheEntryContent {
page_image: Some(img),
wal_record: None,
apply_pending: false,
};
let mut key_buf = BytesMut::new();
key.pack(&mut key_buf);
let mut val_buf = BytesMut::new();
content.pack(&mut val_buf);
trace!("put_wal_record lsn: {}", key.lsn);
let _res = self.db.put(&key_buf[..], &val_buf[..]);
//debug!("inserted page image for {}/{}/{}_{} blk {} at {}",
// tag.spcnode, tag.dbnode, tag.relnode, tag.forknum, tag.blknum, lsn);
self.num_page_images.fetch_add(1, Ordering::Relaxed);
}
//
pub fn advance_last_valid_lsn(&self, lsn: u64) {
let mut shared = self.shared.lock().unwrap();
// Can't move backwards.
let oldlsn = shared.last_valid_lsn;
if lsn >= oldlsn {
shared.last_valid_lsn = lsn;
self.seqwait_lsn.advance(lsn);
self.last_valid_lsn.store(lsn, Ordering::Relaxed);
} else {
warn!(
"attempted to move last valid LSN backwards (was {:X}/{:X}, new {:X}/{:X})",
oldlsn >> 32,
oldlsn & 0xffffffff,
lsn >> 32,
lsn & 0xffffffff
);
}
}
//
// NOTE: this updates last_valid_lsn as well.
//
pub fn advance_last_record_lsn(&self, lsn: u64) {
let mut shared = self.shared.lock().unwrap();
// Can't move backwards.
assert!(lsn >= shared.last_valid_lsn);
assert!(lsn >= shared.last_record_lsn);
shared.last_valid_lsn = lsn;
shared.last_record_lsn = lsn;
self.seqwait_lsn.advance(lsn);
self.last_valid_lsn.store(lsn, Ordering::Relaxed);
self.last_record_lsn.store(lsn, Ordering::Relaxed);
}
//
pub fn _advance_first_valid_lsn(&self, lsn: u64) {
let mut shared = self.shared.lock().unwrap();
// Can't move backwards.
assert!(lsn >= shared.first_valid_lsn);
// Can't overtake last_valid_lsn (except when we're
// initializing the system and last_valid_lsn hasn't been set yet).
assert!(shared.last_valid_lsn == 0 || lsn < shared.last_valid_lsn);
shared.first_valid_lsn = lsn;
self.first_valid_lsn.store(lsn, Ordering::Relaxed);
}
pub fn init_valid_lsn(&self, lsn: u64) {
let mut shared = self.shared.lock().unwrap();
assert!(shared.first_valid_lsn == 0);
assert!(shared.last_valid_lsn == 0);
assert!(shared.last_record_lsn == 0);
shared.first_valid_lsn = lsn;
shared.last_valid_lsn = lsn;
shared.last_record_lsn = lsn;
self.first_valid_lsn.store(lsn, Ordering::Relaxed);
self.last_valid_lsn.store(lsn, Ordering::Relaxed);
self.last_record_lsn.store(lsn, Ordering::Relaxed);
}
pub fn get_last_valid_lsn(&self) -> u64 {
let shared = self.shared.lock().unwrap();
shared.last_record_lsn
}
pub async fn relsize_get(&self, rel: &RelTag, lsn: u64) -> anyhow::Result<u32> {
if lsn != u64::MAX {
self.wait_lsn(lsn).await?;
}
let mut key = CacheKey {
tag: BufferTag {
rel: *rel,
blknum: u32::MAX,
},
lsn,
};
let mut buf = BytesMut::new();
loop {
buf.clear();
key.pack(&mut buf);
let mut iter = self.db.iterator(rocksdb::IteratorMode::From(
&buf[..],
rocksdb::Direction::Reverse,
));
if let Some((k, v)) = iter.next() {
buf.clear();
buf.extend_from_slice(&k);
let tag = BufferTag::unpack(&mut buf);
if tag.rel == *rel {
buf.clear();
buf.extend_from_slice(&v);
let content = CacheEntryContent::unpack(&mut buf);
if let Some(rec) = &content.wal_record {
if rec.truncate {
if tag.blknum > 0 {
key.tag.blknum = tag.blknum - 1;
continue;
}
break;
}
}
let relsize = tag.blknum + 1;
trace!("Size of relation {:?} at {} is {}", rel, lsn, relsize);
return Ok(relsize);
}
}
break;
}
trace!("Size of relation {:?} at {} is zero", rel, lsn);
Ok(0)
}
pub async fn relsize_exist(&self, rel: &RelTag, lsn: u64) -> anyhow::Result<bool> {
self.wait_lsn(lsn).await?;
let key = CacheKey {
tag: BufferTag {
rel: *rel,
blknum: u32::MAX,
},
lsn,
};
let mut buf = BytesMut::new();
key.pack(&mut buf);
let mut iter = self.db.iterator(rocksdb::IteratorMode::From(
&buf[..],
rocksdb::Direction::Reverse,
));
if let Some((k, _v)) = iter.next() {
buf.clear();
buf.extend_from_slice(&k);
let tag = BufferTag::unpack(&mut buf);
if tag.rel == *rel {
trace!("Relation {:?} exists at {}", rel, lsn);
return Ok(true);
}
}
trace!("Relation {:?} doesn't exist at {}", rel, lsn);
Ok(false)
}
pub fn get_stats(&self) -> PageCacheStats {
PageCacheStats {
num_entries: self.num_entries.load(Ordering::Relaxed),
num_page_images: self.num_page_images.load(Ordering::Relaxed),
num_wal_records: self.num_wal_records.load(Ordering::Relaxed),
num_getpage_requests: self.num_getpage_requests.load(Ordering::Relaxed),
first_valid_lsn: self.first_valid_lsn.load(Ordering::Relaxed),
last_valid_lsn: self.last_valid_lsn.load(Ordering::Relaxed),
last_record_lsn: self.last_record_lsn.load(Ordering::Relaxed),
}
info!("initialized storage for tenant: {}", &tenantid);
m.insert(tenantid, repo);
}
}
pub fn get_stats() -> PageCacheStats {
let pcaches = PAGECACHES.lock().unwrap();
pub fn create_repository_for_tenant(
conf: &'static PageServerConf,
tenantid: ZTenantId,
) -> Result<()> {
let mut m = REPOSITORY.lock().unwrap();
let mut stats = PageCacheStats {
num_entries: 0,
num_page_images: 0,
num_wal_records: 0,
num_getpage_requests: 0,
first_valid_lsn: 0,
last_valid_lsn: 0,
last_record_lsn: 0,
};
// First check that the tenant doesn't exist already
if m.get(&tenantid).is_some() {
bail!("tenant {} already exists", tenantid);
}
let wal_redo_manager = Arc::new(PostgresRedoManager::new(conf, tenantid));
let repo = branches::create_repo(conf, tenantid, wal_redo_manager)?;
pcaches.iter().for_each(|(_sys_id, pcache)| {
stats += pcache.get_stats();
});
stats
m.insert(tenantid, repo);
Ok(())
}
pub fn insert_repository_for_tenant(tenantid: ZTenantId, repo: Arc<dyn Repository>) {
let o = &mut REPOSITORY.lock().unwrap();
o.insert(tenantid, repo);
}
pub fn get_repository_for_tenant(tenantid: &ZTenantId) -> Result<Arc<dyn Repository>> {
let o = &REPOSITORY.lock().unwrap();
o.get(tenantid)
.map(|repo| Arc::clone(repo))
.ok_or_else(|| anyhow!("repository not found for tenant name {}", tenantid))
}
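A short usage sketch (assuming the pageserver crate, and that init() or create_repository_for_tenant() has already populated the registry): request handlers resolve the tenant's repository first and then the timeline.

use std::sync::Arc;

use anyhow::Result;
use zenith_utils::zid::{ZTenantId, ZTimelineId};

use crate::page_cache;
use crate::repository::Timeline;

// Hypothetical helper used by a request handler.
fn timeline_for_request(
    tenantid: &ZTenantId,
    timelineid: ZTimelineId,
) -> Result<Arc<dyn Timeline>> {
    let repo = page_cache::get_repository_for_tenant(tenantid)?;
    repo.get_timeline(timelineid)
}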

File diff suppressed because it is too large.


@@ -1,65 +0,0 @@
// From pg_tablespace_d.h
//
pub const DEFAULTTABLESPACE_OID: u32 = 1663;
pub const GLOBALTABLESPACE_OID: u32 = 1664;
// Special values for non-rel files' tags
// TODO: maybe use an enum?
pub const PG_CONTROLFILE_FORKNUM: u32 = 42;
pub const PG_FILENODEMAP_FORKNUM: u32 = 43;
pub const PG_XACT_FORKNUM: u32 = 44;
pub const PG_MXACT_OFFSETS_FORKNUM: u32 = 45;
pub const PG_MXACT_MEMBERS_FORKNUM: u32 = 46;
//
// constants from clog.h
//
pub const CLOG_XACTS_PER_BYTE: u32 = 4;
pub const CLOG_XACTS_PER_PAGE: u32 = 8192 * CLOG_XACTS_PER_BYTE;
pub const CLOG_BITS_PER_XACT: u8 = 2;
pub const CLOG_XACT_BITMASK: u8 = (1 << CLOG_BITS_PER_XACT) - 1;
pub const TRANSACTION_STATUS_COMMITTED: u8 = 0x01;
pub const TRANSACTION_STATUS_ABORTED: u8 = 0x02;
pub const TRANSACTION_STATUS_SUB_COMMITTED: u8 = 0x03;
pub const CLOG_ZEROPAGE: u8 = 0x00;
pub const CLOG_TRUNCATE: u8 = 0x10;
// From xact.h
pub const XLOG_XACT_COMMIT: u8 = 0x00;
pub const XLOG_XACT_ABORT: u8 = 0x20;
/* mask for filtering opcodes out of xl_info */
pub const XLOG_XACT_OPMASK: u8 = 0x70;
/* does this record have a 'xinfo' field or not */
pub const XLOG_XACT_HAS_INFO: u8 = 0x80;
/*
* The following flags, stored in xinfo, determine which information is
* contained in commit/abort records.
*/
pub const XACT_XINFO_HAS_DBINFO: u32 = 1;
pub const XACT_XINFO_HAS_SUBXACTS: u32 = 2;
pub const XACT_XINFO_HAS_RELFILENODES: u32 = 4;
// From pg_control.h and rmgrlist.h
pub const XLOG_SWITCH: u8 = 0x40;
pub const XLOG_SMGR_TRUNCATE: u8 = 0x20;
pub const RM_XLOG_ID: u8 = 0;
pub const RM_XACT_ID: u8 = 1;
pub const RM_SMGR_ID: u8 = 2;
pub const RM_CLOG_ID: u8 = 3;
pub const RM_DBASE_ID: u8 = 4;
pub const RM_TBLSPC_ID: u8 = 5;
// pub const RM_MULTIXACT_ID:u8 = 6;
// from xlogreader.h
pub const XLR_INFO_MASK: u8 = 0x0F;
pub const XLR_RMGR_INFO_MASK: u8 = 0xF0;
// from dbcommands_xlog.h
pub const XLOG_DBASE_CREATE: u8 = 0x00;
pub const XLOG_DBASE_DROP: u8 = 0x10;
pub const XLOG_TBLSPC_CREATE: u8 = 0x00;
pub const XLOG_TBLSPC_DROP: u8 = 0x10;

pageserver/src/relish.rs Normal file

@@ -0,0 +1,236 @@
//!
//! Zenith stores PostgreSQL relations, and some other files, in the
//! repository. The relations (i.e. tables and indexes) take up most
//! of the space in a typical installation, while the other files are
//! small. We call each relation and other file that is stored in the
//! repository a "relish". It comes from "rel"-ish, as in "kind of a
//! rel", because it covers relations as well as other things that are
//! not relations, but are treated similarly for the purposes of the
//! storage layer.
//!
//! This source file contains the definition of the RelishTag struct,
//! which uniquely identifies a relish.
//!
//! Relishes come in two flavors: blocky and non-blocky. Relations and
//! SLRUs are blocky, that is, they are divided into 8k blocks, and
//! the repository tracks their size. Other relishes are non-blocky:
//! the content of the whole relish is stored as one blob. Block
//! number must be passed as 0 for all operations on a non-blocky
//! relish. The one "block" that you store in a non-blocky relish can
//! have arbitrary size, but they are expected to be small, or you
//! will have performance issues.
//!
//! All relishes are versioned by LSN in the repository.
//!
use serde::{Deserialize, Serialize};
use std::fmt;
use postgres_ffi::relfile_utils::forknumber_to_name;
use postgres_ffi::{Oid, TransactionId};
///
/// RelishTag identifies one relish.
///
#[derive(Debug, Clone, Copy, Hash, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum RelishTag {
// Relations correspond to PostgreSQL relation forks. Each
// PostgreSQL relation fork is considered a separate relish.
Relation(RelTag),
// SLRUs include pg_clog, pg_multixact/members, and
// pg_multixact/offsets. There are other SLRUs in PostgreSQL, but
// they don't need to be stored permanently (e.g. pg_subtrans),
// or we do not support them in zenith yet (pg_commit_ts).
//
// These are currently never requested directly by the compute
// nodes, although in principle that would be possible. However,
// when a new compute node is created, these are included in the
// tarball that we send to the compute node to initialize the
// PostgreSQL data directory.
//
// Each SLRU segment in PostgreSQL is considered a separate
// relish. For example, pg_clog/0000, pg_clog/0001, and so forth.
//
// SLRU segments are divided into blocks, like relations.
Slru { slru: SlruKind, segno: u32 },
// Miscellaneous other files that need to be included in the
// tarball at compute node creation. These are non-blocky, and are
// expected to be small.
//
// FileNodeMap represents PostgreSQL's 'pg_filenode.map'
// files. They are needed to map catalog table OIDs to filenode
// numbers. Usually the mapping is done by looking up a relation's
// 'relfilenode' field in the 'pg_class' system table, but that
// doesn't work for 'pg_class' itself and a few other such system
// relations. See PostgreSQL relmapper.c for details.
//
// Each database has a map file for its local mapped catalogs,
// and there is a separate map file for shared catalogs.
//
// These files are always 512 bytes long (although we don't check
// or care about that in the page server).
//
FileNodeMap { spcnode: Oid, dbnode: Oid },
//
// State files for prepared transactions (e.g pg_twophase/1234)
//
TwoPhase { xid: TransactionId },
// The control file, stored in global/pg_control
ControlFile,
// Special entry that represents PostgreSQL checkpoint. It doesn't
/// correspond to any physical file in PostgreSQL, but we use it
// to track fields needed to restore the checkpoint data in the
// control file, when a compute node is created.
Checkpoint,
}
impl RelishTag {
pub const fn is_blocky(&self) -> bool {
match self {
// These relishes work with blocks
RelishTag::Relation(_) | RelishTag::Slru { slru: _, segno: _ } => true,
// and these don't
RelishTag::FileNodeMap {
spcnode: _,
dbnode: _,
}
| RelishTag::TwoPhase { xid: _ }
| RelishTag::ControlFile
| RelishTag::Checkpoint => false,
}
}
// Physical relishes represent files and use
// RelationSizeEntry to track existing and dropped files.
// They can be both blocky and non-blocky.
pub const fn is_physical(&self) -> bool {
match self {
// These relishes represent physical files
RelishTag::Relation(_)
| RelishTag::Slru { .. }
| RelishTag::FileNodeMap { .. }
| RelishTag::TwoPhase { .. } => true,
// and these don't
| RelishTag::ControlFile
| RelishTag::Checkpoint => false,
}
}
// convenience function to check if this relish is a normal relation.
pub const fn is_relation(&self) -> bool {
if let RelishTag::Relation(_) = self {
true
} else {
false
}
}
}
///
/// Relation data file segment id throughout the Postgres cluster.
///
/// Every data file in Postgres is uniquely identified by 4 numbers:
/// - relation id / node (`relnode`)
/// - database id (`dbnode`)
/// - tablespace id (`spcnode`), in short this is a unique id of a separate
/// directory to store data files.
/// - forknumber (`forknum`) is used to split different kinds of data of the same relation
/// between some set of files (`relnode`, `relnode_fsm`, `relnode_vm`).
///
/// In native Postgres code `RelFileNode` structure and individual `ForkNumber` value
/// are used for the same purpose.
/// [See more related comments here](https://github.com/postgres/postgres/blob/99c5852e20a0987eca1c38ba0c09329d4076b6a0/src/include/storage/relfilenode.h#L57).
///
#[derive(Debug, PartialEq, Eq, PartialOrd, Hash, Ord, Clone, Copy, Serialize, Deserialize)]
pub struct RelTag {
pub forknum: u8,
pub spcnode: Oid,
pub dbnode: Oid,
pub relnode: Oid,
}
/// Display RelTag in the same format that's used in most PostgreSQL debug messages:
///
/// <spcnode>/<dbnode>/<relnode>[_fsm|_vm|_init]
///
impl fmt::Display for RelTag {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if let Some(forkname) = forknumber_to_name(self.forknum) {
write!(
f,
"{}/{}/{}_{}",
self.spcnode, self.dbnode, self.relnode, forkname
)
} else {
write!(f, "{}/{}/{}", self.spcnode, self.dbnode, self.relnode)
}
}
}
/// Display RelTag in the same format that's used in most PostgreSQL debug messages:
///
/// <spcnode>/<dbnode>/<relnode>[_fsm|_vm|_init]
///
impl fmt::Display for RelishTag {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
RelishTag::Relation(rel) => rel.fmt(f),
RelishTag::Slru { slru, segno } => {
// e.g. pg_clog/0001
write!(f, "{}/{:04X}", slru.to_str(), segno)
}
RelishTag::FileNodeMap { spcnode, dbnode } => {
write!(f, "relmapper file for spc {} db {}", spcnode, dbnode)
}
RelishTag::TwoPhase { xid } => {
write!(f, "pg_twophase/{:08X}", xid)
}
RelishTag::ControlFile => {
write!(f, "control file")
}
RelishTag::Checkpoint => {
write!(f, "checkpoint")
}
}
}
}
///
/// Non-relation transaction status files (clog (a.k.a. pg_xact) and
/// pg_multixact) in Postgres are handled by SLRU (Simple LRU) buffer,
/// hence the name.
///
/// These files are global for a postgres instance.
///
/// These files are divided into segments, which are divided into
/// pages of the same BLCKSZ as used for relation files.
///
#[derive(Debug, Clone, Copy, Hash, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum SlruKind {
Clog,
MultiXactMembers,
MultiXactOffsets,
}
impl SlruKind {
pub fn to_str(&self) -> &'static str {
match self {
Self::Clog => "pg_xact",
Self::MultiXactMembers => "pg_multixact/members",
Self::MultiXactOffsets => "pg_multixact/offsets",
}
}
}
pub const FIRST_NONREL_RELISH_TAG: RelishTag = RelishTag::Slru {
slru: SlruKind::Clog,
segno: 0,
};
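To make the Display formats above concrete, a small sketch with made-up OIDs (only DEFAULTTABLESPACE_OID is a real constant):

use crate::relish::{RelTag, RelishTag, SlruKind};

// Hypothetical examples of how relishes render via the Display impls above.
fn print_relish_examples() {
    let rel = RelishTag::Relation(RelTag {
        forknum: 0,     // main fork
        spcnode: 1663,  // DEFAULTTABLESPACE_OID
        dbnode: 13008,  // hypothetical database OID
        relnode: 16384, // hypothetical relation OID
    });
    let clog_seg = RelishTag::Slru {
        slru: SlruKind::Clog,
        segno: 1,
    };
    println!("{}", rel);      // e.g. "1663/13008/16384"
    println!("{}", clog_seg); // "pg_xact/0001"
}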


@@ -0,0 +1,716 @@
use crate::object_key::*;
use crate::relish::*;
use anyhow::Result;
use bytes::{Buf, BufMut, Bytes, BytesMut};
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::iter::Iterator;
use std::ops::AddAssign;
use std::sync::Arc;
use std::time::Duration;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTimelineId;
///
/// A repository corresponds to one .zenith directory. One repository holds multiple
/// timelines, forked off from the same initial call to 'initdb'.
pub trait Repository: Send + Sync {
/// Get Timeline handle for given zenith timeline ID.
fn get_timeline(&self, timelineid: ZTimelineId) -> Result<Arc<dyn Timeline>>;
/// Create a new, empty timeline. The caller is responsible for loading data into it
fn create_empty_timeline(
&self,
timelineid: ZTimelineId,
start_lsn: Lsn,
) -> Result<Arc<dyn Timeline>>;
/// Branch a timeline
fn branch_timeline(&self, src: ZTimelineId, dst: ZTimelineId, start_lsn: Lsn) -> Result<()>;
/// Perform one garbage collection iteration.
/// Garbage collection is performed periodically by the GC thread,
/// but it can also be explicitly requested through the page server API.
///
/// 'timelineid' specifies the timeline to GC, or None for all timelines.
/// `horizon` specifies the delta from the last LSN within which all object versions are preserved (the PITR interval).
/// The `compact` parameter is used to force compaction of the storage.
/// Some storage implementations are based on an LSM tree and require periodic merging (compaction).
/// Usually the storage implementation decides itself when compaction should be performed,
/// but for GC tests it may be useful to force compaction right after a GC iteration completes,
/// to make sure that all detected garbage is actually removed.
/// So right now `compact` is set to true when GC is explicitly requested through the page server API,
/// and set to false in the GC thread, which repeats GC iterations in a loop indefinitely.
fn gc_iteration(
&self,
timelineid: Option<ZTimelineId>,
horizon: u64,
compact: bool,
) -> Result<GcResult>;
// TODO get timelines?
//fn get_stats(&self) -> RepositoryStats;
}
///
/// Result of performing GC
///
#[derive(Default)]
pub struct GcResult {
// FIXME: These counters make sense for the ObjectRepository. They are not used
// by the LayeredRepository.
pub n_relations: u64,
pub inspected: u64,
pub truncated: u64,
pub deleted: u64,
pub prep_deleted: u64, // RelishTag::Twophase
pub slru_deleted: u64, // RelishTag::Slru
pub chkp_deleted: u64, // RelishTag::Checkpoint
pub control_deleted: u64, // RelishTag::ControlFile
pub filenodemap_deleted: u64, // RelishTag::FileNodeMap
pub dropped: u64,
// These are used for the LayeredRepository instead
pub snapshot_relfiles_total: u64,
pub snapshot_relfiles_needed_by_cutoff: u64,
pub snapshot_relfiles_needed_by_branches: u64,
pub snapshot_relfiles_not_updated: u64,
pub snapshot_relfiles_removed: u64, // # of snapshot files removed because they have been made obsolete by newer snapshot files.
pub snapshot_relfiles_dropped: u64, // # of snapshot files removed because the relation was dropped
pub snapshot_nonrelfiles_total: u64,
pub snapshot_nonrelfiles_needed_by_cutoff: u64,
pub snapshot_nonrelfiles_needed_by_branches: u64,
pub snapshot_nonrelfiles_not_updated: u64,
pub snapshot_nonrelfiles_removed: u64, // # of snapshot files removed because they have been made obsolete by newer snapshot files.
pub snapshot_nonrelfiles_dropped: u64, // # of snapshot files removed because the relation was dropped
pub elapsed: Duration,
}
impl AddAssign for GcResult {
fn add_assign(&mut self, other: Self) {
self.n_relations += other.n_relations;
self.truncated += other.truncated;
self.deleted += other.deleted;
self.dropped += other.dropped;
self.snapshot_relfiles_total += other.snapshot_relfiles_total;
self.snapshot_relfiles_needed_by_cutoff += other.snapshot_relfiles_needed_by_cutoff;
self.snapshot_relfiles_needed_by_branches += other.snapshot_relfiles_needed_by_branches;
self.snapshot_relfiles_not_updated += other.snapshot_relfiles_not_updated;
self.snapshot_relfiles_removed += other.snapshot_relfiles_removed;
self.snapshot_relfiles_dropped += other.snapshot_relfiles_dropped;
self.snapshot_nonrelfiles_total += other.snapshot_nonrelfiles_total;
self.snapshot_nonrelfiles_needed_by_cutoff += other.snapshot_nonrelfiles_needed_by_cutoff;
self.snapshot_nonrelfiles_needed_by_branches +=
other.snapshot_nonrelfiles_needed_by_branches;
self.snapshot_nonrelfiles_not_updated += other.snapshot_nonrelfiles_not_updated;
self.snapshot_nonrelfiles_removed += other.snapshot_nonrelfiles_removed;
self.snapshot_nonrelfiles_dropped += other.snapshot_nonrelfiles_dropped;
self.elapsed += other.elapsed;
}
}
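As a hedged sketch of how these two pieces fit together (assuming it sits in the same module as Repository and GcResult):

use anyhow::Result;

// Hypothetical GC worker: run a few iterations over all timelines and sum up
// the results with the AddAssign impl above. Compaction is left to the
// storage implementation (compact = false).
fn run_gc(repo: &dyn Repository, horizon: u64, iterations: usize) -> Result<GcResult> {
    let mut total = GcResult::default();
    for _ in 0..iterations {
        total += repo.gc_iteration(None, horizon, false)?;
    }
    Ok(total)
}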
pub trait Timeline: Send + Sync {
//------------------------------------------------------------------------------
// Public GET functions
//------------------------------------------------------------------------------
/// Look up given page in the cache.
fn get_page_at_lsn(&self, tag: RelishTag, blknum: u32, lsn: Lsn) -> Result<Bytes>;
/// Look up given page in the cache.
fn get_page_at_lsn_nowait(&self, tag: RelishTag, blknum: u32, lsn: Lsn) -> Result<Bytes>;
/// Get size of a relish
fn get_relish_size(&self, tag: RelishTag, lsn: Lsn) -> Result<Option<u32>>;
/// Does relation exist?
fn get_rel_exists(&self, tag: RelishTag, lsn: Lsn) -> Result<bool>;
/// Get a list of all distinct relations in given tablespace and database.
fn list_rels(&self, spcnode: u32, dbnode: u32, lsn: Lsn) -> Result<HashSet<RelTag>>;
/// Get a list of non-relational objects
fn list_nonrels<'a>(&'a self, lsn: Lsn) -> Result<HashSet<RelishTag>>;
//------------------------------------------------------------------------------
// Public PUT functions, to update the repository with new page versions.
//
// These are called by the WAL receiver to digest WAL records.
//------------------------------------------------------------------------------
/// Put a new page version that can be constructed from a WAL record
///
/// This will implicitly extend the relation, if the page is beyond the
/// current end-of-file.
fn put_wal_record(&self, tag: RelishTag, blknum: u32, rec: WALRecord) -> Result<()>;
/// Like put_wal_record, but with ready-made image of the page.
fn put_page_image(
&self,
tag: RelishTag,
blknum: u32,
lsn: Lsn,
img: Bytes,
update_meta: bool,
) -> Result<()>;
/// Truncate relation
fn put_truncation(&self, rel: RelishTag, lsn: Lsn, nblocks: u32) -> Result<()>;
/// Unlink relish.
/// This method is used for marking dropped relations and truncated SLRU segments
fn put_unlink(&self, tag: RelishTag, lsn: Lsn) -> Result<()>;
/// Put raw data
fn put_raw_data(&self, tag: ObjectTag, lsn: Lsn, data: &[u8]) -> Result<()>;
/// Remember that all WAL before the given LSN has been processed.
///
/// The WAL receiver calls this after the put_* functions, to indicate that
/// all WAL before this point has been digested. Before that, if you call
/// GET on an earlier LSN, it will block.
fn advance_last_valid_lsn(&self, lsn: Lsn);
fn get_last_valid_lsn(&self) -> Lsn;
fn init_valid_lsn(&self, lsn: Lsn);
/// Like `advance_last_valid_lsn`, but this always points to the end of
/// a WAL record, not in the middle of one.
///
/// This must be <= last valid LSN. This is tracked separately from last
/// valid LSN, so that the WAL receiver knows where to restart streaming.
fn advance_last_record_lsn(&self, lsn: Lsn);
fn get_last_record_lsn(&self) -> Lsn;
// Like `get_last_record_lsn`, but returns the start position of the last record rather than its end.
fn get_prev_record_lsn(&self) -> Lsn;
///
/// Flush to disk all data that was written with the put_* functions
///
/// NOTE: This has nothing to do with checkpoint in PostgreSQL. We don't
/// know anything about them here in the repository.
fn checkpoint(&self) -> Result<()>;
/// Events for all relations in the timeline.
/// Contains updates from start up to the last valid LSN
/// at time of history() call. This lsn can be read via the lsn() function.
///
/// Relation size is increased implicitly and decreased with Truncate updates.
// TODO ordering guarantee?
fn history<'a>(&'a self) -> Result<Box<dyn History + 'a>>;
}
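// Hypothetical usage sketch of the trait above: the caller first PUTs page versions,
// then advances the last valid LSN, and only then reads the pages back. The repository
// handle, timeline id and page contents are made up for the example; the call pattern
// mirrors the WAL receiver and the tests at the end of this file.
fn example_timeline_usage(repo: &dyn Repository, timelineid: ZTimelineId) -> Result<()> {
    let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
    tline.init_valid_lsn(Lsn(1));

    // Ingest a page image for block 0 of an arbitrary relation at LSN 2.
    let tag = RelishTag::Relation(RelTag {
        spcnode: 0,
        dbnode: 111,
        relnode: 1000,
        forknum: 0,
    });
    tline.put_page_image(tag, 0, Lsn(2), Bytes::from(vec![0u8; 8192]), true)?;

    // Tell the timeline that all WAL up to LSN 2 has been digested; a GET at an LSN
    // that has not been marked valid yet would block until this point advances.
    tline.advance_last_valid_lsn(Lsn(2));

    // Now the page can be read back at (or after) the LSN it was put at.
    let _page = tline.get_page_at_lsn(tag, 0, Lsn(2))?;
    Ok(())
}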
pub trait History: Iterator<Item = Result<Modification>> {
/// The last_valid_lsn at the time of history() call.
fn lsn(&self) -> Lsn;
}
//
// Structure representing an update operation in the object storage.
// It is used to copy the object storage contents in the PUSH method.
//
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct Modification {
pub tag: ObjectTag,
pub lsn: Lsn,
pub data: Vec<u8>,
}
impl Modification {
pub fn new(entry: (ObjectTag, Lsn, Vec<u8>)) -> Modification {
Modification {
tag: entry.0,
lsn: entry.1,
data: entry.2,
}
}
}
#[derive(Clone)]
pub struct RepositoryStats {
pub num_entries: Lsn,
pub num_page_images: Lsn,
pub num_wal_records: Lsn,
pub num_getpage_requests: Lsn,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct WALRecord {
pub lsn: Lsn, // LSN at the *end* of the record
pub will_init: bool,
pub rec: Bytes,
// Remember the offset of main_data in rec,
// so that we don't have to parse the record again.
// If record has no main_data, this offset equals rec.len().
pub main_data_offset: u32,
}
impl WALRecord {
pub fn pack(&self, buf: &mut BytesMut) {
buf.put_u64(self.lsn.0);
buf.put_u8(self.will_init as u8);
buf.put_u32(self.main_data_offset);
buf.put_u32(self.rec.len() as u32);
buf.put_slice(&self.rec[..]);
}
pub fn unpack(buf: &mut Bytes) -> WALRecord {
let lsn = Lsn::from(buf.get_u64());
let will_init = buf.get_u8() != 0;
let main_data_offset = buf.get_u32();
let mut dst = vec![0u8; buf.get_u32() as usize];
buf.copy_to_slice(&mut dst);
WALRecord {
lsn,
will_init,
rec: Bytes::from(dst),
main_data_offset,
}
}
}
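// Hypothetical round-trip sketch for the (de)serialization above; the record contents
// are made up. Note the main_data_offset convention: with no main_data it equals rec.len().
fn example_walrecord_roundtrip() {
    let rec = WALRecord {
        lsn: Lsn(0x16_0000_0028),
        will_init: true,
        rec: Bytes::from_static(b"dummy record"),
        main_data_offset: 12, // == rec.len(), i.e. this record carries no main_data
    };
    let mut buf = BytesMut::new();
    rec.pack(&mut buf);
    let unpacked = WALRecord::unpack(&mut buf.freeze());
    assert_eq!(rec, unpacked);
}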
///
/// Tests that should work the same with any Repository/Timeline implementation.
///
#[cfg(test)]
mod tests {
use super::*;
use crate::layered_repository::LayeredRepository;
use crate::object_repository::ObjectRepository;
use crate::object_repository::{ObjectValue, PageEntry, RelationSizeEntry};
use crate::rocksdb_storage::RocksObjectStore;
use crate::walredo::{WalRedoError, WalRedoManager};
use crate::{PageServerConf, RepositoryFormat};
use postgres_ffi::pg_constants;
use std::fs;
use std::path::PathBuf;
use std::str::FromStr;
use std::time::Duration;
use zenith_utils::bin_ser::BeSer;
use zenith_utils::postgres_backend::AuthType;
use zenith_utils::zid::ZTenantId;
/// Arbitrary relation tag, for testing.
const TESTREL_A: RelishTag = RelishTag::Relation(RelTag {
spcnode: 0,
dbnode: 111,
relnode: 1000,
forknum: 0,
});
const TESTREL_B: RelishTag = RelishTag::Relation(RelTag {
spcnode: 0,
dbnode: 111,
relnode: 1001,
forknum: 0,
});
/// Convenience function to create a page image with given string as the only content
#[allow(non_snake_case)]
fn TEST_IMG(s: &str) -> Bytes {
let mut buf = BytesMut::new();
buf.extend_from_slice(s.as_bytes());
buf.resize(8192, 0);
buf.freeze()
}
static ZERO_PAGE: Bytes = Bytes::from_static(&[0u8; 8192]);
fn get_test_repo(
test_name: &str,
repository_format: RepositoryFormat,
) -> Result<Box<dyn Repository>> {
let repo_dir = PathBuf::from(format!("../tmp_check/test_{}", test_name));
let _ = fs::remove_dir_all(&repo_dir);
fs::create_dir_all(&repo_dir)?;
fs::create_dir_all(&repo_dir.join("timelines"))?;
let conf = PageServerConf {
daemonize: false,
gc_horizon: 64 * 1024 * 1024,
gc_period: Duration::from_secs(10),
listen_addr: "127.0.0.1:5430".to_string(),
http_endpoint_addr: "127.0.0.1:9898".to_string(),
superuser: "zenith_admin".to_string(),
workdir: repo_dir,
pg_distrib_dir: "".into(),
auth_type: AuthType::Trust,
auth_validation_public_key_path: None,
repository_format,
};
// Make a static copy of the config. This can never be free'd, but that's
// OK in a test.
let conf: &'static PageServerConf = Box::leak(Box::new(conf));
let tenantid = ZTenantId::generate();
fs::create_dir_all(conf.tenant_path(&tenantid)).unwrap();
let walredo_mgr = TestRedoManager {};
let repo: Box<dyn Repository + Sync + Send> = match conf.repository_format {
RepositoryFormat::Layered => Box::new(LayeredRepository::new(
conf,
Arc::new(walredo_mgr),
tenantid,
)),
RepositoryFormat::RocksDb => {
let obj_store = RocksObjectStore::create(conf, &tenantid)?;
Box::new(ObjectRepository::new(
conf,
Arc::new(obj_store),
Arc::new(walredo_mgr),
tenantid,
))
}
};
Ok(repo)
}
/// Test get_relsize() and truncation.
#[test]
fn test_relsize_rocksdb() -> Result<()> {
let repo = get_test_repo("test_relsize_rocksdb", RepositoryFormat::RocksDb)?;
test_relsize(&*repo)
}
#[test]
fn test_relsize_layered() -> Result<()> {
let repo = get_test_repo("test_relsize_layered", RepositoryFormat::Layered)?;
test_relsize(&*repo)
}
fn test_relsize(repo: &dyn Repository) -> Result<()> {
// get_timeline() with non-existent timeline id should fail
//repo.get_timeline("11223344556677881122334455667788");
// Create timeline to work on
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
tline.init_valid_lsn(Lsn(1));
tline.put_page_image(TESTREL_A, 0, Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(3), TEST_IMG("foo blk 0 at 3"), true)?;
tline.put_page_image(TESTREL_A, 1, Lsn(4), TEST_IMG("foo blk 1 at 4"), true)?;
tline.put_page_image(TESTREL_A, 2, Lsn(5), TEST_IMG("foo blk 2 at 5"), true)?;
tline.advance_last_valid_lsn(Lsn(5));
// The relation was created at LSN 2, not visible at LSN 1 yet.
assert_eq!(tline.get_rel_exists(TESTREL_A, Lsn(1))?, false);
assert!(tline.get_relish_size(TESTREL_A, Lsn(1))?.is_none());
assert_eq!(tline.get_rel_exists(TESTREL_A, Lsn(2))?, true);
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(2))?.unwrap(), 1);
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(5))?.unwrap(), 3);
// Check page contents at each LSN
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(2))?,
TEST_IMG("foo blk 0 at 2")
);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(3))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(4))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 1, Lsn(4))?,
TEST_IMG("foo blk 1 at 4")
);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(5))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 1, Lsn(5))?,
TEST_IMG("foo blk 1 at 4")
);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 2, Lsn(5))?,
TEST_IMG("foo blk 2 at 5")
);
// Truncate last block
tline.put_truncation(TESTREL_A, Lsn(6), 2)?;
tline.advance_last_valid_lsn(Lsn(6));
// Check reported size and contents after truncation
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(6))?.unwrap(), 2);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(6))?,
TEST_IMG("foo blk 0 at 3")
);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 1, Lsn(6))?,
TEST_IMG("foo blk 1 at 4")
);
// should still see the truncated block with older LSN
assert_eq!(tline.get_relish_size(TESTREL_A, Lsn(5))?.unwrap(), 3);
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 2, Lsn(5))?,
TEST_IMG("foo blk 2 at 5")
);
Ok(())
}
/// Test get_relsize() and truncation with a file larger than 1 GB, so that it's
/// split into multiple 1 GB segments in Postgres.
///
/// This isn't very interesting with the RocksDb implementation, as we don't pay
/// any attention to Postgres segment boundaries there.
#[test]
fn test_large_rel_rocksdb() -> Result<()> {
let repo = get_test_repo("test_large_rel_rocksdb", RepositoryFormat::RocksDb)?;
test_large_rel(&*repo)
}
#[test]
fn test_large_rel_layered() -> Result<()> {
let repo = get_test_repo("test_large_rel_layered", RepositoryFormat::Layered)?;
test_large_rel(&*repo)
}
fn test_large_rel(repo: &dyn Repository) -> Result<()> {
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
tline.init_valid_lsn(Lsn(1));
let mut lsn = 1;
for blknum in 0..pg_constants::RELSEG_SIZE + 1 {
let img = TEST_IMG(&format!("foo blk {} at {}", blknum, Lsn(lsn)));
lsn += 1;
tline.put_page_image(TESTREL_A, blknum as u32, Lsn(lsn), img, true)?;
}
tline.advance_last_valid_lsn(Lsn(lsn));
assert_eq!(
tline.get_relish_size(TESTREL_A, Lsn(lsn))?.unwrap(),
pg_constants::RELSEG_SIZE + 1
);
// Truncate one block
lsn += 1;
tline.put_truncation(TESTREL_A, Lsn(lsn), pg_constants::RELSEG_SIZE)?;
tline.advance_last_valid_lsn(Lsn(lsn));
assert_eq!(
tline.get_relish_size(TESTREL_A, Lsn(lsn))?.unwrap(),
pg_constants::RELSEG_SIZE
);
// Truncate another block
lsn += 1;
tline.put_truncation(TESTREL_A, Lsn(lsn), pg_constants::RELSEG_SIZE - 1)?;
tline.advance_last_valid_lsn(Lsn(lsn));
assert_eq!(
tline.get_relish_size(TESTREL_A, Lsn(lsn))?.unwrap(),
pg_constants::RELSEG_SIZE - 1
);
Ok(())
}
fn skip_nonrel_objects<'a>(
snapshot: Box<dyn History + 'a>,
) -> Result<impl Iterator<Item = <dyn History as Iterator>::Item> + 'a> {
Ok(snapshot.skip_while(|r| match r {
Ok(m) => match m.tag {
ObjectTag::RelationMetadata(_) => false,
_ => true,
},
_ => panic!("Iteration error"),
}))
}
#[test]
fn test_branch_rocksdb() -> Result<()> {
let repo = get_test_repo("test_branch_rocksdb", RepositoryFormat::RocksDb)?;
test_branch(&*repo)
}
#[test]
fn test_branch_layered() -> Result<()> {
let repo = get_test_repo("test_branch_layered", RepositoryFormat::Layered)?;
test_branch(&*repo)
}
///
/// Test branch creation
///
fn test_branch(repo: &dyn Repository) -> Result<()> {
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
// Import initial dummy checkpoint record, otherwise the get_timeline() call
// after branching fails below
tline.put_page_image(RelishTag::Checkpoint, 0, Lsn(1), ZERO_PAGE.clone(), false)?;
// Create a relation on the timeline
tline.init_valid_lsn(Lsn(1));
tline.put_page_image(TESTREL_A, 0, Lsn(2), TEST_IMG("foo blk 0 at 2"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(3), TEST_IMG("foo blk 0 at 3"), true)?;
tline.put_page_image(TESTREL_A, 0, Lsn(4), TEST_IMG("foo blk 0 at 4"), true)?;
// Create another relation
tline.put_page_image(TESTREL_B, 0, Lsn(2), TEST_IMG("foobar blk 0 at 2"), true)?;
tline.advance_last_valid_lsn(Lsn(4));
// Branch the history, modify relation differently on the new timeline
let newtimelineid = ZTimelineId::from_str("AA223344556677881122334455667788").unwrap();
repo.branch_timeline(timelineid, newtimelineid, Lsn(3))?;
let newtline = repo.get_timeline(newtimelineid)?;
newtline.put_page_image(TESTREL_A, 0, Lsn(4), TEST_IMG("bar blk 0 at 4"), true)?;
newtline.advance_last_valid_lsn(Lsn(4));
// Check page contents on both branches
assert_eq!(
tline.get_page_at_lsn(TESTREL_A, 0, Lsn(4))?,
TEST_IMG("foo blk 0 at 4")
);
assert_eq!(
newtline.get_page_at_lsn(TESTREL_A, 0, Lsn(4))?,
TEST_IMG("bar blk 0 at 4")
);
assert_eq!(
newtline.get_page_at_lsn(TESTREL_B, 0, Lsn(4))?,
TEST_IMG("foobar blk 0 at 2")
);
assert_eq!(newtline.get_relish_size(TESTREL_B, Lsn(4))?.unwrap(), 1);
Ok(())
}
#[test]
fn test_history_rocksdb() -> Result<()> {
let repo = get_test_repo("test_history_rocksdb", RepositoryFormat::RocksDb)?;
test_history(&*repo)
}
#[test]
// TODO: This doesn't work with the layered storage, the functions needed for push/pull
// functionality haven't been implemented yet.
#[ignore]
fn test_history_layered() -> Result<()> {
let repo = get_test_repo("test_history_layered", RepositoryFormat::Layered)?;
test_history(&*repo)
}
fn test_history(repo: &dyn Repository) -> Result<()> {
let timelineid = ZTimelineId::from_str("11223344556677881122334455667788").unwrap();
let tline = repo.create_empty_timeline(timelineid, Lsn(0))?;
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(0));
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(None, snapshot.next().transpose()?);
// add a page and advance the last valid LSN
let rel = TESTREL_A;
tline.put_page_image(rel, 1, Lsn(1), TEST_IMG("blk 1 @ lsn 1"), true)?;
tline.advance_last_valid_lsn(Lsn(1));
let expected_page = Modification {
tag: ObjectTag::Buffer(rel, 1),
lsn: Lsn(1),
data: ObjectValue::ser(&ObjectValue::Page(PageEntry::Page(TEST_IMG(
"blk 1 @ lsn 1",
))))?,
};
let expected_init_size = Modification {
tag: ObjectTag::RelationMetadata(rel),
lsn: Lsn(1),
data: ObjectValue::ser(&ObjectValue::RelationSize(RelationSizeEntry::Size(2)))?,
};
let expected_trunc_size = Modification {
tag: ObjectTag::RelationMetadata(rel),
lsn: Lsn(2),
data: ObjectValue::ser(&ObjectValue::RelationSize(RelationSizeEntry::Size(0)))?,
};
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(1));
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(
Some(&expected_init_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(Some(&expected_page), snapshot.next().transpose()?.as_ref());
assert_eq!(None, snapshot.next().transpose()?);
// truncate to zero, but don't advance the last valid LSN
tline.put_truncation(rel, Lsn(2), 0)?;
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(1));
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(
Some(&expected_init_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(Some(&expected_page), snapshot.next().transpose()?.as_ref());
assert_eq!(None, snapshot.next().transpose()?);
// advance the last valid LSN and the truncation should be observable
tline.advance_last_valid_lsn(Lsn(2));
let snapshot = tline.history()?;
assert_eq!(snapshot.lsn(), Lsn(2));
let mut snapshot = skip_nonrel_objects(snapshot)?;
assert_eq!(
Some(&expected_init_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(
Some(&expected_trunc_size),
snapshot.next().transpose()?.as_ref()
);
assert_eq!(Some(&expected_page), snapshot.next().transpose()?.as_ref());
assert_eq!(None, snapshot.next().transpose()?);
Ok(())
}
// Mock WAL redo manager that doesn't do much
struct TestRedoManager {}
impl WalRedoManager for TestRedoManager {
fn request_redo(
&self,
rel: RelishTag,
blknum: u32,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
) -> Result<Bytes, WalRedoError> {
let s = format!(
"redo for {} blk {} to get to {}, with {} and {} records",
rel,
blknum,
lsn,
if base_img.is_some() {
"base image"
} else {
"no base image"
},
records.len()
);
println!("{}", s);
Ok(TEST_IMG(&s))
}
}
}

File diff suppressed because it is too large


@@ -1,322 +0,0 @@
//
// Restore chunks from S3
//
// This runs once at Page Server startup. It loads all the "base images" from
// S3 into the in-memory page cache. It also initializes the "last valid LSN"
// in the page cache to the LSN of the base image, so that when the WAL receiver
// is started, it starts streaming from that LSN.
//
use bytes::{Buf, BytesMut};
use log::*;
use regex::Regex;
use std::env;
use std::fmt;
use s3::bucket::Bucket;
use s3::creds::Credentials;
use s3::region::Region;
use s3::S3Error;
use tokio::runtime;
use futures::future;
use crate::{page_cache, PageServerConf};
struct Storage {
region: Region,
credentials: Credentials,
bucket: String,
}
pub fn restore_main(conf: &PageServerConf) {
// Create a new thread pool
let runtime = runtime::Runtime::new().unwrap();
runtime.block_on(async {
let result = restore_chunk(conf).await;
match result {
Ok(_) => {}
Err(err) => {
error!("S3 error: {}", err);
}
}
});
}
//
// Restores one chunk from S3.
//
// 1. Fetch the last base image >= given LSN
// 2. Fetch all WAL
//
// Load it all into the page cache.
//
async fn restore_chunk(conf: &PageServerConf) -> Result<(), S3Error> {
let backend = Storage {
region: Region::Custom {
region: env::var("S3_REGION").unwrap(),
endpoint: env::var("S3_ENDPOINT").unwrap(),
},
credentials: Credentials::new(
Some(&env::var("S3_ACCESSKEY").unwrap()),
Some(&env::var("S3_SECRET").unwrap()),
None,
None,
None,
)
.unwrap(),
bucket: "zenith-testbucket".to_string(),
};
info!("Restoring from S3...");
// Create Bucket in REGION for BUCKET
let bucket = Bucket::new_with_path_style(&backend.bucket, backend.region, backend.credentials)?;
// List out contents of directory
let results: Vec<s3::serde_types::ListBucketResult> = bucket
.list("relationdata/".to_string(), Some("".to_string()))
.await?;
// TODO: get that from backup
let sys_id: u64 = 42;
let mut oldest_lsn = 0;
let mut slurp_futures: Vec<_> = Vec::new();
for result in results {
for object in result.contents {
// Download every relation file, slurping them into memory
let key = object.key;
let relpath = key.strip_prefix("relationdata/").unwrap();
let parsed = parse_rel_file_path(&relpath);
match parsed {
Ok(p) => {
if oldest_lsn == 0 || p.lsn < oldest_lsn {
oldest_lsn = p.lsn;
}
let b = bucket.clone();
let f = slurp_base_file(conf, sys_id, b, key.to_string(), p);
slurp_futures.push(f);
}
Err(e) => {
warn!("unrecognized file: {} ({})", relpath, e);
}
};
}
}
if oldest_lsn == 0 {
panic!("no base backup found");
}
let pcache = page_cache::get_pagecache(conf, sys_id);
pcache.init_valid_lsn(oldest_lsn);
info!("{} files to restore...", slurp_futures.len());
future::join_all(slurp_futures).await;
info!("restored!");
Ok(())
}
// From pg_tablespace_d.h
//
// FIXME: we'll probably need these elsewhere too, move to some common location
const DEFAULTTABLESPACE_OID: u32 = 1663;
const GLOBALTABLESPACE_OID: u32 = 1664;
#[derive(Debug)]
struct FilePathError {
msg: String,
}
impl FilePathError {
fn new(msg: &str) -> FilePathError {
FilePathError {
msg: msg.to_string(),
}
}
}
impl From<core::num::ParseIntError> for FilePathError {
fn from(e: core::num::ParseIntError) -> Self {
return FilePathError {
msg: format!("invalid filename: {}", e),
};
}
}
impl fmt::Display for FilePathError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "invalid filename")
}
}
fn forkname_to_forknum(forkname: Option<&str>) -> Result<u32, FilePathError> {
match forkname {
// "main" is not in filenames, it's implicit if the fork name is not present
None => Ok(0),
Some("fsm") => Ok(1),
Some("vm") => Ok(2),
Some("init") => Ok(3),
Some(_) => Err(FilePathError::new("invalid forkname")),
}
}
#[derive(Debug)]
struct ParsedBaseImageFileName {
pub spcnode: u32,
pub dbnode: u32,
pub relnode: u32,
pub forknum: u32,
pub segno: u32,
pub lsn: u64,
}
// Base image file names carry the LSN of the image as a 16-hex-digit suffix.
// Accepted formats (see the regex below):
// <oid>_<lsn>
// <oid>_<fork name>_<lsn>
// <oid>.<segment number>_<lsn>
// <oid>_<fork name>.<segment number>_<lsn>
fn parse_filename(fname: &str) -> Result<(u32, u32, u32, u64), FilePathError> {
let re = Regex::new(r"^(?P<relnode>\d+)(_(?P<forkname>[a-z]+))?(\.(?P<segno>\d+))?_(?P<lsnhi>[[:xdigit:]]{8})(?P<lsnlo>[[:xdigit:]]{8})$").unwrap();
let caps = re
.captures(fname)
.ok_or_else(|| FilePathError::new("invalid relation data file name"))?;
let relnode_str = caps.name("relnode").unwrap().as_str();
let relnode: u32 = relnode_str.parse()?;
let forkname_match = caps.name("forkname");
let forkname = if forkname_match.is_none() {
None
} else {
Some(forkname_match.unwrap().as_str())
};
let forknum = forkname_to_forknum(forkname)?;
let segno_match = caps.name("segno");
let segno = if segno_match.is_none() {
0
} else {
segno_match.unwrap().as_str().parse::<u32>()?
};
let lsn_hi: u64 = caps.name("lsnhi").unwrap().as_str().parse()?;
let lsn_lo: u64 = caps.name("lsnlo").unwrap().as_str().parse()?;
let lsn = lsn_hi << 32 | lsn_lo;
Ok((relnode, forknum, segno, lsn))
}
fn parse_rel_file_path(path: &str) -> Result<ParsedBaseImageFileName, FilePathError> {
/*
* Relation data files can be in one of the following directories:
*
* global/
* shared relations
*
* base/<db oid>/
* regular relations, default tablespace
*
* pg_tblspc/<tblspc oid>/<tblspc version>/
* within a non-default tablespace (the name of the directory
* depends on version)
*
* And the relation data files themselves have a filename like:
*
* <oid>.<segment number>
*/
if let Some(fname) = path.strip_prefix("global/") {
let (relnode, forknum, segno, lsn) = parse_filename(fname)?;
Ok(ParsedBaseImageFileName {
spcnode: GLOBALTABLESPACE_OID,
dbnode: 0,
relnode,
forknum,
segno,
lsn,
})
} else if let Some(dbpath) = path.strip_prefix("base/") {
let mut s = dbpath.split("/");
let dbnode_str = s
.next()
.ok_or_else(|| FilePathError::new("invalid relation data file name"))?;
let dbnode: u32 = dbnode_str.parse()?;
let fname = s
.next()
.ok_or_else(|| FilePathError::new("invalid relation data file name"))?;
if s.next().is_some() {
return Err(FilePathError::new("invalid relation data file name"));
};
let (relnode, forknum, segno, lsn) = parse_filename(fname)?;
Ok(ParsedBaseImageFileName {
spcnode: DEFAULTTABLESPACE_OID,
dbnode,
relnode,
forknum,
segno,
lsn,
})
} else if let Some(_) = path.strip_prefix("pg_tblspc/") {
// TODO
Err(FilePathError::new("tablespaces not supported"))
} else {
Err(FilePathError::new("invalid relation data file name"))
}
}
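// Hypothetical example of the mapping implemented above; the path is made up.
// A file under "base/<db oid>/" resolves to the default tablespace.
fn example_parse_rel_file_path() -> Result<(), FilePathError> {
    let p = parse_rel_file_path("base/13008/1259_vm_0000000116000000")?;
    assert_eq!(p.spcnode, DEFAULTTABLESPACE_OID);
    assert_eq!(p.dbnode, 13008);
    assert_eq!(p.relnode, 1259);
    assert_eq!(p.forknum, 2); // "vm" fork
    assert_eq!(p.segno, 0); // no explicit segment number in the file name
    Ok(())
}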
//
// Load a base file from S3, and insert it into the page cache
//
async fn slurp_base_file(
conf: &PageServerConf,
sys_id: u64,
bucket: Bucket,
s3path: String,
parsed: ParsedBaseImageFileName,
) {
// FIXME: rust-s3 opens a new connection for each request. Should reuse
// the reqwest::Client object. But that requires changes to rust-s3 itself.
let (data, code) = bucket.get_object(s3path.clone()).await.unwrap();
trace!("got response: {} on {}", code, &s3path);
assert_eq!(200, code);
let mut bytes = BytesMut::from(data.as_slice()).freeze();
// FIXME: use constants (BLCKSZ)
let mut blknum: u32 = parsed.segno * (1024 * 1024 * 1024 / 8192);
let pcache = page_cache::get_pagecache(conf, sys_id);
while bytes.remaining() >= 8192 {
let tag = page_cache::BufferTag {
rel: page_cache::RelTag {
spcnode: parsed.spcnode,
dbnode: parsed.dbnode,
relnode: parsed.relnode,
forknum: parsed.forknum as u8,
},
blknum,
};
pcache.put_page_image(tag, parsed.lsn, bytes.copy_to_bytes(8192));
blknum += 1;
}
}


@@ -0,0 +1,475 @@
//!
//! An implementation of the ObjectStore interface, backed by RocksDB
//!
use crate::object_key::*;
use crate::object_store::ObjectStore;
use crate::relish::*;
use crate::PageServerConf;
use anyhow::{bail, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use zenith_utils::bin_ser::BeSer;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTenantId;
use zenith_utils::zid::ZTimelineId;
#[derive(Debug, Clone, Serialize, Deserialize)]
struct StorageKey {
obj_key: ObjectKey,
lsn: Lsn,
}
impl StorageKey {
/// The first key for a given timeline
fn timeline_start(timeline: ZTimelineId) -> Self {
Self {
obj_key: ObjectKey {
timeline,
tag: ObjectTag::TimelineMetadataTag,
},
lsn: Lsn(0),
}
}
}
///
/// RocksDB is very inefficient at deleting individual records. Instead, we use a
/// compaction (merge) filter, which lets us throw records away during the LSM merge phase.
/// Unfortunately, it is hard (if at all possible) to determine at merge time whether a
/// version can be removed. A version can be removed if:
/// 1. It is above the PITR horizon (we need the current LSN and gc_horizon from the config)
/// 2. The page has been reconstructed at the horizon (so all WAL records above the horizon
///    have been applied and can be removed)
///
/// So we have a GC process that reconstructs pages at the horizon and marks the obsolete
/// WAL records for deletion. To mark an object for deletion we could set a flag in the
/// object itself, but that is awkward with the new object value format, because the RocksDB
/// storage layer knows nothing about that format; besides, rewriting a whole record just to
/// set one bit would be inefficient anyway. This is why we keep the keys of the versions
/// marked for deletion in an in-memory HashSet. When the LSM merge filter finds a key in
/// this set, it drops the record and removes the key from the set, which keeps the set
/// from growing without bound.
///
struct GarbageCollector {
garbage: Mutex<HashSet<Vec<u8>>>,
}
impl GarbageCollector {
fn new() -> GarbageCollector {
GarbageCollector {
garbage: Mutex::new(HashSet::new()),
}
}
/// Called by GC to mark a version for deletion
fn mark_for_deletion(&self, key: &[u8]) {
let mut garbage = self.garbage.lock().unwrap();
garbage.insert(key.to_vec());
}
/// Called by the LSM merge filter. If the key is found in the set, the record is
/// dropped (not merged) and the key is removed from the set.
fn was_deleted(&self, key: &[u8]) -> bool {
let key = key.to_vec();
let mut garbage = self.garbage.lock().unwrap();
garbage.remove(&key)
}
}
pub struct RocksObjectStore {
_conf: &'static PageServerConf,
// RocksDB handle
db: rocksdb::DB,
gc: Arc<GarbageCollector>,
}
impl ObjectStore for RocksObjectStore {
fn get(&self, key: &ObjectKey, lsn: Lsn) -> Result<Vec<u8>> {
let val = self.db.get(StorageKey::ser(&StorageKey {
obj_key: key.clone(),
lsn,
})?)?;
if let Some(val) = val {
Ok(val)
} else {
bail!("could not find page {:?}", key);
}
}
fn get_next_key(&self, key: &ObjectKey) -> Result<Option<ObjectKey>> {
let mut iter = self.db.raw_iterator();
let search_key = StorageKey {
obj_key: key.clone(),
lsn: Lsn(0),
};
iter.seek(search_key.ser()?);
if !iter.valid() {
Ok(None)
} else {
let key = StorageKey::des(iter.key().unwrap())?;
Ok(Some(key.obj_key.clone()))
}
}
fn put(&self, key: &ObjectKey, lsn: Lsn, value: &[u8]) -> Result<()> {
self.db.put(
StorageKey::ser(&StorageKey {
obj_key: key.clone(),
lsn,
})?,
value,
)?;
Ok(())
}
fn unlink(&self, key: &ObjectKey, lsn: Lsn) -> Result<()> {
self.gc.mark_for_deletion(&StorageKey::ser(&StorageKey {
obj_key: key.clone(),
lsn,
})?);
Ok(())
}
/// Iterate through page versions of given page, starting from the given LSN.
/// The versions are walked in descending LSN order.
fn object_versions<'a>(
&'a self,
key: &ObjectKey,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = (Lsn, Vec<u8>)> + 'a>> {
let iter = RocksObjectVersionIter::new(&self.db, key, lsn)?;
Ok(Box::new(iter))
}
/// Iterate through all timeline objects
fn list_objects<'a>(
&'a self,
timeline: ZTimelineId,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = ObjectTag> + 'a>> {
let iter = RocksObjectIter::new(&self.db, timeline, lsn)?;
Ok(Box::new(iter))
}
/// Get a list of all distinct relations in given tablespace and database.
///
/// TODO: This implementation is very inefficient, it scans
/// through all entries in the given database. In practice, this
/// is used for CREATE DATABASE, and usually the template database is small.
/// But if it's not, this will be slow.
fn list_rels(
&self,
timelineid: ZTimelineId,
spcnode: u32,
dbnode: u32,
lsn: Lsn,
) -> Result<HashSet<RelTag>> {
// FIXME: This scans everything. Very slow
let mut rels: HashSet<RelTag> = HashSet::new();
let mut search_rel_tag = RelTag {
spcnode,
dbnode,
relnode: 0,
forknum: 0u8,
};
let mut iter = self.db.raw_iterator();
loop {
let search_key = StorageKey {
obj_key: ObjectKey {
timeline: timelineid,
tag: ObjectTag::RelationMetadata(RelishTag::Relation(search_rel_tag)),
},
lsn: Lsn(0),
};
iter.seek(search_key.ser()?);
if !iter.valid() {
break;
}
let key = StorageKey::des(iter.key().unwrap())?;
if let ObjectTag::RelationMetadata(RelishTag::Relation(rel_tag)) = key.obj_key.tag {
if spcnode != 0 && rel_tag.spcnode != spcnode
|| dbnode != 0 && rel_tag.dbnode != dbnode
{
break;
}
if key.lsn <= lsn {
// visible in this snapshot
rels.insert(rel_tag);
}
search_rel_tag = rel_tag;
// skip to next relation
// FIXME: What if relnode is u32::MAX ?
search_rel_tag.relnode += 1;
} else {
// no more relation metadata entries
break;
}
}
Ok(rels)
}
/// Get a list of all distinct NON-relations in timeline
/// that are visible at given lsn.
///
/// TODO: This implementation is very inefficient, it scans
/// through all non-rel page versions in the system. In practice, this
/// is used when initializing a new compute node, and the non-rel files
/// are never very large nor change very frequently, so this will do for now.
fn list_nonrels(&self, timelineid: ZTimelineId, lsn: Lsn) -> Result<HashSet<RelishTag>> {
let mut rels: HashSet<RelishTag> = HashSet::new();
let search_key = StorageKey {
obj_key: ObjectKey {
timeline: timelineid,
tag: ObjectTag::Buffer(FIRST_NONREL_RELISH_TAG, 0),
},
lsn: Lsn(0),
};
let mut iter = self.db.raw_iterator();
iter.seek(search_key.ser()?);
while iter.valid() {
let key = StorageKey::des(iter.key().unwrap())?;
if key.obj_key.timeline != timelineid {
// reached end of this timeline in the store
break;
}
if let ObjectTag::Buffer(rel_tag, _blknum) = key.obj_key.tag {
if key.lsn <= lsn {
// visible in this snapshot
rels.insert(rel_tag);
}
}
// TODO: we could skip to next relation here like we do in list_rels(),
// but hopefully there are not that many SLRU segments or other non-rel
// entries for it to matter.
iter.next();
}
Ok(rels)
}
/// Iterate through versions of all objects in a timeline.
///
/// Returns objects in increasing key-version order.
/// Returns all versions up to and including the specified LSN.
fn objects<'a>(
&'a self,
timeline: ZTimelineId,
lsn: Lsn,
) -> Result<Box<dyn Iterator<Item = Result<(ObjectTag, Lsn, Vec<u8>)>> + 'a>> {
let start_key = StorageKey::timeline_start(timeline);
let start_key_bytes = StorageKey::ser(&start_key)?;
let iter = self.db.iterator(rocksdb::IteratorMode::From(
&start_key_bytes,
rocksdb::Direction::Forward,
));
Ok(Box::new(RocksObjects {
iter,
timeline,
lsn,
}))
}
fn compact(&self) {
self.db.compact_range::<&[u8], &[u8]>(None, None);
}
}
impl RocksObjectStore {
/// Open a RocksDB database.
pub fn open(conf: &'static PageServerConf, tenantid: &ZTenantId) -> Result<RocksObjectStore> {
let opts = Self::get_rocksdb_opts();
let obj_store = Self::new(conf, opts, tenantid)?;
Ok(obj_store)
}
/// Create a new, empty RocksDB database.
pub fn create(conf: &'static PageServerConf, tenantid: &ZTenantId) -> Result<RocksObjectStore> {
let path = conf.tenant_path(&tenantid).join("rocksdb-storage");
std::fs::create_dir(&path)?;
let mut opts = Self::get_rocksdb_opts();
opts.create_if_missing(true);
opts.set_error_if_exists(true);
let obj_store = Self::new(conf, opts, tenantid)?;
Ok(obj_store)
}
fn new(
conf: &'static PageServerConf,
mut opts: rocksdb::Options,
tenantid: &ZTenantId,
) -> Result<RocksObjectStore> {
let path = conf.tenant_path(&tenantid).join("rocksdb-storage");
let gc = Arc::new(GarbageCollector::new());
let gc_ref = gc.clone();
opts.set_compaction_filter("ttl", move |_level: u32, key: &[u8], _val: &[u8]| {
if gc_ref.was_deleted(key) {
rocksdb::compaction_filter::Decision::Remove
} else {
rocksdb::compaction_filter::Decision::Keep
}
});
let db = rocksdb::DB::open(&opts, &path)?;
let obj_store = RocksObjectStore {
_conf: conf,
db,
gc,
};
Ok(obj_store)
}
/// common options used by `open` and `create`
fn get_rocksdb_opts() -> rocksdb::Options {
let mut opts = rocksdb::Options::default();
opts.set_use_fsync(true);
opts.set_compression_type(rocksdb::DBCompressionType::Lz4);
opts
}
}
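// Hypothetical sketch of the deletion path described in the GarbageCollector comment
// above; `store`, `key` and `lsn` are made-up arguments. In the page server this is
// driven by the repository's GC code rather than called directly like this.
fn example_gc_unlink(store: &RocksObjectStore, key: &ObjectKey, lsn: Lsn) -> Result<()> {
    // unlink() only records the serialized key in the in-memory set;
    // the data itself stays in RocksDB for now.
    store.unlink(key, lsn)?;
    // A full compaction forces the "ttl" compaction filter to run, which drops the
    // marked version and removes its key from the set. Normally this happens lazily
    // during RocksDB's background compactions.
    store.compact();
    Ok(())
}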
///
/// Iterator for `object_versions`. Returns all page versions of a given block, in
/// reverse LSN order.
///
struct RocksObjectVersionIter<'a> {
obj_key: ObjectKey,
dbiter: rocksdb::DBRawIterator<'a>,
first_call: bool,
}
impl<'a> RocksObjectVersionIter<'a> {
fn new(
db: &'a rocksdb::DB,
obj_key: &ObjectKey,
lsn: Lsn,
) -> Result<RocksObjectVersionIter<'a>> {
let key = StorageKey {
obj_key: obj_key.clone(),
lsn,
};
let mut dbiter = db.raw_iterator();
dbiter.seek_for_prev(StorageKey::ser(&key)?); // locate last entry
Ok(RocksObjectVersionIter {
first_call: true,
obj_key: obj_key.clone(),
dbiter,
})
}
}
impl<'a> Iterator for RocksObjectVersionIter<'a> {
type Item = (Lsn, Vec<u8>);
fn next(&mut self) -> std::option::Option<Self::Item> {
if self.first_call {
self.first_call = false;
} else {
self.dbiter.prev(); // walk backwards
}
if !self.dbiter.valid() {
return None;
}
let key = StorageKey::des(self.dbiter.key().unwrap()).unwrap();
if key.obj_key.tag != self.obj_key.tag {
return None;
}
let val = self.dbiter.value().unwrap();
let result = val.to_vec();
Some((key.lsn, result))
}
}
struct RocksObjects<'r> {
iter: rocksdb::DBIterator<'r>,
timeline: ZTimelineId,
lsn: Lsn,
}
impl<'r> Iterator for RocksObjects<'r> {
// TODO consider returning Box<[u8]>
type Item = Result<(ObjectTag, Lsn, Vec<u8>)>;
fn next(&mut self) -> Option<Self::Item> {
self.next_result().transpose()
}
}
impl<'r> RocksObjects<'r> {
fn next_result(&mut self) -> Result<Option<(ObjectTag, Lsn, Vec<u8>)>> {
for (key_bytes, v) in &mut self.iter {
let key = StorageKey::des(&key_bytes)?;
if key.obj_key.timeline != self.timeline {
return Ok(None);
}
if key.lsn > self.lsn {
// TODO can speed up by seeking iterator
continue;
}
return Ok(Some((key.obj_key.tag, key.lsn, v.to_vec())));
}
Ok(None)
}
}
///
/// Iterator for `list_objects`. Returns all objects with a version at or preceding the specified LSN
///
struct RocksObjectIter<'a> {
timeline: ZTimelineId,
key: StorageKey,
lsn: Lsn,
dbiter: rocksdb::DBRawIterator<'a>,
}
impl<'a> RocksObjectIter<'a> {
fn new(db: &'a rocksdb::DB, timeline: ZTimelineId, lsn: Lsn) -> Result<RocksObjectIter<'a>> {
let key = StorageKey {
obj_key: ObjectKey {
timeline,
tag: ObjectTag::FirstTag,
},
lsn: Lsn(0),
};
let dbiter = db.raw_iterator();
Ok(RocksObjectIter {
key,
timeline,
lsn,
dbiter,
})
}
}
impl<'a> Iterator for RocksObjectIter<'a> {
type Item = ObjectTag;
fn next(&mut self) -> std::option::Option<Self::Item> {
loop {
self.dbiter.seek(StorageKey::ser(&self.key).unwrap());
if !self.dbiter.valid() {
return None;
}
let key = StorageKey::des(self.dbiter.key().unwrap()).unwrap();
if key.obj_key.timeline != self.timeline {
// End of this timeline
return None;
}
self.key = key.clone();
self.key.lsn = Lsn(u64::MAX); // next seek should skip all versions
if key.lsn <= self.lsn {
// visible in this snapshot
return Some(key.obj_key.tag);
}
}
}
}


@@ -1,300 +0,0 @@
use crate::tui_event::{Event, Events};
use crate::tui_logger::TuiLogger;
use crate::tui_logger::TuiLoggerWidget;
use lazy_static::lazy_static;
use std::sync::Arc;
use std::{error::Error, io};
use termion::{event::Key, input::MouseTerminal, raw::IntoRawMode, screen::AlternateScreen};
use tui::backend::TermionBackend;
use tui::buffer::Buffer;
use tui::layout::{Constraint, Direction, Layout, Rect};
use tui::style::{Color, Modifier, Style};
use tui::text::{Span, Spans, Text};
use tui::widgets::{Block, BorderType, Borders, Paragraph, Widget};
use tui::Terminal;
use slog::Drain;
lazy_static! {
pub static ref PAGESERVICE_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
pub static ref WALRECEIVER_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
pub static ref WALREDO_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
pub static ref CATCHALL_DRAIN: Arc<TuiLogger> = Arc::new(TuiLogger::default());
}
pub fn init_logging() -> slog_scope::GlobalLoggerGuard {
let pageservice_drain =
slog::Filter::new(PAGESERVICE_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver::page_service")
{
return true;
}
false
})
.fuse();
let walredo_drain = slog::Filter::new(WALREDO_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver::walredo")
{
return true;
}
false
})
.fuse();
let walreceiver_drain =
slog::Filter::new(WALRECEIVER_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver::walreceiver")
{
return true;
}
false
})
.fuse();
let catchall_drain = slog::Filter::new(CATCHALL_DRAIN.as_ref(), |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver")
{
return true;
}
false
})
.fuse();
let drain = pageservice_drain;
let drain = slog::Duplicate::new(drain, walreceiver_drain).fuse();
let drain = slog::Duplicate::new(drain, walredo_drain).fuse();
let drain = slog::Duplicate::new(drain, catchall_drain).fuse();
let drain = slog_async::Async::new(drain).chan_size(1000).build().fuse();
let drain = slog::Filter::new(drain, |record: &slog::Record| {
if record.level().is_at_least(slog::Level::Info) {
return true;
}
if record.level().is_at_least(slog::Level::Debug)
&& record.module().starts_with("pageserver")
{
return true;
}
false
})
.fuse();
let logger = slog::Logger::root(drain, slog::o!());
slog_scope::set_global_logger(logger)
}
pub fn ui_main() -> Result<(), Box<dyn Error>> {
// Terminal initialization
let stdout = io::stdout().into_raw_mode()?;
let stdout = MouseTerminal::from(stdout);
let stdout = AlternateScreen::from(stdout);
let backend = TermionBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
// Setup event handlers
let events = Events::new();
loop {
terminal.draw(|f| {
let size = f.size();
// +----------------+----------------+
// | | |
// | top_top_left | top_top_right |
// | | |
// +----------------+----------------|
// | | |
// | top_bot_left | top_left_right |
// | | |
// +----------------+----------------+
// | |
// | bottom |
// | |
// +---------------------------------+
let chunks = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Percentage(70), Constraint::Percentage(30)].as_ref())
.split(size);
let top_chunk = chunks[0];
let bottom_chunk = chunks[1];
let top_chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)].as_ref())
.split(top_chunk);
let top_left_chunk = top_chunks[0];
let top_right_chunk = top_chunks[1];
let c = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)].as_ref())
.split(top_left_chunk);
let top_top_left_chunk = c[0];
let top_bot_left_chunk = c[1];
let c = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)].as_ref())
.split(top_right_chunk);
let top_top_right_chunk = c[0];
let top_bot_right_chunk = c[1];
f.render_widget(
LogWidget::new(PAGESERVICE_DRAIN.as_ref(), "Page Service"),
top_top_left_chunk,
);
f.render_widget(
LogWidget::new(WALREDO_DRAIN.as_ref(), "WAL Redo"),
top_bot_left_chunk,
);
f.render_widget(
LogWidget::new(WALRECEIVER_DRAIN.as_ref(), "WAL Receiver"),
top_top_right_chunk,
);
f.render_widget(MetricsWidget {}, top_bot_right_chunk);
f.render_widget(
LogWidget::new(CATCHALL_DRAIN.as_ref(), "All Log").show_module(true),
bottom_chunk,
);
})?;
// If the user presses 'q', quit.
if let Event::Input(key) = events.next()? {
match key {
Key::Char('q') => {
break;
}
_ => (),
}
}
}
terminal.show_cursor().unwrap();
terminal.clear().unwrap();
Ok(())
}
#[allow(dead_code)]
struct LogWidget<'a> {
logger: &'a TuiLogger,
title: &'a str,
show_module: bool,
}
impl<'a> LogWidget<'a> {
fn new(logger: &'a TuiLogger, title: &'a str) -> LogWidget<'a> {
LogWidget {
logger,
title,
show_module: false,
}
}
fn show_module(mut self, b: bool) -> LogWidget<'a> {
self.show_module = b;
self
}
}
impl<'a> Widget for LogWidget<'a> {
fn render(self, area: Rect, buf: &mut Buffer) {
let w = TuiLoggerWidget::default(self.logger)
.block(
Block::default()
.borders(Borders::ALL)
.title(self.title)
.border_type(BorderType::Rounded),
)
.show_module(true)
.style_error(Style::default().fg(Color::Red))
.style_warn(Style::default().fg(Color::Yellow))
.style_info(Style::default().fg(Color::Green));
w.render(area, buf);
}
}
// Render a widget to show some metrics
struct MetricsWidget {}
fn get_metric_u64(title: &str, value: u64) -> Spans {
Spans::from(vec![
Span::styled(format!("{:<20}", title), Style::default()),
Span::raw(": "),
Span::styled(
value.to_string(),
Style::default().add_modifier(Modifier::BOLD),
),
])
}
fn get_metric_str<'a>(title: &str, value: &'a str) -> Spans<'a> {
Spans::from(vec![
Span::styled(format!("{:<20}", title), Style::default()),
Span::raw(": "),
Span::styled(value, Style::default().add_modifier(Modifier::BOLD)),
])
}
// FIXME: We really should define a datatype for LSNs, with Display trait and
// helper functions. There's one in tokio-postgres, but I don't think we want
// to rely on that.
fn format_lsn(lsn: u64) -> String {
return format!("{:X}/{:X}", lsn >> 32, lsn & 0xffff_ffff);
}
impl tui::widgets::Widget for MetricsWidget {
fn render(self, area: Rect, buf: &mut Buffer) {
let block = Block::default()
.borders(Borders::ALL)
.title("Page Cache Metrics")
.border_type(BorderType::Rounded);
let inner_area = block.inner(area);
block.render(area, buf);
let mut lines: Vec<Spans> = Vec::new();
let page_cache_stats = crate::page_cache::get_stats();
let lsnrange = format!(
"{} - {}",
format_lsn(page_cache_stats.first_valid_lsn),
format_lsn(page_cache_stats.last_valid_lsn)
);
let last_valid_recordlsn_str = format_lsn(page_cache_stats.last_record_lsn);
lines.push(get_metric_str("Valid LSN range", &lsnrange));
lines.push(get_metric_str("Last record LSN", &last_valid_recordlsn_str));
lines.push(get_metric_u64(
"# of cache entries",
page_cache_stats.num_entries,
));
lines.push(get_metric_u64(
"# of page images",
page_cache_stats.num_page_images,
));
lines.push(get_metric_u64(
"# of WAL records",
page_cache_stats.num_wal_records,
));
lines.push(get_metric_u64(
"# of GetPage@LSN calls",
page_cache_stats.num_getpage_requests,
));
let text = Text::from(lines);
Paragraph::new(text).render(inner_area, buf);
}
}


@@ -1,96 +0,0 @@
use std::io;
use std::sync::mpsc;
use std::sync::{
atomic::{AtomicBool, Ordering},
Arc,
};
use std::thread;
use std::time::Duration;
use termion::event::Key;
use termion::input::TermRead;
pub enum Event<I> {
Input(I),
Tick,
}
/// A small event handler that wraps termion input and tick events. Each event
/// type is handled in its own thread and returned to a common `Receiver`.
#[allow(dead_code)]
pub struct Events {
rx: mpsc::Receiver<Event<Key>>,
input_handle: thread::JoinHandle<()>,
ignore_exit_key: Arc<AtomicBool>,
tick_handle: thread::JoinHandle<()>,
}
#[derive(Debug, Clone, Copy)]
pub struct Config {
pub exit_key: Key,
pub tick_rate: Duration,
}
impl Default for Config {
fn default() -> Config {
Config {
exit_key: Key::Char('q'),
tick_rate: Duration::from_millis(250),
}
}
}
impl Events {
pub fn new() -> Events {
Events::with_config(Config::default())
}
pub fn with_config(config: Config) -> Events {
let (tx, rx) = mpsc::channel();
let ignore_exit_key = Arc::new(AtomicBool::new(false));
let input_handle = {
let tx = tx.clone();
let ignore_exit_key = ignore_exit_key.clone();
thread::spawn(move || {
let stdin = io::stdin();
for evt in stdin.keys() {
if let Ok(key) = evt {
if let Err(err) = tx.send(Event::Input(key)) {
eprintln!("{}", err);
return;
}
if !ignore_exit_key.load(Ordering::Relaxed) && key == config.exit_key {
return;
}
}
}
})
};
let tick_handle = {
thread::spawn(move || loop {
if tx.send(Event::Tick).is_err() {
break;
}
thread::sleep(config.tick_rate);
})
};
Events {
rx,
input_handle,
ignore_exit_key,
tick_handle,
}
}
pub fn next(&self) -> Result<Event<Key>, mpsc::RecvError> {
self.rx.recv()
}
pub fn disable_exit_key(&mut self) {
self.ignore_exit_key.store(true, Ordering::Relaxed);
}
pub fn enable_exit_key(&mut self) {
self.ignore_exit_key.store(false, Ordering::Relaxed);
}
}


@@ -1,199 +0,0 @@
//
// A TUI Widget that displays log entries
//
// This is heavily inspired by gin66's tui_logger crate at https://github.com/gin66/tui-logger,
// but I wrote this based on the 'slog' module, which simplified things a lot. tui-logger also
// implemented the slog Drain trait, but it had a model of one global buffer for the records.
// With this implementation, each TuiLogger is a separate ring buffer and separate slog Drain.
// Also, I didn't do any of the "hot log" stuff that gin66's implementation had; you can use an
// AsyncDrain to buffer and handle overflow if desired.
//
use chrono::offset::Local;
use chrono::DateTime;
use slog::{Drain, Level, OwnedKVList, Record};
use slog_async::AsyncRecord;
use std::collections::VecDeque;
use std::sync::Mutex;
use std::time::SystemTime;
use tui::buffer::Buffer;
use tui::layout::Rect;
use tui::style::{Modifier, Style};
use tui::text::{Span, Spans};
use tui::widgets::{Block, Paragraph, Widget, Wrap};
// Size of the log ring buffer, in # of records
static BUFFER_SIZE: usize = 1000;
pub struct TuiLogger {
events: Mutex<VecDeque<(SystemTime, AsyncRecord)>>,
}
impl<'a> Default for TuiLogger {
fn default() -> TuiLogger {
TuiLogger {
events: Mutex::new(VecDeque::with_capacity(BUFFER_SIZE)),
}
}
}
impl Drain for TuiLogger {
type Ok = ();
type Err = slog::Error;
fn log(&self, record: &Record, values: &OwnedKVList) -> Result<Self::Ok, Self::Err> {
let mut events = self.events.lock().unwrap();
let now = SystemTime::now();
let asyncrec = AsyncRecord::from(record, values);
events.push_front((now, asyncrec));
if events.len() > BUFFER_SIZE {
events.pop_back();
}
Ok(())
}
}
// TuiLoggerWidget renders a TuiLogger ring buffer
pub struct TuiLoggerWidget<'b> {
block: Option<Block<'b>>,
/// Base style of the widget
style: Style,
/// Level based style
style_error: Option<Style>,
style_warn: Option<Style>,
style_debug: Option<Style>,
style_trace: Option<Style>,
style_info: Option<Style>,
show_module: bool,
logger: &'b TuiLogger,
}
impl<'b> TuiLoggerWidget<'b> {
pub fn default(logger: &'b TuiLogger) -> TuiLoggerWidget<'b> {
TuiLoggerWidget {
block: None,
style: Default::default(),
style_error: None,
style_warn: None,
style_debug: None,
style_trace: None,
style_info: None,
show_module: true,
logger,
}
}
}
impl<'b> TuiLoggerWidget<'b> {
pub fn block(mut self, block: Block<'b>) -> TuiLoggerWidget<'b> {
self.block = Some(block);
self
}
#[allow(unused)]
pub fn style(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style = style;
self
}
pub fn style_error(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_error = Some(style);
self
}
pub fn style_warn(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_warn = Some(style);
self
}
pub fn style_info(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_info = Some(style);
self
}
#[allow(unused)]
pub fn style_trace(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_trace = Some(style);
self
}
#[allow(unused)]
pub fn style_debug(mut self, style: Style) -> TuiLoggerWidget<'b> {
self.style_debug = Some(style);
self
}
pub fn show_module(mut self, b: bool) -> TuiLoggerWidget<'b> {
self.show_module = b;
self
}
}
impl<'b> Widget for TuiLoggerWidget<'b> {
fn render(mut self, area: Rect, buf: &mut Buffer) {
buf.set_style(area, self.style);
let list_area = match self.block.take() {
Some(b) => {
let inner_area = b.inner(area);
b.render(area, buf);
inner_area
}
None => area,
};
if list_area.width == 0 || list_area.height == 0 {
return;
}
let la_height = list_area.height as usize;
//
// Iterate through the records in the buffer. The records are
// pushed to the front, so the newest records come first.
//
let mut lines: Vec<Spans> = Vec::new();
let style_msg = Style::default().add_modifier(Modifier::BOLD);
{
let events = self.logger.events.lock().unwrap();
for evt in events.iter() {
let (timestamp, rec) = evt;
rec.as_record_values(|rec, _kwlist| {
let mut line: Vec<Span> = Vec::new();
let datetime: DateTime<Local> = timestamp.clone().into();
let ts = format!("{}", datetime.format("%H:%M:%S%.3f "));
line.push(Span::raw(ts));
let (lvl_style, txt, with_loc) = match rec.level() {
Level::Critical => (self.style_error, "CRIT ", true),
Level::Error => (self.style_error, "ERROR", true),
Level::Warning => (self.style_warn, "WARN ", true),
Level::Info => (self.style_info, "INFO ", false),
Level::Debug => (self.style_debug, "DEBUG", true),
Level::Trace => (self.style_trace, "TRACE", true),
};
line.push(Span::styled(txt, lvl_style.unwrap_or_default()));
if self.show_module {
line.push(Span::raw(" "));
line.push(Span::raw(rec.module()));
}
if with_loc {
let loc = format!(" {}:{}", rec.file(), rec.line());
line.push(Span::raw(loc));
}
let msg = format!(" {}", rec.msg());
line.push(Span::styled(msg, style_msg));
lines.push(Spans::from(line));
});
if lines.len() == la_height {
break;
}
}
}
lines.reverse();
let text = tui::text::Text::from(lines);
Paragraph::new(text)
.wrap(Wrap { trim: true })
.render(list_area, buf);
}
}

File diff suppressed because it is too large


@@ -1,36 +1,37 @@
//!
//! WAL receiver
//!
//! The WAL receiver connects to the WAL safekeeper service, and streams WAL.
//! For each WAL record, it decodes the record to figure out which data blocks
//! the record affects, and adds the records to the page cache.
//! WAL receiver connects to the WAL safekeeper service,
//! streams WAL, decodes records and saves them in page cache.
//!
//! We keep one WAL receiver active per timeline.
use crate::page_cache;
use crate::page_cache::{BufferTag, RelTag};
use crate::pg_constants;
use crate::relish::*;
use crate::restore_local_repo;
use crate::waldecoder::*;
use crate::PageServerConf;
use crate::ZTimelineId;
use anyhow::Error;
use crate::{PageServerConf, RepositoryFormat};
use anyhow::{Error, Result};
use lazy_static::lazy_static;
use log::*;
use postgres::fallible_iterator::FallibleIterator;
use postgres::replication::ReplicationIter;
use postgres::{Client, NoTls, SimpleQueryMessage, SimpleQueryRow};
use postgres_ffi::xlog_utils::*;
use postgres_ffi::*;
use postgres_protocol::message::backend::ReplicationMessage;
use postgres_types::PgLsn;
use std::cmp::{max, min};
use std::collections::HashMap;
use std::fs;
use std::fs::{File, OpenOptions};
use std::io::{Seek, SeekFrom, Write};
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Mutex;
use std::thread;
use tokio::runtime;
use tokio::time::{sleep, Duration};
use tokio_postgres::replication::{PgTimestamp, ReplicationStream};
use tokio_postgres::{NoTls, SimpleQueryMessage, SimpleQueryRow};
use tokio_stream::StreamExt;
use std::thread::sleep;
use std::time::{Duration, SystemTime};
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTenantId;
use zenith_utils::zid::ZTimelineId;
//
// We keep one WAL Receiver active per timeline.
@@ -46,9 +47,10 @@ lazy_static! {
// Launch a new WAL receiver, or tell one that's running about change in connection string
pub fn launch_wal_receiver(
conf: &PageServerConf,
conf: &'static PageServerConf,
timelineid: ZTimelineId,
wal_producer_connstr: &str,
tenantid: ZTenantId,
) {
let mut receivers = WAL_RECEIVERS.lock().unwrap();
@@ -63,11 +65,10 @@ pub fn launch_wal_receiver(
receivers.insert(timelineid, receiver);
// Also launch a new thread to handle this connection
let conf_copy = conf.clone();
let _walreceiver_thread = thread::Builder::new()
.name("WAL receiver thread".into())
.spawn(move || {
thread_main(&conf_copy, timelineid);
thread_main(conf, timelineid, &tenantid);
})
.unwrap();
}
@@ -88,192 +89,147 @@ fn get_wal_producer_connstr(timelineid: ZTimelineId) -> String {
//
// This is the entry point for the WAL receiver thread.
//
fn thread_main(conf: &PageServerConf, timelineid: ZTimelineId) {
fn thread_main(conf: &'static PageServerConf, timelineid: ZTimelineId, tenantid: &ZTenantId) {
info!(
"WAL receiver thread started for timeline : '{}'",
timelineid
);
let runtime = runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap();
//
// Make a connection to the WAL safekeeper, or directly to the primary PostgreSQL server,
// and start streaming WAL from it. If the connection is lost, keep retrying.
//
loop {
// Look up the current WAL producer address
let wal_producer_connstr = get_wal_producer_connstr(timelineid);
runtime.block_on(async {
loop {
// Look up the current WAL producer address
let wal_producer_connstr = get_wal_producer_connstr(timelineid);
let res = walreceiver_main(conf, timelineid, &wal_producer_connstr, tenantid);
let res = walreceiver_main(conf, timelineid, &wal_producer_connstr).await;
if let Err(e) = res {
info!(
"WAL streaming connection failed ({}), retrying in 1 second",
e
);
sleep(Duration::from_secs(1)).await;
}
if let Err(e) = res {
info!(
"WAL streaming connection failed ({}), retrying in 1 second",
e
);
sleep(Duration::from_secs(1));
}
});
}
}
async fn walreceiver_main(
fn walreceiver_main(
conf: &PageServerConf,
timelineid: ZTimelineId,
wal_producer_connstr: &str,
tenantid: &ZTenantId,
) -> Result<(), Error> {
// Connect to the database in replication mode.
info!("connecting to {:?}", wal_producer_connstr);
let connect_cfg = format!("{} replication=true", wal_producer_connstr);
let (rclient, connection) = tokio_postgres::connect(&connect_cfg, NoTls).await?;
let connect_cfg = format!(
"{} application_name=pageserver replication=true",
wal_producer_connstr
);
let mut rclient = Client::connect(&connect_cfg, NoTls)?;
info!("connected!");
// The connection object performs the actual communication with the database,
// so spawn it off to run on its own.
tokio::spawn(async move {
if let Err(e) = connection.await {
error!("connection error: {}", e);
}
});
// Immediately increment the gauge, then create a job to decrement it on thread exit.
// One of the pros of `defer!` is that this will *most probably*
// get called, even in presence of panics.
let gauge = crate::LIVE_CONNECTIONS_COUNT.with_label_values(&["wal_receiver"]);
gauge.inc();
scopeguard::defer! {
gauge.dec();
}
let identify = identify_system(&rclient).await?;
let identify = identify_system(&mut rclient)?;
info!("{:?}", identify);
let end_of_wal = u64::from(identify.xlogpos);
let end_of_wal = Lsn::from(u64::from(identify.xlogpos));
let mut caught_up = false;
let pcache = page_cache::get_pagecache(&conf, timelineid).unwrap();
let repository = page_cache::get_repository_for_tenant(tenantid)?;
let timeline = repository.get_timeline(timelineid).unwrap();
//
// Start streaming the WAL, from where we left off previously.
//
let mut startpoint = pcache.get_last_valid_lsn();
let last_valid_lsn = pcache.get_last_valid_lsn();
if startpoint == 0 {
// If we start here with identify.xlogpos we will have race condition with
// postgres start: insert into postgres may request page that was modified with lsn
// smaller than identify.xlogpos.
//
// Current procedure for starting postgres will anyway be changed to something
// different like having 'initdb' method on a pageserver (or importing some shared
// empty database snapshot), so for now I just put start of first segment which
// seems to be a valid record.
pcache.init_valid_lsn(0x_1_000_000_u64);
startpoint = 0x_1_000_000_u64;
} else {
// There might be some padding after the last full record, skip it.
//
// FIXME: It probably would be better to always start streaming from the beginning
// of the page, or the segment, so that we could check the page/segment headers
// too. Just for the sake of paranoia.
if startpoint % 8 != 0 {
startpoint += 8 - (startpoint % 8);
}
// If we had previously received WAL up to some point in the middle of a WAL record, we
// better start from the end of last full WAL record, not in the middle of one. Hence,
// use 'last_record_lsn' rather than 'last_valid_lsn' here.
let mut last_rec_lsn = timeline.get_last_record_lsn();
let mut startpoint = last_rec_lsn;
if startpoint == Lsn(0) {
error!("No previous WAL position");
}
// There might be some padding after the last full record, skip it.
//
// FIXME: It probably would be better to always start streaming from the beginning
// of the page, or the segment, so that we could check the page/segment headers
// too. Just for the sake of paranoia.
startpoint += startpoint.calc_padding(8u32);
debug!(
"last_valid_lsn {:X}/{:X} starting replication from {:X}/{:X} for timeline {}, server is at {:X}/{:X}...",
(last_valid_lsn >> 32),
(last_valid_lsn & 0xffffffff),
(startpoint >> 32),
(startpoint & 0xffffffff),
timelineid,
(end_of_wal >> 32),
(end_of_wal & 0xffffffff)
"last_record_lsn {} starting replication from {} for timeline {}, server is at {}...",
last_rec_lsn, startpoint, timelineid, end_of_wal
);
let startpoint = PgLsn::from(startpoint);
let query = format!("START_REPLICATION PHYSICAL {}", startpoint);
let copy_stream = rclient.copy_both_simple::<bytes::Bytes>(&query).await?;
let physical_stream = ReplicationStream::new(copy_stream);
tokio::pin!(physical_stream);
let copy_stream = rclient.copy_both_simple(&query)?;
let mut physical_stream = ReplicationIter::new(copy_stream);
let mut waldecoder = WalStreamDecoder::new(u64::from(startpoint));
let mut waldecoder = WalStreamDecoder::new(startpoint);
while let Some(replication_message) = physical_stream.next().await {
match replication_message? {
let checkpoint_bytes = timeline.get_page_at_lsn_nowait(RelishTag::Checkpoint, 0, startpoint)?;
let mut checkpoint = CheckPoint::decode(&checkpoint_bytes)?;
trace!("CheckPoint.nextXid = {}", checkpoint.nextXid.value);
while let Some(replication_message) = physical_stream.next()? {
let status_update = match replication_message {
ReplicationMessage::XLogData(xlog_data) => {
// Pass the WAL data to the decoder, and see if we can decode
// more records as a result.
let data = xlog_data.data();
let startlsn = xlog_data.wal_start();
let startlsn = Lsn::from(xlog_data.wal_start());
let endlsn = startlsn + data.len() as u64;
let prev_last_rec_lsn = last_rec_lsn;
write_wal_file(
conf,
startlsn,
timelineid,
16 * 1024 * 1024, // FIXME
&timelineid,
pg_constants::WAL_SEGMENT_SIZE,
data,
tenantid,
)?;
trace!(
"received XLogData between {:X}/{:X} and {:X}/{:X}",
(startlsn >> 32),
(startlsn & 0xffffffff),
(endlsn >> 32),
(endlsn & 0xffffffff)
);
trace!("received XLogData between {} and {}", startlsn, endlsn);
waldecoder.feed_bytes(data);
loop {
if let Some((lsn, recdata)) = waldecoder.poll_decode()? {
let decoded = decode_wal_record(recdata.clone());
// Put the WAL record to the page cache. We make a separate copy of
// it for every block it modifies. (The actual WAL record is kept in
// a Bytes, which uses a reference counter for the underlying buffer,
// so having multiple copies of it doesn't cost that much)
for blk in decoded.blocks.iter() {
let tag = BufferTag {
rel: RelTag {
spcnode: blk.rnode_spcnode,
dbnode: blk.rnode_dbnode,
relnode: blk.rnode_relnode,
forknum: blk.forknum as u8,
},
blknum: blk.blkno,
};
while let Some((lsn, recdata)) = waldecoder.poll_decode()? {
// Save old checkpoint value to compare with it after decoding WAL record
let old_checkpoint_bytes = checkpoint.encode();
let decoded = decode_wal_record(recdata.clone());
restore_local_repo::save_decoded_record(
&mut checkpoint,
&*timeline,
&decoded,
recdata,
lsn,
)?;
last_rec_lsn = lsn;
let rec = page_cache::WALRecord {
lsn,
will_init: blk.will_init || blk.apply_image,
truncate: false,
rec: recdata.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
pcache.put_wal_record(tag, rec);
}
// include truncate wal record in all pages
if decoded.xl_rmid == pg_constants::RM_SMGR_ID
&& (decoded.xl_info & pg_constants::XLR_RMGR_INFO_MASK)
== pg_constants::XLOG_SMGR_TRUNCATE
{
let truncate = decode_truncate_record(&decoded);
if (truncate.flags & SMGR_TRUNCATE_HEAP) != 0 {
let tag = BufferTag {
rel: RelTag {
spcnode: truncate.rnode.spcnode,
dbnode: truncate.rnode.dbnode,
relnode: truncate.rnode.relnode,
forknum: MAIN_FORKNUM,
},
blknum: truncate.blkno,
};
let rec = page_cache::WALRecord {
lsn: lsn,
will_init: false,
truncate: true,
rec: recdata.clone(),
main_data_offset: decoded.main_data_offset as u32,
};
pcache.put_rel_wal_record(tag, rec).await?;
}
}
// Now that this record has been handled, let the page cache know that
// it is up-to-date to this LSN
pcache.advance_last_record_lsn(lsn);
} else {
break;
let new_checkpoint_bytes = checkpoint.encode();
// Check if checkpoint data was updated by save_decoded_record
if new_checkpoint_bytes != old_checkpoint_bytes {
timeline.put_page_image(
RelishTag::Checkpoint,
0,
lsn,
new_checkpoint_bytes,
false,
)?;
}
}
@@ -283,48 +239,126 @@ async fn walreceiver_main(
// better reflect that, because GetPage@LSN requests might also point in the
// middle of a record, if the request LSN was taken from the server's current
// flush ptr.
pcache.advance_last_valid_lsn(endlsn);
timeline.advance_last_valid_lsn(endlsn);
// Somewhat arbitrarily, if we have at least 10 complete wal segments (16 MB each),
// "checkpoint" the repository to flush all the changes from WAL we've processed
// so far to disk. After this, we don't need the original WAL anymore, and it
// can be removed. This is probably too aggressive for production, but it's useful
// to expose bugs now.
//
// TODO: We don't actually dare to remove the WAL. It's useful for debugging,
// and we might need it for logical decoding or other things in the future. Although
// we should also be able to fetch it back from the WAL safekeepers or S3 if
// needed.
if prev_last_rec_lsn.segment_number(pg_constants::WAL_SEGMENT_SIZE)
!= last_rec_lsn.segment_number(pg_constants::WAL_SEGMENT_SIZE)
{
info!("switched segment {} to {}", prev_last_rec_lsn, last_rec_lsn);
let (oldest_segno, newest_segno) = find_wal_file_range(
conf,
&timelineid,
pg_constants::WAL_SEGMENT_SIZE,
last_rec_lsn,
tenantid,
)?;
if newest_segno - oldest_segno >= 10 {
// FIXME: The layered repository performs checkpointing in a separate thread, so this
// isn't needed anymore. Remove 'checkpoint' from the Timeline trait altogether?
if conf.repository_format == RepositoryFormat::RocksDb {
timeline.checkpoint()?;
}
// TODO: This is where we could remove WAL older than last_rec_lsn.
//remove_wal_files(timelineid, pg_constants::WAL_SEGMENT_SIZE, last_rec_lsn)?;
}
}
if !caught_up && endlsn >= end_of_wal {
info!(
"caught up at LSN {:X}/{:X}",
(endlsn >> 32),
(endlsn & 0xffffffff)
);
info!("caught up at LSN {}", endlsn);
caught_up = true;
}
Some(endlsn)
}
ReplicationMessage::PrimaryKeepAlive(keepalive) => {
let wal_end = keepalive.wal_end();
let timestamp = keepalive.timestamp();
let reply_requested: bool = keepalive.reply() != 0;
let reply_requested = keepalive.reply() != 0;
trace!(
"received PrimaryKeepAlive(wal_end: {}, timestamp: {} reply: {})",
"received PrimaryKeepAlive(wal_end: {}, timestamp: {:?} reply: {})",
wal_end,
timestamp,
reply_requested,
);
if reply_requested {
// TODO: More thought should go into what values are sent here.
let last_lsn = PgLsn::from(pcache.get_last_valid_lsn());
let write_lsn = last_lsn;
let flush_lsn = last_lsn;
let apply_lsn = PgLsn::INVALID;
let ts = PgTimestamp::now()?;
const NO_REPLY: u8 = 0u8;
physical_stream
.as_mut()
.standby_status_update(write_lsn, flush_lsn, apply_lsn, ts, NO_REPLY)
.await?;
if reply_requested {
Some(timeline.get_last_valid_lsn())
} else {
None
}
}
_ => (),
_ => None,
};
if let Some(last_lsn) = status_update {
// TODO: More thought should go into what values are sent here.
let last_lsn = PgLsn::from(u64::from(last_lsn));
let write_lsn = last_lsn;
let flush_lsn = last_lsn;
let apply_lsn = PgLsn::from(0);
let ts = SystemTime::now();
const NO_REPLY: u8 = 0;
physical_stream.standby_status_update(write_lsn, flush_lsn, apply_lsn, ts, NO_REPLY)?;
}
}
return Ok(());
Ok(())
}
fn find_wal_file_range(
conf: &PageServerConf,
timeline: &ZTimelineId,
wal_seg_size: usize,
written_upto: Lsn,
tenant: &ZTenantId,
) -> Result<(u64, u64)> {
let written_upto_segno = written_upto.segment_number(wal_seg_size);
let mut oldest_segno = written_upto_segno;
let mut newest_segno = written_upto_segno;
// Scan the WAL directory and count how many WAL files we could remove
let wal_dir = conf.wal_dir_path(timeline, tenant);
for entry in fs::read_dir(wal_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
continue;
}
let filename = path.file_name().unwrap().to_str().unwrap();
if IsXLogFileName(filename) {
let (segno, _tli) = XLogFromFileName(filename, wal_seg_size);
if segno > written_upto_segno {
// that's strange.
warn!("there is a WAL file from future at {}", path.display());
continue;
}
oldest_segno = min(oldest_segno, segno);
newest_segno = max(newest_segno, segno);
}
}
// FIXME: would be good to assert that there are no gaps in the WAL files
Ok((oldest_segno, newest_segno))
}
/// Data returned from the postgres `IDENTIFY_SYSTEM` command
@@ -347,9 +381,9 @@ pub struct IdentifySystem {
pub struct IdentifyError;
/// Run the postgres `IDENTIFY_SYSTEM` command
pub async fn identify_system(client: &tokio_postgres::Client) -> Result<IdentifySystem, Error> {
pub fn identify_system(client: &mut Client) -> Result<IdentifySystem, Error> {
let query_str = "IDENTIFY_SYSTEM";
let response = client.simple_query(query_str).await?;
let response = client.simple_query(query_str)?;
// get(N) from row, then parse it as some destination type.
fn get_parse<T>(row: &SimpleQueryRow, idx: usize) -> Result<T, IdentifyError>
@@ -370,15 +404,17 @@ pub async fn identify_system(client: &tokio_postgres::Client) -> Result<Identify
dbname: get_parse(first_row, 3).ok(),
})
} else {
Err(IdentifyError)?
Err(IdentifyError.into())
}
}
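// Illustrative sketch (not part of the change above): querying the connected
// server's current WAL flush position with identify_system(). The helper name
// and connection string are placeholders.
fn example_identify(connstr: &str) -> Result<Lsn, Error> {
    // Connect in replication mode, as walreceiver_main does.
    let mut client = Client::connect(&format!("{} replication=true", connstr), NoTls)?;
    let identify = identify_system(&mut client)?;
    // xlogpos is the server's current WAL flush location.
    Ok(Lsn::from(u64::from(identify.xlogpos)))
}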
fn write_wal_file(
startpos: XLogRecPtr,
timeline: ZTimelineId,
conf: &PageServerConf,
startpos: Lsn,
timelineid: &ZTimelineId,
wal_seg_size: usize,
buf: &[u8],
tenantid: &ZTenantId,
) -> anyhow::Result<()> {
let mut bytes_left: usize = buf.len();
let mut bytes_written: usize = 0;
@@ -386,10 +422,10 @@ fn write_wal_file(
let mut start_pos = startpos;
const ZERO_BLOCK: &[u8] = &[0u8; XLOG_BLCKSZ];
let wal_dir = PathBuf::from(format!("timelines/{}/wal", timeline));
let wal_dir = conf.wal_dir_path(timelineid, tenantid);
/* Extract WAL location for this block */
let mut xlogoff = XLogSegmentOffset(start_pos, wal_seg_size) as usize;
let mut xlogoff = start_pos.segment_offset(wal_seg_size);
while bytes_left != 0 {
let bytes_to_write;
@@ -405,7 +441,7 @@ fn write_wal_file(
}
/* Open file */
let segno = XLByteToSeg(start_pos, wal_seg_size);
let segno = start_pos.segment_number(wal_seg_size);
let wal_file_name = XLogFileName(
1, // FIXME: always use Postgres timeline 1
segno,
@@ -457,7 +493,7 @@ fn write_wal_file(
xlogoff += bytes_to_write;
/* Did we reach the end of a WAL segment? */
if XLogSegmentOffset(start_pos, wal_seg_size) == 0 {
if start_pos.segment_offset(wal_seg_size) == 0 {
xlogoff = 0;
if partial {
fs::rename(&wal_file_partial_path, &wal_file_path)?;


@@ -1,21 +1,28 @@
//
// WAL redo
//
// We rely on Postgres to perform WAL redo for us. We launch a
// postgres process in special "wal redo" mode that's similar to
// single-user mode. We then pass the the previous page image, if any,
// and all the WAL records we want to apply, to the postgress
// process. Then we get the page image back. Communication with the
// postgres process happens via stdin/stdout
//
// See src/backend/tcop/zenith_wal_redo.c for the other side of
// this communication.
//
// TODO: Even though the postgres code runs in a separate process,
// it's not a secure sandbox.
//
//!
//! WAL redo. This service runs PostgreSQL in a special wal_redo mode
//! to apply given WAL records over an old page image and return new
//! page image.
//!
//! We rely on Postgres to perform WAL redo for us. We launch a
//! postgres process in special "wal redo" mode that's similar to
//! single-user mode. We then pass the previous page image, if any,
//! and all the WAL records we want to apply, to the postgres
//! process. Then we get the page image back. Communication with the
//! postgres process happens via stdin/stdout
//!
//! See src/backend/tcop/zenith_wal_redo.c for the other side of
//! this communication.
//!
//! The Postgres process is assumed to be secure against malicious WAL
//! records. It achieves it by dropping privileges before replaying
//! any WAL records, so that even if an attacker hijacks the Postgres
//! process, he cannot escape out of it.
//!
use byteorder::{ByteOrder, LittleEndian};
use bytes::{Buf, BufMut, Bytes, BytesMut};
use lazy_static::lazy_static;
use log::*;
use std::assert;
use serde::{Deserialize, Serialize};
use std::cell::RefCell;
use std::fs;
use std::fs::OpenOptions;
@@ -23,241 +30,467 @@ use std::io::prelude::*;
use std::io::Error;
use std::path::PathBuf;
use std::process::Stdio;
use std::sync::Arc;
use std::sync::Mutex;
use std::time::Duration;
use std::time::Instant;
use tokio::io::AsyncBufReadExt;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::process::{Child, ChildStdin, ChildStdout, Command};
use tokio::runtime::Runtime;
use tokio::process::{ChildStdin, ChildStdout, Command};
use tokio::time::timeout;
use zenith_metrics::{register_histogram, register_int_counter, Histogram, IntCounter};
use zenith_utils::bin_ser::BeSer;
use zenith_utils::lsn::Lsn;
use zenith_utils::zid::ZTenantId;
use bytes::{Buf, BufMut, Bytes, BytesMut};
use crate::relish::*;
use crate::repository::WALRecord;
use crate::waldecoder::XlXactParsedRecord;
use crate::waldecoder::{MultiXactId, XlMultiXactCreate};
use crate::PageServerConf;
use postgres_ffi::nonrelfile_utils::transaction_id_set_status;
use postgres_ffi::pg_constants;
use postgres_ffi::XLogRecord;
use crate::page_cache;
use crate::page_cache::CacheEntry;
use crate::page_cache::WALRecord;
use crate::ZTimelineId;
use crate::{page_cache::BufferTag, pg_constants, PageServerConf};
///
/// `RelTag` + block number (`blknum`) gives us a unique id of the page in the cluster.
///
/// In Postgres `BufferTag` structure is used for exactly the same purpose.
/// [See more related comments here](https://github.com/postgres/postgres/blob/99c5852e20a0987eca1c38ba0c09329d4076b6a0/src/include/storage/buf_internals.h#L91).
///
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Serialize, Deserialize)]
pub struct BufferTag {
pub rel: RelTag,
pub blknum: u32,
}
///
/// WAL Redo Manager is responsible for replaying WAL records.
///
/// Callers use the WAL redo manager through this abstract interface,
/// which makes it easy to mock it in tests.
pub trait WalRedoManager: Send + Sync {
/// Apply some WAL records.
///
/// The caller passes an old page image, and WAL records that should be
/// applied over it. The return value is a new page image, after applying
/// the records.
fn request_redo(
&self,
rel: RelishTag,
blknum: u32,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
) -> Result<Bytes, WalRedoError>;
}
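// Illustrative sketch (not part of the change above): how a caller could drive
// the WalRedoManager trait to reconstruct a single page image. The argument
// values are placeholders supplied by the caller.
fn example_request_redo(
    redo_mgr: &dyn WalRedoManager,
    rel: RelishTag,
    blknum: u32,
    lsn: Lsn,
    base_img: Option<Bytes>,
    records: Vec<WALRecord>,
) -> Result<Bytes, WalRedoError> {
    // The manager replays `records` on top of `base_img` (or a zeroed page if
    // none is given) and returns the reconstructed 8 KB page image.
    redo_mgr.request_redo(rel, blknum, lsn, base_img, records)
}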
///
/// A dummy WAL Redo Manager implementation that doesn't allow replaying
/// anything. Currently used during bootstrapping (zenith init), to create
/// a Repository object without launching the real WAL redo process.
///
pub struct DummyRedoManager {}
impl crate::walredo::WalRedoManager for DummyRedoManager {
fn request_redo(
&self,
_rel: RelishTag,
_blknum: u32,
_lsn: Lsn,
_base_img: Option<Bytes>,
_records: Vec<WALRecord>,
) -> Result<Bytes, WalRedoError> {
Err(WalRedoError::InvalidState)
}
}
static TIMEOUT: Duration = Duration::from_secs(20);
// Metrics collected on WAL redo operations
//
// Main entry point for the WAL applicator thread.
//
pub fn wal_redo_main(conf: &PageServerConf, timelineid: ZTimelineId) {
info!("WAL redo thread started {}", timelineid);
// We collect the time spent in actual WAL redo ('redo'), and time waiting
// for access to the postgres process ('wait') since there is only one for
// each tenant.
lazy_static! {
static ref WAL_REDO_TIME: Histogram =
register_histogram!("pageserver_wal_redo_time", "Time spent on WAL redo")
.expect("failed to define a metric");
static ref WAL_REDO_WAIT_TIME: Histogram = register_histogram!(
"pageserver_wal_redo_wait_time",
"Time spent waiting for access to the WAL redo process"
)
.expect("failed to define a metric");
static ref WAL_REDO_RECORD_COUNTER: IntCounter = register_int_counter!(
"pageserver_wal_records_replayed",
"Number of WAL records replayed"
)
.unwrap();
}
// We block on waiting for requests on the walredo request channel, but
// use async I/O to communicate with the child process. Initialize the
// runtime for the async part.
let runtime = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap();
///
/// This is the real implementation that uses a Postgres process to
/// perform WAL replay. Only one thread can use the process at a time,
/// that is controlled by the Mutex. In the future, we might want to
/// launch a pool of processes to allow concurrent replay of multiple
/// records.
///
pub struct PostgresRedoManager {
tenantid: ZTenantId,
conf: &'static PageServerConf,
let pcache = page_cache::get_pagecache(conf, timelineid).unwrap();
runtime: tokio::runtime::Runtime,
process: Mutex<Option<PostgresRedoProcess>>,
}
// Loop forever, handling requests as they come.
let walredo_channel_receiver = &pcache.walredo_receiver;
loop {
let mut process: WalRedoProcess;
let datadir = format!("wal-redo/{}", timelineid);
#[derive(Debug)]
struct WalRedoRequest {
rel: RelishTag,
blknum: u32,
lsn: Lsn,
info!("launching WAL redo postgres process {}", timelineid);
{
let _guard = runtime.enter();
process = WalRedoProcess::launch(&datadir, &runtime).unwrap();
}
info!("WAL redo postgres started");
base_img: Option<Bytes>,
records: Vec<WALRecord>,
}
// Pretty arbitrarily, reuse the same Postgres process for 100 requests.
// After that, kill it and start a new one. This is mostly to avoid
// using up all shared buffers in Postgres's shared buffer cache; we don't
// want to write any pages to disk in the WAL redo process.
for _i in 1..100000 {
let request = walredo_channel_receiver.recv().unwrap();
/// An error happened in WAL redo
#[derive(Debug, thiserror::Error)]
pub enum WalRedoError {
#[error(transparent)]
IoError(#[from] std::io::Error),
let result = handle_apply_request(&pcache, &process, &runtime, request);
if result.is_err() {
// Something went wrong with handling the request. It's not clear
// if the request was faulty, and the next request would succeed
// again, or if the 'postgres' process went haywire. To be safe,
// kill the 'postgres' process so that we will start from a clean
// slate, with a new process, for the next request.
break;
#[error("cannot perform WAL redo now")]
InvalidState,
}
///
/// Public interface of WAL redo manager
///
impl WalRedoManager for PostgresRedoManager {
///
/// Request the WAL redo manager to apply some WAL records
///
/// The WAL redo is handled by a separate thread, so this just sends a request
/// to the thread and waits for response.
///
fn request_redo(
&self,
rel: RelishTag,
blknum: u32,
lsn: Lsn,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
) -> Result<Bytes, WalRedoError> {
let start_time;
let lock_time;
let end_time;
let request = WalRedoRequest {
rel,
blknum,
lsn,
base_img,
records,
};
start_time = Instant::now();
let result = {
let mut process_guard = self.process.lock().unwrap();
lock_time = Instant::now();
// launch the WAL redo process on first use
if process_guard.is_none() {
let p = self
.runtime
.block_on(PostgresRedoProcess::launch(self.conf, &self.tenantid))?;
*process_guard = Some(p);
}
let process = (*process_guard).as_ref().unwrap();
self.runtime
.block_on(self.handle_apply_request(&process, &request))
};
end_time = Instant::now();
WAL_REDO_WAIT_TIME.observe(lock_time.duration_since(start_time).as_secs_f64());
WAL_REDO_TIME.observe(end_time.duration_since(lock_time).as_secs_f64());
result
}
}
fn mx_offset_to_flags_offset(xid: MultiXactId) -> usize {
((xid / pg_constants::MULTIXACT_MEMBERS_PER_MEMBERGROUP as u32) as u16
% pg_constants::MULTIXACT_MEMBERGROUPS_PER_PAGE
* pg_constants::MULTIXACT_MEMBERGROUP_SIZE) as usize
}
fn mx_offset_to_flags_bitshift(xid: MultiXactId) -> u16 {
(xid as u16) % pg_constants::MULTIXACT_MEMBERS_PER_MEMBERGROUP
* pg_constants::MXACT_MEMBER_BITS_PER_XACT
}
/* Location (byte offset within page) of TransactionId of given member */
fn mx_offset_to_member_offset(xid: MultiXactId) -> usize {
mx_offset_to_flags_offset(xid)
+ (pg_constants::MULTIXACT_FLAGBYTES_PER_GROUP
+ (xid as u16 % pg_constants::MULTIXACT_MEMBERS_PER_MEMBERGROUP) * 4) as usize
}
impl PostgresRedoManager {
///
/// Create a new PostgresRedoManager.
///
pub fn new(conf: &'static PageServerConf, tenantid: ZTenantId) -> PostgresRedoManager {
// We block on waiting for requests on the walredo request channel, but
// use async I/O to communicate with the child process. Initialize the
// runtime for the async part.
let runtime = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap();
// The actual process is launched lazily, on first request.
PostgresRedoManager {
runtime,
tenantid,
conf,
process: Mutex::new(None),
}
// Time to kill the 'postgres' process. A new one will be launched on next
// iteration of the loop.
info!("killing WAL redo postgres process");
let _ = runtime.block_on(process.stdin.get_mut().shutdown());
let mut child = process.child;
drop(process.stdin);
let _ = runtime.block_on(child.wait());
}
}
fn transaction_id_set_status_bit(
xl_info: u8,
xl_rmid: u8,
xl_xid: u32,
record: WALRecord,
page: &mut BytesMut,
) {
let info = xl_info & pg_constants::XLOG_XACT_OPMASK;
let mut status = 0;
if info == pg_constants::XLOG_XACT_COMMIT {
status = pg_constants::TRANSACTION_STATUS_COMMITTED;
} else if info == pg_constants::XLOG_XACT_ABORT {
status = pg_constants::TRANSACTION_STATUS_ABORTED;
} else {
trace!("handle_apply_request for RM_XACT_ID-{} NOT SUPPORTED YET. RETURN. lsn {:X}/{:X} main_data_offset {}, rec.len {}",
status,
record.lsn >> 32,
record.lsn & 0xffffffff,
record.main_data_offset, record.rec.len());
return;
}
trace!("handle_apply_request for RM_XACT_ID-{} (1-commit, 2-abort) lsn {:X}/{:X} main_data_offset {}, rec.len {}",
status,
record.lsn >> 32,
record.lsn & 0xffffffff,
record.main_data_offset, record.rec.len());
///
/// Process one request for WAL redo.
///
async fn handle_apply_request(
&self,
process: &PostgresRedoProcess,
request: &WalRedoRequest,
) -> Result<Bytes, WalRedoError> {
let rel = request.rel;
let blknum = request.blknum;
let lsn = request.lsn;
let base_img = request.base_img.clone();
let records = &request.records;
let byteno: usize = ((xl_rmid as u32 % pg_constants::CLOG_XACTS_PER_PAGE as u32)
/ pg_constants::CLOG_XACTS_PER_BYTE) as usize;
let nrecords = records.len();
let byteptr = &mut page[byteno..byteno + 1];
let bshift: u8 = ((xl_xid % pg_constants::CLOG_XACTS_PER_BYTE)
* pg_constants::CLOG_BITS_PER_XACT as u32) as u8;
let start = Instant::now();
let mut curval = byteptr[0];
curval = (curval >> bshift) & pg_constants::CLOG_XACT_BITMASK;
let apply_result: Result<Bytes, Error>;
if let RelishTag::Relation(rel) = rel {
// Relational WAL records are applied using wal-redo-postgres
let buf_tag = BufferTag { rel, blknum };
apply_result = process.apply_wal_records(buf_tag, base_img, records).await;
} else {
// Non-relational WAL records are handled here, with custom code that has the
// same effects as the corresponding Postgres WAL redo function.
const ZERO_PAGE: [u8; 8192] = [0u8; 8192];
let mut page = BytesMut::new();
if let Some(fpi) = base_img {
// If full-page image is provided, then use it...
page.extend_from_slice(&fpi[..]);
} else {
// otherwise initialize page with zeros
page.extend_from_slice(&ZERO_PAGE);
}
// Apply all collected WAL records
for record in records {
let mut buf = record.rec.clone();
let mut byteval = [0];
byteval[0] = curval;
byteval[0] &= !(((1 << pg_constants::CLOG_BITS_PER_XACT as u8) - 1) << bshift);
byteval[0] |= status << bshift;
WAL_REDO_RECORD_COUNTER.inc();
byteptr.copy_from_slice(&byteval);
trace!(
"xl_xid {} byteno {} curval {} byteval {}",
xl_xid,
byteno,
curval,
byteval[0]
);
}
// 1. Parse XLogRecord struct
// FIXME: refactor to avoid code duplication.
let xlogrec = XLogRecord::from_bytes(&mut buf);
fn handle_apply_request(
pcache: &page_cache::PageCache,
process: &WalRedoProcess,
runtime: &Runtime,
entry_rc: Arc<CacheEntry>,
) -> Result<(), Error> {
let tag = entry_rc.key.tag;
let lsn = entry_rc.key.lsn;
let (base_img, records) = pcache.collect_records_for_apply(entry_rc.as_ref());
let mut entry = entry_rc.content.lock().unwrap();
assert!(entry.apply_pending);
entry.apply_pending = false;
let nrecords = records.len();
let start = Instant::now();
let apply_result: Result<Bytes, Error>;
if tag.rel.forknum == pg_constants::PG_XACT_FORKNUM as u8 {
//TODO use base image if any
static ZERO_PAGE: [u8; 8192] = [0u8; 8192];
let zero_page_bytes: &[u8] = &ZERO_PAGE;
let mut page = BytesMut::from(zero_page_bytes);
for record in records {
let mut buf = record.rec.clone();
// 1. Parse XLogRecord struct
// FIXME: refactor to avoid code duplication.
let _xl_tot_len = buf.get_u32_le();
let xl_xid = buf.get_u32_le();
let _xl_prev = buf.get_u64_le();
let xl_info = buf.get_u8();
let xl_rmid = buf.get_u8();
buf.advance(2); // 2 bytes of padding
let _xl_crc = buf.get_u32_le();
if xl_rmid == pg_constants::RM_CLOG_ID {
let info = xl_info & !pg_constants::XLR_INFO_MASK;
if info == pg_constants::CLOG_ZEROPAGE {
page.clone_from_slice(zero_page_bytes);
trace!("handle_apply_request for RM_CLOG_ID-CLOG_ZEROPAGE lsn {:X}/{:X} main_data_offset {}, rec.len {}",
record.lsn >> 32,
record.lsn & 0xffffffff,
record.main_data_offset, record.rec.len());
//move to main data
// TODO probably, we should store some records in our special format
// to avoid this weird parsing on replay
let skip = (record.main_data_offset - pg_constants::SIZEOF_XLOGRECORD) as usize;
if buf.remaining() > skip {
buf.advance(skip);
}
if xlogrec.xl_rmid == pg_constants::RM_XACT_ID {
// Transaction manager stuff
let rec_segno = match rel {
RelishTag::Slru { slru, segno } => {
if slru != SlruKind::Clog {
panic!("Not valid XACT relish tag {:?}", rel);
}
segno
}
_ => panic!("Not valid XACT relish tag {:?}", rel),
};
let parsed_xact =
XlXactParsedRecord::decode(&mut buf, xlogrec.xl_xid, xlogrec.xl_info);
if parsed_xact.info == pg_constants::XLOG_XACT_COMMIT
|| parsed_xact.info == pg_constants::XLOG_XACT_COMMIT_PREPARED
{
transaction_id_set_status(
parsed_xact.xid,
pg_constants::TRANSACTION_STATUS_COMMITTED,
&mut page,
);
for subxact in &parsed_xact.subxacts {
let pageno = *subxact as u32 / pg_constants::CLOG_XACTS_PER_PAGE;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
// only update xids on the requested page
if rec_segno == segno && blknum == rpageno {
transaction_id_set_status(
*subxact,
pg_constants::TRANSACTION_STATUS_SUB_COMMITTED,
&mut page,
);
}
}
} else if parsed_xact.info == pg_constants::XLOG_XACT_ABORT
|| parsed_xact.info == pg_constants::XLOG_XACT_ABORT_PREPARED
{
transaction_id_set_status(
parsed_xact.xid,
pg_constants::TRANSACTION_STATUS_ABORTED,
&mut page,
);
for subxact in &parsed_xact.subxacts {
let pageno = *subxact as u32 / pg_constants::CLOG_XACTS_PER_PAGE;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
// only update xids on the requested page
if rec_segno == segno && blknum == rpageno {
transaction_id_set_status(
*subxact,
pg_constants::TRANSACTION_STATUS_ABORTED,
&mut page,
);
}
}
}
} else if xlogrec.xl_rmid == pg_constants::RM_MULTIXACT_ID {
// Multixact operations
let info = xlogrec.xl_info & pg_constants::XLR_RMGR_INFO_MASK;
if info == pg_constants::XLOG_MULTIXACT_CREATE_ID {
let xlrec = XlMultiXactCreate::decode(&mut buf);
if let RelishTag::Slru {
slru,
segno: rec_segno,
} = rel
{
if slru == SlruKind::MultiXactMembers {
for i in 0..xlrec.nmembers {
let pageno =
i / pg_constants::MULTIXACT_MEMBERS_PER_PAGE as u32;
let segno = pageno / pg_constants::SLRU_PAGES_PER_SEGMENT;
let rpageno = pageno % pg_constants::SLRU_PAGES_PER_SEGMENT;
if segno == rec_segno && rpageno == blknum {
// update only target block
let offset = xlrec.moff + i;
let memberoff = mx_offset_to_member_offset(offset);
let flagsoff = mx_offset_to_flags_offset(offset);
let bshift = mx_offset_to_flags_bitshift(offset);
let mut flagsval =
LittleEndian::read_u32(&page[flagsoff..flagsoff + 4]);
flagsval &= !(((1
<< pg_constants::MXACT_MEMBER_BITS_PER_XACT)
- 1)
<< bshift);
flagsval |= xlrec.members[i as usize].status << bshift;
LittleEndian::write_u32(
&mut page[flagsoff..flagsoff + 4],
flagsval,
);
LittleEndian::write_u32(
&mut page[memberoff..memberoff + 4],
xlrec.members[i as usize].xid,
);
}
}
} else {
// Multixact offsets SLRU
let offs = (xlrec.mid
% pg_constants::MULTIXACT_OFFSETS_PER_PAGE as u32
* 4) as usize;
LittleEndian::write_u32(&mut page[offs..offs + 4], xlrec.moff);
}
} else {
panic!();
}
} else {
panic!();
}
} else if xlogrec.xl_rmid == pg_constants::RM_RELMAP_ID {
// Relation map file has size 512 bytes
page.clear();
page.extend_from_slice(&buf[12..]); // skip xl_relmap_update
assert!(page.len() == 512); // size of pg_filenode.map
}
} else if xl_rmid == pg_constants::RM_XACT_ID {
transaction_id_set_status_bit(xl_info, xl_rmid, xl_xid, record, &mut page);
}
apply_result = Ok::<Bytes, Error>(page.freeze());
}
apply_result = Ok::<Bytes, Error>(page.freeze());
} else {
apply_result = process.apply_wal_records(runtime, tag, base_img, records);
let duration = start.elapsed();
let result: Result<Bytes, WalRedoError>;
debug!(
"applied {} WAL records in {} ms to reconstruct page image at LSN {}",
nrecords,
duration.as_millis(),
lsn
);
if let Err(e) = apply_result {
error!("could not apply WAL records: {}", e);
result = Err(WalRedoError::IoError(e));
} else {
let img = apply_result.unwrap();
result = Ok(img);
}
// The caller is responsible for sending the response
result
}
let duration = start.elapsed();
let result;
trace!(
"applied {} WAL records in {} ms to reconstruct page image at LSN {:X}/{:X}",
nrecords,
duration.as_millis(),
lsn >> 32,
lsn & 0xffff_ffff
);
if let Err(e) = apply_result {
error!("could not apply WAL records: {}", e);
result = Err(e);
} else {
entry.page_image = Some(apply_result.unwrap());
result = Ok(());
}
// Wake up the requester, whether the operation succeeded or not.
entry_rc.walredo_condvar.notify_all();
result
}
struct WalRedoProcess {
child: Child,
///
/// Handle to the Postgres WAL redo process
///
struct PostgresRedoProcess {
stdin: RefCell<ChildStdin>,
stdout: RefCell<ChildStdout>,
}
impl WalRedoProcess {
impl PostgresRedoProcess {
//
// Start postgres binary in special WAL redo mode.
//
// Tests that run the pageserver binary set the proper PG_BIN_DIR
// and PG_LIB_DIR so that WalRedo starts the right postgres. We may later
// switch to setting the same things in the pageserver config file.
fn launch(datadir: &str, runtime: &Runtime) -> Result<WalRedoProcess, Error> {
// Create empty data directory for wal-redo postgres deleting old one.
fs::remove_dir_all(datadir).ok();
let initdb = runtime
.block_on(
Command::new("initdb")
.args(&["-D", datadir])
.arg("-N")
.output(),
)
async fn launch(
conf: &PageServerConf,
tenantid: &ZTenantId,
) -> Result<PostgresRedoProcess, Error> {
// FIXME: We need a dummy Postgres cluster to run the process in. Currently, we
// just create one with a constant name. That fails if you try to launch more than
// one WAL redo manager concurrently.
let datadir = conf.tenant_path(&tenantid).join("wal-redo-datadir");
// Create empty data directory for wal-redo postgres, deleting old one first.
if datadir.exists() {
info!("directory {:?} exists, removing", &datadir);
if let Err(e) = fs::remove_dir_all(&datadir) {
error!("could not remove old wal-redo-datadir: {:?}", e);
}
}
info!("running initdb in {:?}", datadir.display());
let initdb = Command::new(conf.pg_bin_dir().join("initdb"))
.args(&["-D", datadir.to_str().unwrap()])
.arg("-N")
.env_clear()
.env("LD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.output()
.await
.expect("failed to execute initdb");
if !initdb.status.success() {
@@ -271,20 +504,28 @@ impl WalRedoProcess {
let mut config = OpenOptions::new()
.append(true)
.open(PathBuf::from(&datadir).join("postgresql.conf"))?;
config.write(b"shared_buffers=128kB\n")?;
config.write(b"fsync=off\n")?;
config.write_all(b"shared_buffers=128kB\n")?;
config.write_all(b"fsync=off\n")?;
config.write_all(b"shared_preload_libraries=zenith\n")?;
config.write_all(b"zenith.wal_redo=on\n")?;
}
// Start postgres itself
let mut child = Command::new("postgres")
let mut child = Command::new(conf.pg_bin_dir().join("postgres"))
.arg("--wal-redo")
.stdin(Stdio::piped())
.stderr(Stdio::piped())
.stdout(Stdio::piped())
.env("PGDATA", datadir)
.env_clear()
.env("LD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.env("DYLD_LIBRARY_PATH", conf.pg_lib_dir().to_str().unwrap())
.env("PGDATA", &datadir)
.spawn()
.expect("postgres --wal-redo command failed to start");
info!("launched WAL redo postgres process on {}", datadir);
info!(
"launched WAL redo postgres process on {:?}",
datadir.display()
);
let stdin = child.stdin.take().expect("failed to open child's stdin");
let stderr = child.stderr.take().expect("failed to open child's stderr");
@@ -311,8 +552,7 @@ impl WalRedoProcess {
};
tokio::spawn(f_stderr);
Ok(WalRedoProcess {
child,
Ok(PostgresRedoProcess {
stdin: RefCell::new(stdin),
stdout: RefCell::new(stdout),
})
@@ -322,91 +562,108 @@ impl WalRedoProcess {
// Apply given WAL records ('records') over an old page image. Returns
// new page image.
//
fn apply_wal_records(
async fn apply_wal_records(
&self,
runtime: &Runtime,
tag: BufferTag,
base_img: Option<Bytes>,
records: Vec<WALRecord>,
) -> Result<Bytes, Error> {
records: &[WALRecord],
) -> Result<Bytes, std::io::Error> {
let mut stdin = self.stdin.borrow_mut();
let mut stdout = self.stdout.borrow_mut();
runtime.block_on(async {
//
// This async block sends all the commands to the process.
//
// For reasons I don't understand, this needs to be a "move" block;
// otherwise the stdin pipe doesn't get closed, despite the shutdown()
// call.
//
let f_stdin = async {
// Send base image, if any. (If the record initializes the page, previous page
// version is not needed.)
// We do three things simultaneously: send the old base image and WAL records to
// the child process's stdin, read the result from child's stdout, and forward any logging
// information that the child writes to its stderr to the page server's log.
//
// 'f_stdin' handles writing the base image and WAL records to the child process.
// 'f_stdout' below reads the result back. And 'f_stderr', which was spawned into the
// tokio runtime in the 'launch' function already, forwards the logging.
let f_stdin = async {
// Send base image, if any. (If the record initializes the page, previous page
// version is not needed.)
timeout(
TIMEOUT,
stdin.write_all(&build_begin_redo_for_block_msg(tag)),
)
.await??;
if base_img.is_some() {
timeout(
TIMEOUT,
stdin.write_all(&build_begin_redo_for_block_msg(tag)),
stdin.write_all(&build_push_page_msg(tag, base_img.unwrap())),
)
.await??;
if base_img.is_some() {
timeout(
TIMEOUT,
stdin.write_all(&build_push_page_msg(tag, base_img.unwrap())),
)
.await??;
}
}
// Send WAL records.
for rec in records.iter() {
let r = rec.clone();
// Send WAL records.
for rec in records.iter() {
let r = rec.clone();
stdin
.write_all(&build_apply_record_msg(r.lsn, r.rec))
.await?;
WAL_REDO_RECORD_COUNTER.inc();
//debug!("sent WAL record to wal redo postgres process ({:X}/{:X}",
// r.lsn >> 32, r.lsn & 0xffff_ffff);
}
//debug!("sent {} WAL records to wal redo postgres process ({:X}/{:X}",
// records.len(), lsn >> 32, lsn & 0xffff_ffff);
stdin
.write_all(&build_apply_record_msg(r.lsn, r.rec))
.await?;
// Send GetPage command to get the result back
timeout(TIMEOUT, stdin.write_all(&build_get_page_msg(tag))).await??;
timeout(TIMEOUT, stdin.flush()).await??;
//debug!("sent GetPage for {}", tag.blknum);
Ok::<(), Error>(())
};
//debug!("sent WAL record to wal redo postgres process ({:X}/{:X}",
// r.lsn >> 32, r.lsn & 0xffff_ffff);
}
//debug!("sent {} WAL records to wal redo postgres process ({:X}/{:X}",
// records.len(), lsn >> 32, lsn & 0xffff_ffff);
// Read back new page image
let f_stdout = async {
let mut buf = [0u8; 8192];
// Send GetPage command to get the result back
timeout(TIMEOUT, stdin.write_all(&build_get_page_msg(tag))).await??;
timeout(TIMEOUT, stdin.flush()).await??;
//debug!("sent GetPage for {}", tag.blknum);
Ok::<(), Error>(())
};
timeout(TIMEOUT, stdout.read_exact(&mut buf)).await??;
//debug!("got response for {}", tag.blknum);
Ok::<[u8; 8192], Error>(buf)
};
// Read back new page image
let f_stdout = async {
let mut buf = [0u8; 8192];
// Kill the process. This closes its stdin, which should signal the process
// to terminate. TODO: SIGKILL if needed
//child.wait();
timeout(TIMEOUT, stdout.read_exact(&mut buf)).await??;
//debug!("got response for {}", tag.blknum);
Ok::<[u8; 8192], Error>(buf)
};
let res = futures::try_join!(f_stdout, f_stdin)?;
let res = tokio::try_join!(f_stdout, f_stdin)?;
let buf = res.0;
let buf = res.0;
Ok::<Bytes, Error>(Bytes::from(std::vec::Vec::from(buf)))
})
Ok::<Bytes, Error>(Bytes::from(std::vec::Vec::from(buf)))
}
}
// Functions for constructing messages to send to the postgres WAL redo
// process. See vendor/postgres/src/backend/tcop/zenith_wal_redo.c for
// explanation of the protocol.
fn build_begin_redo_for_block_msg(tag: BufferTag) -> Bytes {
let len = 4 + 5 * 4;
let len = 4 + 1 + 4 * 4;
let mut buf = BytesMut::with_capacity(1 + len);
buf.put_u8(b'B');
buf.put_u32(len as u32);
tag.pack(&mut buf);
assert!(buf.len() == 1 + len);
// FIXME: this is a temporary hack that should go away when we refactor
// the postgres protocol serialization + handlers.
//
// BytesMut is a dynamic growable buffer, used a lot in tokio code but
// not in the std library. To write to a BytesMut from a serde serializer,
// we need to either:
// - pre-allocate the required buffer space. This is annoying because we
// shouldn't care what the exact serialized size is-- that's the
// serializer's job.
// - Or, we need to create a temporary "writer" (which implements the
// `Write` trait). It's a bit awkward, because the writer consumes the
// underlying BytesMut, and we need to extract it later with
// `into_inner`.
let mut writer = buf.writer();
tag.ser_into(&mut writer)
.expect("serialize BufferTag should always succeed");
let buf = writer.into_inner();
debug_assert!(buf.len() == 1 + len);
buf.freeze()
}
@@ -414,42 +671,48 @@ fn build_begin_redo_for_block_msg(tag: BufferTag) -> Bytes {
fn build_push_page_msg(tag: BufferTag, base_img: Bytes) -> Bytes {
assert!(base_img.len() == 8192);
let len = 4 + 5 * 4 + base_img.len();
let len = 4 + 1 + 4 * 4 + base_img.len();
let mut buf = BytesMut::with_capacity(1 + len);
buf.put_u8(b'P');
buf.put_u32(len as u32);
tag.pack(&mut buf);
let mut writer = buf.writer();
tag.ser_into(&mut writer)
.expect("serialize BufferTag should always succeed");
let mut buf = writer.into_inner();
buf.put(base_img);
assert!(buf.len() == 1 + len);
debug_assert!(buf.len() == 1 + len);
buf.freeze()
}
fn build_apply_record_msg(endlsn: u64, rec: Bytes) -> Bytes {
fn build_apply_record_msg(endlsn: Lsn, rec: Bytes) -> Bytes {
let len = 4 + 8 + rec.len();
let mut buf = BytesMut::with_capacity(1 + len);
buf.put_u8(b'A');
buf.put_u32(len as u32);
buf.put_u64(endlsn);
buf.put_u64(endlsn.0);
buf.put(rec);
assert!(buf.len() == 1 + len);
debug_assert!(buf.len() == 1 + len);
buf.freeze()
}
fn build_get_page_msg(tag: BufferTag) -> Bytes {
let len = 4 + 5 * 4;
let len = 4 + 1 + 4 * 4;
let mut buf = BytesMut::with_capacity(1 + len);
buf.put_u8(b'G');
buf.put_u32(len as u32);
tag.pack(&mut buf);
let mut writer = buf.writer();
tag.ser_into(&mut writer)
.expect("serialize BufferTag should always succeed");
let buf = writer.into_inner();
assert!(buf.len() == 1 + len);
debug_assert!(buf.len() == 1 + len);
buf.freeze()
}
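// Illustrative sketch (not part of the change above): the full message
// sequence that one redo request writes to the child's stdin, built from the
// helpers above. The tag, base image and records are placeholders.
fn example_build_redo_request(
    tag: BufferTag,
    base_img: Option<Bytes>,
    records: &[WALRecord],
) -> Vec<Bytes> {
    let mut msgs = Vec::new();
    msgs.push(build_begin_redo_for_block_msg(tag)); // 'B': begin redo for this block
    if let Some(img) = base_img {
        msgs.push(build_push_page_msg(tag, img)); // 'P': push the base page image
    }
    for rec in records {
        msgs.push(build_apply_record_msg(rec.lsn, rec.rec.clone())); // 'A': apply one WAL record
    }
    msgs.push(build_get_page_msg(tag)); // 'G': ask for the resulting page image
    msgs
}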


@@ -1,33 +0,0 @@
#!/bin/sh
#
# Purpose of this script is to build and install postgres in a local directory
# so that zenith intergation tests would find pg binaries and support files.
#
# ./pgbuild.sh would do following:
#
# 1) run out-of-source build of postgres in REPO_ROOT/tmp_install/build directory (I'm reusing
# tmp_install path here since it is already present in .gitignore)
#
# 2) installs postgres to REPO_ROOT/tmp_install/
#
# Halt immediately if any command fails
set -e
REPO_ROOT=$(dirname "$0")
REPO_ROOT="`( cd \"$REPO_ROOT\" && pwd )`"
# configure
echo "Configuring postgres build"
mkdir -p $REPO_ROOT/tmp_install/build
cd $REPO_ROOT/tmp_install/build
../../vendor/postgres/configure CFLAGS='-O0' --enable-debug --enable-cassert \
--enable-depend --with-libxml --prefix=/ > configure.log
# compile
echo "Compiling postgres"
make -j8 -s
export DESTDIR=$REPO_ROOT/tmp_install
echo "Installing postgres to $DESTDIR"
make install -s


@@ -9,12 +9,18 @@ edition = "2018"
[dependencies]
chrono = "0.4.19"
rand = "0.8.3"
regex = "1.4.5"
bytes = "1.0.1"
byteorder = "1.4.3"
anyhow = "1.0"
crc32c = "0.6.0"
hex = "0.4.3"
lazy_static = "1.4"
log = "0.4.14"
memoffset = "0.6.2"
thiserror = "1.0"
workspace_hack = { path = "../workspace_hack" }
zenith_utils = { path = "../zenith_utils" }
[build-dependencies]
bindgen = "0.57"


@@ -1,3 +1,25 @@
This module contains utility functions for interacting with PostgreSQL
file formats.
This module contains utilities for working with PostgreSQL file
formats. It's a collection of structs that are auto-generated from the
PostgreSQL header files using bindgen, and Rust functions to read and
manipulate them.
There are also a bunch of constants in `pg_constants.rs` that are copied
from various PostgreSQL headers, rather than auto-generated. They mostly
should be auto-generated too, but that's a TODO.
The PostgreSQL on-disk file format is not portable across different
CPU architectures and operating systems. It is also subject to change
in each major PostgreSQL version. Currently, this module is based on
PostgreSQL v14, but in the future we will probably need a separate
copy for each PostgreSQL version.
To interact with the C structs, there is some unsafe code in this
module. Do not copy-paste that to the rest of the codebase! Keep the
amount of unsafe code to a minimum, and limited to this module only,
and only where it's truly needed.
TODO: Currently, there is also some code that deals with WAL records
in pageserver/src/waldecoder.rs. That should be moved into this
module. The rest of the codebase should not have intimate knowledge of
PostgreSQL file formats or WAL layout, that knowledge should be
encapsulated in this module.


@@ -11,27 +11,42 @@ fn main() {
// to bindgen, and lets you build up options for
// the resulting bindings.
let bindings = bindgen::Builder::default()
// The input header we would like to generate
// bindings for.
//
// All the needed PostgreSQL headers are included from 'pg_control_ffi.h'
//
.header("pg_control_ffi.h")
//
// Tell cargo to invalidate the built crate whenever any of the
// included header files changed.
//
.parse_callbacks(Box::new(bindgen::CargoCallbacks))
//
// These are the types and constants that we want to generate bindings for
//
.whitelist_type("ControlFileData")
.whitelist_type("CheckPoint")
.whitelist_type("FullTransactionId")
.whitelist_type("XLogRecord")
.whitelist_type("XLogPageHeaderData")
.whitelist_type("XLogLongPageHeaderData")
.whitelist_var("XLOG_PAGE_MAGIC")
.whitelist_var("PG_CONTROL_FILE_SIZE")
.whitelist_var("PG_CONTROLFILEDATA_OFFSETOF_CRC")
.whitelist_type("DBState")
//
// Path to the server include dir. It is in tmp_install/include/server, if you did
// "configure --prefix=<path to tmp_install>". But if you used "configure --prefix=/",
// and used DESTDIR to move it into tmp_install, then it's in
// tmp_install/include/postgres/server (that's how the pgbuild.sh script does it).
// tmp_install/include/postgres/server
// 'pg_config --includedir-server' would perhaps be the more proper way to find it,
// but this will do for now.
//
.clang_arg("-I../tmp_install/include/server")
.clang_arg("-I../tmp_install/include/postgresql/server")
//
// Finish the builder and generate the bindings.
//
.generate()
// Unwrap the Result and panic on failure.
.expect("Unable to generate bindings");
// Write the bindings to the $OUT_DIR/bindings.rs file.


@@ -1,4 +1,10 @@
/*
* This header file is the input to bindgen. It includes all the
* PostgreSQL headers that we need to auto-generate Rust structs
* from. If you need to expose a new struct to Rust code, add the
* header here, and whitelist the struct in the build.rs file.
*/
#include "c.h"
#include "catalog/pg_control.h"
#include "access/xlog_internal.h"
const uint32 PG_CONTROLFILEDATA_OFFSETOF_CRC = offsetof(ControlFileData, crc);


@@ -0,0 +1,98 @@
# PostgreSQL Client Authentication Configuration File
# ===================================================
#
# Refer to the "Client Authentication" section in the PostgreSQL
# documentation for a complete description of this file. A short
# synopsis follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access. Records take one of these forms:
#
# local DATABASE USER METHOD [OPTIONS]
# host DATABASE USER ADDRESS METHOD [OPTIONS]
# hostssl DATABASE USER ADDRESS METHOD [OPTIONS]
# hostnossl DATABASE USER ADDRESS METHOD [OPTIONS]
# hostgssenc DATABASE USER ADDRESS METHOD [OPTIONS]
# hostnogssenc DATABASE USER ADDRESS METHOD [OPTIONS]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type:
# - "local" is a Unix-domain socket
# - "host" is a TCP/IP socket (encrypted or not)
# - "hostssl" is a TCP/IP socket that is SSL-encrypted
# - "hostnossl" is a TCP/IP socket that is not SSL-encrypted
# - "hostgssenc" is a TCP/IP socket that is GSSAPI-encrypted
# - "hostnogssenc" is a TCP/IP socket that is not GSSAPI-encrypted
#
# DATABASE can be "all", "sameuser", "samerole", "replication", a
# database name, or a comma-separated list thereof. The "all"
# keyword does not match "replication". Access to replication
# must be enabled in a separate record (see example below).
#
# USER can be "all", a user name, a group name prefixed with "+", or a
# comma-separated list thereof. In both the DATABASE and USER fields
# you can also write a file name prefixed with "@" to include names
# from a separate file.
#
# ADDRESS specifies the set of hosts the record matches. It can be a
# host name, or it is made up of an IP address and a CIDR mask that is
# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that
# specifies the number of significant bits in the mask. A host name
# that starts with a dot (.) matches a suffix of the actual host name.
# Alternatively, you can write an IP address and netmask in separate
# columns to specify the set of hosts. Instead of a CIDR-address, you
# can write "samehost" to match any of the server's own IP addresses,
# or "samenet" to match any address in any subnet that the server is
# directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
# Note that "password" sends passwords in clear text; "md5" or
# "scram-sha-256" are preferred since they send encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE. The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted. Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the server receives a
# SIGHUP signal. If you edit the file on a running system, you have to
# SIGHUP the server for the changes to take effect, run "pg_ctl reload",
# or execute "SELECT pg_reload_conf()".
#
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records. In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.
# CAUTION: Configuring the system for local "trust" authentication
# allows any local user to connect as any PostgreSQL user, including
# the database superuser. If you do not trust all your local users,
# use another authentication method.
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust


@@ -0,0 +1,124 @@
//!
//! Utilities for reading and writing the PostgreSQL control file.
//!
//! The PostgreSQL control file is one of the first things that the PostgreSQL
//! server reads when it starts up. It indicates whether the server was shut
//! down cleanly, or if it crashed or was restored from online backup so that
//! WAL recovery needs to be performed. It also contains a copy of the latest
//! checkpoint record and its location in the WAL.
//!
//! The control file also contains fields for detecting whether the
//! data directory is compatible with a postgres binary. That includes
//! a version number, configuration options that can be set at
//! compilation time like the block size, and the platform's alignment
//! and endianness information. (The PostgreSQL on-disk file format is
//! not portable across platforms.)
//!
//! The control file is stored in the PostgreSQL data directory, as
//! `global/pg_control`. The data stored in it is designed to be smaller than
//! 512 bytes, on the assumption that it can be updated atomically. The actual
//! file is larger, 8192 bytes, but the rest of it is just filled with zeros.
//!
//! See src/include/catalog/pg_control.h in the PostgreSQL sources for more
//! information. You can use PostgreSQL's pg_controldata utility to view its
//! contents.
//!
use crate::{ControlFileData, PG_CONTROL_FILE_SIZE};
use anyhow::{bail, Result};
use bytes::{Bytes, BytesMut};
/// Equivalent to sizeof(ControlFileData) in C
const SIZEOF_CONTROLDATA: usize = std::mem::size_of::<ControlFileData>();
impl ControlFileData {
/// Compute the offset of the `crc` field within the `ControlFileData` struct.
/// Equivalent to offsetof(ControlFileData, crc) in C.
// Someday this can be const when the right compiler features land.
fn pg_control_crc_offset() -> usize {
memoffset::offset_of!(ControlFileData, crc)
}
///
/// Interpret a slice of bytes as a Postgres control file.
///
pub fn decode(buf: &[u8]) -> Result<ControlFileData> {
// Check that the slice has the expected size. The control file is
// padded with zeros up to a 512 byte sector size, so accept a
// larger size too, so that the caller can just pass the whole file
// contents without knowing the exact size of the struct.
if buf.len() < SIZEOF_CONTROLDATA {
bail!("control file is too short");
}
// Compute the expected CRC of the content.
let OFFSETOF_CRC = Self::pg_control_crc_offset();
let expectedcrc = crc32c::crc32c(&buf[0..OFFSETOF_CRC]);
// Convert the slice into an array of the right size, and use `transmute` to
// reinterpret the raw bytes as a ControlFileData struct.
//
// NB: Ideally we would use 'zerocopy::FromBytes' for this, but bindgen doesn't
// derive FromBytes for us. The safety of this depends on the same constraints
// as for FromBytes, namely, all of its fields must implement FromBytes. That
// includes the primitive integer types, like `u8`, `u16`, `u32`, `u64` and their
// signed variants. But `bool` is not safe, because the contents of the high bits
// in a rust bool are undefined. In practice, PostgreSQL uses 1 to represent
// true and 0 for false, which is compatible with Rust bool, but let's try not to
// depend on it.
//
// FIXME: ControlFileData does contain 'bool's at the moment.
//
// See https://github.com/zenithdb/zenith/issues/207 for discussion on the safety
// of this.
let mut b: [u8; SIZEOF_CONTROLDATA] = [0u8; SIZEOF_CONTROLDATA];
b.copy_from_slice(&buf[0..SIZEOF_CONTROLDATA]);
let controlfile: ControlFileData =
unsafe { std::mem::transmute::<[u8; SIZEOF_CONTROLDATA], ControlFileData>(b) };
// Check the CRC
if expectedcrc != controlfile.crc {
bail!(
"invalid CRC in control file: expected {:08X}, was {:08X}",
expectedcrc,
controlfile.crc
);
}
Ok(controlfile)
}
///
/// Convert a struct representing a Postgres control file into raw bytes.
///
/// The CRC is recomputed to match the contents of the fields.
pub fn encode(&self) -> Bytes {
//
// Use `transmute` to reinterpret struct as raw bytes.
//
// FIXME: This triggers undefined behavior, because the contents
// of the padding bytes are undefined, and this leaks those
// undefined bytes into the resulting array. The Rust code won't
// care what's in those bytes, and PostgreSQL doesn't care
// either. HOWEVER, it is a potential security issue, because the
// bytes can contain arbitrary pieces of memory from the page
// server. In the worst case, that could be private keys or
// another tenant's data.
//
// See https://github.com/zenithdb/zenith/issues/207 for discussion.
let b: [u8; SIZEOF_CONTROLDATA] =
unsafe { std::mem::transmute::<ControlFileData, [u8; SIZEOF_CONTROLDATA]>(*self) };
// Recompute the CRC
let OFFSETOF_CRC = Self::pg_control_crc_offset();
let newcrc = crc32c::crc32c(&b[0..OFFSETOF_CRC]);
let mut buf = BytesMut::with_capacity(PG_CONTROL_FILE_SIZE as usize);
buf.extend_from_slice(&b[0..OFFSETOF_CRC]);
buf.extend_from_slice(&newcrc.to_ne_bytes());
// Fill the rest of the control file with zeros.
buf.resize(PG_CONTROL_FILE_SIZE as usize, 0);
buf.into()
}
}
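// Illustrative sketch (not part of the change above): reading `global/pg_control`
// from a data directory and re-encoding it with a fresh CRC. The data directory
// path is a placeholder supplied by the caller.
fn example_roundtrip_pg_control(datadir: &std::path::Path) -> Result<()> {
    let bytes = std::fs::read(datadir.join("global/pg_control"))?;
    // decode() verifies the CRC and reinterprets the raw bytes as the struct.
    let controlfile = ControlFileData::decode(&bytes)?;
    // encode() recomputes the CRC and pads the result back to PG_CONTROL_FILE_SIZE.
    let reencoded = controlfile.encode();
    assert_eq!(reencoded.len(), PG_CONTROL_FILE_SIZE as usize);
    Ok(())
}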


@@ -1,69 +1,13 @@
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
// suppress warnings on rust 1.53 due to bindgen unit tests.
// https://github.com/rust-lang/rust-bindgen/issues/1651
#![allow(deref_nullptr)]
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
pub mod controlfile_utils;
pub mod nonrelfile_utils;
pub mod pg_constants;
pub mod relfile_utils;
pub mod xlog_utils;
use bytes::{Buf, Bytes, BytesMut};
// sizeof(ControlFileData)
const SIZEOF_CONTROLDATA: usize = std::mem::size_of::<ControlFileData>();
const OFFSETOF_CRC: usize = PG_CONTROLFILEDATA_OFFSETOF_CRC as usize;
impl ControlFileData {
// Initialize an all-zeros ControlFileData struct
pub fn new() -> ControlFileData {
let controlfile: ControlFileData;
let b = [0u8; SIZEOF_CONTROLDATA];
controlfile =
unsafe { std::mem::transmute::<[u8; SIZEOF_CONTROLDATA], ControlFileData>(b) };
controlfile
}
}
pub fn decode_pg_control(mut buf: Bytes) -> Result<ControlFileData, anyhow::Error> {
let mut b: [u8; SIZEOF_CONTROLDATA] = [0u8; SIZEOF_CONTROLDATA];
buf.copy_to_slice(&mut b);
let controlfile: ControlFileData;
// TODO: verify CRC
let mut data_without_crc: [u8; OFFSETOF_CRC] = [0u8; OFFSETOF_CRC];
data_without_crc.copy_from_slice(&b[0..OFFSETOF_CRC]);
let expectedcrc = crc32c::crc32c(&data_without_crc);
controlfile = unsafe { std::mem::transmute::<[u8; SIZEOF_CONTROLDATA], ControlFileData>(b) };
if expectedcrc != controlfile.crc {
anyhow::bail!(
"invalid CRC in control file: expected {:08X}, was {:08X}",
expectedcrc,
controlfile.crc
);
}
Ok(controlfile)
}
pub fn encode_pg_control(controlfile: ControlFileData) -> Bytes {
let b: [u8; SIZEOF_CONTROLDATA];
b = unsafe { std::mem::transmute::<ControlFileData, [u8; SIZEOF_CONTROLDATA]>(controlfile) };
// Recompute the CRC
let mut data_without_crc: [u8; OFFSETOF_CRC] = [0u8; OFFSETOF_CRC];
data_without_crc.copy_from_slice(&b[0..OFFSETOF_CRC]);
let newcrc = crc32c::crc32c(&data_without_crc);
let mut buf = BytesMut::with_capacity(PG_CONTROL_FILE_SIZE as usize);
buf.extend_from_slice(&b[0..OFFSETOF_CRC]);
buf.extend_from_slice(&newcrc.to_ne_bytes());
// Fill the rest of the control file with zeros.
buf.resize(PG_CONTROL_FILE_SIZE as usize, 0);
buf.into()
}


@@ -0,0 +1,32 @@
//!
//! Common utilities for dealing with PostgreSQL non-relation files.
//!
use crate::pg_constants;
use bytes::BytesMut;
use log::*;
pub fn transaction_id_set_status(xid: u32, status: u8, page: &mut BytesMut) {
trace!(
"handle_apply_request for RM_XACT_ID-{} (1-commit, 2-abort, 3-sub_commit)",
status
);
let byteno: usize = ((xid as u32 % pg_constants::CLOG_XACTS_PER_PAGE as u32)
/ pg_constants::CLOG_XACTS_PER_BYTE) as usize;
let bshift: u8 =
((xid % pg_constants::CLOG_XACTS_PER_BYTE) * pg_constants::CLOG_BITS_PER_XACT as u32) as u8;
page[byteno] =
(page[byteno] & !(pg_constants::CLOG_XACT_BITMASK << bshift)) | (status << bshift);
}
pub fn transaction_id_get_status(xid: u32, page: &[u8]) -> u8 {
let byteno: usize = ((xid as u32 % pg_constants::CLOG_XACTS_PER_PAGE as u32)
/ pg_constants::CLOG_XACTS_PER_BYTE) as usize;
let bshift: u8 =
((xid % pg_constants::CLOG_XACTS_PER_BYTE) * pg_constants::CLOG_BITS_PER_XACT as u32) as u8;
((page[byteno] >> bshift) & pg_constants::CLOG_XACT_BITMASK) as u8
}
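// A brief usage sketch of the two helpers above (a hedged example, assuming it
// sits in the same crate): mark an XID as committed on a zeroed CLOG page,
// then read the status back.
#[cfg(test)]
mod clog_status_example {
    use super::*;
    use crate::pg_constants;
    use bytes::BytesMut;

    #[test]
    fn set_and_get_status() {
        // An all-zeros CLOG page (BLCKSZ = 8192).
        let mut page = BytesMut::new();
        page.resize(pg_constants::BLCKSZ as usize, 0);

        transaction_id_set_status(1234, pg_constants::TRANSACTION_STATUS_COMMITTED, &mut page);
        assert_eq!(
            transaction_id_get_status(1234, &page),
            pg_constants::TRANSACTION_STATUS_COMMITTED
        );
    }
}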


@@ -0,0 +1,224 @@
//!
//! Misc constants, copied from PostgreSQL headers.
//!
//! TODO: These probably should be auto-generated using bindgen,
//! rather than copied by hand. On the other hand, it's nice
//! to have them all here in one place, with the ability to add
//! comments on them.
//!
//
// From pg_tablespace_d.h
//
pub const DEFAULTTABLESPACE_OID: u32 = 1663;
pub const GLOBALTABLESPACE_OID: u32 = 1664;
//
// Fork numbers, from relpath.h
//
pub const MAIN_FORKNUM: u8 = 0;
pub const FSM_FORKNUM: u8 = 1;
pub const VISIBILITYMAP_FORKNUM: u8 = 2;
pub const INIT_FORKNUM: u8 = 3;
// From storage_xlog.h
pub const SMGR_TRUNCATE_HEAP: u32 = 0x0001;
pub const SMGR_TRUNCATE_VM: u32 = 0x0002;
pub const SMGR_TRUNCATE_FSM: u32 = 0x0004;
// from pg_config.h. These can be changed with configure options --with-blocksize=BLOCKSIZE and
// --with-segsize=SEGSIZE, but assume the defaults for now.
pub const BLCKSZ: u16 = 8192;
pub const RELSEG_SIZE: u32 = 1024 * 1024 * 1024 / (BLCKSZ as u32);
//
// constants from clog.h
//
pub const CLOG_XACTS_PER_BYTE: u32 = 4;
pub const CLOG_XACTS_PER_PAGE: u32 = BLCKSZ as u32 * CLOG_XACTS_PER_BYTE;
pub const CLOG_BITS_PER_XACT: u8 = 2;
pub const CLOG_XACT_BITMASK: u8 = (1 << CLOG_BITS_PER_XACT) - 1;
//
// Constants from visibilitymap.h
//
pub const SIZE_OF_PAGE_HEADER: u16 = 24;
pub const BITS_PER_HEAPBLOCK: u16 = 2;
pub const HEAPBLOCKS_PER_PAGE: u16 = (BLCKSZ - SIZE_OF_PAGE_HEADER) * 8 / BITS_PER_HEAPBLOCK;
pub const TRANSACTION_STATUS_COMMITTED: u8 = 0x01;
pub const TRANSACTION_STATUS_ABORTED: u8 = 0x02;
pub const TRANSACTION_STATUS_SUB_COMMITTED: u8 = 0x03;
pub const CLOG_ZEROPAGE: u8 = 0x00;
pub const CLOG_TRUNCATE: u8 = 0x10;
// From xact.h
pub const XLOG_XACT_COMMIT: u8 = 0x00;
pub const XLOG_XACT_PREPARE: u8 = 0x10;
pub const XLOG_XACT_ABORT: u8 = 0x20;
pub const XLOG_XACT_COMMIT_PREPARED: u8 = 0x30;
pub const XLOG_XACT_ABORT_PREPARED: u8 = 0x40;
// From slru.h
pub const SLRU_PAGES_PER_SEGMENT: u32 = 32;
pub const SLRU_SEG_SIZE: usize = BLCKSZ as usize * SLRU_PAGES_PER_SEGMENT as usize;
/* mask for filtering opcodes out of xl_info */
pub const XLOG_XACT_OPMASK: u8 = 0x70;
pub const XLOG_HEAP_OPMASK: u8 = 0x70;
/* does this record have a 'xinfo' field or not */
pub const XLOG_XACT_HAS_INFO: u8 = 0x80;
/*
* The following flags, stored in xinfo, determine which information is
* contained in commit/abort records.
*/
pub const XACT_XINFO_HAS_DBINFO: u32 = 1u32 << 0;
pub const XACT_XINFO_HAS_SUBXACTS: u32 = 1u32 << 1;
pub const XACT_XINFO_HAS_RELFILENODES: u32 = 1u32 << 2;
pub const XACT_XINFO_HAS_INVALS: u32 = 1u32 << 3;
pub const XACT_XINFO_HAS_TWOPHASE: u32 = 1u32 << 4;
// pub const XACT_XINFO_HAS_ORIGIN: u32 = 1u32 << 5;
// pub const XACT_XINFO_HAS_AE_LOCKS: u32 = 1u32 << 6;
// pub const XACT_XINFO_HAS_GID: u32 = 1u32 << 7;
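// A hedged sketch (not the pageserver's actual decoder) of how these masks are
// typically combined when inspecting a commit record's xl_info and xinfo:
#[test]
fn xinfo_flag_check_example() {
    fn commit_has_subxacts(xl_info: u8, xinfo: u32) -> bool {
        let is_commit = xl_info & XLOG_XACT_OPMASK == XLOG_XACT_COMMIT;
        let has_info = xl_info & XLOG_XACT_HAS_INFO != 0;
        is_commit && has_info && (xinfo & XACT_XINFO_HAS_SUBXACTS != 0)
    }
    assert!(commit_has_subxacts(
        XLOG_XACT_COMMIT | XLOG_XACT_HAS_INFO,
        XACT_XINFO_HAS_SUBXACTS
    ));
    assert!(!commit_has_subxacts(
        XLOG_XACT_ABORT | XLOG_XACT_HAS_INFO,
        XACT_XINFO_HAS_SUBXACTS
    ));
}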
// From pg_control.h and rmgrlist.h
pub const XLOG_NEXTOID: u8 = 0x30;
pub const XLOG_SWITCH: u8 = 0x40;
pub const XLOG_SMGR_TRUNCATE: u8 = 0x20;
pub const DB_SHUTDOWNED: u32 = 1;
// From multixact.h
pub const FIRST_MULTIXACT_ID: u32 = 1;
pub const MAX_MULTIXACT_ID: u32 = 0xFFFFFFFF;
pub const XLOG_MULTIXACT_ZERO_OFF_PAGE: u8 = 0x00;
pub const XLOG_MULTIXACT_ZERO_MEM_PAGE: u8 = 0x10;
pub const XLOG_MULTIXACT_CREATE_ID: u8 = 0x20;
pub const XLOG_MULTIXACT_TRUNCATE_ID: u8 = 0x30;
pub const MULTIXACT_OFFSETS_PER_PAGE: u16 = BLCKSZ / 4;
pub const MXACT_MEMBER_BITS_PER_XACT: u16 = 8;
pub const MXACT_MEMBER_FLAGS_PER_BYTE: u16 = 1;
pub const MULTIXACT_FLAGBYTES_PER_GROUP: u16 = 4;
pub const MULTIXACT_MEMBERS_PER_MEMBERGROUP: u16 =
MULTIXACT_FLAGBYTES_PER_GROUP * MXACT_MEMBER_FLAGS_PER_BYTE;
/* size in bytes of a complete group */
pub const MULTIXACT_MEMBERGROUP_SIZE: u16 =
4 * MULTIXACT_MEMBERS_PER_MEMBERGROUP + MULTIXACT_FLAGBYTES_PER_GROUP;
pub const MULTIXACT_MEMBERGROUPS_PER_PAGE: u16 = BLCKSZ / MULTIXACT_MEMBERGROUP_SIZE;
pub const MULTIXACT_MEMBERS_PER_PAGE: u16 =
MULTIXACT_MEMBERGROUPS_PER_PAGE * MULTIXACT_MEMBERS_PER_MEMBERGROUP;
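// A worked check of the arithmetic above (a hedged sketch that assumes it sits
// in this module so the constants are in scope): each member group is
// 4 * 4 + 4 = 20 bytes, so an 8192-byte page holds 8192 / 20 = 409 groups,
// i.e. 409 * 4 = 1636 members.
#[test]
fn multixact_layout_math_example() {
    assert_eq!(MULTIXACT_MEMBERS_PER_MEMBERGROUP, 4);
    assert_eq!(MULTIXACT_MEMBERGROUP_SIZE, 20);
    assert_eq!(MULTIXACT_MEMBERGROUPS_PER_PAGE, 409);
    assert_eq!(MULTIXACT_MEMBERS_PER_PAGE, 1636);
}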
// From heapam_xlog.h
pub const XLOG_HEAP_INSERT: u8 = 0x00;
pub const XLOG_HEAP_DELETE: u8 = 0x10;
pub const XLOG_HEAP_UPDATE: u8 = 0x20;
pub const XLOG_HEAP_HOT_UPDATE: u8 = 0x40;
pub const XLOG_HEAP2_VISIBLE: u8 = 0x40;
pub const XLOG_HEAP2_MULTI_INSERT: u8 = 0x50;
pub const XLH_INSERT_ALL_FROZEN_SET: u8 = (1 << 5) as u8;
pub const XLH_INSERT_ALL_VISIBLE_CLEARED: u8 = (1 << 0) as u8;
pub const XLH_UPDATE_OLD_ALL_VISIBLE_CLEARED: u8 = (1 << 0) as u8;
pub const XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED: u8 = (1 << 1) as u8;
pub const XLH_DELETE_ALL_VISIBLE_CLEARED: u8 = (1 << 0) as u8;
pub const RM_XLOG_ID: u8 = 0;
pub const RM_XACT_ID: u8 = 1;
pub const RM_SMGR_ID: u8 = 2;
pub const RM_CLOG_ID: u8 = 3;
pub const RM_DBASE_ID: u8 = 4;
pub const RM_TBLSPC_ID: u8 = 5;
pub const RM_MULTIXACT_ID: u8 = 6;
pub const RM_RELMAP_ID: u8 = 7;
pub const RM_STANDBY_ID: u8 = 8;
pub const RM_HEAP2_ID: u8 = 9;
pub const RM_HEAP_ID: u8 = 10;
// from xlogreader.h
pub const XLR_INFO_MASK: u8 = 0x0F;
pub const XLR_RMGR_INFO_MASK: u8 = 0xF0;
// from dbcommands_xlog.h
pub const XLOG_DBASE_CREATE: u8 = 0x00;
pub const XLOG_DBASE_DROP: u8 = 0x10;
pub const XLOG_TBLSPC_CREATE: u8 = 0x00;
pub const XLOG_TBLSPC_DROP: u8 = 0x10;
pub const SIZEOF_XLOGRECORD: u32 = 24;
//
// from xlogrecord.h
//
pub const XLR_MAX_BLOCK_ID: u8 = 32;
pub const XLR_BLOCK_ID_DATA_SHORT: u8 = 255;
pub const XLR_BLOCK_ID_DATA_LONG: u8 = 254;
pub const XLR_BLOCK_ID_ORIGIN: u8 = 253;
pub const XLR_BLOCK_ID_TOPLEVEL_XID: u8 = 252;
pub const BKPBLOCK_FORK_MASK: u8 = 0x0F;
pub const _BKPBLOCK_FLAG_MASK: u8 = 0xF0;
pub const BKPBLOCK_HAS_IMAGE: u8 = 0x10; /* block data is an XLogRecordBlockImage */
pub const BKPBLOCK_HAS_DATA: u8 = 0x20;
pub const BKPBLOCK_WILL_INIT: u8 = 0x40; /* redo will re-init the page */
pub const BKPBLOCK_SAME_REL: u8 = 0x80; /* RelFileNode omitted, same as previous */
/* Information stored in bimg_info */
pub const BKPIMAGE_HAS_HOLE: u8 = 0x01; /* page image has "hole" */
pub const BKPIMAGE_IS_COMPRESSED: u8 = 0x02; /* page image is compressed */
pub const BKPIMAGE_APPLY: u8 = 0x04; /* page image should be restored during replay */
/* From transam.h */
pub const FIRST_NORMAL_TRANSACTION_ID: u32 = 3;
pub const INVALID_TRANSACTION_ID: u32 = 0;
pub const FIRST_BOOTSTRAP_OBJECT_ID: u32 = 12000;
pub const FIRST_NORMAL_OBJECT_ID: u32 = 16384;
/* FIXME: pageserver should request wal_seg_size from compute node */
pub const WAL_SEGMENT_SIZE: usize = 16 * 1024 * 1024;
pub const XLOG_BLCKSZ: usize = 8192;
pub const XLOG_CHECKPOINT_SHUTDOWN: u8 = 0x00;
pub const XLOG_CHECKPOINT_ONLINE: u8 = 0x10;
pub const XLP_LONG_HEADER: u16 = 0x0002;
pub const PG_MAJORVERSION: &'static str = "14";
// List of subdirectories inside pgdata.
// Copied from src/bin/initdb/initdb.c
pub const PGDATA_SUBDIRS: [&'static str; 22] = [
"global",
"pg_wal/archive_status",
"pg_commit_ts",
"pg_dynshmem",
"pg_notify",
"pg_serial",
"pg_snapshots",
"pg_subtrans",
"pg_twophase",
"pg_multixact",
"pg_multixact/members",
"pg_multixact/offsets",
"base",
"base/1",
"pg_replslot",
"pg_tblspc",
"pg_stat",
"pg_stat_tmp",
"pg_xact",
"pg_logical",
"pg_logical/snapshots",
"pg_logical/mappings",
];
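// A hedged sketch of how such a list is typically consumed when materializing
// an empty PGDATA layout; the function is illustrative, not the pageserver's
// actual API.
#[allow(dead_code)]
fn create_pgdata_skeleton_example(pgdata: &std::path::Path) -> std::io::Result<()> {
    for subdir in PGDATA_SUBDIRS.iter() {
        std::fs::create_dir_all(pgdata.join(subdir))?;
    }
    Ok(())
}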
pub const PGDATA_SPECIAL_FILES: [&'static str; 4] = [
"pg_hba.conf",
"pg_ident.conf",
"postgresql.conf",
"postgresql.auto.conf",
];
pub static PG_HBA: &'static str = include_str!("../samples/pg_hba.conf");


@@ -0,0 +1,141 @@
//!
//! Common utilities for dealing with PostgreSQL relation files.
//!
use crate::pg_constants;
use lazy_static::lazy_static;
use regex::Regex;
#[derive(Debug, Clone, thiserror::Error, PartialEq)]
pub enum FilePathError {
#[error("invalid relation fork name")]
InvalidForkName,
#[error("invalid relation data file name")]
InvalidFileName,
}
impl From<core::num::ParseIntError> for FilePathError {
fn from(_e: core::num::ParseIntError) -> Self {
FilePathError::InvalidFileName
}
}
/// Convert Postgres relation file's fork suffix to fork number.
pub fn forkname_to_number(forkname: Option<&str>) -> Result<u8, FilePathError> {
match forkname {
// "main" is not in filenames, it's implicit if the fork name is not present
None => Ok(pg_constants::MAIN_FORKNUM),
Some("fsm") => Ok(pg_constants::FSM_FORKNUM),
Some("vm") => Ok(pg_constants::VISIBILITYMAP_FORKNUM),
Some("init") => Ok(pg_constants::INIT_FORKNUM),
Some(_) => Err(FilePathError::InvalidForkName),
}
}
/// Convert Postgres fork number to the right suffix of the relation data file.
pub fn forknumber_to_name(forknum: u8) -> Option<&'static str> {
match forknum {
pg_constants::MAIN_FORKNUM => None,
pg_constants::FSM_FORKNUM => Some("fsm"),
pg_constants::VISIBILITYMAP_FORKNUM => Some("vm"),
pg_constants::INIT_FORKNUM => Some("init"),
_ => Some("UNKNOWN FORKNUM"),
}
}
///
/// Parse a filename of a relation file. Returns (relfilenode, forknum, segno) tuple.
///
/// Formats:
/// <oid>
/// <oid>_<fork name>
/// <oid>.<segment number>
/// <oid>_<fork name>.<segment number>
///
/// See functions relpath() and _mdfd_segpath() in PostgreSQL sources.
///
pub fn parse_relfilename(fname: &str) -> Result<(u32, u8, u32), FilePathError> {
lazy_static! {
static ref RELFILE_RE: Regex =
Regex::new(r"^(?P<relnode>\d+)(_(?P<forkname>[a-z]+))?(\.(?P<segno>\d+))?$").unwrap();
}
let caps = RELFILE_RE
.captures(fname)
.ok_or(FilePathError::InvalidFileName)?;
let relnode_str = caps.name("relnode").unwrap().as_str();
let relnode = relnode_str.parse::<u32>()?;
let forkname = caps.name("forkname").map(|f| f.as_str());
let forknum = forkname_to_number(forkname)?;
let segno_match = caps.name("segno");
let segno = if segno_match.is_none() {
0
} else {
segno_match.unwrap().as_str().parse::<u32>()?
};
Ok((relnode, forknum, segno))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_valid_relfilenames() {
assert_eq!(parse_relfilename("1234"), Ok((1234, 0, 0)));
assert_eq!(parse_relfilename("1234_fsm"), Ok((1234, 1, 0)));
assert_eq!(parse_relfilename("1234_vm"), Ok((1234, 2, 0)));
assert_eq!(parse_relfilename("1234_init"), Ok((1234, 3, 0)));
assert_eq!(parse_relfilename("1234.12"), Ok((1234, 0, 12)));
assert_eq!(parse_relfilename("1234_fsm.12"), Ok((1234, 1, 12)));
assert_eq!(parse_relfilename("1234_vm.12"), Ok((1234, 2, 12)));
assert_eq!(parse_relfilename("1234_init.12"), Ok((1234, 3, 12)));
// relfilenode is unsigned, so it can go up to 2^32-1
assert_eq!(parse_relfilename("3147483648"), Ok((3147483648, 0, 0)));
}
#[test]
fn test_parse_invalid_relfilenames() {
assert_eq!(
parse_relfilename("foo"),
Err(FilePathError::InvalidFileName)
);
assert_eq!(
parse_relfilename("1.2.3"),
Err(FilePathError::InvalidFileName)
);
assert_eq!(
parse_relfilename("1234_invalid"),
Err(FilePathError::InvalidForkName)
);
assert_eq!(
parse_relfilename("1234_"),
Err(FilePathError::InvalidFileName)
);
// too large for u32
assert_eq!(
parse_relfilename("12345678901"),
Err(FilePathError::InvalidFileName)
);
assert_eq!(
parse_relfilename("-1234"),
Err(FilePathError::InvalidFileName)
);
}
#[test]
fn test_parse_weird_relfilenames() {
// we accept 0 for the relfilenode, but PostgreSQL should never do that.
assert_eq!(parse_relfilename("0"), Ok((0, 0, 0)));
// PostgreSQL has a limit of 2^32-2 blocks in a table. With 8k block size and
// 1 GB segments, the max segment number is 32767. But we accept larger values
// currently.
assert_eq!(parse_relfilename("1.123456"), Ok((1, 0, 123456)));
}
}


@@ -7,7 +7,18 @@
// have been named the same as the corresponding PostgreSQL functions instead.
//
use crate::pg_constants;
use crate::CheckPoint;
use crate::ControlFileData;
use crate::FullTransactionId;
use crate::XLogLongPageHeaderData;
use crate::XLogPageHeaderData;
use crate::XLogRecord;
use crate::XLOG_PAGE_MAGIC;
use byteorder::{ByteOrder, LittleEndian};
use bytes::{Buf, Bytes};
use bytes::{BufMut, BytesMut};
use crc32c::*;
use log::*;
use std::cmp::min;
@@ -19,32 +30,27 @@ use std::time::SystemTime;
pub const XLOG_FNAME_LEN: usize = 24;
pub const XLOG_BLCKSZ: usize = 8192;
pub const XLP_FIRST_IS_CONTRECORD: u16 = 0x0001;
pub const XLOG_PAGE_MAGIC: u16 = 0xD109;
pub const XLP_REM_LEN_OFFS: usize = 2 + 2 + 4 + 8;
pub const XLOG_SIZE_OF_XLOG_SHORT_PHD: usize = XLP_REM_LEN_OFFS + 4 + 4;
pub const XLOG_SIZE_OF_XLOG_LONG_PHD: usize = XLOG_SIZE_OF_XLOG_SHORT_PHD + 8 + 4 + 4;
pub const XLOG_RECORD_CRC_OFFS: usize = 4 + 4 + 8 + 1 + 1 + 2;
pub const XLOG_SIZE_OF_XLOG_RECORD: usize = XLOG_RECORD_CRC_OFFS + 4;
pub const MAX_SEND_SIZE: usize = XLOG_BLCKSZ * 16;
pub const XLOG_SIZE_OF_XLOG_SHORT_PHD: usize = std::mem::size_of::<XLogPageHeaderData>();
pub const XLOG_SIZE_OF_XLOG_LONG_PHD: usize = std::mem::size_of::<XLogLongPageHeaderData>();
pub const XLOG_SIZE_OF_XLOG_RECORD: usize = std::mem::size_of::<XLogRecord>();
pub const SIZE_OF_XLOG_RECORD_DATA_HEADER_SHORT: usize = 1 * 2;
pub type XLogRecPtr = u64;
pub type TimeLineID = u32;
pub type TimestampTz = u64;
pub type TimestampTz = i64;
pub type XLogSegNo = u64;
#[allow(non_snake_case)]
pub fn XLogSegmentOffset(xlogptr: XLogRecPtr, wal_segsz_bytes: usize) -> u32 {
(xlogptr as u32) & (wal_segsz_bytes as u32 - 1)
}
const XID_CHECKPOINT_INTERVAL: u32 = 1024;
#[allow(non_snake_case)]
pub fn XLogSegmentsPerXLogId(wal_segsz_bytes: usize) -> XLogSegNo {
(0x100000000u64 / wal_segsz_bytes as u64) as XLogSegNo
}
#[allow(non_snake_case)]
pub fn XLByteToSeg(xlogptr: XLogRecPtr, wal_segsz_bytes: usize) -> XLogSegNo {
xlogptr / wal_segsz_bytes as u64
}
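// A small worked example of the segment math above with the default 16 MiB
// segment size (a hedged sketch, assuming it lives in this module):
// LSN 0x0000000001000060 lands in segment 1 at offset 0x60.
#[test]
fn segment_math_example() {
    let wal_segsz_bytes: usize = 16 * 1024 * 1024;
    let lsn: XLogRecPtr = 0x0000_0000_0100_0060;
    assert_eq!(XLByteToSeg(lsn, wal_segsz_bytes), 1);
    assert_eq!(XLogSegmentOffset(lsn, wal_segsz_bytes), 0x60);
}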
#[allow(non_snake_case)]
pub fn XLogSegNoOffsetToRecPtr(
segno: XLogSegNo,
@@ -89,9 +95,9 @@ pub fn get_current_timestamp() -> TimestampTz {
const USECS_PER_SEC: u64 = 1000000;
match SystemTime::now().duration_since(SystemTime::UNIX_EPOCH) {
Ok(n) => {
(n.as_secs() - ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY))
((n.as_secs() - ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY))
* USECS_PER_SEC
+ n.subsec_micros() as u64
+ n.subsec_micros() as u64) as i64
}
Err(_) => panic!("SystemTime before UNIX EPOCH!"),
}
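// A worked check of the epoch shift used above: the Postgres epoch is
// 2000-01-01, i.e. 10957 days after the Unix epoch, and timestamps count
// microseconds since then. The concrete input below is illustrative.
#[test]
fn postgres_epoch_shift_example() {
    let unix_secs: u64 = 1_600_000_000; // 2020-09-13T12:26:40Z
    let epoch_shift_secs: u64 = 10_957 * 86_400; // days from 1970-01-01 to 2000-01-01
    let pg_micros = ((unix_secs - epoch_shift_secs) * 1_000_000) as i64;
    assert_eq!(pg_micros, 653_315_200_000_000);
}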
@@ -126,7 +132,7 @@ fn find_end_of_wal_segment(
let xlp_magic = LittleEndian::read_u16(&buf[0..2]);
let xlp_info = LittleEndian::read_u16(&buf[2..4]);
let xlp_rem_len = LittleEndian::read_u32(&buf[XLP_REM_LEN_OFFS..XLP_REM_LEN_OFFS + 4]);
if xlp_magic != XLOG_PAGE_MAGIC {
if xlp_magic != XLOG_PAGE_MAGIC as u16 {
info!("Invalid WAL file {}.partial magic {}", file_name, xlp_magic);
break;
}
@@ -205,33 +211,31 @@ pub fn find_end_of_wal(
let mut high_tli: TimeLineID = 0;
let mut high_ispartial = false;
for entry in fs::read_dir(data_dir).unwrap() {
if let Ok(entry) = entry {
let ispartial: bool;
let entry_name = entry.file_name();
let fname = entry_name.to_str().unwrap();
/*
* Check if the filename looks like an xlog file, or a .partial file.
*/
if IsXLogFileName(fname) {
ispartial = false;
} else if IsPartialXLogFileName(fname) {
ispartial = true;
} else {
continue;
}
let (segno, tli) = XLogFromFileName(fname, wal_seg_size);
if !ispartial && entry.metadata().unwrap().len() != wal_seg_size as u64 {
continue;
}
if segno > high_segno
|| (segno == high_segno && tli > high_tli)
|| (segno == high_segno && tli == high_tli && high_ispartial && !ispartial)
{
high_segno = segno;
high_tli = tli;
high_ispartial = ispartial;
}
for entry in fs::read_dir(data_dir).unwrap().flatten() {
let ispartial: bool;
let entry_name = entry.file_name();
let fname = entry_name.to_str().unwrap();
/*
* Check if the filename looks like an xlog file, or a .partial file.
*/
if IsXLogFileName(fname) {
ispartial = false;
} else if IsPartialXLogFileName(fname) {
ispartial = true;
} else {
continue;
}
let (segno, tli) = XLogFromFileName(fname, wal_seg_size);
if !ispartial && entry.metadata().unwrap().len() != wal_seg_size as u64 {
continue;
}
if segno > high_segno
|| (segno == high_segno && tli > high_tli)
|| (segno == high_segno && tli == high_tli && high_ispartial && !ispartial)
{
high_segno = segno;
high_tli = tli;
high_ispartial = ispartial;
}
}
if high_segno > 0 {
@@ -264,3 +268,233 @@ pub fn main() {
tli
);
}
impl XLogRecord {
pub fn from_bytes(buf: &mut Bytes) -> XLogRecord {
XLogRecord {
xl_tot_len: buf.get_u32_le(),
xl_xid: buf.get_u32_le(),
xl_prev: buf.get_u64_le(),
xl_info: buf.get_u8(),
xl_rmid: buf.get_u8(),
xl_crc: {
buf.advance(2);
buf.get_u32_le()
},
}
}
pub fn encode(&self) -> Bytes {
let b: [u8; XLOG_SIZE_OF_XLOG_RECORD];
b = unsafe { std::mem::transmute::<XLogRecord, [u8; XLOG_SIZE_OF_XLOG_RECORD]>(*self) };
Bytes::copy_from_slice(&b[..])
}
// Is this record an XLOG_SWITCH record? Those need some special processing.
pub fn is_xlog_switch_record(&self) -> bool {
self.xl_info == pg_constants::XLOG_SWITCH && self.xl_rmid == pg_constants::RM_XLOG_ID
}
}
impl XLogPageHeaderData {
pub fn from_bytes<B: Buf>(buf: &mut B) -> XLogPageHeaderData {
let hdr: XLogPageHeaderData = XLogPageHeaderData {
xlp_magic: buf.get_u16_le(),
xlp_info: buf.get_u16_le(),
xlp_tli: buf.get_u32_le(),
xlp_pageaddr: buf.get_u64_le(),
xlp_rem_len: buf.get_u32_le(),
};
buf.get_u32_le(); //padding
hdr
}
}
impl XLogLongPageHeaderData {
pub fn from_bytes<B: Buf>(buf: &mut B) -> XLogLongPageHeaderData {
XLogLongPageHeaderData {
std: XLogPageHeaderData::from_bytes(buf),
xlp_sysid: buf.get_u64_le(),
xlp_seg_size: buf.get_u32_le(),
xlp_xlog_blcksz: buf.get_u32_le(),
}
}
pub fn encode(&self) -> Bytes {
let b: [u8; XLOG_SIZE_OF_XLOG_LONG_PHD];
b = unsafe {
std::mem::transmute::<XLogLongPageHeaderData, [u8; XLOG_SIZE_OF_XLOG_LONG_PHD]>(*self)
};
Bytes::copy_from_slice(&b[..])
}
}
pub const SIZEOF_CHECKPOINT: usize = std::mem::size_of::<CheckPoint>();
impl CheckPoint {
pub fn encode(&self) -> Bytes {
let b: [u8; SIZEOF_CHECKPOINT];
b = unsafe { std::mem::transmute::<CheckPoint, [u8; SIZEOF_CHECKPOINT]>(*self) };
Bytes::copy_from_slice(&b[..])
}
pub fn decode(buf: &[u8]) -> Result<CheckPoint, anyhow::Error> {
let mut b = [0u8; SIZEOF_CHECKPOINT];
b.copy_from_slice(&buf[0..SIZEOF_CHECKPOINT]);
let checkpoint: CheckPoint;
checkpoint = unsafe { std::mem::transmute::<[u8; SIZEOF_CHECKPOINT], CheckPoint>(b) };
Ok(checkpoint)
}
// Update next XID based on provided new_xid and stored epoch.
// Next XID should be greater than new_xid.
// Also take into account 32-bit wrap-around.
pub fn update_next_xid(&mut self, xid: u32) {
let xid = xid.wrapping_add(XID_CHECKPOINT_INTERVAL - 1) & !(XID_CHECKPOINT_INTERVAL - 1);
let full_xid = self.nextXid.value;
let new_xid = std::cmp::max(xid + 1, pg_constants::FIRST_NORMAL_TRANSACTION_ID);
let old_xid = full_xid as u32;
if new_xid.wrapping_sub(old_xid) as i32 > 0 {
let mut epoch = full_xid >> 32;
if new_xid < old_xid {
// wrap-around
epoch += 1;
}
self.nextXid = FullTransactionId {
value: (epoch << 32) | new_xid as u64,
};
}
}
}
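// A standalone sketch of the wrap-around comparison used in update_next_xid
// (illustrative names, not the project's API): doing the comparison in i32
// space means an XID that has wrapped past u32::MAX still counts as newer.
#[test]
fn xid_wraparound_comparison_example() {
    fn next_xid_advances(old_xid: u32, new_xid: u32) -> bool {
        new_xid.wrapping_sub(old_xid) as i32 > 0
    }
    assert!(next_xid_advances(100, 200)); // normal advance
    assert!(next_xid_advances(u32::MAX - 10, 5)); // advance across the wrap
    assert!(!next_xid_advances(200, 100)); // going backwards is not an advance
}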
//
// Generate new WAL segment with single XLOG_CHECKPOINT_SHUTDOWN record.
// We need this segment to start the compute node.
// In order to minimize changes in Postgres core, we prefer to
// provide a WAL segment from which it can extract the checkpoint record in the standard way,
// rather than implement some alternative mechanism.
//
pub fn generate_wal_segment(pg_control: &ControlFileData) -> Bytes {
let mut seg_buf = BytesMut::with_capacity(pg_constants::WAL_SEGMENT_SIZE as usize);
let hdr = XLogLongPageHeaderData {
std: {
XLogPageHeaderData {
xlp_magic: XLOG_PAGE_MAGIC as u16,
xlp_info: pg_constants::XLP_LONG_HEADER,
xlp_tli: 1, // FIXME: always use Postgres timeline 1
xlp_pageaddr: pg_control.checkPoint - XLOG_SIZE_OF_XLOG_LONG_PHD as u64,
xlp_rem_len: 0,
}
},
xlp_sysid: pg_control.system_identifier,
xlp_seg_size: pg_constants::WAL_SEGMENT_SIZE as u32,
xlp_xlog_blcksz: XLOG_BLCKSZ as u32,
};
let hdr_bytes = hdr.encode();
seg_buf.extend_from_slice(&hdr_bytes);
let rec_hdr = XLogRecord {
xl_tot_len: (XLOG_SIZE_OF_XLOG_RECORD
+ SIZE_OF_XLOG_RECORD_DATA_HEADER_SHORT
+ SIZEOF_CHECKPOINT) as u32,
xl_xid: 0, //0 is for InvalidTransactionId
xl_prev: 0,
xl_info: pg_constants::XLOG_CHECKPOINT_SHUTDOWN,
xl_rmid: pg_constants::RM_XLOG_ID,
xl_crc: 0,
};
let mut rec_short_hdr_bytes = BytesMut::new();
rec_short_hdr_bytes.put_u8(pg_constants::XLR_BLOCK_ID_DATA_SHORT);
rec_short_hdr_bytes.put_u8(SIZEOF_CHECKPOINT as u8);
let rec_bytes = rec_hdr.encode();
let checkpoint_bytes = pg_control.checkPointCopy.encode();
// calculate the record checksum
let mut crc = 0;
crc = crc32c_append(crc, &rec_short_hdr_bytes[..]);
crc = crc32c_append(crc, &checkpoint_bytes[..]);
crc = crc32c_append(crc, &rec_bytes[0..XLOG_RECORD_CRC_OFFS]);
seg_buf.extend_from_slice(&rec_bytes[0..XLOG_RECORD_CRC_OFFS]);
seg_buf.put_u32_le(crc);
seg_buf.extend_from_slice(&rec_short_hdr_bytes);
seg_buf.extend_from_slice(&checkpoint_bytes);
//zero out the rest of the file
seg_buf.resize(pg_constants::WAL_SEGMENT_SIZE, 0);
seg_buf.freeze()
}
#[cfg(test)]
mod tests {
use super::*;
use regex::Regex;
use std::{env, process::Command, str::FromStr};
use zenith_utils::lsn::Lsn;
// Run find_end_of_wal against file in test_wal dir
// Ensure that it finds last record correctly
#[test]
pub fn test_find_end_of_wal() {
// 1. Run initdb to generate some WAL
let top_path = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("..");
let data_dir = top_path.join("test_output/test_find_end_of_wal");
let initdb_path = top_path.join("tmp_install/bin/initdb");
let lib_path = top_path.join("tmp_install/lib");
if data_dir.exists() {
fs::remove_dir_all(&data_dir).unwrap();
}
println!("Using initdb from '{}'", initdb_path.display());
println!("Data directory '{}'", data_dir.display());
let initdb_output = Command::new(initdb_path)
.args(&["-D", data_dir.to_str().unwrap()])
.arg("--no-instructions")
.arg("--no-sync")
.env_clear()
.env("LD_LIBRARY_PATH", &lib_path)
.env("DYLD_LIBRARY_PATH", &lib_path)
.output()
.unwrap();
assert!(initdb_output.status.success());
// 2. Pick WAL generated by initdb
let wal_dir = data_dir.join("pg_wal");
let wal_seg_size = 16 * 1024 * 1024;
// 3. Check end_of_wal on non-partial WAL segment (we treat it as fully populated)
let (wal_end, tli) = find_end_of_wal(&wal_dir, wal_seg_size, true);
let wal_end = Lsn(wal_end);
println!("wal_end={}, tli={}", wal_end, tli);
assert_eq!(wal_end, "0/2000000".parse::<Lsn>().unwrap());
// 4. Get the actual end of WAL by pg_waldump
let waldump_path = top_path.join("tmp_install/bin/pg_waldump");
let waldump_output = Command::new(waldump_path)
.arg(wal_dir.join("000000010000000000000001"))
.env_clear()
.env("LD_LIBRARY_PATH", &lib_path)
.env("DYLD_LIBRARY_PATH", &lib_path)
.output()
.unwrap();
let waldump_output = std::str::from_utf8(&waldump_output.stderr).unwrap();
println!("waldump_output = '{}'", &waldump_output);
let re = Regex::new(r"invalid record length at (.+):").unwrap();
let caps = re.captures(&waldump_output).unwrap();
let waldump_wal_end = Lsn::from_str(caps.get(1).unwrap().as_str()).unwrap();
// 5. Rename file to partial to actually find last valid lsn
fs::rename(
wal_dir.join("000000010000000000000001"),
wal_dir.join("000000010000000000000001.partial"),
)
.unwrap();
let (wal_end, tli) = find_end_of_wal(&wal_dir, wal_seg_size, true);
let wal_end = Lsn(wal_end);
println!("wal_end={}, tli={}", wal_end, tli);
assert_eq!(wal_end, waldump_wal_end);
}
}

proxy/Cargo.toml Normal file

@@ -0,0 +1,21 @@
[package]
name = "proxy"
version = "0.1.0"
authors = ["Stas Kelvich <stas.kelvich@gmail.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
anyhow = "1.0"
bytes = { version = "1.0.1", features = ['serde'] }
md5 = "0.7.0"
rand = "0.8.3"
hex = "0.4.3"
serde = "1"
serde_json = "1"
tokio = { version = "1.7.1", features = ["full"] }
tokio-postgres = "0.7.2"
clap = "2.33.0"
zenith_utils = { path = "../zenith_utils" }

proxy/src/cplane_api.rs Normal file

@@ -0,0 +1,92 @@
use anyhow::{bail, Result};
use serde::{Deserialize, Serialize};
use std::{
collections::HashMap,
net::{IpAddr, SocketAddr},
};
pub struct CPlaneApi {
// address: SocketAddr,
}
#[derive(Serialize, Deserialize)]
pub struct DatabaseInfo {
pub host: IpAddr, // TODO: allow host name here too
pub port: u16,
pub dbname: String,
pub user: String,
pub password: String,
}
impl DatabaseInfo {
pub fn socket_addr(&self) -> SocketAddr {
SocketAddr::new(self.host, self.port)
}
pub fn conn_string(&self) -> String {
format!(
"dbname={} user={} password={}",
self.dbname, self.user, self.password
)
}
}
// mock cplane api
impl CPlaneApi {
pub fn new(_address: &SocketAddr) -> CPlaneApi {
CPlaneApi {
// address: address.clone(),
}
}
pub fn check_auth(&self, user: &str, md5_response: &[u8], salt: &[u8; 4]) -> Result<()> {
// the password for both users is "mypass"
let auth_map: HashMap<_, &str> = vec![
("stas@zenith", "716ee6e1c4a9364d66285452c47402b1"),
("stas2@zenith", "3996f75df64c16a8bfaf01301b61d582"),
]
.into_iter()
.collect();
let stored_hash = auth_map
.get(&user)
.ok_or_else(|| anyhow::Error::msg("user not found"))?;
let salted_stored_hash = format!(
"md5{:x}",
md5::compute([stored_hash.as_bytes(), salt].concat())
);
let received_hash = std::str::from_utf8(&md5_response)?;
println!(
"auth: {} rh={} sh={} ssh={} {:?}",
user, received_hash, stored_hash, salted_stored_hash, salt
);
if received_hash == salted_stored_hash {
Ok(())
} else {
bail!("Auth failed")
}
}
pub fn get_database_uri(&self, _user: &str, _database: &str) -> Result<DatabaseInfo> {
Ok(DatabaseInfo {
host: "127.0.0.1".parse()?,
port: 5432,
dbname: "stas".to_string(),
user: "stas".to_string(),
password: "mypass".to_string(),
})
}
// pub fn create_database(&self, _user: &String, _database: &String) -> Result<DatabaseInfo> {
// Ok(DatabaseInfo {
// host: "127.0.0.1".parse()?,
// port: 5432,
// dbname: "stas".to_string(),
// user: "stas".to_string(),
// password: "mypass".to_string(),
// })
// }
}
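// A sketch of how the stored hashes above would be derived, assuming they
// follow the standard Postgres MD5 convention: stored = md5(password ||
// username), and the client answers with "md5" + md5(stored_hex || salt),
// which is what check_auth recomputes. The salt below is arbitrary.
#[cfg(test)]
mod md5_auth_example {
    #[test]
    fn derive_hashes() {
        let user = "stas@zenith";
        let password = "mypass";

        // What the server keeps: md5(password || username), hex-encoded.
        let stored = format!(
            "{:x}",
            md5::compute([password.as_bytes(), user.as_bytes()].concat())
        );

        // What the client sends for a given salt: "md5" + md5(stored_hex || salt).
        let salt = [0x01u8, 0x02, 0x03, 0x04];
        let wire = format!(
            "md5{:x}",
            md5::compute([stored.as_bytes(), &salt[..]].concat())
        );

        println!("stored = {}, wire = {}", stored, wire);
    }
}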

proxy/src/main.rs Normal file

@@ -0,0 +1,106 @@
///
/// Postgres protocol proxy/router.
///
/// This service listens on the psql port, can check auth via an external service
/// (the control plane API in our case), and can create new databases and accounts
/// in a somewhat transparent manner (again via communication with the control plane API).
///
use std::{
collections::HashMap,
net::{SocketAddr, TcpListener},
sync::{mpsc, Mutex},
thread,
};
use clap::{App, Arg};
use cplane_api::DatabaseInfo;
mod cplane_api;
mod mgmt;
mod proxy;
pub struct ProxyConf {
/// main entrypoint for users to connect to
pub proxy_address: SocketAddr,
/// HTTP management endpoint. Upon user account creation, the control plane
/// will notify us here, so that we can 'unfreeze' the user session.
pub mgmt_address: SocketAddr,
/// send unauthenticated users to this URI
pub redirect_uri: String,
/// control plane address where we would check auth.
pub cplane_address: SocketAddr,
}
pub struct ProxyState {
pub conf: ProxyConf,
pub waiters: Mutex<HashMap<String, mpsc::Sender<anyhow::Result<DatabaseInfo>>>>,
}
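// A minimal, illustrative sketch (not the proxy's real flow) of the waiter
// pattern behind ProxyState: the proxy thread registers an mpsc Sender under a
// session id and blocks on the Receiver, while the mgmt handler looks the
// Sender up by session id and delivers the result.
#[cfg(test)]
mod waiter_pattern_example {
    use std::collections::HashMap;
    use std::sync::{mpsc, Mutex};

    #[test]
    fn register_and_deliver() {
        let waiters: Mutex<HashMap<String, mpsc::Sender<String>>> = Mutex::new(HashMap::new());

        // Proxy side: register a waiter for session "abc".
        let (tx, rx) = mpsc::channel();
        waiters.lock().unwrap().insert("abc".to_string(), tx);

        // Mgmt side: look up the session and deliver the result.
        let guard = waiters.lock().unwrap();
        if let Some(sender) = guard.get("abc") {
            sender.send("db connection info".to_string()).unwrap();
        }
        drop(guard);

        // Proxy side: the registered waiter receives the result and proceeds.
        assert_eq!(rx.recv().unwrap(), "db connection info");
    }
}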
fn main() -> anyhow::Result<()> {
let arg_matches = App::new("Zenith proxy/router")
.arg(
Arg::with_name("proxy")
.short("p")
.long("proxy")
.takes_value(true)
.help("listen for incoming client connections on ip:port")
.default_value("127.0.0.1:4432"),
)
.arg(
Arg::with_name("mgmt")
.short("m")
.long("mgmt")
.takes_value(true)
.help("listen for management callback connection on ip:port")
.default_value("127.0.0.1:7000"),
)
.arg(
Arg::with_name("uri")
.short("u")
.long("uri")
.takes_value(true)
.help("redirect unauthenticated users to given uri")
.default_value("http://localhost:3000/psql_session/"),
)
.get_matches();
let conf = ProxyConf {
proxy_address: arg_matches.value_of("proxy").unwrap().parse()?,
mgmt_address: arg_matches.value_of("mgmt").unwrap().parse()?,
redirect_uri: arg_matches.value_of("uri").unwrap().parse()?,
cplane_address: "127.0.0.1:3000".parse()?,
};
let state = ProxyState {
conf,
waiters: Mutex::new(HashMap::new()),
};
let state: &'static ProxyState = Box::leak(Box::new(state));
// Check that we can bind to address before further initialization
println!("Starting proxy on {}", state.conf.proxy_address);
let pageserver_listener = TcpListener::bind(state.conf.proxy_address)?;
println!("Starting mgmt on {}", state.conf.mgmt_address);
let mgmt_listener = TcpListener::bind(state.conf.mgmt_address)?;
let threads = vec![
// Spawn a thread to listen for connections. It will spawn further threads
// for each connection.
thread::Builder::new()
.name("Proxy thread".into())
.spawn(move || proxy::thread_main(&state, pageserver_listener))?,
thread::Builder::new()
.name("Mgmt thread".into())
.spawn(move || mgmt::thread_main(&state, mgmt_listener))?,
];
for t in threads.into_iter() {
t.join().unwrap()?;
}
Ok(())
}

proxy/src/mgmt.rs Normal file

@@ -0,0 +1,111 @@
use std::{
net::{TcpListener, TcpStream},
thread,
};
use anyhow::bail;
use bytes::Bytes;
use serde::{Deserialize, Serialize};
use zenith_utils::{
postgres_backend::{self, query_from_cstring, AuthType, PostgresBackend},
pq_proto::{BeMessage, SINGLE_COL_ROWDESC},
};
use crate::{cplane_api::DatabaseInfo, ProxyState};
///
/// Main management (mgmt) listener loop.
///
/// Listens for connections, and launches a new handler thread for each.
///
pub fn thread_main(state: &'static ProxyState, listener: TcpListener) -> anyhow::Result<()> {
loop {
let (socket, peer_addr) = listener.accept()?;
println!("accepted connection from {}", peer_addr);
socket.set_nodelay(true).unwrap();
thread::spawn(move || {
if let Err(err) = mgmt_conn_main(state, socket) {
println!("error: {}", err);
}
});
}
}
pub fn mgmt_conn_main(state: &'static ProxyState, socket: TcpStream) -> anyhow::Result<()> {
let mut conn_handler = MgmtHandler { state };
let pgbackend = PostgresBackend::new(socket, AuthType::Trust)?;
pgbackend.run(&mut conn_handler)
}
struct MgmtHandler {
state: &'static ProxyState,
}
/// Serialized examples:
// {
// "session_id": "71d6d03e6d93d99a",
// "result": {
// "Success": {
// "host": "127.0.0.1",
// "port": 5432,
// "dbname": "stas",
// "user": "stas"
// "password": "mypass"
// }
// }
// }
// {
// "session_id": "71d6d03e6d93d99a",
// "result": {
// "Failure": "oops"
// }
// }
#[derive(Serialize, Deserialize)]
pub struct PsqlSessionResponse {
session_id: String,
result: PsqlSessionResult,
}
#[derive(Serialize, Deserialize)]
pub enum PsqlSessionResult {
Success(DatabaseInfo),
Failure(String),
}
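// A hedged sketch of parsing the JSON shape documented above with serde_json,
// using stand-in types (the real DatabaseInfo lives in cplane_api):
#[cfg(test)]
mod session_response_example {
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct SessionResponse {
        session_id: String,
        result: SessionResult,
    }

    #[derive(Deserialize)]
    enum SessionResult {
        Success(DbInfo),
        Failure(String),
    }

    #[derive(Deserialize)]
    #[allow(dead_code)]
    struct DbInfo {
        host: String,
        port: u16,
        dbname: String,
        user: String,
        password: String,
    }

    #[test]
    fn parse_failure_response() {
        let json = r#"{"session_id":"71d6d03e6d93d99a","result":{"Failure":"oops"}}"#;
        let resp: SessionResponse = serde_json::from_str(json).unwrap();
        assert_eq!(resp.session_id, "71d6d03e6d93d99a");
        match resp.result {
            SessionResult::Failure(msg) => assert_eq!(msg, "oops"),
            SessionResult::Success(_) => unreachable!(),
        }
    }
}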
impl postgres_backend::Handler for MgmtHandler {
fn process_query(
&mut self,
pgb: &mut PostgresBackend,
query_string: Bytes,
) -> anyhow::Result<()> {
let query_string = query_from_cstring(query_string);
println!("Got mgmt query: '{}'", std::str::from_utf8(&query_string)?);
let resp: PsqlSessionResponse = serde_json::from_slice(&query_string)?;
let waiters = self.state.waiters.lock().unwrap();
let sender = waiters
.get(&resp.session_id)
.ok_or_else(|| anyhow::Error::msg("psql_session_id is not found"))?;
match resp.result {
PsqlSessionResult::Success(db_info) => {
sender.send(Ok(db_info))?;
pgb.write_message_noflush(&SINGLE_COL_ROWDESC)?
.write_message_noflush(&BeMessage::DataRow(&[Some(b"ok")]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
pgb.flush()?;
Ok(())
}
PsqlSessionResult::Failure(message) => {
sender.send(Err(anyhow::Error::msg(message.clone())))?;
bail!("psql session request failed: {}", message)
}
}
}
}

proxy/src/proxy.rs Normal file

@@ -0,0 +1,256 @@
use crate::cplane_api::CPlaneApi;
use crate::cplane_api::DatabaseInfo;
use crate::ProxyState;
use anyhow::bail;
use tokio_postgres::NoTls;
use rand::Rng;
use std::sync::mpsc::channel;
use std::thread;
use tokio::io::AsyncWriteExt;
use zenith_utils::postgres_backend::{PostgresBackend, ProtoState};
use zenith_utils::pq_proto::*;
use zenith_utils::{postgres_backend, pq_proto::BeMessage};
///
/// Main proxy listener loop.
///
/// Listens for connections, and launches a new handler thread for each.
///
pub fn thread_main(
state: &'static ProxyState,
listener: std::net::TcpListener,
) -> anyhow::Result<()> {
loop {
let (socket, peer_addr) = listener.accept()?;
println!("accepted connection from {}", peer_addr);
socket.set_nodelay(true).unwrap();
thread::spawn(move || {
if let Err(err) = proxy_conn_main(state, socket) {
println!("error: {}", err);
}
});
}
}
// XXX: clean up fields
struct ProxyConnection {
state: &'static ProxyState,
cplane: CPlaneApi,
user: String,
database: String,
pgb: PostgresBackend,
md5_salt: [u8; 4],
psql_session_id: String,
}
pub fn proxy_conn_main(
state: &'static ProxyState,
socket: std::net::TcpStream,
) -> anyhow::Result<()> {
let mut conn = ProxyConnection {
state,
cplane: CPlaneApi::new(&state.conf.cplane_address),
user: "".into(),
database: "".into(),
pgb: PostgresBackend::new(socket, postgres_backend::AuthType::MD5)?,
md5_salt: [0u8; 4],
psql_session_id: "".into(),
};
// Check StartupMessage
// This will set conn.existing_user and we can decide on next actions
conn.handle_startup()?;
// both scenarios here should end up producing a database connection string
let db_info = if conn.is_existing_user() {
conn.handle_existing_user()?
} else {
conn.handle_new_user()?
};
// ok, proxy pass user connection to database_uri
let runtime = tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.unwrap();
let _ = runtime.block_on(proxy_pass(conn.pgb, db_info))?;
println!("proxy_conn_main done;");
Ok(())
}
impl ProxyConnection {
fn is_existing_user(&self) -> bool {
self.user.ends_with("@zenith")
}
fn handle_startup(&mut self) -> anyhow::Result<()> {
loop {
let msg = self.pgb.read_message()?;
println!("got message {:?}", msg);
match msg {
Some(FeMessage::StartupMessage(m)) => {
println!("got startup message {:?}", m);
match m.kind {
StartupRequestCode::NegotiateGss | StartupRequestCode::NegotiateSsl => {
println!("SSL requested");
self.pgb.write_message(&BeMessage::Negotiate)?;
}
StartupRequestCode::Normal => {
self.user = m
.params
.get("user")
.ok_or_else(|| {
anyhow::Error::msg("user is required in startup packet")
})?
.into();
self.database = m
.params
.get("database")
.ok_or_else(|| {
anyhow::Error::msg("database is required in startup packet")
})?
.into();
break;
}
StartupRequestCode::Cancel => break,
}
}
None => {
bail!("connection closed")
}
unexpected => {
bail!("unexpected message type : {:?}", unexpected)
}
}
}
Ok(())
}
fn handle_existing_user(&mut self) -> anyhow::Result<DatabaseInfo> {
// ask password
rand::thread_rng().fill(&mut self.md5_salt);
self.pgb
.write_message(&BeMessage::AuthenticationMD5Password(&self.md5_salt))?;
self.pgb.state = ProtoState::Authentication; // XXX
// check password
println!("handle_existing_user");
let msg = self.pgb.read_message()?;
println!("got message {:?}", msg);
if let Some(FeMessage::PasswordMessage(m)) = msg {
println!("got password message '{:?}'", m);
assert!(self.is_existing_user());
let (_trailing_null, md5_response) = m
.split_last()
.ok_or_else(|| anyhow::Error::msg("unexpected password message"))?;
if let Err(e) = self.check_auth_md5(md5_response) {
self.pgb
.write_message(&BeMessage::ErrorResponse(format!("{}", e)))?;
bail!("auth failed: {}", e);
} else {
self.pgb
.write_message_noflush(&BeMessage::AuthenticationOk)?;
self.pgb
.write_message_noflush(&BeMessage::ParameterStatus)?;
self.pgb.write_message(&BeMessage::ReadyForQuery)?;
}
}
// ok, we are authorized
self.cplane.get_database_uri(&self.user, &self.database)
}
fn handle_new_user(&mut self) -> anyhow::Result<DatabaseInfo> {
let mut psql_session_id_buf = [0u8; 8];
rand::thread_rng().fill(&mut psql_session_id_buf);
self.psql_session_id = hex::encode(psql_session_id_buf);
let hello_message = format!("☀️ Welcome to Zenith!
To proceed with database creation, open the following link:
{}{}
This only needs to be done once; we will then send you a '.pgpass' file that lets you access or create
databases without opening the browser.
", self.state.conf.redirect_uri, self.psql_session_id);
self.pgb
.write_message_noflush(&BeMessage::AuthenticationOk)?;
self.pgb
.write_message_noflush(&BeMessage::ParameterStatus)?;
self.pgb
.write_message(&BeMessage::NoticeResponse(hello_message))?;
// wait for database creation
let (tx, rx) = channel::<anyhow::Result<DatabaseInfo>>();
let _ = self
.state
.waiters
.lock()
.unwrap()
.insert(self.psql_session_id.clone(), tx);
// Wait for web console response
// XXX: respond with error to client
let dbinfo = rx.recv()??;
self.pgb.write_message_noflush(&BeMessage::NoticeResponse(
"Connecting to database.".to_string(),
))?;
self.pgb.write_message(&BeMessage::ReadyForQuery)?;
Ok(dbinfo)
}
fn check_auth_md5(&self, md5_response: &[u8]) -> anyhow::Result<()> {
assert!(self.is_existing_user());
self.cplane
.check_auth(self.user.as_str(), md5_response, &self.md5_salt)
}
}
async fn proxy_pass(pgb: PostgresBackend, db_info: DatabaseInfo) -> anyhow::Result<()> {
let mut socket = tokio::net::TcpStream::connect(db_info.socket_addr()).await?;
let config = db_info.conn_string().parse::<tokio_postgres::Config>()?;
let _ = config.connect_raw(&mut socket, NoTls).await?;
println!("Connected to pg, proxying");
let incoming_std = pgb.into_stream();
incoming_std.set_nonblocking(true)?;
let mut incoming_conn = tokio::net::TcpStream::from_std(incoming_std)?;
let (mut ri, mut wi) = incoming_conn.split();
let (mut ro, mut wo) = socket.split();
let client_to_server = async {
tokio::io::copy(&mut ri, &mut wo).await?;
wo.shutdown().await
};
let server_to_client = async {
tokio::io::copy(&mut ro, &mut wi).await?;
wi.shutdown().await
};
tokio::try_join!(client_to_server, server_to_client)?;
Ok(())
}

run_clippy.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
# If you save this in your path under the name "cargo-zclippy" (or whatever
# name you like), then you can run it as "cargo zclippy" from the shell prompt.
#
# If your text editor has rust-analyzer integration, you can also use this new
# command as a replacement for "cargo check" or "cargo clippy" and see clippy
# warnings and errors right in the editor.
# In vscode, this setting is Rust-analyzer>Check On Save:Command
cargo clippy "${@:2}" -- -A clippy::new_without_default -A clippy::manual_range_contains -A clippy::comparison_chain

test_runner/Pipfile Normal file

@@ -0,0 +1,19 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[packages]
pytest = ">=6.0.0"
psycopg2 = "*"
typing-extensions = "*"
pyjwt = {extras = ["crypto"], version = "*"}
[dev-packages]
yapf = "*"
flake8 = "*"
mypy = "*"
[requires]
# we need at least 3.6, but pipenv doesn't let us say that directly
python_version = "3"

Some files were not shown because too many files have changed in this diff.