This is a backwards-incompatible change. The new pageserver cannot read repositories created with an old pageserver binary, or vice versa.

Simplify Repository to a value-store
------------------------------------

Move the responsibility of tracking relation metadata, such as which relations exist and what their sizes are, from Repository to a new module, pgdatadir_mapping.rs. The interface to Repository is now simple key-value PUT/GET operations.

It's still not just any old key-value store, though. A Repository is still responsible for handling branching, and every GET operation comes with an LSN.

Mapping from Postgres data directory to keys/values
---------------------------------------------------

All the data is now stored in the key-value store. The pgdatadir_mapping.rs module handles the mapping from PostgreSQL objects, like relation pages and SLRUs, to key-value pairs. The key to the Repository key-value store is a Key struct, which consists of a few integer fields. It is wide enough to store a full RelFileNode, fork and block number, and to distinguish those from metadata keys.

pgdatadir_mapping.rs is also responsible for maintaining a "partitioning" of the keyspace. Partitioning means splitting the keyspace so that each partition holds a roughly equal number of keys. The partitioning is used when new image layer files are created, so that each image layer file ends up roughly the same size.

The partitioning is also how space used by deleted keys is reclaimed. The Repository implementation doesn't have any explicit support for deleting keys. Instead, deleted keys are simply omitted from the partitioning, and when a new image layer is created, the omitted keys are not copied over to the new image layer. We might want to implement tombstone keys in the future, to reclaim space faster, but this will work for now.

Changes to low-level layer file code
------------------------------------

The concept of a "segment" is gone. Each layer file can now store an arbitrary range of Keys.

Checkpointing, compaction
-------------------------

The background tasks are somewhat different now. Whenever checkpoint_distance is reached, the WAL receiver thread "freezes" the current in-memory layer and creates a new one. This is a quick operation and doesn't perform any I/O yet. It then launches a background "layer flushing thread" to write the frozen layer to disk as a new L0 delta layer. This mechanism takes care of durability, and it replaces the checkpointing thread.

Compaction is a new background operation that takes a bunch of L0 delta layers and reshuffles the data in them. It runs in a separate compaction thread.

Deployment
----------

This also contains changes to the ansible scripts that enable running multiple different pageservers at the same time in the staging environment. We will use that to keep an old version of the pageserver running, for clusters created with the old version, alongside a new pageserver running the new binary.

Author: Heikki Linnakangas
Author: Konstantin Knizhnik <knizhnik@zenith.tech>
Author: Andrey Taranik <andrey@zenith.tech>

Reviewed-by: Matthias Van De Meent <matthias@zenith.tech>
Reviewed-by: Bojan Serafimov <bojan@zenith.tech>
Reviewed-by: Konstantin Knizhnik <knizhnik@zenith.tech>
Reviewed-by: Anton Shyrabokau <antons@zenith.tech>
Reviewed-by: Dhammika Pathirana <dham@zenith.tech>
Reviewed-by: Kirill Bulatov <kirill@zenith.tech>
Reviewed-by: Anastasia Lubennikova <anastasia@zenith.tech>
Reviewed-by: Alexey Kondratov <alexey@zenith.tech>
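To make the "simple key-value PUT/GET with an LSN" interface described above concrete, here is a hedged, conceptual Python sketch. The pageserver itself is Rust; the names below (Key's fields, VersionedStore, put, get) are illustrative and are not the actual Repository API. The point it shows: every value lives under a structured key, writes carry an LSN, and every read returns the latest value written at or before the requested LSN.

from bisect import bisect_right
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Key:
    # Illustrative stand-in for the pageserver's Key struct: wide enough to
    # address a relation block (RelFileNode + fork + block number). The real
    # struct also reserves room to distinguish metadata keys.
    spcnode: int
    dbnode: int
    relnode: int
    forknum: int
    blknum: int

class VersionedStore:
    """Toy model of the simplified Repository interface: values are PUT at an
    LSN, and every GET asks for the latest value at or before a given LSN."""

    def __init__(self) -> None:
        # Per key, a list of (lsn, value) pairs, appended in LSN order
        # (which is the order WAL ingestion produces them in).
        self._versions: Dict[Key, List[Tuple[int, bytes]]] = {}

    def put(self, key: Key, lsn: int, value: bytes) -> None:
        self._versions.setdefault(key, []).append((lsn, value))

    def get(self, key: Key, lsn: int) -> bytes:
        versions = self._versions.get(key, [])
        lsns = [v_lsn for v_lsn, _ in versions]
        # Latest version whose LSN is <= the requested LSN.
        idx = bisect_right(lsns, lsn) - 1
        if idx < 0:
            raise KeyError(f"no version of {key} at or before LSN {lsn}")
        return versions[idx][1]

# The same key read at two different LSNs can return two different values.
store = VersionedStore()
page = Key(spcnode=1663, dbnode=1, relnode=16384, forknum=0, blknum=42)
store.put(page, lsn=100, value=b"page image as of LSN 100")
store.put(page, lsn=200, value=b"page image as of LSN 200")
assert store.get(page, lsn=150) == b"page image as of LSN 100"
assert store.get(page, lsn=250) == b"page image as of LSN 200"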
Zenith test runner
This directory contains integration tests.
Prerequisites:
- Correctly configured Python, see /docs/sourcetree.md
- Zenith and Postgres binaries
  - See the root README.md for build directions
  - Tests can be run from the git tree; or see the environment variables below to run from other directories.
- The zenith git repo, including the postgres submodule (for some tests, e.g. pg_regress)
Test Organization
The tests are divided into a few batches, such that each batch takes roughly the same amount of time. The batches can be run in parallel, to minimize total runtime. Currently, there are only two batches:
- test_batch_pg_regress: Runs PostgreSQL regression tests
- test_others: All other tests
Running the tests
There is a wrapper script to invoke pytest: ./scripts/pytest.
It accepts all the arguments that are accepted by pytest.
Depending on your installation options, pytest might be invoked directly instead.
Test state (postgres data, pageserver state, and log files) will
be stored under a directory test_output.
You can run all the tests with:
./scripts/pytest
If you want to run all the tests in a particular file:
./scripts/pytest test_pgbench.py
If you want to run all tests that have the string "bench" in their names:
./scripts/pytest -k bench
Useful environment variables:
ZENITH_BIN: The directory where zenith binaries can be found.
POSTGRES_DISTRIB_DIR: The directory where postgres distribution can be found.
TEST_OUTPUT: Set the directory where test state and test output files
should go.
TEST_SHARED_FIXTURES: Try to re-use a single pageserver for all the tests.
ZENITH_PAGESERVER_OVERRIDES: a ;-separated set of config values that are passed as --pageserver-config-override=${value} parameters when the zenith CLI is invoked
FORCE_MOCK_S3: initializes every test's pageserver with a mock S3 used as remote storage.
RUST_LOG: logging configuration to pass into Zenith CLI
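For example, to run the tests against binaries and an output directory outside the git tree (the paths below are placeholders):
TEST_OUTPUT=/tmp/test_output ZENITH_BIN=/path/to/zenith/bin POSTGRES_DISTRIB_DIR=/path/to/postgres/install ./scripts/pytest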
Let stdout, stderr and INFO log messages go to the terminal instead of capturing them:
./scripts/pytest -s --log-cli-level=INFO ...
(Note many tests capture subprocess outputs separately, so this may not
show much.)
Exit after the first test failure:
./scripts/pytest -x ...
(there are many more pytest options; run pytest -h to see them.)
Writing a test
Every test needs a Zenith Environment, or ZenithEnv, to operate in. A Zenith Environment is like a little cloud-in-a-box: it consists of a Pageserver, 0-N Safekeepers, and compute Postgres nodes. The connections between them can be configured to use JWT authentication tokens, and some other configuration options can be tweaked too.
The easiest way to get access to a Zenith Environment is by using the zenith_simple_env
fixture. The 'simple' env may be shared across multiple tests, so don't shut down the nodes
or make other destructive changes in that environment. Also don't assume that
there are no tenants or branches or data in the cluster. For convenience, there is a
branch called empty, though. The convention is to create a test-specific branch of
that and load any test data there, instead of the 'main' branch.
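For example, a test using the shared environment might look roughly like the sketch below. This is only an illustration: the helper names used here (zenith_cli.create_branch, postgres.create_start, safe_psql) are assumptions and may differ from the actual fixture API, so check the fixtures module for the real names; imports of the fixture types are omitted, as in the example further down.

def test_uses_simple_env(zenith_simple_env: ZenithEnv):
    env = zenith_simple_env

    # Work on a test-specific branch of 'empty'; don't modify 'main' and
    # don't assume the shared cluster has no other tenants or branches.
    env.zenith_cli.create_branch("test_uses_simple_env", "empty")
    pg = env.postgres.create_start("test_uses_simple_env")

    # Load test data into this test's own branch only.
    pg.safe_psql("CREATE TABLE t AS SELECT generate_series(1, 100) AS x")
    assert pg.safe_psql("SELECT count(*) FROM t")[0][0] == 100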
For more complicated cases, you can build a custom Zenith Environment with the zenith_env_builder fixture:
def test_foobar(zenith_env_builder: ZenithEnvBuilder):
    # Prescribe the environment.
    # We want to have 3 safekeeper nodes, and use JWT authentication in the
    # connections to the page server
    zenith_env_builder.num_safekeepers = 3
    zenith_env_builder.set_pageserver_auth(True)

    # Now create the environment. This initializes the repository, and starts
    # up the page server and the safekeepers
    env = zenith_env_builder.init_start()

    # Run the test
    ...
For more information about pytest fixtures, see https://docs.pytest.org/en/stable/fixture.html
At the end of a test, all the nodes in the environment are automatically stopped, so you
don't need to worry about cleaning up. Logs and test data are preserved for analysis,
in a directory under ../test_output/<testname>
Before submitting a patch
Ensure that you pass all obligatory checks.
Also consider:
- Writing a couple of docstrings to clarify the reasoning behind a new test.
- Adding more type hints to your code to avoid Any, especially (see the short example after this list):
  - For fixture parameters, they are not automatically deduced.
  - For function arguments and return values.
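A minimal sketch of what that looks like in practice. ZenithEnv and zenith_simple_env are just the fixture names used earlier in this document (their import is omitted, as in the example above), and the helper function is purely illustrative:

def test_typed_example(zenith_simple_env: ZenithEnv) -> None:
    # Fixture parameters: annotate them explicitly. pytest injects fixtures
    # by name, so type checkers cannot deduce their types on their own.
    env: ZenithEnv = zenith_simple_env

    # Helper arguments and return values: annotate these too, so they don't
    # silently degrade to Any.
    def first_column(rows: list) -> list:
        return [row[0] for row in rows]

    ...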