Commit b45d5368b0 by Heikki Linnakangas (2021-09-01 09:42:18 +03:00): Don't create an image layer for dropped relations.
I noticed that the timeline directory contained files like this:

    pg_xact_0000_0_000000000169C3C2_00000000016BB399
    pg_xact_0000_0_00000000016BB399
    pg_xact_0000_0_00000000016BB399_00000000016BDD06
    pg_xact_0000_0_00000000016BDD06
    pg_xact_0000_0_00000000016BDD06_00000000016C63AA
    pg_xact_0000_0_00000000016C63AA
    pg_xact_0000_0_00000000016C63AA_0000000001765226_DROPPED
    pg_xact_0000_0_0000000001765226
    pg_xact_0001_0_00000000016BB77E_00000000016BDD06
    pg_xact_0001_0_00000000016BDD06
    pg_xact_0001_0_00000000016BDD06_0000000001765226_DROPPED
    pg_xact_0001_0_0000000001765226

Note how there is an image file after each DROPPED file. It's a waste of
time and space to materialize an image of the file at the point where it's
dropped; no one is going to request pages of a dropped relation. It's also
a correctness issue: list_rels() and list_nonrels() will not consider the
relation as unlinked unless the latest layer indicates so, and there is no
concept of a dropped image layer. That was causing the test_clog_truncate
test to fail when I adjusted the checkpointer to force checkpoints more
aggressively.

There are a bunch more issues related to dropped rels and branching;
see https://github.com/zenithdb/zenith/issues/502. Hence this doesn't
completely fix the issue I saw with test_clog_truncate either, but it's
a start.

Zenith

Zenith replaces the PostgreSQL storage layer and redistributes data across a cluster of nodes.

Architecture overview

A Zenith installation consists of compute nodes and a storage engine.

Compute nodes are stateless PostgreSQL nodes backed by the Zenith storage engine.

The Zenith storage engine consists of two major components:

  • Pageserver. Scalable storage backend for compute nodes.
  • WAL service. The service that receives WAL from compute nodes and ensures that it is stored durably.

Pageserver consists of:

  • Repository - the Zenith storage implementation.
  • WAL receiver - a service that receives WAL from the WAL service and stores it in the repository.
  • Page service - a service that communicates with compute nodes and responds with pages from the repository.
  • WAL redo - a service that builds pages from base images and WAL records at the Page service's request.

Running a local installation

  1. Install build dependencies and other useful packages

On Ubuntu or Debian this set of packages should be sufficient to build the code:

apt install build-essential libtool libreadline-dev zlib1g-dev flex bison libseccomp-dev \
libssl-dev clang

Rust 1.52 or later is also required.
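
If Rust is not yet installed, rustup is one common way to get a recent toolchain (this is the standard rustup bootstrap, not a zenith-specific step):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup update stable   # make sure the toolchain is at least 1.52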

To run the psql client, install the postgresql-client package or modify PATH and LD_LIBRARY_PATH to include tmp_install/bin and tmp_install/lib, respectively.
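
For example, the PATH approach can look like this when run from the repo root (just an illustration of the note above; adjust paths for your checkout):

export PATH="$(pwd)/tmp_install/bin:$PATH"
export LD_LIBRARY_PATH="$(pwd)/tmp_install/lib:$LD_LIBRARY_PATH"
psql --version   # should now report the locally built client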

To run the integration tests (not required to use the code), install Python (3.6 or higher) and install the required Python packages by running pipenv install in the project directory.
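
For example (assuming pipenv itself is already installed, e.g. via pip install --user pipenv):

cd zenith
pipenv install   # installs the Python packages from the project's Pipfile
pipenv shell     # optional: work inside the resulting environment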

  2. Build zenith and patched postgres
git clone --recursive https://github.com/zenithdb/zenith.git
cd zenith
make -j5
  3. Start the pageserver and postgres on top of it (run these from the repo root):
# Create repository in .zenith with proper paths to binaries and data
# Later this will be the responsibility of a package install script
> ./target/debug/zenith init
pageserver init succeeded

# start pageserver
> ./target/debug/zenith start
Starting pageserver at '127.0.0.1:64000' in .zenith
Pageserver started

# start postgres on top of the pageserver
> ./target/debug/zenith pg start main
Starting postgres node at 'host=127.0.0.1 port=55432 user=stas'
waiting for server to start.... done

# check list of running postgres instances
> ./target/debug/zenith pg list
BRANCH	ADDRESS		LSN		STATUS
main	127.0.0.1:55432	0/1609610	running
  4. Now it is possible to connect to postgres and run some queries:
> psql -p55432 -h 127.0.0.1 -U zenith_admin postgres
postgres=# CREATE TABLE t(key int primary key, value text);
CREATE TABLE
postgres=# insert into t values(1,1);
INSERT 0 1
postgres=# select * from t;
 key | value
-----+-------
   1 | 1
(1 row)
  5. And create branches and run postgres on them:
# create branch named migration_check
> ./target/debug/zenith branch migration_check main
Created branch 'migration_check' at 0/1609610

# check branches tree
> ./target/debug/zenith branch
 main
 ┗━ @0/1609610: migration_check

# start postgres on that branch
> ./target/debug/zenith pg start migration_check
Starting postgres node at 'host=127.0.0.1 port=55433 user=stas'
waiting for server to start.... done

# this new postgres instance will have all the data from the 'main' postgres,
# but modifications to it will not affect the data in the original
> psql -p55433 -h 127.0.0.1 -U zenith_admin postgres
postgres=# select * from t;
 key | value
-----+-------
   1 | 1
(1 row)

postgres=# insert into t values(2,2);
INSERT 0 1

Running tests

git clone --recursive https://github.com/zenithdb/zenith.git
cd zenith
make # also builds postgres and installs it to ./tmp_install
cd test_runner
pytest
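
Standard pytest options work here too; for example, to run a single test by keyword (test_clog_truncate is one of the tests in this suite, per the commit message above):

pytest -k test_clog_truncate -v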

Documentation

Currently we use README files to cover design ideas and the overall architecture of each module, along with rustdoc-style documentation comments. See also /docs/ for a top-level overview of all available markdown documentation.

To view your rustdoc documentation in a browser, try running cargo doc --no-deps --open

Join the development
