Major changes and new concepts:
Simplify Repository to a value-store
------------------------------------
Move the responsibility of tracking relation metadata, like which relations
exist and what their sizes are, from Repository to a new module,
pgdatadir_mapping.rs. The interface to Repository now consists of simple
key-value PUT/GET operations.
It's still not just any old key-value store, though. A Repository is still
responsible for handling branching, and every GET operation comes with
an LSN.
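As a rough sketch of the resulting shape (the trait, method names and
signatures below are illustrative, not the actual interface):

```rust
use bytes::Bytes;

// Stand-ins for the real types; see the "Key" section below.
pub struct Key; // a handful of integer fields
pub struct Lsn(pub u64);

/// Illustrative sketch only; the real trait has more methods and different
/// signatures. The point is that reads and writes are plain key-value
/// operations, and every GET is qualified by an LSN.
pub trait Repository {
    /// Look up the value of `key` as it was at `lsn`, following branch
    /// ancestry if the key was last written on an ancestor timeline.
    fn get(&self, key: Key, lsn: Lsn) -> anyhow::Result<Bytes>;

    /// Record a new value for `key` at `lsn`.
    fn put(&mut self, key: Key, lsn: Lsn, value: Bytes) -> anyhow::Result<()>;
}
```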
Key
---
The key to the Repository key-value store is a Key struct, which consists
of a few integer fields. It's wide enough to store a full RelFileNode,
fork and block number, and to distinguish those from metadata keys.
See pgdatadir_mapping.rs for how relation blocks and metadata keys are
mapped to the Key struct.
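For illustration, the struct is roughly of this shape; the field comments
and the helper below are assumptions, and the real encoding lives in
pgdatadir_mapping.rs:

```rust
/// Illustrative sketch of the Key struct: a handful of integer fields, wide
/// enough to hold a full RelFileNode (spcnode/dbnode/relnode), the fork
/// number and the block number, with spare room to tag metadata keys so
/// they cannot collide with relation blocks.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct Key {
    pub field1: u8,  // key "class", e.g. relation block vs. metadata
    pub field2: u32, // spcnode
    pub field3: u32, // dbnode
    pub field4: u32, // relnode
    pub field5: u8,  // fork number
    pub field6: u32, // block number
}

/// Hypothetical helper showing how a relation block could map to a Key.
pub fn rel_block_to_key(spc: u32, db: u32, rel: u32, fork: u8, blk: u32) -> Key {
    Key { field1: 0, field2: spc, field3: db, field4: rel, field5: fork, field6: blk }
}
```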
Store arbitrary key-ranges in the layer files
---------------------------------------------
The concept of a "segment" is gone. Each layer file can store an arbitrary
range of Keys.
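Conceptually, a layer file is now described by a key range plus an LSN
range, along these lines (a sketch reusing the Key and Lsn stand-ins from
above; the names are not the actual types):

```rust
use std::ops::Range;

/// Illustrative descriptor of a layer file: instead of covering one
/// fixed-size segment of one relation, a layer holds an arbitrary
/// contiguous range of Keys over a range of LSNs.
pub struct LayerDescriptor {
    pub key_range: Range<Key>, // e.g. all keys of several small relations
    pub lsn_range: Range<Lsn>,
    pub is_delta: bool,        // delta layer (WAL records) vs. image layer
}
```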
TODO:
- Deleting keys, to reclaim space. This isn't visible to Postgres: dropping
  or truncating a relation works as you would expect when viewed from the
  compute node. If you drop a relation, for example, the relation is
  removed from the metadata entry, so that it appears to be gone. However,
  the layered repository implementation never reclaims the storage.
- Tracking "logical database size", for disk space quotas. That ought to
be reimplemented now in pgdatadir_mapping.rs, or perhaps in walingest.rs.
- LSM compaction. The logic for checkpointing and creating image layers is
  very dumb. AFAIK the *read* code could already deal with a full-fledged
  LSM tree consisting of the delta and image layers, but there is no code
  to take a bunch of delta layers and compact them, and the heuristics for
  when to create image layers are pretty dumb.
- The code to track the layers is inefficient. All layers are just stored in
a vector, and whenever we need to find a layer, we do a linear search in
it.
Have a separate routine and HTTP endpoint to create a timeline on
safekeepers. It is not used yet, i.e. the timeline is still created
implicitly, but we'll change that once the infrastructure for learning
which timelines are assigned to which safekeepers is ready, preventing
accidental creation by the compute node.
Change the format of the safekeeper control file, allowing it to store the
set of peers. Knowing the peers provides part of the foundation for peer
recovery (calculating minimum horizons like truncate_lsn for WAL truncation
and commit_lsn for the sync-safekeepers replacement) and for proper
membership change; similarly, we don't use it yet.
Taking advantage of the control file version bump, extract tenant_id and
timeline_id to the top level, where they are more suitable. Also add a bunch
of LSNs there and rename truncate_lsn to the more specific peer_horizon_lsn.
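A sketch of what the reshaped persisted state might contain: tenant_id,
timeline_id, peers, peer_horizon_lsn and commit_lsn come from the
description above, while everything else, including the type stand-ins and
the peer representation, is an assumption:

```rust
// Stand-ins for the existing id and LSN types in the codebase.
type ZTenantId = [u8; 16];
type ZTimelineId = [u8; 16];
type Lsn = u64;

/// Illustrative only; the real struct and its serialization differ.
pub struct SafeKeeperControlFile {
    // With the version bump, tenant and timeline ids live at the top level.
    pub tenant_id: ZTenantId,
    pub timeline_id: ZTimelineId,
    /// Set of peer safekeepers; knowing them lets us compute minimum
    /// horizons across peers and is a prerequisite for membership changes.
    pub peers: Vec<u64>,
    /// Renamed from truncate_lsn: WAL below this horizon is no longer
    /// needed by any peer and can be truncated.
    pub peer_horizon_lsn: Lsn,
    /// Commit horizon, used for the sync-safekeepers replacement.
    pub commit_lsn: Lsn,
}
```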
* Add --id argument to safekeeper, setting its unique u64 id.
In preparation for storage node messaging. IDs are supposed to be assigned
monotonically by the console. In tests they are issued by ZenithEnv; at the
zenith cli level and in fixtures, the string name is completely replaced by
the integer id. Example TOML configs are adjusted accordingly.
Sequential ids are chosen over Zid mainly because they are compact and easy
to type/remember.
* add node id to pageserver
This adds a node id parameter to the pageserver configuration. I also use a
simple builder to construct the pageserver config struct, to avoid setting
the node id to some temporary invalid value. Some of the changes in the test
fixtures are needed to split the init and start operations for the
environment.
Co-authored-by: Arseny Sher <sher-ars@yandex.ru>
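The builder approach mentioned above, as a minimal hypothetical sketch (the
names and fields here are illustrative, not the actual pageserver types):
the config can only be constructed once the node id has been supplied, so a
half-initialized config with a dummy id never exists.

```rust
/// Hypothetical, minimal illustration of the builder approach; the real
/// pageserver config has many more fields and different names. The point
/// is that build() fails unless the node id has been supplied, so no
/// temporary invalid id is ever stored in the config.
pub struct PageServerConfigBuilder {
    id: Option<u64>,
    listen_addr: Option<String>,
}

pub struct PageServerConfig {
    pub id: u64,
    pub listen_addr: String,
}

impl PageServerConfigBuilder {
    pub fn new() -> Self {
        PageServerConfigBuilder { id: None, listen_addr: None }
    }

    pub fn id(mut self, id: u64) -> Self {
        self.id = Some(id);
        self
    }

    pub fn listen_addr(mut self, addr: impl Into<String>) -> Self {
        self.listen_addr = Some(addr.into());
        self
    }

    /// Fails if any required field, in particular the node id, is missing.
    pub fn build(self) -> Result<PageServerConfig, &'static str> {
        Ok(PageServerConfig {
            id: self.id.ok_or("node id must be set")?,
            listen_addr: self.listen_addr.ok_or("listen_addr must be set")?,
        })
    }
}
```

Usage would look like
`PageServerConfigBuilder::new().id(7).listen_addr("127.0.0.1:6400").build()?`.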
* new deployment flow for staging and production
* ansible playbooks and circleci config fixes
* cleanup before merge
* additional cleanup before merge
* debug deployment to staging env
* binary artifacts path fix for ansible playbooks
* deployment flow refactored
* base64 decode fix for ssh key
* fix for console notification and production deploy settings
* cleanup after deployment tests
* fix - trigger release binaries download for production deploy
When several AppendRequests can be read from the socket without blocking,
they are processed together and fsync() on the segment file is called only
once. The segment file is no longer opened for every write request; the last
opened file is now cached inside PhysicalStorage. A new metric for WAL
flushes, FLUSH_WAL_SECONDS, was added to the storage. More errors were added
to the storage for non-sequential WAL writes; write_lsn can now be moved
only with calls to truncate_lsn(new_lsn).
New messages have been added to the ProposerAcceptorMessage enum. They
can't be deserialized directly and are currently used only for optimizing
flushes. The existing protocol wasn't changed, and flush will still be
called for every AppendRequest, as before.
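The gist of the batching, as a simplified sketch (the types and method
names are stand-ins, not the actual safekeeper code):

```rust
use std::io;

// Simplified stand-ins for the actual safekeeper types.
struct AppendRequest { /* WAL bytes plus metadata */ }

trait Storage {
    fn write(&mut self, req: &AppendRequest) -> io::Result<()>;
    fn flush(&mut self) -> io::Result<()>; // fsync() of the current segment file
}

/// Process every request that can be read from the socket without blocking,
/// write each one to the (cached) segment file, and call fsync() only once
/// for the whole batch instead of once per AppendRequest.
fn process_batch(
    storage: &mut impl Storage,
    mut try_recv: impl FnMut() -> Option<AppendRequest>,
) -> io::Result<()> {
    let mut wrote_anything = false;
    while let Some(req) = try_recv() {
        storage.write(&req)?;
        wrote_anything = true;
    }
    if wrote_anything {
        storage.flush()?; // single fsync for the whole batch
    }
    Ok(())
}
```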
Since commit fdd987c3ad, PageVersions was only used in InMemoryLayers. Let's
just "inline" the code into InMemoryLayer itself.
I originally did this as part of a bigger PR (#1267). With that PR, one
in-memory layer and one ephemeral file would hold page versions belonging
to multiple segments. Currently, PageVersions can only hold versions for a
single segment, so that would need to be changed. Rather than modify
PageVersions to support that, just remove it altogether.
These tests have intimate knowledge of the directory layout and the layer
file names used by the LayeredRepository implementation of the Repository
trait. Move them, so that all the tests that remain in repository.rs are
expected to work without changes with any implementation of Repository. Not
that we have any plans to create another Repository implementation any time
soon, but as long as we have the Repository interface, let's try to maintain
that abstraction in the tests too.
The test creates a page version with a string like "foo 123 at 0/10"
as the content. But the LSN stored in that string was wrong: the page
version stored at LSN 0/20 would say "foo <blk> at 0/10".
wal_storage.rs was split out from timeline.rs, safekeeper.rs and
send_wal.rs, and now contains all WAL-related code from the safekeeper.
There is now a PhysicalStorage for persisting WAL to disk and a WalReader
for reading it. This allows optimizing PhysicalStorage without affecting
too much of the other code.
There is also a separate structure for persisting the control file now, in
control_file.rs.
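In rough outline, the split looks something like this (a sketch only; the
actual types and method signatures in wal_storage.rs and control_file.rs
differ):

```rust
use std::io;

/// Writing side: owns the WAL segment files on disk, tracks how far WAL has
/// been written and fsynced, and is the only code that writes WAL.
pub struct PhysicalStorage { /* cached open segment file, write/flush LSNs */ }

impl PhysicalStorage {
    pub fn write_wal(&mut self, _startpos: u64, _buf: &[u8]) -> io::Result<()> { Ok(()) }
    pub fn flush_wal(&mut self) -> io::Result<()> { Ok(()) }
}

/// Reading side: streams WAL back from disk, e.g. for sending it out over
/// the replication protocol.
pub struct WalReader { /* read position, currently open segment file */ }

impl WalReader {
    pub fn read(&mut self, _buf: &mut [u8]) -> io::Result<usize> { Ok(0) }
}

/// Control file persistence now lives behind its own small interface in
/// control_file.rs.
pub struct ControlFileStorage { /* path, last persisted state */ }

impl ControlFileStorage {
    pub fn persist(&mut self) -> io::Result<()> { Ok(()) }
}
```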