Currently, the tests don't run in parallel: each test tries to create its
data directory at the same path, so concurrent runs clash with each other.
That should be fixed properly, but for now, let's at least fix the
instructions to match how it works today.
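Until then, the tests have to run serially, e.g. `cargo test -- --test-threads=1`. The eventual fix would be to give every test its own data directory; a minimal sketch of that idea, assuming Rust integration tests and the `tempfile` crate (hypothetical here, not necessarily what the test suite uses):

```rust
use tempfile::TempDir;

// Give each test a throwaway data directory instead of a shared hard-coded
// path, so parallel tests no longer clash.
#[test]
fn starts_with_isolated_datadir() {
    let datadir = TempDir::new().expect("failed to create temp dir");
    // ... run the test against datadir.path() instead of a fixed path ...
    assert!(datadir.path().is_dir());
}
```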
Most of the CI check time is currently spent installing and compiling dependencies (~10 min total). Use actions/cache@v2 to cache artifacts between checks. This commit sets up two caching targets (sketched below):
* ./tmp_install, with Postgres build files and installed binaries. It uses $runner.os-pg-$pg_submodule_revision as the cache key, so it is rebuilt only when the linked submodule revision changes.
* ./target, with cargo dependencies. It uses hash(Cargo.lock) as the cache key, so it is rebuilt only when dependencies are updated.
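For illustration, the two cache steps might look roughly like this in the workflow file (the paths are from this commit; the PG_SUBMODULE_REV variable and exact key spellings are illustrative):

```yaml
- name: Cache postgres build
  uses: actions/cache@v2
  with:
    path: ./tmp_install
    key: ${{ runner.os }}-pg-${{ env.PG_SUBMODULE_REV }}

- name: Cache cargo deps
  uses: actions/cache@v2
  with:
    path: ./target
    key: cargo-${{ hashFiles('Cargo.lock') }}
```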
Also, add tg notifications in passing.
If we start the walreceiver at identify_system.xlogpos(), we have a race
condition with postgres startup: postgres may request a page that was
modified at an LSN smaller than identify_system.xlogpos().
The current procedure for starting postgres will be changed anyway, to
something like an 'initdb' method on the pageserver (or importing some
shared empty-database snapshot), so for now I just use the start of the
first WAL segment, which seems to be a valid record and is strictly before
the LSNs of the first records.
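For illustration, a minimal sketch of that starting point, assuming the default 16 MB WAL segment size (the names here are hypothetical, not the actual code):

```rust
/// Default Postgres WAL segment size (16 MB).
const WAL_SEGMENT_SIZE: u64 = 16 * 1024 * 1024;

/// Render an LSN in Postgres' "hi/lo" hex notation.
fn format_lsn(lsn: u64) -> String {
    format!("{:X}/{:X}", lsn >> 32, lsn as u32)
}

fn main() {
    // Postgres numbers WAL segments from 1, so the first segment starts at
    // LSN 0/1000000; streaming from there is strictly before any record a
    // freshly started compute node can reference.
    let start_lsn = WAL_SEGMENT_SIZE;
    println!("START_REPLICATION PHYSICAL {}", format_lsn(start_lsn));
}
```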
Each postgres will use its own page cache with associated data
structures; the Postgres system_id is used to distinguish instances.
That also means that a backup should have a valid system_id stashed
somewhere. For now I put '42' as the sys_id during S3 restore, but
that ought to be fixed.
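Conceptually, the per-instance bookkeeping looks something like this sketch (types and names are made up for illustration):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Per-instance page cache (contents elided).
struct PageCache;

/// Page caches keyed by Postgres system_id, so one page server can serve
/// several compute nodes without mixing up their pages.
#[derive(Default)]
struct Caches {
    by_system_id: Mutex<HashMap<u64, Arc<PageCache>>>,
}

impl Caches {
    /// Look up the cache for `system_id`, creating it on first use.
    fn get_or_create(&self, system_id: u64) -> Arc<PageCache> {
        let mut map = self.by_system_id.lock().unwrap();
        Arc::clone(map.entry(system_id).or_insert_with(|| Arc::new(PageCache)))
    }
}
```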
This commit also introduces a new way of starting WAL receivers:
postgres can initiate the connection by issuing a 'callmemaybe $url'
command to the page_service, which starts the appropriate wal-redo
and wal-receiver threads. This way the page server can start without
a priori knowledge of compute node addresses.
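A rough sketch of how the page_service side might dispatch the command (the function names are hypothetical):

```rust
use std::thread;

/// Handle one command received on a page_service connection.
fn handle_command(command: &str) {
    if let Some(url) = command.strip_prefix("callmemaybe ") {
        // The compute node told us where it lives; connect back and stream
        // WAL from it on a dedicated thread.
        let conninfo = url.trim().to_owned();
        thread::spawn(move || wal_receiver_main(&conninfo));
        // A matching wal-redo thread would be started here too (elided).
    }
}

fn wal_receiver_main(conninfo: &str) {
    // Connect to `conninfo` and feed incoming WAL to the page cache (elided).
    let _ = conninfo;
}
```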