Mainly because it has better support for installing packages under
different Python versions. It also has a better dependency resolver than
Pipenv and supports the modern standard for Python dependency management,
including the use of pyproject.toml for project-specific configuration
instead of per-tool config files. See the following links for details:
https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/
https://www.python.org/dev/peps/pep-0518/
* change zenith-perf-data checkout ref to be main
* set cluster id through secrets so that no code changes are required
  when we wipe out clusters on staging
* display full pgbench output on error
Tests are based on a self-hosted runner which is physically close
to our staging deployment in AWS; currently the tests consist of
various configurations of pgbench runs.
These changes also rework the benchmark fixture by removing globals and
allowing reports with the desired metrics to be collected and dumped to
JSON for further analysis. This also applies to the usual performance
tests, which use local zenith binaries.
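
As a rough sketch of the reworked fixture pattern (assuming pytest; the
names zenbenchmark, record, and the output path are illustrative, not the
actual fixture API):

    # conftest.py -- minimal sketch of a report-collecting benchmark fixture.
    import json
    import pytest

    class BenchmarkReport:
        def __init__(self, test_name):
            self.test_name = test_name
            self.metrics = {}

        def record(self, name, value, unit):
            # collect only the metrics the test cares about
            self.metrics[name] = {"value": value, "unit": unit}

    @pytest.fixture
    def zenbenchmark(request):
        report = BenchmarkReport(request.node.name)
        yield report
        # dump the collected metrics to JSON for further analysis
        with open(f"{report.test_name}.benchmark.json", "w") as out:
            json.dump({"test": report.test_name, "metrics": report.metrics},
                      out, indent=2)

A test would then use it roughly as:

    def test_pgbench_simple(zenbenchmark):
        # ... run pgbench against the cluster ...
        zenbenchmark.record("tps", 1234.5, unit="tps")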
- Move notifications to a separate job, run only on push.
- Build and test will execute on [pull_request, push].
- Use actions-rs/toolchain@v1 to get the Rust toolchain.
- Add a matrix hook to allow multiple toolchain versions in the future
  (now set to [stable]).
- Run all the cargo tests, not just test_pageserver.
Now most of the CI check time is spent installing and compiling dependencies (~10 min total). Use actions/cache@v2 to cache things between checks. This commit sets up two caching targets:
* ./tmp_install with postgres build files and installed binaries. It uses $runner.os-pg-$pg_submodule_revision as the cache key and will be rebuilt only if the linked submodule revision changes.
* ./target with cargo dependencies. It uses hash(Cargo.lock) as the cache key and will be rebuilt only when dependencies are updated.
Also add tg notifications in passing.
Each postgres instance will use its own page cache with associated data
structures. The Postgres system_id is used to distinguish instances.
That also means that a backup should have a valid system_id stashed
somewhere. For now I put '42' as the sys_id during S3 restore, but
that ought to be fixed.
This commit also introduces a new way of starting WAL receivers:
postgres can initiate such a connection by calling the 'callmemaybe $url'
command in the page_service -- that will start the appropriate wal-redo
and wal-receiver threads. This way the page server can start without
a priori knowledge of compute node addresses.
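
For illustration only, a compute node or a test could issue that command
over the page service's libpq connection roughly as below; the host, port
and compute connection string are made-up placeholders, and the exact
wiring may differ:

    # sketch: ask the page server to start wal-redo/wal-receiver threads
    # for a compute node, assuming the page service speaks the postgres
    # wire protocol; all addresses below are placeholders.
    import psycopg2

    pageserver = psycopg2.connect(host="127.0.0.1", port=64000,
                                  user="zenith", dbname="postgres")
    pageserver.autocommit = True
    compute_url = "host=127.0.0.1 port=55432 user=zenith_admin dbname=postgres"
    with pageserver.cursor() as cur:
        # the page server will connect back to this compute node for WAL
        cur.execute(f"callmemaybe {compute_url}")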