Compute node tools

Postgres wrapper (compute_ctl) is intended to be run as a Docker entrypoint or as a systemd ExecStart option. It will handle all the Neon specifics during compute node initialization:

  • compute_ctl accepts a cluster (compute node) specification as a JSON file (a hypothetical sketch follows this list).
  • Every start is a fresh start, so the data directory is removed and initialized again on each run.
  • Next it will put configuration files into the PGDATA directory.
  • Sync safekeepers and get the commit LSN.
  • Get a basebackup from the pageserver using the LSN returned in the previous step.
  • Try to start postgres and wait until it is ready to accept connections.
  • Check and alter/drop/create roles and databases.
  • Wait for the postmaster process to exit.
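
For illustration, the spec passed with -S is a JSON document describing the cluster: roles, databases, and Postgres settings to apply. The sketch below is hypothetical; the field names shown (format_version, cluster, roles, databases, settings) are assumptions and may not match the exact schema expected by your compute_ctl build:

{
  "format_version": 1.0,
  "cluster": {
    "roles": [
      { "name": "app_user" }
    ],
    "databases": [
      { "name": "app_db", "owner": "app_user" }
    ],
    "settings": [
      { "name": "max_connections", "value": "100", "vartype": "integer" }
    ]
  }
}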

compute_ctl also spawns two separate service threads:

  • compute-monitor checks the last Postgres activity timestamp and saves it into the shared ComputeNode;
  • http-endpoint runs a Hyper HTTP API server, which serves readiness and last-activity requests.
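
For example, a readiness probe against the HTTP endpoint might look like this; the port (3080) and the /status route are assumptions about the defaults and may differ in your build:

curl http://localhost:3080/status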

If the AUTOSCALING environment variable is set, compute_ctl will start the vm-monitor located in [neon/libs/vm_monitor]. For VM compute nodes, the vm-monitor communicates with the VM autoscaling system: it coordinates downscaling and requests immediate upscaling under resource pressure.

Usage example:

compute_ctl -D /var/db/postgres/compute \
            -C 'postgresql://cloud_admin@localhost/postgres' \
            -S /var/db/postgres/specs/current.json \
            -b /usr/local/bin/postgres
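
To also start the vm-monitor described above, the same invocation can be run with the AUTOSCALING variable set (the value shown is a placeholder; per the description above, the variable only needs to be set):

AUTOSCALING=true compute_ctl -D /var/db/postgres/compute \
            -C 'postgresql://cloud_admin@localhost/postgres' \
            -S /var/db/postgres/specs/current.json \
            -b /usr/local/bin/postgres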

Tests

Cargo formatter:

cargo fmt

Run tests:

cargo test

Clippy linter:

cargo clippy --all --all-targets -- -Dwarnings -Drust-2018-idioms

Cross-platform compilation

Imagine that you are on macOS (x86) and you want a Linux GNU executable (the x86_64-unknown-linux-gnu target in Rust terminology).

Using docker

You can use a throw-away Docker container (rustlang/rust image) for doing that:

docker run --rm \
    -v $(pwd):/compute_tools \
    -w /compute_tools \
    -t rustlang/rust:nightly cargo build --release --target=x86_64-unknown-linux-gnu

or as a one-liner:

docker run --rm -v $(pwd):/compute_tools -w /compute_tools -t rust:latest cargo build --release --target=x86_64-unknown-linux-gnu

Using rust native cross-compilation

Another way is to add the x86_64-unknown-linux-gnu target on your host system:

rustup target add x86_64-unknown-linux-gnu

Install a cross-compiler toolchain on macOS:

brew tap SergioBenitez/osxct
brew install x86_64-unknown-linux-gnu

And finally run cargo build:

CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-unknown-linux-gnu-gcc cargo build --target=x86_64-unknown-linux-gnu --release
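
Alternatively, the linker can be configured once in .cargo/config.toml (this is standard Cargo configuration, not specific to this repository), after which a plain cargo build --target=x86_64-unknown-linux-gnu --release works without the environment variable:

# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
linker = "x86_64-unknown-linux-gnu-gcc"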