Compute node tools

The Postgres wrapper (compute_ctl) is intended to be run as a Docker entrypoint or as a systemd ExecStart option. It handles all the Neon-specific logic during compute node initialization:

  • compute_ctl accepts the cluster (compute node) specification as a JSON file (a minimal sketch follows this list).
  • Every start is a fresh start, so the data directory is removed and initialized again on each run.
  • Next, it puts configuration files into the PGDATA directory.
  • Then it syncs safekeepers and obtains the commit LSN.
  • It takes a basebackup from the pageserver, using the LSN returned in the previous step.
  • It tries to start Postgres and waits until it is ready to accept connections.
  • It checks and alters/drops/creates roles and databases.
  • Finally, it hangs waiting for the postmaster process to exit.
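
For orientation, a minimal sketch of writing such a spec file is shown below; the JSON field names are illustrative assumptions, not the authoritative schema (consult the compute spec types in the source tree for the real format).

# Illustrative sketch only: these field names are assumptions, not the
# authoritative compute spec schema; see the spec types in the source.
cat > /var/db/postgres/specs/current.json <<'EOF'
{
  "format_version": 1.0,
  "cluster": {
    "roles": [ { "name": "cloud_admin" } ],
    "databases": [ { "name": "postgres", "owner": "cloud_admin" } ]
  }
}
EOF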

compute_ctl also spawns two separate service threads:

  • compute-monitor checks the last Postgres activity timestamp and saves it into the shared ComputeNode;
  • http-endpoint runs a Hyper HTTP API server, which serves readiness and last-activity requests.
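
To poke the HTTP API by hand, something like the following should work; the port and the endpoint path here are assumptions, so check the server code for the actual values:

# Assumed defaults: port 3080 and a /status readiness endpoint.
curl http://localhost:3080/status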

If the AUTOSCALING environment variable is set, compute_ctl will start the vm-monitor located in [neon/libs/vm_monitor]. For VM compute nodes, vm-monitor communicates with the VM autoscaling system. It coordinates downscaling and requests immediate upscaling under resource pressure.

Usage example:

compute_ctl -D /var/db/postgres/compute \
            -C 'postgresql://cloud_admin@localhost/postgres' \
            -S /var/db/postgres/specs/current.json \
            -b /usr/local/bin/postgres
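
With autoscaling, the same invocation just gains the environment variable. Only its being set is required by the description above; the value below is an arbitrary choice:

# AUTOSCALING only needs to be set; "true" is an arbitrary value.
AUTOSCALING=true compute_ctl -D /var/db/postgres/compute \
            -C 'postgresql://cloud_admin@localhost/postgres' \
            -S /var/db/postgres/specs/current.json \
            -b /usr/local/bin/postgres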

State Diagram

Computes can be in various states. Below is a diagram that details how a compute moves between states.

%% https://mermaid.js.org/syntax/stateDiagram.html
stateDiagram-v2
  [*] --> Empty : Compute spawned
  Empty --> ConfigurationPending : Waiting for compute spec
  ConfigurationPending --> Configuration : Received compute spec
  Configuration --> Failed : Failed to configure the compute
  Configuration --> Running : Compute has been configured
  Empty --> Init : Compute spec is immediately available
  Empty --> TerminationPending : Requested termination
  Init --> Failed : Failed to start Postgres
  Init --> Running : Started Postgres
  Running --> TerminationPending : Requested termination
  TerminationPending --> Terminated : Terminated compute
  Failed --> [*] : Compute exited
  Terminated --> [*] : Compute exited

Tests

Cargo formatter:

cargo fmt

Run tests:

cargo test
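
To run only this crate's tests from the workspace root, the usual package filter should work (assuming the crate name compute_tools matches its directory):

# -p restricts the run to a single workspace package; the crate name
# here is an assumption based on the directory name.
cargo test -p compute_tools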

Clippy linter:

cargo clippy --all --all-targets -- -Dwarnings -Drust-2018-idioms

Cross-platform compilation

Imagine that you are on macOS (x86) and you want a Linux GNU executable (the x86_64-unknown-linux-gnu target in Rust terminology).

Using docker

You can use a throwaway Docker container (the rustlang/rust image) for that:

docker run --rm \
    -v $(pwd):/compute_tools \
    -w /compute_tools \
    -t rustlang/rust:nightly cargo build --release --target=x86_64-unknown-linux-gnu

or as a one-liner:

docker run --rm -v $(pwd):/compute_tools -w /compute_tools -t rust:latest cargo build --release --target=x86_64-unknown-linux-gnu
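
Either way, because the working directory is bind-mounted into the container, the cross-compiled artifact lands in the usual per-target directory on the host:

# Cargo places per-target release builds under target/<triple>/release.
ls target/x86_64-unknown-linux-gnu/release/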

Using rust native cross-compilation

Another way is to add the x86_64-unknown-linux-gnu target on your host system:

rustup target add x86_64-unknown-linux-gnu

Install the macOS cross-compiler toolchain:

brew tap SergioBenitez/osxct
brew install x86_64-unknown-linux-gnu

And finally run cargo build:

CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-unknown-linux-gnu-gcc cargo build --target=x86_64-unknown-linux-gnu --release
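
You can then check that the result is a Linux executable rather than a macOS one (the binary name compute_ctl is taken from the usage example above):

# Expect an ELF 64-bit x86-64 executable here, not a Mach-O binary.
file target/x86_64-unknown-linux-gnu/release/compute_ctl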