# Compute node tools
Postgres wrapper (`compute_ctl`) is intended to be run as a Docker entrypoint or as a systemd
`ExecStart` option. It handles all the Neon specifics during compute node
initialization:
- `compute_ctl` accepts the cluster (compute node) specification as a JSON file
  (an illustrative sketch is shown after this list).
- Every start is a fresh start, so the data directory is removed and
  initialized again on each run.
- Next, it puts configuration files into the `PGDATA` directory.
- Sync safekeepers and get the commit LSN.
- Get a `basebackup` from the pageserver using the LSN returned in the previous step.
- Try to start `postgres` and wait until it is ready to accept connections.
- Check and alter/drop/create roles and databases.
- Hang waiting for the `postmaster` process to exit.
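For reference, such a spec file could look roughly like the sketch below. This is a hypothetical illustration only: every field name in it is an assumption made for this README, and the authoritative schema is the compute spec type defined in the source tree.

```sh
# Hypothetical sketch only: the field names are assumptions, not the real schema;
# consult the compute spec type in the source tree for the authoritative format.
cat > /var/db/postgres/specs/current.json <<'EOF'
{
  "format_version": 1.0,
  "cluster": {
    "roles": [],
    "databases": []
  }
}
EOF
```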
`compute_ctl` also spawns two separate service threads:

- `compute-monitor` checks the last Postgres activity timestamp and saves it
  into the shared `ComputeNode`;
- `http-endpoint` runs a Hyper HTTP API server, which serves readiness and
  last-activity requests (a sample request is sketched below).
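For a rough idea of how to poke the HTTP API, a request like the one below should work. Both the port (`3080`) and the `/status` route are assumptions rather than something this README guarantees; check the HTTP server module for the actual bind address and routes.

```sh
# Port and route are assumptions; see the HTTP server module for the real ones.
curl -s http://localhost:3080/status
```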
If the `AUTOSCALING` environment variable is set, `compute_ctl` will start the
`vm-monitor` located in [neon/libs/vm_monitor]. For VM compute nodes,
`vm-monitor` communicates with the VM autoscaling system. It coordinates
downscaling and requests immediate upscaling under resource pressure.
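For example, the unit file or container entrypoint that launches `compute_ctl` would export the variable beforehand; the concrete value is an assumption, as the text above only requires the variable to be set:

```sh
# Assumption: any value may do -- the README only says AUTOSCALING has to be set.
export AUTOSCALING=true
```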
Usage example:
```sh
compute_ctl -D /var/db/postgres/compute \
            -C 'postgresql://cloud_admin@localhost/postgres' \
            -S /var/db/postgres/specs/current.json \
            -b /usr/local/bin/postgres
```
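Since `compute_ctl` is also intended to run as a Docker entrypoint, the same invocation can be wrapped in `docker run`. The image name below is a placeholder for whatever image bundles `compute_ctl` and Postgres in your setup; the flags and paths simply mirror the usage example above.

```sh
# "my-compute-image" is a placeholder image name; the flags repeat the usage example.
docker run --rm \
    -v /var/db/postgres/specs:/var/db/postgres/specs \
    my-compute-image \
    compute_ctl -D /var/db/postgres/compute \
                -C 'postgresql://cloud_admin@localhost/postgres' \
                -S /var/db/postgres/specs/current.json \
                -b /usr/local/bin/postgres
```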
## State Diagram
Computes can be in various states. Below is a diagram that details how a compute moves between states.
```mermaid
%% https://mermaid.js.org/syntax/stateDiagram.html
stateDiagram-v2
    [*] --> Empty : Compute spawned
    Empty --> ConfigurationPending : Waiting for compute spec
    ConfigurationPending --> Configuration : Received compute spec
    Configuration --> Failed : Failed to configure the compute
    Configuration --> Running : Compute has been configured
    Empty --> Init : Compute spec is immediately available
    Empty --> TerminationPending : Requested termination
    Init --> Failed : Failed to start Postgres
    Init --> Running : Started Postgres
    Running --> TerminationPending : Requested termination
    TerminationPending --> Terminated : Terminated compute
    Failed --> [*] : Compute exited
    Terminated --> [*] : Compute exited
```
## Tests
Cargo formatter:
```sh
cargo fmt
```

Run tests:
```sh
cargo test
```

Clippy linter:
```sh
cargo clippy --all --all-targets -- -Dwarnings -Drust-2018-idioms
```
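Before pushing a change, it can be handy to chain all three checks together; this is just the commands above combined, not a project-mandated script:

```sh
cargo fmt && cargo test && cargo clippy --all --all-targets -- -Dwarnings -Drust-2018-idioms
```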
## Cross-platform compilation
Imagine that you are on macOS (x86) and you want a Linux GNU executable (the `x86_64-unknown-linux-gnu` platform in Rust terminology).
### Using Docker
You can use a throw-away Docker container (the `rustlang/rust` image) to do that:
```sh
docker run --rm \
    -v $(pwd):/compute_tools \
    -w /compute_tools \
    -t rustlang/rust:nightly cargo build --release --target=x86_64-unknown-linux-gnu
```
or as a one-liner:
```sh
docker run --rm -v $(pwd):/compute_tools -w /compute_tools -t rust:latest cargo build --release --target=x86_64-unknown-linux-gnu
```
### Using Rust native cross-compilation
Another way is to add the `x86_64-unknown-linux-gnu` target on your host system:
```sh
rustup target add x86_64-unknown-linux-gnu
```
Install the macOS cross-compiler toolchain:
```sh
brew tap SergioBenitez/osxct
brew install x86_64-unknown-linux-gnu
```
And finally run `cargo build`:
```sh
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-unknown-linux-gnu-gcc cargo build --target=x86_64-unknown-linux-gnu --release
```
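Either way, you can sanity-check that the resulting binary really targets Linux; the path below assumes the default Cargo target directory:

```sh
file target/x86_64-unknown-linux-gnu/release/compute_ctl
# For a successful cross-build, `file` should report an ELF 64-bit x86-64 executable.
```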