Compute node tools
Postgres wrapper (compute_ctl) is intended to be run as a Docker entrypoint or as a systemd
ExecStart option. It will handle all the Neon specifics during compute node
initialization:
- compute_ctl accepts the cluster (compute node) specification as a JSON file (see the sketch after this list).
- Every start is a fresh start, so the data directory is removed and initialized again on each run.
- Next it puts configuration files into the PGDATA directory.
- Sync safekeepers and get the commit LSN.
- Get a basebackup from the pageserver using the LSN returned in the previous step.
- Try to start postgres and wait until it is ready to accept connections.
- Check and alter/drop/create roles and databases.
- Hang waiting on the postmaster process to exit.
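As a rough, hypothetical sketch of the first step: the JSON keys below are illustrative assumptions only, not the real compute spec schema (which is defined elsewhere in the repository). The resulting file is then passed to compute_ctl via -S as in the usage example below.
# Hypothetical sketch: these JSON keys are illustrative, not the actual spec schema.
cat > /var/db/postgres/specs/current.json <<'EOF'
{
  "cluster": {
    "roles": [],
    "databases": []
  }
}
EOF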
Also compute_ctl spawns two separate service threads:
- compute-monitor checks the last Postgres activity timestamp and saves it into the shared ComputeNode;
- http-endpoint runs a Hyper HTTP API server, which serves readiness and last-activity requests.
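As a quick illustration, the HTTP API can be probed with curl. The port and route below are assumptions about a typical local setup, not the documented interface:
# Sketch only: the port (3080) and the /status route are assumed here; adjust to your deployment.
curl -s http://localhost:3080/status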
If the AUTOSCALING environment variable is set, compute_ctl will start the
vm-monitor located in [neon/libs/vm_monitor]. For VM compute nodes,
vm-monitor communicates with the VM autoscaling system. It coordinates
downscaling and requests immediate upscaling under resource pressure.
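For example, the same invocation as in the usage example below, with the variable set inline (a sketch; "true" is an arbitrary value, since the description above only requires the variable to be set):
AUTOSCALING=true compute_ctl -D /var/db/postgres/compute \
            -C 'postgresql://cloud_admin@localhost/postgres' \
            -S /var/db/postgres/specs/current.json \
            -b /usr/local/bin/postgres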
Usage example:
compute_ctl -D /var/db/postgres/compute \
-C 'postgresql://cloud_admin@localhost/postgres' \
-S /var/db/postgres/specs/current.json \
-b /usr/local/bin/postgres
State Diagram
Computes can be in various states. Below is a diagram that details how a compute moves between states.
%% https://mermaid.js.org/syntax/stateDiagram.html
stateDiagram-v2
[*] --> Empty : Compute spawned
Empty --> ConfigurationPending : Waiting for compute spec
ConfigurationPending --> Configuration : Received compute spec
Configuration --> Failed : Failed to configure the compute
Configuration --> Running : Compute has been configured
Empty --> Init : Compute spec is immediately available
Empty --> TerminationPendingFast : Requested termination
Empty --> TerminationPendingImmediate : Requested termination
Init --> Failed : Failed to start Postgres
Init --> Running : Started Postgres
Running --> TerminationPendingFast : Requested termination
Running --> TerminationPendingImmediate : Requested termination
TerminationPendingFast --> Terminated : Terminated compute with 30s delay for cplane to inspect status
TerminationPendingImmediate --> Terminated : Terminated compute immediately
Failed --> [*] : Compute exited
Terminated --> [*] : Compute exited
Tests
Cargo formatter:
cargo fmt
Run tests:
cargo test
Clippy linter:
cargo clippy --all --all-targets -- -Dwarnings -Drust-2018-idioms
Cross-platform compilation
Imagine that you are on macOS (x86) and you want a Linux GNU (x86_64-unknown-linux-gnu platform in Rust terminology) executable.
Using docker
You can use a throw-away Docker container (rustlang/rust image) for doing that:
docker run --rm \
-v $(pwd):/compute_tools \
-w /compute_tools \
-t rustlang/rust:nightly cargo build --release --target=x86_64-unknown-linux-gnu
or as a one-liner:
docker run --rm -v $(pwd):/compute_tools -w /compute_tools -t rust:latest cargo build --release --target=x86_64-unknown-linux-gnu
Using rust native cross-compilation
Another way is to add the x86_64-unknown-linux-gnu target on your host system:
rustup target add x86_64-unknown-linux-gnu
Install macOS cross-compiler toolchain:
brew tap SergioBenitez/osxct
brew install x86_64-unknown-linux-gnu
And finally run cargo build:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-unknown-linux-gnu-gcc cargo build --target=x86_64-unknown-linux-gnu --release
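In either setup, you can sanity-check the produced binary with file (the path below assumes cargo's default target directory layout and the compute_ctl binary name):
file target/x86_64-unknown-linux-gnu/release/compute_ctl
# a successful cross-build should be reported as a 64-bit x86-64 Linux ELF executable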