## Problem
The code is intended to reschedule compaction immediately if there are
pending tasks. Previously we set the duration to 0 when there were pending
tasks, but that value falls into the `if period == Duration::ZERO {`
branch and sleeps for another 10 seconds.
## Summary of changes
Set duration to 1 so that it doesn't sleep for too long.
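A minimal sketch of the intended control flow, assuming a `tokio`-based loop; the helper names and the one-second reschedule value are illustrative, not the actual pageserver code:
```rust
use std::time::Duration;

// Hypothetical helpers standing in for the real scheduler internals.
fn poll_pending_tasks() -> bool {
    false
}

async fn run_compaction_iteration() {}

/// If there is pending work, reschedule almost immediately instead of returning
/// Duration::ZERO, which the loop below treats as "compaction disabled".
fn next_compaction_period(has_pending_tasks: bool, configured: Duration) -> Duration {
    if has_pending_tasks {
        Duration::from_secs(1)
    } else {
        configured
    }
}

async fn compaction_loop(configured: Duration) {
    loop {
        let period = next_compaction_period(poll_pending_tasks(), configured);
        if period == Duration::ZERO {
            // Compaction disabled: re-check the configuration in 10 seconds.
            tokio::time::sleep(Duration::from_secs(10)).await;
            continue;
        }
        tokio::time::sleep(period).await;
        run_compaction_iteration().await;
    }
}
```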
Signed-off-by: Alex Chi Z <chi@neon.tech>
## Problem
Compaction should never produce L0 layers. This has never happened in my
experiments, but it is worth guarding against early.
## Summary of changes
Disallow gc-compaction from producing L0 layers.
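A hedged sketch of the kind of guard this adds; the layer representation below is illustrative, not the actual pageserver layer types:
```rust
/// Illustrative stand-in for a compaction output layer's metadata.
struct ProducedLayer {
    is_delta: bool,
    /// L0 delta layers are characterized by spanning the whole key range.
    covers_full_key_range: bool,
}

/// Reject any gc-compaction output that would land in the L0 (level-0) set.
fn ensure_no_l0_output(produced: &[ProducedLayer]) -> Result<(), String> {
    for layer in produced {
        if layer.is_delta && layer.covers_full_key_range {
            return Err("gc-compaction must not produce L0 delta layers".to_string());
        }
    }
    Ok(())
}
```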
Signed-off-by: Alex Chi Z <chi@neon.tech>
The full build of all extensions takes a long time. When working locally
on parts that don't need extensions, you can iterate more quickly by
skipping the unnecessary extensions.
This adds a build argument to the Dockerfile to specify which extensions to
build. There are three options:
- EXTENSIONS=all (default)
- EXTENSIONS=minimal: Build only a few extensions that are listed in
shared_preload_libraries in the default neon config.
- EXTENSIONS=none: Build no extensions (except for the mandatory 'neon'
extension).
## Problem
close https://github.com/neondatabase/neon/issues/10651
## Summary of changes
* Image layer creation resumes from the partition after the last processed
one if the previous attempt did not complete.
* Add tests.
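A simplified sketch of the resume logic with hypothetical names; the real implementation tracks partitions differently:
```rust
/// Illustrative progress state persisted between image layer creation attempts.
#[derive(Default)]
struct ImageCreationProgress {
    /// Index of the last partition fully covered with image layers, if any.
    last_completed_partition: Option<usize>,
}

fn partitions_to_process(
    total_partitions: usize,
    progress: &ImageCreationProgress,
) -> std::ops::Range<usize> {
    // Start from the partition after the last completed one (or from 0).
    let start = progress
        .last_completed_partition
        .map(|idx| idx + 1)
        .unwrap_or(0);
    start..total_partitions
}

fn main() {
    // The previous attempt finished partition 3, so this run starts at partition 4.
    let progress = ImageCreationProgress { last_completed_partition: Some(3) };
    assert_eq!(partitions_to_process(10, &progress), 4..10);
}
```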
---------
Signed-off-by: Alex Chi Z <chi@neon.tech>
## Problem
When a shard has two secondary locations, but one of them is on a node
with MaySchedule::No, the optimiser would get stuck, because it couldn't
decide which secondary to remove.
This is generally okay if a node is offline, but if a node is in Pause
mode for a long period of time, it's a problem.
Closes: https://github.com/neondatabase/neon/issues/10646
## Summary of changes
- Instead of insisting on finding a node in the wrong AZ to remove, find
an available node in the _right_ AZ, and remove all the others. This
ensures that if there is one live suitable node, then other
offline/paused nodes cannot hold things up.
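A rough sketch of the selection rule; the types and `may_schedule` flag below are illustrative, not the storage controller's real data model:
```rust
/// Illustrative types only; the storage controller's real data model differs.
struct SecondaryLocation {
    node_id: u64,
    az: String,
    may_schedule: bool,
}

/// If any secondary sits on a live (schedulable) node in the preferred AZ, keep
/// exactly that one and remove all the others, so one live suitable node is enough
/// and offline/paused nodes cannot hold the optimisation up.
fn secondaries_to_remove(preferred_az: &str, secondaries: &[SecondaryLocation]) -> Vec<u64> {
    let keep = secondaries
        .iter()
        .find(|s| s.az == preferred_az && s.may_schedule)
        .map(|s| s.node_id);

    match keep {
        Some(keep_id) => secondaries
            .iter()
            .filter(|s| s.node_id != keep_id)
            .map(|s| s.node_id)
            .collect(),
        // No live node in the preferred AZ: make no decision in this sketch.
        None => Vec::new(),
    }
}
```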
Given a remote extensions manifest like the following:
```json
{
  "public_extensions": [],
  "custom_extensions": null,
  "library_index": {
    "pg_search": "pg_search"
  },
  "extension_data": {
    "pg_search": {
      "control_data": {
        "pg_search.control": "comment = 'pg_search: Full text search for PostgreSQL using BM25'\ndefault_version = '0.14.1'\nmodule_pathname = '$libdir/pg_search'\nrelocatable = false\nsuperuser = true\nschema = paradedb\ntrusted = true\n"
      },
      "archive_path": "13117844657/v14/extensions/pg_search.tar.zst"
    }
  }
}
```
We were allowing a compute to install a remote extension that wasn't
listed in either public_extensions or custom_extensions.
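A minimal sketch of the intended check, with field names mirroring the JSON keys above; the actual compute types differ:
```rust
/// Simplified view of the manifest above; illustrative, not the real struct.
struct RemoteExtSpec {
    public_extensions: Option<Vec<String>>,
    custom_extensions: Option<Vec<String>>,
}

impl RemoteExtSpec {
    /// An extension may only be installed if it is explicitly listed in either
    /// `public_extensions` or `custom_extensions`; merely appearing in
    /// `library_index`/`extension_data` is not enough.
    fn is_installable(&self, ext_name: &str) -> bool {
        let listed = |list: &Option<Vec<String>>| {
            list.as_ref().is_some_and(|l| l.iter().any(|e| e == ext_name))
        };
        listed(&self.public_extensions) || listed(&self.custom_extensions)
    }
}
```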
Signed-off-by: Tristan Partin <tristan@neon.tech>
## Problem
We don't have visibility into the ratio of image vs. delta pages
ingested in Pageservers. This might be useful to determine whether we
should compress WAL records before storing them, which in turn might
make compaction more efficient.
## Summary of changes
Add `pageserver_wal_ingest_values_committed` metric with dimensions
`class=metadata|data` and `kind=image|delta`.
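A hedged sketch of the metric registration using the `prometheus` and `once_cell` crates directly; the actual code goes through the repo's metrics wrapper and may differ in detail:
```rust
use once_cell::sync::Lazy;
use prometheus::{register_int_counter_vec, IntCounterVec};

// Counter with the two label dimensions described above.
static WAL_INGEST_VALUES_COMMITTED: Lazy<IntCounterVec> = Lazy::new(|| {
    register_int_counter_vec!(
        "pageserver_wal_ingest_values_committed",
        "Number of values committed during WAL ingestion",
        &["class", "kind"]
    )
    .expect("failed to register metric")
});

fn record_committed_value(is_metadata: bool, is_image: bool) {
    let class = if is_metadata { "metadata" } else { "data" };
    let kind = if is_image { "image" } else { "delta" };
    WAL_INGEST_VALUES_COMMITTED
        .with_label_values(&[class, kind])
        .inc();
}
```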
## Problem
We get WARN log noise on tenant creations. Cplane creates tenants via
/location_config. That returns the attached locations in the response
and spawns a reconciliation which will also attempt to notify cplane. If
the notification is attempted before cplane persists the shards to its
database, storcon gets back a 404. The situation is harmless, but
annoying.
## Summary of changes
* Add a tenant creation hint to the reconciler config
* If the hint is true and we get back a 404 on the notification from
cplane, ignore the error, but still queue the reconcile up for a retry.
Closes https://github.com/neondatabase/cloud/issues/20732
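A simplified sketch of the intended behaviour; the config and error types below are illustrative, not the real reconciler API:
```rust
/// Illustrative reconciler config carrying the new hint.
struct ReconcilerConfig {
    tenant_creation_hint: bool,
}

enum NotifyOutcome {
    Ok,
    /// Swallow the error for now, but queue the reconcile for a later retry.
    RetryLater,
    Failed(u16),
}

fn handle_notify_status(config: &ReconcilerConfig, http_status: u16) -> NotifyOutcome {
    match http_status {
        200 => NotifyOutcome::Ok,
        // Right after creation, cplane may not have persisted the shards yet.
        404 if config.tenant_creation_hint => NotifyOutcome::RetryLater,
        status => NotifyOutcome::Failed(status),
    }
}
```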
## Problem
Ref: https://github.com/neondatabase/cloud/issues/23461
Follow-up after https://github.com/neondatabase/neon/pull/10553.
We used `echo` to set up `.wgetrc` and `.curlrc`, relying on `\n` to
produce these multi-line configs with a single echo command.
The problem is that the `echo` built-in of Debian's `/bin/sh` behaves
differently from the `/bin/echo` executable and from the `echo` built-in
in `bash`. Namely, it does not support the `-e` option, and while it does
treat `\n` as a newline, passing `-e` adds a literal `-e` to the output.
At the same time, other base images such as `alpine/curl` ship a `/bin/sh`
whose echo supports, and requires, `-e` for treating escape sequences like
`\n`.
Having to remember which `echo` behaves which way is a poor developer
experience and makes the Dockerfiles harder to maintain.
Work-arounds:
- Explicitly use `/bin/bash` (as in this PR)
- Use `/bin/echo` instead of the shell's built-in echo
- Use `printf "foo\n"` instead of `echo -e "foo\n"`
## Summary of changes
1. Set `/bin/bash` as the `SHELL` for the Debian-based layers.
2. Leave the `alpine/curl`-based layers unchanged.
3. One more change: split the `extensions` layer into two steps,
installing dependencies from CPAN and installing `lcov` from GitHub, so
that upgrading `lcov` can reuse the previous layer with the CPAN modules
already installed.
## Problem
We wish to make heatmap generation additive in
https://github.com/neondatabase/neon/pull/10597.
However, if the pageserver restarts and has a heatmap on disk from when
it was a secondary long ago,
we can end up keeping extra layers on the secondary's disk.
## Summary of changes
Persist the heatmap after a successful upload.
## Problem
Hopefully this resolves
https://github.com/neondatabase/neon/issues/10517. The test is flaky
because, after a restart, the compute node might write some data that
makes the pageserver flush some layers, which in turn triggers L0
compaction, so we don't get the test scenario we want.
## Summary of changes
Ensure all L0 layers are compacted before starting the test.
Signed-off-by: Alex Chi Z <chi@neon.tech>
## Problem
In #9895, we fixed some issues where `ClearVmBits` were broadcast to all
shards, even those not owning the VM relation. As part of that, we found
some ancient code from #1417, which discarded spurious incorrect
`ClearVmBits` records for pages outside of the VM relation. We added
observability in #9911 to see how often this actually happens in the
wild.
After two months, we have not seen this happen once in production or
staging. However, out of caution, we don't want to raise a hard error and
break WAL ingestion.
Resolves #10067.
## Summary of changes
Log a critical error when ingesting `ClearVmBits` for unknown VM
relations or pages.
## Problem
We want to switch proxy and ideally all Rust services to structured JSON
logging to support better filtering and cross-referencing with tracing.
## Summary of changes
* Introduce a custom tracing subscriber to write the JSON. A first attempt
used a customized tracing::fmt::FmtSubscriber, but it's very inefficient,
can still generate invalid JSON, and doesn't allow us to add important
fields to the root object.
* Make this opt-in: the `LOGFMT` env var can be set to `"json"` to
enable the new logger at startup.
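A simplified sketch of the opt-in logic; the real change installs a hand-written JSON subscriber, so the stock `tracing_subscriber` JSON formatter below only stands in for it:
```rust
use tracing_subscriber::prelude::*;

fn init_logging() {
    // Opt into JSON output only when LOGFMT=json is set.
    let want_json = matches!(std::env::var("LOGFMT").as_deref(), Ok("json"));

    let registry = tracing_subscriber::registry()
        .with(tracing_subscriber::EnvFilter::from_default_env());

    if want_json {
        registry.with(tracing_subscriber::fmt::layer().json()).init();
    } else {
        registry.with(tracing_subscriber::fmt::layer()).init();
    }
}
```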
This PR does a bunch of things:
* Only allow errors from server certificate verification, not from the TLS
handshake. The TLS handshake doesn't cause any errors for us, so we can
always require it to succeed. This simplifies the code a little.
* As the solution is more permanent than originally anticipated, move the
`AcceptAll` verifier outside.
* Log the connstr information. This helps with figuring out which domain
names are configured in the connstr, etc., and is generally useful to
print. Make extra sure that the password is not leaked.
Follow-up of #10640
Refactor how extensions are built in the compute Dockerfile.
1. Rename some of the extension layers, so that names correspond more
precisely to the upstream repository name and the source directory
name. For example, instead of "pg-jsonschema-pg-build", spell it
"pg_jsonschema-build". Some of the layer names had the extra "pg-"
part, and some didn't; harmonize on not having it. And use an
underscore if the upstream project name uses an underscore.
2. Each extension now consists of two Dockerfile targets:
[extension]-src and [extension]-build. By convention, the -src
target downloads the sources and applies any neon-specific patches
if necessary. The source tarball is downloaded and extracted under
/ext-src. For example, the 'pgvector' extension creates the
following files and directory:
/ext-src/pgvector.tar.gz # original tarball
/ext-src/pgvector.patch # neon-specific patch, copied from patches/ dir
/ext-src/pgvector-src/ # extracted tarball, with patch applied
This separation avoids re-downloading the sources every time the
extension is recompiled. The 'extension-tests' target also uses the
[extension]-src layers, by copying the /ext-src/ dirs from all
the extensions together into one image.
This refactoring came about when I was experimenting with different
ways of splitting up the Dockerfile so that each extension would be in
a separate file. That's not part of this PR yet, but this is a good
step in modularizing the extensions.
## Problem
Image layer generation could block L0 compactions for a long time.
## Summary of changes
* Refactored the return value of the `create_image_layers_for_*` functions
to make it self-explanatory.
* Preempt image layer generation in `Try` mode if L0 piles up.
Note that we might end up in a state where only the beginning of the
keyspace gets image coverage. In that case, we either need to implement
something that prioritizes image coverage for certain keyspaces, or tune
`image_creation_threshold` to ensure that image creation keeps up with L0
compaction.
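A rough sketch of the preemption decision; the names and the shape of the return value are illustrative, and the real `create_image_layers_for_*` result is richer:
```rust
/// Illustrative, self-describing outcome of an image layer creation attempt.
enum ImageLayerCreationOutcome {
    /// All requested partitions were covered by image layers.
    Complete,
    /// Stopped early because L0 deltas piled up; L0 compaction should run first,
    /// and a later attempt resumes from `next_partition`.
    PreemptedByL0 { next_partition: usize },
}

fn check_preemption(
    try_mode: bool,
    current_l0_count: usize,
    l0_preempt_threshold: usize,
    next_partition: usize,
) -> Option<ImageLayerCreationOutcome> {
    // Only preempt in `Try` mode; forced image creation runs to completion.
    if try_mode && current_l0_count >= l0_preempt_threshold {
        Some(ImageLayerCreationOutcome::PreemptedByL0 { next_partition })
    } else {
        None
    }
}
```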
---------
Signed-off-by: Alex Chi Z <chi@neon.tech>
Co-authored-by: Erik Grinaker <erik@neon.tech>
## Problem
We don't currently have good alerts for critical errors, e.g. data
loss/corruption.
Touches #10094.
## Summary of changes
Add a `critical!` macro and corresponding
`libmetrics_tracing_event_count{level="critical"}` metric. This will:
* Emit an `ERROR` log message with prefix `"CRITICAL:"` and a backtrace.
* Increment `libmetrics_tracing_event_count{level="critical"}`, and
indirectly `level="error"`.
* Trigger a pageable alert (via the metric above).
* In debug builds, panic the process.
I'll add uses of the macro separately.
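A hedged sketch of what such a macro can look like; the real implementation lives in the repo's shared crates and wires into the `libmetrics_tracing_event_count` metric rather than the stand-in counter used here:
```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Stand-in for the real libmetrics_tracing_event_count{level="critical"} counter.
static CRITICAL_EVENTS: AtomicU64 = AtomicU64::new(0);

macro_rules! critical {
    ($($arg:tt)*) => {{
        let msg = format!($($arg)*);
        CRITICAL_EVENTS.fetch_add(1, Ordering::Relaxed);
        // ERROR log with the "CRITICAL:" prefix and a backtrace.
        tracing::error!(
            backtrace = %std::backtrace::Backtrace::force_capture(),
            "CRITICAL: {msg}"
        );
        // Fail fast in debug builds so the condition surfaces in tests.
        if cfg!(debug_assertions) {
            panic!("CRITICAL: {msg}");
        }
    }};
}

fn example(unexpected_records: u64) {
    if unexpected_records > 0 {
        critical!("ingested {unexpected_records} unexpected records");
    }
}
```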
## Problem
I noticed when onboarding lots of tenants that the AZ scheduling
violation stat was climbing, before falling later as optimisations
happened. This was happening because we first add the tenant with
PlacementPolicy::Secondary, and then later go to
PlacementPolicy::Attached, and the scheduler's behavior led to a bad AZ
choice:
1. Create a secondary location in the non-preferred AZ
2. Upgrade to Attached where we promote that non-preferred-AZ location
to attached and then create another secondary
3. Optimiser later realises we're in the wrong AZ and moves us
## Summary of changes
- Extend some logging to give more information about AZs
- When scheduling a secondary location in PlacementPolicy::Secondary,
select it as if we were attached: in this mode, our business goal is to
have a warm pageserver location that we can make available as attached
quickly if needed, therefore we want it to be in the preferred AZ.
- Make optimize_secondary logic the same, so that it will consider a
secondary location in the preferred AZ to be optimal when in
PlacementPolicy::Secondary
- When transitioning from PlacementPolicy::Attached(N) to
PlacementPolicy::Secondary, instead of arbitrarily picking a location to
keep, prefer to keep the location in the preferred AZ
We have encountered some TLS validation errors in the storcon since
applying #10614. Add an option to downgrade them to logged errors instead,
so we can debug in peace.
cc issue https://github.com/neondatabase/cloud/issues/23583
## Problem
* The behavior of this flag changed. Plus, it's not necessary to disable
the IP check as long as there are no IPs listed in the local postgres.
## Summary of changes
* Drop the flag from the command in the README.md section.
* Change the postgres URL passed to proxy to not use the endpoint
hostname.
* Also swap postgres creation and proxy startup, so the DB is running
when proxy comes up.
## Problem
Tests for logical replication (on Staging) have been failing for some
time because logical replication is not enabled for them. This issue
occurred after switching to an org API key with a different default
setting, where logical replication was not enabled by default.
## Summary of changes
- Add `enable_logical_replication` input to
`actions/neon-project-create`
- Enable logical replication in `test-logical-replication` job
Logging errors with the debug format specifier causes multi-line errors,
which are sometimes a pain to deal with. Instead, we should use anyhow's
alternate display format, which shows the same information on a single
line.
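For illustration, the difference between the two format specifiers with `anyhow` (hypothetical error, not from the codebase):
```rust
use anyhow::Context;

fn main() {
    let err = std::fs::read("/nonexistent/file")
        .context("failed to load config")
        .unwrap_err();

    // Debug format: multi-line output with the cause chain (and optionally a backtrace).
    println!("{err:?}");

    // Alternate Display format: the same chain joined on a single line, e.g.
    // "failed to load config: No such file or directory (os error 2)".
    println!("{err:#}");
}
```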
Also adjusted a couple of error messages that were stale.
Fixes neondatabase/cloud#14710.
## Problem
1. First of all, it's more correct.
2. The current usage allows Time-of-Check-Time-of-Use (TOCTOU) 'Pwn
Request' vulnerabilities. Please check the security Slack channel or reach
out to me for more details. I will update the PR description after merge.
## Summary of changes
1. Use `actions/checkout` with `ref: ${{
github.event.pull_request.head.sha }}`
Discovered by and co-authored with @varunsh-coder
Fixes the flaky test_lr_with_slow_safekeeper test (#10242).
Fix the query against the `pg_catalog.pg_stat_subscription` catalog to
handle table synchronization and parallel logical replication correctly.
Successor of #10280 after it was reverted in #10592.
Re-introduce the usage of diesel-async, but now also add TLS support so
that we connect to the storcon database using TLS. By default,
diesel-async doesn't support TLS, so add some code to explicitly request
it.
cc https://github.com/neondatabase/cloud/issues/23583
## Problem
While working on https://github.com/neondatabase/neon/pull/10617, I
(unintentionally) merged the PR before the main CI pipeline had finished.
I suspect this happens because we have received all the required job
results from the pre-merge-checks workflow, which runs on PRs that
include changes to relevant files.
## Summary of changes
- Skip the `conclusion` job in `pre-merge-checks` workflows for PRs
## Problem
This test would sometimes emit unexpected logs from the storage
controller's migration requests, which overlap with the test's pageserver
restarts; those migrations happen some time after a shard split as the
controller moves load around.
Example:
https://neon-github-public-dev.s3.amazonaws.com/reports/pr-10602/13067323736/index.html#testresult/f66f1329557a1fc5/retries
## Summary of changes
- Do a reconcile_until_idle after shard split, so that the rest of the
test doesn't run concurrently with migrations
In the safekeeper, we block deletions on the timeline's gate closing,
and any `WalResidentTimeline` keeps the gate open (because it owns a
gate lock object). Thus, until the `main_task` function of a partial
backup returns, we can't delete the associated timeline.
In order to make these tasks exit early, we trigger the timeline's
cancellation token upon its shutdown. However, the partial backup task
wasn't checking for cancellation while waiting to acquire a partial
backup permit.
On a staging safekeeper we have been in a situation in the past where
the semaphore was already empty for a duration of many hours, rendering
all attempted deletions unable to proceed until a restart where the
semaphore was reset:
https://neondb.slack.com/archives/C03H1K0PGKH/p1738416586442029
## Problem
When introducing pg17 for the job step `Generate matrix for OLAP
benchmarks`, I introduced a syntax error that only hits on Saturdays.
## Summary of changes
Remove trailing comma
## Successful test run
https://github.com/neondatabase/neon/actions/runs/13086363907
- Wired up filtering on VPC endpoints
- Wired up blocking access from the public internet / VPC depending on a
per-project flag
- Added cache invalidation for VPC endpoints (partially based on PR from
Raphael)
- Removed BackendIpAllowlist trait
---------
Co-authored-by: Ivan Efremov <ivan@neon.tech>
We forked copy_bidirectional to solve some issues like fast-shutdown
(disallowing half-open connections) and to introduce better error
tracking (which side of the conn closed down).
A change recently made its way upstream offering performance
improvements: https://github.com/tokio-rs/tokio/pull/6532. These seem
applicable to our fork, thus it makes sense to apply them here as well.
The primary benefit is that all the ad hoc get_matches() calls are no
longer necessary. Now all it takes to get at the CLI arguments is
referencing a struct member. It's also great that we can replace the ad
hoc CLI struct we had with this more formal solution.
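A minimal sketch of the derive-based approach with hypothetical arguments, not the binary's actual CLI:
```rust
use clap::Parser;

/// Arguments become plain struct members instead of ad hoc `get_matches()` lookups.
#[derive(Parser, Debug)]
struct Cli {
    /// Listen address for the HTTP API (illustrative flag).
    #[arg(long, default_value = "127.0.0.1:9898")]
    listen: String,

    /// Enable verbose logging (illustrative flag).
    #[arg(long)]
    verbose: bool,
}

fn main() {
    let cli = Cli::parse();
    println!("listening on {}, verbose={}", cli.listen, cli.verbose);
}
```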
Signed-off-by: Tristan Partin <tristan@neon.tech>
## Problem
Merge Queue fails if changes include Rust code.
## Summary of changes
- Fix condition for `build-build-tools-image`
- Add a couple of no-op `false ||` to make predicates look
symmetric
## Problem
When a client dropped before a request completed, and a handler returned
an ApiError, we would log that at error severity. That was excessive in
the case of a request erroring on a shutdown, and could cause test
flakes.
example:
https://neon-github-public-dev.s3.amazonaws.com/reports/main/13067651123/index.html#suites/ad9c266207b45eafe19909d1020dd987/6021ce86a0d72ae7/
```
Cancelled request finished with an error: ShuttingDown
```
## Summary of changes
- Log a different info-level on ShuttingDown and ResourceUnavailable API
errors from cancelled requests
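A simplified sketch of the level selection; the error variants mirror the description above but are not the exact `ApiError` type:
```rust
/// Illustrative subset of API error variants.
enum ApiError {
    ShuttingDown,
    ResourceUnavailable(String),
    InternalServerError(String),
}

fn describe(err: &ApiError) -> &str {
    match err {
        ApiError::ShuttingDown => "ShuttingDown",
        ApiError::ResourceUnavailable(msg) | ApiError::InternalServerError(msg) => msg,
    }
}

/// Pick the log level for a request whose client already went away.
fn log_cancelled_request_error(err: &ApiError) {
    match err {
        // Expected during shutdown or transient unavailability: not worth an ERROR.
        ApiError::ShuttingDown | ApiError::ResourceUnavailable(_) => {
            tracing::info!("Cancelled request finished with an error: {}", describe(err));
        }
        _ => {
            tracing::error!("Cancelled request finished with an error: {}", describe(err));
        }
    }
}
```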
## Problem
This assertion is incorrect: it is legal to see another shard's data at
this point, after a shard split.
Closes: https://github.com/neondatabase/neon/issues/10609
## Summary of changes
- Remove faulty assertion
## Problem
There are two (related) problems with the previous handling of
`cargo-deny`:
- When a new advisory is added to rustsec that affects a dependency,
unrelated pull requests will fail.
- New advisories rely on pushes or PRs to be surfaced. Problems that
already exist on main will only be found if we try to merge new things
into main.
## Summary of changes
We split out `cargo-deny` into a separate workflow that runs on all PRs
that touch `Cargo.lock`, and on a schedule on `main`, `release`,
`release-compute` and `release-proxy` to find new advisories.
Update to a rebased version of our rust-postgres patches, rebased on
[this](98f5a11bc0)
commit this time.
With #10280 reapplied, this means that the rust-postgres crates will be
deduplicated, as the new crate versions are finally compatible with the
requirements of diesel-async.
Earlier update: #10561
rust-postgres PR: https://github.com/neondatabase/rust-postgres/pull/39
## Problem
We want to check that `diesel print-schema` doesn't generate any changes
(`storage_controller/src/schema.rs`) in comparison with the list of
migrations.
## Summary of changes
- Add `diesel_cli` to `build-tools` image
- Add a `Check diesel schema` step to the `build-neon` job; at this stage
we have all the required binaries, so we don't need to compile anything
extra
- The check runs only on x86 release builds, so we do it at least
once per CI run.
Ref: https://github.com/neondatabase/cloud/issues/23461
## Problem
I was making changes in this area and noticed that these two base layers
could be optimised. Also, following a review comment from @myrrc, set up
timeouts and retries in the `alpine/curl` image.
## Summary of changes
## Problem
This test may not fully detect data corruption during splits, since we
don't force-compact the entire keyspace.
## Summary of changes
Force-compact all data in `test_sharding_autosplit`.
## Problem
If offloading races with normal shutdown, we get a "failed to freeze and
flush: cannot flush frozen layers when flush_loop is not running, state
is Exited". This is harmless, but it is quite strange to try to freeze and
flush such a timeline; flushing on shutdown for an archived timeline isn't
useful.
Related: https://github.com/neondatabase/neon/issues/10389
## Summary of changes
- During Timeline::shutdown, ignore ShutdownMode::FreezeAndFlush if the
timeline is archived
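A hedged sketch of the downgrade; the enum follows the description above, but the surrounding types are illustrative:
```rust
/// Illustrative shutdown modes; the real enum has more variants.
#[derive(Debug, PartialEq)]
enum ShutdownMode {
    FreezeAndFlush,
    Hard,
}

fn effective_shutdown_mode(requested: ShutdownMode, is_archived: bool) -> ShutdownMode {
    // Flushing on shutdown is pointless for an archived timeline, and racing with
    // offloading could find the flush loop already exited.
    if is_archived && requested == ShutdownMode::FreezeAndFlush {
        ShutdownMode::Hard
    } else {
        requested
    }
}
```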