Commit Graph

7406 Commits

Folke Behrens
16b8a3f598 Update Jinja2 to 3.1.6 (#11109)
https://github.com/neondatabase/neon/security/dependabot/89
2025-03-06 09:55:41 +00:00
Folke Behrens
f343537e4d proxy: Small adjustments to json logging (#11107)
* Remove callsite identifier registration on span creation. I forgot to
remove it in the last PR; it was part of an alternative idea.
* Move "spans" object to right after "fields", so event and span fields
are listed together.
2025-03-06 09:18:28 +00:00
Alex Chi Z.
78b322f616 rfc: add 041-rel-sparse-keyspace (#10412)
Based on the PoC patch I've done in #10316, I'd like to put up an RFC in
advance to ensure everyone is on the same page, and start incrementally
porting the code to the main branch.

https://github.com/neondatabase/neon/issues/9516


[Rendered](https://github.com/neondatabase/neon/blob/skyzh/rfc-041-rel-sparse-keyspace/docs/rfcs/041-rel-sparse-keyspace.md)

---------

Signed-off-by: Alex Chi Z <chi@neon.tech>
Co-authored-by: Erik Grinaker <erik@neon.tech>
2025-03-05 21:43:16 +00:00
Alex Chi Z.
2de3629b88 test(pageserver): use reldirv2 by default in regress tests (#11081)
## Problem

For the pg_regress tests, we run both v1 and v2; all other tests default
to v2.

part of https://github.com/neondatabase/neon/issues/9516

## Summary of changes

Use reldir v2 across test cases by default.

---------

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-03-05 21:02:44 +00:00
Em Sharnoff
1fe23fe8d2 compute/lfc: Add chunk size to neon_lfc_stats (#11100)
This PR adds a new key to neon.neon_lfc_stats —
'file_cache_chunk_size_pages'. It just returns the value of
BLOCKS_PER_CHUNK from the LFC implementation.

The new value should (eventually) allow changing the chunk size without
breaking any places that rely on LFC stats values measured in number of
chunks.

See neondatabase/cloud#25170 for more.
2025-03-05 20:35:08 +00:00
Peter Bendel
604eb5e8d4 fix grafana dashboard link for pooler endpoints (#11099)
## Problem

Our benchmarking workflows contain links to grafana dashboards to
troubleshoot problems.
This works fine for non-pooled endpoints.
For pooled endpoints we need to remove the `-pooler` suffix from the
endpoint's hostname to get a valid endpoint ID.

Example link that doesn't work in this run:

https://github.com/neondatabase/neon/actions/runs/13678933253/job/38246028316#step:8:311

## Summary of changes

Check whether the connection string is a `-pooler` connection string and,
if so, remove the suffix from the endpoint ID.

---------

Co-authored-by: Alexander Bayandin <alexander@neon.tech>
2025-03-05 20:01:17 +00:00
Tristan Partin
d599d2df80 Update postgres_exporter to 0.17.1 (#11094)
Signed-off-by: Tristan Partin <tristan@neon.tech>
2025-03-05 18:32:45 +00:00
Alexey Kondratov
8263107f6c feat(compute): Add filename label to remote ext requests metric (#11091)
## Problem

We realized that this metric can provide more 'live' information about
extension installations than the installed-extensions metric, which is
currently only updated at compute start.

## Summary of changes

Add `filename` label to `compute_ctl_remote_ext_requests_total`. Note
that it contains the raw archive name with `.tar.zst` at the end, so the
consumer may need to strip this suffix.

Closes https://github.com/neondatabase/cloud/issues/24694
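
For consumers of the metric, stripping the suffix is a one-liner. A minimal sketch (the helper name is hypothetical, not part of this PR):

```rust
/// Strip the archive suffix from the `filename` label value, e.g.
/// "postgis.tar.zst" -> "postgis". Hypothetical helper for metric consumers.
fn extension_name(filename: &str) -> &str {
    filename.strip_suffix(".tar.zst").unwrap_or(filename)
}
```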
2025-03-05 18:17:57 +00:00
Anastasia Lubennikova
d94fc75cfc Setup compute_ctl pgaudit and rsyslog (#10615)
Set up the pgaudit and pgauditlogtofile extensions
in compute_ctl when ComputeAuditLogLevel is
set to 'hipaa'.

See cloud PR https://github.com/neondatabase/cloud/pull/24568

Add rsyslog setup for compute_ctl:
spin up an rsyslog server in the compute VM
and configure it to send logs to the endpoint
specified in the AUDIT_LOGGING_ENDPOINT env variable.
2025-03-05 18:01:00 +00:00
Alex Chi Z.
9cdc8c0e6c feat(pageserver): revisit error types for gc-compaction (#11082)
## Problem

part of https://github.com/neondatabase/neon/issues/9114

We used anyhow::Error everywhere, and it's time to fix that.

## Summary of changes

* Make sure that cancel errors are correctly propagated as
CompactionError::ShuttingDown.
* Skip all the trigger computation work if gc_cutoff is not generated
yet.

---------

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-03-05 15:57:38 +00:00
Arpad Müller
2d45522fa6 storcon db: load safekeepers from DB again (#11087)
Earlier PR #11041 soft-disabled the loading code for safekeepers from
the storcon db. This PR makes us load the safekeepers from the database
again, now that we have [JWTs available on
staging](https://github.com/neondatabase/neon/pull/11087) and soon on
prod.

This reverts commit 23fb8053c5.

Part of https://github.com/neondatabase/cloud/issues/24727
2025-03-05 15:45:43 +00:00
JC Grünhage
94e6897ead fix(ci): make deploy job depend on pushing images to dev registries (#11089)
## Problem
If an image fails to push to dev registries, we shouldn't trigger the
deploy job, because that depends on images existing in dev registries.
To ensure this is the case, the deploy job needs to depend on pushing to
dev registries.

## Summary of changes

Make `deploy` depend on `push-neon-image-dev` and
`push-compute-image-dev`.
2025-03-05 14:28:43 +00:00
Erik Grinaker
332aae1484 test_runner/regress: speed up test_check_visibility_map (#11086)
## Problem

`test_check_visibility_map` is the slowest test in CI, and can cause
timeouts under particularly slow configurations (`debug` and
`without-lfc`).

## Summary of changes

* Reduce the `pgbench` scale factor from 10 to 8.
* Omit a redundant vacuum during `pgbench` init.
* Remove a final `vacuum freeze` + `pg_check_visible` pass, which has
questionable value (we've already done a vacuum freeze previously, and
we don't flush the compute cache before checking anyway).
2025-03-05 13:50:35 +00:00
Vlad Lazar
8c12ccf729 pageserver: gate previous heatmap behind config flag (#11088)
## Problem

On unarchival, we update the previous heatmap with all visible layers.
When the primary generates a new heatmap it includes all those layers,
so the secondary will download them. Since they're not actually resident
on the primary (we didn't call the warm up API), they'll never be
evicted, so they remain in the heatmap.

This leads to oversized secondary locations like we saw in pre-prod.

## Summary of changes

Gate the loading of the previous heatmaps and the heatmap generation on
unarchival behind configuration
flags. They are disabled by default, but enabled in tests.
2025-03-05 12:20:18 +00:00
Vlad Lazar
abae7637d6 pageserver: do big reads to fetch slru segment (#11029)
## Problem

Each page of the slru segment is fetched individually when it's loaded
on demand.

## Summary of Changes

Use `Timeline::get_vectored` to fetch 16 pages at a time.
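
A minimal sketch of the batching idea (the real code uses the pageserver's keyspace types and `Timeline::get_vectored`, which are omitted here; the constant name is an assumption):

```rust
// Batch size from the PR description; the real constant name may differ.
const SLRU_READ_BATCH: u32 = 16;

/// Split an SLRU segment of `nblocks` pages into block ranges of at most 16
/// pages, so each range can be fetched with a single vectored read instead
/// of one request per page.
fn read_batches(nblocks: u32) -> impl Iterator<Item = std::ops::Range<u32>> {
    (0..nblocks)
        .step_by(SLRU_READ_BATCH as usize)
        .map(move |start| start..(start + SLRU_READ_BATCH).min(nblocks))
}
```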
2025-03-05 11:55:55 +00:00
Anastasia Lubennikova
38a883118a Skip dropping tablesync replication slots on the publisher from branch (#11073)
fixes https://github.com/neondatabase/cloud/issues/24292

Do not drop tablesync replication slots on the publisher
when we're in the process of dropping subscriptions inherited by a neon
branch, because these slots are still needed by the parent branch's
subscriptions.

For regular slots we handle this by setting the slot_name to NONE
before calling DROP SUBSCRIPTION, but tablesync slots are not exposed to
SQL.

Rely on the GUC disable_logical_replication_subscribers=true
to know that we're in the Neon-specific process of dropping
subscriptions.
2025-03-05 11:29:46 +00:00
Erik Grinaker
40aa4d7151 utils: log Sentry initialization (#11077)
## Problem

We don't have any logging for Sentry initialization. This makes it hard
to verify that it has been configured correctly.

## Summary of changes

Log some basic info when Sentry has been initialized, but omit the
public key (which allows submitting events). Also log when `SENTRY_DSN`
isn't specified at all, and when initialization fails (which is
supposed to panic, but we may as well log it).
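
A minimal sketch of the logging shape (hypothetical function; the real code lives in the `utils` crate and uses the sentry SDK for the actual initialization):

```rust
// Hypothetical sketch; the DSN handling here is simplistic.
fn log_sentry_config() {
    match std::env::var("SENTRY_DSN") {
        Err(_) => tracing::info!("SENTRY_DSN not set, Sentry reporting is disabled"),
        Ok(dsn) => {
            // A DSN looks like https://<public-key>@<host>/<project-id>;
            // log only the part after '@' to avoid leaking the public key.
            let target = dsn.split('@').nth(1).unwrap_or("<unparseable DSN>");
            tracing::info!("Sentry initialized, reporting to {target}");
        }
    }
}
```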
2025-03-05 11:23:07 +00:00
Folke Behrens
8e51bfc597 proxy: JSON logging field refactor (#11078)
## Problem

Grafana Loki's JSON handling is somewhat limited, and the log message
should be structured in a way that makes it easy to sift through and
filter logs.

## Summary of changes

* Drop span_id. It's too short lived to be of value and only bloats the
logs.
* Use the span's name as the object key, but append a unique numeric
value to prevent name collisions.
* Extract interesting span fields into a separate object at the root.

New format:
```json
{
  "timestamp": "2025-03-04T18:54:44.134435Z",
  "level": "INFO",
  "message": "connected to compute node at 127.0.0.1 (127.0.0.1:5432) latency=client: 22.002292ms, cplane: 0ns, compute: 5.338875ms, retry: 0ns",
  "fields": {
    "cold_start_info": "unknown"
  },
  "process_id": 56675,
  "thread_id": 9122892,
  "task_id": "24",
  "target": "proxy::compute",
  "src": "proxy/src/compute.rs:288",
  "trace_id": "5eb89b840ec63fee5fc56cebd633e197",
  "spans": {
    "connect_request#1": {
      "ep": "endpoint",
      "role": "proxy",
      "session_id": "b8a41818-12bd-4c3f-8ef0-9a942cc99514",
      "protocol": "tcp",
      "conn_info": "127.0.0.1"
    },
    "connect_to_compute#6": {},
    "connect_once#8": {
      "compute_id": "compute",
      "pid": "853"
    }
  },
  "extract": {
    "session_id": "b8a41818-12bd-4c3f-8ef0-9a942cc99514"
  }
}
```
2025-03-05 10:27:46 +00:00
Peter Bendel
906d7468cc exclude separate perf tests from bench step (#11084)
## Problem

Our benchmarking workflow has a job step `bench` which runs all tests in
test_runner/performance/* except those that we want to run separately.
We recently added two test cases to that directory that we want
to run separately but forgot to ignore them during the bench step. This
is now causing
[failures](https://github.com/neondatabase/neon/actions/runs/13667689340/job/38212087331#step:7:392).

## Summary of changes

Ignore the separately run tests in the bench step.
2025-03-05 10:14:51 +00:00
Konstantin Knizhnik
438f7bb726 Check response status in prefetch_lookup (#11080)
## Problem

The new async prefetch introduces a `prefetch_lookup` function which is
called before the LFC lookup to check whether a prefetch request has
already completed.

This function currently does not check that the response is actually
`T_NeonGetPageResponse` (and not an error).

## Summary of changes

Add checks for response tag.

---------

Co-authored-by: Konstantin Knizhnik <knizhnik@neon.tech>
2025-03-05 10:03:09 +00:00
Peter Bendel
f62ddb11ed Distinguish manually submitted runs for periodic pagebench in grafana dashboard (#11079)
## Problem

The periodic pagebench workflow runs periodically from the latest main
commit and can also be dispatched manually for a given commit hash to
bisect regressions.
However, in the dashboards we cannot distinguish manual runs from
periodic runs, which makes it harder to follow the trend.

## Summary of changes

Send an additional commit-type flag to the benchmark runner instance to
distinguish the run type.

Note: this needs a follow-up PR on the receiving side.
2025-03-04 18:11:43 +00:00
Tristan Partin
7b7e4a9fd3 Authorize compute_ctl requests from the control plane (#10530)
The compute should only act if requests come from the control plane.

Signed-off-by: Tristan Partin <tristan@neon.tech>
2025-03-04 18:08:00 +00:00
Erik Grinaker
4bbdb758ec compute_tools: appease unused lint on macOS (#11074)
## Problem

On macOS, the `unused` lint complains about two variables not used in
`!linux` builds.

These were introduced in #11007.

## Summary of changes

Appease the linter by explicitly using the variables in `!linux`
branches.
2025-03-04 16:39:32 +00:00
Alex Chi Z.
20af9cef17 fix(test): use the same value for reldir v1+v2 (#11070)
## Problem

part of https://github.com/neondatabase/neon/issues/11067

My observation is that with the current settings, x86-v1
usually takes 30s, arm-v1 1m30s, x86-v2 1m, and arm-v2 3m. But sometimes
the system runs too slowly and causes the test to time out on arm with
reldir v2.

While I investigate what's going on and further improve the performance,
I'd like to set both of them to use the same test input, so that the test
doesn't time out and we don't abuse this test case as a performance test.

## Summary of changes

Use the same settings for both test cases.

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-03-04 14:55:50 +00:00
Erik Grinaker
a2902e774a http-utils: generate heap profiles with jemalloc_pprof (#11075)
## Problem

The code to generate symbolized pprof heap profiles and flamegraph SVGs
has been upstreamed to the `jemalloc_pprof` crate:

* https://github.com/polarsignals/rust-jemalloc-pprof/pull/22
* https://github.com/polarsignals/rust-jemalloc-pprof/pull/23

## Summary of changes

Use `jemalloc_pprof` to generate symbolized pprof heap profiles and
flamegraph SVGs.

This reintroduces a bunch of internal jemalloc stack frames that we
previously stripped, e.g. each stack now always ends with
`prof_backtrace_impl` (where jemalloc takes a stack trace for heap
profiling), but that seems ok.
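
A minimal sketch of taking a symbolized heap profile with the crate, roughly following its documented usage (exact types and error handling may differ between versions):

```rust
// Sketch based on jemalloc_pprof's documented usage; requires jemalloc to be
// built with profiling enabled (e.g. MALLOC_CONF="prof:true,prof_active:true").
async fn dump_heap_profile() -> anyhow::Result<Vec<u8>> {
    let Some(ctl) = jemalloc_pprof::PROF_CTL.as_ref() else {
        anyhow::bail!("jemalloc profiling is not available");
    };
    let mut ctl = ctl.lock().await;
    // Returns a gzipped, symbolized pprof protobuf suitable for `pprof`
    // tooling or flamegraph rendering.
    let pprof = ctl.dump_pprof()?;
    Ok(pprof)
}
```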
2025-03-04 12:13:41 +00:00
Vlad Lazar
435bf452e6 tests: remove obsolete err log whitelisting (#11069)
The pageserver read path now supports overlapped in-memory and image
layers via
https://github.com/neondatabase/neon/pull/11000. These allowed errors
are now obsolete.
2025-03-04 08:18:19 +00:00
Erik Grinaker
65addfc524 storcon: add per-tenant rate limiting for API requests (#10924)
## Problem

Incoming requests often take the service lock, and sometimes even do
database transactions. That creates a risk that a rogue client can
starve the controller of the ability to do its primary job of
reconciling tenants to an available state.

## Summary of changes

* Use the `governor` crate to rate limit tenant requests at 10 requests
per second. This is ~10-100x lower than the worst "attack" we've seen
from a client bug. Admin APIs are not rate limited.
* Add a `storage_controller_http_request_rate_limited` histogram for
rate limited requests.
* Log a warning every 10 seconds for rate limited tenants.

The rate limiter is parametrized on TenantId, because the kinds of
client bug we're protecting against generally happen within tenant
scope, and the rates should be somewhat stable: we expect the global
rate of requests to increase as we do more work, but we do not expect
the rate of requests to one tenant to increase.
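
For reference, a minimal sketch of per-key rate limiting with the `governor` crate (not the controller's actual wiring; the key type is a stand-in for `TenantId`, and the quota mirrors the numbers above):

```rust
use std::num::NonZeroU32;

use governor::{Quota, RateLimiter};

// Stand-in for the real TenantId type.
type TenantId = String;

fn main() {
    // 10 requests per second per tenant, as described above.
    let quota = Quota::per_second(NonZeroU32::new(10).unwrap());
    let limiter = RateLimiter::keyed(quota);

    let tenant: TenantId = "tenant-a".to_string();
    match limiter.check_key(&tenant) {
        Ok(()) => { /* admit the request */ }
        Err(_not_until) => { /* rate limited: reject or log the warning */ }
    }
}
```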

---------

Co-authored-by: John Spray <john@neon.tech>
2025-03-03 22:04:59 +00:00
Alex Chi Z.
6d0976dad5 feat(pageserver): persist reldir v2 migration status (#10980)
## Problem

part of https://github.com/neondatabase/neon/issues/9516

## Summary of changes

Similar to the aux v2 migration, we persist the reldir v2 migration status
into index_part, so that even if the config item is set to false, we will
still read from the v2 storage to avoid data loss.

Note that only the two variants `None` and
`Some(RelSizeMigration::Migrating)` are used for now. We don't have the
full migration implemented, so it will never be set to
`RelSizeMigration::Migrated`.
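
A minimal sketch of the persisted marker (the real `index_part` schema and serde attributes may differ):

```rust
use serde::{Deserialize, Serialize};

/// Sketch of the migration marker persisted in index_part. Only `Migrating`
/// is written today; `Migrated` is reserved for when the full migration lands.
#[derive(Debug, Clone, Serialize, Deserialize)]
enum RelSizeMigration {
    Migrating,
    Migrated,
}
```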

---------

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-03-03 21:05:43 +00:00
Alex Chi Z.
dbf9a80261 fix(pageserver): avoid flooding gc-compaction logs (#11024)
## Problem

The "did not trigger" gets logged at 10k/minute in staging.

## Summary of changes

Change it to debug level.

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-03-03 20:23:20 +00:00
Vlad Lazar
6ca49b4d0c safekeeper: fix a gap tracking edge case (#11054)
The interpreted reader tracks a record-aligned current position in the
WAL stream.
Partial reads move the stream internally, but not from the point of view
of the interpreted WAL reader.

Hence, a new shard can subscribe with a start position that matches the
reader's current position even though we've also done some partial reads.
This confuses the gap tracking. To make it more robust, update the
current batch start to the min of the new start position and its current
value.

Since no record has been decoded yet (the positions match), we can't have
lost one.
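
A minimal sketch of the adjustment (types simplified; the real code works with `Lsn` and the interpreted reader's batch state):

```rust
/// Simplified stand-in for the WAL position type.
type Lsn = u64;

/// When a new shard subscribes at `new_start`, never leave the batch start
/// ahead of it: take the minimum so records at `new_start` are not treated
/// as a gap that was already consumed by a partial read.
fn adjust_batch_start(current_batch_start: Lsn, new_start: Lsn) -> Lsn {
    current_batch_start.min(new_start)
}
```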
2025-03-03 19:16:03 +00:00
Vlad Lazar
5197e43396 pageserver: add recurse flag to layer download spec (#11068)
I missed updating the open api spec in the original PR.
We need this so that the cplane auto-generated client sees the flag.
2025-03-03 19:04:01 +00:00
Alexander Lakhin
9a4e2eab61 Fix artifact name for build with sanitizers (#11066)
## Problem
When a build is made with sanitizers, this is not reflected in the
artifact name, which can lead to overwriting normal builds with sanitized
ones.

## Summary of changes
Take this property of a build into account when constructing the
artifact name.
2025-03-03 18:00:53 +00:00
Vlad Lazar
8298bc903c pageserver: handle in-memory layer overlaps with persistent layers (#11000)
## Problem

Image layers may be nested inside in-memory layers as diagnosed
[here](https://github.com/neondatabase/neon/issues/10720#issuecomment-2649419252).
The read path doesn't support this and may skip over the image layer,
resulting in a failure to reconstruct the page.

## Summary of changes

We already support nesting of image layers inside delta layers. The
logic lives in `LayerMap::select_layer`.
The main goal of this PR is to propagate the candidate in-memory layer
down to that point and update
the selection logic.

Important changes are:
1. Support partial reads for the in-memory layer. Previously, we could
only specify the start LSN of the read.
We need to control the end LSN too.
2. `LayerMap::ranged_search` considers in-memory layers too. Previously,
the search for in-memory layers
was done explicitly in `Timeline::get_reconstruct_data_timeline`. Note
that `LayerMap::ranged_search` now returns
a weak readable layer which the `LayerManager` can upgrade. This dance
allows us to unit test the layer selection logic.
3. Update `LayerMap::select_layer` to consider the candidate in-memory
layer too.

Loosely related drive bys:
1. Remove the "keys not found" tracking in the ranged search. This
wasn't used anywhere and it just complicates things.
2. Remove the difficulty map stuff from the layer map. Again, not used
anywhere.

Closes https://github.com/neondatabase/neon/issues/9185
Closes https://github.com/neondatabase/neon/issues/10720
2025-03-03 17:52:59 +00:00
John Spray
b953daa21f safekeeper: allow remote deletion to proceed after dropped requests (#11042)
## Problem

If a caller times out on safekeeper timeline deletion for a large
timeline, and waits a while before retrying, the deletion will not
progress while the retry is waiting. The net effect is very slow
deletion, as it only proceeds in 30-second bursts separated by 5-minute
idle periods.

Related: https://github.com/neondatabase/neon/issues/10265

## Summary of changes

- Run remote deletion in a background task
- Carry a watch::Receiver on the Timeline for other callers to join the
wait
- Restart deletion if the API is called again and the previous attempt
failed
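
A minimal sketch of the join-the-wait pattern (names hypothetical; the real code lives in the safekeeper's timeline deletion path):

```rust
use tokio::sync::watch;

#[derive(Clone, Copy, Debug, PartialEq)]
enum DeletionStatus {
    InProgress,
    Done,
    Failed,
}

struct Timeline {
    // Carried on the timeline so later delete calls can join the wait.
    deletion_status: watch::Receiver<DeletionStatus>,
}

async fn wait_for_deletion(timeline: &Timeline) -> DeletionStatus {
    let mut rx = timeline.deletion_status.clone();
    // The background task owns the watch::Sender and updates it when remote
    // deletion finishes (or fails, in which case a new API call restarts it).
    *rx.wait_for(|s| *s != DeletionStatus::InProgress)
        .await
        .expect("deletion task dropped the sender")
}
```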
2025-03-03 16:03:51 +00:00
Peter Bendel
a07599949f First version of a new benchmark to test larger OLTP workload (#11053)
## Problem

We want to support larger tenants (in terms of logical database size,
number of transactions per second, etc.) and should increase our test
coverage of OLTP transactions at larger scale.

## Summary of changes

Start a new benchmark that over time will add more OLTP tests at larger
scale.
This PR covers the first version and will be extended in further PRs.

Also fix some infrastructure:
- the default for new connections and large tenants is to use the
pgbouncer connection pooler; however, our fixture always added
`statement_timeout=120`, which is not compatible with the pooler
([see](https://neon.tech/docs/connect/connection-errors#unsupported-startup-parameter))
- the action to create a branch timed out after 10 seconds and 10
retries, but for large tenants it can take longer, so use an increasing
back-off for retries

## Test run

https://github.com/neondatabase/neon/actions/runs/13593446706
2025-03-03 15:25:48 +00:00
Vlad Lazar
38277497fd pageserver: log shutdown at info level for basebackup (#11046)
## Problem

Timeline shutdown during basebackup logs at error level because the
cancellation error is folded into BasebackupError::Server.

## Summary of changes
Introduce BasebackupError::Shutdown and use it. `log_query_error` will
now see `QueryError::Shutdown` and log at info level.
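
A minimal sketch of the shape described (simplified; the real error types carry more context):

```rust
use thiserror::Error;

/// Simplified sketch: separate shutdown from genuine server errors so the
/// logging layer can pick the right level.
#[derive(Debug, Error)]
enum BasebackupError {
    #[error("basebackup server error: {0}")]
    Server(anyhow::Error),
    #[error("shutting down")]
    Shutdown,
}

fn log_basebackup_error(err: &BasebackupError) {
    match err {
        // Expected during timeline shutdown: not actionable, log at info.
        BasebackupError::Shutdown => tracing::info!("basebackup aborted: {err}"),
        BasebackupError::Server(_) => tracing::error!("basebackup failed: {err}"),
    }
}
```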
2025-03-03 13:46:50 +00:00
Arseny Sher
ef2b50994c walproposer: basic infra to enable generations (#11002)
## Problem

Preparation for https://github.com/neondatabase/neon/issues/10851

## Summary of changes

Add a walproposer `safekeepers_generations` field which can be set by
prefixing the `neon.safekeepers` GUC with `g#n:`. A non-zero value (n)
forces walproposer to use generations. In particular, this also disables
implicit timeline creation, as the timeline will be created by storcon.
Add a test checking this. Also add missing infra: a
`--safekeepers-generation` flag for neon_local endpoint start, and fix
the `--start-timeout` flag: it existed but its value wasn't used.
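
A rough sketch of the prefix handling described above (the real walproposer is C code, so this is only an illustration of the format; the safekeeper list in the example is made up):

```rust
/// Parse an optional generation prefix from the `neon.safekeepers` GUC value,
/// e.g. "g#3:sk1:5454,sk2:5454" -> (3, "sk1:5454,sk2:5454").
/// A generation of 0 means generations are disabled.
fn parse_safekeepers_guc(value: &str) -> (u32, &str) {
    if let Some(rest) = value.strip_prefix("g#") {
        if let Some((generation, list)) = rest.split_once(':') {
            if let Ok(n) = generation.parse::<u32>() {
                return (n, list);
            }
        }
    }
    (0, value)
}
```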
2025-03-03 13:20:20 +00:00
Konstantin Knizhnik
8669bfe493 Do not store zero pages in inmem SMGR for walredo (#11043)
## Problem


See https://neondb.slack.com/archives/C033RQ5SPDH/p1740157873114339

smgrextend for the FSM fork is called during page reconstruction by the
walredo process, causing overflow of the in-memory SMGR (64 pages).

## Summary of changes

Do not store zero pages in the in-memory SMGR, because `inmem_read`
returns a zero page if it cannot locate the specified block.

Co-authored-by: Konstantin Knizhnik <knizhnik@neon.tech>
2025-03-03 12:50:07 +00:00
Misha Sakhnov
625c526bdd ci: create multiarch vm images (#11017)
## Problem

We build compute nodes as multi-arch images, but not the
vm-compute-nodes. This PR adds multiarch vm images the same way as in the
autoscaling repo.

## Summary of changes

Add architecture to the matrix for the vm compute build steps.
Add a merge job.

---------

Co-authored-by: Alexander Bayandin <alexander@neon.tech>
2025-03-03 11:47:09 +00:00
a-masterov
df0767176a Change the tag names according to the current state (#11059)
## Problem
We have not synced `force-test-extensions-upgrade.yml` with the latest
changes.
The variable `TEST_EXTENSIONS_UPGRADE` was ignored in the script and
effectively set to `NEW_COMPUTE_TAG`, while it should be set to
`OLD_COMPUTE_TAG`, as we are about to run compatibility tests.
## Summary of changes
The tag names were synced and the logic was fixed.
2025-03-03 09:40:49 +00:00
Heikki Linnakangas
38ddfab643 compute_ctl: Perform more startup actions in parallel (#11008)
This speeds up compute startup. Resizing swap in particular takes about
100 ms on my laptop. By performing it in parallel with downloading the
basebackup, that latency is effectively hidden. I would imagine that
downloading remote extensions can also take a non-trivial amount of
time, although I didn't try to measure that. In any case, that's now also
performed in parallel with downloading the basebackup.
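
A minimal sketch of the overlap (hypothetical helpers; compute_ctl's real startup code is structured differently):

```rust
// Hypothetical async helpers standing in for the real startup steps.
async fn resize_swap() { /* ~100 ms on the author's laptop */ }
async fn download_remote_extensions() { /* network-bound */ }
async fn download_basebackup() { /* network-bound, usually the longest step */ }

async fn prepare_compute() {
    // Run the independent steps concurrently so their latencies overlap
    // instead of adding up.
    tokio::join!(resize_swap(), download_remote_extensions(), download_basebackup());
}
```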
2025-03-03 00:29:37 +00:00
Heikki Linnakangas
066324d6ec compute_ctl: Rearrange startup code (#11007)
Move most of the code to compute.rs, so that all the major startup steps
are visible in one place. You can now get a pretty good picture of what
happens in the latency-critical path at compute startup by reading
ComputeNode::start_compute().

This also clarifies the error handling in start_compute. Previously, the
start_postgres function sometimes returned an Err, and sometimes Ok but
with the compute status already set to Failed. Now the start_compute
function always returns Err on failure, and it's the caller's
responsibility to change the compute status to Failed. Separately from
that, it returns a handle to the Postgres process via a `&mut` reference
if it had already started Postgres (i.e. on success, or if the failure
happens after launching the Postgres process).

---------

Co-authored-by: Alexey Kondratov <kondratov.aleksey@gmail.com>
2025-02-28 22:48:05 +00:00
Suhas Thalanki
ee0c8ca8fd Add -fsigned-char for cross platform signed chars (#10852)
## Problem

In multi-character keys, the GIN index creates a CRC hash of the first 3
bytes of the key. The hash's first bit can be either set or unset, so a
consistent representation of `char` across architectures is needed for
consistent results. GIN stores these keys by their hashes, which
determines the order in which the keys are obtained from the GIN index.

By default, chars are signed on x86 and unsigned on arm, leading to
inconsistent behavior across different platform architectures. Adding
the `-fsigned-char` flag to the GCC compiler forces chars to be treated
as signed across platforms, ensuring the ordering in which the keys are
obtained is consistent.

## Summary of changes

Added `-fsigned-char` to the `CFLAGS` to force GCC to use signed chars
across platforms.
Added a test to check this across platforms.

Fixes: https://github.com/neondatabase/cloud/issues/23199
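
The same default difference is visible from Rust's FFI layer, which can serve as a quick illustration (the actual fix here is the C compiler flag, not anything on the Rust side):

```rust
// c_char is an alias for i8 on x86_64-linux and u8 on aarch64-linux, mirroring
// the default `char` signedness of the platform C ABI; -fsigned-char pins the
// C side to signed on both.
use std::os::raw::c_char;

fn main() {
    let signed = (c_char::MIN as i64) < 0;
    println!("default char is signed on this target: {signed}");
}
```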
2025-02-28 21:07:21 +00:00
Ivan Efremov
56033189c1 feat(proxy): Log latency after connect to compute (#11048)
## Problem
To measure latency accurately, we should associate the testodrome role
with the latency data.

## Summary of changes
Add latency logging to associate the different roles with their latency.

Relates to #22486
2025-02-28 17:58:42 +00:00
Erik Grinaker
d857f63e3b pageserver: fix race that can wedge background tasks (#11047)
## Problem

`wait_for_active_tenant()`, used when starting background tasks, has a
race condition that can cause it to wait forever (until cancelled). It
first checks the current tenant state, and then subscribes for state
updates, but if the state changes between these then it won't be
notified about it.

We've seen this wedge compaction tasks, which can cause unbounded layer
file buildup and read amplification.

## Summary of changes

Use `watch::Receiver::wait_for()` to check both the current and new
tenant states.
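
A minimal sketch of why `wait_for` closes the gap (simplified state type; the real code waits for the tenant to become active):

```rust
use tokio::sync::watch;

#[derive(Clone, Copy, Debug, PartialEq)]
enum TenantState {
    Activating,
    Active,
}

async fn wait_for_active(mut rx: watch::Receiver<TenantState>) {
    // wait_for() first checks the value currently in the channel and only then
    // awaits changes, so a transition that happened before this call is never
    // missed -- unlike "check state, then subscribe" as two separate steps.
    let _ = rx.wait_for(|state| *state == TenantState::Active).await;
}
```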
2025-02-28 17:00:22 +00:00
Alex Chi Z.
f79ee0bb88 fix(storcon): loop in chaos injection (#11004)
## Problem

Somehow the previous patch lost the loop in the chaos injector function,
so everything only runs once.
https://github.com/neondatabase/neon/pull/10934

## Summary of changes

Add back the loop.

Signed-off-by: Alex Chi Z <chi@neon.tech>
2025-02-28 15:49:15 +00:00
Vlad Lazar
23fb8053c5 storcon: soft disable SK heartbeats (#11041)
## Problem

JWT tokens aren't in place, so all SK heartbeats fail. This is
equivalent to a wait before applying the PS heartbeats and makes things
more flaky.

## Summary of Changes

Add a flag that skips loading SKs from the db on start-up and at
runtime.
2025-02-28 15:49:09 +00:00
Conrad Ludgate
d9ced89ec0 feat(proxy): require TLS to compute if prompted by cplane (#10717)
https://github.com/neondatabase/cloud/issues/23008

For TLS between proxy and compute, we are using an internally
provisioned CA to sign the compute certificates. This change ensures
that proxy will load them from a supplied env var pointing to the
correct file - this file and env var will be configured later, using a
kubernetes secret.

Control plane responds with a `server_name` field if and only if the
compute uses TLS. This server name is the name we use to validate the
certificate. Control plane still sends us the IP to connect to as well
(to support overlay IP).

To support this change, I had to split `host` and `host_addr` into
separate fields. We use `host_addr` and bypass `lookup_addr` when
possible (which is what happens in production); `host` is then only used
for the TLS connection.
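
A minimal sketch of the split (field names as described above; the surrounding config struct is hypothetical):

```rust
use std::net::IpAddr;

/// Sketch of the compute connection info after the split described above.
struct ComputeConnCfg {
    /// Hostname from cplane; with TLS enabled it is also the server name the
    /// compute certificate is validated against.
    host: String,
    /// Optional IP from cplane (overlay IP support). When present, proxy
    /// connects to it directly and skips `lookup_addr`.
    host_addr: Option<IpAddr>,
}
```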

There's no blocker to merging this. The code paths will not be triggered
until the new control plane is deployed and the `enableTLS` compute flag
is enabled on a project.
2025-02-28 14:20:25 +00:00
Erik Grinaker
c7ff3c4c9b safekeeper: downgrade interpreted reader errors (#11034)
## Problem

This `critical!` could fire on IO errors, which is just noisy.

Resolves #11027.

## Summary of changes

Downgrade to error, except for decode errors. These could be either data
corruption or a bug, but seem worth investigating either way.
2025-02-28 14:06:56 +00:00
Christian Schwarz
7c53fd0d56 refactor(page_service / timeline::handle): the GateGuard need not be a special case (#11030)
# Changes

While working on
- https://github.com/neondatabase/neon/pull/7202

I found myself needing to cache another expensive Arc::clone inside
the timeline::handle::Cache by wrapping it in another Arc.

Before this PR, it seemed like the only expensive thing we were caching
was the
connection handler tasks' clone of `Arc<Timeline>`.

In fact the GateGuard was another such thing, but it was special-cased
in the implementation.

So, this refactoring PR de-special-cases the GateGuard.

# Performance

With this PR we are doing strictly _fewer_ operations per `Cache::get`.
The reason is that we wrap the entire `Types::Timeline` into one Arc.
Before this PR, it was a separate Arc around the Arc<Timeline> and
one around the Arc<GateGuard>.

With this PR, we avoid an allocation per cached item, namely,
the separate Arc around the GateGuard.

This PR does not change the amount of shared mutable state.

So, all in all, it should be a net positive, albeit probably not
noticeable
with our small non-NUMA instances and generally high CPU usage per
request.
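
A minimal sketch of the before/after shape (types are stand-ins, not the real `timeline::handle` definitions):

```rust
use std::sync::Arc;

struct Timeline;
struct GateGuard;

/// Before: the gate guard was special-cased, so a cached entry carried two
/// separately refcounted allocations.
struct CachedBefore {
    timeline: Arc<Timeline>,
    gate_guard: Arc<GateGuard>,
}

/// After: everything the handler needs is bundled behind a single Arc, so
/// `Cache::get` clones exactly one refcount and no extra allocation is needed
/// for the gate guard.
struct TimelineBundle {
    timeline: Arc<Timeline>,
    gate_guard: GateGuard,
}

type CachedAfter = Arc<TimelineBundle>;
```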

# Reviewing

To understand the refactoring logistics, look at the changes to the unit
test types first.
Then read the improved module doc comment.
Then the remaining changes.

In the future, we could rename things to be even more generic.
For example, `Types::TenantMgr` could really be a `Types::Resolver`.
And `Types::Timeline` should, to avoid constant confusion in the doc
comment,
be called `Types::Cached` or `Types::Resolved`.
Because the `handle` module, after this PR, really doesn't care that
we're
using it for storing Arcs and GateGuards.

Then again, specificity is sometimes more useful than being generic.
And writing the module doc comment in a totally generic way would
probably also be more confusing than helpful.
2025-02-28 12:31:52 +00:00