## Problem
The scrubber would like to check the highest mtime among a tenant's objects
as a safety check during purges. It recently switched to using
GenericRemoteStorage, so we need to expose the mtime in the listing methods.
## Summary of changes
- In `Listing.keys`, return a `ListingObject` including a `last_modified`
field, instead of a bare `RemotePath` (sketched below)
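A minimal sketch of the new listing shape, assuming the field names from this description; the actual `remote_storage` definitions may differ:
```rust
use std::time::SystemTime;

use remote_storage::RemotePath;

// Hypothetical shape: only `last_modified` is confirmed by this change;
// the other fields are illustrative.
pub struct ListingObject {
    pub key: RemotePath,
    pub last_modified: SystemTime,
}

pub struct Listing {
    pub prefixes: Vec<RemotePath>,
    // Previously `Vec<RemotePath>`; each entry now carries its mtime.
    pub keys: Vec<ListingObject>,
}
```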
Co-authored-by: Arpad Müller <arpad-m@users.noreply.github.com>
Part of #8128.
## Problem
The scrubber uses the `scan_metadata` command to flag metadata inconsistencies.
To trust it at scale, we need to make sure the errors it emits reflect
real scenarios. One check the scrubber performs is whether the layers
listed in the latest `index_part.json` are present in the object listing.
Currently, the scrubber does not robustly handle the case where objects
are uploaded or deleted during the scan.
## Summary of changes
**Condition for success:** an object in the index is (1) in the object
listing we acquire from S3, or (2) found via a `HeadObject` request (a
newly uploaded object). A sketch of this check follows the list.
- Add `HeadObject` requests for the layers missing from the object
listing.
- Keep the order of first getting the object listing and then
downloading the layers.
- Update the check to only consider shards with the highest shard count.
- Skip analyzing a timeline if the `deleted_at` tombstone is set in
`index_part.json`.
- Add a new test checking that the scrubber actually detects the
metadata inconsistency.
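A hedged sketch of that check, assuming a `remote_storage`-style `head_object` call; the helper and surrounding names are illustrative, not the actual scrubber code:
```rust
use std::collections::HashSet;

use remote_storage::{DownloadError, GenericRemoteStorage, RemotePath};
use tokio_util::sync::CancellationToken;

// A layer "exists" if it was in the bulk listing (1), or if a point
// lookup confirms it was uploaded after the listing was taken (2).
async fn layer_is_present(
    storage: &GenericRemoteStorage,
    listed_keys: &HashSet<RemotePath>,
    layer: &RemotePath,
    cancel: &CancellationToken,
) -> anyhow::Result<bool> {
    // (1) fast path: present in the listing we already have.
    if listed_keys.contains(layer) {
        return Ok(true);
    }
    // (2) slow path: the object may be newer than the listing.
    match storage.head_object(layer, cancel).await {
        Ok(_) => Ok(true),
        Err(DownloadError::NotFound) => Ok(false),
        Err(e) => Err(e.into()),
    }
}
```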
_Misc_
- A timeline with no ancestor should always have some layers.
- Remove experimental histograms
_Caveat_
- Ancestor layers are not cleaned up until #8308 is implemented. If the
index references ancestor layers that do not exist, the scrubber will
emit false positives.
Signed-off-by: Yuchen Liang <yuchen@neon.tech>
Starts using the `remote_storage` crate in the S3 scrubber for the
`PurgeGarbage` subcommand.
The `remote_storage` crate is generic over various backends and thus
using it gives us the ability to run the scrubber against all supported
backends.
Start with the `PurgeGarbage` subcommand as it doesn't use
`stream_tenants`.
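A minimal sketch of what that genericity buys us, assuming the crate's `from_config` constructor and `list` method; exact signatures may differ between versions:
```rust
use remote_storage::{GenericRemoteStorage, ListingMode, RemoteStorageConfig};
use tokio_util::sync::CancellationToken;

async fn list_everything(config: &RemoteStorageConfig) -> anyhow::Result<()> {
    // One constructor, many backends: S3, Azure, or local FS, selected
    // by the config rather than by code changes in the scrubber.
    let storage = GenericRemoteStorage::from_config(config).await?;
    let cancel = CancellationToken::new();
    let listing = storage
        .list(None, ListingMode::NoDelimiter, None, &cancel)
        .await?;
    println!("found {} keys", listing.keys.len());
    Ok(())
}
```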
Part of #7547.
## Problem
After a shard split, the pageserver leaves the ancestor shard's content
in place. It may be referenced by child shards, but eventually child
shards will de-reference most ancestor layers as they write their own
data and do GC. We would like to eventually clean up those ancestor
layers to reclaim space.
## Summary of changes
- Extend the physical GC command with `--mode=full`, which includes
cleaning up unreferenced ancestor shard layers
- Add test `test_scrubber_physical_gc_ancestors`
- Remove colored log output: in testing it is irritating ANSI escape
spam in logs, and in interactive use it doesn't add much.
- Refactor storage controller API client code out of storcon_client into
a `storage_controller/client` crate
- During physical GC of ancestors, call into the storage controller to
check that the latest shards seen in S3 reflect the latest state of the
tenant and that no shard split is in progress (sketched below).
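A hypothetical sketch of that safety gate; the types and field names are illustrative stand-ins for the storage controller API, not the real client interface:
```rust
// Illustrative stand-in for the controller's tenant description.
struct TenantDescription {
    shard_count: u8,         // current shard count per the controller
    split_in_progress: bool, // a split may be creating new children
}

// Only GC ancestor layers when S3 reflects the controller's latest shard
// layout and no split could concurrently add references to old layers.
fn ancestor_gc_is_safe(desc: &TenantDescription, highest_shard_count_in_s3: u8) -> bool {
    !desc.split_in_progress && desc.shard_count == highest_shard_count_in_s3
}
```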
Part of https://github.com/neondatabase/cloud/issues/14024. k8s does not
always have a volume available for logging, and I'm running into weird
permission errors... While I could spend time figuring out how to create
temp directories for logging, I think it is better to just disable file
logging: k8s containers are ephemeral, and we cannot retrieve anything
from the filesystem after the container is removed.
## Summary of changes
Setting `PAGESERVER_DISABLE_FILE_LOGGING=1` disables file logging.
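A minimal sketch of how such an environment gate could be read at startup; the actual pageserver logging setup is more involved, and treating only the literal value `1` as set is an assumption here:
```rust
// Assumption: the gate triggers only on the literal value "1".
fn file_logging_disabled() -> bool {
    std::env::var("PAGESERVER_DISABLE_FILE_LOGGING")
        .map(|v| v == "1")
        .unwrap_or(false)
}
```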
Signed-off-by: Alex Chi Z <chi@neon.tech>
The find-large-objects scrubber subcommand is quite fast if you run it
in an environment with low latency to the S3 bucket (say an EC2 instance
in the same region). However, the higher the latency gets, the slower
the command becomes. Therefore, add a concurrency parameter and
parallelize the scan. This doesn't change that general relationship, but
at least lets us issue multiple requests in parallel and therefore
hopefully finish faster. A sketch of the approach follows.
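A hedged sketch of the parallelization, using `futures::stream::buffer_unordered`; the shard type and per-shard work are illustrative stand-ins for the scrubber's real code:
```rust
use futures::stream::{self, StreamExt};

// Illustrative stand-ins for the scrubber's types and per-shard work.
struct ShardId(u32);
struct ScanResult;

async fn scan_shard(_shard: ShardId) -> ScanResult {
    // ... one round of S3 requests for this shard ...
    ScanResult
}

// At most `concurrency` scans are in flight at once; results arrive in
// completion order, which is fine for an aggregate report.
async fn scan_all(shards: Vec<ShardId>, concurrency: usize) -> Vec<ScanResult> {
    stream::iter(shards)
        .map(scan_shard)
        .buffer_unordered(concurrency)
        .collect()
        .await
}
```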
Running with concurrency of 64 (default):
```
2024-07-05T17:30:22.882959Z INFO lazy_load_identity [...]
[...]
2024-07-05T17:30:28.289853Z INFO Scanned 500 shards. [...]
```
With concurrency of 1, simulating state before this PR:
```
2024-07-05T17:31:43.375153Z INFO lazy_load_identity [...]
[...]
2024-07-05T17:33:51.987092Z INFO Scanned 500 shards. [...]
```
In other words, listing 500 shards goes from 2:08 minutes down to 6
seconds, roughly a 21x speedup.
Follow-up of #8257, part of #5431
Adds a find-large-objects subcommand to the scrubber to allow listing
layer objects larger than a specific size.
To be used like:
```
AWS_PROFILE=dev REGION=us-east-2 BUCKET=neon-dev-storage-us-east-2 cargo run -p storage_scrubber -- find-large-objects --min-size 250000000 --ignore-deltas
```
Part of #5431
The S3 scrubber contains "S3" in its name, but we want to make it
generic in terms of which storage backend is used (#7547). Therefore,
rename it to "storage scrubber", following the naming scheme of the
existing components "storage broker" and "storage controller".
Part of #7547