Mirror of https://github.com/neondatabase/neon.git, synced 2026-01-06 13:02:55 +00:00
On-Demand Download
The code in this change was extracted from #2595 (Heikki's on-demand download draft PR).

High-Level Changes:
- New RemoteLayer Type
- On-Demand Download As An Effect Of Page Reconstruction
- Breaking Semantics For Physical Size Metrics

There are several follow-up work items planned. Refer to the Epic issue on GitHub: https://github.com/neondatabase/neon/issues/2029

closes https://github.com/neondatabase/neon/pull/3013

Co-authored-by: Kirill Bulatov <kirill@neon.tech>
Co-authored-by: Christian Schwarz <christian@neon.tech>

New RemoteLayer Type
====================

Instead of downloading all layers during tenant attach, we create a RemoteLayer instance for each of them and add them to the layer map.

On-Demand Download As An Effect Of Page Reconstruction
======================================================

At the heart of the pageserver is Timeline::get_reconstruct_data(). It traverses the layer map until it has collected all the data it needs to produce the page image. Most code in the code base uses it, through many layers of indirection.

Before this patch, the function would use synchronous filesystem IO to load data from disk-resident layer files if the data was not cached. That is not possible with RemoteLayer, because the layer file has not been downloaded yet. So, we perform the download when get_reconstruct_data reaches the layer, i.e., "on demand".

The mechanics of the download are rather involved, because of the infamous async-sync-async sandwich problem that plagues the async Rust world. We use the new PageReconstructResult type to work around it; its introduction is the cause of a good amount of code churn in this patch. Refer to the block comment on `with_ondemand_download()` for details.

Breaking Semantics For Physical Size Metrics
============================================

We rename the prometheus metric pageserver_{current,resident}_physical_size to reflect what this metric actually represents with on-demand download. This intentionally BREAKS the existing grafana dashboards and the cost model data pipeline. Breaking is desirable because the meaning of this metric has changed with on-demand download. See https://docs.google.com/document/d/12AFpvKY-7FZdR5a4CaD6Ir_rI3QokdCLSPJ6upHxJBo/edit# for how we will handle this breakage.

Likewise, we rename the new billing_metrics's PhysicalSize => ResidentSize. It is not yet used anywhere, so this is not a breaking change.

There is still a field called TimelineInfo::current_physical_size. It is now the sum of the layer sizes in the layer map, regardless of whether the layers are local or remote. To compute that sum, we added a new trait method, PersistentLayer::file_size().

While updating the Python tests, we got rid of current_physical_size_non_incremental. An earlier commit already removed it from the OpenAPI spec, so this is not a breaking change.

test_timeline_size.py has grown additional assertions on the resident_physical_size metric.
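The commit message describes a synchronous traversal that cannot await a download directly, so it reports back to an async-capable caller, which downloads the missing layer and retries. The following is a minimal Python sketch of that control-flow idea only (the real implementation is Rust, and every name here except get_reconstruct_data is hypothetical, standing in for the PageReconstructResult mechanism):

```python
# Hedged sketch of the "return a marker instead of blocking" pattern the
# commit describes. Hypothetical types; not the pageserver's actual API.
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    local: bool  # False: the layer file exists only in remote storage
    data: str = ""


@dataclass
class NeedsDownload:
    """Traversal hit a remote layer; the caller must download and retry."""
    layer: Layer


@dataclass
class Complete:
    data: list


def get_reconstruct_data(layers):
    """Walk the layers, collecting data, until a non-resident layer is hit."""
    collected = []
    for layer in layers:
        if not layer.local:
            return NeedsDownload(layer)  # cannot do async IO from here
        collected.append(layer.data)
    return Complete(collected)


def download(layer: Layer):
    """Stand-in for the on-demand download from remote storage."""
    layer.local = True
    layer.data = f"contents of {layer.name}"


def reconstruct_with_ondemand_download(layers):
    """Caller side: download whatever the traversal reports, then retry."""
    while True:
        result = get_reconstruct_data(layers)
        if isinstance(result, Complete):
            return result.data
        download(result.layer)  # "on demand": only because a read needed it


layers = [Layer("delta-1", local=True, data="d1"), Layer("image-0", local=False)]
print(reconstruct_with_ondemand_download(layers))  # → ['d1', 'contents of image-0']
```

In the real (Rust) code the retry loop is where the async download is awaited, which is why the traversal itself can stay synchronous.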
committed by Christian Schwarz
parent 31543c4acc
commit 7ff591ffbf
@@ -13,12 +13,15 @@ from fixtures.neon_fixtures import (
     PageserverHttpClient,
     PortDistributor,
     Postgres,
-    assert_no_in_progress_downloads_for_tenant,
+    assert_tenant_status,
+    tenant_exists,
     wait_for_last_record_lsn,
     wait_for_upload,
+    wait_until,
+    wait_while,
 )
 from fixtures.types import Lsn, TenantId, TimelineId
-from fixtures.utils import query_scalar, start_in_background, subprocess_capture, wait_until
+from fixtures.utils import query_scalar, start_in_background, subprocess_capture
 
 
 def assert_abs_margin_ratio(a: float, b: float, margin_ratio: float):
@@ -406,17 +409,13 @@ def test_tenant_relocation(
     # call to attach timeline to new pageserver
     new_pageserver_http.tenant_attach(tenant_id)
 
-    # check that it shows that download is in progress
+    # wait for tenant to finish attaching
     tenant_status = new_pageserver_http.tenant_status(tenant_id=tenant_id)
-    assert tenant_status.get("has_in_progress_downloads"), tenant_status
-
-    # wait until tenant is downloaded
+    assert tenant_status["state"] in ["Attaching", "Active"]
     wait_until(
         number_of_iterations=10,
         interval=1,
-        func=lambda: assert_no_in_progress_downloads_for_tenant(
-            new_pageserver_http, tenant_id
-        ),
+        func=lambda: assert_tenant_status(new_pageserver_http, tenant_id, "Active"),
     )
 
     check_timeline_attached(
@@ -459,9 +458,15 @@ def test_tenant_relocation(
 
     # detach tenant from old pageserver before we check
     # that all the data is there to be sure that old pageserver
-    # is no longer involved, and if it is, we will see the errors
+    # is no longer involved, and if it is, we will see the error
     pageserver_http.tenant_detach(tenant_id)
 
+    # Wait a little, so that the detach operation has time to finish.
+    wait_while(
+        number_of_iterations=100,
+        interval=1,
+        func=lambda: tenant_exists(pageserver_http, tenant_id),
+    )
     post_migration_check(pg_main, 500500, old_local_path_main)
     post_migration_check(pg_second, 1001000, old_local_path_second)
 
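The test changes poll with wait_until / wait_while helpers taking number_of_iterations, interval, and a func. As a rough illustration of that calling convention, here is a minimal sketch of what such helpers could look like (a hypothetical implementation, not the actual fixtures.utils code):

```python
# Hedged sketch of retry/poll helpers matching the signatures used in the
# diff above. Not the real fixtures.utils implementation.
import time


def wait_until(number_of_iterations: int, interval: float, func):
    """Retry func until it stops raising; return its result.

    Raises if func still fails after the last iteration."""
    last_exc = None
    for _ in range(number_of_iterations):
        try:
            return func()
        except Exception as e:  # e.g. a failed assert_tenant_status
            last_exc = e
            time.sleep(interval)
    raise RuntimeError(f"timed out waiting for success: {last_exc}")


def wait_while(number_of_iterations: int, interval: float, func):
    """Poll until func() becomes falsy; raise if it stays truthy."""
    for _ in range(number_of_iterations):
        if not func():
            return
        time.sleep(interval)
    raise RuntimeError("timed out waiting for condition to clear")
```

With helpers of this shape, `wait_until(..., func=lambda: assert_tenant_status(...))` succeeds as soon as the assertion stops raising, and `wait_while(..., func=lambda: tenant_exists(...))` returns once the tenant is gone.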