mirror of
https://github.com/neondatabase/neon.git
synced 2026-01-07 21:42:56 +00:00
The code in this change was extracted from #2595 (Heikki's on-demand download draft PR).

High-Level Changes
- New RemoteLayer Type
- On-Demand Download As An Effect Of Page Reconstruction
- Breaking Semantics For Physical Size Metrics

There are several follow-up work items planned. Refer to the Epic issue on GitHub:
https://github.com/neondatabase/neon/issues/2029

closes https://github.com/neondatabase/neon/pull/3013

Co-authored-by: Kirill Bulatov <kirill@neon.tech>
Co-authored-by: Christian Schwarz <christian@neon.tech>

New RemoteLayer Type
====================

Instead of downloading all layers during tenant attach, we create a RemoteLayer instance for each of them and add those to the layer map.

On-Demand Download As An Effect Of Page Reconstruction
======================================================

At the heart of the pageserver is Timeline::get_reconstruct_data(). It traverses the layer map until it has collected all the data it needs to produce the page image. Most code in the code base uses it, through many layers of indirection.

Before this patch, the function would use synchronous filesystem IO to load data from disk-resident layer files if the data was not cached. That is not possible with RemoteLayer, because the layer file has not been downloaded yet. So, we perform the download when get_reconstruct_data gets there, i.e., "on demand".

The mechanics of the download are rather involved because of the infamous async-sync-async sandwich problem that plagues the async Rust world. We use the new PageReconstructResult type to work around this. Its introduction is the cause of a good amount of code churn in this patch. Refer to the block comment on `with_ondemand_download()` for details.

Breaking Semantics For Physical Size Metrics
============================================

We rename the prometheus metric pageserver_current_physical_size to pageserver_resident_physical_size to reflect what this metric actually represents with on-demand download.

This intentionally BREAKS existing grafana dashboards and the cost model data pipeline. Breaking is desirable because the meaning of these metrics has changed with on-demand download. See https://docs.google.com/document/d/12AFpvKY-7FZdR5a4CaD6Ir_rI3QokdCLSPJ6upHxJBo/edit# for how we will handle this breakage.

Likewise, we rename the new billing_metrics's PhysicalSize => ResidentSize. This is not yet used anywhere, so it is not a breaking change.

There is still a field called TimelineInfo::current_physical_size. It is now the sum of the layer sizes in the layer map, regardless of whether the layers are local or remote. To compute that sum, we added a new trait method PersistentLayer::file_size().

When updating the Python tests, we got rid of current_physical_size_non_incremental. An earlier commit already removed it from the OpenAPI spec, so this is not a breaking change.

test_timeline_size.py has grown additional assertions on the resident_physical_size metric.
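To make the on-demand download flow concrete, here is a heavily simplified, hypothetical Python sketch of the idea (the real implementation is Rust inside the pageserver; the class and function names below are illustrative stand-ins, and the async-sync plumbing via PageReconstructResult is omitted entirely):

```python
class LocalLayer:
    """A layer whose file is already resident on local disk (simplified)."""

    def __init__(self, data):
        self.data = data

    def is_remote(self):
        return False

    def get(self, key):
        return self.data.get(key)


class RemoteLayer:
    """Placeholder in the layer map; the layer file has not been downloaded yet."""

    def __init__(self, fetch):
        self.fetch = fetch  # callable simulating a download from remote storage

    def is_remote(self):
        return True

    def download(self):
        # On-demand download: materialize this entry as a resident layer.
        return LocalLayer(self.fetch())


def get_reconstruct_data(layer_map, key):
    # Traverse the layer map; download a remote layer only when the
    # traversal actually needs to read from it.
    for i, layer in enumerate(layer_map):
        if layer.is_remote():
            layer = layer.download()
            layer_map[i] = layer  # keep the now-resident layer in the map
        value = layer.get(key)
        if value is not None:
            return value
    return None


layers = [RemoteLayer(lambda: {"page1": b"image1"}), LocalLayer({"page2": b"image2"})]
assert get_reconstruct_data(layers, "page1") == b"image1"
assert not layers[0].is_remote()  # the remote layer was materialized on demand
```

The key property the sketch shows: nothing is downloaded at attach time; the download happens as a side effect of the first reconstruction that needs the layer.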
68 lines
2.6 KiB
Python
from fixtures.log_helper import log
from fixtures.neon_fixtures import NeonEnvBuilder
from fixtures.types import TenantId, TimelineId
from fixtures.utils import wait_until


def get_only_element(l):  # noqa: E741
    assert len(l) == 1
    return l[0]


# Test that gc and compaction tenant tasks start and stop correctly
def test_tenant_tasks(neon_env_builder: NeonEnvBuilder):
    name = "test_tenant_tasks"
    env = neon_env_builder.init_start()
    client = env.pageserver.http_client()

    def get_state(tenant):
        all_states = client.tenant_list()
        matching = [t for t in all_states if TenantId(t["id"]) == tenant]
        return get_only_element(matching)["state"]

    def delete_all_timelines(tenant: TenantId):
        timelines = [TimelineId(t["timeline_id"]) for t in client.timeline_list(tenant)]
        for t in timelines:
            client.timeline_delete(tenant, t)

    def assert_active(tenant):
        assert get_state(tenant) == "Active"

    # Create tenant, start compute
    tenant, _ = env.neon_cli.create_tenant()
    env.neon_cli.create_timeline(name, tenant_id=tenant)
    pg = env.postgres.create_start(name, tenant_id=tenant)
    assert (
        get_state(tenant) == "Active"
    ), "Pageserver should activate a tenant and start background jobs if timelines are loaded"

    # Stop compute
    pg.stop()

    # Delete all timelines on all tenants.
    #
    # FIXME: we used to check that the background jobs are stopped when all timelines
    # are removed, but we don't stop them anymore. Not sure if this test still makes sense
    # or we should just remove it.
    for tenant_info in client.tenant_list():
        tenant_id = TenantId(tenant_info["id"])
        delete_all_timelines(tenant_id)
        wait_until(10, 0.2, lambda: assert_active(tenant_id))

    # Assert that all tasks finish quickly after tenant is detached
    task_starts = client.get_metric_value('pageserver_tenant_task_events{event="start"}')
    assert task_starts is not None
    assert int(task_starts) > 0
    client.tenant_detach(tenant)
    client.tenant_detach(env.initial_tenant)

    def assert_tasks_finish():
        tasks_started = client.get_metric_value('pageserver_tenant_task_events{event="start"}')
        tasks_ended = client.get_metric_value('pageserver_tenant_task_events{event="stop"}')
        tasks_panicked = client.get_metric_value('pageserver_tenant_task_events{event="panic"}')
        log.info(f"started {tasks_started}, ended {tasks_ended}, panicked {tasks_panicked}")
        assert tasks_started == tasks_ended
        assert tasks_panicked is None or int(tasks_panicked) == 0

    wait_until(10, 0.2, assert_tasks_finish)
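The test above leans on `wait_until` for its eventual-consistency checks: both the per-timeline activity assertion and the final task-shutdown assertion are retried until they pass or attempts run out. As a rough illustration of the assumed semantics (this is a hypothetical sketch, not the actual `fixtures.utils.wait_until`): call the function up to a given number of times, sleeping a fixed interval between failing attempts, and re-raise the last failure on timeout.

```python
import time


def wait_until(number_of_iterations, interval, func):
    """Retry func() until it stops raising.

    Hypothetical sketch of a retry helper: up to `number_of_iterations`
    attempts, sleeping `interval` seconds after each failure. The last
    exception is chained into the timeout error for debuggability.
    """
    last_exc = None
    for _ in range(number_of_iterations):
        try:
            return func()
        except Exception as e:
            last_exc = e
            time.sleep(interval)
    raise RuntimeError(f"timed out waiting for {func}") from last_exc
```

Using assertion functions (like `assert_tasks_finish`) as the retried callable is what makes this pattern compose with plain `assert` statements: a failing assertion is just another exception to retry.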