mirror of
https://github.com/neondatabase/neon.git
synced 2026-01-15 09:22:55 +00:00
## Problem

Compactions might generate files with exactly the same names as before compaction, due to our naming scheme for layer files. This may already have caused some mess in the system, and is known to cause issues like https://github.com/neondatabase/neon/issues/4088. Therefore, we now handle duplicated layers in the post-compaction process to avoid violating the layer map's duplicate checks.

Related previous work: close https://github.com/neondatabase/neon/pull/4094

Errors reported in: https://github.com/neondatabase/neon/issues/4690, https://github.com/neondatabase/neon/issues/4088

## Summary of changes

If a file already exists in the layer map before compaction, do not modify the layer map and do not delete the file. The file on disk at that point should be the new one, overwritten by the compaction process. This PR also adds a test case with a fail point that produces exactly the same set of files.

Bypassing the update is safe because the produced layer files have the same content as, and are the same representation of, the original files. An alternative would be to remove the duplicate check from the layer map entirely, but it seems better to prevent duplicates in the first place.

---------

Signed-off-by: Alex Chi Z <chi@neon.tech>
Co-authored-by: Konstantin Knizhnik <knizhnik@garret.ru>
Co-authored-by: Heikki Linnakangas <heikki@neon.tech>
Co-authored-by: Joonas Koivunen <joonas@neon.tech>
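The skip-duplicates rule described above can be sketched as follows. This is a minimal illustration only: `finish_compaction`, the set-based layer map, and the layer names are hypothetical, not the actual pageserver code (which is Rust); the sketch only shows the invariant that a layer already present in the map is neither re-inserted nor deleted from disk.

```python
def finish_compaction(layer_map: set[str], new_layers: list[str]) -> list[str]:
    """Hypothetical sketch: insert freshly compacted layers into the layer
    map, skipping any whose name already exists there.

    Skipping is safe because a duplicate name means the compaction produced
    a file with the same content; the file on disk has simply been
    overwritten in place, so neither the map nor the file needs touching.
    """
    inserted = []
    for layer in new_layers:
        if layer in layer_map:
            # Duplicate of a pre-existing layer: leave the map entry and
            # the on-disk file alone instead of violating the duplicate check.
            continue
        layer_map.add(layer)
        inserted.append(layer)
    return inserted


layer_map = {"000000-aaaa"}
inserted = finish_compaction(layer_map, ["000000-aaaa", "000000-bbbb"])
print(inserted)   # only the genuinely new layer was inserted
print(sorted(layer_map))
```

Running the demo shows that the pre-existing layer `000000-aaaa` is skipped while `000000-bbbb` is added, so the map never holds two entries with the same name.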
37 lines
1.3 KiB
Python
import time

import pytest
from fixtures.neon_fixtures import NeonEnvBuilder, PgBin


# Test duplicate layer detection
#
# This test sets a fail point at the end of the first compaction phase:
# after flushing new L1 layers but before deletion of L0 layers,
# it should cause generation of a duplicate L1 layer by compaction after restart.
@pytest.mark.timeout(600)
def test_duplicate_layers(neon_env_builder: NeonEnvBuilder, pg_bin: PgBin):
    env = neon_env_builder.init_start()
    pageserver_http = env.pageserver.http_client()

    # Use aggressive compaction and checkpoint settings
    tenant_id, _ = env.neon_cli.create_tenant(
        conf={
            "checkpoint_distance": f"{1024 ** 2}",
            "compaction_target_size": f"{1024 ** 2}",
            "compaction_period": "5 s",
            "compaction_threshold": "3",
        }
    )

    pageserver_http.configure_failpoints(("compact-level0-phase1-return-same", "return"))

    endpoint = env.endpoints.create_start("main", tenant_id=tenant_id)
    connstr = endpoint.connstr(options="-csynchronous_commit=off")
    pg_bin.run_capture(["pgbench", "-i", "-s1", connstr])

    time.sleep(10)  # let compaction be performed
    assert env.pageserver.log_contains("compact-level0-phase1-return-same")

    pg_bin.run_capture(["pgbench", "-P1", "-N", "-c5", "-T500", "-Mprepared", connstr])