Compare commits


95 Commits

Author SHA1 Message Date
Christian Schwarz 9d0405b798 Merge remote-tracking branch 'origin/main' into problame/batching-sidecar-task 2024-11-30 00:03:30 +01:00
Christian Schwarz 60b9238383 switch back to serial mode by default 2024-11-29 20:55:22 +01:00
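For orientation: the pipelining_config values that appear in the benchmark ids further down take exactly two shapes, {'mode': 'serial'} and {'max_batch_size': N, 'execution': 'concurrent-futures'|'tasks', 'mode': 'pipelined'}. A minimal sketch of the config type this implies (type and field names are illustrative guesses, not necessarily the pageserver's actual identifiers):

```rust
// Sketch only: a config shape matching the two variants seen in the benchmark
// parameters ({'mode': 'serial'} vs {'mode': 'pipelined', ...}). Names are
// guesses for illustration, not neon's actual types.
#[derive(Debug)]
enum PipeliningConfig {
    Serial,
    Pipelined {
        max_batch_size: usize,
        execution: Execution,
    },
}

#[derive(Debug)]
enum Execution {
    ConcurrentFutures,
    Tasks,
}

fn main() {
    // "switch back to serial mode by default":
    let default_config = PipeliningConfig::Serial;
    let benched = PipeliningConfig::Pipelined {
        max_batch_size: 32,
        execution: Execution::ConcurrentFutures,
    };
    println!("default: {default_config:?}, benchmarked: {benched:?}");
}
```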
Christian Schwarz ce83418aa4 spsc_fold: fix missing wakeup on receiver or sender drop while other side is waiting 2024-11-29 20:54:28 +01:00
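The bug class behind this fix, sketched with a blocking stand-in (the real spsc_fold is an async primitive; everything below is an illustrative analogue, not neon's code): if one side is parked waiting and the peer drops without signalling, the waiter never learns the channel is dead.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Shared<T> {
    state: Mutex<(Option<T>, bool /* peer dropped */)>,
    cond: Condvar,
}

pub struct Sender<T>(Arc<Shared<T>>);
pub struct Receiver<T>(Arc<Shared<T>>);

impl<T> Sender<T> {
    // A real "fold" channel would merge v into an existing value;
    // replacing is enough to show the wakeup problem.
    pub fn send(&self, v: T) {
        self.0.state.lock().unwrap().0 = Some(v);
        self.0.cond.notify_one();
    }
}

impl<T> Receiver<T> {
    // Blocks until a value arrives or the sender is gone.
    pub fn recv(&self) -> Option<T> {
        let mut guard = self.0.state.lock().unwrap();
        loop {
            if let Some(v) = guard.0.take() {
                return Some(v);
            }
            if guard.1 {
                return None; // sender dropped, nothing more to wait for
            }
            guard = self.0.cond.wait(guard).unwrap();
        }
    }
}

// The fix this commit describes: a drop on either side must wake a peer that
// is blocked, otherwise recv() above sleeps forever. (The symmetric Drop for
// Receiver matters too if the sender can block waiting for the slot to drain.)
impl<T> Drop for Sender<T> {
    fn drop(&mut self) {
        self.0.state.lock().unwrap().1 = true;
        self.0.cond.notify_one();
    }
}

pub fn channel<T>() -> (Sender<T>, Receiver<T>) {
    let shared = Arc::new(Shared {
        state: Mutex::new((None, false)),
        cond: Condvar::new(),
    });
    (Sender(shared.clone()), Receiver(shared))
}

fn main() {
    let (tx, rx) = channel::<u32>();
    let waiter = thread::spawn(move || rx.recv());
    drop(tx); // without the Drop impl above, this join would hang
    assert_eq!(waiter.join().unwrap(), None);
}
```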
Christian Schwarz 1cd333ce38 read_messages => batcher 2024-11-29 20:40:51 +01:00
Christian Schwarz 1cf5b1463f benchmark results on hetzner box
--------------------------------------------------------------------------- Benchmark results ---------------------------------------------------------------------------
test_throughput: ids follow test_throughput[release-pg16-50-pipelining_configN-30-<io_conc>-128-<workload> {<pipelining_config>}].
All runs: tablesize_mib 50 MiB, readhead_buffer_size 128; counters.compute_getpage_count equals counters.pageserver_getpage_count in every row.
Columns map to counters.time, counters.pageserver_getpage_count, counters.pageserver_vectored_get_count, counters.pageserver_cpu_seconds_total, perfmetric.batching_factor.

cfg  workload       io_conc  mode       execution           max_batch  time    getpage_count  vectored_get_count  cpu_seconds  batching_factor
0    not batchable        1  serial     -                           -  0.9880     6,403.0000          6,403.0000       0.8227           1.0000
1    not batchable        1  pipelined  concurrent-futures          1  1.0653     6,403.0000          6,403.0000       0.9743           1.0000
2    not batchable        1  pipelined  tasks                       1  1.0309     6,403.0000          6,403.0000       0.8490           1.0000
3    not batchable        1  pipelined  concurrent-futures         32  1.0774     6,403.0000          6,403.0000       0.9896           1.0000
4    not batchable        1  pipelined  tasks                      32  1.0229     6,403.0000          6,403.0000       0.8428           1.0000
5    batchable          100  serial     -                           -  0.7338     6,403.0000          6,403.0000       0.7370           1.0000
6    batchable          100  pipelined  concurrent-futures          1  0.6314     6,403.0000          6,403.0000       0.7970           1.0000
7    batchable          100  pipelined  tasks                       1  0.7640     6,403.0000          6,403.0000       0.7692           1.0000
8    batchable          100  pipelined  concurrent-futures          2  0.4609     6,402.7188          3,206.7344       0.5242           1.9966
9    batchable          100  pipelined  tasks                       2  0.4988     6,402.9500          3,206.9667       0.4975           1.9966
10   batchable          100  pipelined  concurrent-futures          4  0.3504     6,402.0824          1,651.0941       0.3765           3.8775
11   batchable          100  pipelined  tasks                       4  0.3973     6,402.3600          1,651.3733       0.3847           3.8770
12   batchable          100  pipelined  concurrent-futures          8  0.3140     6,401.8632            876.8737       0.3249           7.3008
13   batchable          100  pipelined  tasks                       8  0.3275     6,401.9451            876.9560       0.3145           7.3002
14   batchable          100  pipelined  concurrent-futures         16  0.2905     6,401.7184            490.7282       0.2897          13.0453
15   batchable          100  pipelined  tasks                      16  0.3148     6,401.8632            490.8737       0.3002          13.0418
16   batchable          100  pipelined  concurrent-futures         32  0.2587     6,401.5259            297.5345       0.2539          21.5152
17   batchable          100  pipelined  tasks                      32  0.2804     6,401.6698            297.6792       0.2674          21.5053

test_latency[release-pg16-pipelining_configN-{<pipelining_config>}], all values in ms:

cfg  mode       execution           max_batch  mean   p95    p99    p99.9  p99.99
0    serial     -                           -  0.128  0.153  0.169  0.257  0.588
1    pipelined  concurrent-futures          1  0.128  0.153  0.169  0.283  0.556
2    pipelined  tasks                       1  0.148  0.178  0.193  0.308  0.632
3    pipelined  concurrent-futures         32  0.126  0.150  0.173  0.268  0.631
4    pipelined  tasks                      32  0.130  0.150  0.165  0.267  0.530
2024-11-29 17:42:50 +01:00
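One cross-check on the numbers above: batching_factor is consistent with being simply pageserver_getpage_count / pageserver_vectored_get_count in every row (a relationship inferred from the data itself, not taken from pageserver documentation). A quick verification against three of the concurrent-futures rows:

```rust
// Checks the inferred relationship
// batching_factor = pageserver_getpage_count / pageserver_vectored_get_count
// against three rows of the benchmark table above.
fn main() {
    // (max_batch_size, getpage_count, vectored_get_count, reported factor)
    let rows: [(u32, f64, f64, f64); 3] = [
        (2, 6_402.7188, 3_206.7344, 1.9966),
        (8, 6_401.8632, 876.8737, 7.3008),
        (32, 6_401.5259, 297.5345, 21.5152),
    ];
    for (batch, getpage, vectored, reported) in rows {
        let factor = getpage / vectored;
        assert!((factor - reported).abs() < 1e-3);
        println!("max_batch_size={batch:>2}: {factor:.4} (reported {reported})");
    }
}
```

The run itself then reads cleanly: for the batchable workload at effective_io_concurrency 100, time drops from 0.7338 (serial) to 0.2587 and pageserver CPU from 0.7370 to 0.2539 at max_batch_size 32, while the latency table shows no regression for the concurrent-futures executor.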
Christian Schwarz 2cab051921 fix escaping of lfc path (exposed by the benchmark) 2024-11-29 17:25:54 +01:00
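The subject line gives no detail, but the usual pitfall with paths injected into generated Postgres configuration is quoting; the sketch below illustrates that generic pitfall, not the actual change: string values in postgresql.conf are single-quoted, so an embedded quote must be doubled.

```rust
// Generic illustration (not the actual fix): a path dropped unescaped into
// "name = '<path>'" breaks as soon as it contains a single quote.
fn quote_conf_value(v: &str) -> String {
    format!("'{}'", v.replace('\'', "''"))
}

fn main() {
    assert_eq!(quote_conf_value("/tmp/lfc.cache"), "'/tmp/lfc.cache'");
    // The interesting case: a quote inside the path must be doubled.
    assert_eq!(
        quote_conf_value("/tmp/o'brien/lfc.cache"),
        "'/tmp/o''brien/lfc.cache'"
    );
}
```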
Christian Schwarz 9b65b268ed stop Box'ing stuff & clean up the passing-through of errors (remove enum Batch) 2024-11-29 16:14:03 +01:00
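Purely illustrative of the refactor pattern named in the subject line (none of these types are neon's): a dedicated carrier enum wrapping payload-or-boxed-error usually collapses into a plain Result with no Box.

```rust
#[derive(Debug)]
struct QueryError(String);

// Before: an extra enum plus an allocation on the error path.
#[allow(dead_code)]
enum Batch {
    Messages(Vec<String>),
    Error(Box<QueryError>),
}

// After: Result carries the same information directly, and errors pass
// through `?` without rewrapping at every hop.
fn read_batch(fail: bool) -> Result<Vec<String>, QueryError> {
    if fail {
        Err(QueryError("connection reset".into()))
    } else {
        Ok(vec!["getpage".into()])
    }
}

fn main() {
    match read_batch(false) {
        Ok(msgs) => println!("batch of {} message(s)", msgs.len()),
        Err(e) => println!("failed: {e:?}"),
    }
}
```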
Christian Schwarz 53e18b24fc less repetitive match arms; https://github.com/neondatabase/neon/pull/9851#discussion_r1860535860 2024-11-29 16:12:19 +01:00
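The standard Rust tool for deduplicating arms whose bodies differ only in the matched variant is an or-pattern with a shared binding; illustrative types below, not the actual neon enum from the linked discussion.

```rust
// Illustrative only: three identical arms collapse into one via `|` patterns.
#[allow(dead_code)]
enum Request {
    GetPage { req_id: u64 },
    Exists { req_id: u64 },
    Nblocks { req_id: u64 },
}

fn req_id(r: &Request) -> u64 {
    match r {
        // Before: one arm per variant, each `Request::X { req_id } => *req_id`.
        Request::GetPage { req_id }
        | Request::Exists { req_id }
        | Request::Nblocks { req_id } => *req_id,
    }
}

fn main() {
    assert_eq!(req_id(&Request::Exists { req_id: 7 }), 7);
}
```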
Christian Schwarz 0d28084985 merge brought back test_pageserver_getpage_merge.py 2024-11-29 15:23:47 +01:00
Christian Schwarz 199a4bd6c1 Merge remote-tracking branch 'origin/main' into problame/batching-sidecar-task 2024-11-29 14:26:37 +01:00
Christian Schwarz bab3dd009d Merge remote-tracking branch 'origin/main' into problame/batching-sidecar-task 2024-11-29 14:19:06 +01:00
Christian Schwarz 90ef03c3b5 benchmarks on hetzner box
--------------------------------------------------------------------------- Benchmark results ---------------------------------------------------------------------------
test_throughput, same id pattern and common parameters as the tables above (tablesize_mib 50 MiB, readhead_buffer_size 128; compute_getpage_count equals pageserver_getpage_count in every row):

cfg  workload       io_conc  mode       execution           max_batch  time    getpage_count  vectored_get_count  cpu_seconds  batching_factor
0    not batchable        1  serial     -                           -  0.8920     6,403.0000          6,403.0000       0.7482           1.0000
1    not batchable        1  pipelined  concurrent-futures          1  0.9038     6,403.0000          6,403.0000       0.8488           1.0000
2    not batchable        1  pipelined  tasks                       1  0.9325     6,403.0000          6,403.0000       0.7834           1.0000
3    not batchable        1  pipelined  concurrent-futures         32  0.9282     6,403.0000          6,403.0000       0.8647           1.0000
4    not batchable        1  pipelined  tasks                      32  0.9148     6,403.0000          6,403.0000       0.7734           1.0000
5    batchable          100  serial     -                           -  0.6814     6,403.0000          6,403.0000       0.6663           1.0000
6    batchable          100  pipelined  concurrent-futures          1  0.6394     6,403.0000          6,403.0000       0.7907           1.0000
7    batchable          100  pipelined  tasks                       1  0.7021     6,403.0000          6,403.0000       0.6926           1.0000
8    batchable          100  pipelined  concurrent-futures          2  0.4521     6,402.6818          3,206.6970       0.4967           1.9967
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.execution: tasks
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].counters.time: 0.4691
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,402.7619
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 3,206.7778
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,402.7619
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.4438
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'execution': 'tasks', 'mode': 'pipelined'}].perfmetric.batching_factor: 1.9966
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.execution: concurrent-futures
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.3613
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,402.1585
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 1,651.1707
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,402.1585
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3716
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 3.8773
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.execution: tasks
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].counters.time: 0.3925
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,402.3289
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 1,651.3421
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,402.3289
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3608
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'execution': 'tasks', 'mode': 'pipelined'}].perfmetric.batching_factor: 3.8770
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.execution: concurrent-futures
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.3002
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,401.7879
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 876.7980
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.7879
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.2960
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 7.3013
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.execution: tasks
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].counters.time: 0.3337
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,401.9888
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 877.0112
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.9888
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.3044
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'execution': 'tasks', 'mode': 'pipelined'}].perfmetric.batching_factor: 7.2998
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.execution: concurrent-futures
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.2844
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,401.6857
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 490.6952
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.6857
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.2677
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 13.0462
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.execution: tasks
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].counters.time: 0.2959
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,401.7525
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 490.7624
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.7525
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.2654
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'execution': 'tasks', 'mode': 'pipelined'}].perfmetric.batching_factor: 13.0445
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.execution: concurrent-futures
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.time: 0.2662
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,401.5804
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 297.5893
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.5804
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.2445
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.5115
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.execution: tasks
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].pipelining_config.mode: pipelined
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].counters.time: 0.2798
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_getpage_count: 6,401.6542
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_vectored_get_count: 297.6636
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].counters.compute_getpage_count: 6,401.6542
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].counters.pageserver_cpu_seconds_total: 0.2504
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].perfmetric.batching_factor: 21.5063
test_latency[release-pg16-pipelining_config0-{'mode': 'serial'}].latency_mean: 0.123 ms
test_latency[release-pg16-pipelining_config0-{'mode': 'serial'}].latency_percentiles.p95: 0.159 ms
test_latency[release-pg16-pipelining_config0-{'mode': 'serial'}].latency_percentiles.p99: 0.170 ms
test_latency[release-pg16-pipelining_config0-{'mode': 'serial'}].latency_percentiles.p99.9: 0.306 ms
test_latency[release-pg16-pipelining_config0-{'mode': 'serial'}].latency_percentiles.p99.99: 0.579 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_mean: 0.122 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p95: 0.163 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99: 0.181 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.290 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.623 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'execution': 'tasks', 'mode': 'pipelined'}].latency_mean: 0.143 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p95: 0.175 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p99: 0.188 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.322 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.636 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_mean: 0.122 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p95: 0.161 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99: 0.175 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.281 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'execution': 'concurrent-futures', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.605 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].latency_mean: 0.117 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p95: 0.132 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p99: 0.154 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p99.9: 0.402 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'execution': 'tasks', 'mode': 'pipelined'}].latency_percentiles.p99.99: 0.569 ms
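
Note on reading the rows above: in every run, perfmetric.batching_factor equals counters.pageserver_getpage_count divided by counters.pageserver_vectored_get_count, i.e. the mean number of getpage requests served by a single vectored get. A minimal sketch of that relation, with values copied from the max_batch_size=32 / concurrent-futures row; the formula is inferred from the reported numbers, not lifted from the benchmark source:

# Inferred relation: batching_factor = getpage_count / vectored_get_count.
# The two values below are copied from the max_batch_size=32 run above.
getpage_count = 6401.5804        # counters.pageserver_getpage_count
vectored_get_count = 297.5893    # counters.pageserver_vectored_get_count
batching_factor = getpage_count / vectored_get_count
print(f"{batching_factor:.4f}")  # 21.5115, matching perfmetric.batching_factor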
2024-11-29 13:50:28 +01:00
Christian Schwarz
dfcbb139fb the None configuration in the benchmark would use the default instead of the serial configuration; fix that
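A minimal sketch of the intended selection logic (illustrative names, not the actual test code; assumes the benchmark passes a dict-or-None pipelining_config parameter):

# Before the fix: passing None fell through to the pageserver's built-in
# default. After: None resolves to the explicit serial configuration.
def effective_pipelining_config(pipelining_config):
    if pipelining_config is None:
        return {"mode": "serial"}  # assumed serial fallback per the commit message
    return pipelining_config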
2024-11-29 13:35:24 +01:00
Christian Schwarz
27c72e4ff3 benchmark on Hetzner box
--------------------------------------------------------------------------- Benchmark results ---------------------------------------------------------------------------
test_throughput[release-pg16-50-None-30-1-128-not batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-1-128-not batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-1-128-not batchable None].effective_io_concurrency: 1
test_throughput[release-pg16-50-None-30-1-128-not batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.time: 0.8864
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_cpu_seconds_total: 0.8297
test_throughput[release-pg16-50-None-30-1-128-not batchable None].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.9974
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.9223
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.9171
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7762
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.8903
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.8303
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.9611
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.8074
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-None-30-100-128-batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-100-128-batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-100-128-batchable None].effective_io_concurrency: 100
test_throughput[release-pg16-50-None-30-100-128-batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.time: 0.2695
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_getpage_count: 6,401.5946
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_vectored_get_count: 297.5946
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.compute_getpage_count: 6,401.5946
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_cpu_seconds_total: 0.2469
test_throughput[release-pg16-50-None-30-100-128-batchable None].perfmetric.batching_factor: 21.5111
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.6611
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.8180
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.7554
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7308
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.4535
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,402.6364
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 3,206.6515
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,402.6364
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.4974
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.9967
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.4630
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.7656
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 3,206.7812
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.7656
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.4397
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.9966
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.3465
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,402.0581
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 1,651.0698
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,402.0581
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3615
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 3.8775
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3628
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.1585
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 1,651.1707
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.1585
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3394
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 3.8773
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2961
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.7525
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 876.7525
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.7525
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.2923
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 7.3017
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3317
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.9667
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 876.9778
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.9667
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3008
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 7.3000
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2885
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.6893
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 490.7087
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.6893
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.2701
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 13.0458
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3042
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.8061
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 490.8163
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.8061
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.2699
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 13.0432
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2704
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.6091
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 297.6182
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.6091
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.2476
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 21.5095
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.2706
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.5946
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 297.5946
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.5946
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.2425
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 21.5111
test_latency[release-pg16-None-None].latency_mean: 0.136 ms
test_latency[release-pg16-None-None].latency_percentiles.p95: 0.172 ms
test_latency[release-pg16-None-None].latency_percentiles.p99: 0.194 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.9: 0.319 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.99: 0.637 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.121 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.150 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.168 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.317 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.607 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.124 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.161 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.170 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.294 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.592 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.122 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.157 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.170 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.267 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.606 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.125 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.161 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.170 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.287 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.610 ms
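Note on reading these numbers (editor's sketch, not part of the harness output): perfmetric.batching_factor is consistent with counters.pageserver_getpage_count divided by counters.pageserver_vectored_get_count, i.e. the mean number of getpage requests served per vectored get. A minimal Python check, using the pipelining_config13 'tasks' row above (the counter values are copied from the output; the snippet itself is illustrative, not harness code):

    # Illustrative only: recompute the reported batching factor from the counters.
    getpage_count = 6401.9667         # counters.pageserver_getpage_count
    vectored_get_count = 876.9778     # counters.pageserver_vectored_get_count
    batching_factor = getpage_count / vectored_get_count
    print(round(batching_factor, 4))  # 7.3, matching perfmetric.batching_factor: 7.3000

The same ratio reproduces the other rows, e.g. 6401.6893 / 490.7087 ~= 13.0458 for the max_batch_size 16 concurrent-futures config.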
2024-11-29 12:08:05 +01:00
Christian Schwarz
9a5611a5ef merge reader&batcher stages, update docs 2024-11-29 11:39:16 +01:00
Christian Schwarz
f44bfcc7f4 benchmark on hetzner runner
-------------------------------------------------------------------------------------------------------------------- Benchmark results ---------------------------------------------------------------------------------------------------------------------
test_throughput[release-pg16-50-None-30-1-128-not batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-1-128-not batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-1-128-not batchable None].effective_io_concurrency: 1
test_throughput[release-pg16-50-None-30-1-128-not batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.time: 0.8905
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_cpu_seconds_total: 0.8585
test_throughput[release-pg16-50-None-30-1-128-not batchable None].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.8965
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.8694
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.9287
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7891
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.8859
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.8582
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.9158
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7703
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-None-30-100-128-batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-100-128-batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-100-128-batchable None].effective_io_concurrency: 100
test_throughput[release-pg16-50-None-30-100-128-batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.time: 0.2526
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_getpage_count: 6,401.5000
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_vectored_get_count: 307.8475
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.compute_getpage_count: 6,401.5000
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_cpu_seconds_total: 0.2999
test_throughput[release-pg16-50-None-30-100-128-batchable None].perfmetric.batching_factor: 20.7944
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.6182
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.7483
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.6925
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.6863
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.4250
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,402.5286
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 3,207.5714
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,402.5286
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.5241
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.9961
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.4981
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.7500
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 3,300.7500
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.7500
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.4903
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.9398
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.3438
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,402.0345
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 1,660.0230
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,402.0345
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.4123
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 3.8566
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3766
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.2405
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 1,752.2405
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.2405
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3699
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 3.6537
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.3048
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.8061
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 886.7755
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.8061
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3583
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 7.2192
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3517
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.0824
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 978.0941
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.0824
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3421
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 6.5455
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2682
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.5946
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 500.5495
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.5946
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3187
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 12.7891
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3163
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.8830
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 591.5851
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.8830
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3117
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 10.8216
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2504
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.4874
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 307.8067
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.4874
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.2972
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 20.7971
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.2899
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.7184
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 398.4466
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.7184
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.2901
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 16.0667
test_latency[release-pg16-None-None].latency_mean: 0.138 ms
test_latency[release-pg16-None-None].latency_percentiles.p95: 0.173 ms
test_latency[release-pg16-None-None].latency_percentiles.p99: 0.193 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.9: 0.302 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.99: 0.667 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.116 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.137 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.165 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.406 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.560 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.140 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.172 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.189 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.315 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.705 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.133 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.170 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.192 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.337 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.653 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.128 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.166 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.176 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.284 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.616 ms
2024-11-28 22:40:53 +01:00
Christian Schwarz
a2a3613185 reintroduce task-based execution 2024-11-28 20:50:06 +01:00
Christian Schwarz
6bd39f95f5 run benchmark on hetzner runner
-------------------------------------------------------------------------------------------------------------------- Benchmark results ---------------------------------------------------------------------------------------------------------------------
test_throughput[release-pg16-50-None-30-1-128-not batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-1-128-not batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-1-128-not batchable None].effective_io_concurrency: 1
test_throughput[release-pg16-50-None-30-1-128-not batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.time: 0.8905
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_cpu_seconds_total: 0.8633
test_throughput[release-pg16-50-None-30-1-128-not batchable None].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].counters.time: 0.9195
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].counters.pageserver_cpu_seconds_total: 0.8925
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].counters.time: 0.8724
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].counters.pageserver_cpu_seconds_total: 0.8406
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 32}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-None-30-100-128-batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-100-128-batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-100-128-batchable None].effective_io_concurrency: 100
test_throughput[release-pg16-50-None-30-100-128-batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.time: 0.2576
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_getpage_count: 6,401.5259
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_vectored_get_count: 307.8534
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.compute_getpage_count: 6,401.5259
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_cpu_seconds_total: 0.3043
test_throughput[release-pg16-50-None-30-100-128-batchable None].perfmetric.batching_factor: 20.7941
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].counters.time: 0.6187
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].counters.pageserver_cpu_seconds_total: 0.7473
test_throughput[release-pg16-50-pipelining_config4-30-100-128-batchable {'max_batch_size': 1}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].counters.time: 0.4419
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].counters.pageserver_getpage_count: 6,402.6418
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].counters.pageserver_vectored_get_count: 3,207.7015
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].counters.compute_getpage_count: 6,402.6418
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].counters.pageserver_cpu_seconds_total: 0.5391
test_throughput[release-pg16-50-pipelining_config5-30-100-128-batchable {'max_batch_size': 2}].perfmetric.batching_factor: 1.9960
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].counters.time: 0.3569
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].counters.pageserver_getpage_count: 6,402.1071
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].counters.pageserver_vectored_get_count: 1,660.0952
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].counters.compute_getpage_count: 6,402.1071
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].counters.pageserver_cpu_seconds_total: 0.4244
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 4}].perfmetric.batching_factor: 3.8565
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].counters.time: 0.2977
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].counters.pageserver_getpage_count: 6,401.7700
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].counters.pageserver_vectored_get_count: 886.6900
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].counters.compute_getpage_count: 6,401.7700
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].counters.pageserver_cpu_seconds_total: 0.3511
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 8}].perfmetric.batching_factor: 7.2199
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].counters.time: 0.2697
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].counters.pageserver_getpage_count: 6,401.5946
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].counters.pageserver_vectored_get_count: 500.5766
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].counters.compute_getpage_count: 6,401.5946
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].counters.pageserver_cpu_seconds_total: 0.3195
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 16}].perfmetric.batching_factor: 12.7884
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].counters.time: 0.2548
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].counters.pageserver_getpage_count: 6,401.5128
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].counters.pageserver_vectored_get_count: 307.7692
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].counters.compute_getpage_count: 6,401.5128
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].counters.pageserver_cpu_seconds_total: 0.3015
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 32}].perfmetric.batching_factor: 20.7997
test_latency[release-pg16-None-None].latency_mean: 0.127 ms
test_latency[release-pg16-None-None].latency_percentiles.p95: 0.166 ms
test_latency[release-pg16-None-None].latency_percentiles.p99: 0.187 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.9: 0.292 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.99: 0.624 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1}].latency_mean: 0.139 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1}].latency_percentiles.p95: 0.175 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1}].latency_percentiles.p99: 0.200 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1}].latency_percentiles.p99.9: 0.444 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1}].latency_percentiles.p99.99: 0.658 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 32}].latency_mean: 0.119 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 32}].latency_percentiles.p95: 0.155 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 32}].latency_percentiles.p99: 0.172 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 32}].latency_percentiles.p99.9: 0.267 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 32}].latency_percentiles.p99.99: 0.587 ms
2024-11-28 20:24:01 +01:00
Christian Schwarz
07358dea89 converge on approach that pushes read Result through pipeline 2024-11-28 20:06:15 +01:00
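
[Editor's note: for illustration only — a minimal sketch of the idea named in the commit message above, not the actual pageserver code. It assumes tokio; the Response/ReadError types are made up. The point: the read stage sends a Result through the same ordered channel as successful responses, so a failure reaches the downstream stage in request order instead of tearing the pipeline down out of band.]

    use tokio::sync::mpsc;

    #[derive(Debug)]
    struct Response(u32); // hypothetical stand-in for a getpage response

    #[derive(Debug)]
    struct ReadError(String); // hypothetical stand-in for a read failure

    #[tokio::main]
    async fn main() {
        // One ordered channel carries both successes and failures.
        let (tx, mut rx) = mpsc::channel::<Result<Response, ReadError>>(32);

        let reader = tokio::spawn(async move {
            for i in 0..4u32 {
                let res = if i == 2 {
                    Err(ReadError(format!("read {i} failed")))
                } else {
                    Ok(Response(i))
                };
                if tx.send(res).await.is_err() {
                    break; // downstream hung up; stop reading
                }
            }
        });

        // Downstream stage: flushes responses (or errors) in order.
        while let Some(msg) = rx.recv().await {
            match msg {
                Ok(r) => println!("flush response {r:?}"),
                Err(e) => println!("flush error {e:?}"),
            }
        }
        reader.await.unwrap();
    }
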
Christian Schwarz
82e1fa3f83 WIP 2024-11-27 12:31:56 +01:00
Christian Schwarz
7fb3d95596 review & identified a cast that isn't handled, document that 2024-11-27 10:33:53 +01:00
Christian Schwarz
e0123c8a80 explain the pipeline cancellation story 2024-11-27 10:13:51 +01:00
Christian Schwarz
18ffaba975 fix pipeline cancellation 2024-11-26 20:44:36 +01:00
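
[Editor's note: one plausible shape of such a cancellation story — a hedged sketch, not the actual fix, assuming tokio plus tokio-util's CancellationToken. Each stage selects on a shared token, so cancelling it unwinds batcher and executor together instead of leaving one stage blocked on a channel.]

    use tokio::sync::mpsc;
    use tokio_util::sync::CancellationToken;

    #[tokio::main]
    async fn main() {
        let cancel = CancellationToken::new();
        let (tx, mut rx) = mpsc::channel::<u32>(8);

        let batcher = {
            let cancel = cancel.clone();
            tokio::spawn(async move {
                for i in 0..u32::MAX {
                    tokio::select! {
                        _ = cancel.cancelled() => break,
                        res = tx.send(i) => {
                            if res.is_err() { break; } // executor gone
                        }
                    }
                }
            })
        };

        let executor = {
            let cancel = cancel.clone();
            tokio::spawn(async move {
                loop {
                    tokio::select! {
                        _ = cancel.cancelled() => break,
                        item = rx.recv() => match item {
                            Some(i) => println!("executor: {i}"),
                            None => break, // batcher gone
                        },
                    }
                }
            })
        };

        cancel.cancel(); // e.g. connection close or shutdown request
        batcher.await.unwrap();
        executor.await.unwrap();
    }
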
Christian Schwarz
41ddc6772c benchmark
non-package-mode-py3.10christian@neon-hetzner-dev-christian:[~/src/neon]: DEFAULT_PG_VERSION=16 BUILD_TYPE=release poetry run pytest --alluredir ~/tmp/alluredir --clean-alluredir 'test_runner/performance/pageserver/test_page_service_batching.py' --maxfail=1

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Benchmark results ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_throughput[release-pg16-50-None-30-1-128-not batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-1-128-not batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-1-128-not batchable None].effective_io_concurrency: 1
test_throughput[release-pg16-50-None-30-1-128-not batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.time: 0.9443
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-30-1-128-not batchable None].counters.pageserver_cpu_seconds_total: 0.9010
test_throughput[release-pg16-50-None-30-1-128-not batchable None].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.9273
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.8844
test_throughput[release-pg16-50-pipelining_config1-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.9105
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7669
test_throughput[release-pg16-50-pipelining_config2-30-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.8828
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.8512
test_throughput[release-pg16-50-pipelining_config3-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.9431
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7971
test_throughput[release-pg16-50-pipelining_config4-30-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-None-30-100-128-batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-30-100-128-batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-30-100-128-batchable None].effective_io_concurrency: 100
test_throughput[release-pg16-50-None-30-100-128-batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.time: 0.2604
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_getpage_count: 6,401.5391
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_vectored_get_count: 307.7217
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.compute_getpage_count: 6,401.5391
test_throughput[release-pg16-50-None-30-100-128-batchable None].counters.pageserver_cpu_seconds_total: 0.3023
test_throughput[release-pg16-50-None-30-100-128-batchable None].perfmetric.batching_factor: 20.8030
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.6268
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.7596
test_throughput[release-pg16-50-pipelining_config6-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.6696
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.6684
test_throughput[release-pg16-50-pipelining_config7-30-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.4530
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,402.6515
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 3,207.7121
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,402.6515
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.5427
test_throughput[release-pg16-50-pipelining_config8-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.9960
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.5434
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 3,301.0000
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.5318
test_throughput[release-pg16-50-pipelining_config9-30-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.9397
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.3455
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,402.0581
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 1,660.0349
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,402.0581
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.4078
test_throughput[release-pg16-50-pipelining_config10-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 3.8566
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3785
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.2785
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 1,752.2785
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.2785
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3705
test_throughput[release-pg16-50-pipelining_config11-30-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 3.6537
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.3063
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.8247
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 886.7629
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.8247
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3537
test_throughput[release-pg16-50-pipelining_config12-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 7.2193
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3365
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.9888
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 978.0000
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.9888
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3256
test_throughput[release-pg16-50-pipelining_config13-30-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 6.5460
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2730
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.6239
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 500.2936
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.6239
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3162
test_throughput[release-pg16-50-pipelining_config14-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 12.7957
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3091
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.8438
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 591.5312
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.8438
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3022
test_throughput[release-pg16-50-pipelining_config15-30-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 10.8225
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2609
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.5391
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 307.6174
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.5391
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3014
test_throughput[release-pg16-50-pipelining_config16-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 20.8101
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.2910
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.7184
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 398.4660
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.7184
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.2903
test_throughput[release-pg16-50-pipelining_config17-30-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 16.0659
test_latency[release-pg16-None-None].latency_mean: 0.120 ms
test_latency[release-pg16-None-None].latency_percentiles.p95: 0.151 ms
test_latency[release-pg16-None-None].latency_percentiles.p99: 0.172 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.9: 0.276 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.99: 0.609 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.128 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.167 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.186 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.294 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.642 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.136 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.170 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.185 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.294 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.623 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.117 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.156 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.174 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.279 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.598 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.121 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.141 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.156 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.256 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.518 ms
2024-11-26 13:56:46 +01:00
Christian Schwarz
a23abb2cc0 adopt spsc_fold 2024-11-26 13:30:40 +01:00
Christian Schwarz
9bf2618c45 implement spsc_fold 2024-11-26 13:27:28 +01:00
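
[Editor's note: a rough illustration of what an spsc_fold is — a hypothetical sketch, not the real implementation, assuming tokio's Notify. It is a single-producer/single-consumer slot whose send folds the new value into any value the receiver has not yet taken, so a slow consumer observes one combined item rather than an unbounded queue.]

    use std::sync::{Arc, Mutex};
    use tokio::sync::Notify;

    struct SpscFold<T> {
        slot: Mutex<Option<T>>,
        notify: Notify,
    }

    impl<T> SpscFold<T> {
        fn new() -> Arc<Self> {
            Arc::new(Self { slot: Mutex::new(None), notify: Notify::new() })
        }

        /// Store `value`, folding it into a not-yet-consumed value if present.
        fn send(&self, value: T, fold: impl FnOnce(T, T) -> T) {
            let mut slot = self.slot.lock().unwrap();
            *slot = Some(match slot.take() {
                Some(prev) => fold(prev, value),
                None => value,
            });
            self.notify.notify_one(); // wake the receiver if it is waiting
        }

        /// Wait until a value is available, then take it.
        async fn recv(&self) -> T {
            loop {
                let notified = self.notify.notified();
                if let Some(v) = self.slot.lock().unwrap().take() {
                    return v;
                }
                notified.await; // a stored permit prevents missed wakeups
            }
        }
    }

    #[tokio::main]
    async fn main() {
        let chan = SpscFold::new();
        let tx = Arc::clone(&chan);
        let producer = tokio::spawn(async move {
            for batch in [vec![1], vec![2, 3], vec![4]] {
                // Fold = concatenate: unconsumed batches merge into one.
                tx.send(batch, |mut acc, next| { acc.extend(next); acc });
            }
        });
        producer.await.unwrap();
        println!("received: {:?}", chan.recv().await); // [1, 2, 3, 4]
    }
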
Christian Schwarz
99b664c9ed expand fix to tasks mode; add some comments 2024-11-25 11:51:58 +01:00
Christian Schwarz
b9477aa945 fix: batcher wouldn't shut down after executor exits 2024-11-25 11:28:30 +01:00
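
[Editor's note: a minimal sketch of the shutdown path this commit describes, with hypothetical stage names and assuming tokio. When the executor exits and drops its receiver, the batcher's next send fails — that error is the batcher's signal to shut down rather than spin forever.]

    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::channel::<u32>(8);

        let batcher = tokio::spawn(async move {
            let mut i = 0u32;
            loop {
                if tx.send(i).await.is_err() {
                    // Executor dropped its receiver: propagate the shutdown.
                    println!("batcher: executor gone, exiting");
                    return;
                }
                i += 1;
            }
        });

        for _ in 0..3 {
            println!("executor: got {:?}", rx.recv().await);
        }
        drop(rx); // executor exits (e.g. error or connection close)

        batcher.await.unwrap(); // batcher observes the hangup and terminates
    }
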
Christian Schwarz
0bb037240d logging to debug test_pageserver_restarts_under_workload 2024-11-25 10:36:48 +01:00
Christian Schwarz
6ec5ac1ac8 DO NOT MERGE: enable pipelining (32,concurrent-futures) by default so we get test suite coverage 2024-11-25 09:16:09 +01:00
Christian Schwarz
bd31f42f52 run benchmarks 2024-11-22 15:06:11 +01:00
Christian Schwarz
990e44dda4 longer target runtime 2024-11-22 14:37:01 +01:00
Christian Schwarz
cbe18393d3 Benchmark results (metal node, AMD Ryzen 9 7950X3D 16-Core Processor)
------------------------------------------ Benchmark results -------------------------------------------
test_throughput[release-pg16-50-None-5-1-128-not batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-5-1-128-not batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-5-1-128-not batchable None].effective_io_concurrency: 1
test_throughput[release-pg16-50-None-5-1-128-not batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-5-1-128-not batchable None].counters.time: 0.8347
test_throughput[release-pg16-50-None-5-1-128-not batchable None].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-5-1-128-not batchable None].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-None-5-1-128-not batchable None].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-5-1-128-not batchable None].counters.pageserver_cpu_seconds_total: 0.7400
test_throughput[release-pg16-50-None-5-1-128-not batchable None].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.8589
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.8740
test_throughput[release-pg16-50-pipelining_config1-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.7986
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7067
test_throughput[release-pg16-50-pipelining_config2-5-1-128-not batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.7606
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.7700
test_throughput[release-pg16-50-pipelining_config3-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 1
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.7943
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.7083
test_throughput[release-pg16-50-pipelining_config4-5-1-128-not batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-None-5-100-128-batchable None].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-None-5-100-128-batchable None].pipelining_enabled: 0
test_throughput[release-pg16-50-None-5-100-128-batchable None].effective_io_concurrency: 100
test_throughput[release-pg16-50-None-5-100-128-batchable None].readhead_buffer_size: 128
test_throughput[release-pg16-50-None-5-100-128-batchable None].counters.time: 0.5889
test_throughput[release-pg16-50-None-5-100-128-batchable None].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-5-100-128-batchable None].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-None-5-100-128-batchable None].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-None-5-100-128-batchable None].counters.pageserver_cpu_seconds_total: 0.5875
test_throughput[release-pg16-50-None-5-100-128-batchable None].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.5360
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.6700
test_throughput[release-pg16-50-pipelining_config6-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 1
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.6398
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,403.0000
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.6386
test_throughput[release-pg16-50-pipelining_config7-5-100-128-batchable {'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.0000
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.4210
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,402.4545
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 3,207.0909
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,402.4545
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.5109
test_throughput[release-pg16-50-pipelining_config8-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 1.9963
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 2
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.4619
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.7000
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 3,300.7000
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.7000
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.4650
test_throughput[release-pg16-50-pipelining_config9-5-100-128-batchable {'max_batch_size': 2, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 1.9398
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.3333
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.9286
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 1,659.2857
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.9286
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3914
test_throughput[release-pg16-50-pipelining_config10-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 3.8582
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 4
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3526
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.1429
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 1,752.1429
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.1429
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3507
test_throughput[release-pg16-50-pipelining_config11-5-100-128-batchable {'max_batch_size': 4, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 3.6539
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2921
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.7647
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 885.9412
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.6471
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.3353
test_throughput[release-pg16-50-pipelining_config12-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 7.2259
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 8
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.3317
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,402.0000
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 978.0000
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,402.0000
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.3300
test_throughput[release-pg16-50-pipelining_config13-5-100-128-batchable {'max_batch_size': 8, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 6.5460
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2409
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.5000
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 499.8000
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.5000
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.2820
test_throughput[release-pg16-50-pipelining_config14-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 12.8081
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 16
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.2807
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.5882
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 590.6471
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.5882
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.2882
test_throughput[release-pg16-50-pipelining_config15-5-100-128-batchable {'max_batch_size': 16, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 10.8383
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].pipelining_config.protocol_pipelining_mode: concurrent-futures
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.time: 0.2510
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_getpage_count: 6,401.4211
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_vectored_get_count: 307.3684
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.compute_getpage_count: 6,401.4211
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].counters.pageserver_cpu_seconds_total: 0.2889
test_throughput[release-pg16-50-pipelining_config16-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].perfmetric.batching_factor: 20.8265
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].tablesize_mib: 50 MiB
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_enabled: 1
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].effective_io_concurrency: 100
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].readhead_buffer_size: 128
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.max_batch_size: 32
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].pipelining_config.protocol_pipelining_mode: tasks
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.time: 0.2517
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_getpage_count: 6,401.4211
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_vectored_get_count: 397.4737
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.compute_getpage_count: 6,401.4211
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].counters.pageserver_cpu_seconds_total: 0.2658
test_throughput[release-pg16-50-pipelining_config17-5-100-128-batchable {'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].perfmetric.batching_factor: 16.1053
test_latency[release-pg16-None-None].latency_mean: 0.117 ms
test_latency[release-pg16-None-None].latency_percentiles.p95: 0.153 ms
test_latency[release-pg16-None-None].latency_percentiles.p99: 0.162 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.9: 0.248 ms
test_latency[release-pg16-None-None].latency_percentiles.p99.99: 0.429 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.127 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.161 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.181 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.295 ms
test_latency[release-pg16-pipelining_config1-{'max_batch_size': 1, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.553 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.123 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.162 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.174 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.264 ms
test_latency[release-pg16-pipelining_config2-{'max_batch_size': 1, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.499 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_mean: 0.120 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p95: 0.159 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99: 0.175 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.9: 0.258 ms
test_latency[release-pg16-pipelining_config3-{'max_batch_size': 32, 'protocol_pipelining_mode': 'concurrent-futures'}].latency_percentiles.p99.99: 0.522 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_mean: 0.116 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p95: 0.131 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99: 0.150 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.9: 0.404 ms
test_latency[release-pg16-pipelining_config4-{'max_batch_size': 32, 'protocol_pipelining_mode': 'tasks'}].latency_percentiles.p99.99: 0.499 ms
2024-11-22 14:33:55 +01:00
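Note on the numbers above: the reported perfmetric.batching_factor is consistent with the ratio pageserver_getpage_count / pageserver_vectored_get_count, i.e. how many getpage requests were served per vectored get execution. For example, for pipelining_config8 (max_batch_size 2, concurrent-futures): 6,402.4545 / 3,207.0909 ≈ 1.9963, matching the reported batching_factor.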
Christian Schwarz
5796f3ba57 fix test 2024-11-22 14:27:59 +01:00
Christian Schwarz
11dc7135b1 rename test file to test_page_service_batching 2024-11-22 13:19:12 +01:00
Christian Schwarz
d6e5a46015 eliminate the word batch and stale doc comments 2024-11-22 12:46:52 +01:00
Christian Schwarz
a28c54dac1 cosmetics 2024-11-22 12:44:31 +01:00
Christian Schwarz
ef502f8311 remove async-timer heritage 2024-11-22 12:43:55 +01:00
Christian Schwarz
39e45f9e51 improve tests 2024-11-22 12:27:38 +01:00
Christian Schwarz
c1e8347160 make configurable whether pipelining should use concurrent futures or tasks 2024-11-22 11:27:23 +01:00
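For orientation, the two pipelining execution modes made configurable here differ in where the pipeline stages run. A minimal sketch of the distinction, using an illustrative mpsc channel between a request-reading stage and an executing stage (the names are made up for the sketch, not the pageserver's actual types):

use tokio::sync::mpsc;

async fn read_requests(tx: mpsc::Sender<u64>) {
    for i in 0..100u64 {
        if tx.send(i).await.is_err() {
            break; // receiver gone
        }
    }
}

async fn execute_requests(mut rx: mpsc::Receiver<u64>) {
    while let Some(req) = rx.recv().await {
        let _ = req; // execute the request here
    }
}

// "concurrent-futures": both stages are polled concurrently within one task.
async fn pipelined_concurrent_futures() {
    let (tx, rx) = mpsc::channel(32);
    tokio::join!(read_requests(tx), execute_requests(rx));
}

// "tasks": the reading stage runs on its own spawned task.
async fn pipelined_tasks() {
    let (tx, rx) = mpsc::channel(32);
    let reader = tokio::spawn(read_requests(tx));
    execute_requests(rx).await;
    reader.await.unwrap();
}

#[tokio::main]
async fn main() {
    pipelined_concurrent_futures().await;
    pipelined_tasks().await;
}

In the concurrent-futures variant both stages share a single task, so they interleave on one executor thread at a time; in the tasks variant the spawned reader can run on a different worker thread, at the cost of cross-task wakeups.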
Christian Schwarz
093674b2fb implement the serial mode 2024-11-22 09:53:08 +01:00
Christian Schwarz
0fa8ae3c0a WIP refactor to allow truly serial mode 2024-11-22 09:47:49 +01:00
Christian Schwarz
c1040bc25d task-based mode 2024-11-22 09:36:45 +01:00
Christian Schwarz
a3d1cf636b config changes to express pipelining config (not respected yet) 2024-11-22 08:36:17 +01:00
Christian Schwarz
89d9d16130 cherry-pick from problame/batching-benchmark while it's waiting for merge 2024-11-22 08:17:30 +01:00
Christian Schwarz
88fd8aed52 watch-based approach 2024-11-21 23:03:21 +01:00
Christian Schwarz
db9093f938 revert back to 'span fixes' commit 2024-11-21 22:07:05 +01:00
Christian Schwarz
240e48df59 improvements 2024-11-21 21:57:53 +01:00
Christian Schwarz
7680aa12a8 draft 2024-11-21 21:34:58 +01:00
Christian Schwarz
56de07154e fruitless debugging 2024-11-21 20:46:56 +01:00
Christian Schwarz
73046fdf5b span fixes 2024-11-21 20:21:55 +01:00
Christian Schwarz
408bc8fc71 cleanups 2024-11-21 19:42:43 +01:00
Christian Schwarz
345f8b6c3b fix ready_for_next_batch order 2024-11-21 19:11:57 +01:00
Christian Schwarz
aa1032aeff no need for cancel & ctx in pagestream_do_batch 2024-11-21 18:40:22 +01:00
Christian Schwarz
a1bb2e7bb0 WIP: pipelined batching 2024-11-21 18:33:34 +01:00
Christian Schwarz
09e7485004 Merge branch 'problame/merge-getpage-test' into problame/batching-timer 2024-11-21 11:28:12 +01:00
Christian Schwarz
058b35f884 Merge branch 'problame/batching-benchmark' into problame/merge-getpage-test 2024-11-21 11:27:16 +01:00
Christian Schwarz
ff0aa152f1 Merge remote-tracking branch 'origin/main' into problame/batching-benchmark 2024-11-21 11:25:23 +01:00
Christian Schwarz
3375f28990 pytest.approx; https://github.com/neondatabase/neon/pull/9820#discussion_r1850679974 2024-11-21 11:21:50 +01:00
Christian Schwarz
e82deb2ccc high-resolution CPU usage 2024-11-21 11:16:00 +01:00
Christian Schwarz
fa7ce2ca07 the final choice: async-timer 1.0beta15 with features=["tokio1"] 2024-11-21 11:15:02 +01:00
Christian Schwarz
89b6cb8eba Revert "vanilla tokio based timer impl based on tokio::time::Sleep"
This reverts commit 517dda849f.
2024-11-20 20:17:49 +01:00
Christian Schwarz
c68661dfb3 Revert "undo local modifications to benchmark"
This reverts commit 7be13bc5a6.
2024-11-20 19:53:06 +01:00
Christian Schwarz
517dda849f vanilla tokio based timer impl based on tokio::time::Sleep 2024-11-20 19:52:47 +01:00
Christian Schwarz
f22ad868cf Revert "tokio_timerfd::Delay based impl"
This reverts commit fcda7a72c6.
2024-11-20 19:45:37 +01:00
Christian Schwarz
fcda7a72c6 tokio_timerfd::Delay based impl
Performs just as well as the async-timer::Timer features=tokio1 impl
Makes sense because it's the same thing that's happening under the hood.

https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e004780ea9decc82281f6b8d1
2024-11-20 19:42:00 +01:00
Christian Schwarz
469ce810fc Revert "async-timer based approach (again, with data)"
This reverts commit 689788cbba.
2024-11-20 19:40:24 +01:00
Christian Schwarz
21866faa8a Revert "try async-timer 1.0.0-beta15 (still signal-based timers)"
This reverts commit c73e9e40e9.
2024-11-20 19:37:51 +01:00
Christian Schwarz
cbb5817997 Revert "async-timer 1.0.0-beta15 with features=tokio1"
This reverts commit 68550f0f50.
2024-11-20 19:37:44 +01:00
Christian Schwarz
5f3e6f398c Revert "try interval-based impl to cross-check"
This reverts commit 721643beed.
2024-11-20 18:52:55 +01:00
Christian Schwarz
721643beed try interval-based impl to cross-check
=> zero batching

https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e00478065a9b3e51726082885
2024-11-20 18:50:48 +01:00
Christian Schwarz
68550f0f50 async-timer 1.0.0-beta15 with features=tokio1
Best batching factor so far with no worse degradation of
un-batchable workloads than the other candidates.

https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e004780c0921fe99e1da0e8c9
2024-11-20 18:41:31 +01:00
Christian Schwarz
c73e9e40e9 try async-timer 1.0.0-beta15 (still signal-based timers)
Results unchanged relative to 0.7.4

https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e004780e18416cc0faf2aca65
2024-11-20 18:32:53 +01:00
Christian Schwarz
7be13bc5a6 undo local modifications to benchmark 2024-11-20 16:00:19 +01:00
Christian Schwarz
689788cbba async-timer based approach (again, with data)
Yep, it's clearly the best one, with the best batching factor at the lowest CPU usage.

https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e004780d0a205e081458b46db
2024-11-20 15:36:10 +01:00
Christian Schwarz
f9bf038d2c Revert "tokio_timerfd::Interval"
This reverts commit 12124b28d0.
2024-11-20 15:25:52 +01:00
Christian Schwarz
12124b28d0 tokio_timerfd::Interval
Resolution not high enough to do _any_ batching at 10us or 20us

https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e0047800fb74bd8f4ab6cf8e2
2024-11-20 15:25:14 +01:00
Christian Schwarz
1d85bec0ea Revert "tokio::time::Interval based approach"
This reverts commit 81d99704ee.
2024-11-20 15:13:26 +01:00
Christian Schwarz
81d99704ee tokio::time::Interval based approach
batching at 10us doesn't work well enough; probably the future is ready
too soon. The batching factor is just 1.5.

https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e004780b79c8dd6d007dbb120
2024-11-20 15:13:11 +01:00
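A plausible mechanism for "ready too soon" (inferred from tokio's documented behavior, not stated in the commit): tokio::time::interval completes its first tick immediately, and its default missed-tick behavior fires delayed ticks in a burst, so with a 10us period the flush future is ready more or less as soon as the batch opens, leaving no window for requests to accumulate.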
Christian Schwarz
f3ed5692ea Revert "async-timer based approach"
This reverts commit 1639b26002.
2024-11-20 14:49:09 +01:00
Christian Schwarz
1639b26002 async-timer based approach
With this, a 10us batching timeout works, but it has some other wrinkles:

- it uses the signal-based timer APIs instead of going through epoll (=> timerfd)
=> it needs to make a syscall for each batch, which costs around 1-2us, so probably significant CPU time is wasted on this.
2024-11-20 14:49:01 +01:00
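The loop all of these timer experiments are tuning can be sketched roughly as follows (a minimal illustration with assumed names and a plain tokio deadline, not the pageserver's actual implementation): accumulate requests until the batch is full or a per-batch deadline expires, then flush. The timer crate chosen above only changes how precisely that deadline fires.

use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::{timeout_at, Instant};

// Flush a batch when it is full or when the batching deadline fires,
// whichever comes first.
async fn batcher(mut rx: mpsc::Receiver<u64>, max_batch_size: usize, batch_timeout: Duration) {
    while let Some(first) = rx.recv().await {
        let mut batch = vec![first];
        let deadline = Instant::now() + batch_timeout;
        while batch.len() < max_batch_size {
            match timeout_at(deadline, rx.recv()).await {
                Ok(Some(req)) => batch.push(req),
                Ok(None) => break,      // all senders dropped
                Err(_elapsed) => break, // deadline hit: flush what we have
            }
        }
        println!("flushing batch of {} requests", batch.len());
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(128);
    tokio::spawn(async move {
        for i in 0..10u64 {
            let _ = tx.send(i).await;
        }
    });
    batcher(rx, 4, Duration::from_micros(10)).await;
}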
Christian Schwarz
af95320a8c Revert "Revert "switch back to tokio::time::sleep, to get the numbers""
This reverts commit aa695b2ad7.
2024-11-20 14:25:05 +01:00
Christian Schwarz
b299eb19e2 fixup whitespace stuff 2024-11-20 14:23:55 +01:00
Christian Schwarz
88d52b31b7 Merge branch 'problame/batching-benchmark' into problame/merge-getpage-test 2024-11-20 14:23:22 +01:00
Christian Schwarz
aa695b2ad7 Revert "switch back to tokio::time::sleep, to get the numbers"
This reverts commit b9746168ff.
2024-11-20 14:22:31 +01:00
Christian Schwarz
b695907752 page_service: add benchmark for batching
This PR adds a benchmark to demonstrate the effect of server-side
getpage request batching added in https://github.com/neondatabase/neon/pull/9321.

Refs:

- Epic: https://github.com/neondatabase/neon/issues/9376
- Extracted from https://github.com/neondatabase/neon/pull/9792
2024-11-20 14:18:42 +01:00
Christian Schwarz
75041cb61b bench fixups 2024-11-20 13:59:21 +01:00
Christian Schwarz
e80ce970f7 collect CPU utilization 2024-11-20 13:55:05 +01:00
Christian Schwarz
f2de5b504f make it a proper benchmark 2024-11-20 13:52:05 +01:00
Christian Schwarz
b9746168ff switch back to tokio::time::sleep, to get the numbers
=> https://www.notion.so/neondatabase/benchmarking-notes-143f189e004780c4a630cb5f426e39ba?pvs=4#144f189e00478054b8a3e325735ffa19

=> unacceptable
2024-11-20 12:50:29 +01:00
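For context on "unacceptable" (an inference from tokio's docs, not something this commit states): tokio's timer wheel has a granularity of roughly one millisecond, so a tokio::time::sleep of 10us actually fires around a millisecond later. The batch window then stays open about 100x longer than intended, which is consistent with the numbers linked above and is why the timerfd- and signal-based timers in the surrounding commits were tried.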
Christian Schwarz
5cc0059088 parametrize more test 2024-11-20 12:48:47 +01:00
Christian Schwarz
911946a3cd fixes 2024-11-19 19:01:55 +01:00
Christian Schwarz
61ff84a3a2 compiles 2024-11-19 18:40:56 +01:00
Christian Schwarz
15e21c714b got it working and turn it more into a benchmark 2024-11-18 23:57:14 +01:00
Christian Schwarz
0689965282 WIP: page_service: add basic testcase for merging
The steps in the test work in neon_local + psql,
but for some reason they don't work in the test.

Asked compute team on Slack for help:
https://neondb.slack.com/archives/C04DGM6SMTM/p1731952688386789
2024-11-18 23:57:14 +01:00
181 changed files with 2850 additions and 6895 deletions

View File

@@ -249,7 +249,7 @@ jobs:
# Post both success and failure to the Slack channel
- name: Post to a Slack channel
if: ${{ github.event.schedule && !cancelled() }}
if: ${{ github.event.schedule }}
uses: slackapi/slack-github-action@v1
with:
channel-id: "C06T9AMNDQQ" # on-call-compute-staging-stream

View File

@@ -669,7 +669,7 @@ jobs:
neondatabase/compute-node-${{ matrix.version.pg }}:${{ needs.tag.outputs.build-tag }}-${{ matrix.version.debian }}-${{ matrix.arch }}
- name: Build neon extensions test image
if: matrix.version.pg >= 'v16'
if: matrix.version.pg == 'v16'
uses: docker/build-push-action@v6
with:
context: .
@@ -684,7 +684,8 @@ jobs:
pull: true
file: compute/compute-node.Dockerfile
target: neon-pg-ext-test
cache-from: type=registry,ref=cache.neon.build/compute-node-${{ matrix.version.pg }}:cache-${{ matrix.version.debian }}-${{ matrix.arch }}
cache-from: type=registry,ref=cache.neon.build/neon-test-extensions-${{ matrix.version.pg }}:cache-${{ matrix.version.debian }}-${{ matrix.arch }}
cache-to: ${{ github.ref_name == 'main' && format('type=registry,ref=cache.neon.build/neon-test-extensions-{0}:cache-{1}-{2},mode=max', matrix.version.pg, matrix.version.debian, matrix.arch) || '' }}
tags: |
neondatabase/neon-test-extensions-${{ matrix.version.pg }}:${{needs.tag.outputs.build-tag}}-${{ matrix.version.debian }}-${{ matrix.arch }}
@@ -707,7 +708,7 @@ jobs:
push: true
pull: true
file: compute/compute-node.Dockerfile
cache-from: type=registry,ref=cache.neon.build/compute-node-${{ matrix.version.pg }}:cache-${{ matrix.version.debian }}-${{ matrix.arch }}
cache-from: type=registry,ref=cache.neon.build/neon-test-extensions-${{ matrix.version.pg }}:cache-${{ matrix.version.debian }}-${{ matrix.arch }}
cache-to: ${{ github.ref_name == 'main' && format('type=registry,ref=cache.neon.build/compute-tools-{0}:cache-{1}-{2},mode=max', matrix.version.pg, matrix.version.debian, matrix.arch) || '' }}
tags: |
neondatabase/compute-tools:${{ needs.tag.outputs.build-tag }}-${{ matrix.version.debian }}-${{ matrix.arch }}
@@ -743,7 +744,7 @@ jobs:
neondatabase/compute-node-${{ matrix.version.pg }}:${{ needs.tag.outputs.build-tag }}-${{ matrix.version.debian }}-arm64
- name: Create multi-arch neon-test-extensions image
if: matrix.version.pg >= 'v16'
if: matrix.version.pg == 'v16'
run: |
docker buildx imagetools create -t neondatabase/neon-test-extensions-${{ matrix.version.pg }}:${{ needs.tag.outputs.build-tag }} \
-t neondatabase/neon-test-extensions-${{ matrix.version.pg }}:${{ needs.tag.outputs.build-tag }}-${{ matrix.version.debian }} \
@@ -832,7 +833,6 @@ jobs:
fail-fast: false
matrix:
arch: [ x64, arm64 ]
pg_version: [v16, v17]
runs-on: ${{ fromJson(format('["self-hosted", "{0}"]', matrix.arch == 'arm64' && 'small-arm64' || 'small')) }}
@@ -871,10 +871,7 @@ jobs:
- name: Verify docker-compose example and test extensions
timeout-minutes: 20
env:
TAG: ${{needs.tag.outputs.build-tag}}
TEST_VERSION_ONLY: ${{ matrix.pg_version }}
run: ./docker-compose/docker_compose_test.sh
run: env TAG=${{needs.tag.outputs.build-tag}} ./docker-compose/docker_compose_test.sh
- name: Print logs and clean up
if: always()

Cargo.lock (generated): 271 changed lines

View File

@@ -185,7 +185,7 @@ checksum = "965c2d33e53cb6b267e148a4cb0760bc01f4904c1cd4bb4002a085bb016d1490"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
"synstructure",
]
@@ -197,7 +197,7 @@ checksum = "7b18050c2cd6fe86c3a76584ef5e0baf286d038cda203eb6223df2cc413565f7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -256,7 +256,7 @@ checksum = "16e62a023e7c117e27523144c5d2459f4397fcc3cab0085af8e2224f643a0193"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -267,7 +267,7 @@ checksum = "b9ccdd8f2a161be9bd5c023df56f1b2a0bd1d83872ae53b71a84a12c9bf6e842"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -301,7 +301,7 @@ dependencies = [
"aws-smithy-types",
"aws-types",
"bytes",
"fastrand 2.2.0",
"fastrand 2.0.0",
"hex",
"http 0.2.9",
"hyper 0.14.30",
@@ -341,7 +341,7 @@ dependencies = [
"aws-smithy-types",
"aws-types",
"bytes",
"fastrand 2.2.0",
"fastrand 2.0.0",
"http 0.2.9",
"http-body 0.4.5",
"once_cell",
@@ -417,7 +417,7 @@ dependencies = [
"aws-smithy-xml",
"aws-types",
"bytes",
"fastrand 2.2.0",
"fastrand 2.0.0",
"hex",
"hmac",
"http 0.2.9",
@@ -621,7 +621,7 @@ dependencies = [
"aws-smithy-runtime-api",
"aws-smithy-types",
"bytes",
"fastrand 2.2.0",
"fastrand 2.0.0",
"h2 0.3.26",
"http 0.2.9",
"http-body 0.4.5",
@@ -969,7 +969,7 @@ dependencies = [
"regex",
"rustc-hash",
"shlex",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1198,7 +1198,7 @@ dependencies = [
"heck 0.4.1",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1615,7 +1615,7 @@ dependencies = [
"proc-macro2",
"quote",
"strsim",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1626,7 +1626,7 @@ checksum = "29a358ff9f12ec09c3e61fef9b5a9902623a695a46a917b07f269bff1445611a"
dependencies = [
"darling_core",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1749,7 +1749,7 @@ dependencies = [
"dsl_auto_type",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1769,7 +1769,7 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "209c735641a413bc68c4923a9d6ad4bcb3ca306b794edaa7eb0b3228a99ffb25"
dependencies = [
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1792,7 +1792,7 @@ checksum = "487585f4d0c6655fe74905e2504d8ad6908e4db67f744eb140876906c2f3175d"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1815,7 +1815,7 @@ dependencies = [
"heck 0.5.0",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1947,7 +1947,7 @@ dependencies = [
"darling",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -1980,7 +1980,7 @@ checksum = "3bf679796c0322556351f287a51b49e48f7c4986e727b5dd78c972d30e2e16cc"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -2054,9 +2054,9 @@ dependencies = [
[[package]]
name = "fastrand"
version = "2.2.0"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "486f806e73c5707928240ddc295403b1b93c96a02038563881c4a2fd84b81ac4"
checksum = "6999dc1837253364c2ebb0704ba97994bd874e8f195d665c50b7548f6ea92764"
[[package]]
name = "ff"
@@ -2234,7 +2234,7 @@ checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -2337,7 +2337,7 @@ checksum = "53010ccb100b96a67bc32c0175f0ed1426b31b655d562898e57325f81c023ac0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -2912,23 +2912,6 @@ version = "1.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49f1f14873335454500d59611f1cf4a4b0f786f9ac11f4312a78e4cf2566695b"
[[package]]
name = "jemalloc_pprof"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a883828bd6a4b957cd9f618886ff19e5f3ebd34e06ba0e855849e049fef32fb"
dependencies = [
"anyhow",
"libc",
"mappings",
"once_cell",
"pprof_util",
"tempfile",
"tikv-jemalloc-ctl",
"tokio",
"tracing",
]
[[package]]
name = "jobserver"
version = "0.1.32"
@@ -3039,9 +3022,9 @@ dependencies = [
[[package]]
name = "libc"
version = "0.2.167"
version = "0.2.150"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09d6582e104315a817dff97f75133544b2e094ee22447d2acf4a74e189ba06fc"
checksum = "89d92a4743f9a61002fae18374ed11e7973f530cb3a3255fb354818118b2203c"
[[package]]
name = "libloading"
@@ -3061,9 +3044,9 @@ checksum = "4ec2a862134d2a7d32d7983ddcdd1c4923530833c9f2ea1a44fc5fa473989058"
[[package]]
name = "linux-raw-sys"
version = "0.4.14"
version = "0.4.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78b3ae25bc7c8c38cec158d1f2757ee79e9b3740fbc7ccf0e59e4b08d793fa89"
checksum = "01cda141df6706de531b6c46c3a33ecca755538219bd484262fa09410c13539c"
[[package]]
name = "linux-raw-sys"
@@ -3096,19 +3079,6 @@ dependencies = [
"hashbrown 0.14.5",
]
[[package]]
name = "mappings"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce9229c438fbf1c333926e2053c4c091feabbd40a1b590ec62710fea2384af9e"
dependencies = [
"anyhow",
"libc",
"once_cell",
"pprof_util",
"tracing",
]
[[package]]
name = "matchers"
version = "0.1.0"
@@ -3172,7 +3142,7 @@ dependencies = [
"heck 0.5.0",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -3376,7 +3346,6 @@ version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b05180d69e3da0e530ba2a1dae5110317e49e3b7f3d41be227dc5f92e49ee7af"
dependencies = [
"num-bigint",
"num-complex",
"num-integer",
"num-iter",
@@ -3465,7 +3434,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0638a1c9d0a3c0914158145bc76cff373a75a627e6ecbfb71cbe6f453a5a19b0"
dependencies = [
"autocfg",
"num-bigint",
"num-integer",
"num-traits",
]
@@ -3529,9 +3497,9 @@ dependencies = [
[[package]]
name = "once_cell"
version = "1.20.2"
version = "1.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1261fe7e33c73b354eab43b1273a57c8f967d0391e80353e51f764ac02cf6775"
checksum = "dd8b5dd2ae5ed71462c540258bedcb51965123ad7e7ccf4b9a8cafaa4a63576d"
[[package]]
name = "oorandom"
@@ -3547,9 +3515,9 @@ checksum = "ff011a302c396a5197692431fc1948019154afc178baf7d8e37367442a4601cf"
[[package]]
name = "opentelemetry"
version = "0.26.0"
version = "0.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "570074cc999d1a58184080966e5bd3bf3a9a4af650c3b05047c2621e7405cd17"
checksum = "4c365a63eec4f55b7efeceb724f1336f26a9cf3427b70e59e2cd2a5b947fba96"
dependencies = [
"futures-core",
"futures-sink",
@@ -3561,9 +3529,9 @@ dependencies = [
[[package]]
name = "opentelemetry-http"
version = "0.26.0"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6351496aeaa49d7c267fb480678d85d1cd30c5edb20b497c48c56f62a8c14b99"
checksum = "ad31e9de44ee3538fb9d64fe3376c1362f406162434609e79aea2a41a0af78ab"
dependencies = [
"async-trait",
"bytes",
@@ -3574,9 +3542,9 @@ dependencies = [
[[package]]
name = "opentelemetry-otlp"
version = "0.26.0"
version = "0.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "29e1f9c8b032d4f635c730c0efcf731d5e2530ea13fa8bef7939ddc8420696bd"
checksum = "6b925a602ffb916fb7421276b86756027b37ee708f9dce2dbdcc51739f07e727"
dependencies = [
"async-trait",
"futures-core",
@@ -3592,9 +3560,9 @@ dependencies = [
[[package]]
name = "opentelemetry-proto"
version = "0.26.1"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9d3968ce3aefdcca5c27e3c4ea4391b37547726a70893aab52d3de95d5f8b34"
checksum = "30ee9f20bff9c984511a02f082dc8ede839e4a9bf15cc2487c8d6fea5ad850d9"
dependencies = [
"opentelemetry",
"opentelemetry_sdk",
@@ -3604,15 +3572,15 @@ dependencies = [
[[package]]
name = "opentelemetry-semantic-conventions"
version = "0.26.0"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db945c1eaea8ac6a9677185357480d215bb6999faa9f691d0c4d4d641eab7a09"
checksum = "1cefe0543875379e47eb5f1e68ff83f45cc41366a92dfd0d073d513bf68e9a05"
[[package]]
name = "opentelemetry_sdk"
version = "0.26.0"
version = "0.24.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2c627d9f4c9cdc1f21a29ee4bfbd6028fcb8bcf2a857b43f3abdf72c9c862f3"
checksum = "692eac490ec80f24a17828d49b40b60f5aeaccdfe6a503f939713afd22bc28df"
dependencies = [
"async-trait",
"futures-channel",
@@ -3986,7 +3954,7 @@ dependencies = [
"parquet",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -4088,7 +4056,7 @@ checksum = "39407670928234ebc5e6e580247dd567ad73a3578460c5990f9503df207e8f07"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -4170,8 +4138,8 @@ dependencies = [
[[package]]
name = "postgres"
version = "0.19.9"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon-rebase1#a88729d425754cb64423aa7140caac9f1cc9bb2a"
version = "0.19.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#00940fcdb57a8e99e805297b75839e7c4c7b1796"
dependencies = [
"bytes",
"fallible-iterator",
@@ -4183,10 +4151,10 @@ dependencies = [
[[package]]
name = "postgres-protocol"
version = "0.6.7"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon-rebase1#a88729d425754cb64423aa7140caac9f1cc9bb2a"
version = "0.6.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#00940fcdb57a8e99e805297b75839e7c4c7b1796"
dependencies = [
"base64 0.22.1",
"base64 0.20.0",
"byteorder",
"bytes",
"fallible-iterator",
@@ -4197,6 +4165,7 @@ dependencies = [
"rand 0.8.5",
"sha2",
"stringprep",
"tokio",
]
[[package]]
@@ -4208,6 +4177,7 @@ dependencies = [
"bytes",
"fallible-iterator",
"hmac",
"md-5",
"memchr",
"rand 0.8.5",
"sha2",
@@ -4217,8 +4187,8 @@ dependencies = [
[[package]]
name = "postgres-types"
version = "0.2.8"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon-rebase1#a88729d425754cb64423aa7140caac9f1cc9bb2a"
version = "0.2.4"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#00940fcdb57a8e99e805297b75839e7c4c7b1796"
dependencies = [
"bytes",
"fallible-iterator",
@@ -4328,19 +4298,6 @@ dependencies = [
"thiserror",
]
[[package]]
name = "pprof_util"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "65c568b3f8c1c37886ae07459b1946249e725c315306b03be5632f84c239f781"
dependencies = [
"anyhow",
"flate2",
"num",
"paste",
"prost",
]
[[package]]
name = "ppv-lite86"
version = "0.2.17"
@@ -4377,7 +4334,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d3928fb5db768cb86f891ff014f0144589297e3c6a1aba6ed7cecfdace270c7"
dependencies = [
"proc-macro2",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -4391,9 +4348,9 @@ dependencies = [
[[package]]
name = "proc-macro2"
version = "1.0.92"
version = "1.0.78"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37d3544b3f2748c54e147655edb5025752e2303145b5aefb3c3ea2c78b973bb0"
checksum = "e2422ad645d89c99f8f3e6b88a9fdeca7fabeac836b1002371c4367c8f984aae"
dependencies = [
"unicode-ident",
]
@@ -4467,7 +4424,7 @@ dependencies = [
"prost",
"prost-types",
"regex",
"syn 2.0.90",
"syn 2.0.52",
"tempfile",
]
@@ -4481,7 +4438,7 @@ dependencies = [
"itertools 0.12.1",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -4544,7 +4501,6 @@ dependencies = [
"ecdsa 0.16.9",
"env_logger",
"fallible-iterator",
"flate2",
"framed-websockets",
"futures",
"hashbrown 0.14.5",
@@ -4610,7 +4566,6 @@ dependencies = [
"tikv-jemalloc-ctl",
"tikv-jemallocator",
"tokio",
"tokio-postgres",
"tokio-postgres2",
"tokio-rustls 0.26.0",
"tokio-tungstenite",
@@ -5036,9 +4991,9 @@ dependencies = [
[[package]]
name = "reqwest-middleware"
version = "0.4.0"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d1ccd3b55e711f91a9885a2fa6fbbb2e39db1776420b062efc058c6410f7e5e3"
checksum = "0209efb52486ad88136190094ee214759ef7507068b27992256ed6610eb71a01"
dependencies = [
"anyhow",
"async-trait",
@@ -5051,12 +5006,13 @@ dependencies = [
[[package]]
name = "reqwest-retry"
version = "0.7.0"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "29c73e4195a6bfbcb174b790d9b3407ab90646976c55de58a6515da25d851178"
checksum = "40f342894422862af74c50e1e9601cf0931accc9c6981e5eb413c46603b616b5"
dependencies = [
"anyhow",
"async-trait",
"chrono",
"futures",
"getrandom 0.2.11",
"http 1.1.0",
@@ -5065,7 +5021,6 @@ dependencies = [
"reqwest 0.12.4",
"reqwest-middleware",
"retry-policies",
"thiserror",
"tokio",
"tracing",
"wasm-timer",
@@ -5073,9 +5028,9 @@ dependencies = [
[[package]]
name = "reqwest-tracing"
version = "0.5.4"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff82cf5730a1311fb9413b0bc2b8e743e0157cd73f010ab4ec374a923873b6a2"
checksum = "bfdd9bfa64c72233d8dd99ab7883efcdefe9e16d46488ecb9228b71a2e2ceb45"
dependencies = [
"anyhow",
"async-trait",
@@ -5091,10 +5046,12 @@ dependencies = [
[[package]]
name = "retry-policies"
version = "0.4.0"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5875471e6cab2871bc150ecb8c727db5113c9338cc3354dc5ee3425b6aa40a1c"
checksum = "493b4243e32d6eedd29f9a398896e35c6943a123b55eec97dcaee98310d25810"
dependencies = [
"anyhow",
"chrono",
"rand 0.8.5",
]
@@ -5218,7 +5175,7 @@ dependencies = [
"regex",
"relative-path",
"rustc_version",
"syn 2.0.90",
"syn 2.0.52",
"unicode-ident",
]
@@ -5264,14 +5221,14 @@ dependencies = [
[[package]]
name = "rustix"
version = "0.38.41"
version = "0.38.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7f649912bc1495e167a6edee79151c84b1bad49748cb4f1f1167f459f6224f6"
checksum = "72e572a5e8ca657d7366229cdde4bd14c4eb5499a9573d4d366fe1b599daa316"
dependencies = [
"bitflags 2.4.1",
"errno",
"libc",
"linux-raw-sys 0.4.14",
"linux-raw-sys 0.4.13",
"windows-sys 0.52.0",
]
@@ -5726,7 +5683,7 @@ checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -5808,7 +5765,7 @@ dependencies = [
"darling",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -6181,7 +6138,7 @@ dependencies = [
"proc-macro2",
"quote",
"rustversion",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -6232,9 +6189,9 @@ dependencies = [
[[package]]
name = "syn"
version = "2.0.90"
version = "2.0.52"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "919d3b74a5dd0ccd15aeb8f93e7006bd9e14c295087c9896a110f490752bcf31"
checksum = "b699d15b36d1f02c3e7c69f8ffef53de37aefae075d8488d4ba1a7788d574a07"
dependencies = [
"proc-macro2",
"quote",
@@ -6264,7 +6221,7 @@ checksum = "c8af7666ab7b6390ab78131fb5b0fce11d6b7a6951602017c35fa82800708971"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -6295,13 +6252,13 @@ dependencies = [
[[package]]
name = "tempfile"
version = "3.14.0"
version = "3.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28cce251fcbc87fac86a866eeb0d6c2d536fc16d06f184bb61aeae11aa4cee0c"
checksum = "01ce4141aa927a6d1bd34a041795abd0db1cccba5d5f24b009f694bdf3a1f3fa"
dependencies = [
"cfg-if",
"fastrand 2.2.0",
"once_cell",
"fastrand 2.0.0",
"redox_syscall 0.4.1",
"rustix",
"windows-sys 0.52.0",
]
@@ -6342,27 +6299,27 @@ checksum = "78ea17a2dc368aeca6f554343ced1b1e31f76d63683fa8016e5844bd7a5144a1"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
name = "thiserror"
version = "1.0.69"
version = "1.0.57"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52"
checksum = "1e45bcbe8ed29775f228095caf2cd67af7a4ccf756ebff23a306bf3e8b47b24b"
dependencies = [
"thiserror-impl",
]
[[package]]
name = "thiserror-impl"
version = "1.0.69"
version = "1.0.57"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1"
checksum = "a953cb265bef375dae3de6663da4d3804eee9682ea80d8e2542529b73c531c81"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -6536,13 +6493,13 @@ checksum = "5f5ae998a069d4b5aba8ee9dad856af7d520c3699e6159b185c2acd48155d39a"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
name = "tokio-postgres"
version = "0.7.12"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon-rebase1#a88729d425754cb64423aa7140caac9f1cc9bb2a"
version = "0.7.7"
source = "git+https://github.com/neondatabase/rust-postgres.git?branch=neon#00940fcdb57a8e99e805297b75839e7c4c7b1796"
dependencies = [
"async-trait",
"byteorder",
@@ -6557,11 +6514,9 @@ dependencies = [
"pin-project-lite",
"postgres-protocol",
"postgres-types",
"rand 0.8.5",
"socket2",
"tokio",
"tokio-util",
"whoami",
]
[[package]]
@@ -6763,7 +6718,7 @@ dependencies = [
"prost-build",
"prost-types",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -6800,9 +6755,9 @@ checksum = "b6bc1c9ce2b5135ac7f93c72918fc37feb872bdc6a5533a8b85eb4b86bfdae52"
[[package]]
name = "tracing"
version = "0.1.41"
version = "0.1.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "784e0ac535deb450455cbfa28a6f0df145ea1bb7ae51b821cf5e7927fdcfbdd0"
checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef"
dependencies = [
"log",
"pin-project-lite",
@@ -6823,20 +6778,20 @@ dependencies = [
[[package]]
name = "tracing-attributes"
version = "0.1.28"
version = "0.1.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "395ae124c09f9e6918a2310af6038fba074bcf474ac352496d5910dd59a2226d"
checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
name = "tracing-core"
version = "0.1.33"
version = "0.1.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e672c95779cf947c5311f83787af4fa8fffd12fb27e4993211a84bdfd9610f9c"
checksum = "c06d3da6113f116aaee68e4d601191614c9053067f9ab7f6edbcb161237daa54"
dependencies = [
"once_cell",
"valuable",
@@ -6865,9 +6820,9 @@ dependencies = [
[[package]]
name = "tracing-opentelemetry"
version = "0.27.0"
version = "0.25.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc58af5d3f6c5811462cabb3289aec0093f7338e367e5a33d28c0433b3c7360b"
checksum = "a9784ed4da7d921bc8df6963f8c80a0e4ce34ba6ba76668acadd3edbd985ff3b"
dependencies = [
"js-sys",
"once_cell",
@@ -6883,9 +6838,9 @@ dependencies = [
[[package]]
name = "tracing-serde"
version = "0.2.0"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "704b1aeb7be0d0a84fc9828cae51dab5970fee5088f83d1dd7ee6f6246fc6ff1"
checksum = "bc6b213177105856957181934e4920de57730fc69bf42c37ee5bb664d406d9e1"
dependencies = [
"serde",
"tracing-core",
@@ -6893,9 +6848,9 @@ dependencies = [
[[package]]
name = "tracing-subscriber"
version = "0.3.19"
version = "0.3.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8189decb5ac0fa7bc8b96b7cb9b2701d60d48805aca84a238004d665fcc4008"
checksum = "ad0f048c97dbd9faa9b7df56362b8ebcaa52adb06b498c050d2f4e32f90a7a8b"
dependencies = [
"matchers",
"once_cell",
@@ -7104,7 +7059,6 @@ dependencies = [
"hex-literal",
"humantime",
"hyper 0.14.30",
"jemalloc_pprof",
"jsonwebtoken",
"metrics",
"nix 0.27.1",
@@ -7303,7 +7257,7 @@ dependencies = [
"once_cell",
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
"wasm-bindgen-shared",
]
@@ -7337,7 +7291,7 @@ checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
"wasm-bindgen-backend",
"wasm-bindgen-shared",
]
@@ -7691,12 +7645,8 @@ dependencies = [
"memchr",
"nix 0.26.4",
"nom",
"num",
"num-bigint",
"num-complex",
"num-integer",
"num-iter",
"num-rational",
"num-traits",
"once_cell",
"parquet",
@@ -7718,9 +7668,8 @@ dependencies = [
"smallvec",
"spki 0.7.3",
"subtle",
"syn 2.0.90",
"syn 2.0.52",
"sync_wrapper 0.1.2",
"tikv-jemalloc-ctl",
"tikv-jemalloc-sys",
"time",
"time-macros",
@@ -7819,7 +7768,7 @@ checksum = "b3c129550b3e6de3fd0ba67ba5c81818f9805e58b8d7fee80a3a59d2c9fc601a"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]
@@ -7840,7 +7789,7 @@ checksum = "ce36e65b0d2999d2aafac989fb249189a141aee1f53c612c1f37d72631959f69"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.90",
"syn 2.0.52",
]
[[package]]

View File

@@ -115,7 +115,6 @@ indoc = "2"
ipnet = "2.10.0"
itertools = "0.10"
itoa = "1.0.11"
jemalloc_pprof = "0.6"
jsonwebtoken = "9"
lasso = "0.7"
libc = "0.2"
@@ -128,10 +127,10 @@ notify = "6.0.0"
num_cpus = "1.15"
num-traits = "0.2.15"
once_cell = "1.13"
opentelemetry = "0.26"
opentelemetry_sdk = "0.26"
opentelemetry-otlp = { version = "0.26", default-features=false, features = ["http-proto", "trace", "http", "reqwest-client"] }
opentelemetry-semantic-conventions = "0.26"
opentelemetry = "0.24"
opentelemetry_sdk = "0.24"
opentelemetry-otlp = { version = "0.17", default-features=false, features = ["http-proto", "trace", "http", "reqwest-client"] }
opentelemetry-semantic-conventions = "0.16"
parking_lot = "0.12"
parquet = { version = "53", default-features = false, features = ["zstd"] }
parquet_derive = "53"
@@ -145,9 +144,9 @@ rand = "0.8"
redis = { version = "0.25.2", features = ["tokio-rustls-comp", "keep-alive"] }
regex = "1.10.2"
reqwest = { version = "0.12", default-features = false, features = ["rustls-tls"] }
reqwest-tracing = { version = "0.5", features = ["opentelemetry_0_26"] }
reqwest-middleware = "0.4"
reqwest-retry = "0.7"
reqwest-tracing = { version = "0.5", features = ["opentelemetry_0_24"] }
reqwest-middleware = "0.3.0"
reqwest-retry = "0.5"
routerify = "3"
rpds = "0.13"
rustc-hash = "1.1.0"
@@ -176,7 +175,7 @@ sync_wrapper = "0.1.2"
tar = "0.4"
test-context = "0.3"
thiserror = "1.0"
tikv-jemallocator = { version = "0.6", features = ["profiling", "stats", "unprefixed_malloc_on_supported_platforms"] }
tikv-jemallocator = { version = "0.6", features = ["stats"] }
tikv-jemalloc-ctl = { version = "0.6", features = ["stats"] }
tokio = { version = "1.17", features = ["macros"] }
tokio-epoll-uring = { git = "https://github.com/neondatabase/tokio-epoll-uring.git" , branch = "main" }
@@ -192,7 +191,7 @@ tonic = {version = "0.12.3", features = ["tls", "tls-roots"]}
tower-service = "0.3.2"
tracing = "0.1"
tracing-error = "0.2"
tracing-opentelemetry = "0.27"
tracing-opentelemetry = "0.25"
tracing-subscriber = { version = "0.3", default-features = false, features = ["smallvec", "fmt", "tracing-log", "std", "env-filter", "json"] }
try-lock = "0.2.5"
twox-hash = { version = "1.6.3", default-features = false }
@@ -211,10 +210,10 @@ env_logger = "0.10"
log = "0.4"
## Libraries from neondatabase/ git forks, ideally with changes to be upstreamed
postgres = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon-rebase1" }
postgres-protocol = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon-rebase1" }
postgres-types = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon-rebase1" }
tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon-rebase1" }
postgres = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon" }
postgres-protocol = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon" }
postgres-types = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon" }
tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon" }
## Local libraries
compute_api = { version = "0.1", path = "./libs/compute_api/" }
@@ -254,7 +253,7 @@ tonic-build = "0.12"
[patch.crates-io]
# Needed to get `tokio-postgres-rustls` to depend on our fork.
tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon-rebase1" }
tokio-postgres = { git = "https://github.com/neondatabase/rust-postgres.git", branch = "neon" }
################# Binary contents sections

View File

@@ -14,9 +14,6 @@ ARG DEBIAN_FLAVOR=${DEBIAN_VERSION}-slim
FROM debian:$DEBIAN_FLAVOR AS build-deps
ARG DEBIAN_VERSION
# Use strict mode for bash to catch errors early
SHELL ["/bin/bash", "-euo", "pipefail", "-c"]
RUN case $DEBIAN_VERSION in \
# Version-specific installs for Bullseye (PG14-PG16):
# The h3_pg extension needs cmake 3.20+, but Debian bullseye has 3.18.
@@ -109,7 +106,6 @@ RUN cd postgres && \
#
#########################################################################################
FROM build-deps AS postgis-build
ARG DEBIAN_VERSION
ARG PG_VERSION
COPY --from=pg-build /usr/local/pgsql/ /usr/local/pgsql/
RUN apt update && \
@@ -126,12 +122,12 @@ RUN apt update && \
# and we must also check backward compatibility with older versions of PostGIS.
#
# Use new version only for v17
RUN case "${DEBIAN_VERSION}" in \
"bookworm") \
RUN case "${PG_VERSION}" in \
"v17") \
export SFCGAL_VERSION=1.4.1 \
export SFCGAL_CHECKSUM=1800c8a26241588f11cddcf433049e9b9aea902e923414d2ecef33a3295626c3 \
;; \
"bullseye") \
"v14" | "v15" | "v16") \
export SFCGAL_VERSION=1.3.10 \
export SFCGAL_CHECKSUM=4e39b3b2adada6254a7bdba6d297bb28e1a9835a9f879b74f37e2dab70203232 \
;; \
@@ -232,8 +228,6 @@ FROM build-deps AS plv8-build
ARG PG_VERSION
COPY --from=pg-build /usr/local/pgsql/ /usr/local/pgsql/
COPY compute/patches/plv8-3.1.10.patch /plv8-3.1.10.patch
RUN apt update && \
apt install --no-install-recommends -y ninja-build python3-dev libncurses5 binutils clang
@@ -245,6 +239,8 @@ RUN apt update && \
#
# Use new version only for v17
# because since v3.2, plv8 doesn't include plcoffee and plls extensions
ENV PLV8_TAG=v3.2.3
RUN case "${PG_VERSION}" in \
"v17") \
export PLV8_TAG=v3.2.3 \
@@ -259,9 +255,8 @@ RUN case "${PG_VERSION}" in \
git clone --recurse-submodules --depth 1 --branch ${PLV8_TAG} https://github.com/plv8/plv8.git plv8-src && \
tar -czf plv8.tar.gz --exclude .git plv8-src && \
cd plv8-src && \
if [[ "${PG_VERSION}" < "v17" ]]; then patch -p1 < /plv8-3.1.10.patch; fi && \
# generate and copy upgrade scripts
mkdir -p upgrade && ./generate_upgrade.sh ${PLV8_TAG#v} && \
mkdir -p upgrade && ./generate_upgrade.sh 3.1.10 && \
cp upgrade/* /usr/local/pgsql/share/extension/ && \
export PATH="/usr/local/pgsql/bin:$PATH" && \
make DOCKER=1 -j $(getconf _NPROCESSORS_ONLN) install && \
@@ -358,10 +353,10 @@ COPY compute/patches/pgvector.patch /pgvector.patch
# because we build the images on different machines than where we run them.
# Pass OPTFLAGS="" to remove it.
#
# vector >0.7.4 supports v17
# last release v0.8.0 - Oct 30, 2024
RUN wget https://github.com/pgvector/pgvector/archive/refs/tags/v0.8.0.tar.gz -O pgvector.tar.gz && \
echo "867a2c328d4928a5a9d6f052cd3bc78c7d60228a9b914ad32aa3db88e9de27b0 pgvector.tar.gz" | sha256sum --check && \
# vector 0.7.4 supports v17
# last release v0.7.4 - Aug 5, 2024
RUN wget https://github.com/pgvector/pgvector/archive/refs/tags/v0.7.4.tar.gz -O pgvector.tar.gz && \
echo "0341edf89b1924ae0d552f617e14fb7f8867c0194ed775bcc44fa40288642583 pgvector.tar.gz" | sha256sum --check && \
mkdir pgvector-src && cd pgvector-src && tar xzf ../pgvector.tar.gz --strip-components=1 -C . && \
patch -p1 < /pgvector.patch && \
make -j $(getconf _NPROCESSORS_ONLN) OPTFLAGS="" PG_CONFIG=/usr/local/pgsql/bin/pg_config && \
@@ -1367,12 +1362,15 @@ RUN make PG_VERSION="${PG_VERSION}" -C compute
FROM neon-pg-ext-build AS neon-pg-ext-test
ARG PG_VERSION
RUN mkdir /ext-src
RUN case "${PG_VERSION}" in "v17") \
echo "v17 extensions are not supported yet. Quit" && exit 0;; \
esac && \
mkdir /ext-src
#COPY --from=postgis-build /postgis.tar.gz /ext-src/
#COPY --from=postgis-build /sfcgal/* /usr
COPY --from=plv8-build /plv8.tar.gz /ext-src/
#COPY --from=h3-pg-build /h3-pg.tar.gz /ext-src/
COPY --from=h3-pg-build /h3-pg.tar.gz /ext-src/
COPY --from=unit-pg-build /postgresql-unit.tar.gz /ext-src/
COPY --from=vector-pg-build /pgvector.tar.gz /ext-src/
COPY --from=vector-pg-build /pgvector.patch /ext-src/
@@ -1392,7 +1390,7 @@ COPY --from=hll-pg-build /hll.tar.gz /ext-src
COPY --from=plpgsql-check-pg-build /plpgsql_check.tar.gz /ext-src
#COPY --from=timescaledb-pg-build /timescaledb.tar.gz /ext-src
COPY --from=pg-hint-plan-pg-build /pg_hint_plan.tar.gz /ext-src
COPY compute/patches/pg_hint_plan_${PG_VERSION}.patch /ext-src
COPY compute/patches/pg_hint_plan.patch /ext-src
COPY --from=pg-cron-pg-build /pg_cron.tar.gz /ext-src
COPY compute/patches/pg_cron.patch /ext-src
#COPY --from=pg-pgx-ulid-build /home/nonroot/pgx_ulid.tar.gz /ext-src
@@ -1402,23 +1400,38 @@ COPY --from=pg-roaringbitmap-pg-build /pg_roaringbitmap.tar.gz /ext-src
COPY --from=pg-semver-pg-build /pg_semver.tar.gz /ext-src
#COPY --from=pg-embedding-pg-build /home/nonroot/pg_embedding-src/ /ext-src
#COPY --from=wal2json-pg-build /wal2json_2_5.tar.gz /ext-src
#pg_anon is not yet supported for pg v17, so don't fail if nothing is found
COPY --from=pg-anon-pg-build /pg_anon.tar.g? /ext-src
COPY --from=pg-anon-pg-build /pg_anon.tar.gz /ext-src
COPY compute/patches/pg_anon.patch /ext-src
COPY --from=pg-ivm-build /pg_ivm.tar.gz /ext-src
COPY --from=pg-partman-build /pg_partman.tar.gz /ext-src
RUN cd /ext-src/ && for f in *.tar.gz; \
RUN case "${PG_VERSION}" in "v17") \
echo "v17 extensions are not supported yet. Quit" && exit 0;; \
esac && \
cd /ext-src/ && for f in *.tar.gz; \
do echo $f; dname=$(echo $f | sed 's/\.tar.*//')-src; \
rm -rf $dname; mkdir $dname; tar xzf $f --strip-components=1 -C $dname \
|| exit 1; rm -f $f; done
RUN cd /ext-src/rum-src && patch -p1 <../rum.patch
RUN cd /ext-src/pgvector-src && patch -p1 <../pgvector.patch
RUN cd /ext-src/pg_hint_plan-src && patch -p1 < /ext-src/pg_hint_plan_${PG_VERSION}.patch
RUN case "${PG_VERSION}" in "v17") \
echo "v17 extensions are not supported yet. Quit" && exit 0;; \
esac && \
cd /ext-src/rum-src && patch -p1 <../rum.patch
RUN case "${PG_VERSION}" in "v17") \
echo "v17 extensions are not supported yet. Quit" && exit 0;; \
esac && \
cd /ext-src/pgvector-src && patch -p1 <../pgvector.patch
RUN case "${PG_VERSION}" in "v17") \
echo "v17 extensions are not supported yet. Quit" && exit 0;; \
esac && \
cd /ext-src/pg_hint_plan-src && patch -p1 < /ext-src/pg_hint_plan.patch
COPY --chmod=755 docker-compose/run-tests.sh /run-tests.sh
RUN case "${PG_VERSION}" in "v17") \
echo "postgresql_anonymizer does not yet support PG17" && exit 0;; \
esac && patch -p1 </ext-src/pg_anon.patch
RUN patch -p1 </ext-src/pg_cron.patch
echo "v17 extensions are not supported yet. Quit" && exit 0;; \
esac && \
patch -p1 </ext-src/pg_anon.patch
RUN case "${PG_VERSION}" in "v17") \
echo "v17 extensions are not supported yet. Quit" && exit 0;; \
esac && \
patch -p1 </ext-src/pg_cron.patch
ENV PATH=/usr/local/pgsql/bin:$PATH
ENV PGHOST=compute
ENV PGPORT=55433

View File

@@ -1,174 +0,0 @@
diff --git a/expected/ut-A.out b/expected/ut-A.out
index e7d68a1..65a056c 100644
--- a/expected/ut-A.out
+++ b/expected/ut-A.out
@@ -9,13 +9,16 @@ SET search_path TO public;
----
-- No.A-1-1-3
CREATE EXTENSION pg_hint_plan;
+LOG: Sending request to compute_ctl: http://localhost:3080/extension_server/pg_hint_plan
-- No.A-1-2-3
DROP EXTENSION pg_hint_plan;
-- No.A-1-1-4
CREATE SCHEMA other_schema;
CREATE EXTENSION pg_hint_plan SCHEMA other_schema;
+LOG: Sending request to compute_ctl: http://localhost:3080/extension_server/pg_hint_plan
ERROR: extension "pg_hint_plan" must be installed in schema "hint_plan"
CREATE EXTENSION pg_hint_plan;
+LOG: Sending request to compute_ctl: http://localhost:3080/extension_server/pg_hint_plan
DROP SCHEMA other_schema;
----
---- No. A-5-1 comment pattern
diff --git a/expected/ut-J.out b/expected/ut-J.out
index 2fa3c70..314e929 100644
--- a/expected/ut-J.out
+++ b/expected/ut-J.out
@@ -789,38 +789,6 @@ NestLoop(st1 st2)
MergeJoin(t1 t2)
not used hint:
duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-NestLoop(st1 st2)
-MergeJoin(t1 t2)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-NestLoop(st1 st2)
-MergeJoin(t1 t2)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-NestLoop(st1 st2)
-MergeJoin(t1 t2)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-NestLoop(st1 st2)
-MergeJoin(t1 t2)
-duplication hint:
error hint:
explain_filter
diff --git a/expected/ut-S.out b/expected/ut-S.out
index 0bfcfb8..e75f581 100644
--- a/expected/ut-S.out
+++ b/expected/ut-S.out
@@ -4415,34 +4415,6 @@ used hint:
IndexScan(ti1 ti1_pred)
not used hint:
duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(ti1 ti1_pred)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(ti1 ti1_pred)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(ti1 ti1_pred)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(ti1 ti1_pred)
-duplication hint:
error hint:
explain_filter
diff --git a/expected/ut-W.out b/expected/ut-W.out
index a09bd34..0ad227c 100644
--- a/expected/ut-W.out
+++ b/expected/ut-W.out
@@ -1341,54 +1341,6 @@ IndexScan(ft1)
IndexScan(t)
Parallel(s1 3 hard)
duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(*VALUES*)
-SeqScan(cte1)
-IndexScan(ft1)
-IndexScan(t)
-Parallel(p1 5 hard)
-Parallel(s1 3 hard)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(*VALUES*)
-SeqScan(cte1)
-IndexScan(ft1)
-IndexScan(t)
-Parallel(p1 5 hard)
-Parallel(s1 3 hard)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(*VALUES*)
-SeqScan(cte1)
-IndexScan(ft1)
-IndexScan(t)
-Parallel(p1 5 hard)
-Parallel(s1 3 hard)
-duplication hint:
-error hint:
-
-LOG: pg_hint_plan:
-used hint:
-not used hint:
-IndexScan(*VALUES*)
-SeqScan(cte1)
-IndexScan(ft1)
-IndexScan(t)
-Parallel(p1 5 hard)
-Parallel(s1 3 hard)
-duplication hint:
error hint:
explain_filter
diff --git a/expected/ut-fdw.out b/expected/ut-fdw.out
index 017fa4b..98d989b 100644
--- a/expected/ut-fdw.out
+++ b/expected/ut-fdw.out
@@ -7,6 +7,7 @@ SET pg_hint_plan.debug_print TO on;
SET client_min_messages TO LOG;
SET pg_hint_plan.enable_hint TO on;
CREATE EXTENSION file_fdw;
+LOG: Sending request to compute_ctl: http://localhost:3080/extension_server/file_fdw
CREATE SERVER file_server FOREIGN DATA WRAPPER file_fdw;
CREATE USER MAPPING FOR PUBLIC SERVER file_server;
CREATE FOREIGN TABLE ft1 (id int, val int) SERVER file_server OPTIONS (format 'csv', filename :'filename');

View File

@@ -1,42 +0,0 @@
commit 46b38d3e46f9cd6c70d9b189dd6ff4abaa17cf5e
Author: Alexander Bayandin <alexander@neon.tech>
Date: Sat Nov 30 18:29:32 2024 +0000
Fix v8 9.7.37 compilation on Debian 12
diff --git a/patches/code/84cf3230a9680aac3b73c410c2b758760b6d3066.patch b/patches/code/84cf3230a9680aac3b73c410c2b758760b6d3066.patch
new file mode 100644
index 0000000..f0a5dc7
--- /dev/null
+++ b/patches/code/84cf3230a9680aac3b73c410c2b758760b6d3066.patch
@@ -0,0 +1,30 @@
+From 84cf3230a9680aac3b73c410c2b758760b6d3066 Mon Sep 17 00:00:00 2001
+From: Michael Lippautz <mlippautz@chromium.org>
+Date: Thu, 27 Jan 2022 14:14:11 +0100
+Subject: [PATCH] cppgc: Fix include
+
+Add <utility> to cover for std::exchange.
+
+Bug: v8:12585
+Change-Id: Ida65144e93e466be8914527d0e646f348c136bcb
+Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/3420309
+Auto-Submit: Michael Lippautz <mlippautz@chromium.org>
+Reviewed-by: Omer Katz <omerkatz@chromium.org>
+Commit-Queue: Michael Lippautz <mlippautz@chromium.org>
+Cr-Commit-Position: refs/heads/main@{#78820}
+---
+ src/heap/cppgc/prefinalizer-handler.h | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/heap/cppgc/prefinalizer-handler.h b/src/heap/cppgc/prefinalizer-handler.h
+index bc17c99b1838..c82c91ff5a45 100644
+--- a/src/heap/cppgc/prefinalizer-handler.h
++++ b/src/heap/cppgc/prefinalizer-handler.h
+@@ -5,6 +5,7 @@
+ #ifndef V8_HEAP_CPPGC_PREFINALIZER_HANDLER_H_
+ #define V8_HEAP_CPPGC_PREFINALIZER_HANDLER_H_
+
++#include <utility>
+ #include <vector>
+
+ #include "include/cppgc/prefinalizer.h"

View File

@@ -335,7 +335,6 @@ fn wait_spec(
pgdata: pgdata.to_string(),
pgbin: pgbin.to_string(),
pgversion: get_pg_version_string(pgbin),
http_port,
live_config_allowed,
state: Mutex::new(new_state),
state_changed: Condvar::new(),
@@ -390,6 +389,7 @@ fn wait_spec(
Ok(WaitSpecResult {
compute,
http_port,
resize_swap_on_bind,
set_disk_quota_for_fs: set_disk_quota_for_fs.cloned(),
})
@@ -397,6 +397,8 @@ fn wait_spec(
struct WaitSpecResult {
compute: Arc<ComputeNode>,
// passed through from ProcessCliResult
http_port: u16,
resize_swap_on_bind: bool,
set_disk_quota_for_fs: Option<String>,
}
@@ -406,6 +408,7 @@ fn start_postgres(
#[allow(unused_variables)] matches: &clap::ArgMatches,
WaitSpecResult {
compute,
http_port,
resize_swap_on_bind,
set_disk_quota_for_fs,
}: WaitSpecResult,
@@ -478,10 +481,12 @@ fn start_postgres(
}
}
let extension_server_port: u16 = http_port;
// Start Postgres
let mut pg = None;
if !prestartup_failed {
pg = match compute.start_compute() {
pg = match compute.start_compute(extension_server_port) {
Ok(pg) => Some(pg),
Err(err) => {
error!("could not start the compute node: {:#}", err);

View File

@@ -30,8 +30,7 @@ pub async fn check_writability(compute: &ComputeNode) -> Result<()> {
match client.simple_query(query).await {
Result::Ok(result) => {
// one extra item for row description
if result.len() != 2 {
if result.len() != 1 {
return Err(anyhow::anyhow!(
"expected 1 query results, but got {}",
result.len()

View File

@@ -79,8 +79,6 @@ pub struct ComputeNode {
/// - we push spec and it does configuration
/// - but then it is restarted without any spec again
pub live_config_allowed: bool,
/// The port that the compute's HTTP server listens on
pub http_port: u16,
/// Volatile part of the `ComputeNode`, which should be used under `Mutex`.
/// To allow the HTTP API server to serve status requests while configuration
/// is in progress, the lock should be held only for short periods of time to do
@@ -613,7 +611,11 @@ impl ComputeNode {
/// Do all the preparations like PGDATA directory creation, configuration,
/// safekeepers sync, basebackup, etc.
#[instrument(skip_all)]
pub fn prepare_pgdata(&self, compute_state: &ComputeState) -> Result<()> {
pub fn prepare_pgdata(
&self,
compute_state: &ComputeState,
extension_server_port: u16,
) -> Result<()> {
let pspec = compute_state.pspec.as_ref().expect("spec must be set");
let spec = &pspec.spec;
let pgdata_path = Path::new(&self.pgdata);
@@ -623,7 +625,7 @@ impl ComputeNode {
config::write_postgres_conf(
&pgdata_path.join("postgresql.conf"),
&pspec.spec,
self.http_port,
Some(extension_server_port),
)?;
// Syncing safekeepers is only safe with primary nodes: if a primary
@@ -1241,7 +1243,7 @@ impl ComputeNode {
// Write new config
let pgdata_path = Path::new(&self.pgdata);
let postgresql_conf_path = pgdata_path.join("postgresql.conf");
config::write_postgres_conf(&postgresql_conf_path, &spec, self.http_port)?;
config::write_postgres_conf(&postgresql_conf_path, &spec, None)?;
// TODO(ololobus): We need concurrency during reconfiguration as well,
// but the DB is already running and used by the user. We can easily get out of
@@ -1282,7 +1284,10 @@ impl ComputeNode {
}
#[instrument(skip_all)]
pub fn start_compute(&self) -> Result<(std::process::Child, std::thread::JoinHandle<()>)> {
pub fn start_compute(
&self,
extension_server_port: u16,
) -> Result<(std::process::Child, std::thread::JoinHandle<()>)> {
let compute_state = self.state.lock().unwrap().clone();
let pspec = compute_state.pspec.as_ref().expect("spec must be set");
info!(
@@ -1357,7 +1362,7 @@ impl ComputeNode {
info!("{:?}", remote_ext_metrics);
}
self.prepare_pgdata(&compute_state)?;
self.prepare_pgdata(&compute_state, extension_server_port)?;
let start_time = Utc::now();
let pg_process = self.start_postgres(pspec.storage_auth_token.clone())?;

View File

@@ -37,7 +37,7 @@ pub fn line_in_file(path: &Path, line: &str) -> Result<bool> {
pub fn write_postgres_conf(
path: &Path,
spec: &ComputeSpec,
extension_server_port: u16,
extension_server_port: Option<u16>,
) -> Result<()> {
// File::create() destroys the file content if it exists.
let mut file = File::create(path)?;
@@ -127,7 +127,9 @@ pub fn write_postgres_conf(
writeln!(file, "# Managed by compute_ctl: end")?;
}
writeln!(file, "neon.extension_server_port={}", extension_server_port)?;
if let Some(port) = extension_server_port {
writeln!(file, "neon.extension_server_port={}", port)?;
}
// It is essential to keep this line at the end of the file,
// because it is intended to override any settings above.

View File

@@ -134,7 +134,7 @@ fn try_acquire_lsn_lease(
let mut client = config.connect(NoTls)?;
let cmd = format!("lease lsn {} {} {} ", tenant_shard_id, timeline_id, lsn);
let res = client.simple_query(&cmd)?;
let msg = match res.get(1) {
let msg = match res.first() {
Some(msg) => msg,
None => bail!("empty response"),
};

View File

@@ -37,7 +37,7 @@ pub async fn ping_safekeeper(
// Parse result
info!("done with {}", id);
if let postgres::SimpleQueryMessage::Row(row) = &result[1] {
if let postgres::SimpleQueryMessage::Row(row) = &result[0] {
use std::str::FromStr;
let response = TimelineStatusResponse::Ok(TimelineStatusOkResponse {
flush_lsn: Lsn::from_str(row.get("flush_lsn").unwrap())?,

View File

@@ -310,10 +310,6 @@ impl Endpoint {
conf.append("wal_log_hints", "off");
conf.append("max_replication_slots", "10");
conf.append("hot_standby", "on");
// Set to 1MB to both exercise getPage requests/LFC, and still have enough room for
// Postgres to operate. Everything smaller might be not enough for Postgres under load,
// and can cause errors like 'no unpinned buffers available', see
// <https://github.com/neondatabase/neon/issues/9956>
conf.append("shared_buffers", "1MB");
conf.append("fsync", "off");
conf.append("max_connections", "100");

View File

@@ -288,7 +288,7 @@ impl StorageController {
// But the tokio-postgres fork doesn't have this upstream commit:
// https://github.com/sfackler/rust-postgres/commit/cb609be758f3fb5af537f04b584a2ee0cebd5e79
// => we should rebase our fork => TODO https://github.com/neondatabase/neon/issues/8399
.user(username())
.user(&username())
.dbname(DB_NAME)
.connect(tokio_postgres::NoTls)
.await

View File

@@ -560,26 +560,14 @@ async fn main() -> anyhow::Result<()> {
.await?;
}
Command::TenantDescribe { tenant_id } => {
let TenantDescribeResponse {
tenant_id,
shards,
stripe_size,
policy,
config,
} = storcon_client
let describe_response = storcon_client
.dispatch::<(), TenantDescribeResponse>(
Method::GET,
format!("control/v1/tenant/{tenant_id}"),
None,
)
.await?;
println!("Tenant {tenant_id}");
let mut table = comfy_table::Table::new();
table.add_row(["Policy", &format!("{:?}", policy)]);
table.add_row(["Stripe size", &format!("{:?}", stripe_size)]);
table.add_row(["Config", &serde_json::to_string_pretty(&config).unwrap()]);
println!("{table}");
println!("Shards:");
let shards = describe_response.shards;
let mut table = comfy_table::Table::new();
table.set_header(["Shard", "Attached", "Secondary", "Last error", "status"]);
for shard in shards {

View File

@@ -4,16 +4,14 @@ ARG TAG=latest
FROM $REPOSITORY/${COMPUTE_IMAGE}:$TAG
ARG COMPUTE_IMAGE
USER root
RUN apt-get update && \
apt-get install -y curl \
jq \
python3-pip \
netcat-openbsd
netcat
#Faker is required for the pg_anon test
RUN case $COMPUTE_IMAGE in compute-node-v17) OPT="--break-system-packages";; *) OPT= ;; esac && pip3 install $OPT Faker
RUN pip3 install Faker
#This is required for the pg_hintplan test
RUN mkdir -p /ext-src/pg_hint_plan-src && chown postgres /ext-src/pg_hint_plan-src

View File

@@ -30,17 +30,10 @@ cleanup() {
docker compose --profile test-extensions -f $COMPOSE_FILE down
}
for pg_version in ${TEST_VERSION_ONLY-14 15 16 17}; do
pg_version=${pg_version/v/}
for pg_version in 14 15 16; do
echo "clean up containers if exists"
cleanup
PG_TEST_VERSION=$((pg_version < 16 ? 16 : pg_version))
# Support for pg_anon has not yet been added to PG17, so we have to remove the corresponding option
if [ $pg_version -eq 17 ]; then
SPEC_PATH="compute_wrapper/var/db/postgres/specs"
mv $SPEC_PATH/spec.json $SPEC_PATH/spec.bak
jq 'del(.cluster.settings[] | select (.name == "session_preload_libraries"))' $SPEC_PATH/spec.bak > $SPEC_PATH/spec.json
fi
PG_TEST_VERSION=$(($pg_version < 16 ? 16 : $pg_version))
PG_VERSION=$pg_version PG_TEST_VERSION=$PG_TEST_VERSION docker compose --profile test-extensions -f $COMPOSE_FILE up --build -d
echo "wait until the compute is ready. timeout after 60s. "
@@ -61,7 +54,8 @@ for pg_version in ${TEST_VERSION_ONLY-14 15 16 17}; do
fi
done
if [ $pg_version -ge 16 ]; then
if [ $pg_version -ge 16 ]
then
echo Enabling trust connection
docker exec $COMPUTE_CONTAINER_NAME bash -c "sed -i '\$d' /var/db/postgres/compute/pg_hba.conf && echo -e 'host\t all\t all\t all\t trust' >> /var/db/postgres/compute/pg_hba.conf && psql $PSQL_OPTION -c 'select pg_reload_conf()' "
echo Adding postgres role
@@ -74,13 +68,10 @@ for pg_version in ${TEST_VERSION_ONLY-14 15 16 17}; do
# The test assumes that it is running on the same host as the postgres engine.
# In our case it's not true, that's why we are copying files to the compute node
TMPDIR=$(mktemp -d)
# Add support for pg_anon for pg_v16
if [ $pg_version -ne 17 ]; then
docker cp $TEST_CONTAINER_NAME:/ext-src/pg_anon-src/data $TMPDIR/data
echo -e '1\t too \t many \t tabs' > $TMPDIR/data/bad.csv
docker cp $TMPDIR/data $COMPUTE_CONTAINER_NAME:/tmp/tmp_anon_alternate_data
docker cp $TEST_CONTAINER_NAME:/ext-src/pg_anon-src/data $TMPDIR/data
echo -e '1\t too \t many \t tabs' > $TMPDIR/data/bad.csv
docker cp $TMPDIR/data $COMPUTE_CONTAINER_NAME:/tmp/tmp_anon_alternate_data
rm -rf $TMPDIR
fi
TMPDIR=$(mktemp -d)
# The following block does the same for the pg_hintplan test
docker cp $TEST_CONTAINER_NAME:/ext-src/pg_hint_plan-src/data $TMPDIR/data
@@ -106,8 +97,4 @@ for pg_version in ${TEST_VERSION_ONLY-14 15 16 17}; do
fi
fi
cleanup
# Support for pg_anon has not yet been added to PG17, so we have to remove the corresponding option
if [ $pg_version -eq 17 ]; then
mv $SPEC_PATH/spec.bak $SPEC_PATH/spec.json
fi
done

View File

@@ -103,12 +103,11 @@ impl<'a> IdempotencyKey<'a> {
}
}
/// Split into chunks of 1000 metrics to avoid exceeding the max request size.
pub const CHUNK_SIZE: usize = 1000;
// Just a wrapper around a slice of events
// to serialize it as `{"events": []}`
#[derive(Debug, serde::Serialize, serde::Deserialize, PartialEq)]
pub struct EventChunk<'a, T: Clone + PartialEq> {
#[derive(serde::Serialize, Deserialize)]
pub struct EventChunk<'a, T: Clone> {
pub events: std::borrow::Cow<'a, [T]>,
}
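As a side note, here is a minimal sketch of how a caller might combine `CHUNK_SIZE` and `EventChunk` (the caller-side name `all_events` and the send step are assumptions for illustration, and the event type is assumed to implement `Serialize`):

for chunk in all_events.chunks(CHUNK_SIZE) {
    // Borrow each sub-slice; `EventChunk` serializes it as `{"events": [...]}`.
    let body = EventChunk { events: std::borrow::Cow::Borrowed(chunk) };
    let json = serde_json::to_string(&body).expect("serialize event chunk");
    // ... send `json` as one request ...
}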

View File

@@ -442,14 +442,7 @@ impl Default for ConfigToml {
tenant_config: TenantConfigToml::default(),
no_sync: None,
wal_receiver_protocol: DEFAULT_WAL_RECEIVER_PROTOCOL,
page_service_pipelining: if !cfg!(test) {
PageServicePipeliningConfig::Serial
} else {
PageServicePipeliningConfig::Pipelined(PageServicePipeliningConfigPipelined {
max_batch_size: NonZeroUsize::new(32).unwrap(),
execution: PageServiceProtocolPipelinedExecutionStrategy::ConcurrentFutures,
})
},
page_service_pipelining: PageServicePipeliningConfig::Serial,
}
}
}

View File

@@ -48,7 +48,7 @@ pub struct TenantCreateResponse {
pub shards: Vec<TenantCreateResponseShard>,
}
#[derive(Serialize, Deserialize, Debug, Clone)]
#[derive(Serialize, Deserialize)]
pub struct NodeRegisterRequest {
pub node_id: NodeId,
@@ -75,7 +75,7 @@ pub struct TenantPolicyRequest {
pub scheduling: Option<ShardSchedulingPolicy>,
}
#[derive(Clone, Serialize, Deserialize, PartialEq, Eq, Hash, Debug)]
#[derive(Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
pub struct AvailabilityZone(pub String);
impl Display for AvailabilityZone {

View File

@@ -770,11 +770,6 @@ impl Key {
&& self.field6 == 1
}
#[inline(always)]
pub fn is_aux_file_key(&self) -> bool {
self.field1 == AUX_KEY_PREFIX
}
/// Guaranteed to return `Ok()` if [`Self::is_rel_block_key`] returns `true` for `key`.
#[inline(always)]
pub fn to_rel_block(self) -> anyhow::Result<(RelTag, BlockNumber)> {

View File

@@ -501,9 +501,7 @@ pub struct EvictionPolicyLayerAccessThreshold {
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
pub struct ThrottleConfig {
/// See [`ThrottleConfigTaskKinds`] for why we do the serde `rename`.
#[serde(rename = "task_kinds")]
pub enabled: ThrottleConfigTaskKinds,
pub task_kinds: Vec<String>, // TaskKind
pub initial: u32,
#[serde(with = "humantime_serde")]
pub refill_interval: Duration,
@@ -511,38 +509,10 @@ pub struct ThrottleConfig {
pub max: u32,
}
/// Before <https://github.com/neondatabase/neon/pull/9962>
/// the throttle was applied per `Timeline::get`/`Timeline::get_vectored` call.
/// The `task_kinds` field controlled which Pageserver "Task Kind"s
/// were subject to the throttle.
///
/// After that PR, the throttle is applied at pagestream request level
/// and the `task_kinds` field does not apply since the only task kind
/// that is subject to the throttle is that of the page service.
///
/// However, we don't want to make a breaking config change right now
/// because it means we have to migrate all the tenant configs.
/// This will be done in a future PR.
///
/// In the meantime, we use emptiness / non-emptiness of the `task_kinds`
/// field to determine if the throttle is enabled or not.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
#[serde(transparent)]
pub struct ThrottleConfigTaskKinds(Vec<String>);
impl ThrottleConfigTaskKinds {
pub fn disabled() -> Self {
Self(vec![])
}
pub fn is_enabled(&self) -> bool {
!self.0.is_empty()
}
}
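A hedged sketch of the compatibility behavior described above, mirroring the backwards-compat test in this diff (the `rename` to `task_kinds` plus `#[serde(transparent)]` let the old array shape deserialize directly):

let kinds: ThrottleConfigTaskKinds =
    serde_json::from_value(serde_json::json!(["PageRequestHandler"])).unwrap();
assert!(kinds.is_enabled());
assert!(!ThrottleConfigTaskKinds::disabled().is_enabled());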
impl ThrottleConfig {
pub fn disabled() -> Self {
Self {
enabled: ThrottleConfigTaskKinds::disabled(),
task_kinds: vec![], // effectively disables the throttle
// other values don't matter with empty `task_kinds`.
initial: 0,
refill_interval: Duration::from_millis(1),
@@ -556,30 +526,6 @@ impl ThrottleConfig {
}
}
#[cfg(test)]
mod throttle_config_tests {
use super::*;
#[test]
fn test_disabled_is_disabled() {
let config = ThrottleConfig::disabled();
assert!(!config.enabled.is_enabled());
}
#[test]
fn test_enabled_backwards_compat() {
let input = serde_json::json!({
"task_kinds": ["PageRequestHandler"],
"initial": 40000,
"refill_interval": "50ms",
"refill_amount": 1000,
"max": 40000,
"fair": true
});
let config: ThrottleConfig = serde_json::from_value(input).unwrap();
assert!(config.enabled.is_enabled());
}
}
/// A flattened analog of a `pageserver::tenant::LocationMode`, which
/// lists out all possible states (and the virtual "Detached" state)
/// in a flat form rather than using rust-style enums.

View File

@@ -170,37 +170,19 @@ impl ShardIdentity {
}
}
/// Return true if the key should be stored on all shards, not just one.
fn is_key_global(&self, key: &Key) -> bool {
if key.is_slru_block_key() || key.is_slru_segment_size_key() || key.is_aux_file_key() {
// Special keys that are only stored on shard 0
false
} else if key.is_rel_block_key() {
// Ordinary relation blocks are distributed across shards
false
} else if key.is_rel_size_key() {
// All shards maintain rel size keys (although only shard 0 is responsible for
// keeping it strictly accurate, other shards just reflect the highest block they've ingested)
true
} else {
// For everything else, we assume it must be kept everywhere, because ingest code
// might assume this -- this covers functionality where the ingest code has
// not (yet) been made fully shard aware.
true
}
}
/// Return true if the key should be discarded if found in this shard's
/// data store, e.g. during compaction after a split.
///
/// Shards _may_ drop keys which return true here, but are not obliged to.
pub fn is_key_disposable(&self, key: &Key) -> bool {
if self.count < ShardCount(2) {
// Fast path: unsharded tenant doesn't dispose of anything
return false;
}
if self.is_key_global(key) {
if key_is_shard0(key) {
// Q: Why can't we dispose of shard0 content if we're not shard 0?
// A1: because the WAL ingestion logic currently ingests some shard 0
// content on all shards, even though it's only read on shard 0. If we
// dropped it, then subsequent WAL ingest to these keys would encounter
// an error.
// A2: because key_is_shard0 also covers relation size keys, which are written
// on all shards even though they're only maintained accurately on shard 0.
false
} else {
!self.is_key_local(key)

View File

@@ -32,9 +32,9 @@ impl<IO: AsyncRead + AsyncWrite + Unpin + Send> Handler<IO> for TestHandler {
_query_string: &str,
) -> Result<(), QueryError> {
pgb.write_message_noflush(&BeMessage::RowDescription(&[RowDescriptor::text_col(
b"column",
b"hey",
)]))?
.write_message_noflush(&BeMessage::DataRow(&[Some("data".as_bytes())]))?
.write_message_noflush(&BeMessage::DataRow(&[Some("hey".as_bytes())]))?
.write_message_noflush(&BeMessage::CommandComplete(b"SELECT 1"))?;
Ok(())
}
@@ -64,17 +64,10 @@ async fn simple_select() {
}
});
let resp = client.simple_query("SELECT 42;").await.expect("select");
if let SimpleQueryMessage::RowDescription(desc) = &resp[0] {
let first_col = desc[0].name();
assert_eq!(first_col, "column");
} else {
panic!("expected SimpleQueryMessage::RowDescription");
}
if let SimpleQueryMessage::Row(row) = &resp[1] {
let first_val = &(client.simple_query("SELECT 42;").await.expect("select"))[0];
if let SimpleQueryMessage::Row(row) = first_val {
let first_col = row.get(0).expect("first column");
assert_eq!(first_col, "data");
assert_eq!(first_col, "hey");
} else {
panic!("expected SimpleQueryMessage::Row");
}
@@ -147,17 +140,10 @@ async fn simple_select_ssl() {
}
});
let resp = client.simple_query("SELECT 42;").await.expect("select");
if let SimpleQueryMessage::RowDescription(desc) = &resp[0] {
let first_col = desc[0].name();
assert_eq!(first_col, "column");
} else {
panic!("expected SimpleQueryMessage::RowDescription");
}
if let SimpleQueryMessage::Row(row) = &resp[1] {
let first_val = &(client.simple_query("SELECT 42;").await.expect("select"))[0];
if let SimpleQueryMessage::Row(row) = first_val {
let first_col = row.get(0).expect("first column");
assert_eq!(first_col, "data");
assert_eq!(first_col, "hey");
} else {
panic!("expected SimpleQueryMessage::Row");
}

View File

@@ -126,7 +126,7 @@ impl PgConnectionConfig {
// Use `tokio_postgres::Config` instead of `postgres::Config` because
// the former supports more options to fiddle with later.
let mut config = tokio_postgres::Config::new();
config.host(self.host().to_string()).port(self.port);
config.host(&self.host().to_string()).port(self.port);
if let Some(password) = &self.password {
config.password(password);
}
@@ -146,7 +146,8 @@ impl PgConnectionConfig {
// establishing a new connection.
#[allow(unstable_name_collisions)]
config.options(
self.options
&self
.options
.iter()
.map(|s| {
if s.contains(['\\', ' ']) {

View File

@@ -565,8 +565,6 @@ pub enum BeMessage<'a> {
/// Batch of interpreted, shard filtered WAL records,
/// ready for the pageserver to ingest
InterpretedWalRecords(InterpretedWalRecordsBody<'a>),
Raw(u8, &'a [u8]),
}
/// Common shorthands.
@@ -756,10 +754,6 @@ impl BeMessage<'_> {
/// one more buffer.
pub fn write(buf: &mut BytesMut, message: &BeMessage) -> Result<(), ProtocolError> {
match message {
BeMessage::Raw(code, data) => {
buf.put_u8(*code);
write_body(buf, |b| b.put_slice(data))
}
BeMessage::AuthenticationOk => {
buf.put_u8(b'R');
write_body(buf, |buf| {

View File

@@ -10,6 +10,7 @@ byteorder.workspace = true
bytes.workspace = true
fallible-iterator.workspace = true
hmac.workspace = true
md-5 = "0.10"
memchr = "2.0"
rand.workspace = true
sha2.workspace = true

View File

@@ -1,2 +1,37 @@
//! Authentication protocol support.
use md5::{Digest, Md5};
pub mod sasl;
/// Hashes authentication information in a way suitable for use in response
/// to an `AuthenticationMd5Password` message.
///
/// The resulting string should be sent back to the database in a
/// `PasswordMessage` message.
#[inline]
pub fn md5_hash(username: &[u8], password: &[u8], salt: [u8; 4]) -> String {
let mut md5 = Md5::new();
md5.update(password);
md5.update(username);
let output = md5.finalize_reset();
md5.update(format!("{:x}", output));
md5.update(salt);
format!("md5{:x}", md5.finalize())
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn md5() {
let username = b"md5_user";
let password = b"password";
let salt = [0x2a, 0x3d, 0x8f, 0xe0];
assert_eq!(
md5_hash(username, password, salt),
"md562af4dd09bbb41884907a838a3233294"
);
}
}
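As an illustrative sketch of the round trip described in the doc comment above (values reuse the test vector; in practice the salt arrives in the server's `AuthenticationMd5Password` message):

let salt = [0x2a, 0x3d, 0x8f, 0xe0]; // from the server's MD5 challenge
let response = md5_hash(b"md5_user", b"password", salt);
// `response` ("md5...") is sent back to the server verbatim in a PasswordMessage.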

View File

@@ -79,7 +79,7 @@ pub enum Message {
AuthenticationCleartextPassword,
AuthenticationGss,
AuthenticationKerberosV5,
AuthenticationMd5Password,
AuthenticationMd5Password(AuthenticationMd5PasswordBody),
AuthenticationOk,
AuthenticationScmCredential,
AuthenticationSspi,
@@ -191,7 +191,11 @@ impl Message {
0 => Message::AuthenticationOk,
2 => Message::AuthenticationKerberosV5,
3 => Message::AuthenticationCleartextPassword,
5 => Message::AuthenticationMd5Password,
5 => {
let mut salt = [0; 4];
buf.read_exact(&mut salt)?;
Message::AuthenticationMd5Password(AuthenticationMd5PasswordBody { salt })
}
6 => Message::AuthenticationScmCredential,
7 => Message::AuthenticationGss,
8 => Message::AuthenticationGssContinue,
@@ -537,10 +541,6 @@ impl NoticeResponseBody {
pub fn fields(&self) -> ErrorFields<'_> {
ErrorFields { buf: &self.storage }
}
pub fn as_bytes(&self) -> &[u8] {
&self.storage
}
}
pub struct NotificationResponseBody {

View File

@@ -8,6 +8,7 @@
use crate::authentication::sasl;
use hmac::{Hmac, Mac};
use md5::Md5;
use rand::RngCore;
use sha2::digest::FixedOutput;
use sha2::{Digest, Sha256};
@@ -87,3 +88,20 @@ pub(crate) async fn scram_sha_256_salt(
base64::encode(server_key)
)
}
/// **Not recommended, as MD5 is not considered to be secure.**
///
/// Hash password using MD5 with the username as the salt.
///
/// The client may assume the returned string doesn't contain any
/// special characters that would require escaping.
pub fn md5(password: &[u8], username: &str) -> String {
// salt password with username
let mut salted_password = Vec::from(password);
salted_password.extend_from_slice(username.as_bytes());
let mut hash = Md5::new();
hash.update(&salted_password);
let digest = hash.finalize();
format!("md5{:x}", digest)
}

View File

@@ -9,3 +9,11 @@ async fn test_encrypt_scram_sha_256() {
"SCRAM-SHA-256$4096:AQIDBAUGBwgJCgsMDQ4PEA==$8rrDg00OqaiWXJ7p+sCgHEIaBSHY89ZJl3mfIsf32oY=:05L1f+yZbiN8O0AnO40Og85NNRhvzTS57naKRWCcsIA="
);
}
#[test]
fn test_encrypt_md5() {
assert_eq!(
password::md5(b"secret", "foo"),
"md54ab2c5d00339c4b2a4e921d2dc4edec7"
);
}

View File

@@ -10,10 +10,10 @@ use tokio::net::TcpStream;
/// connection.
#[derive(Clone)]
pub struct CancelToken {
pub socket_config: Option<SocketConfig>,
pub ssl_mode: SslMode,
pub process_id: i32,
pub secret_key: i32,
pub(crate) socket_config: Option<SocketConfig>,
pub(crate) ssl_mode: SslMode,
pub(crate) process_id: i32,
pub(crate) secret_key: i32,
}
impl CancelToken {

View File

@@ -138,7 +138,7 @@ impl InnerClient {
}
#[derive(Clone)]
pub struct SocketConfig {
pub(crate) struct SocketConfig {
pub host: Host,
pub port: u16,
pub connect_timeout: Option<Duration>,
@@ -152,7 +152,7 @@ pub struct SocketConfig {
pub struct Client {
inner: Arc<InnerClient>,
socket_config: SocketConfig,
socket_config: Option<SocketConfig>,
ssl_mode: SslMode,
process_id: i32,
secret_key: i32,
@@ -161,7 +161,6 @@ pub struct Client {
impl Client {
pub(crate) fn new(
sender: mpsc::UnboundedSender<Request>,
socket_config: SocketConfig,
ssl_mode: SslMode,
process_id: i32,
secret_key: i32,
@@ -173,7 +172,7 @@ impl Client {
buffer: Default::default(),
}),
socket_config,
socket_config: None,
ssl_mode,
process_id,
secret_key,
@@ -189,6 +188,10 @@ impl Client {
&self.inner
}
pub(crate) fn set_socket_config(&mut self, socket_config: SocketConfig) {
self.socket_config = Some(socket_config);
}
/// Creates a new prepared statement.
///
/// Prepared statements can be executed repeatedly, and may contain query parameters (indicated by `$1`, `$2`, etc),
@@ -409,7 +412,7 @@ impl Client {
/// connection associated with this client.
pub fn cancel_token(&self) -> CancelToken {
CancelToken {
socket_config: Some(self.socket_config.clone()),
socket_config: self.socket_config.clone(),
ssl_mode: self.ssl_mode,
process_id: self.process_id,
secret_key: self.secret_key,

View File

@@ -2,13 +2,14 @@
use crate::connect::connect;
use crate::connect_raw::connect_raw;
use crate::connect_raw::RawConnection;
use crate::tls::MakeTlsConnect;
use crate::tls::TlsConnect;
use crate::{Client, Connection, Error};
use std::fmt;
use std::borrow::Cow;
use std::str;
use std::str::FromStr;
use std::time::Duration;
use std::{error, fmt, iter, mem};
use tokio::io::{AsyncRead, AsyncWrite};
pub use postgres_protocol2::authentication::sasl::ScramKeys;
@@ -146,9 +147,6 @@ pub enum AuthKeys {
/// ```
#[derive(Clone, PartialEq, Eq)]
pub struct Config {
pub(crate) host: Host,
pub(crate) port: u16,
pub(crate) user: Option<String>,
pub(crate) password: Option<Vec<u8>>,
pub(crate) auth_keys: Option<Box<AuthKeys>>,
@@ -156,6 +154,8 @@ pub struct Config {
pub(crate) options: Option<String>,
pub(crate) application_name: Option<String>,
pub(crate) ssl_mode: SslMode,
pub(crate) host: Vec<Host>,
pub(crate) port: Vec<u16>,
pub(crate) connect_timeout: Option<Duration>,
pub(crate) target_session_attrs: TargetSessionAttrs,
pub(crate) channel_binding: ChannelBinding,
@@ -163,12 +163,16 @@ pub struct Config {
pub(crate) max_backend_message_size: Option<usize>,
}
impl Default for Config {
fn default() -> Config {
Config::new()
}
}
impl Config {
/// Creates a new configuration.
pub fn new(host: String, port: u16) -> Config {
pub fn new() -> Config {
Config {
host: Host::Tcp(host),
port,
user: None,
password: None,
auth_keys: None,
@@ -176,6 +180,8 @@ impl Config {
options: None,
application_name: None,
ssl_mode: SslMode::Prefer,
host: vec![],
port: vec![],
connect_timeout: None,
target_session_attrs: TargetSessionAttrs::Any,
channel_binding: ChannelBinding::Prefer,
@@ -278,14 +284,32 @@ impl Config {
self.ssl_mode
}
/// Adds a host to the configuration.
///
/// Multiple hosts can be specified by calling this method multiple times, and each will be tried in order.
pub fn host(&mut self, host: &str) -> &mut Config {
self.host.push(Host::Tcp(host.to_string()));
self
}
/// Gets the hosts that have been added to the configuration with `host`.
pub fn get_host(&self) -> &Host {
pub fn get_hosts(&self) -> &[Host] {
&self.host
}
/// Adds a port to the configuration.
///
/// Multiple ports can be specified by calling this method multiple times. There must either be no ports, in which
/// case the default of 5432 is used, a single port, in which case it is used for all hosts, or the same number of ports
/// as hosts.
pub fn port(&mut self, port: u16) -> &mut Config {
self.port.push(port);
self
}
/// Gets the ports that have been added to the configuration with `port`.
pub fn get_port(&self) -> u16 {
self.port
pub fn get_ports(&self) -> &[u16] {
&self.port
}
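A brief usage sketch of the multi-host semantics documented above (hostnames are placeholders; per the `connect` logic in this diff, each host/port pair is tried in order until one succeeds):

let mut config = Config::new();
config
    .host("primary.example.com")
    .host("standby.example.com")
    .port(5432) // a single port applies to all hosts
    .user("app");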
/// Sets the timeout applied to socket-level connection attempts.
@@ -355,6 +379,99 @@ impl Config {
self.max_backend_message_size
}
fn param(&mut self, key: &str, value: &str) -> Result<(), Error> {
match key {
"user" => {
self.user(value);
}
"password" => {
self.password(value);
}
"dbname" => {
self.dbname(value);
}
"options" => {
self.options(value);
}
"application_name" => {
self.application_name(value);
}
"sslmode" => {
let mode = match value {
"disable" => SslMode::Disable,
"prefer" => SslMode::Prefer,
"require" => SslMode::Require,
_ => return Err(Error::config_parse(Box::new(InvalidValue("sslmode")))),
};
self.ssl_mode(mode);
}
"host" => {
for host in value.split(',') {
self.host(host);
}
}
"port" => {
for port in value.split(',') {
let port = if port.is_empty() {
5432
} else {
port.parse()
.map_err(|_| Error::config_parse(Box::new(InvalidValue("port"))))?
};
self.port(port);
}
}
"connect_timeout" => {
let timeout = value
.parse::<i64>()
.map_err(|_| Error::config_parse(Box::new(InvalidValue("connect_timeout"))))?;
if timeout > 0 {
self.connect_timeout(Duration::from_secs(timeout as u64));
}
}
"target_session_attrs" => {
let target_session_attrs = match value {
"any" => TargetSessionAttrs::Any,
"read-write" => TargetSessionAttrs::ReadWrite,
_ => {
return Err(Error::config_parse(Box::new(InvalidValue(
"target_session_attrs",
))));
}
};
self.target_session_attrs(target_session_attrs);
}
"channel_binding" => {
let channel_binding = match value {
"disable" => ChannelBinding::Disable,
"prefer" => ChannelBinding::Prefer,
"require" => ChannelBinding::Require,
_ => {
return Err(Error::config_parse(Box::new(InvalidValue(
"channel_binding",
))))
}
};
self.channel_binding(channel_binding);
}
"max_backend_message_size" => {
let limit = value.parse::<usize>().map_err(|_| {
Error::config_parse(Box::new(InvalidValue("max_backend_message_size")))
})?;
if limit > 0 {
self.max_backend_message_size(limit);
}
}
key => {
return Err(Error::config_parse(Box::new(UnknownOption(
key.to_string(),
))));
}
}
Ok(())
}
/// Opens a connection to a PostgreSQL database.
///
/// Requires the `runtime` Cargo feature (enabled by default).
@@ -368,11 +485,14 @@ impl Config {
connect(tls, self).await
}
/// Connects to a PostgreSQL database over an arbitrary stream.
///
/// All of the settings other than `user`, `password`, `dbname`, `options`, and `application_name` are ignored.
pub async fn connect_raw<S, T>(
&self,
stream: S,
tls: T,
) -> Result<RawConnection<S, T::Stream>, Error>
) -> Result<(Client, Connection<S, T::Stream>), Error>
where
S: AsyncRead + AsyncWrite + Unpin,
T: TlsConnect<S>,
@@ -381,6 +501,17 @@ impl Config {
}
}
impl FromStr for Config {
type Err = Error;
fn from_str(s: &str) -> Result<Config, Error> {
match UrlParser::parse(s)? {
Some(config) => Ok(config),
None => Parser::parse(s),
}
}
}
// Omit password from debug output
impl fmt::Debug for Config {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
@@ -407,3 +538,360 @@ impl fmt::Debug for Config {
.finish()
}
}
#[derive(Debug)]
struct UnknownOption(String);
impl fmt::Display for UnknownOption {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(fmt, "unknown option `{}`", self.0)
}
}
impl error::Error for UnknownOption {}
#[derive(Debug)]
struct InvalidValue(&'static str);
impl fmt::Display for InvalidValue {
fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(fmt, "invalid value for option `{}`", self.0)
}
}
impl error::Error for InvalidValue {}
struct Parser<'a> {
s: &'a str,
it: iter::Peekable<str::CharIndices<'a>>,
}
impl<'a> Parser<'a> {
fn parse(s: &'a str) -> Result<Config, Error> {
let mut parser = Parser {
s,
it: s.char_indices().peekable(),
};
let mut config = Config::new();
while let Some((key, value)) = parser.parameter()? {
config.param(key, &value)?;
}
Ok(config)
}
fn skip_ws(&mut self) {
self.take_while(char::is_whitespace);
}
fn take_while<F>(&mut self, f: F) -> &'a str
where
F: Fn(char) -> bool,
{
let start = match self.it.peek() {
Some(&(i, _)) => i,
None => return "",
};
loop {
match self.it.peek() {
Some(&(_, c)) if f(c) => {
self.it.next();
}
Some(&(i, _)) => return &self.s[start..i],
None => return &self.s[start..],
}
}
}
fn eat(&mut self, target: char) -> Result<(), Error> {
match self.it.next() {
Some((_, c)) if c == target => Ok(()),
Some((i, c)) => {
let m = format!(
"unexpected character at byte {}: expected `{}` but got `{}`",
i, target, c
);
Err(Error::config_parse(m.into()))
}
None => Err(Error::config_parse("unexpected EOF".into())),
}
}
fn eat_if(&mut self, target: char) -> bool {
match self.it.peek() {
Some(&(_, c)) if c == target => {
self.it.next();
true
}
_ => false,
}
}
fn keyword(&mut self) -> Option<&'a str> {
let s = self.take_while(|c| match c {
c if c.is_whitespace() => false,
'=' => false,
_ => true,
});
if s.is_empty() {
None
} else {
Some(s)
}
}
fn value(&mut self) -> Result<String, Error> {
let value = if self.eat_if('\'') {
let value = self.quoted_value()?;
self.eat('\'')?;
value
} else {
self.simple_value()?
};
Ok(value)
}
fn simple_value(&mut self) -> Result<String, Error> {
let mut value = String::new();
while let Some(&(_, c)) = self.it.peek() {
if c.is_whitespace() {
break;
}
self.it.next();
if c == '\\' {
if let Some((_, c2)) = self.it.next() {
value.push(c2);
}
} else {
value.push(c);
}
}
if value.is_empty() {
return Err(Error::config_parse("unexpected EOF".into()));
}
Ok(value)
}
fn quoted_value(&mut self) -> Result<String, Error> {
let mut value = String::new();
while let Some(&(_, c)) = self.it.peek() {
if c == '\'' {
return Ok(value);
}
self.it.next();
if c == '\\' {
if let Some((_, c2)) = self.it.next() {
value.push(c2);
}
} else {
value.push(c);
}
}
Err(Error::config_parse(
"unterminated quoted connection parameter value".into(),
))
}
fn parameter(&mut self) -> Result<Option<(&'a str, String)>, Error> {
self.skip_ws();
let keyword = match self.keyword() {
Some(keyword) => keyword,
None => return Ok(None),
};
self.skip_ws();
self.eat('=')?;
self.skip_ws();
let value = self.value()?;
Ok(Some((keyword, value)))
}
}
// This is a pretty sloppy "URL" parser, but it matches the behavior of libpq, where things really aren't very strict
struct UrlParser<'a> {
s: &'a str,
config: Config,
}
impl<'a> UrlParser<'a> {
fn parse(s: &'a str) -> Result<Option<Config>, Error> {
let s = match Self::remove_url_prefix(s) {
Some(s) => s,
None => return Ok(None),
};
let mut parser = UrlParser {
s,
config: Config::new(),
};
parser.parse_credentials()?;
parser.parse_host()?;
parser.parse_path()?;
parser.parse_params()?;
Ok(Some(parser.config))
}
fn remove_url_prefix(s: &str) -> Option<&str> {
for prefix in &["postgres://", "postgresql://"] {
if let Some(stripped) = s.strip_prefix(prefix) {
return Some(stripped);
}
}
None
}
fn take_until(&mut self, end: &[char]) -> Option<&'a str> {
match self.s.find(end) {
Some(pos) => {
let (head, tail) = self.s.split_at(pos);
self.s = tail;
Some(head)
}
None => None,
}
}
fn take_all(&mut self) -> &'a str {
mem::take(&mut self.s)
}
fn eat_byte(&mut self) {
self.s = &self.s[1..];
}
fn parse_credentials(&mut self) -> Result<(), Error> {
let creds = match self.take_until(&['@']) {
Some(creds) => creds,
None => return Ok(()),
};
self.eat_byte();
let mut it = creds.splitn(2, ':');
let user = self.decode(it.next().unwrap())?;
self.config.user(&user);
if let Some(password) = it.next() {
let password = Cow::from(percent_encoding::percent_decode(password.as_bytes()));
self.config.password(password);
}
Ok(())
}
fn parse_host(&mut self) -> Result<(), Error> {
let host = match self.take_until(&['/', '?']) {
Some(host) => host,
None => self.take_all(),
};
if host.is_empty() {
return Ok(());
}
for chunk in host.split(',') {
let (host, port) = if chunk.starts_with('[') {
let idx = match chunk.find(']') {
Some(idx) => idx,
None => return Err(Error::config_parse(InvalidValue("host").into())),
};
let host = &chunk[1..idx];
let remaining = &chunk[idx + 1..];
let port = if let Some(port) = remaining.strip_prefix(':') {
Some(port)
} else if remaining.is_empty() {
None
} else {
return Err(Error::config_parse(InvalidValue("host").into()));
};
(host, port)
} else {
let mut it = chunk.splitn(2, ':');
(it.next().unwrap(), it.next())
};
self.host_param(host)?;
let port = self.decode(port.unwrap_or("5432"))?;
self.config.param("port", &port)?;
}
Ok(())
}
fn parse_path(&mut self) -> Result<(), Error> {
if !self.s.starts_with('/') {
return Ok(());
}
self.eat_byte();
let dbname = match self.take_until(&['?']) {
Some(dbname) => dbname,
None => self.take_all(),
};
if !dbname.is_empty() {
self.config.dbname(&self.decode(dbname)?);
}
Ok(())
}
fn parse_params(&mut self) -> Result<(), Error> {
if !self.s.starts_with('?') {
return Ok(());
}
self.eat_byte();
while !self.s.is_empty() {
let key = match self.take_until(&['=']) {
Some(key) => self.decode(key)?,
None => return Err(Error::config_parse("unterminated parameter".into())),
};
self.eat_byte();
let value = match self.take_until(&['&']) {
Some(value) => {
self.eat_byte();
value
}
None => self.take_all(),
};
if key == "host" {
self.host_param(value)?;
} else {
let value = self.decode(value)?;
self.config.param(&key, &value)?;
}
}
Ok(())
}
fn host_param(&mut self, s: &str) -> Result<(), Error> {
let s = self.decode(s)?;
self.config.param("host", &s)
}
fn decode(&self, s: &'a str) -> Result<Cow<'a, str>, Error> {
percent_encoding::percent_decode(s.as_bytes())
.decode_utf8()
.map_err(|e| Error::config_parse(e.into()))
}
}
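Taken together, the methods above decompose a URL in four fixed stages. An illustrative trace (example URL assumed, not taken from the source):

// postgresql://alice:s3cret@[::1]:5433,db2.example.com/mydb?connect_timeout=10
//
// remove_url_prefix -> "alice:s3cret@[::1]:5433,db2.example.com/mydb?connect_timeout=10"
// parse_credentials -> user = "alice", password = "s3cret"   (split at the first '@')
// parse_host        -> host "::1" port 5433                  (bracketed IPv6 form)
//                      host "db2.example.com" port 5432      (no port given, default used)
// parse_path        -> dbname = "mydb"
// parse_params      -> config.param("connect_timeout", "10")
//
// Keys and values are percent-decoded along the way; a "host" query parameter
// appends an additional host rather than being passed through as a generic param.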

View File

@@ -1,16 +1,13 @@
use crate::client::SocketConfig;
use crate::codec::BackendMessage;
use crate::config::{Host, TargetSessionAttrs};
use crate::connect_raw::connect_raw;
use crate::connect_socket::connect_socket;
use crate::tls::{MakeTlsConnect, TlsConnect};
use crate::{Client, Config, Connection, Error, RawConnection, SimpleQueryMessage};
use crate::{Client, Config, Connection, Error, SimpleQueryMessage};
use futures_util::{future, pin_mut, Future, FutureExt, Stream};
use postgres_protocol2::message::backend::Message;
use std::io;
use std::task::Poll;
use tokio::net::TcpStream;
use tokio::sync::mpsc;
pub async fn connect<T>(
mut tls: T,
@@ -19,18 +16,38 @@ pub async fn connect<T>(
where
T: MakeTlsConnect<TcpStream>,
{
let hostname = match &config.host {
Host::Tcp(host) => host.as_str(),
};
let tls = tls
.make_tls_connect(hostname)
.map_err(|e| Error::tls(e.into()))?;
match connect_once(&config.host, config.port, tls, config).await {
Ok((client, connection)) => Ok((client, connection)),
Err(e) => Err(e),
if config.host.is_empty() {
return Err(Error::config("host missing".into()));
}
if config.port.len() > 1 && config.port.len() != config.host.len() {
return Err(Error::config("invalid number of ports".into()));
}
let mut error = None;
for (i, host) in config.host.iter().enumerate() {
let port = config
.port
.get(i)
.or_else(|| config.port.first())
.copied()
.unwrap_or(5432);
let hostname = match host {
Host::Tcp(host) => host.as_str(),
};
let tls = tls
.make_tls_connect(hostname)
.map_err(|e| Error::tls(e.into()))?;
match connect_once(host, port, tls, config).await {
Ok((client, connection)) => return Ok((client, connection)),
Err(e) => error = Some(e),
}
}
Err(error.unwrap())
}
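The new loop tries hosts in order, returns the first successful connection, and reports only the last error. Each host is paired with a port by index; the selection rule in isolation (a sketch mirroring the expression above):

// Ports must be empty, a single shared entry, or exactly one entry per host
// (the length check above rejects everything else).
fn port_for_host(ports: &[u16], i: usize) -> u16 {
    ports
        .get(i) // per-host port, when one was configured for this index
        .or_else(|| ports.first()) // otherwise the single shared port
        .copied()
        .unwrap_or(5432) // otherwise the PostgreSQL default
}
// hosts ["a", "b"], ports []      => 5432, 5432
// hosts ["a", "b"], ports [6000]  => 6000, 6000
// hosts ["a", "b"], ports [1, 2]  => 1, 2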
async fn connect_once<T>(
@@ -43,36 +60,7 @@ where
T: TlsConnect<TcpStream>,
{
let socket = connect_socket(host, port, config.connect_timeout).await?;
let RawConnection {
stream,
parameters,
delayed_notice,
process_id,
secret_key,
} = connect_raw(socket, tls, config).await?;
let socket_config = SocketConfig {
host: host.clone(),
port,
connect_timeout: config.connect_timeout,
};
let (sender, receiver) = mpsc::unbounded_channel();
let client = Client::new(
sender,
socket_config,
config.ssl_mode,
process_id,
secret_key,
);
// delayed notices are always sent as "Async" messages.
let delayed = delayed_notice
.into_iter()
.map(|m| BackendMessage::Async(Message::NoticeResponse(m)))
.collect();
let mut connection = Connection::new(stream, delayed, parameters, receiver);
let (mut client, mut connection) = connect_raw(socket, tls, config).await?;
if let TargetSessionAttrs::ReadWrite = config.target_session_attrs {
let rows = client.simple_query_raw("SHOW transaction_read_only");
@@ -114,5 +102,11 @@ where
}
}
client.set_socket_config(SocketConfig {
host: host.clone(),
port,
connect_timeout: config.connect_timeout,
});
Ok((client, connection))
}

View File

@@ -3,25 +3,27 @@ use crate::config::{self, AuthKeys, Config, ReplicationMode};
use crate::connect_tls::connect_tls;
use crate::maybe_tls_stream::MaybeTlsStream;
use crate::tls::{TlsConnect, TlsStream};
use crate::Error;
use crate::{Client, Connection, Error};
use bytes::BytesMut;
use fallible_iterator::FallibleIterator;
use futures_util::{ready, Sink, SinkExt, Stream, TryStreamExt};
use postgres_protocol2::authentication;
use postgres_protocol2::authentication::sasl;
use postgres_protocol2::authentication::sasl::ScramSha256;
use postgres_protocol2::message::backend::{AuthenticationSaslBody, Message, NoticeResponseBody};
use postgres_protocol2::message::backend::{AuthenticationSaslBody, Message};
use postgres_protocol2::message::frontend;
use std::collections::HashMap;
use std::collections::{HashMap, VecDeque};
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::sync::mpsc;
use tokio_util::codec::Framed;
pub struct StartupStream<S, T> {
inner: Framed<MaybeTlsStream<S, T>, PostgresCodec>,
buf: BackendMessages,
delayed_notice: Vec<NoticeResponseBody>,
delayed: VecDeque<BackendMessage>,
}
impl<S, T> Sink<FrontendMessage> for StartupStream<S, T>
@@ -76,19 +78,11 @@ where
}
}
pub struct RawConnection<S, T> {
pub stream: Framed<MaybeTlsStream<S, T>, PostgresCodec>,
pub parameters: HashMap<String, String>,
pub delayed_notice: Vec<NoticeResponseBody>,
pub process_id: i32,
pub secret_key: i32,
}
pub async fn connect_raw<S, T>(
stream: S,
tls: T,
config: &Config,
) -> Result<RawConnection<S, T::Stream>, Error>
) -> Result<(Client, Connection<S, T::Stream>), Error>
where
S: AsyncRead + AsyncWrite + Unpin,
T: TlsConnect<S>,
@@ -103,20 +97,18 @@ where
},
),
buf: BackendMessages::empty(),
delayed_notice: Vec::new(),
delayed: VecDeque::new(),
};
startup(&mut stream, config).await?;
authenticate(&mut stream, config).await?;
let (process_id, secret_key, parameters) = read_info(&mut stream).await?;
Ok(RawConnection {
stream: stream.inner,
parameters,
delayed_notice: stream.delayed_notice,
process_id,
secret_key,
})
let (sender, receiver) = mpsc::unbounded_channel();
let client = Client::new(sender, config.ssl_mode, process_id, secret_key);
let connection = Connection::new(stream.inner, stream.delayed, parameters, receiver);
Ok((client, connection))
}
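With this change, connect_raw assembles and hands back the (Client, Connection) pair itself. As in stock tokio-postgres, the Connection is a Future that drives the socket I/O and must be polled for the client to make progress; a sketch of the expected call-site shape (socket and tls assumed already established):

let (client, connection) = connect_raw(socket, tls, &config).await?;
tokio::spawn(async move {
    if let Err(e) = connection.await {
        eprintln!("connection error: {e}"); // the connection task ended with an error
    }
});
// `client` can issue queries from here on; dropping it terminates the task.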
async fn startup<S, T>(stream: &mut StartupStream<S, T>, config: &Config) -> Result<(), Error>
@@ -173,11 +165,25 @@ where
authenticate_password(stream, pass).await?;
}
Some(Message::AuthenticationMd5Password(body)) => {
can_skip_channel_binding(config)?;
let user = config
.user
.as_ref()
.ok_or_else(|| Error::config("user missing".into()))?;
let pass = config
.password
.as_ref()
.ok_or_else(|| Error::config("password missing".into()))?;
let output = authentication::md5_hash(user.as_bytes(), pass, body.salt());
authenticate_password(stream, output.as_bytes()).await?;
}
Some(Message::AuthenticationSasl(body)) => {
authenticate_sasl(stream, body, config).await?;
}
Some(Message::AuthenticationMd5Password)
| Some(Message::AuthenticationKerberosV5)
Some(Message::AuthenticationKerberosV5)
| Some(Message::AuthenticationScmCredential)
| Some(Message::AuthenticationGss)
| Some(Message::AuthenticationSspi) => {
@@ -341,7 +347,9 @@ where
body.value().map_err(Error::parse)?.to_string(),
);
}
Some(Message::NoticeResponse(body)) => stream.delayed_notice.push(body),
Some(msg @ Message::NoticeResponse(_)) => {
stream.delayed.push_back(BackendMessage::Async(msg))
}
Some(Message::ReadyForQuery(_)) => return Ok((process_id, secret_key, parameters)),
Some(Message::ErrorResponse(body)) => return Err(Error::db(body)),
Some(_) => return Err(Error::unexpected_message()),

View File

@@ -349,6 +349,7 @@ enum Kind {
Parse,
Encode,
Authentication,
ConfigParse,
Config,
Connect,
Timeout,
@@ -385,6 +386,7 @@ impl fmt::Display for Error {
Kind::Parse => fmt.write_str("error parsing response from server")?,
Kind::Encode => fmt.write_str("error encoding message to server")?,
Kind::Authentication => fmt.write_str("authentication error")?,
Kind::ConfigParse => fmt.write_str("invalid connection string")?,
Kind::Config => fmt.write_str("invalid configuration")?,
Kind::Connect => fmt.write_str("error connecting to server")?,
Kind::Timeout => fmt.write_str("timeout waiting for server")?,
@@ -480,6 +482,10 @@ impl Error {
Error::new(Kind::Authentication, Some(e))
}
pub(crate) fn config_parse(e: Box<dyn error::Error + Sync + Send>) -> Error {
Error::new(Kind::ConfigParse, Some(e))
}
pub(crate) fn config(e: Box<dyn error::Error + Sync + Send>) -> Error {
Error::new(Kind::Config, Some(e))
}

View File

@@ -1,10 +1,9 @@
//! An asynchronous, pipelined, PostgreSQL client.
#![warn(rust_2018_idioms, clippy::all)]
#![warn(rust_2018_idioms, clippy::all, missing_docs)]
pub use crate::cancel_token::CancelToken;
pub use crate::client::{Client, SocketConfig};
pub use crate::client::Client;
pub use crate::config::Config;
pub use crate::connect_raw::RawConnection;
pub use crate::connection::Connection;
use crate::error::DbError;
pub use crate::error::Error;
@@ -13,12 +12,14 @@ pub use crate::query::RowStream;
pub use crate::row::{Row, SimpleQueryRow};
pub use crate::simple_query::SimpleQueryStream;
pub use crate::statement::{Column, Statement};
use crate::tls::MakeTlsConnect;
pub use crate::tls::NoTls;
pub use crate::to_statement::ToStatement;
pub use crate::transaction::Transaction;
pub use crate::transaction_builder::{IsolationLevel, TransactionBuilder};
use crate::types::ToSql;
use postgres_protocol2::message::backend::ReadyForQueryBody;
use tokio::net::TcpStream;
/// After executing a query, the connection will be in one of these states
#[derive(Clone, Copy, Debug, PartialEq)]
@@ -70,6 +71,24 @@ mod transaction;
mod transaction_builder;
pub mod types;
/// A convenience function which parses a connection string and connects to the database.
///
/// See the documentation for [`Config`] for details on the connection string format.
///
/// Requires the `runtime` Cargo feature (enabled by default).
///
/// [`Config`]: config/struct.Config.html
pub async fn connect<T>(
config: &str,
tls: T,
) -> Result<(Client, Connection<TcpStream, T::Stream>), Error>
where
T: MakeTlsConnect<TcpStream>,
{
let config = config.parse::<Config>()?;
config.connect(tls).await
}
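A hedged usage sketch of the new convenience function, following the upstream tokio-postgres convention (the exact query API of this fork may differ):

let (client, connection) = connect("host=localhost user=postgres", NoTls).await?;
// The Connection object performs the actual communication; spawn it off.
tokio::spawn(connection);
let _rows = client.simple_query("SELECT 1").await?;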
/// An asynchronous notification.
#[derive(Clone, Debug)]
pub struct Notification {

View File

@@ -26,7 +26,6 @@ humantime.workspace = true
hyper0 = { workspace = true, features = ["full"] }
fail.workspace = true
futures = { workspace = true}
jemalloc_pprof.workspace = true
jsonwebtoken.workspace = true
nix.workspace = true
once_cell.workspace = true

View File

@@ -10,7 +10,6 @@ use metrics::{register_int_counter, Encoder, IntCounter, TextEncoder};
use once_cell::sync::Lazy;
use routerify::ext::RequestExt;
use routerify::{Middleware, RequestInfo, Router, RouterBuilder};
use tokio_util::io::ReaderStream;
use tracing::{debug, info, info_span, warn, Instrument};
use std::future::Future;
@@ -408,69 +407,6 @@ pub async fn profile_cpu_handler(req: Request<Body>) -> Result<Response<Body>, A
}
}
/// Generates heap profiles.
///
/// This only works with jemalloc on Linux.
pub async fn profile_heap_handler(req: Request<Body>) -> Result<Response<Body>, ApiError> {
enum Format {
Jemalloc,
Pprof,
}
// Parameters.
let format = match get_query_param(&req, "format")?.as_deref() {
None => Format::Pprof,
Some("jemalloc") => Format::Jemalloc,
Some("pprof") => Format::Pprof,
Some(format) => return Err(ApiError::BadRequest(anyhow!("invalid format {format}"))),
};
// Obtain profiler handle.
let mut prof_ctl = jemalloc_pprof::PROF_CTL
.as_ref()
.ok_or(ApiError::InternalServerError(anyhow!(
"heap profiling not enabled"
)))?
.lock()
.await;
if !prof_ctl.activated() {
return Err(ApiError::InternalServerError(anyhow!(
"heap profiling not enabled"
)));
}
// Take and return the profile.
match format {
Format::Jemalloc => {
// NB: file is an open handle to a tempfile that's already deleted.
let file = tokio::task::spawn_blocking(move || prof_ctl.dump())
.await
.map_err(|join_err| ApiError::InternalServerError(join_err.into()))?
.map_err(ApiError::InternalServerError)?;
let stream = ReaderStream::new(tokio::fs::File::from_std(file));
Response::builder()
.status(200)
.header(CONTENT_TYPE, "application/octet-stream")
.header(CONTENT_DISPOSITION, "attachment; filename=\"heap.dump\"")
.body(Body::wrap_stream(stream))
.map_err(|err| ApiError::InternalServerError(err.into()))
}
Format::Pprof => {
let data = tokio::task::spawn_blocking(move || prof_ctl.dump_pprof())
.await
.map_err(|join_err| ApiError::InternalServerError(join_err.into()))?
.map_err(ApiError::InternalServerError)?;
Response::builder()
.status(200)
.header(CONTENT_TYPE, "application/octet-stream")
.header(CONTENT_DISPOSITION, "attachment; filename=\"heap.pb\"")
.body(Body::from(data))
.map_err(|err| ApiError::InternalServerError(err.into()))
}
}
}
pub fn add_request_id_middleware<B: hyper::body::HttpBody + Send + Sync + 'static>(
) -> Middleware<B, ApiError> {
Middleware::pre(move |req| async move {

View File

@@ -112,38 +112,30 @@ impl MetadataRecord {
};
// Next, filter the metadata record by shard.
match metadata_record {
Some(
MetadataRecord::Heapam(HeapamRecord::ClearVmBits(ref mut clear_vm_bits))
| MetadataRecord::Neonrmgr(NeonrmgrRecord::ClearVmBits(ref mut clear_vm_bits)),
) => {
// Route VM page updates to the shards that own them. VM pages are stored in the VM fork
// of the main relation. These are sharded and managed just like regular relation pages.
// See: https://github.com/neondatabase/neon/issues/9855
let is_local_vm_page = |heap_blk| {
let vm_blk = pg_constants::HEAPBLK_TO_MAPBLOCK(heap_blk);
shard.is_key_local(&rel_block_to_key(clear_vm_bits.vm_rel, vm_blk))
};
// Send the old and new VM page updates to their respective shards.
clear_vm_bits.old_heap_blkno = clear_vm_bits
.old_heap_blkno
.filter(|&blkno| is_local_vm_page(blkno));
clear_vm_bits.new_heap_blkno = clear_vm_bits
.new_heap_blkno
.filter(|&blkno| is_local_vm_page(blkno));
// If neither VM page belongs to this shard, discard the record.
if clear_vm_bits.old_heap_blkno.is_none() && clear_vm_bits.new_heap_blkno.is_none()
{
metadata_record = None
}
// Route VM page updates to the shards that own them. VM pages are stored in the VM fork
// of the main relation. These are sharded and managed just like regular relation pages.
// See: https://github.com/neondatabase/neon/issues/9855
if let Some(
MetadataRecord::Heapam(HeapamRecord::ClearVmBits(ref mut clear_vm_bits))
| MetadataRecord::Neonrmgr(NeonrmgrRecord::ClearVmBits(ref mut clear_vm_bits)),
) = metadata_record
{
let is_local_vm_page = |heap_blk| {
let vm_blk = pg_constants::HEAPBLK_TO_MAPBLOCK(heap_blk);
shard.is_key_local(&rel_block_to_key(clear_vm_bits.vm_rel, vm_blk))
};
// Send the old and new VM page updates to their respective shards.
clear_vm_bits.old_heap_blkno = clear_vm_bits
.old_heap_blkno
.filter(|&blkno| is_local_vm_page(blkno));
clear_vm_bits.new_heap_blkno = clear_vm_bits
.new_heap_blkno
.filter(|&blkno| is_local_vm_page(blkno));
// If neither VM page belongs to this shard, discard the record.
if clear_vm_bits.old_heap_blkno.is_none() && clear_vm_bits.new_heap_blkno.is_none() {
metadata_record = None
}
Some(MetadataRecord::LogicalMessage(LogicalMessageRecord::Put(_))) => {
// Filter LogicalMessage records (AUX files) to only be stored on shard zero
if !shard.is_shard_zero() {
metadata_record = None;
}
}
_ => {}
}
Ok(metadata_record)
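The filter above hinges on mapping a heap block to the visibility-map block that covers it. In PostgreSQL each VM page stores two status bits per heap block, so the mapping is a plain integer division; a sketch with the constants assumed from PostgreSQL's visibilitymap.c, not from this repository:

const BLCKSZ: u32 = 8192; // assumed 8 KiB pages
const SIZE_OF_PAGE_HEADER: u32 = 24;
const BITS_PER_HEAPBLOCK: u32 = 2; // all-visible + all-frozen bits
const HEAPBLOCKS_PER_PAGE: u32 = (BLCKSZ - SIZE_OF_PAGE_HEADER) * 8 / BITS_PER_HEAPBLOCK;

fn heapblk_to_mapblock(heap_blk: u32) -> u32 {
    heap_blk / HEAPBLOCKS_PER_PAGE // 32,672 heap blocks per VM page under these assumptions
}

A ClearVmBits record survives on a shard only if at least one of its two affected VM pages is local to that shard; otherwise the record is dropped entirely.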

View File

@@ -53,11 +53,6 @@ project_build_tag!(BUILD_TAG);
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
/// Configure jemalloc to sample allocations for profiles every 1 MB (1 << 20).
#[allow(non_upper_case_globals)]
#[export_name = "malloc_conf"]
pub static malloc_conf: &[u8] = b"prof:true,prof_active:true,lg_prof_sample:20\0";
const PID_FILE_NAME: &str = "pageserver.pid";
const FEATURES: &[&str] = &[
@@ -132,7 +127,6 @@ fn main() -> anyhow::Result<()> {
info!(?conf.virtual_file_io_engine, "starting with virtual_file IO engine");
info!(?conf.virtual_file_io_mode, "starting with virtual_file IO mode");
info!(?conf.wal_receiver_protocol, "starting with WAL receiver protocol");
info!(?conf.page_service_pipelining, "starting with page service pipelining config");
// The tenants directory contains all the pageserver local disk state.
// Create if not exists and make sure all the contents are durable before proceeding.
@@ -308,7 +302,7 @@ fn start_pageserver(
pageserver::metrics::tokio_epoll_uring::Collector::new(),
))
.unwrap();
pageserver::preinitialize_metrics(conf);
pageserver::preinitialize_metrics();
// If any failpoints were set from FAILPOINTS environment variable,
// print them to the log for debugging purposes
@@ -636,59 +630,45 @@ fn start_pageserver(
tokio::net::TcpListener::from_std(pageserver_listener).context("create tokio listener")?
});
// All started up! Now just sit and wait for shutdown signal.
BACKGROUND_RUNTIME.block_on(async move {
let signal_token = CancellationToken::new();
let signal_cancel = signal_token.child_token();
let mut shutdown_pageserver = Some(shutdown_pageserver.drop_guard());
// Spawn signal handlers. Runs in a loop since we want to be responsive to multiple signals
// even after triggering shutdown (e.g. a SIGQUIT after a slow SIGTERM shutdown). See:
// https://github.com/neondatabase/neon/issues/9740.
tokio::spawn(async move {
// All started up! Now just sit and wait for shutdown signal.
{
BACKGROUND_RUNTIME.block_on(async move {
let mut sigint = tokio::signal::unix::signal(SignalKind::interrupt()).unwrap();
let mut sigterm = tokio::signal::unix::signal(SignalKind::terminate()).unwrap();
let mut sigquit = tokio::signal::unix::signal(SignalKind::quit()).unwrap();
loop {
let signal = tokio::select! {
_ = sigquit.recv() => {
info!("Got signal SIGQUIT. Terminating in immediate shutdown mode.");
std::process::exit(111);
}
_ = sigint.recv() => "SIGINT",
_ = sigterm.recv() => "SIGTERM",
};
if !signal_token.is_cancelled() {
info!("Got signal {signal}. Terminating gracefully in fast shutdown mode.");
signal_token.cancel();
} else {
info!("Got signal {signal}. Already shutting down.");
let signal = tokio::select! {
_ = sigquit.recv() => {
info!("Got signal SIGQUIT. Terminating in immediate shutdown mode",);
std::process::exit(111);
}
}
});
_ = sigint.recv() => { "SIGINT" },
_ = sigterm.recv() => { "SIGTERM" },
};
// Wait for cancellation signal and shut down the pageserver.
//
// This cancels the `shutdown_pageserver` cancellation tree. Right now that tree doesn't
// reach very far, and `task_mgr` is used instead. The plan is to change that over time.
signal_cancel.cancelled().await;
info!("Got signal {signal}. Terminating gracefully in fast shutdown mode",);
shutdown_pageserver.cancel();
pageserver::shutdown_pageserver(
http_endpoint_listener,
page_service,
consumption_metrics_tasks,
disk_usage_eviction_task,
&tenant_manager,
background_purges,
deletion_queue.clone(),
secondary_controller_tasks,
0,
)
.await;
unreachable!();
})
// This cancels the `shutdown_pageserver` cancellation tree.
// Right now that tree doesn't reach very far, and `task_mgr` is used instead.
// The plan is to change that over time.
shutdown_pageserver.take();
pageserver::shutdown_pageserver(
http_endpoint_listener,
page_service,
consumption_metrics_tasks,
disk_usage_eviction_task,
&tenant_manager,
background_purges,
deletion_queue.clone(),
secondary_controller_tasks,
0,
)
.await;
unreachable!()
})
}
}
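The reworked shutdown splits signal handling into its own task so the process stays responsive to further signals (notably SIGQUIT) while a graceful shutdown is already in flight. The control flow, reduced to its shape (a sketch; wait_for_signal is a hypothetical stand-in for the select! over sigint/sigterm above):

let signal_token = CancellationToken::new();
let signal_cancel = signal_token.child_token();
tokio::spawn(async move {
    loop {
        let signal = wait_for_signal().await; // SIGQUIT exits the process immediately inside
        if !signal_token.is_cancelled() {
            info!("Got signal {signal}. Terminating gracefully in fast shutdown mode.");
            signal_token.cancel(); // first signal: begin graceful shutdown
        } else {
            info!("Got signal {signal}. Already shutting down.");
        }
    }
});
signal_cancel.cancelled().await; // main task parks here, then runs shutdown_pageserver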
async fn create_remote_storage_client(

View File

@@ -91,6 +91,8 @@
use crate::task_mgr::TaskKind;
pub(crate) mod optional_counter;
// The main structure of this module; see the module-level comment.
#[derive(Debug)]
pub struct RequestContext {
@@ -98,6 +100,7 @@ pub struct RequestContext {
download_behavior: DownloadBehavior,
access_stats_behavior: AccessStatsBehavior,
page_content_kind: PageContentKind,
pub micros_spent_throttled: optional_counter::MicroSecondsCounterU32,
}
/// The kind of access to the page cache.
@@ -155,6 +158,7 @@ impl RequestContextBuilder {
download_behavior: DownloadBehavior::Download,
access_stats_behavior: AccessStatsBehavior::Update,
page_content_kind: PageContentKind::Unknown,
micros_spent_throttled: Default::default(),
},
}
}
@@ -168,6 +172,7 @@ impl RequestContextBuilder {
download_behavior: original.download_behavior,
access_stats_behavior: original.access_stats_behavior,
page_content_kind: original.page_content_kind,
micros_spent_throttled: Default::default(),
},
}
}

View File

@@ -0,0 +1,101 @@
use std::{
sync::atomic::{AtomicU32, Ordering},
time::Duration,
};
#[derive(Debug)]
pub struct CounterU32 {
inner: AtomicU32,
}
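// u32::MAX is a sentinel for the "closed" state; every other value is an open,
// running count. Default therefore starts the counter closed.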
impl Default for CounterU32 {
fn default() -> Self {
Self {
inner: AtomicU32::new(u32::MAX),
}
}
}
impl CounterU32 {
pub fn open(&self) -> Result<(), &'static str> {
match self
.inner
.compare_exchange(u32::MAX, 0, Ordering::Relaxed, Ordering::Relaxed)
{
Ok(_) => Ok(()),
Err(_) => Err("open() called on clsoed state"),
}
}
pub fn close(&self) -> Result<u32, &'static str> {
match self.inner.swap(u32::MAX, Ordering::Relaxed) {
u32::MAX => Err("close() called on closed state"),
x => Ok(x),
}
}
pub fn add(&self, count: u32) -> Result<(), &'static str> {
if count == 0 {
return Ok(());
}
let mut had_err = None;
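// fetch_update retries the closure under contention and cannot return an error
// from inside it, so the failure reason is smuggled out through `had_err`.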
self.inner
.fetch_update(Ordering::Relaxed, Ordering::Relaxed, |cur| match cur {
u32::MAX => {
had_err = Some("add() called on closed state");
None
}
x => {
let (new, overflowed) = x.overflowing_add(count);
if new == u32::MAX || overflowed {
had_err = Some("add() overflowed the counter");
None
} else {
Some(new)
}
}
})
.map_err(|_| had_err.expect("we set it whenever the function returns None"))
.map(|_| ())
}
}
#[derive(Default, Debug)]
pub struct MicroSecondsCounterU32 {
inner: CounterU32,
}
impl MicroSecondsCounterU32 {
pub fn open(&self) -> Result<(), &'static str> {
self.inner.open()
}
pub fn add(&self, duration: Duration) -> Result<(), &'static str> {
match duration.as_micros().try_into() {
Ok(x) => self.inner.add(x),
Err(_) => Err("add(): duration conversion error"),
}
}
pub fn close_and_checked_sub_from(&self, from: Duration) -> Result<Duration, &'static str> {
let val = self.inner.close()?;
let val = Duration::from_micros(val as u64);
let subbed = match from.checked_sub(val) {
Some(v) => v,
None => return Err("Duration::checked_sub"),
};
Ok(subbed)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_basic() {
let counter = MicroSecondsCounterU32::default();
counter.open().unwrap();
counter.add(Duration::from_micros(23)).unwrap();
let res = counter
.close_and_checked_sub_from(Duration::from_micros(42))
.unwrap();
assert_eq!(res, Duration::from_micros(42 - 23));
}
}
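The counter enforces a strict open → add → close protocol per request, which is what lets the metrics code in the diffs below deduct throttled time from wall-clock latency exactly once. The lifecycle, compressed (names from this file):

let ctx_counter = MicroSecondsCounterU32::default(); // starts closed
let start = std::time::Instant::now();
ctx_counter.open().expect("opening a fresh counter");
// ... request handling; the throttle records its wait time:
ctx_counter.add(std::time::Duration::from_micros(150)).unwrap();
// ... at observation time: close and subtract from raw elapsed, falling back
// to the raw value on protocol violations, as the metrics code below does.
let ex_throttled = ctx_counter
    .close_and_checked_sub_from(start.elapsed())
    .unwrap_or_else(|_| start.elapsed());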

View File

@@ -115,10 +115,6 @@ impl ControllerUpcallClient {
Ok(res)
}
pub(crate) fn base_url(&self) -> &Url {
&self.base_url
}
}
impl ControlPlaneGenerationsApi for ControllerUpcallClient {
@@ -195,15 +191,13 @@ impl ControlPlaneGenerationsApi for ControllerUpcallClient {
let request = ReAttachRequest {
node_id: self.node_id,
register: register.clone(),
register,
};
let response: ReAttachResponse = self.retry_http_forever(&re_attach_path, request).await?;
tracing::info!(
"Received re-attach response with {} tenants (node {}, register: {:?})",
response.tenants.len(),
self.node_id,
register,
"Received re-attach response with {} tenants",
response.tenants.len()
);
failpoint_support::sleep_millis_async!("control-plane-client-re-attach");

View File

@@ -56,9 +56,9 @@ use tokio_util::sync::CancellationToken;
use tracing::*;
use utils::auth::JwtAuth;
use utils::failpoint_support::failpoints_handler;
use utils::http::endpoint::{
profile_cpu_handler, profile_heap_handler, prometheus_metrics_handler, request_span,
};
use utils::http::endpoint::profile_cpu_handler;
use utils::http::endpoint::prometheus_metrics_handler;
use utils::http::endpoint::request_span;
use utils::http::request::must_parse_query_param;
use utils::http::request::{get_request_param, must_get_query_param, parse_query_param};
@@ -155,7 +155,6 @@ impl State {
"/swagger.yml",
"/metrics",
"/profile/cpu",
"/profile/heap",
];
Ok(Self {
conf,
@@ -3204,7 +3203,6 @@ pub fn make_router(
.data(state)
.get("/metrics", |r| request_span(r, prometheus_metrics_handler))
.get("/profile/cpu", |r| request_span(r, profile_cpu_handler))
.get("/profile/heap", |r| request_span(r, profile_heap_handler))
.get("/v1/status", |r| api_handler(r, status_handler))
.put("/v1/failpoints", |r| {
testing_api_handler("manage failpoints", r, failpoints_handler)

View File

@@ -575,24 +575,18 @@ async fn import_file(
} else if file_path.starts_with("pg_xact") {
let slru = SlruKind::Clog;
if modification.tline.tenant_shard_id.is_shard_zero() {
import_slru(modification, slru, file_path, reader, len, ctx).await?;
debug!("imported clog slru");
}
import_slru(modification, slru, file_path, reader, len, ctx).await?;
debug!("imported clog slru");
} else if file_path.starts_with("pg_multixact/offsets") {
let slru = SlruKind::MultiXactOffsets;
if modification.tline.tenant_shard_id.is_shard_zero() {
import_slru(modification, slru, file_path, reader, len, ctx).await?;
debug!("imported multixact offsets slru");
}
import_slru(modification, slru, file_path, reader, len, ctx).await?;
debug!("imported multixact offsets slru");
} else if file_path.starts_with("pg_multixact/members") {
let slru = SlruKind::MultiXactMembers;
if modification.tline.tenant_shard_id.is_shard_zero() {
import_slru(modification, slru, file_path, reader, len, ctx).await?;
debug!("imported multixact members slru");
}
import_slru(modification, slru, file_path, reader, len, ctx).await?;
debug!("imported multixact members slru");
} else if file_path.starts_with("pg_twophase") {
let bytes = read_all_bytes(reader).await?;

View File

@@ -7,10 +7,6 @@ use metrics::{
IntCounterPairVec, IntCounterVec, IntGauge, IntGaugeVec, UIntGauge, UIntGaugeVec,
};
use once_cell::sync::Lazy;
use pageserver_api::config::{
PageServicePipeliningConfig, PageServicePipeliningConfigPipelined,
PageServiceProtocolPipelinedExecutionStrategy,
};
use pageserver_api::shard::TenantShardId;
use postgres_backend::{is_expected_io_error, QueryError};
use pq_proto::framed::ConnectionError;
@@ -217,16 +213,31 @@ impl<'a> ScanLatencyOngoingRecording<'a> {
ScanLatencyOngoingRecording { parent, start }
}
pub(crate) fn observe(self) {
pub(crate) fn observe(self, throttled: Option<Duration>) {
let elapsed = self.start.elapsed();
self.parent.observe(elapsed.as_secs_f64());
let ex_throttled = if let Some(throttled) = throttled {
elapsed.checked_sub(throttled)
} else {
Some(elapsed)
};
if let Some(ex_throttled) = ex_throttled {
self.parent.observe(ex_throttled.as_secs_f64());
} else {
use utils::rate_limit::RateLimit;
static LOGGED: Lazy<Mutex<RateLimit>> =
Lazy::new(|| Mutex::new(RateLimit::new(Duration::from_secs(10))));
let mut rate_limit = LOGGED.lock().unwrap();
rate_limit.call(|| {
warn!("error deducting time spent throttled; this message is logged at a global rate limit");
});
}
}
}
pub(crate) static GET_VECTORED_LATENCY: Lazy<GetVectoredLatency> = Lazy::new(|| {
let inner = register_histogram_vec!(
"pageserver_get_vectored_seconds",
"Time spent in get_vectored.",
"Time spent in get_vectored, excluding time spent in timeline_get_throttle.",
&["task_kind"],
CRITICAL_OP_BUCKETS.into(),
)
@@ -249,7 +260,7 @@ pub(crate) static GET_VECTORED_LATENCY: Lazy<GetVectoredLatency> = Lazy::new(||
pub(crate) static SCAN_LATENCY: Lazy<ScanLatency> = Lazy::new(|| {
let inner = register_histogram_vec!(
"pageserver_scan_seconds",
"Time spent in scan.",
"Time spent in scan, excluding time spent in timeline_get_throttle.",
&["task_kind"],
CRITICAL_OP_BUCKETS.into(),
)
@@ -1205,33 +1216,28 @@ pub(crate) mod virtual_file_io_engine {
});
}
pub(crate) struct SmgrOpTimer {
global_latency_histo: Histogram,
struct GlobalAndPerTimelineHistogramTimer<'a, 'c> {
global_latency_histo: &'a Histogram,
// Optional because not all op types are tracked per-timeline
per_timeline_latency_histo: Option<Histogram>,
per_timeline_latency_histo: Option<&'a Histogram>,
start: Instant,
throttled: Duration,
ctx: &'c RequestContext,
start: std::time::Instant,
op: SmgrQueryType,
count: usize,
}
impl SmgrOpTimer {
pub(crate) fn deduct_throttle(&mut self, throttle: &Option<Duration>) {
let Some(throttle) = throttle else {
return;
};
self.throttled += *throttle;
}
}
impl Drop for SmgrOpTimer {
impl Drop for GlobalAndPerTimelineHistogramTimer<'_, '_> {
fn drop(&mut self) {
let elapsed = self.start.elapsed();
let elapsed = match elapsed.checked_sub(self.throttled) {
Some(elapsed) => elapsed,
None => {
let ex_throttled = self
.ctx
.micros_spent_throttled
.close_and_checked_sub_from(elapsed);
let ex_throttled = match ex_throttled {
Ok(res) => res,
Err(error) => {
use utils::rate_limit::RateLimit;
static LOGGED: Lazy<Mutex<enum_map::EnumMap<SmgrQueryType, RateLimit>>> =
Lazy::new(|| {
@@ -1242,17 +1248,18 @@ impl Drop for SmgrOpTimer {
let mut guard = LOGGED.lock().unwrap();
let rate_limit = &mut guard[self.op];
rate_limit.call(|| {
warn!(op=?self.op, ?elapsed, ?self.throttled, "implementation error: time spent throttled exceeds total request wall clock time");
warn!(op=?self.op, error, "error deducting time spent throttled; this message is logged at a global rate limit");
});
elapsed // un-throttled time, more info than just saturating to 0
elapsed
}
};
let elapsed = elapsed.as_secs_f64();
self.global_latency_histo.observe(elapsed);
if let Some(per_timeline_getpage_histo) = &self.per_timeline_latency_histo {
per_timeline_getpage_histo.observe(elapsed);
for _ in 0..self.count {
self.global_latency_histo
.observe(ex_throttled.as_secs_f64());
if let Some(per_timeline_getpage_histo) = self.per_timeline_latency_histo {
per_timeline_getpage_histo.observe(ex_throttled.as_secs_f64());
}
}
}
}
@@ -1282,8 +1289,6 @@ pub(crate) struct SmgrQueryTimePerTimeline {
global_latency: [Histogram; SmgrQueryType::COUNT],
per_timeline_getpage_started: IntCounter,
per_timeline_getpage_latency: Histogram,
global_batch_size: Histogram,
per_timeline_batch_size: Histogram,
}
static SMGR_QUERY_STARTED_GLOBAL: Lazy<IntCounterVec> = Lazy::new(|| {
@@ -1376,76 +1381,6 @@ static SMGR_QUERY_TIME_GLOBAL: Lazy<HistogramVec> = Lazy::new(|| {
.expect("failed to define a metric")
});
static PAGE_SERVICE_BATCH_SIZE_BUCKETS_GLOBAL: Lazy<Vec<f64>> = Lazy::new(|| {
(1..=u32::try_from(Timeline::MAX_GET_VECTORED_KEYS).unwrap())
.map(|v| v.into())
.collect()
});
static PAGE_SERVICE_BATCH_SIZE_GLOBAL: Lazy<Histogram> = Lazy::new(|| {
register_histogram!(
"pageserver_page_service_batch_size_global",
"Batch size of pageserver page service requests",
PAGE_SERVICE_BATCH_SIZE_BUCKETS_GLOBAL.clone(),
)
.expect("failed to define a metric")
});
static PAGE_SERVICE_BATCH_SIZE_BUCKETS_PER_TIMELINE: Lazy<Vec<f64>> = Lazy::new(|| {
let mut buckets = Vec::new();
for i in 0.. {
let bucket = 1 << i;
if bucket > u32::try_from(Timeline::MAX_GET_VECTORED_KEYS).unwrap() {
break;
}
buckets.push(bucket.into());
}
buckets
});
static PAGE_SERVICE_BATCH_SIZE_PER_TENANT_TIMELINE: Lazy<HistogramVec> = Lazy::new(|| {
register_histogram_vec!(
"pageserver_page_service_batch_size",
"Batch size of pageserver page service requests",
&["tenant_id", "shard_id", "timeline_id"],
PAGE_SERVICE_BATCH_SIZE_BUCKETS_PER_TIMELINE.clone()
)
.expect("failed to define a metric")
});
pub(crate) static PAGE_SERVICE_CONFIG_MAX_BATCH_SIZE: Lazy<IntGaugeVec> = Lazy::new(|| {
register_int_gauge_vec!(
"pageserver_page_service_config_max_batch_size",
"Configured maximum batch size for the server-side batching functionality of page_service. \
Labels expose more of the configuration parameters.",
&["mode", "execution"]
)
.expect("failed to define a metric")
});
fn set_page_service_config_max_batch_size(conf: &PageServicePipeliningConfig) {
PAGE_SERVICE_CONFIG_MAX_BATCH_SIZE.reset();
let (label_values, value) = match conf {
PageServicePipeliningConfig::Serial => (["serial", "-"], 1),
PageServicePipeliningConfig::Pipelined(PageServicePipeliningConfigPipelined {
max_batch_size,
execution,
}) => {
let mode = "pipelined";
let execution = match execution {
PageServiceProtocolPipelinedExecutionStrategy::ConcurrentFutures => {
"concurrent-futures"
}
PageServiceProtocolPipelinedExecutionStrategy::Tasks => "tasks",
};
([mode, execution], max_batch_size.get())
}
};
PAGE_SERVICE_CONFIG_MAX_BATCH_SIZE
.with_label_values(&label_values)
.set(value.try_into().unwrap());
}
impl SmgrQueryTimePerTimeline {
pub(crate) fn new(tenant_shard_id: &TenantShardId, timeline_id: &TimelineId) -> Self {
let tenant_id = tenant_shard_id.tenant_id.to_string();
@@ -1481,53 +1416,78 @@ impl SmgrQueryTimePerTimeline {
])
.unwrap();
let global_batch_size = PAGE_SERVICE_BATCH_SIZE_GLOBAL.clone();
let per_timeline_batch_size = PAGE_SERVICE_BATCH_SIZE_PER_TENANT_TIMELINE
.get_metric_with_label_values(&[&tenant_id, &shard_slug, &timeline_id])
.unwrap();
Self {
global_started,
global_latency,
per_timeline_getpage_latency,
per_timeline_getpage_started,
global_batch_size,
per_timeline_batch_size,
}
}
pub(crate) fn start_smgr_op(&self, op: SmgrQueryType, started_at: Instant) -> SmgrOpTimer {
pub(crate) fn start_timer<'c: 'a, 'a>(
&'a self,
op: SmgrQueryType,
ctx: &'c RequestContext,
) -> Option<impl Drop + 'a> {
self.start_timer_many(op, 1, ctx)
}
pub(crate) fn start_timer_many<'c: 'a, 'a>(
&'a self,
op: SmgrQueryType,
count: usize,
ctx: &'c RequestContext,
) -> Option<impl Drop + 'a> {
let start = Instant::now();
self.global_started[op as usize].inc();
// We subtract time spent throttled from the observed latency.
match ctx.micros_spent_throttled.open() {
Ok(()) => (),
Err(error) => {
use utils::rate_limit::RateLimit;
static LOGGED: Lazy<Mutex<enum_map::EnumMap<SmgrQueryType, RateLimit>>> =
Lazy::new(|| {
Mutex::new(enum_map::EnumMap::from_array(std::array::from_fn(|_| {
RateLimit::new(Duration::from_secs(10))
})))
});
let mut guard = LOGGED.lock().unwrap();
let rate_limit = &mut guard[op];
rate_limit.call(|| {
warn!(?op, error, "error opening micros_spent_throttled; this message is logged at a global rate limit");
});
}
}
let per_timeline_latency_histo = if matches!(op, SmgrQueryType::GetPageAtLsn) {
self.per_timeline_getpage_started.inc();
Some(self.per_timeline_getpage_latency.clone())
Some(&self.per_timeline_getpage_latency)
} else {
None
};
SmgrOpTimer {
global_latency_histo: self.global_latency[op as usize].clone(),
Some(GlobalAndPerTimelineHistogramTimer {
global_latency_histo: &self.global_latency[op as usize],
per_timeline_latency_histo,
start: started_at,
ctx,
start,
op,
throttled: Duration::ZERO,
}
}
pub(crate) fn observe_getpage_batch_start(&self, batch_size: usize) {
self.global_batch_size.observe(batch_size as f64);
self.per_timeline_batch_size.observe(batch_size as f64);
count,
})
}
}
#[cfg(test)]
mod smgr_query_time_tests {
use std::time::Instant;
use pageserver_api::shard::TenantShardId;
use strum::IntoEnumIterator;
use utils::id::{TenantId, TimelineId};
use crate::{
context::{DownloadBehavior, RequestContext},
task_mgr::TaskKind,
};
// Regression test, we used hard-coded string constants before using an enum.
#[test]
fn op_label_name() {
@@ -1571,7 +1531,8 @@ mod smgr_query_time_tests {
let (pre_global, pre_per_tenant_timeline) = get_counts();
assert_eq!(pre_per_tenant_timeline, 0);
let timer = metrics.start_smgr_op(*op, Instant::now());
let ctx = RequestContext::new(TaskKind::UnitTest, DownloadBehavior::Download);
let timer = metrics.start_timer(*op, &ctx);
drop(timer);
let (post_global, post_per_tenant_timeline) = get_counts();
@@ -1618,24 +1579,58 @@ pub(crate) static BASEBACKUP_QUERY_TIME: Lazy<BasebackupQueryTime> = Lazy::new(|
}
});
pub(crate) struct BasebackupQueryTimeOngoingRecording<'a> {
pub(crate) struct BasebackupQueryTimeOngoingRecording<'a, 'c> {
parent: &'a BasebackupQueryTime,
ctx: &'c RequestContext,
start: std::time::Instant,
}
impl BasebackupQueryTime {
pub(crate) fn start_recording(&self) -> BasebackupQueryTimeOngoingRecording<'_> {
pub(crate) fn start_recording<'c: 'a, 'a>(
&'a self,
ctx: &'c RequestContext,
) -> BasebackupQueryTimeOngoingRecording<'a, 'a> {
let start = Instant::now();
match ctx.micros_spent_throttled.open() {
Ok(()) => (),
Err(error) => {
use utils::rate_limit::RateLimit;
static LOGGED: Lazy<Mutex<RateLimit>> =
Lazy::new(|| Mutex::new(RateLimit::new(Duration::from_secs(10))));
let mut rate_limit = LOGGED.lock().unwrap();
rate_limit.call(|| {
warn!(error, "error opening micros_spent_throttled; this message is logged at a global rate limit");
});
}
}
BasebackupQueryTimeOngoingRecording {
parent: self,
ctx,
start,
}
}
}
impl BasebackupQueryTimeOngoingRecording<'_> {
impl BasebackupQueryTimeOngoingRecording<'_, '_> {
pub(crate) fn observe<T>(self, res: &Result<T, QueryError>) {
let elapsed = self.start.elapsed().as_secs_f64();
let elapsed = self.start.elapsed();
let ex_throttled = self
.ctx
.micros_spent_throttled
.close_and_checked_sub_from(elapsed);
let ex_throttled = match ex_throttled {
Ok(ex_throttled) => ex_throttled,
Err(error) => {
use utils::rate_limit::RateLimit;
static LOGGED: Lazy<Mutex<RateLimit>> =
Lazy::new(|| Mutex::new(RateLimit::new(Duration::from_secs(10))));
let mut rate_limit = LOGGED.lock().unwrap();
rate_limit.call(|| {
warn!(error, "error deducting time spent throttled; this message is logged at a global rate limit");
});
elapsed
}
};
// If you want to change the categorization of a specific error, also change it in `log_query_error`.
let metric = match res {
Ok(_) => &self.parent.ok,
@@ -1646,7 +1641,7 @@ impl BasebackupQueryTimeOngoingRecording<'_> {
}
Err(_) => &self.parent.error,
};
metric.observe(elapsed);
metric.observe(ex_throttled.as_secs_f64());
}
}
@@ -2727,11 +2722,6 @@ impl TimelineMetrics {
shard_id,
timeline_id,
]);
let _ = PAGE_SERVICE_BATCH_SIZE_PER_TENANT_TIMELINE.remove_label_values(&[
tenant_id,
shard_id,
timeline_id,
]);
}
}
@@ -2757,12 +2747,10 @@ use std::sync::{Arc, Mutex};
use std::task::{Context, Poll};
use std::time::{Duration, Instant};
use crate::config::PageServerConf;
use crate::context::{PageContentKind, RequestContext};
use crate::task_mgr::TaskKind;
use crate::tenant::mgr::TenantSlot;
use crate::tenant::tasks::BackgroundLoopKind;
use crate::tenant::Timeline;
/// Maintain a per timeline gauge in addition to the global gauge.
pub(crate) struct PerTimelineRemotePhysicalSizeGauge {
@@ -3319,7 +3307,7 @@ pub(crate) mod tenant_throttling {
use once_cell::sync::Lazy;
use utils::shard::TenantShardId;
use crate::tenant::{self};
use crate::tenant::{self, throttle::Metric};
struct GlobalAndPerTenantIntCounter {
global: IntCounter,
@@ -3338,7 +3326,7 @@ pub(crate) mod tenant_throttling {
}
}
pub(crate) struct Metrics<const KIND: usize> {
pub(crate) struct TimelineGet {
count_accounted_start: GlobalAndPerTenantIntCounter,
count_accounted_finish: GlobalAndPerTenantIntCounter,
wait_time: GlobalAndPerTenantIntCounter,
@@ -3411,41 +3399,40 @@ pub(crate) mod tenant_throttling {
.unwrap()
});
const KINDS: &[&str] = &["pagestream"];
pub type Pagestream = Metrics<0>;
const KIND: &str = "timeline_get";
impl<const KIND: usize> Metrics<KIND> {
impl TimelineGet {
pub(crate) fn new(tenant_shard_id: &TenantShardId) -> Self {
let per_tenant_label_values = &[
KINDS[KIND],
KIND,
&tenant_shard_id.tenant_id.to_string(),
&tenant_shard_id.shard_slug().to_string(),
];
Metrics {
TimelineGet {
count_accounted_start: {
GlobalAndPerTenantIntCounter {
global: COUNT_ACCOUNTED_START.with_label_values(&[KINDS[KIND]]),
global: COUNT_ACCOUNTED_START.with_label_values(&[KIND]),
per_tenant: COUNT_ACCOUNTED_START_PER_TENANT
.with_label_values(per_tenant_label_values),
}
},
count_accounted_finish: {
GlobalAndPerTenantIntCounter {
global: COUNT_ACCOUNTED_FINISH.with_label_values(&[KINDS[KIND]]),
global: COUNT_ACCOUNTED_FINISH.with_label_values(&[KIND]),
per_tenant: COUNT_ACCOUNTED_FINISH_PER_TENANT
.with_label_values(per_tenant_label_values),
}
},
wait_time: {
GlobalAndPerTenantIntCounter {
global: WAIT_USECS.with_label_values(&[KINDS[KIND]]),
global: WAIT_USECS.with_label_values(&[KIND]),
per_tenant: WAIT_USECS_PER_TENANT
.with_label_values(per_tenant_label_values),
}
},
count_throttled: {
GlobalAndPerTenantIntCounter {
global: WAIT_COUNT.with_label_values(&[KINDS[KIND]]),
global: WAIT_COUNT.with_label_values(&[KIND]),
per_tenant: WAIT_COUNT_PER_TENANT
.with_label_values(per_tenant_label_values),
}
@@ -3468,17 +3455,15 @@ pub(crate) mod tenant_throttling {
&WAIT_USECS_PER_TENANT,
&WAIT_COUNT_PER_TENANT,
] {
for kind in KINDS {
let _ = m.remove_label_values(&[
kind,
&tenant_shard_id.tenant_id.to_string(),
&tenant_shard_id.shard_slug().to_string(),
]);
}
let _ = m.remove_label_values(&[
KIND,
&tenant_shard_id.tenant_id.to_string(),
&tenant_shard_id.shard_slug().to_string(),
]);
}
}
impl<const KIND: usize> tenant::throttle::Metric for Metrics<KIND> {
impl Metric for TimelineGet {
#[inline(always)]
fn accounting_start(&self) {
self.count_accounted_start.inc();
@@ -3577,9 +3562,7 @@ pub(crate) fn set_tokio_runtime_setup(setup: &str, num_threads: NonZeroUsize) {
.set(u64::try_from(num_threads.get()).unwrap());
}
pub fn preinitialize_metrics(conf: &'static PageServerConf) {
set_page_service_config_max_batch_size(&conf.page_service_pipelining);
pub fn preinitialize_metrics() {
// Python tests need these and on some we do alerting.
//
// FIXME(4813): make it so that we have no top level metrics as this fn will easily fall out of
@@ -3647,7 +3630,6 @@ pub fn preinitialize_metrics(conf: &'static PageServerConf) {
&WAL_REDO_RECORDS_HISTOGRAM,
&WAL_REDO_BYTES_HISTOGRAM,
&WAL_REDO_PROCESS_LAUNCH_DURATION_HISTOGRAM,
&PAGE_SERVICE_BATCH_SIZE_GLOBAL,
]
.into_iter()
.for_each(|h| {

View File

@@ -51,7 +51,7 @@ use crate::auth::check_permission;
use crate::basebackup::BasebackupError;
use crate::config::PageServerConf;
use crate::context::{DownloadBehavior, RequestContext};
use crate::metrics::{self, SmgrOpTimer};
use crate::metrics::{self};
use crate::metrics::{ComputeCommandKind, COMPUTE_COMMANDS_COUNTERS, LIVE_CONNECTIONS};
use crate::pgdatadir_mapping::Version;
use crate::span::debug_assert_current_span_has_tenant_and_timeline_id;
@@ -540,13 +540,11 @@ impl From<WaitLsnError> for QueryError {
enum BatchedFeMessage {
Exists {
span: Span,
timer: SmgrOpTimer,
shard: timeline::handle::Handle<TenantManagerTypes>,
req: models::PagestreamExistsRequest,
},
Nblocks {
span: Span,
timer: SmgrOpTimer,
shard: timeline::handle::Handle<TenantManagerTypes>,
req: models::PagestreamNblocksRequest,
},
@@ -554,17 +552,15 @@ enum BatchedFeMessage {
span: Span,
shard: timeline::handle::Handle<TenantManagerTypes>,
effective_request_lsn: Lsn,
pages: smallvec::SmallVec<[(RelTag, BlockNumber, SmgrOpTimer); 1]>,
pages: smallvec::SmallVec<[(RelTag, BlockNumber); 1]>,
},
DbSize {
span: Span,
timer: SmgrOpTimer,
shard: timeline::handle::Handle<TenantManagerTypes>,
req: models::PagestreamDbSizeRequest,
},
GetSlruSegment {
span: Span,
timer: SmgrOpTimer,
shard: timeline::handle::Handle<TenantManagerTypes>,
req: models::PagestreamGetSlruSegmentRequest,
},
@@ -574,41 +570,6 @@ enum BatchedFeMessage {
},
}
impl BatchedFeMessage {
async fn throttle(&mut self, cancel: &CancellationToken) -> Result<(), QueryError> {
let (shard, tokens, timers) = match self {
BatchedFeMessage::Exists { shard, timer, .. }
| BatchedFeMessage::Nblocks { shard, timer, .. }
| BatchedFeMessage::DbSize { shard, timer, .. }
| BatchedFeMessage::GetSlruSegment { shard, timer, .. } => {
(
shard,
// 1 token is probably under-estimating because these
// request handlers typically do several Timeline::get calls.
1,
itertools::Either::Left(std::iter::once(timer)),
)
}
BatchedFeMessage::GetPage { shard, pages, .. } => (
shard,
pages.len(),
itertools::Either::Right(pages.iter_mut().map(|(_, _, timer)| timer)),
),
BatchedFeMessage::RespondError { .. } => return Ok(()),
};
let throttled = tokio::select! {
throttled = shard.pagestream_throttle.throttle(tokens) => { throttled }
_ = cancel.cancelled() => {
return Err(QueryError::Shutdown);
}
};
for timer in timers {
timer.deduct_throttle(&throttled);
}
Ok(())
}
}
impl PageServerHandler {
pub fn new(
tenant_manager: Arc<TenantManager>,
@@ -671,8 +632,6 @@ impl PageServerHandler {
msg = pgb.read_message() => { msg }
};
let received_at = Instant::now();
let copy_data_bytes = match msg? {
Some(FeMessage::CopyData(bytes)) => bytes,
Some(FeMessage::Terminate) => {
@@ -701,15 +660,7 @@ impl PageServerHandler {
.get(tenant_id, timeline_id, ShardSelector::Zero)
.instrument(span.clone()) // sets `shard_id` field
.await?;
let timer = shard
.query_metrics
.start_smgr_op(metrics::SmgrQueryType::GetRelExists, received_at);
BatchedFeMessage::Exists {
span,
timer,
shard,
req,
}
BatchedFeMessage::Exists { span, shard, req }
}
PagestreamFeMessage::Nblocks(req) => {
let span = tracing::info_span!(parent: parent_span, "handle_get_nblocks_request", rel = %req.rel, req_lsn = %req.request_lsn);
@@ -717,15 +668,7 @@ impl PageServerHandler {
.get(tenant_id, timeline_id, ShardSelector::Zero)
.instrument(span.clone()) // sets `shard_id` field
.await?;
let timer = shard
.query_metrics
.start_smgr_op(metrics::SmgrQueryType::GetRelSize, received_at);
BatchedFeMessage::Nblocks {
span,
timer,
shard,
req,
}
BatchedFeMessage::Nblocks { span, shard, req }
}
PagestreamFeMessage::DbSize(req) => {
let span = tracing::info_span!(parent: parent_span, "handle_db_size_request", dbnode = %req.dbnode, req_lsn = %req.request_lsn);
@@ -733,15 +676,7 @@ impl PageServerHandler {
.get(tenant_id, timeline_id, ShardSelector::Zero)
.instrument(span.clone()) // sets `shard_id` field
.await?;
let timer = shard
.query_metrics
.start_smgr_op(metrics::SmgrQueryType::GetDbSize, received_at);
BatchedFeMessage::DbSize {
span,
timer,
shard,
req,
}
BatchedFeMessage::DbSize { span, shard, req }
}
PagestreamFeMessage::GetSlruSegment(req) => {
let span = tracing::info_span!(parent: parent_span, "handle_get_slru_segment_request", kind = %req.kind, segno = %req.segno, req_lsn = %req.request_lsn);
@@ -749,15 +684,7 @@ impl PageServerHandler {
.get(tenant_id, timeline_id, ShardSelector::Zero)
.instrument(span.clone()) // sets `shard_id` field
.await?;
let timer = shard
.query_metrics
.start_smgr_op(metrics::SmgrQueryType::GetSlruSegment, received_at);
BatchedFeMessage::GetSlruSegment {
span,
timer,
shard,
req,
}
BatchedFeMessage::GetSlruSegment { span, shard, req }
}
PagestreamFeMessage::GetPage(PagestreamGetPageRequest {
request_lsn,
@@ -801,14 +728,6 @@ impl PageServerHandler {
return respond_error!(e.into());
}
};
// It's important to start the timer before waiting for the LSN
// so that the _started counters are incremented before we do
// any serious waiting, e.g., for LSNs.
let timer = shard
.query_metrics
.start_smgr_op(metrics::SmgrQueryType::GetPageAtLsn, received_at);
let effective_request_lsn = match Self::wait_or_get_last_lsn(
&shard,
request_lsn,
@@ -828,7 +747,7 @@ impl PageServerHandler {
span,
shard,
effective_request_lsn,
pages: smallvec::smallvec![(rel, blkno, timer)],
pages: smallvec::smallvec![(rel, blkno)],
}
}
};
@@ -913,112 +832,88 @@ impl PageServerHandler {
IO: AsyncRead + AsyncWrite + Send + Sync + Unpin,
{
// invoke handler function
let (handler_results, span): (
Vec<Result<(PagestreamBeMessage, SmgrOpTimer), PageStreamError>>,
_,
) = match batch {
BatchedFeMessage::Exists {
span,
timer,
shard,
req,
} => {
fail::fail_point!("ps::handle-pagerequest-message::exists");
(
vec![self
.handle_get_rel_exists_request(&shard, &req, ctx)
.instrument(span.clone())
.await
.map(|msg| (msg, timer))],
let (handler_results, span): (Vec<Result<PagestreamBeMessage, PageStreamError>>, _) =
match batch {
BatchedFeMessage::Exists { span, shard, req } => {
fail::fail_point!("ps::handle-pagerequest-message::exists");
(
vec![
self.handle_get_rel_exists_request(&shard, &req, ctx)
.instrument(span.clone())
.await,
],
span,
)
}
BatchedFeMessage::Nblocks { span, shard, req } => {
fail::fail_point!("ps::handle-pagerequest-message::nblocks");
(
vec![
self.handle_get_nblocks_request(&shard, &req, ctx)
.instrument(span.clone())
.await,
],
span,
)
}
BatchedFeMessage::GetPage {
span,
)
}
BatchedFeMessage::Nblocks {
span,
timer,
shard,
req,
} => {
fail::fail_point!("ps::handle-pagerequest-message::nblocks");
(
vec![self
.handle_get_nblocks_request(&shard, &req, ctx)
.instrument(span.clone())
.await
.map(|msg| (msg, timer))],
span,
)
}
BatchedFeMessage::GetPage {
span,
shard,
effective_request_lsn,
pages,
} => {
fail::fail_point!("ps::handle-pagerequest-message::getpage");
(
{
let npages = pages.len();
trace!(npages, "handling getpage request");
let res = self
.handle_get_page_at_lsn_request_batched(
&shard,
effective_request_lsn,
pages,
ctx,
)
.instrument(span.clone())
.await;
assert_eq!(res.len(), npages);
res
},
span,
)
}
BatchedFeMessage::DbSize {
span,
timer,
shard,
req,
} => {
fail::fail_point!("ps::handle-pagerequest-message::dbsize");
(
vec![self
.handle_db_size_request(&shard, &req, ctx)
.instrument(span.clone())
.await
.map(|msg| (msg, timer))],
span,
)
}
BatchedFeMessage::GetSlruSegment {
span,
timer,
shard,
req,
} => {
fail::fail_point!("ps::handle-pagerequest-message::slrusegment");
(
vec![self
.handle_get_slru_segment_request(&shard, &req, ctx)
.instrument(span.clone())
.await
.map(|msg| (msg, timer))],
span,
)
}
BatchedFeMessage::RespondError { span, error } => {
// We've already decided to respond with an error, so we don't need to
// call the handler.
(vec![Err(error)], span)
}
};
shard,
effective_request_lsn,
pages,
} => {
fail::fail_point!("ps::handle-pagerequest-message::getpage");
(
{
let npages = pages.len();
trace!(npages, "handling getpage request");
let res = self
.handle_get_page_at_lsn_request_batched(
&shard,
effective_request_lsn,
pages,
ctx,
)
.instrument(span.clone())
.await;
assert_eq!(res.len(), npages);
res
},
span,
)
}
BatchedFeMessage::DbSize { span, shard, req } => {
fail::fail_point!("ps::handle-pagerequest-message::dbsize");
(
vec![
self.handle_db_size_request(&shard, &req, ctx)
.instrument(span.clone())
.await,
],
span,
)
}
BatchedFeMessage::GetSlruSegment { span, shard, req } => {
fail::fail_point!("ps::handle-pagerequest-message::slrusegment");
(
vec![
self.handle_get_slru_segment_request(&shard, &req, ctx)
.instrument(span.clone())
.await,
],
span,
)
}
BatchedFeMessage::RespondError { span, error } => {
// We've already decided to respond with an error, so we don't need to
// call the handler.
(vec![Err(error)], span)
}
};
// Map handler result to protocol behavior.
// Some handler errors cause exit from pagestream protocol.
// Other handler errors are sent back as an error message and we stay in pagestream protocol.
let mut timers: smallvec::SmallVec<[_; 1]> =
smallvec::SmallVec::with_capacity(handler_results.len());
for handler_result in handler_results {
let response_msg = match handler_result {
Err(e) => match &e {
@@ -1049,12 +944,7 @@ impl PageServerHandler {
})
}
},
Ok((response_msg, timer)) => {
// Extending the lifetime of the timers so observations on drop
// include the flush time.
timers.push(timer);
response_msg
}
Ok(response_msg) => response_msg,
};
// marshal & transmit response message
@@ -1071,7 +961,6 @@ impl PageServerHandler {
res?;
}
}
drop(timers);
Ok(())
}
@@ -1192,18 +1081,13 @@ impl PageServerHandler {
Ok(msg) => msg,
Err(e) => break e,
};
let mut msg = match msg {
let msg = match msg {
Some(msg) => msg,
None => {
debug!("pagestream subprotocol end observed");
return ((pgb_reader, timeline_handles), Ok(()));
}
};
if let Err(cancelled) = msg.throttle(&self.cancel).await {
break cancelled;
}
let err = self
.pagesteam_handle_batched_message(pgb_writer, msg, &cancel, ctx)
.await;
@@ -1361,13 +1245,12 @@ impl PageServerHandler {
return Ok(());
}
};
let mut batch = match batch {
let batch = match batch {
Ok(batch) => batch,
Err(e) => {
return Err(e);
}
};
batch.throttle(&self.cancel).await?;
self.pagesteam_handle_batched_message(pgb_writer, batch, &cancel, &ctx)
.await?;
}
@@ -1540,6 +1423,10 @@ impl PageServerHandler {
req: &PagestreamExistsRequest,
ctx: &RequestContext,
) -> Result<PagestreamBeMessage, PageStreamError> {
let _timer = timeline
.query_metrics
.start_timer(metrics::SmgrQueryType::GetRelExists, ctx);
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
let lsn = Self::wait_or_get_last_lsn(
timeline,
@@ -1566,6 +1453,10 @@ impl PageServerHandler {
req: &PagestreamNblocksRequest,
ctx: &RequestContext,
) -> Result<PagestreamBeMessage, PageStreamError> {
let _timer = timeline
.query_metrics
.start_timer(metrics::SmgrQueryType::GetRelSize, ctx);
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
let lsn = Self::wait_or_get_last_lsn(
timeline,
@@ -1592,6 +1483,10 @@ impl PageServerHandler {
req: &PagestreamDbSizeRequest,
ctx: &RequestContext,
) -> Result<PagestreamBeMessage, PageStreamError> {
let _timer = timeline
.query_metrics
.start_timer(metrics::SmgrQueryType::GetDbSize, ctx);
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
let lsn = Self::wait_or_get_last_lsn(
timeline,
@@ -1617,41 +1512,26 @@ impl PageServerHandler {
&mut self,
timeline: &Timeline,
effective_lsn: Lsn,
requests: smallvec::SmallVec<[(RelTag, BlockNumber, SmgrOpTimer); 1]>,
pages: smallvec::SmallVec<[(RelTag, BlockNumber); 1]>,
ctx: &RequestContext,
) -> Vec<Result<(PagestreamBeMessage, SmgrOpTimer), PageStreamError>> {
) -> Vec<Result<PagestreamBeMessage, PageStreamError>> {
debug_assert_current_span_has_tenant_and_timeline_id();
let _timer = timeline.query_metrics.start_timer_many(
metrics::SmgrQueryType::GetPageAtLsn,
pages.len(),
ctx,
);
timeline
.query_metrics
.observe_getpage_batch_start(requests.len());
let results = timeline
.get_rel_page_at_lsn_batched(
requests.iter().map(|(reltag, blkno, _)| (reltag, blkno)),
effective_lsn,
ctx,
)
let pages = timeline
.get_rel_page_at_lsn_batched(pages, effective_lsn, ctx)
.await;
assert_eq!(results.len(), requests.len());
// TODO: avoid creating the new Vec here
Vec::from_iter(
requests
.into_iter()
.zip(results.into_iter())
.map(|((_, _, timer), res)| {
res.map(|page| {
(
PagestreamBeMessage::GetPage(models::PagestreamGetPageResponse {
page,
}),
timer,
)
})
.map_err(PageStreamError::from)
}),
)
Vec::from_iter(pages.into_iter().map(|page| {
page.map(|page| {
PagestreamBeMessage::GetPage(models::PagestreamGetPageResponse { page })
})
.map_err(PageStreamError::from)
}))
}
#[instrument(skip_all, fields(shard_id))]
@@ -1661,6 +1541,10 @@ impl PageServerHandler {
req: &PagestreamGetSlruSegmentRequest,
ctx: &RequestContext,
) -> Result<PagestreamBeMessage, PageStreamError> {
let _timer = timeline
.query_metrics
.start_timer(metrics::SmgrQueryType::GetSlruSegment, ctx);
let latest_gc_cutoff_lsn = timeline.get_latest_gc_cutoff_lsn();
let lsn = Self::wait_or_get_last_lsn(
timeline,
@@ -2161,7 +2045,7 @@ where
COMPUTE_COMMANDS_COUNTERS
.for_command(ComputeCommandKind::Basebackup)
.inc();
let metric_recording = metrics::BASEBACKUP_QUERY_TIME.start_recording();
let metric_recording = metrics::BASEBACKUP_QUERY_TIME.start_recording(&ctx);
let res = async {
self.handle_basebackup_request(
pgb,

View File

@@ -203,13 +203,9 @@ impl Timeline {
) -> Result<Bytes, PageReconstructError> {
match version {
Version::Lsn(effective_lsn) => {
let pages: smallvec::SmallVec<[_; 1]> = smallvec::smallvec![(tag, blknum)];
let pages = smallvec::smallvec![(tag, blknum)];
let res = self
.get_rel_page_at_lsn_batched(
pages.iter().map(|(tag, blknum)| (tag, blknum)),
effective_lsn,
ctx,
)
.get_rel_page_at_lsn_batched(pages, effective_lsn, ctx)
.await;
assert_eq!(res.len(), 1);
res.into_iter().next().unwrap()
@@ -244,7 +240,7 @@ impl Timeline {
/// The ordering of the returned vec corresponds to the ordering of `pages`.
pub(crate) async fn get_rel_page_at_lsn_batched(
&self,
pages: impl ExactSizeIterator<Item = (&RelTag, &BlockNumber)>,
pages: smallvec::SmallVec<[(RelTag, BlockNumber); 1]>,
effective_lsn: Lsn,
ctx: &RequestContext,
) -> Vec<Result<Bytes, PageReconstructError>> {
@@ -258,7 +254,7 @@ impl Timeline {
let result_slots = result.spare_capacity_mut();
let mut keys_slots: BTreeMap<Key, smallvec::SmallVec<[usize; 1]>> = BTreeMap::default();
for (response_slot_idx, (tag, blknum)) in pages.enumerate() {
for (response_slot_idx, (tag, blknum)) in pages.into_iter().enumerate() {
if tag.relnode == 0 {
result_slots[response_slot_idx].write(Err(PageReconstructError::Other(
RelationError::InvalidRelnode.into(),
@@ -269,7 +265,7 @@ impl Timeline {
}
let nblocks = match self
.get_rel_size(*tag, Version::Lsn(effective_lsn), ctx)
.get_rel_size(tag, Version::Lsn(effective_lsn), ctx)
.await
{
Ok(nblocks) => nblocks,
@@ -280,7 +276,7 @@ impl Timeline {
}
};
if *blknum >= nblocks {
if blknum >= nblocks {
debug!(
"read beyond EOF at {} blk {} at {}, size is {}: returning all-zeros page",
tag, blknum, effective_lsn, nblocks
@@ -290,7 +286,7 @@ impl Timeline {
continue;
}
let key = rel_block_to_key(*tag, *blknum);
let key = rel_block_to_key(tag, blknum);
let key_slots = keys_slots.entry(key).or_default();
key_slots.push(response_slot_idx);
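The hunk above builds the dedup map for the batched read: each (rel, block) pair claims a response slot up front, and slots that resolve to the same Key are grouped so the key is materialized once. A simplified sketch of the pattern, using Option slots instead of the spare_capacity_mut/MaybeUninit trick in the real code:

use std::collections::BTreeMap;

// Fetch each distinct key once and fan the value out to every slot that
// requested it; output order matches input order.
fn fetch_batched(keys: Vec<u64>) -> Vec<String> {
    let mut slots: Vec<Option<String>> = vec![None; keys.len()];
    let mut key_slots: BTreeMap<u64, Vec<usize>> = BTreeMap::new();
    for (slot_idx, key) in keys.into_iter().enumerate() {
        key_slots.entry(key).or_default().push(slot_idx);
    }
    for (key, idxs) in key_slots {
        let value = format!("value-for-{key}"); // one fetch per distinct key
        for idx in idxs {
            slots[idx] = Some(value.clone());
        }
    }
    slots.into_iter().map(Option::unwrap).collect()
}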
@@ -530,7 +526,6 @@ impl Timeline {
lsn: Lsn,
ctx: &RequestContext,
) -> Result<Bytes, PageReconstructError> {
assert!(self.tenant_shard_id.is_shard_zero());
let n_blocks = self
.get_slru_segment_size(kind, segno, Version::Lsn(lsn), ctx)
.await?;
@@ -553,7 +548,6 @@ impl Timeline {
lsn: Lsn,
ctx: &RequestContext,
) -> Result<Bytes, PageReconstructError> {
assert!(self.tenant_shard_id.is_shard_zero());
let key = slru_block_to_key(kind, segno, blknum);
self.get(key, lsn, ctx).await
}
@@ -566,7 +560,6 @@ impl Timeline {
version: Version<'_>,
ctx: &RequestContext,
) -> Result<BlockNumber, PageReconstructError> {
assert!(self.tenant_shard_id.is_shard_zero());
let key = slru_segment_size_to_key(kind, segno);
let mut buf = version.get(self, key, ctx).await?;
Ok(buf.get_u32_le())
@@ -580,7 +573,6 @@ impl Timeline {
version: Version<'_>,
ctx: &RequestContext,
) -> Result<bool, PageReconstructError> {
assert!(self.tenant_shard_id.is_shard_zero());
// fetch directory listing
let key = slru_dir_to_key(kind);
let buf = version.get(self, key, ctx).await?;
@@ -1051,28 +1043,26 @@ impl Timeline {
}
// Iterate SLRUs next
if self.tenant_shard_id.is_shard_zero() {
for kind in [
SlruKind::Clog,
SlruKind::MultiXactMembers,
SlruKind::MultiXactOffsets,
] {
let slrudir_key = slru_dir_to_key(kind);
result.add_key(slrudir_key);
let buf = self.get(slrudir_key, lsn, ctx).await?;
let dir = SlruSegmentDirectory::des(&buf)?;
let mut segments: Vec<u32> = dir.segments.iter().cloned().collect();
segments.sort_unstable();
for segno in segments {
let segsize_key = slru_segment_size_to_key(kind, segno);
let mut buf = self.get(segsize_key, lsn, ctx).await?;
let segsize = buf.get_u32_le();
for kind in [
SlruKind::Clog,
SlruKind::MultiXactMembers,
SlruKind::MultiXactOffsets,
] {
let slrudir_key = slru_dir_to_key(kind);
result.add_key(slrudir_key);
let buf = self.get(slrudir_key, lsn, ctx).await?;
let dir = SlruSegmentDirectory::des(&buf)?;
let mut segments: Vec<u32> = dir.segments.iter().cloned().collect();
segments.sort_unstable();
for segno in segments {
let segsize_key = slru_segment_size_to_key(kind, segno);
let mut buf = self.get(segsize_key, lsn, ctx).await?;
let segsize = buf.get_u32_le();
result.add_range(
slru_block_to_key(kind, segno, 0)..slru_block_to_key(kind, segno, segsize),
);
result.add_key(segsize_key);
}
result.add_range(
slru_block_to_key(kind, segno, 0)..slru_block_to_key(kind, segno, segsize),
);
result.add_key(segsize_key);
}
}
@@ -1474,10 +1464,6 @@ impl<'a> DatadirModification<'a> {
blknum: BlockNumber,
rec: NeonWalRecord,
) -> anyhow::Result<()> {
if !self.tline.tenant_shard_id.is_shard_zero() {
return Ok(());
}
self.put(
slru_block_to_key(kind, segno, blknum),
Value::WalRecord(rec),
@@ -1511,8 +1497,6 @@ impl<'a> DatadirModification<'a> {
blknum: BlockNumber,
img: Bytes,
) -> anyhow::Result<()> {
assert!(self.tline.tenant_shard_id.is_shard_zero());
let key = slru_block_to_key(kind, segno, blknum);
if !key.is_valid_key_on_write_path() {
anyhow::bail!(
@@ -1554,7 +1538,6 @@ impl<'a> DatadirModification<'a> {
segno: u32,
blknum: BlockNumber,
) -> anyhow::Result<()> {
assert!(self.tline.tenant_shard_id.is_shard_zero());
let key = slru_block_to_key(kind, segno, blknum);
if !key.is_valid_key_on_write_path() {
anyhow::bail!(
@@ -1866,8 +1849,6 @@ impl<'a> DatadirModification<'a> {
nblocks: BlockNumber,
ctx: &RequestContext,
) -> anyhow::Result<()> {
assert!(self.tline.tenant_shard_id.is_shard_zero());
// Add it to the directory entry
let dir_key = slru_dir_to_key(kind);
let buf = self.get(dir_key, ctx).await?;
@@ -1900,8 +1881,6 @@ impl<'a> DatadirModification<'a> {
segno: u32,
nblocks: BlockNumber,
) -> anyhow::Result<()> {
assert!(self.tline.tenant_shard_id.is_shard_zero());
// Put size
let size_key = slru_segment_size_to_key(kind, segno);
let buf = nblocks.to_le_bytes();

View File

@@ -357,8 +357,8 @@ pub struct Tenant {
/// Throttle applied at the top of [`Timeline::get`].
/// All [`Tenant::timelines`] of a given [`Tenant`] instance share the same [`throttle::Throttle`] instance.
pub(crate) pagestream_throttle:
Arc<throttle::Throttle<crate::metrics::tenant_throttling::Pagestream>>,
pub(crate) timeline_get_throttle:
Arc<throttle::Throttle<crate::metrics::tenant_throttling::TimelineGet>>,
/// An ongoing timeline detach concurrency limiter.
///
@@ -1678,7 +1678,7 @@ impl Tenant {
remote_metadata,
TimelineResources {
remote_client,
pagestream_throttle: self.pagestream_throttle.clone(),
timeline_get_throttle: self.timeline_get_throttle.clone(),
l0_flush_global_state: self.l0_flush_global_state.clone(),
},
LoadTimelineCause::Attach,
@@ -3835,7 +3835,7 @@ impl Tenant {
}
}
fn get_pagestream_throttle_config(
fn get_timeline_get_throttle_config(
psconf: &'static PageServerConf,
overrides: &TenantConfOpt,
) -> throttle::Config {
@@ -3846,8 +3846,8 @@ impl Tenant {
}
pub(crate) fn tenant_conf_updated(&self, new_conf: &TenantConfOpt) {
let conf = Self::get_pagestream_throttle_config(self.conf, new_conf);
self.pagestream_throttle.reconfigure(conf)
let conf = Self::get_timeline_get_throttle_config(self.conf, new_conf);
self.timeline_get_throttle.reconfigure(conf)
}
/// Helper function to create a new Timeline struct.
@@ -4009,9 +4009,9 @@ impl Tenant {
attach_wal_lag_cooldown: Arc::new(std::sync::OnceLock::new()),
cancel: CancellationToken::default(),
gate: Gate::default(),
pagestream_throttle: Arc::new(throttle::Throttle::new(
Tenant::get_pagestream_throttle_config(conf, &attached_conf.tenant_conf),
crate::metrics::tenant_throttling::Metrics::new(&tenant_shard_id),
timeline_get_throttle: Arc::new(throttle::Throttle::new(
Tenant::get_timeline_get_throttle_config(conf, &attached_conf.tenant_conf),
crate::metrics::tenant_throttling::TimelineGet::new(&tenant_shard_id),
)),
tenant_conf: Arc::new(ArcSwap::from_pointee(attached_conf)),
ongoing_timeline_detach: std::sync::Mutex::default(),
@@ -4909,7 +4909,7 @@ impl Tenant {
fn build_timeline_resources(&self, timeline_id: TimelineId) -> TimelineResources {
TimelineResources {
remote_client: self.build_timeline_remote_client(timeline_id),
pagestream_throttle: self.pagestream_throttle.clone(),
timeline_get_throttle: self.timeline_get_throttle.clone(),
l0_flush_global_state: self.l0_flush_global_state.clone(),
}
}
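Both names in the rename above refer to the same structure: the tenant owns one Arc'd throttle, every timeline (including the attach and deletion paths) clones that Arc, and tenant_conf_updated swaps the inner config without invalidating the clones. A minimal sketch of that shared, runtime-reconfigurable shape, with assumed simplified fields:

use std::sync::Arc;
use arc_swap::ArcSwap;

#[derive(Clone)]
struct Config {
    steady_rps: f64,
}

struct Throttle {
    inner: ArcSwap<Config>,
}

impl Throttle {
    fn new(config: Config) -> Arc<Self> {
        Arc::new(Self { inner: ArcSwap::from_pointee(config) })
    }
    // Called on tenant config updates; existing Arc clones observe the new
    // config on their next load().
    fn reconfigure(&self, config: Config) {
        self.inner.store(Arc::new(config));
    }
    fn steady_rps(&self) -> f64 {
        self.inner.load().steady_rps
    }
}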

View File

@@ -347,7 +347,7 @@ async fn init_load_generations(
);
emergency_generations(tenant_confs)
} else if let Some(client) = ControllerUpcallClient::new(conf, cancel) {
info!("Calling {} API to re-attach tenants", client.base_url());
info!("Calling control plane API to re-attach tenants");
// If we are configured to use the control plane API, then it is the source of truth for what tenants to load.
match client.re_attach(conf).await {
Ok(tenants) => tenants

View File

@@ -2564,9 +2564,9 @@ pub fn parse_remote_index_path(path: RemotePath) -> Option<Generation> {
}
/// Given the key of a tenant manifest, parse out the generation number
pub fn parse_remote_tenant_manifest_path(path: RemotePath) -> Option<Generation> {
pub(crate) fn parse_remote_tenant_manifest_path(path: RemotePath) -> Option<Generation> {
static RE: OnceLock<Regex> = OnceLock::new();
let re = RE.get_or_init(|| Regex::new(r".*tenant-manifest-([0-9a-f]{8}).json").unwrap());
let re = RE.get_or_init(|| Regex::new(r".+tenant-manifest-([0-9a-f]{8}).json").unwrap());
re.captures(path.get_path().as_str())
.and_then(|c| c.get(1))
.and_then(|m| Generation::parse_suffix(m.as_str()))
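A compact sketch of the parse above, mirroring the OnceLock-cached regex from the diff. The two sides differ in `.*` versus `.+`: with `.+` the prefix before `tenant-manifest-` must be non-empty, so a bare file name with nothing preceding it no longer matches.

use std::sync::OnceLock;
use regex::Regex;

// Pull the 8-hex-digit generation suffix out of a manifest object key.
fn parse_generation(path: &str) -> Option<&str> {
    static RE: OnceLock<Regex> = OnceLock::new();
    let re = RE.get_or_init(|| Regex::new(r".+tenant-manifest-([0-9a-f]{8}).json").unwrap());
    re.captures(path).and_then(|c| c.get(1)).map(|m| m.as_str())
}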

View File

@@ -43,7 +43,7 @@ impl TenantManifest {
offloaded_timelines: vec![],
}
}
pub fn from_json_bytes(bytes: &[u8]) -> Result<Self, serde_json::Error> {
pub(crate) fn from_json_bytes(bytes: &[u8]) -> Result<Self, serde_json::Error> {
serde_json::from_slice::<Self>(bytes)
}

View File

@@ -471,14 +471,14 @@ async fn ingest_housekeeping_loop(tenant: Arc<Tenant>, cancel: CancellationToken
// TODO: rename the background loop kind to something more generic, like, tenant housekeeping.
// Or just spawn another background loop for this throttle, it's not like it's super costly.
info_span!(parent: None, "pagestream_throttle", tenant_id=%tenant.tenant_shard_id, shard_id=%tenant.tenant_shard_id.shard_slug()).in_scope(|| {
info_span!(parent: None, "timeline_get_throttle", tenant_id=%tenant.tenant_shard_id, shard_id=%tenant.tenant_shard_id.shard_slug()).in_scope(|| {
let now = Instant::now();
let prev = std::mem::replace(&mut last_throttle_flag_reset_at, now);
let Stats { count_accounted_start, count_accounted_finish, count_throttled, sum_throttled_usecs} = tenant.pagestream_throttle.reset_stats();
let Stats { count_accounted_start, count_accounted_finish, count_throttled, sum_throttled_usecs} = tenant.timeline_get_throttle.reset_stats();
if count_throttled == 0 {
return;
}
let allowed_rps = tenant.pagestream_throttle.steady_rps();
let allowed_rps = tenant.timeline_get_throttle.steady_rps();
let delta = now - prev;
info!(
n_seconds=%format_args!("{:.3}", delta.as_secs_f64()),
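A sketch of the snapshot logic in that loop, with assumed parameter names: the last-reset instant is swapped for the current one, the counters cover the elapsed window, and nothing is logged when the throttle never engaged:

use std::time::Instant;

fn report_throttle_window(last_reset: &mut Instant, count_throttled: u64, sum_throttled_usecs: u64) {
    let now = Instant::now();
    let prev = std::mem::replace(last_reset, now);
    if count_throttled == 0 {
        return; // keep the log quiet when nothing was throttled
    }
    let window_secs = (now - prev).as_secs_f64();
    println!(
        "throttled {count_throttled} requests for {sum_throttled_usecs}us over {window_secs:.3}s"
    );
}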

View File

@@ -1,14 +1,19 @@
use std::{
str::FromStr,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
Arc, Mutex,
},
time::{Duration, Instant},
};
use arc_swap::ArcSwap;
use enumset::EnumSet;
use tracing::{error, warn};
use utils::leaky_bucket::{LeakyBucketConfig, RateLimiter};
use crate::{context::RequestContext, task_mgr::TaskKind};
/// Throttle for `async` functions.
///
/// Runtime reconfigurable.
@@ -30,7 +35,7 @@ pub struct Throttle<M: Metric> {
}
pub struct Inner {
enabled: bool,
task_kinds: EnumSet<TaskKind>,
rate_limiter: Arc<RateLimiter>,
}
@@ -74,12 +79,26 @@ where
}
fn new_inner(config: Config) -> Inner {
let Config {
enabled,
task_kinds,
initial,
refill_interval,
refill_amount,
max,
} = config;
let task_kinds: EnumSet<TaskKind> = task_kinds
.iter()
.filter_map(|s| match TaskKind::from_str(s) {
Ok(v) => Some(v),
Err(e) => {
// TODO: avoid this failure mode
error!(
"cannot parse task kind, ignoring for rate limiting {}",
utils::error::report_compact_sources(&e)
);
None
}
})
.collect();
// steady rate, we expect `refill_amount` requests per `refill_interval`.
// dividing gives us the rps.
@@ -93,7 +112,7 @@ where
let rate_limiter = RateLimiter::with_initial_tokens(config, f64::from(initial_tokens));
Inner {
enabled: enabled.is_enabled(),
task_kinds,
rate_limiter: Arc::new(rate_limiter),
}
}
@@ -122,13 +141,11 @@ where
self.inner.load().rate_limiter.steady_rps()
}
pub async fn throttle(&self, key_count: usize) -> Option<Duration> {
pub async fn throttle(&self, ctx: &RequestContext, key_count: usize) -> Option<Duration> {
let inner = self.inner.load_full(); // clones the `Inner` Arc
if !inner.enabled {
if !inner.task_kinds.contains(ctx.task_kind()) {
return None;
}
};
let start = std::time::Instant::now();
self.metric.accounting_start();
@@ -145,6 +162,19 @@ where
.fetch_add(wait_time.as_micros() as u64, Ordering::Relaxed);
let observation = Observation { wait_time };
self.metric.observe_throttling(&observation);
match ctx.micros_spent_throttled.add(wait_time) {
Ok(res) => res,
Err(error) => {
use once_cell::sync::Lazy;
use utils::rate_limit::RateLimit;
static WARN_RATE_LIMIT: Lazy<Mutex<RateLimit>> =
Lazy::new(|| Mutex::new(RateLimit::new(Duration::from_secs(10))));
let mut guard = WARN_RATE_LIMIT.lock().unwrap();
guard.call(move || {
warn!(error, "error adding time spent throttled; this message is logged at a global rate limit");
});
}
}
Some(wait_time)
} else {
None
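The WARN_RATE_LIMIT block above caps how often the accounting-failure warning can fire. A self-contained sketch of that limiter (the real code uses utils::rate_limit::RateLimit behind a once_cell Lazy and a Mutex):

use std::time::{Duration, Instant};

struct RateLimit {
    interval: Duration,
    last: Option<Instant>,
}

impl RateLimit {
    fn new(interval: Duration) -> Self {
        Self { interval, last: None }
    }
    // Run `f` at most once per `interval`; extra calls are dropped, so a
    // hot error path cannot flood the log.
    fn call(&mut self, f: impl FnOnce()) {
        let now = Instant::now();
        if self.last.map_or(true, |last| now.duration_since(last) >= self.interval) {
            self.last = Some(now);
            f();
        }
    }
}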

View File

@@ -208,8 +208,8 @@ fn drop_wlock<T>(rlock: tokio::sync::RwLockWriteGuard<'_, T>) {
/// The outward-facing resources required to build a Timeline
pub struct TimelineResources {
pub remote_client: RemoteTimelineClient,
pub pagestream_throttle:
Arc<crate::tenant::throttle::Throttle<crate::metrics::tenant_throttling::Pagestream>>,
pub timeline_get_throttle:
Arc<crate::tenant::throttle::Throttle<crate::metrics::tenant_throttling::TimelineGet>>,
pub l0_flush_global_state: l0_flush::L0FlushGlobalState,
}
@@ -411,9 +411,9 @@ pub struct Timeline {
/// Timeline deletion will acquire both compaction and gc locks in whatever order.
gc_lock: tokio::sync::Mutex<()>,
/// Cloned from [`super::Tenant::pagestream_throttle`] on construction.
pub(crate) pagestream_throttle:
Arc<crate::tenant::throttle::Throttle<crate::metrics::tenant_throttling::Pagestream>>,
/// Cloned from [`super::Tenant::timeline_get_throttle`] on construction.
timeline_get_throttle:
Arc<crate::tenant::throttle::Throttle<crate::metrics::tenant_throttling::TimelineGet>>,
/// Size estimator for aux file v2
pub(crate) aux_file_size_estimator: AuxFileSizeEstimator,
@@ -949,7 +949,7 @@ impl Timeline {
/// If a remote layer file is needed, it is downloaded as part of this
/// call.
///
/// This method enforces [`Self::pagestream_throttle`] internally.
/// This method enforces [`Self::timeline_get_throttle`] internally.
///
/// NOTE: It is considered an error to 'get' a key that doesn't exist. The
/// abstraction above this needs to store suitable metadata to track what
@@ -977,6 +977,8 @@ impl Timeline {
// page_service.
debug_assert!(!self.shard_identity.is_key_disposable(&key));
self.timeline_get_throttle.throttle(ctx, 1).await;
let keyspace = KeySpace {
ranges: vec![key..key.next()],
};
@@ -1056,6 +1058,13 @@ impl Timeline {
.for_task_kind(ctx.task_kind())
.map(|metric| (metric, Instant::now()));
// start counting after throttle so that throttle time
// is always less than observation time
let throttled = self
.timeline_get_throttle
.throttle(ctx, key_count as usize)
.await;
let res = self
.get_vectored_impl(
keyspace.clone(),
@@ -1067,7 +1076,23 @@ impl Timeline {
if let Some((metric, start)) = start {
let elapsed = start.elapsed();
metric.observe(elapsed.as_secs_f64());
let ex_throttled = if let Some(throttled) = throttled {
elapsed.checked_sub(throttled)
} else {
Some(elapsed)
};
if let Some(ex_throttled) = ex_throttled {
metric.observe(ex_throttled.as_secs_f64());
} else {
use utils::rate_limit::RateLimit;
static LOGGED: Lazy<Mutex<RateLimit>> =
Lazy::new(|| Mutex::new(RateLimit::new(Duration::from_secs(10))));
let mut rate_limit = LOGGED.lock().unwrap();
rate_limit.call(|| {
warn!("error deducting time spent throttled; this message is logged at a global rate limit");
});
}
}
res
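The ex_throttled computation above subtracts the throttle wait from the observed latency so the metric reflects useful work only; checked_sub guards the supposedly impossible case where the wait exceeds the total, which is reported via the rate-limited warning instead. As a sketch:

use std::time::Duration;

// None means underflow: the caller should log instead of recording a bogus value.
fn latency_ex_throttled(elapsed: Duration, throttled: Option<Duration>) -> Option<Duration> {
    match throttled {
        Some(wait) => elapsed.checked_sub(wait),
        None => Some(elapsed),
    }
}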
@@ -1112,6 +1137,14 @@ impl Timeline {
.for_task_kind(ctx.task_kind())
.map(ScanLatencyOngoingRecording::start_recording);
// start counting after throttle so that throttle time
// is always less than observation time
let throttled = self
.timeline_get_throttle
// assume scan = 1 quota for now until we find a better way to process this
.throttle(ctx, 1)
.await;
let vectored_res = self
.get_vectored_impl(
keyspace.clone(),
@@ -1122,7 +1155,7 @@ impl Timeline {
.await;
if let Some(recording) = start {
recording.observe();
recording.observe(throttled);
}
vectored_res
@@ -2338,7 +2371,7 @@ impl Timeline {
standby_horizon: AtomicLsn::new(0),
pagestream_throttle: resources.pagestream_throttle,
timeline_get_throttle: resources.timeline_get_throttle,
aux_file_size_estimator: AuxFileSizeEstimator::new(aux_file_metrics),

View File

@@ -298,7 +298,7 @@ impl DeleteTimelineFlow {
None, // Ancestor is not needed for deletion.
TimelineResources {
remote_client,
pagestream_throttle: tenant.pagestream_throttle.clone(),
timeline_get_throttle: tenant.timeline_get_throttle.clone(),
l0_flush_global_state: tenant.l0_flush_global_state.clone(),
},
// Important. We don't pass ancestor above because it can be missing.

View File

@@ -129,23 +129,22 @@ impl Flow {
}
// Import SLRUs
if self.timeline.tenant_shard_id.is_shard_zero() {
// pg_xact (01:00 keyspace)
self.import_slru(SlruKind::Clog, &self.storage.pgdata().join("pg_xact"))
.await?;
// pg_multixact/members (01:01 keyspace)
self.import_slru(
SlruKind::MultiXactMembers,
&self.storage.pgdata().join("pg_multixact/members"),
)
// pg_xact (01:00 keyspace)
self.import_slru(SlruKind::Clog, &self.storage.pgdata().join("pg_xact"))
.await?;
// pg_multixact/offsets (01:02 keyspace)
self.import_slru(
SlruKind::MultiXactOffsets,
&self.storage.pgdata().join("pg_multixact/offsets"),
)
.await?;
}
// pg_multixact/members (01:01 keyspace)
self.import_slru(
SlruKind::MultiXactMembers,
&self.storage.pgdata().join("pg_multixact/members"),
)
.await?;
// pg_multixact/offsets (01:02 keyspace)
self.import_slru(
SlruKind::MultiXactOffsets,
&self.storage.pgdata().join("pg_multixact/offsets"),
)
.await?;
// Import pg_twophase.
// TODO: as empty
@@ -303,8 +302,6 @@ impl Flow {
}
async fn import_slru(&mut self, kind: SlruKind, path: &RemotePath) -> anyhow::Result<()> {
assert!(self.timeline.tenant_shard_id.is_shard_zero());
let segments = self.storage.listfilesindir(path).await?;
let segments: Vec<(String, u32, usize)> = segments
.into_iter()
@@ -340,6 +337,7 @@ impl Flow {
debug!(%p, segno=%segno, %size, %start_key, %end_key, "scheduling SLRU segment");
self.tasks
.push(AnyImportTask::SlruBlocks(ImportSlruBlocksTask::new(
*self.timeline.get_shard_identity(),
start_key..end_key,
&p,
self.storage.clone(),
@@ -633,14 +631,21 @@ impl ImportTask for ImportRelBlocksTask {
}
struct ImportSlruBlocksTask {
shard_identity: ShardIdentity,
key_range: Range<Key>,
path: RemotePath,
storage: RemoteStorageWrapper,
}
impl ImportSlruBlocksTask {
fn new(key_range: Range<Key>, path: &RemotePath, storage: RemoteStorageWrapper) -> Self {
fn new(
shard_identity: ShardIdentity,
key_range: Range<Key>,
path: &RemotePath,
storage: RemoteStorageWrapper,
) -> Self {
ImportSlruBlocksTask {
shard_identity,
key_range,
path: path.clone(),
storage,
@@ -668,13 +673,17 @@ impl ImportTask for ImportSlruBlocksTask {
let mut file_offset = 0;
while blknum < end_blk {
let key = slru_block_to_key(kind, segno, blknum);
assert!(
!self.shard_identity.is_key_disposable(&key),
"SLRU keys need to go into every shard"
);
let buf = &buf[file_offset..(file_offset + 8192)];
file_offset += 8192;
layer_writer
.put_image(key, Bytes::copy_from_slice(buf), ctx)
.await?;
nimages += 1;
blknum += 1;
nimages += 1;
}
Ok(nimages)
}
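The import loop above walks an SLRU segment as fixed 8192-byte pages and asserts that every derived key belongs on the importing shard. A sketch of the page slicing, assuming PostgreSQL's standard 8 KiB block size:

const BLCKSZ: usize = 8192; // PostgreSQL block size

// Yield each fixed-size SLRU page of a segment buffer in block order; a
// trailing partial chunk (which a valid segment should not have) is dropped.
fn slru_pages(segment: &[u8]) -> impl Iterator<Item = &[u8]> {
    segment.chunks_exact(BLCKSZ)
}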

View File

@@ -695,7 +695,7 @@ async fn identify_system(client: &Client) -> anyhow::Result<IdentifySystem> {
// extract the row contents into an IdentifySystem struct.
// written as a closure so I can use ? for Option here.
if let Some(SimpleQueryMessage::Row(first_row)) = response.get(1) {
if let Some(SimpleQueryMessage::Row(first_row)) = response.first() {
Ok(IdentifySystem {
systemid: get_parse(first_row, 0)?,
timeline: get_parse(first_row, 1)?,

View File

@@ -1392,10 +1392,6 @@ impl WalIngest {
img: Bytes,
ctx: &RequestContext,
) -> Result<()> {
if !self.shard.is_shard_zero() {
return Ok(());
}
self.handle_slru_extend(modification, kind, segno, blknum, ctx)
.await?;
modification.put_slru_page_image(kind, segno, blknum, img)?;

View File

@@ -15,9 +15,6 @@
#include "access/subtrans.h"
#include "access/twophase.h"
#include "access/xlog.h"
#if PG_MAJORVERSION_NUM >= 15
#include "access/xlogrecovery.h"
#endif
#include "replication/logical.h"
#include "replication/slot.h"
#include "replication/walsender.h"
@@ -435,16 +432,6 @@ _PG_init(void)
restore_running_xacts_callback = RestoreRunningXactsFromClog;
DefineCustomBoolVariable(
"neon.allow_replica_misconfig",
"Allow replica startup when some critical GUCs have smaller value than on primary node",
NULL,
&allowReplicaMisconfig,
true,
PGC_POSTMASTER,
0,
NULL, NULL, NULL);
DefineCustomEnumVariable(
"neon.running_xacts_overflow_policy",
"Action performed on snapshot overflow when restoring runnings xacts from CLOG",

View File

@@ -439,8 +439,6 @@ readahead_buffer_resize(int newsize, void *extra)
newPState->ring_unused = newsize;
newPState->ring_receive = newsize;
newPState->ring_flush = newsize;
newPState->max_shard_no = MyPState->max_shard_no;
memcpy(newPState->shard_bitmap, MyPState->shard_bitmap, sizeof(MyPState->shard_bitmap));
/*
* Copy over the prefetches.
@@ -497,11 +495,7 @@ readahead_buffer_resize(int newsize, void *extra)
for (; end >= MyPState->ring_last && end != UINT64_MAX; end -= 1)
{
PrefetchRequest *slot = GetPrfSlot(end);
if (slot->status == PRFS_RECEIVED)
{
pfree(slot->response);
}
prefetch_set_unused(end);
}
prfh_destroy(MyPState->prf_hash);
@@ -950,9 +944,6 @@ Retry:
Assert(entry == NULL);
Assert(slot == NULL);
/* There should be no buffer overflow */
Assert(MyPState->ring_last + readahead_buffer_size >= MyPState->ring_unused);
/*
* If the prefetch queue is full, we need to make room by clearing the
* oldest slot. If the oldest slot holds a buffer that was already
@@ -967,7 +958,7 @@ Retry:
* a prefetch request kind of goes against the principles of
* prefetching)
*/
if (MyPState->ring_last + readahead_buffer_size == MyPState->ring_unused)
if (MyPState->ring_last + readahead_buffer_size - 1 == MyPState->ring_unused)
{
uint64 cleanup_index = MyPState->ring_last;
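The condition change above shifts when the prefetch ring counts as full: with monotonically increasing indices, the ring holds `ring_unused - ring_last` in-flight slots, so comparing against `readahead_buffer_size - 1` triggers cleanup one slot earlier, keeping one slot in reserve. A Rust sketch of the two variants (the surrounding file is C; Rust is used here for consistency with the other notes):

// Ring occupancy with ever-increasing u64 indices.
fn occupied(ring_last: u64, ring_unused: u64) -> u64 {
    ring_unused - ring_last
}

// Variant without `- 1`: cleanup only once every slot is used.
fn full_at_capacity(ring_last: u64, ring_unused: u64, capacity: u64) -> bool {
    occupied(ring_last, ring_unused) == capacity
}

// Variant with `- 1`: cleanup one slot earlier, leaving a slot free.
fn full_with_reserve(ring_last: u64, ring_unused: u64, capacity: u64) -> bool {
    occupied(ring_last, ring_unused) == capacity - 1
}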

View File

@@ -6,7 +6,7 @@ license.workspace = true
[features]
default = []
testing = ["dep:tokio-postgres"]
testing = []
[dependencies]
ahash.workspace = true
@@ -55,7 +55,6 @@ parquet.workspace = true
parquet_derive.workspace = true
pin-project-lite.workspace = true
postgres_backend.workspace = true
postgres-client = { package = "tokio-postgres2", path = "../libs/proxy/tokio-postgres2" }
postgres-protocol = { package = "postgres-protocol2", path = "../libs/proxy/postgres-protocol2" }
pq_proto.workspace = true
prometheus.workspace = true
@@ -82,7 +81,7 @@ subtle.workspace = true
thiserror.workspace = true
tikv-jemallocator.workspace = true
tikv-jemalloc-ctl = { workspace = true, features = ["use_std"] }
tokio-postgres = { workspace = true, optional = true }
tokio-postgres = { package = "tokio-postgres2", path = "../libs/proxy/tokio-postgres2" }
tokio-rustls.workspace = true
tokio-util.workspace = true
tokio = { workspace = true, features = ["signal"] }
@@ -113,11 +112,9 @@ workspace_hack.workspace = true
[dev-dependencies]
camino-tempfile.workspace = true
fallible-iterator.workspace = true
flate2.workspace = true
tokio-tungstenite.workspace = true
pbkdf2 = { workspace = true, features = ["simple", "std"] }
rcgen.workspace = true
rstest.workspace = true
walkdir.workspace = true
rand_distr = "0.4"
tokio-postgres.workspace = true

View File

@@ -66,7 +66,7 @@ pub(super) async fn authenticate(
Ok(ComputeCredentials {
info: creds,
keys: ComputeCredentialKeys::AuthKeys(postgres_client::config::AuthKeys::ScramSha256(
keys: ComputeCredentialKeys::AuthKeys(tokio_postgres::config::AuthKeys::ScramSha256(
scram_keys,
)),
})

View File

@@ -1,8 +1,8 @@
use async_trait::async_trait;
use postgres_client::config::SslMode;
use pq_proto::BeMessage as Be;
use thiserror::Error;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_postgres::config::SslMode;
use tracing::{info, info_span};
use super::ComputeCredentialKeys;
@@ -49,19 +49,13 @@ impl ReportableError for ConsoleRedirectError {
}
}
fn hello_message(
redirect_uri: &reqwest::Url,
session_id: &str,
duration: std::time::Duration,
) -> String {
let formatted_duration = humantime::format_duration(duration).to_string();
fn hello_message(redirect_uri: &reqwest::Url, session_id: &str) -> String {
format!(
concat![
"Welcome to Neon!\n",
"Authenticate by visiting (will expire in {duration}):\n",
"Authenticate by visiting:\n",
" {redirect_uri}{session_id}\n\n",
],
duration = formatted_duration,
redirect_uri = redirect_uri,
session_id = session_id,
)
@@ -124,11 +118,7 @@ async fn authenticate(
};
let span = info_span!("console_redirect", psql_session_id = &psql_session_id);
let greeting = hello_message(
link_uri,
&psql_session_id,
auth_config.console_redirect_confirmation_timeout,
);
let greeting = hello_message(link_uri, &psql_session_id);
// Give user a URL to spawn a new database.
info!(parent: &span, "sending the auth URL to the user");
@@ -161,8 +151,12 @@ async fn authenticate(
// This config should be self-contained, because we won't
// take username or dbname from client's startup message.
let mut config = compute::ConnCfg::new(db_info.host.to_string(), db_info.port);
config.dbname(&db_info.dbname).user(&db_info.user);
let mut config = compute::ConnCfg::new();
config
.host(&db_info.host)
.port(db_info.port)
.dbname(&db_info.dbname)
.user(&db_info.user);
ctx.set_dbname(db_info.dbname.into());
ctx.set_user(db_info.user.into());
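One side of the hunk above includes the confirmation timeout in the greeting via humantime. A sketch of that formatting, assuming the humantime crate shown in the diff:

use std::time::Duration;

fn hello(redirect_uri: &str, session_id: &str, duration: Duration) -> String {
    // humantime renders Durations as human-readable text, e.g. "15m".
    let formatted = humantime::format_duration(duration).to_string();
    format!(
        "Welcome to Neon!\nAuthenticate by visiting (will expire in {formatted}):\n    {redirect_uri}{session_id}\n\n"
    )
}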

View File

@@ -350,13 +350,6 @@ impl JwkCacheEntryLock {
let header = base64::decode_config(header, base64::URL_SAFE_NO_PAD)?;
let header = serde_json::from_slice::<JwtHeader<'_>>(&header)?;
let payloadb = base64::decode_config(payload, base64::URL_SAFE_NO_PAD)?;
let payload = serde_json::from_slice::<JwtPayload<'_>>(&payloadb)?;
if let Some(iss) = &payload.issuer {
ctx.set_jwt_issuer(iss.as_ref().to_owned());
}
let sig = base64::decode_config(signature, base64::URL_SAFE_NO_PAD)?;
let kid = header.key_id.ok_or(JwtError::MissingKeyId)?;
@@ -395,6 +388,9 @@ impl JwkCacheEntryLock {
key => return Err(JwtError::UnsupportedKeyType(key.into())),
};
let payloadb = base64::decode_config(payload, base64::URL_SAFE_NO_PAD)?;
let payload = serde_json::from_slice::<JwtPayload<'_>>(&payloadb)?;
tracing::debug!(?payload, "JWT signature valid with claims");
if let Some(aud) = expected_audience {

View File

@@ -29,7 +29,12 @@ impl LocalBackend {
api: http::Endpoint::new(compute_ctl, http::new_client()),
},
node_info: NodeInfo {
config: ConnCfg::new(postgres_addr.ip().to_string(), postgres_addr.port()),
config: {
let mut cfg = ConnCfg::new();
cfg.host(&postgres_addr.ip().to_string());
cfg.port(postgres_addr.port());
cfg
},
// TODO(conrad): make this better reflect compute info rather than endpoint info.
aux: MetricsAuxInfo {
endpoint_id: EndpointIdTag::get_interner().get_or_intern("local"),

View File

@@ -11,8 +11,8 @@ pub use console_redirect::ConsoleRedirectBackend;
pub(crate) use console_redirect::ConsoleRedirectError;
use ipnet::{Ipv4Net, Ipv6Net};
use local::LocalBackend;
use postgres_client::config::AuthKeys;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_postgres::config::AuthKeys;
use tracing::{debug, info, warn};
use crate::auth::credentials::check_peer_addr_is_in_list;

View File

@@ -227,7 +227,7 @@ pub(crate) async fn validate_password_and_exchange(
};
Ok(sasl::Outcome::Success(ComputeCredentialKeys::AuthKeys(
postgres_client::config::AuthKeys::ScramSha256(keys),
tokio_postgres::config::AuthKeys::ScramSha256(keys),
)))
}
}

View File

@@ -3,6 +3,14 @@ use std::pin::pin;
use std::sync::Arc;
use anyhow::bail;
use aws_config::environment::EnvironmentVariableCredentialsProvider;
use aws_config::imds::credentials::ImdsCredentialsProvider;
use aws_config::meta::credentials::CredentialsProviderChain;
use aws_config::meta::region::RegionProviderChain;
use aws_config::profile::ProfileFileCredentialsProvider;
use aws_config::provider_config::ProviderConfig;
use aws_config::web_identity_token::WebIdentityTokenCredentialsProvider;
use aws_config::Region;
use futures::future::Either;
use proxy::auth::backend::jwt::JwkCache;
use proxy::auth::backend::{AuthRateLimiter, ConsoleRedirectBackend, MaybeOwned};
@@ -306,7 +314,39 @@ async fn main() -> anyhow::Result<()> {
};
info!("Using region: {}", args.aws_region);
// TODO: untangle the config args
let region_provider =
RegionProviderChain::default_provider().or_else(Region::new(args.aws_region.clone()));
let provider_conf =
ProviderConfig::without_region().with_region(region_provider.region().await);
let aws_credentials_provider = {
// uses "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"
CredentialsProviderChain::first_try("env", EnvironmentVariableCredentialsProvider::new())
// uses "AWS_PROFILE" / `aws sso login --profile <profile>`
.or_else(
"profile-sso",
ProfileFileCredentialsProvider::builder()
.configure(&provider_conf)
.build(),
)
// uses "AWS_WEB_IDENTITY_TOKEN_FILE", "AWS_ROLE_ARN", "AWS_ROLE_SESSION_NAME"
// needed to access remote extensions bucket
.or_else(
"token",
WebIdentityTokenCredentialsProvider::builder()
.configure(&provider_conf)
.build(),
)
// uses imds v2
.or_else("imds", ImdsCredentialsProvider::builder().build())
};
let elasticache_credentials_provider = Arc::new(elasticache::CredentialsProvider::new(
elasticache::AWSIRSAConfig::new(
args.aws_region.clone(),
args.redis_cluster_name,
args.redis_user_id,
),
aws_credentials_provider,
));
let regional_redis_client = match (args.redis_auth_type.as_str(), &args.redis_notifications) {
("plain", redis_url) => match redis_url {
None => {
@@ -321,12 +361,7 @@ async fn main() -> anyhow::Result<()> {
ConnectionWithCredentialsProvider::new_with_credentials_provider(
host.to_string(),
port,
elasticache::CredentialsProvider::new(
args.aws_region,
args.redis_cluster_name,
args.redis_user_id,
)
.await,
elasticache_credentials_provider.clone(),
),
),
(None, None) => {
@@ -482,6 +517,10 @@ async fn main() -> anyhow::Result<()> {
if let Some(metrics_config) = &config.metric_collection {
// TODO: Add gc regardless of the metric collection being enabled.
maintenance_tasks.spawn(usage_metrics::task_main(metrics_config));
client_tasks.spawn(usage_metrics::task_backup(
&metrics_config.backup_metric_collection_config,
cancellation_token.clone(),
));
}
if let Either::Left(auth::Backend::ControlPlane(api, _)) = &auth_backend {
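The chain construction above is standard aws-config layering: each provider is tried in order until one yields credentials. A condensed sketch using the same aws-config APIs, trimmed to the first and last links of the chain:

use aws_config::environment::EnvironmentVariableCredentialsProvider;
use aws_config::imds::credentials::ImdsCredentialsProvider;
use aws_config::meta::credentials::CredentialsProviderChain;

fn credentials_chain() -> CredentialsProviderChain {
    // Static env credentials win if set; otherwise fall through to IMDSv2.
    CredentialsProviderChain::first_try("env", EnvironmentVariableCredentialsProvider::new())
        .or_else("imds", ImdsCredentialsProvider::builder().build())
}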

View File

@@ -3,11 +3,11 @@ use std::sync::Arc;
use dashmap::DashMap;
use ipnet::{IpNet, Ipv4Net, Ipv6Net};
use postgres_client::{CancelToken, NoTls};
use pq_proto::CancelKeyData;
use thiserror::Error;
use tokio::net::TcpStream;
use tokio::sync::Mutex;
use tokio_postgres::{CancelToken, NoTls};
use tracing::{debug, info};
use uuid::Uuid;
@@ -44,7 +44,7 @@ pub(crate) enum CancelError {
IO(#[from] std::io::Error),
#[error("{0}")]
Postgres(#[from] postgres_client::Error),
Postgres(#[from] tokio_postgres::Error),
#[error("rate limit exceeded")]
RateLimit,
@@ -70,7 +70,7 @@ impl ReportableError for CancelError {
impl<P: CancellationPublisher> CancellationHandler<P> {
/// Run async action within an ephemeral session identified by [`CancelKeyData`].
pub(crate) fn get_session(self: Arc<Self>) -> Session<P> {
// HACK: We'd rather get the real backend_pid but postgres_client doesn't
// HACK: We'd rather get the real backend_pid but tokio_postgres doesn't
// expose it and we don't want to do another roundtrip to query
// for it. The client will be able to notice that this is not the
// actual backend_pid, but backend_pid is not used for anything

View File

@@ -6,15 +6,13 @@ use std::time::Duration;
use futures::{FutureExt, TryFutureExt};
use itertools::Itertools;
use once_cell::sync::OnceCell;
use postgres_client::tls::MakeTlsConnect;
use postgres_client::{CancelToken, RawConnection};
use postgres_protocol::message::backend::NoticeResponseBody;
use pq_proto::StartupMessageParams;
use rustls::client::danger::ServerCertVerifier;
use rustls::crypto::ring;
use rustls::pki_types::InvalidDnsNameError;
use thiserror::Error;
use tokio::net::TcpStream;
use tokio_postgres::tls::MakeTlsConnect;
use tracing::{debug, error, info, warn};
use crate::auth::parse_endpoint_param;
@@ -34,9 +32,9 @@ pub const COULD_NOT_CONNECT: &str = "Couldn't connect to compute node";
#[derive(Debug, Error)]
pub(crate) enum ConnectionError {
/// This error doesn't seem to reveal any secrets; for instance,
/// `postgres_client::error::Kind` doesn't contain ip addresses and such.
/// `tokio_postgres::error::Kind` doesn't contain ip addresses and such.
#[error("{COULD_NOT_CONNECT}: {0}")]
Postgres(#[from] postgres_client::Error),
Postgres(#[from] tokio_postgres::Error),
#[error("{COULD_NOT_CONNECT}: {0}")]
CouldNotConnect(#[from] io::Error),
@@ -99,18 +97,18 @@ impl ReportableError for ConnectionError {
}
/// A pair of `ClientKey` & `ServerKey` for `SCRAM-SHA-256`.
pub(crate) type ScramKeys = postgres_client::config::ScramKeys<32>;
pub(crate) type ScramKeys = tokio_postgres::config::ScramKeys<32>;
/// A config for establishing a connection to compute node.
/// Eventually, `postgres_client` will be replaced with something better.
/// Eventually, `tokio_postgres` will be replaced with something better.
/// Newtype allows us to implement methods on top of it.
#[derive(Clone)]
pub(crate) struct ConnCfg(Box<postgres_client::Config>);
#[derive(Clone, Default)]
pub(crate) struct ConnCfg(Box<tokio_postgres::Config>);
/// Creation and initialization routines.
impl ConnCfg {
pub(crate) fn new(host: String, port: u16) -> Self {
Self(Box::new(postgres_client::Config::new(host, port)))
pub(crate) fn new() -> Self {
Self::default()
}
/// Reuse password or auth keys from the other config.
@@ -124,9 +122,13 @@ impl ConnCfg {
}
}
pub(crate) fn get_host(&self) -> Host {
match self.0.get_host() {
postgres_client::config::Host::Tcp(s) => s.into(),
pub(crate) fn get_host(&self) -> Result<Host, WakeComputeError> {
match self.0.get_hosts() {
[tokio_postgres::config::Host::Tcp(s)] => Ok(s.into()),
// we should not have multiple address or unix addresses.
_ => Err(WakeComputeError::BadComputeAddress(
"invalid compute address".into(),
)),
}
}
@@ -156,7 +158,7 @@ impl ConnCfg {
// TODO: This is especially ugly...
if let Some(replication) = params.get("replication") {
use postgres_client::config::ReplicationMode;
use tokio_postgres::config::ReplicationMode;
match replication {
"true" | "on" | "yes" | "1" => {
self.replication_mode(ReplicationMode::Physical);
@@ -178,7 +180,7 @@ impl ConnCfg {
}
impl std::ops::Deref for ConnCfg {
type Target = postgres_client::Config;
type Target = tokio_postgres::Config;
fn deref(&self) -> &Self::Target {
&self.0
@@ -195,7 +197,7 @@ impl std::ops::DerefMut for ConnCfg {
impl ConnCfg {
/// Establish a raw TCP connection to the compute node.
async fn connect_raw(&self, timeout: Duration) -> io::Result<(SocketAddr, TcpStream, &str)> {
use postgres_client::config::Host;
use tokio_postgres::config::Host;
// wrap TcpStream::connect with timeout
let connect_with_timeout = |host, port| {
@@ -220,23 +222,46 @@ impl ConnCfg {
})
};
// We can't reuse connection establishing logic from `postgres_client` here,
// We can't reuse connection establishing logic from `tokio_postgres` here,
// because it has no means for extracting the underlying socket which we
// require for our business.
let port = self.0.get_port();
let host = self.0.get_host();
let mut connection_error = None;
let ports = self.0.get_ports();
let hosts = self.0.get_hosts();
// the ports array is supposed to have 0 entries, 1 entry, or as many entries as in the hosts array
if ports.len() > 1 && ports.len() != hosts.len() {
return Err(io::Error::new(
io::ErrorKind::Other,
format!(
"bad compute config, \
ports and hosts entries' count does not match: {:?}",
self.0
),
));
}
let host = match host {
Host::Tcp(host) => host.as_str(),
};
for (i, host) in hosts.iter().enumerate() {
let port = ports.get(i).or_else(|| ports.first()).unwrap_or(&5432);
let host = match host {
Host::Tcp(host) => host.as_str(),
};
match connect_once(host, port).await {
Ok((sockaddr, stream)) => Ok((sockaddr, stream, host)),
Err(err) => {
warn!("couldn't connect to compute node at {host}:{port}: {err}");
Err(err)
match connect_once(host, *port).await {
Ok((sockaddr, stream)) => return Ok((sockaddr, stream, host)),
Err(err) => {
// We can't throw an error here, as there might be more hosts to try.
warn!("couldn't connect to compute node at {host}:{port}: {err}");
connection_error = Some(err);
}
}
}
Err(connection_error.unwrap_or_else(|| {
io::Error::new(
io::ErrorKind::Other,
format!("bad compute config: {:?}", self.0),
)
}))
}
}
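One side of the hunk above iterates all configured host/port pairs with first-success semantics, surfacing the last error only after every attempt has failed. A synchronous sketch of that shape:

use std::io;
use std::net::{SocketAddr, TcpStream};

fn connect_first(addrs: &[SocketAddr]) -> io::Result<TcpStream> {
    let mut last_err = None;
    for addr in addrs {
        match TcpStream::connect(addr) {
            Ok(stream) => return Ok(stream),  // first success wins
            Err(err) => last_err = Some(err), // keep trying remaining hosts
        }
    }
    Err(last_err
        .unwrap_or_else(|| io::Error::new(io::ErrorKind::Other, "no addresses to try")))
}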
@@ -245,15 +270,13 @@ type RustlsStream = <MakeRustlsConnect as MakeTlsConnect<tokio::net::TcpStream>>
pub(crate) struct PostgresConnection {
/// Socket connected to a compute node.
pub(crate) stream:
postgres_client::maybe_tls_stream::MaybeTlsStream<tokio::net::TcpStream, RustlsStream>,
tokio_postgres::maybe_tls_stream::MaybeTlsStream<tokio::net::TcpStream, RustlsStream>,
/// PostgreSQL connection parameters.
pub(crate) params: std::collections::HashMap<String, String>,
/// Query cancellation token.
pub(crate) cancel_closure: CancelClosure,
/// Labels for proxy's metrics.
pub(crate) aux: MetricsAuxInfo,
/// Notices received from compute after authenticating
pub(crate) delayed_notice: Vec<NoticeResponseBody>,
_guage: NumDbConnectionsGuard<'static>,
}
@@ -299,19 +322,10 @@ impl ConnCfg {
// connect_raw() will not use TLS if sslmode is "disable"
let pause = ctx.latency_timer_pause(crate::metrics::Waiting::Compute);
let connection = self.0.connect_raw(stream, tls).await?;
let (client, connection) = self.0.connect_raw(stream, tls).await?;
drop(pause);
let RawConnection {
stream,
parameters,
delayed_notice,
process_id,
secret_key,
} = connection;
tracing::Span::current().record("pid", tracing::field::display(process_id));
let stream = stream.into_inner();
tracing::Span::current().record("pid", tracing::field::display(client.get_process_id()));
let stream = connection.stream.into_inner();
// TODO: lots of useful info but maybe we can move it elsewhere (eg traces?)
info!(
@@ -320,23 +334,18 @@ impl ConnCfg {
self.0.get_ssl_mode()
);
// This is very ugly but as of now there's no better way to
// extract the connection parameters from tokio-postgres' connection.
// TODO: solve this problem in a more elegant manner (e.g. the new library).
let params = connection.parameters;
// NB: CancelToken is supposed to hold socket_addr, but we use connect_raw.
// Yet another reason to rework the connection establishing code.
let cancel_closure = CancelClosure::new(
socket_addr,
CancelToken {
socket_config: None,
ssl_mode: self.0.get_ssl_mode(),
process_id,
secret_key,
},
vec![],
);
let cancel_closure = CancelClosure::new(socket_addr, client.cancel_token(), vec![]);
let connection = PostgresConnection {
stream,
params: parameters,
delayed_notice,
params,
cancel_closure,
aux,
_guage: Metrics::get().proxy.db_connections.guard(ctx.protocol()),

View File

@@ -57,7 +57,6 @@ struct RequestContextInner {
application: Option<SmolStr>,
error_kind: Option<ErrorKind>,
pub(crate) auth_method: Option<AuthMethod>,
jwt_issuer: Option<String>,
success: bool,
pub(crate) cold_start_info: ColdStartInfo,
pg_options: Option<StartupMessageParams>,
@@ -80,7 +79,6 @@ pub(crate) enum AuthMethod {
ScramSha256,
ScramSha256Plus,
Cleartext,
Jwt,
}
impl Clone for RequestContext {
@@ -102,7 +100,6 @@ impl Clone for RequestContext {
application: inner.application.clone(),
error_kind: inner.error_kind,
auth_method: inner.auth_method.clone(),
jwt_issuer: inner.jwt_issuer.clone(),
success: inner.success,
rejected: inner.rejected,
cold_start_info: inner.cold_start_info,
@@ -151,7 +148,6 @@ impl RequestContext {
application: None,
error_kind: None,
auth_method: None,
jwt_issuer: None,
success: false,
rejected: None,
cold_start_info: ColdStartInfo::Unknown,
@@ -250,11 +246,6 @@ impl RequestContext {
this.auth_method = Some(auth_method);
}
pub(crate) fn set_jwt_issuer(&self, jwt_issuer: String) {
let mut this = self.0.try_lock().expect("should not deadlock");
this.jwt_issuer = Some(jwt_issuer);
}
pub fn has_private_peer_addr(&self) -> bool {
self.0
.try_lock()

View File

@@ -87,8 +87,6 @@ pub(crate) struct RequestData {
branch: Option<String>,
pg_options: Option<String>,
auth_method: Option<&'static str>,
jwt_issuer: Option<String>,
error: Option<&'static str>,
/// Success is counted if we form a HTTP response with sql rows inside
/// Or if we make it to proxy_pass
@@ -140,9 +138,7 @@ impl From<&RequestContextInner> for RequestData {
super::AuthMethod::ScramSha256 => "scram_sha_256",
super::AuthMethod::ScramSha256Plus => "scram_sha_256_plus",
super::AuthMethod::Cleartext => "cleartext",
super::AuthMethod::Jwt => "jwt",
}),
jwt_issuer: value.jwt_issuer.clone(),
protocol: value.protocol.as_str(),
region: value.region,
error: value.error_kind.as_ref().map(|e| e.to_metric_label()),
@@ -523,7 +519,6 @@ mod tests {
branch: Some(hex::encode(rng.gen::<[u8; 16]>())),
pg_options: None,
auth_method: None,
jwt_issuer: None,
protocol: ["tcp", "ws", "http"][rng.gen_range(0..3)],
region: "us-east-1",
error: None,
@@ -604,15 +599,15 @@ mod tests {
assert_eq!(
file_stats,
[
(1313105, 3, 6000),
(1313094, 3, 6000),
(1313153, 3, 6000),
(1313110, 3, 6000),
(1313246, 3, 6000),
(1313083, 3, 6000),
(1312877, 3, 6000),
(1313112, 3, 6000),
(438020, 1, 2000)
(1312632, 3, 6000),
(1312621, 3, 6000),
(1312680, 3, 6000),
(1312637, 3, 6000),
(1312773, 3, 6000),
(1312610, 3, 6000),
(1312404, 3, 6000),
(1312639, 3, 6000),
(437848, 1, 2000)
]
);
@@ -644,11 +639,11 @@ mod tests {
assert_eq!(
file_stats,
[
(1204324, 5, 10000),
(1204048, 5, 10000),
(1204349, 5, 10000),
(1204334, 5, 10000),
(1204588, 5, 10000)
(1203465, 5, 10000),
(1203189, 5, 10000),
(1203490, 5, 10000),
(1203475, 5, 10000),
(1203729, 5, 10000)
]
);
@@ -673,15 +668,15 @@ mod tests {
assert_eq!(
file_stats,
[
(1313105, 3, 6000),
(1313094, 3, 6000),
(1313153, 3, 6000),
(1313110, 3, 6000),
(1313246, 3, 6000),
(1313083, 3, 6000),
(1312877, 3, 6000),
(1313112, 3, 6000),
(438020, 1, 2000)
(1312632, 3, 6000),
(1312621, 3, 6000),
(1312680, 3, 6000),
(1312637, 3, 6000),
(1312773, 3, 6000),
(1312610, 3, 6000),
(1312404, 3, 6000),
(1312639, 3, 6000),
(437848, 1, 2000)
]
);
@@ -718,7 +713,7 @@ mod tests {
// files are smaller than the size threshold, but they took too long to fill so were flushed early
assert_eq!(
file_stats,
[(658014, 2, 3001), (657728, 2, 3000), (657524, 2, 2999)]
[(657696, 2, 3001), (657410, 2, 3000), (657206, 2, 2999)]
);
tmpdir.close().unwrap();

View File

@@ -5,6 +5,7 @@ use std::sync::Arc;
use futures::TryFutureExt;
use thiserror::Error;
use tokio_postgres::config::SslMode;
use tokio_postgres::Client;
use tracing::{error, info, info_span, warn, Instrument};
@@ -160,11 +161,11 @@ impl MockControlPlane {
}
async fn do_wake_compute(&self) -> Result<NodeInfo, WakeComputeError> {
let mut config = compute::ConnCfg::new(
self.endpoint.host_str().unwrap_or("localhost").to_owned(),
self.endpoint.port().unwrap_or(5432),
);
config.ssl_mode(postgres_client::config::SslMode::Disable);
let mut config = compute::ConnCfg::new();
config
.host(self.endpoint.host_str().unwrap_or("localhost"))
.port(self.endpoint.port().unwrap_or(5432))
.ssl_mode(SslMode::Disable);
let node = NodeInfo {
config,

View File

@@ -6,8 +6,8 @@ use std::time::Duration;
use ::http::header::AUTHORIZATION;
use ::http::HeaderName;
use futures::TryFutureExt;
use postgres_client::config::SslMode;
use tokio::time::Instant;
use tokio_postgres::config::SslMode;
use tracing::{debug, info, info_span, warn, Instrument};
use super::super::messages::{ControlPlaneErrorMessage, GetRoleSecret, WakeCompute};
@@ -241,8 +241,8 @@ impl NeonControlPlaneClient {
// Don't set anything but host and port! This config will be cached.
// We'll set username and such later using the startup message.
// TODO: add more type safety (in progress).
let mut config = compute::ConnCfg::new(host.to_owned(), port);
config.ssl_mode(SslMode::Disable); // TLS is not configured on compute nodes.
let mut config = compute::ConnCfg::new();
config.host(host).port(port).ssl_mode(SslMode::Disable); // TLS is not configured on compute nodes.
let node = NodeInfo {
config,

View File

@@ -84,7 +84,7 @@ pub(crate) trait ReportableError: fmt::Display + Send + 'static {
fn get_error_kind(&self) -> ErrorKind;
}
impl ReportableError for postgres_client::error::Error {
impl ReportableError for tokio_postgres::error::Error {
fn get_error_kind(&self) -> ErrorKind {
if self.as_db_error().is_some() {
ErrorKind::Postgres

View File

@@ -1,10 +1,10 @@
use std::convert::TryFrom;
use std::sync::Arc;
use postgres_client::tls::MakeTlsConnect;
use rustls::pki_types::ServerName;
use rustls::ClientConfig;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_postgres::tls::MakeTlsConnect;
mod private {
use std::future::Future;
@@ -12,9 +12,9 @@ mod private {
use std::pin::Pin;
use std::task::{Context, Poll};
use postgres_client::tls::{ChannelBinding, TlsConnect};
use rustls::pki_types::ServerName;
use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};
use tokio_postgres::tls::{ChannelBinding, TlsConnect};
use tokio_rustls::client::TlsStream;
use tokio_rustls::TlsConnector;
@@ -59,7 +59,7 @@ mod private {
pub struct RustlsStream<S>(TlsStream<S>);
impl<S> postgres_client::tls::TlsStream for RustlsStream<S>
impl<S> tokio_postgres::tls::TlsStream for RustlsStream<S>
where
S: AsyncRead + AsyncWrite + Unpin,
{

View File

@@ -86,7 +86,7 @@ impl ConnectMechanism for TcpMechanism<'_> {
node_info: &control_plane::CachedNodeInfo,
timeout: time::Duration,
) -> Result<PostgresConnection, Self::Error> {
let host = node_info.config.get_host();
let host = node_info.config.get_host()?;
let permit = self.locks.get_permit(&host).await?;
permit.release_result(node_info.connect(ctx, timeout).await)
}

View File

@@ -384,13 +384,11 @@ pub(crate) async fn prepare_client_connection<P>(
// The new token (cancel_key_data) will be sent to the client.
let cancel_key_data = session.enable_query_cancellation(node.cancel_closure.clone());
// Forward all deferred notices to the client.
for notice in &node.delayed_notice {
stream.write_message_noflush(&Be::Raw(b'N', notice.as_bytes()))?;
}
// Forward all postgres connection params to the client.
// Right now the implementation is very hacky and inefficent (ideally,
// we don't need an intermediate hashmap), but at least it should be correct.
for (name, value) in &node.params {
// TODO: Theoretically, this could result in a big pile of params...
stream.write_message_noflush(&Be::ParameterStatus {
name: name.as_bytes(),
value: value.as_bytes(),

View File

@@ -31,9 +31,9 @@ impl CouldRetry for io::Error {
}
}
impl CouldRetry for postgres_client::error::DbError {
impl CouldRetry for tokio_postgres::error::DbError {
fn could_retry(&self) -> bool {
use postgres_client::error::SqlState;
use tokio_postgres::error::SqlState;
matches!(
self.code(),
&SqlState::CONNECTION_FAILURE
@@ -43,9 +43,9 @@ impl CouldRetry for postgres_client::error::DbError {
)
}
}
impl ShouldRetryWakeCompute for postgres_client::error::DbError {
impl ShouldRetryWakeCompute for tokio_postgres::error::DbError {
fn should_retry_wake_compute(&self) -> bool {
use postgres_client::error::SqlState;
use tokio_postgres::error::SqlState;
// Here are errors that happen after the user has successfully authenticated to the database.
// TODO: there are pgbouncer errors that should be retried, but they are not listed here.
!matches!(
@@ -61,21 +61,21 @@ impl ShouldRetryWakeCompute for postgres_client::error::DbError {
}
}
impl CouldRetry for postgres_client::Error {
impl CouldRetry for tokio_postgres::Error {
fn could_retry(&self) -> bool {
if let Some(io_err) = self.source().and_then(|x| x.downcast_ref()) {
io::Error::could_retry(io_err)
} else if let Some(db_err) = self.source().and_then(|x| x.downcast_ref()) {
postgres_client::error::DbError::could_retry(db_err)
tokio_postgres::error::DbError::could_retry(db_err)
} else {
false
}
}
}
impl ShouldRetryWakeCompute for postgres_client::Error {
impl ShouldRetryWakeCompute for tokio_postgres::Error {
fn should_retry_wake_compute(&self) -> bool {
if let Some(db_err) = self.source().and_then(|x| x.downcast_ref()) {
postgres_client::error::DbError::should_retry_wake_compute(db_err)
tokio_postgres::error::DbError::should_retry_wake_compute(db_err)
} else {
// likely an IO error. Possible the compute has shutdown and the
// cache is stale.
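The retry traits above classify DbErrors by SQLSTATE. A sketch of that classification over raw five-character codes (the codes below are standard PostgreSQL SQLSTATEs chosen for illustration; the proxy's exact allow/deny sets are longer than what the hunk shows):

// Transient connectivity SQLSTATEs are worth retrying; anything else is not.
fn could_retry(sqlstate: &str) -> bool {
    matches!(
        sqlstate,
        "08000" // connection_exception
            | "08006" // connection_failure
            | "08001" // sqlclient_unable_to_establish_sqlconnection
            | "57P01" // admin_shutdown
    )
}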

View File

@@ -8,9 +8,9 @@ use std::fmt::Debug;
use bytes::{Bytes, BytesMut};
use futures::{SinkExt, StreamExt};
use postgres_client::tls::TlsConnect;
use postgres_protocol::message::frontend;
use tokio::io::{AsyncReadExt, DuplexStream};
use tokio_postgres::tls::TlsConnect;
use tokio_util::codec::{Decoder, Encoder};
use super::*;
@@ -158,8 +158,8 @@ async fn scram_auth_disable_channel_binding() -> anyhow::Result<()> {
Scram::new("password").await?,
));
let _client_err = postgres_client::Config::new("test".to_owned(), 5432)
.channel_binding(postgres_client::config::ChannelBinding::Disable)
let _client_err = tokio_postgres::Config::new()
.channel_binding(tokio_postgres::config::ChannelBinding::Disable)
.user("user")
.dbname("db")
.password("password")
@@ -175,7 +175,7 @@ async fn scram_auth_disable_channel_binding() -> anyhow::Result<()> {
async fn scram_auth_prefer_channel_binding() -> anyhow::Result<()> {
connect_failure(
Intercept::None,
postgres_client::config::ChannelBinding::Prefer,
tokio_postgres::config::ChannelBinding::Prefer,
)
.await
}
@@ -185,7 +185,7 @@ async fn scram_auth_prefer_channel_binding() -> anyhow::Result<()> {
async fn scram_auth_prefer_channel_binding_intercept() -> anyhow::Result<()> {
connect_failure(
Intercept::Methods,
postgres_client::config::ChannelBinding::Prefer,
tokio_postgres::config::ChannelBinding::Prefer,
)
.await
}
@@ -195,7 +195,7 @@ async fn scram_auth_prefer_channel_binding_intercept() -> anyhow::Result<()> {
async fn scram_auth_prefer_channel_binding_intercept_response() -> anyhow::Result<()> {
connect_failure(
Intercept::SASLResponse,
postgres_client::config::ChannelBinding::Prefer,
tokio_postgres::config::ChannelBinding::Prefer,
)
.await
}
@@ -205,7 +205,7 @@ async fn scram_auth_prefer_channel_binding_intercept_response() -> anyhow::Resul
async fn scram_auth_require_channel_binding() -> anyhow::Result<()> {
connect_failure(
Intercept::None,
postgres_client::config::ChannelBinding::Require,
tokio_postgres::config::ChannelBinding::Require,
)
.await
}
@@ -215,7 +215,7 @@ async fn scram_auth_require_channel_binding() -> anyhow::Result<()> {
async fn scram_auth_require_channel_binding_intercept() -> anyhow::Result<()> {
connect_failure(
Intercept::Methods,
postgres_client::config::ChannelBinding::Require,
tokio_postgres::config::ChannelBinding::Require,
)
.await
}
@@ -225,14 +225,14 @@ async fn scram_auth_require_channel_binding_intercept() -> anyhow::Result<()> {
async fn scram_auth_require_channel_binding_intercept_response() -> anyhow::Result<()> {
connect_failure(
Intercept::SASLResponse,
postgres_client::config::ChannelBinding::Require,
tokio_postgres::config::ChannelBinding::Require,
)
.await
}
async fn connect_failure(
intercept: Intercept,
channel_binding: postgres_client::config::ChannelBinding,
channel_binding: tokio_postgres::config::ChannelBinding,
) -> anyhow::Result<()> {
let (server, client, client_config, server_config) = proxy_mitm(intercept).await;
let proxy = tokio::spawn(dummy_proxy(
@@ -241,7 +241,7 @@ async fn connect_failure(
Scram::new("password").await?,
));
let _client_err = postgres_client::Config::new("test".to_owned(), 5432)
let _client_err = tokio_postgres::Config::new()
.channel_binding(channel_binding)
.user("user")
.dbname("db")

View File

@@ -7,13 +7,13 @@ use std::time::Duration;
use anyhow::{bail, Context};
use async_trait::async_trait;
use http::StatusCode;
use postgres_client::config::SslMode;
use postgres_client::tls::{MakeTlsConnect, NoTls};
use retry::{retry_after, ShouldRetryWakeCompute};
use rstest::rstest;
use rustls::crypto::ring;
use rustls::pki_types;
use tokio::io::DuplexStream;
use tokio_postgres::config::SslMode;
use tokio_postgres::tls::{MakeTlsConnect, NoTls};
use super::connect_compute::ConnectMechanism;
use super::retry::CouldRetry;
@@ -204,7 +204,7 @@ async fn handshake_tls_is_enforced_by_proxy() -> anyhow::Result<()> {
let (_, server_config) = generate_tls_config("generic-project-name.localhost", "localhost")?;
let proxy = tokio::spawn(dummy_proxy(client, Some(server_config), NoAuth));
let client_err = postgres_client::Config::new("test".to_owned(), 5432)
let client_err = tokio_postgres::Config::new()
.user("john_doe")
.dbname("earth")
.ssl_mode(SslMode::Disable)
@@ -233,7 +233,7 @@ async fn handshake_tls() -> anyhow::Result<()> {
generate_tls_config("generic-project-name.localhost", "localhost")?;
let proxy = tokio::spawn(dummy_proxy(client, Some(server_config), NoAuth));
let _conn = postgres_client::Config::new("test".to_owned(), 5432)
let (_client, _conn) = tokio_postgres::Config::new()
.user("john_doe")
.dbname("earth")
.ssl_mode(SslMode::Require)
@@ -249,7 +249,7 @@ async fn handshake_raw() -> anyhow::Result<()> {
let proxy = tokio::spawn(dummy_proxy(client, None, NoAuth));
let _conn = postgres_client::Config::new("test".to_owned(), 5432)
let (_client, _conn) = tokio_postgres::Config::new()
.user("john_doe")
.dbname("earth")
.options("project=generic-project-name")
@@ -296,8 +296,8 @@ async fn scram_auth_good(#[case] password: &str) -> anyhow::Result<()> {
Scram::new(password).await?,
));
let _conn = postgres_client::Config::new("test".to_owned(), 5432)
.channel_binding(postgres_client::config::ChannelBinding::Require)
let (_client, _conn) = tokio_postgres::Config::new()
.channel_binding(tokio_postgres::config::ChannelBinding::Require)
.user("user")
.dbname("db")
.password(password)
@@ -320,8 +320,8 @@ async fn scram_auth_disable_channel_binding() -> anyhow::Result<()> {
Scram::new("password").await?,
));
let _conn = postgres_client::Config::new("test".to_owned(), 5432)
.channel_binding(postgres_client::config::ChannelBinding::Disable)
let (_client, _conn) = tokio_postgres::Config::new()
.channel_binding(tokio_postgres::config::ChannelBinding::Disable)
.user("user")
.dbname("db")
.password("password")
@@ -348,7 +348,7 @@ async fn scram_auth_mock() -> anyhow::Result<()> {
.map(char::from)
.collect();
let _client_err = postgres_client::Config::new("test".to_owned(), 5432)
let _client_err = tokio_postgres::Config::new()
.user("user")
.dbname("db")
.password(&password) // no password will match the mocked secret
@@ -546,7 +546,7 @@ impl TestControlPlaneClient for TestConnectMechanism {
fn helper_create_cached_node_info(cache: &'static NodeInfoCache) -> CachedNodeInfo {
let node = NodeInfo {
config: compute::ConnCfg::new("test".to_owned(), 5432),
config: compute::ConnCfg::new(),
aux: MetricsAuxInfo {
endpoint_id: (&EndpointId::from("endpoint")).into(),
project_id: (&ProjectId::from("project")).into(),

View File

@@ -1,14 +1,6 @@
use std::sync::Arc;
use std::time::{Duration, SystemTime};
use aws_config::environment::EnvironmentVariableCredentialsProvider;
use aws_config::imds::credentials::ImdsCredentialsProvider;
use aws_config::meta::credentials::CredentialsProviderChain;
use aws_config::meta::region::RegionProviderChain;
use aws_config::profile::ProfileFileCredentialsProvider;
use aws_config::provider_config::ProviderConfig;
use aws_config::web_identity_token::WebIdentityTokenCredentialsProvider;
use aws_config::Region;
use aws_sdk_iam::config::ProvideCredentials;
use aws_sigv4::http_request::{
self, SignableBody, SignableRequest, SignatureLocation, SigningSettings,
@@ -53,45 +45,12 @@ pub struct CredentialsProvider {
}
impl CredentialsProvider {
pub async fn new(
aws_region: String,
redis_cluster_name: Option<String>,
redis_user_id: Option<String>,
) -> Arc<CredentialsProvider> {
let region_provider =
RegionProviderChain::default_provider().or_else(Region::new(aws_region.clone()));
let provider_conf =
ProviderConfig::without_region().with_region(region_provider.region().await);
let aws_credentials_provider = {
// uses "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"
CredentialsProviderChain::first_try(
"env",
EnvironmentVariableCredentialsProvider::new(),
)
// uses "AWS_PROFILE" / `aws sso login --profile <profile>`
.or_else(
"profile-sso",
ProfileFileCredentialsProvider::builder()
.configure(&provider_conf)
.build(),
)
// uses "AWS_WEB_IDENTITY_TOKEN_FILE", "AWS_ROLE_ARN", "AWS_ROLE_SESSION_NAME"
// needed to access remote extensions bucket
.or_else(
"token",
WebIdentityTokenCredentialsProvider::builder()
.configure(&provider_conf)
.build(),
)
// uses imds v2
.or_else("imds", ImdsCredentialsProvider::builder().build())
};
Arc::new(CredentialsProvider {
config: AWSIRSAConfig::new(aws_region, redis_cluster_name, redis_user_id),
credentials_provider: aws_credentials_provider,
})
pub fn new(config: AWSIRSAConfig, credentials_provider: CredentialsProviderChain) -> Self {
CredentialsProvider {
config,
credentials_provider,
}
}
pub(crate) async fn provide_credentials(&self) -> anyhow::Result<(String, String)> {
let aws_credentials = self
.credentials_provider

View File

@@ -37,9 +37,9 @@ use crate::types::{EndpointId, Host, LOCAL_PROXY_SUFFIX};
pub(crate) struct PoolingBackend {
pub(crate) http_conn_pool: Arc<GlobalConnPool<Send, HttpConnPool<Send>>>,
pub(crate) local_pool: Arc<LocalConnPool<postgres_client::Client>>,
pub(crate) local_pool: Arc<LocalConnPool<tokio_postgres::Client>>,
pub(crate) pool:
Arc<GlobalConnPool<postgres_client::Client, EndpointConnPool<postgres_client::Client>>>,
Arc<GlobalConnPool<tokio_postgres::Client, EndpointConnPool<tokio_postgres::Client>>>,
pub(crate) config: &'static ProxyConfig,
pub(crate) auth_backend: &'static crate::auth::Backend<'static, ()>,
@@ -53,8 +53,6 @@ impl PoolingBackend {
user_info: &ComputeUserInfo,
password: &[u8],
) -> Result<ComputeCredentials, AuthError> {
ctx.set_auth_method(crate::context::AuthMethod::Cleartext);
let user_info = user_info.clone();
let backend = self.auth_backend.as_ref().map(|()| user_info.clone());
let (allowed_ips, maybe_secret) = backend.get_allowed_ips_and_secret(ctx).await?;
@@ -117,8 +115,6 @@ impl PoolingBackend {
user_info: &ComputeUserInfo,
jwt: String,
) -> Result<ComputeCredentials, AuthError> {
ctx.set_auth_method(crate::context::AuthMethod::Jwt);
match &self.auth_backend {
crate::auth::Backend::ControlPlane(console, ()) => {
self.config
@@ -170,7 +166,7 @@ impl PoolingBackend {
conn_info: ConnInfo,
keys: ComputeCredentials,
force_new: bool,
) -> Result<Client<postgres_client::Client>, HttpConnError> {
) -> Result<Client<tokio_postgres::Client>, HttpConnError> {
let maybe_client = if force_new {
debug!("pool: pool is disabled");
None
@@ -256,7 +252,7 @@ impl PoolingBackend {
&self,
ctx: &RequestContext,
conn_info: ConnInfo,
) -> Result<Client<postgres_client::Client>, HttpConnError> {
) -> Result<Client<tokio_postgres::Client>, HttpConnError> {
if let Some(client) = self.local_pool.get(ctx, &conn_info)? {
return Ok(client);
}
@@ -315,7 +311,7 @@ impl PoolingBackend {
));
let pause = ctx.latency_timer_pause(crate::metrics::Waiting::Compute);
let (client, connection) = config.connect(postgres_client::NoTls).await?;
let (client, connection) = config.connect(tokio_postgres::NoTls).await?;
drop(pause);
let pid = client.get_process_id();
@@ -360,7 +356,7 @@ pub(crate) enum HttpConnError {
#[error("pooled connection closed at inconsistent state")]
ConnectionClosedAbruptly(#[from] tokio::sync::watch::error::SendError<uuid::Uuid>),
#[error("could not connection to postgres in compute")]
PostgresConnectionError(#[from] postgres_client::Error),
PostgresConnectionError(#[from] tokio_postgres::Error),
#[error("could not connection to local-proxy in compute")]
LocalProxyConnectionError(#[from] LocalProxyConnError),
#[error("could not parse JWT payload")]
@@ -479,7 +475,7 @@ impl ShouldRetryWakeCompute for LocalProxyConnError {
}
struct TokioMechanism {
pool: Arc<GlobalConnPool<postgres_client::Client, EndpointConnPool<postgres_client::Client>>>,
pool: Arc<GlobalConnPool<tokio_postgres::Client, EndpointConnPool<tokio_postgres::Client>>>,
conn_info: ConnInfo,
conn_id: uuid::Uuid,
@@ -489,7 +485,7 @@ struct TokioMechanism {
#[async_trait]
impl ConnectMechanism for TokioMechanism {
type Connection = Client<postgres_client::Client>;
type Connection = Client<tokio_postgres::Client>;
type ConnectError = HttpConnError;
type Error = HttpConnError;
@@ -499,7 +495,7 @@ impl ConnectMechanism for TokioMechanism {
node_info: &CachedNodeInfo,
timeout: Duration,
) -> Result<Self::Connection, Self::ConnectError> {
let host = node_info.config.get_host();
let host = node_info.config.get_host()?;
let permit = self.locks.get_permit(&host).await?;
let mut config = (*node_info.config).clone();
@@ -509,7 +505,7 @@ impl ConnectMechanism for TokioMechanism {
.connect_timeout(timeout);
let pause = ctx.latency_timer_pause(crate::metrics::Waiting::Compute);
let res = config.connect(postgres_client::NoTls).await;
let res = config.connect(tokio_postgres::NoTls).await;
drop(pause);
let (client, connection) = permit.release_result(res)?;
@@ -549,12 +545,16 @@ impl ConnectMechanism for HyperMechanism {
node_info: &CachedNodeInfo,
timeout: Duration,
) -> Result<Self::Connection, Self::ConnectError> {
let host = node_info.config.get_host();
let host = node_info.config.get_host()?;
let permit = self.locks.get_permit(&host).await?;
let pause = ctx.latency_timer_pause(crate::metrics::Waiting::Compute);
let port = node_info.config.get_port();
let port = *node_info.config.get_ports().first().ok_or_else(|| {
HttpConnError::WakeCompute(WakeComputeError::BadComputeAddress(
"local-proxy port missing on compute address".into(),
))
})?;
let res = connect_http2(&host, port, timeout).await;
drop(pause);
let (client, connection) = permit.release_result(res)?;

View File

@@ -5,11 +5,11 @@ use std::task::{ready, Poll};
use futures::future::poll_fn;
use futures::Future;
use postgres_client::tls::NoTlsStream;
use postgres_client::AsyncMessage;
use smallvec::SmallVec;
use tokio::net::TcpStream;
use tokio::time::Instant;
use tokio_postgres::tls::NoTlsStream;
use tokio_postgres::AsyncMessage;
use tokio_util::sync::CancellationToken;
use tracing::{error, info, info_span, warn, Instrument};
#[cfg(test)]
@@ -58,7 +58,7 @@ pub(crate) fn poll_client<C: ClientInnerExt>(
ctx: &RequestContext,
conn_info: ConnInfo,
client: C,
mut connection: postgres_client::Connection<TcpStream, NoTlsStream>,
mut connection: tokio_postgres::Connection<TcpStream, NoTlsStream>,
conn_id: uuid::Uuid,
aux: MetricsAuxInfo,
) -> Client<C> {

View File

@@ -7,8 +7,8 @@ use std::time::Duration;
use dashmap::DashMap;
use parking_lot::RwLock;
use postgres_client::ReadyForQueryStatus;
use rand::Rng;
use tokio_postgres::ReadyForQueryStatus;
use tracing::{debug, info, Span};
use super::backend::HttpConnError;
@@ -683,7 +683,7 @@ pub(crate) trait ClientInnerExt: Sync + Send + 'static {
fn get_process_id(&self) -> i32;
}
impl ClientInnerExt for postgres_client::Client {
impl ClientInnerExt for tokio_postgres::Client {
fn is_closed(&self) -> bool {
self.is_closed()
}

View File
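For context on the ClientInnerExt changes above: the trait is what lets the pool types stay generic over the concrete client. A minimal sketch of the pattern (DummyClient and PoolEntry are illustrative stand-ins, not proxy types):

// Pool code is written against this trait rather than a concrete client type.
trait ClientInnerExt: Sync + Send + 'static {
    fn is_closed(&self) -> bool;
    fn get_process_id(&self) -> i32;
}

struct DummyClient {
    closed: bool,
}

impl ClientInnerExt for DummyClient {
    fn is_closed(&self) -> bool {
        self.closed
    }
    fn get_process_id(&self) -> i32 {
        0
    }
}

// A pool entry usable with any client implementing the trait.
struct PoolEntry<C: ClientInnerExt> {
    client: C,
}

impl<C: ClientInnerExt> PoolEntry<C> {
    fn usable(&self) -> bool {
        !self.client.is_closed()
    }
}

fn main() {
    let entry = PoolEntry { client: DummyClient { closed: false } };
    assert!(entry.usable());
}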

@@ -1,6 +1,6 @@
use postgres_client::types::{Kind, Type};
use postgres_client::Row;
use serde_json::{Map, Value};
use tokio_postgres::types::{Kind, Type};
use tokio_postgres::Row;
//
// Convert json non-string types to strings, so that they can be passed to Postgres
@@ -61,7 +61,7 @@ fn json_array_to_pg_array(value: &Value) -> Option<String> {
#[derive(Debug, thiserror::Error)]
pub(crate) enum JsonConversionError {
#[error("internal error compute returned invalid data: {0}")]
AsTextError(postgres_client::Error),
AsTextError(tokio_postgres::Error),
#[error("parse int error: {0}")]
ParseIntError(#[from] std::num::ParseIntError),
#[error("parse float error: {0}")]

View File

@@ -22,13 +22,13 @@ use indexmap::IndexMap;
use jose_jwk::jose_b64::base64ct::{Base64UrlUnpadded, Encoding};
use p256::ecdsa::{Signature, SigningKey};
use parking_lot::RwLock;
use postgres_client::tls::NoTlsStream;
use postgres_client::types::ToSql;
use postgres_client::AsyncMessage;
use serde_json::value::RawValue;
use signature::Signer;
use tokio::net::TcpStream;
use tokio::time::Instant;
use tokio_postgres::tls::NoTlsStream;
use tokio_postgres::types::ToSql;
use tokio_postgres::AsyncMessage;
use tokio_util::sync::CancellationToken;
use tracing::{debug, error, info, info_span, warn, Instrument};
@@ -164,7 +164,7 @@ pub(crate) fn poll_client<C: ClientInnerExt>(
ctx: &RequestContext,
conn_info: ConnInfo,
client: C,
mut connection: postgres_client::Connection<TcpStream, NoTlsStream>,
mut connection: tokio_postgres::Connection<TcpStream, NoTlsStream>,
key: SigningKey,
conn_id: uuid::Uuid,
aux: MetricsAuxInfo,
@@ -280,7 +280,7 @@ pub(crate) fn poll_client<C: ClientInnerExt>(
)
}
impl ClientInnerCommon<postgres_client::Client> {
impl ClientInnerCommon<tokio_postgres::Client> {
pub(crate) async fn set_jwt_session(&mut self, payload: &[u8]) -> Result<(), HttpConnError> {
if let ClientDataEnum::Local(local_data) = &mut self.data {
local_data.jti += 1;

View File

@@ -11,12 +11,12 @@ use http_body_util::{BodyExt, Full};
use hyper::body::Incoming;
use hyper::http::{HeaderName, HeaderValue};
use hyper::{header, HeaderMap, Request, Response, StatusCode};
use postgres_client::error::{DbError, ErrorPosition, SqlState};
use postgres_client::{GenericClient, IsolationLevel, NoTls, ReadyForQueryStatus, Transaction};
use pq_proto::StartupMessageParamsBuilder;
use serde::Serialize;
use serde_json::Value;
use tokio::time::{self, Instant};
use tokio_postgres::error::{DbError, ErrorPosition, SqlState};
use tokio_postgres::{GenericClient, IsolationLevel, NoTls, ReadyForQueryStatus, Transaction};
use tokio_util::sync::CancellationToken;
use tracing::{debug, error, info};
use typed_json::json;
@@ -139,6 +139,9 @@ fn get_conn_info(
headers: &HeaderMap,
tls: Option<&TlsConfig>,
) -> Result<ConnInfoWithAuth, ConnInfoError> {
// HTTP only uses cleartext (for now and likely always)
ctx.set_auth_method(crate::context::AuthMethod::Cleartext);
let connection_string = headers
.get(&CONN_STRING)
.ok_or(ConnInfoError::InvalidHeader(&CONN_STRING))?
@@ -361,7 +364,7 @@ pub(crate) enum SqlOverHttpError {
#[error("invalid isolation level")]
InvalidIsolationLevel,
#[error("{0}")]
Postgres(#[from] postgres_client::Error),
Postgres(#[from] tokio_postgres::Error),
#[error("{0}")]
JsonConversion(#[from] JsonConversionError),
#[error("{0}")]
@@ -986,7 +989,7 @@ async fn query_to_json<T: GenericClient>(
// Manually drain the stream into a vector to leave row_stream hanging
// around to get a command tag. Also check that the response is not too
// big.
let mut rows: Vec<postgres_client::Row> = Vec::new();
let mut rows: Vec<tokio_postgres::Row> = Vec::new();
while let Some(row) = row_stream.next().await {
let row = row?;
*current_size += row.body_len();
@@ -1063,13 +1066,13 @@ async fn query_to_json<T: GenericClient>(
}
enum Client {
Remote(conn_pool_lib::Client<postgres_client::Client>),
Local(conn_pool_lib::Client<postgres_client::Client>),
Remote(conn_pool_lib::Client<tokio_postgres::Client>),
Local(conn_pool_lib::Client<tokio_postgres::Client>),
}
enum Discard<'a> {
Remote(conn_pool_lib::Discard<'a, postgres_client::Client>),
Local(conn_pool_lib::Discard<'a, postgres_client::Client>),
Remote(conn_pool_lib::Discard<'a, tokio_postgres::Client>),
Local(conn_pool_lib::Discard<'a, tokio_postgres::Client>),
}
impl Client {
@@ -1080,7 +1083,7 @@ impl Client {
}
}
fn inner(&mut self) -> (&mut postgres_client::Client, Discard<'_>) {
fn inner(&mut self) -> (&mut tokio_postgres::Client, Discard<'_>) {
match self {
Client::Remote(client) => {
let (c, d) = client.inner();

View File

@@ -1,18 +1,19 @@
//! Periodically collect proxy consumption metrics
//! and push them to a HTTP endpoint.
use std::borrow::Cow;
use std::convert::Infallible;
use std::pin::pin;
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;
use anyhow::{bail, Context};
use anyhow::Context;
use async_compression::tokio::write::GzipEncoder;
use bytes::Bytes;
use chrono::{DateTime, Datelike, Timelike, Utc};
use consumption_metrics::{idempotency_key, Event, EventChunk, EventType, CHUNK_SIZE};
use dashmap::mapref::entry::Entry;
use dashmap::DashMap;
use futures::future::select;
use once_cell::sync::Lazy;
use remote_storage::{GenericRemoteStorage, RemotePath, TimeoutOrCancel};
use serde::{Deserialize, Serialize};
@@ -22,7 +23,7 @@ use tracing::{error, info, instrument, trace, warn};
use utils::backoff;
use uuid::{NoContext, Timestamp};
use crate::config::MetricCollectionConfig;
use crate::config::{MetricBackupCollectionConfig, MetricCollectionConfig};
use crate::context::parquet::{FAILED_UPLOAD_MAX_RETRIES, FAILED_UPLOAD_WARN_THRESHOLD};
use crate::http;
use crate::intern::{BranchIdInt, EndpointIdInt};
@@ -57,21 +58,55 @@ trait MetricCounterReporter {
fn move_metrics(&self) -> (u64, usize);
}
#[derive(Debug)]
struct MetricBackupCounter {
transmitted: AtomicU64,
opened_connections: AtomicUsize,
}
impl MetricCounterRecorder for MetricBackupCounter {
fn record_egress(&self, bytes: u64) {
self.transmitted.fetch_add(bytes, Ordering::AcqRel);
}
fn record_connection(&self, count: usize) {
self.opened_connections.fetch_add(count, Ordering::AcqRel);
}
}
impl MetricCounterReporter for MetricBackupCounter {
fn get_metrics(&mut self) -> (u64, usize) {
(
*self.transmitted.get_mut(),
*self.opened_connections.get_mut(),
)
}
fn move_metrics(&self) -> (u64, usize) {
(
self.transmitted.swap(0, Ordering::AcqRel),
self.opened_connections.swap(0, Ordering::AcqRel),
)
}
}
#[derive(Debug)]
pub(crate) struct MetricCounter {
transmitted: AtomicU64,
opened_connections: AtomicUsize,
backup: Arc<MetricBackupCounter>,
}
impl MetricCounterRecorder for MetricCounter {
/// Record that some bytes were sent from the proxy to the client
fn record_egress(&self, bytes: u64) {
self.transmitted.fetch_add(bytes, Ordering::Relaxed);
self.transmitted.fetch_add(bytes, Ordering::AcqRel);
self.backup.record_egress(bytes);
}
/// Record that some connections were opened
fn record_connection(&self, count: usize) {
self.opened_connections.fetch_add(count, Ordering::Relaxed);
self.opened_connections.fetch_add(count, Ordering::AcqRel);
self.backup.record_connection(count);
}
}
@@ -84,8 +119,8 @@ impl MetricCounterReporter for MetricCounter {
}
fn move_metrics(&self) -> (u64, usize) {
(
self.transmitted.swap(0, Ordering::Relaxed),
self.opened_connections.swap(0, Ordering::Relaxed),
self.transmitted.swap(0, Ordering::AcqRel),
self.opened_connections.swap(0, Ordering::AcqRel),
)
}
}
@@ -138,11 +173,26 @@ type FastHasher = std::hash::BuildHasherDefault<rustc_hash::FxHasher>;
#[derive(Default)]
pub(crate) struct Metrics {
endpoints: DashMap<Ids, Arc<MetricCounter>, FastHasher>,
backup_endpoints: DashMap<Ids, Arc<MetricBackupCounter>, FastHasher>,
}
impl Metrics {
/// Register a new byte metrics counter for this endpoint
pub(crate) fn register(&self, ids: Ids) -> Arc<MetricCounter> {
let backup = if let Some(entry) = self.backup_endpoints.get(&ids) {
entry.clone()
} else {
self.backup_endpoints
.entry(ids.clone())
.or_insert_with(|| {
Arc::new(MetricBackupCounter {
transmitted: AtomicU64::new(0),
opened_connections: AtomicUsize::new(0),
})
})
.clone()
};
let entry = if let Some(entry) = self.endpoints.get(&ids) {
entry.clone()
} else {
@@ -152,6 +202,7 @@ impl Metrics {
Arc::new(MetricCounter {
transmitted: AtomicU64::new(0),
opened_connections: AtomicUsize::new(0),
backup: backup.clone(),
})
})
.clone()
@@ -176,21 +227,6 @@ pub async fn task_main(config: &MetricCollectionConfig) -> anyhow::Result<Infall
);
let hostname = hostname::get()?.as_os_str().to_string_lossy().into_owned();
// Even if the remote storage is not configured, we still want to clear the metrics.
let storage = if let Some(config) = config
.backup_metric_collection_config
.remote_storage_config
.as_ref()
{
Some(
GenericRemoteStorage::from_config(config)
.await
.context("remote storage init")?,
)
} else {
None
};
let mut prev = Utc::now();
let mut ticker = tokio::time::interval(config.interval);
loop {
@@ -201,8 +237,6 @@ pub async fn task_main(config: &MetricCollectionConfig) -> anyhow::Result<Infall
&USAGE_METRICS.endpoints,
&http_client,
&config.endpoint,
storage.as_ref(),
config.backup_metric_collection_config.chunk_size,
&hostname,
prev,
now,
@@ -249,6 +283,7 @@ fn create_event_chunks<'a>(
now: DateTime<Utc>,
chunk_size: usize,
) -> impl Iterator<Item = EventChunk<'a, Event<Ids, &'static str>>> + 'a {
// Split into chunks of 1000 metrics to avoid exceeding the max request size
metrics_to_send
.chunks(chunk_size)
.map(move |chunk| EventChunk {
@@ -268,14 +303,11 @@ fn create_event_chunks<'a>(
})
}
#[expect(clippy::too_many_arguments)]
#[instrument(skip_all)]
async fn collect_metrics_iteration(
endpoints: &DashMap<Ids, Arc<MetricCounter>, FastHasher>,
client: &http::ClientWithMiddleware,
metric_collection_endpoint: &reqwest::Url,
storage: Option<&GenericRemoteStorage>,
outer_chunk_size: usize,
hostname: &str,
prev: DateTime<Utc>,
now: DateTime<Utc>,
@@ -291,54 +323,17 @@ async fn collect_metrics_iteration(
trace!("no new metrics to send");
}
let cancel = CancellationToken::new();
let path_prefix = create_remote_path_prefix(now);
// Send metrics.
for chunk in create_event_chunks(&metrics_to_send, hostname, prev, now, outer_chunk_size) {
tokio::join!(
upload_main_events_chunked(client, metric_collection_endpoint, &chunk, CHUNK_SIZE),
async {
if let Err(e) = upload_backup_events(storage, &chunk, &path_prefix, &cancel).await {
error!("failed to upload consumption events to remote storage: {e:?}");
}
}
);
}
}
fn create_remote_path_prefix(now: DateTime<Utc>) -> String {
format!(
"year={year:04}/month={month:02}/day={day:02}/{hour:02}:{minute:02}:{second:02}Z",
year = now.year(),
month = now.month(),
day = now.day(),
hour = now.hour(),
minute = now.minute(),
second = now.second(),
)
}
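// E.g. now = 2024-11-30T00:03:30Z produces "year=2024/month=11/day=30/00:03:30Z".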
async fn upload_main_events_chunked(
client: &http::ClientWithMiddleware,
metric_collection_endpoint: &reqwest::Url,
chunk: &EventChunk<'_, Event<Ids, &str>>,
subchunk_size: usize,
) {
// Split into smaller chunks to avoid exceeding the max request size
for subchunk in chunk.events.chunks(subchunk_size).map(|c| EventChunk {
events: Cow::Borrowed(c),
}) {
for chunk in create_event_chunks(&metrics_to_send, hostname, prev, now, CHUNK_SIZE) {
let res = client
.post(metric_collection_endpoint.clone())
.json(&subchunk)
.json(&chunk)
.send()
.await;
let res = match res {
Ok(x) => x,
Err(err) => {
// TODO: retry?
error!("failed to send metrics: {:?}", err);
continue;
}
@@ -346,7 +341,7 @@ async fn upload_main_events_chunked(
if !res.status().is_success() {
error!("metrics endpoint refused the sent metrics: {:?}", res);
for metric in subchunk.events.iter().filter(|e| e.value > (1u64 << 40)) {
for metric in chunk.events.iter().filter(|e| e.value > (1u64 << 40)) {
// Report if the metric value is suspiciously large
warn!("potentially abnormal metric value: {:?}", metric);
}
@@ -354,34 +349,113 @@ async fn upload_main_events_chunked(
}
}
async fn upload_backup_events(
pub async fn task_backup(
backup_config: &MetricBackupCollectionConfig,
cancellation_token: CancellationToken,
) -> anyhow::Result<()> {
info!("metrics backup config: {backup_config:?}");
scopeguard::defer! {
info!("metrics backup has shut down");
}
// Even if the remote storage is not configured, we still want to clear the metrics.
let storage = if let Some(config) = backup_config.remote_storage_config.as_ref() {
Some(
GenericRemoteStorage::from_config(config)
.await
.context("remote storage init")?,
)
} else {
None
};
let mut ticker = tokio::time::interval(backup_config.interval);
let mut prev = Utc::now();
let hostname = hostname::get()?.as_os_str().to_string_lossy().into_owned();
loop {
select(pin!(ticker.tick()), pin!(cancellation_token.cancelled())).await;
let now = Utc::now();
collect_metrics_backup_iteration(
&USAGE_METRICS.backup_endpoints,
storage.as_ref(),
&hostname,
prev,
now,
backup_config.chunk_size,
)
.await;
prev = now;
if cancellation_token.is_cancelled() {
info!("metrics backup has been cancelled");
break;
}
}
Ok(())
}
#[instrument(skip_all)]
async fn collect_metrics_backup_iteration(
endpoints: &DashMap<Ids, Arc<MetricBackupCounter>, FastHasher>,
storage: Option<&GenericRemoteStorage>,
chunk: &EventChunk<'_, Event<Ids, &'static str>>,
path_prefix: &str,
hostname: &str,
prev: DateTime<Utc>,
now: DateTime<Utc>,
chunk_size: usize,
) {
let year = now.year();
let month = now.month();
let day = now.day();
let hour = now.hour();
let minute = now.minute();
let second = now.second();
let cancel = CancellationToken::new();
info!("starting collect_metrics_backup_iteration");
let metrics_to_send = collect_and_clear_metrics(endpoints);
if metrics_to_send.is_empty() {
trace!("no new metrics to send");
}
// Send metrics.
for chunk in create_event_chunks(&metrics_to_send, hostname, prev, now, chunk_size) {
let real_now = Utc::now();
let id = uuid::Uuid::new_v7(Timestamp::from_unix(
NoContext,
real_now.second().into(),
real_now.nanosecond(),
));
let path = format!("year={year:04}/month={month:02}/day={day:02}/{hour:02}:{minute:02}:{second:02}Z_{id}.json.gz");
let remote_path = match RemotePath::from_string(&path) {
Ok(remote_path) => remote_path,
Err(e) => {
error!("failed to create remote path from str {path}: {:?}", e);
continue;
}
};
let res = upload_events_chunk(storage, chunk, &remote_path, &cancel).await;
if let Err(e) = res {
error!(
"failed to upload consumption events to remote storage: {:?}",
e
);
}
}
}
async fn upload_events_chunk(
storage: Option<&GenericRemoteStorage>,
chunk: EventChunk<'_, Event<Ids, &'static str>>,
remote_path: &RemotePath,
cancel: &CancellationToken,
) -> anyhow::Result<()> {
let Some(storage) = storage else {
warn!("no remote storage configured");
error!("no remote storage configured");
return Ok(());
};
let real_now = Utc::now();
let id = uuid::Uuid::new_v7(Timestamp::from_unix(
NoContext,
real_now.second().into(),
real_now.nanosecond(),
));
let path = format!("{path_prefix}_{id}.json.gz");
let remote_path = match RemotePath::from_string(&path) {
Ok(remote_path) => remote_path,
Err(e) => {
bail!("failed to create remote path from str {path}: {:?}", e);
}
};
// TODO: This is async compression from Vec to Vec. Rewrite as byte stream.
// Use sync compression in blocking threadpool.
let data = serde_json::to_vec(chunk).context("serialize metrics")?;
let data = serde_json::to_vec(&chunk).context("serialize metrics")?;
let mut encoder = GzipEncoder::new(Vec::new());
encoder.write_all(&data).await.context("compress metrics")?;
encoder.shutdown().await.context("compress metrics")?;
@@ -390,7 +464,7 @@ async fn upload_backup_events(
|| async {
let stream = futures::stream::once(futures::future::ready(Ok(compressed_data.clone())));
storage
.upload(stream, compressed_data.len(), &remote_path, None, cancel)
.upload(stream, compressed_data.len(), remote_path, None, cancel)
.await
},
TimeoutOrCancel::caused_by_cancel,
@@ -408,12 +482,9 @@ async fn upload_backup_events(
#[cfg(test)]
mod tests {
use std::fs;
use std::io::BufReader;
use std::sync::{Arc, Mutex};
use anyhow::Error;
use camino_tempfile::tempdir;
use chrono::Utc;
use consumption_metrics::{Event, EventChunk};
use http_body_util::BodyExt;
@@ -422,7 +493,6 @@ mod tests {
use hyper::service::service_fn;
use hyper::{Request, Response};
use hyper_util::rt::TokioIo;
use remote_storage::{RemoteStorageConfig, RemoteStorageKind};
use tokio::net::TcpListener;
use url::Url;
@@ -468,34 +538,8 @@ mod tests {
let endpoint = Url::parse(&format!("http://{addr}")).unwrap();
let now = Utc::now();
let storage_test_dir = tempdir().unwrap();
let local_fs_path = storage_test_dir.path().join("usage_metrics");
fs::create_dir_all(&local_fs_path).unwrap();
let storage = GenericRemoteStorage::from_config(&RemoteStorageConfig {
storage: RemoteStorageKind::LocalFs {
local_path: local_fs_path.clone(),
},
timeout: Duration::from_secs(10),
small_timeout: Duration::from_secs(1),
})
.await
.unwrap();
let mut pushed_chunks: Vec<Report> = Vec::new();
let mut stored_chunks: Vec<Report> = Vec::new();
// no counters have been registered
collect_metrics_iteration(
&metrics.endpoints,
&client,
&endpoint,
Some(&storage),
1000,
"foo",
now,
now,
)
.await;
collect_metrics_iteration(&metrics.endpoints, &client, &endpoint, "foo", now, now).await;
let r = std::mem::take(&mut *reports.lock().unwrap());
assert!(r.is_empty());
@@ -507,84 +551,39 @@ mod tests {
});
// the counter should be observed despite 0 egress
collect_metrics_iteration(
&metrics.endpoints,
&client,
&endpoint,
Some(&storage),
1000,
"foo",
now,
now,
)
.await;
collect_metrics_iteration(&metrics.endpoints, &client, &endpoint, "foo", now, now).await;
let r = std::mem::take(&mut *reports.lock().unwrap());
assert_eq!(r.len(), 1);
assert_eq!(r[0].events.len(), 1);
assert_eq!(r[0].events[0].value, 0);
pushed_chunks.extend(r);
// record egress
counter.record_egress(1);
// egress should be observed
collect_metrics_iteration(
&metrics.endpoints,
&client,
&endpoint,
Some(&storage),
1000,
"foo",
now,
now,
)
.await;
collect_metrics_iteration(&metrics.endpoints, &client, &endpoint, "foo", now, now).await;
let r = std::mem::take(&mut *reports.lock().unwrap());
assert_eq!(r.len(), 1);
assert_eq!(r[0].events.len(), 1);
assert_eq!(r[0].events[0].value, 1);
pushed_chunks.extend(r);
// release counter
drop(counter);
// we do not observe the counter
collect_metrics_iteration(
&metrics.endpoints,
&client,
&endpoint,
Some(&storage),
1000,
"foo",
now,
now,
)
.await;
collect_metrics_iteration(&metrics.endpoints, &client, &endpoint, "foo", now, now).await;
let r = std::mem::take(&mut *reports.lock().unwrap());
assert!(r.is_empty());
// counter is unregistered
assert!(metrics.endpoints.is_empty());
let path_prefix = create_remote_path_prefix(now);
for entry in walkdir::WalkDir::new(&local_fs_path)
.into_iter()
.filter_map(|e| e.ok())
{
let path = local_fs_path.join(&path_prefix).to_string();
if entry.path().to_str().unwrap().starts_with(&path) {
let chunk = serde_json::from_reader(flate2::bufread::GzDecoder::new(
BufReader::new(fs::File::open(entry.into_path()).unwrap()),
))
.unwrap();
stored_chunks.push(chunk);
}
}
storage_test_dir.close().ok();
// sort by first event's idempotency key because the order of files is nondeterministic
pushed_chunks.sort_by_cached_key(|c| c.events[0].idempotency_key.clone());
stored_chunks.sort_by_cached_key(|c| c.events[0].idempotency_key.clone());
assert_eq!(pushed_chunks, stored_chunks);
collect_metrics_backup_iteration(&metrics.backup_endpoints, None, "foo", now, now, 1000)
.await;
assert!(!metrics.backup_endpoints.is_empty());
collect_metrics_backup_iteration(&metrics.backup_endpoints, None, "foo", now, now, 1000)
.await;
// backup counter is unregistered after the second iteration
assert!(metrics.backup_endpoints.is_empty());
}
}

View File
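The usage_metrics refactor above gives every endpoint two counters that are drained independently: the main one by the HTTP reporting loop and the backup one by the new task_backup loop, with each recording mirrored into both. A minimal self-contained sketch of that shape (simplified stand-in types, not the proxy's):

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Drained by the backup task via swap(0).
struct BackupCounter {
    transmitted: AtomicU64,
}

// The main counter mirrors every recording into the shared backup counter,
// so the two collection loops can drain independently.
struct Counter {
    transmitted: AtomicU64,
    backup: Arc<BackupCounter>,
}

impl Counter {
    fn record_egress(&self, bytes: u64) {
        self.transmitted.fetch_add(bytes, Ordering::AcqRel);
        self.backup.transmitted.fetch_add(bytes, Ordering::AcqRel);
    }
    // Atomically take the accumulated value, leaving zero behind.
    fn drain(&self) -> u64 {
        self.transmitted.swap(0, Ordering::AcqRel)
    }
}

fn main() {
    let backup = Arc::new(BackupCounter { transmitted: AtomicU64::new(0) });
    let counter = Counter {
        transmitted: AtomicU64::new(0),
        backup: backup.clone(),
    };
    counter.record_egress(42);
    assert_eq!(counter.drain(), 42);
    assert_eq!(backup.transmitted.swap(0, Ordering::AcqRel), 42);
}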

@@ -24,15 +24,9 @@ const KB: usize = 1024;
const MB: usize = 1024 * KB;
const GB: usize = 1024 * MB;
/// Use jemalloc, and configure it to sample allocations for profiles every 1 MB.
/// This mirrors the configuration in bin/safekeeper.rs.
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
#[allow(non_upper_case_globals)]
#[export_name = "malloc_conf"]
pub static malloc_conf: &[u8] = b"prof:true,prof_active:true,lg_prof_sample:20\0";
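// lg_prof_sample is the log2 of the average allocation bytes between samples:
// 2^20 bytes = 1 MiB, matching the doc comment above.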
// Register benchmarks with Criterion.
criterion_group!(
name = benches;

View File

@@ -1,3 +0,0 @@
*TTrace*
*.toolbox/
states/

View File

@@ -1,31 +0,0 @@
---- MODULE MCProposerAcceptorStatic ----
EXTENDS TLC, ProposerAcceptorStatic
\* Augments the spec with model checking constraints.
\* For model checking.
CONSTANTS
max_entries, \* model constraint: max log entries acceptor/proposer can hold
max_term \* model constraint: max allowed term
ASSUME max_entries \in Nat /\ max_term \in Nat
\* Model space constraint.
StateConstraint == \A p \in proposers:
/\ prop_state[p].term <= max_term
/\ Len(prop_state[p].wal) <= max_entries
\* Sets of proposers and acceptors are symmetric because we don't take any
\* actions depending on some concrete proposer/acceptor (like IF p = p1 THEN
\* ...)
ProposerAcceptorSymmetry == Permutations(proposers) \union Permutations(acceptors)
\* Enforce the order of the vars in the error trace with ALIAS.
\* Note that ALIAS is supported only since version 1.8.0, which is pre-release
\* as of writing this.
Alias == [
prop_state |-> prop_state,
acc_state |-> acc_state,
committed |-> committed
]
====

View File

@@ -0,0 +1,34 @@
\* MV CONSTANT declarations
CONSTANT NULL = NULL
CONSTANTS
p1 = p1
p2 = p2
p3 = p3
a1 = a1
a2 = a2
a3 = a3
\* MV CONSTANT definitions
CONSTANT
proposers = {p1, p2}
acceptors = {a1, a2, a3}
\* SYMMETRY definition
SYMMETRY perms
\* CONSTANT definitions
CONSTANT
max_term = 3
CONSTANT
max_entries = 3
\* INIT definition
INIT
Init
\* NEXT definition
NEXT
Next
\* INVARIANT definition
INVARIANT
TypeOk
ElectionSafety
LogIsMonotonic
LogSafety
CommittedNotOverwritten
CHECK_DEADLOCK FALSE

View File

@@ -0,0 +1,363 @@
---- MODULE ProposerAcceptorConsensus ----
\* Differences from current implementation:
\* - unified not-globally-unique epoch & term (node_id)
\* Simplifications:
\* - instant message delivery
\* - feedback is not modeled separately, commit_lsn is updated directly
EXTENDS Integers, Sequences, FiniteSets, TLC
VARIABLES
prop_state, \* prop_state[p] is state of proposer p
acc_state, \* acc_state[a] is state of acceptor a
commit_lsns \* map of acceptor -> commit_lsn
CONSTANT
acceptors,
proposers,
max_entries, \* model constraint: max log entries acceptor/proposer can hold
max_term \* model constraint: max allowed term
CONSTANT NULL
ASSUME max_entries \in Nat /\ max_term \in Nat
\* For specifying symmetry set in manual cfg file, see
\* https://github.com/tlaplus/tlaplus/issues/404
perms == Permutations(proposers) \union Permutations(acceptors)
\********************************************************************************
\* Helpers
\********************************************************************************
Maximum(S) ==
(*************************************************************************)
(* If S is a set of numbers, then this defines Maximum(S) to be the *)
(* maximum of those numbers, or -1 if S is empty. *)
(*************************************************************************)
IF S = {} THEN -1
ELSE CHOOSE n \in S : \A m \in S : n \geq m
\* minimum of numbers in the set, error if set is empty
Minimum(S) ==
CHOOSE min \in S : \A n \in S : min <= n
\* Min of two numbers
Min(a, b) == IF a < b THEN a ELSE b
\* Set of values of function f. XXX is there such a builtin?
FValues(f) == {f[a] : a \in DOMAIN f}
\* Sort of 0 for functions
EmptyF == [x \in {} |-> 42]
IsEmptyF(f) == DOMAIN f = {}
\* Next entry proposer p will push to acceptor a or NULL.
NextEntry(p, a) ==
IF Len(prop_state[p].wal) >= prop_state[p].next_send_lsn[a] THEN
CHOOSE r \in FValues(prop_state[p].wal) : r.lsn = prop_state[p].next_send_lsn[a]
ELSE
NULL
\*****************
NumAccs == Cardinality(acceptors)
\* does acc_set form the quorum?
Quorum(acc_set) == Cardinality(acc_set) >= (NumAccs \div 2 + 1)
\* all quorums of acceptors
Quorums == {subset \in SUBSET acceptors: Quorum(subset)}
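\* E.g. with acceptors = {a1, a2, a3}, NumAccs = 3 and Quorum requires
\* Cardinality(acc_set) >= 3 \div 2 + 1 = 2, so Quorums is the set of all
\* 2- and 3-element subsets.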
\* flush_lsn of acceptor a.
FlushLsn(a) == Len(acc_state[a].wal)
\********************************************************************************
\* Type assertion
\********************************************************************************
\* Defining sets of all possible tuples and using them in TypeOk in the usual
\* all-tuples constructor is not practical because such definitions force
\* TLC to enumerate them, while they are horribly enormous
\* (TLC screams "Attempted to construct a set with too many elements").
\* So instead check types manually.
TypeOk ==
/\ \A p \in proposers:
/\ DOMAIN prop_state[p] = {"state", "term", "votes", "donor_epoch", "vcl", "wal", "next_send_lsn"}
\* in campaign the proposer sends RequestVote and waits for acks;
\* in leader it is elected
/\ prop_state[p].state \in {"campaign", "leader"}
\* 0..max_term should actually be Nat in the unbounded model, but TLC won't
\* swallow it
/\ prop_state[p].term \in 0..max_term
\* votes received
/\ \A voter \in DOMAIN prop_state[p].votes:
/\ voter \in acceptors
/\ prop_state[p].votes[voter] \in [epoch: 0..max_term, flush_lsn: 0..max_entries]
/\ prop_state[p].donor_epoch \in 0..max_term
\* wal is sequence of just <lsn, epoch of author> records
/\ \A i \in DOMAIN prop_state[p].wal:
prop_state[p].wal[i] \in [lsn: 1..max_entries, epoch: 1..max_term]
\* Following the implementation, we skew the original Aurora meaning of this;
\* here it is the lsn of the highest definitely committed record, as set by the
\* proposer when it is elected; it doesn't change after that
/\ prop_state[p].vcl \in 0..max_entries
\* map of acceptor -> next lsn to send
/\ \A a \in DOMAIN prop_state[p].next_send_lsn:
/\ a \in acceptors
/\ prop_state[p].next_send_lsn[a] \in 1..(max_entries + 1)
/\ \A a \in acceptors:
/\ DOMAIN acc_state[a] = {"term", "epoch", "wal"}
/\ acc_state[a].term \in 0..max_term
/\ acc_state[a].epoch \in 0..max_term
/\ \A i \in DOMAIN acc_state[a].wal:
acc_state[a].wal[i] \in [lsn: 1..max_entries, epoch: 1..max_term]
/\ \A a \in DOMAIN commit_lsns:
/\ a \in acceptors
/\ commit_lsns[a] \in 0..max_entries
\********************************************************************************
\* Initial
\********************************************************************************
Init ==
/\ prop_state = [p \in proposers |-> [
state |-> "campaign",
term |-> 1,
votes |-> EmptyF,
donor_epoch |-> 0,
vcl |-> 0,
wal |-> << >>,
next_send_lsn |-> EmptyF
]]
/\ acc_state = [a \in acceptors |-> [
\* there will be no leader in this term; 1 is the first real one
term |-> 0,
epoch |-> 0,
wal |-> << >>
]]
/\ commit_lsns = [a \in acceptors |-> 0]
\********************************************************************************
\* Actions
\********************************************************************************
\* Proposer loses all state.
\* For simplicity (and to reduce the state space), we assume it immediately gets
\* the current state from a quorum q of acceptors, determining the term it will
\* request a vote for.
RestartProposer(p, q) ==
/\ Quorum(q)
/\ LET
new_term == Maximum({acc_state[a].term : a \in q}) + 1
IN
/\ new_term <= max_term
/\ prop_state' = [prop_state EXCEPT ![p].state = "campaign",
![p].term = new_term,
![p].votes = EmptyF,
![p].donor_epoch = 0,
![p].vcl = 0,
![p].wal = << >>,
![p].next_send_lsn = EmptyF]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* Acceptor a immediately votes for proposer p.
Vote(p, a) ==
/\ prop_state[p].state = "campaign"
/\ acc_state[a].term < prop_state[p].term \* main voting condition
/\ acc_state' = [acc_state EXCEPT ![a].term = prop_state[p].term]
/\ LET
vote == [epoch |-> acc_state[a].epoch, flush_lsn |-> FlushLsn(a)]
IN
prop_state' = [prop_state EXCEPT ![p].votes = prop_state[p].votes @@ (a :> vote)]
/\ UNCHANGED <<commit_lsns>>
\* Proposer p gets elected.
BecomeLeader(p) ==
/\ prop_state[p].state = "campaign"
/\ Quorum(DOMAIN prop_state[p].votes)
/\ LET
max_epoch == Maximum({v.epoch : v \in FValues(prop_state[p].votes)})
max_epoch_votes == {v \in FValues(prop_state[p].votes) : v.epoch = max_epoch}
donor == CHOOSE dv \in DOMAIN prop_state[p].votes :
/\ prop_state[p].votes[dv].epoch = max_epoch
/\ \A v \in max_epoch_votes:
prop_state[p].votes[dv].flush_lsn >= v.flush_lsn
max_vote == prop_state[p].votes[donor]
\* Establish lsn to stream from for voters.
\* At some point it seemed like we could regard the log as correct and only
\* append to it if it ends in the max_epoch; however, TLC showed that's not
\* the case; we must always stream from the first non-matching record.
next_send_lsn == [voter \in DOMAIN prop_state[p].votes |-> 1]
IN
\* we fetch the log from the most advanced node (this is a separate
\* roundtrip); make sure the node is still on the same term as us
/\ acc_state[donor].term = prop_state[p].term
/\ prop_state' = [prop_state EXCEPT ![p].state = "leader",
\* fetch the log from donor
![p].wal = acc_state[donor].wal,
![p].donor_epoch = max_epoch,
![p].vcl = max_vote.flush_lsn,
![p].next_send_lsn = next_send_lsn]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* acceptor a learns about elected proposer p's term.
UpdateTerm(p, a) ==
/\ prop_state[p].state = "leader"
/\ acc_state[a].term < prop_state[p].term
/\ acc_state' = [acc_state EXCEPT ![a].term = prop_state[p].term]
/\ UNCHANGED <<prop_state, commit_lsns>>
\* Acceptor a which didn't participate in voting connects to elected proposer p
\* and p sets the streaming point
HandshakeWithLeader(p, a) ==
/\ prop_state[p].state = "leader"
/\ acc_state[a].term = prop_state[p].term
/\ a \notin DOMAIN prop_state[p].next_send_lsn
/\ LET
next_send_lsn == prop_state[p].next_send_lsn @@ (a :> 1)
IN
prop_state' = [prop_state EXCEPT ![p].next_send_lsn = next_send_lsn]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* Append new log entry to elected proposer
NewEntry(p) ==
/\ prop_state[p].state = "leader"
/\ Len(prop_state[p].wal) < max_entries \* model constraint
/\ LET
new_lsn == IF Len(prop_state[p].wal) = 0 THEN
prop_state[p].vcl + 1
ELSE
\* lsn of last record + 1
prop_state[p].wal[Len(prop_state[p].wal)].lsn + 1
new_entry == [lsn |-> new_lsn, epoch |-> prop_state[p].term]
IN
/\ prop_state' = [prop_state EXCEPT ![p].wal = Append(prop_state[p].wal, new_entry)]
/\ UNCHANGED <<acc_state, commit_lsns>>
\* Write entry new_e to log wal, rolling back all higher entries if new_e
\* differs from the existing entry at that lsn.
\* If bump_epoch is TRUE, it means we got the record with lsn=vcl and are going
\* to update the epoch. Truncate the log in this case as well, as we might have
\* a correct <= vcl part with some outdated entries behind it which we want to
\* purge before declaring ourselves recovered. Another way to accomplish this
\* (in a previous commit) is to wait for the first entry from the new epoch
\* before bumping it.
WriteEntry(wal, new_e, bump_epoch) ==
(new_e.lsn :> new_e) @@
\* If wal has entry with such lsn and it is different, truncate all higher log.
IF \/ (new_e.lsn \in DOMAIN wal /\ wal[new_e.lsn] /= new_e)
\/ bump_epoch THEN
SelectSeq(wal, LAMBDA e: e.lsn < new_e.lsn)
ELSE
wal
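\* E.g. writing [lsn |-> 2, epoch |-> 2] over wal = <<[lsn |-> 1, epoch |-> 1],
\* [lsn |-> 2, epoch |-> 1]>> with bump_epoch = FALSE truncates the conflicting
\* suffix and yields <<[lsn |-> 1, epoch |-> 1], [lsn |-> 2, epoch |-> 2]>>.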
\* Try to transfer entry from elected proposer p to acceptor a
TransferEntry(p, a) ==
/\ prop_state[p].state = "leader"
/\ prop_state[p].term = acc_state[a].term
/\ a \in DOMAIN prop_state[p].next_send_lsn
/\ LET
next_e == NextEntry(p, a)
IN
/\ next_e /= NULL
/\ LET
\* Consider bumping epoch if getting this entry recovers the acceptor,
\* that is, we reach the first record at or past the VCL.
new_epoch ==
IF /\ acc_state[a].epoch < prop_state[p].term
/\ next_e.lsn >= prop_state[p].vcl
THEN
prop_state[p].term
ELSE
acc_state[a].epoch
\* Also check whether this entry allows us to advance commit_lsn and,
\* if so, bump it where possible. Modeling this as a separate action
\* significantly bloats the state space (5m vs 15m on max_entries=3,
\* max_term=3), so act immediately.
entry_owners == {o \in acceptors:
/\ o /= a
\* only recovered acceptors advance commit_lsn
/\ acc_state[o].epoch = prop_state[p].term
/\ next_e \in FValues(acc_state[o].wal)} \cup {a}
IN
/\ acc_state' = [acc_state EXCEPT ![a].wal = WriteEntry(acc_state[a].wal, next_e, new_epoch /= acc_state[a].epoch),
![a].epoch = new_epoch]
/\ prop_state' = [prop_state EXCEPT ![p].next_send_lsn[a] =
prop_state[p].next_send_lsn[a] + 1]
/\ commit_lsns' = IF /\ new_epoch = prop_state[p].term
/\ Quorum(entry_owners)
THEN
[acc \in acceptors |->
IF /\ acc \in entry_owners
/\ next_e.lsn > commit_lsns[acc]
THEN
next_e.lsn
ELSE
commit_lsns[acc]]
ELSE
commit_lsns
\*******************************************************************************
\* Final spec
\*******************************************************************************
Next ==
\/ \E q \in Quorums: \E p \in proposers: RestartProposer(p, q)
\/ \E p \in proposers: \E a \in acceptors: Vote(p, a)
\/ \E p \in proposers: BecomeLeader(p)
\/ \E p \in proposers: \E a \in acceptors: UpdateTerm(p, a)
\/ \E p \in proposers: \E a \in acceptors: HandshakeWithLeader(p, a)
\/ \E p \in proposers: NewEntry(p)
\/ \E p \in proposers: \E a \in acceptors: TransferEntry(p, a)
Spec == Init /\ [][Next]_<<prop_state, acc_state, commit_lsns>>
\********************************************************************************
\* Invariants
\********************************************************************************
\* we don't track history, but this property is fairly convincing anyway
ElectionSafety ==
\A p1, p2 \in proposers:
(/\ prop_state[p1].state = "leader"
/\ prop_state[p2].state = "leader"
/\ prop_state[p1].term = prop_state[p2].term) => (p1 = p2)
LogIsMonotonic ==
\A a \in acceptors:
\A i \in DOMAIN acc_state[a].wal: \A j \in DOMAIN acc_state[a].wal:
(i > j) => (/\ acc_state[a].wal[i].lsn > acc_state[a].wal[j].lsn
/\ acc_state[a].wal[i].epoch >= acc_state[a].wal[j].epoch)
\* Main invariant: log under commit_lsn must match everywhere.
LogSafety ==
\A a1 \in acceptors: \A a2 \in acceptors:
LET
common_len == Min(commit_lsns[a1], commit_lsns[a2])
IN
SubSeq(acc_state[a1].wal, 1, common_len) = SubSeq(acc_state[a2].wal, 1, common_len)
\* The next record we are going to push to an acceptor must never overwrite a
\* different committed record.
CommittedNotOverwritten ==
\A p \in proposers: \A a \in acceptors:
(/\ prop_state[p].state = "leader"
/\ prop_state[p].term = acc_state[a].term
/\ a \in DOMAIN prop_state[p].next_send_lsn) =>
LET
next_e == NextEntry(p, a)
IN
(next_e /= NULL) =>
((commit_lsns[a] >= next_e.lsn) => (acc_state[a].wal[next_e.lsn] = next_e))
====

View File

@@ -1,449 +0,0 @@
---- MODULE ProposerAcceptorStatic ----
(*
The protocol is very similar to Raft. The key differences are:
- Leaders (proposers) are separated from storage nodes (acceptors), which has
been already an established way to think about Paxos.
- We don't want to stamp each log record with term, so instead carry around
term histories which are sequences of <term, LSN where term begins> pairs.
As a bonus (and subtlety) this allows the proposer to commit entries from
previous terms without writing new records -- if the acceptor's log is caught
up, updating the term history on it updates last_log_term as well.
*)
\* Model simplifications:
\* - Instant message delivery. Notably, ProposerElected message (TruncateWal action) is not
\* delayed, so we don't attempt to truncate WAL when the same wp already appended something
\* on the acceptor since the common point was calculated (this should be rejected).
\* - old WAL is immediately copied to proposer on its election, without on-demand fetch later.
\* Some ideas on how to break it, to play around and get a feeling for it:
\* - replace Quorums with BadQuorums.
\* - remove 'don't commit entries from previous terms separately' rule in
\* CommitEntries and observe figure 8 from the raft paper.
\* With p2a3t4l4, a 32-step error was found in 1h on 80 cores.
EXTENDS Integers, Sequences, FiniteSets, TLC
VARIABLES
prop_state, \* prop_state[p] is state of proposer p
acc_state, \* acc_state[a] is state of acceptor a
committed, \* bag (set) of ever committed <<term, lsn>> entries
elected_history \* counter for elected terms, see TypeOk for details
CONSTANT
acceptors,
proposers
CONSTANT NULL
\********************************************************************************
\* Helpers
\********************************************************************************
Maximum(S) ==
(*************************************************************************)
(* If S is a set of numbers, then this defines Maximum(S) to be the *)
(* maximum of those numbers, or -1 if S is empty. *)
(*************************************************************************)
IF S = {} THEN -1 ELSE CHOOSE n \in S : \A m \in S : n \geq m
\* minimum of numbers in the set, error if set is empty
Minimum(S) == CHOOSE min \in S : \A n \in S : min <= n
\* Min of two numbers
Min(a, b) == IF a < b THEN a ELSE b
\* Sort of 0 for functions
EmptyF == [x \in {} |-> 42]
IsEmptyF(f) == DOMAIN f = {}
\* Set of values (image) of the function f. Apparently no such builtin.
Range(f) == {f[x] : x \in DOMAIN f}
\* If key k is in function f, map it using l, otherwise insert v. Returns the
\* updated function.
Upsert(f, k, v, l(_)) ==
LET new_val == IF k \in DOMAIN f THEN l(f[k]) ELSE v IN
(k :> new_val) @@ f
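\* E.g. Upsert(EmptyF, 3, 1, LAMBDA c: c + 1) = (3 :> 1), and applying it again
\* to the result gives (3 :> 2); this is how elected_history counts elections
\* per term.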
\*****************
NumAccs == Cardinality(acceptors)
\* does acc_set form the quorum?
Quorum(acc_set) == Cardinality(acc_set) >= (NumAccs \div 2 + 1)
\* all quorums of acceptors
Quorums == {subset \in SUBSET acceptors: Quorum(subset)}
\* For substituting Quorums and seeing what happens.
BadQuorum(acc_set) == Cardinality(acc_set) >= (NumAccs \div 2)
BadQuorums == {subset \in SUBSET acceptors: BadQuorum(subset)}
\* flushLsn (end of WAL, i.e. index of next entry) of acceptor a.
FlushLsn(a) == Len(acc_state[a].wal) + 1
\* Typedefs. Note that TLA+ Nat includes zero.
Terms == Nat
Lsns == Nat
\********************************************************************************
\* Type assertion
\********************************************************************************
\* Defining sets of all possible tuples and using them in TypeOk in the usual
\* all-tuples constructor is not practical because such definitions force
\* TLC to enumerate them, while they are horribly enormous
\* (TLC screams "Attempted to construct a set with too many elements").
\* So instead check types manually.
\* Term history is a sequence of <term, LSN where term begins> pairs.
IsTermHistory(th) ==
\A th_entry \in Range(th): th_entry.term \in Terms /\ th_entry.lsn \in Lsns
IsWal(w) ==
\A i \in DOMAIN w:
/\ i \in Lsns
/\ w[i] \in Terms
TypeOk ==
/\ \A p \in proposers:
\* '_' in field names hinders pretty printing
\* https://github.com/tlaplus/tlaplus/issues/1051
\* so use camel case.
/\ DOMAIN prop_state[p] = {"state", "term", "votes", "termHistory", "wal", "nextSendLsn"}
\* In campaign the proposer sends RequestVote and waits for acks;
\* in leader it is elected.
/\ prop_state[p].state \in {"campaign", "leader"}
\* term for which it will campaign, or won term in leader state
/\ prop_state[p].term \in Terms
\* votes received
/\ \A voter \in DOMAIN prop_state[p].votes: voter \in acceptors
/\ \A vote \in Range(prop_state[p].votes):
/\ IsTermHistory(vote.termHistory)
/\ vote.flushLsn \in Lsns
\* Proposer's term history. Empty while proposer is in "campaign".
/\ IsTermHistory(prop_state[p].termHistory)
\* In the model we identify WAL entries only by <term, LSN> pairs
\* without additional unique id, which is enough for its purposes.
\* It means that with the term history fully modeled, wal becomes
\* redundant as it can be computed from the term history + WAL length.
\* However, we still keep it here and at acceptors as an explicit sequence
\* where the index is the LSN and the value is the term, to avoid an
\* artificial mapping to figure out the real entries. It shouldn't bloat the
\* model much because this doesn't increase the number of distinct states.
/\ IsWal(prop_state[p].wal)
\* Map of acceptor -> next lsn to send. It is set when truncate_wal is
\* done so sending entries is allowed only after that. In the impl TCP
\* ensures this ordering.
/\ \A a \in DOMAIN prop_state[p].nextSendLsn:
/\ a \in acceptors
/\ prop_state[p].nextSendLsn[a] \in Lsns
/\ \A a \in acceptors:
/\ DOMAIN acc_state[a] = {"term", "termHistory", "wal"}
/\ acc_state[a].term \in Terms
/\ IsTermHistory(acc_state[a].termHistory)
/\ IsWal(acc_state[a].wal)
/\ \A c \in committed:
/\ c.term \in Terms
/\ c.lsn \in Lsns
\* elected_history is a retrospective map of term -> number of times it was
\* elected, for use in the ElectionSafetyFull invariant. For the static spec it
\* is fairly convincing that it holds, but with membership change it is less
\* trivial. And as we identify log entries only with <term, lsn>, its importance
\* is quite high, as a violation of log safety might go undetected if
\* election safety is violated. Note though that this is not always the
\* case, i.e. you can imagine (and TLC should find) a schedule where a log
\* safety violation is still detected because two leaders with the same term
\* commit histories which differ in previous terms, so it is not that
\* crucial. Plus, if the spec allows an ElectionSafetyFull violation,
\* ElectionSafety will likely also be violated in some schedules. Neither
\* should it bloat the model too much.
/\ \A term \in DOMAIN elected_history:
/\ term \in Terms
/\ elected_history[term] \in Nat
\********************************************************************************
\* Initial
\********************************************************************************
Init ==
/\ prop_state = [p \in proposers |-> [
state |-> "campaign",
term |-> 1,
votes |-> EmptyF,
termHistory |-> << >>,
wal |-> << >>,
nextSendLsn |-> EmptyF
]]
/\ acc_state = [a \in acceptors |-> [
\* There will be no leader in term zero; 1 is the first real one.
term |-> 0,
\* Again, a leader in term 0 doesn't exist, but we initialize
\* term histories with it to always have a common point in
\* them. Lsn is 1 because TLA+ sequences are indexed from 1
\* (we don't want to truncate WAL out of range).
termHistory |-> << [term |-> 0, lsn |-> 1] >>,
wal |-> << >>
]]
/\ committed = {}
/\ elected_history = EmptyF
\********************************************************************************
\* Actions
\********************************************************************************
\* Proposer loses all state.
\* For simplicity (and to reduce the state space), we assume it immediately gets
\* the current state from a quorum q of acceptors, determining the term it will
\* request a vote for.
RestartProposer(p, q) ==
/\ Quorum(q)
/\ LET new_term == Maximum({acc_state[a].term : a \in q}) + 1 IN
/\ prop_state' = [prop_state EXCEPT ![p].state = "campaign",
![p].term = new_term,
![p].votes = EmptyF,
![p].termHistory = << >>,
![p].wal = << >>,
![p].nextSendLsn = EmptyF]
/\ UNCHANGED <<acc_state, committed, elected_history>>
\* Term history of acceptor a's WAL: the saved one, truncated to contain only
\* entries <= the local FlushLsn.
AcceptorTermHistory(a) ==
SelectSeq(acc_state[a].termHistory, LAMBDA th_entry: th_entry.lsn <= FlushLsn(a))
\* Acceptor a immediately votes for proposer p.
Vote(p, a) ==
/\ prop_state[p].state = "campaign"
/\ acc_state[a].term < prop_state[p].term \* main voting condition
/\ acc_state' = [acc_state EXCEPT ![a].term = prop_state[p].term]
/\ LET
vote == [termHistory |-> AcceptorTermHistory(a), flushLsn |-> FlushLsn(a)]
IN
prop_state' = [prop_state EXCEPT ![p].votes = (a :> vote) @@ prop_state[p].votes]
/\ UNCHANGED <<committed, elected_history>>
\* Get lastLogTerm from term history th.
LastLogTerm(th) == th[Len(th)].term
\* Proposer p gets elected.
BecomeLeader(p) ==
/\ prop_state[p].state = "campaign"
/\ Quorum(DOMAIN prop_state[p].votes)
/\ LET
\* Find acceptor with the highest <last_log_term, lsn> vote.
max_vote_acc ==
CHOOSE a \in DOMAIN prop_state[p].votes:
LET v == prop_state[p].votes[a]
IN \A v2 \in Range(prop_state[p].votes):
/\ LastLogTerm(v.termHistory) >= LastLogTerm(v2.termHistory)
/\ (LastLogTerm(v.termHistory) = LastLogTerm(v2.termHistory) => v.flushLsn >= v2.flushLsn)
max_vote == prop_state[p].votes[max_vote_acc]
prop_th == Append(max_vote.termHistory, [term |-> prop_state[p].term, lsn |-> max_vote.flushLsn])
IN
\* We copy all of the log preceding the proposer's term from the max vote node,
\* so make sure it is still on the same term as us. This is a model
\* simplification which can be removed; in the impl we fetch WAL on demand
\* later, from a safekeeper which has it. Note though that in the case of
\* on-demand fetch we must check on the donor not only that the term matches,
\* but also that truncate_wal has already been done (if it is not max_vote_acc).
/\ acc_state[max_vote_acc].term = prop_state[p].term
/\ prop_state' = [prop_state EXCEPT ![p].state = "leader",
![p].termHistory = prop_th,
![p].wal = acc_state[max_vote_acc].wal
]
/\ elected_history' = Upsert(elected_history, prop_state[p].term, 1, LAMBDA c: c + 1)
/\ UNCHANGED <<acc_state, committed>>
\* Acceptor a learns about elected proposer p's term. In the impl it corresponds
\* to the VoteRequest/VoteResponse exchange when the leader is already elected
\* and is not interested in the vote result.
UpdateTerm(p, a) ==
/\ prop_state[p].state = "leader"
/\ acc_state[a].term < prop_state[p].term
/\ acc_state' = [acc_state EXCEPT ![a].term = prop_state[p].term]
/\ UNCHANGED <<prop_state, committed, elected_history>>
\* Find highest common point (LSN of the first divergent record) in the logs of
\* proposer p and acceptor a. Returns <term, lsn> of the highest common point.
FindHighestCommonPoint(prop_th, acc_th, acc_flush_lsn) ==
LET
\* First find index of the highest common term.
\* It must exist because we initialize th with <0, 1>.
last_common_idx == Maximum({i \in 1..Min(Len(prop_th), Len(acc_th)): prop_th[i].term = acc_th[i].term})
last_common_term == prop_th[last_common_idx].term
\* Now find where it ends at both prop and acc and take min. End of term
\* is the start of the next unless it is the last one; there it is
\* flush_lsn in case of acceptor. In case of proposer it is the current
\* writing position, but it can't be less than flush_lsn, so we
\* take flush_lsn.
acc_common_term_end == IF last_common_idx = Len(acc_th) THEN acc_flush_lsn ELSE acc_th[last_common_idx + 1].lsn
prop_common_term_end == IF last_common_idx = Len(prop_th) THEN acc_flush_lsn ELSE prop_th[last_common_idx + 1].lsn
IN
[term |-> last_common_term, lsn |-> Min(acc_common_term_end, prop_common_term_end)]
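\* E.g. with prop_th = <<[term |-> 0, lsn |-> 1], [term |-> 2, lsn |-> 3]>>,
\* acc_th = <<[term |-> 0, lsn |-> 1], [term |-> 1, lsn |-> 2]>> and
\* acc_flush_lsn = 4, the highest common term is 0; it ends at lsn 2 on the
\* acceptor side and lsn 3 on the proposer side, so the result is
\* [term |-> 0, lsn |-> 2] and truncation keeps only the WAL before lsn 2.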
\* Elected proposer p immediately truncates WAL (and term history) of acceptor a
\* before starting streaming. Establishes nextSendLsn for a.
\*
\* In the impl this happens at each reconnection; here we also allow doing it multiple times.
TruncateWal(p, a) ==
/\ prop_state[p].state = "leader"
/\ acc_state[a].term = prop_state[p].term
/\ LET
hcp == FindHighestCommonPoint(prop_state[p].termHistory, AcceptorTermHistory(a), FlushLsn(a))
next_send_lsn == (a :> hcp.lsn) @@ prop_state[p].nextSendLsn
IN
\* The acceptor persists the full history immediately; reads adjust it to the
\* actually existing WAL with AcceptorTermHistory.
/\ acc_state' = [acc_state EXCEPT ![a].termHistory = prop_state[p].termHistory,
\* note: SubSeq is inclusive, hence -1.
![a].wal = SubSeq(acc_state[a].wal, 1, hcp.lsn - 1)
]
/\ prop_state' = [prop_state EXCEPT ![p].nextSendLsn = next_send_lsn]
/\ UNCHANGED <<committed, elected_history>>
\* Append new log entry to elected proposer
NewEntry(p) ==
/\ prop_state[p].state = "leader"
/\ LET
\* the entry consists only of the term; the index serves as the LSN.
new_entry == prop_state[p].term
IN
/\ prop_state' = [prop_state EXCEPT ![p].wal = Append(prop_state[p].wal, new_entry)]
/\ UNCHANGED <<acc_state, committed, elected_history>>
\* Immediately append next entry from elected proposer to acceptor a.
AppendEntry(p, a) ==
/\ prop_state[p].state = "leader"
/\ acc_state[a].term = prop_state[p].term
/\ a \in DOMAIN prop_state[p].nextSendLsn \* did TruncateWal
/\ prop_state[p].nextSendLsn[a] <= Len(prop_state[p].wal) \* have smth to send
/\ LET
send_lsn == prop_state[p].nextSendLsn[a]
entry == prop_state[p].wal[send_lsn]
\* Since message delivery is instant we don't check that send_lsn follows
\* the last acc record, it must always be true.
IN
/\ prop_state' = [prop_state EXCEPT ![p].nextSendLsn[a] = send_lsn + 1]
/\ acc_state' = [acc_state EXCEPT ![a].wal = Append(acc_state[a].wal, entry)]
/\ UNCHANGED <<committed, elected_history>>
\* LSN where elected proposer p starts writing its records.
PropStartLsn(p) ==
IF prop_state[p].state = "leader" THEN prop_state[p].termHistory[Len(prop_state[p].termHistory)].lsn ELSE NULL
\* Proposer p commits all entries it can using quorum q. Note that unlike
\* will62794/logless-reconfig this allows committing entries from previous terms
\* (when conditions for that are met).
CommitEntries(p, q) ==
/\ prop_state[p].state = "leader"
/\ \A a \in q:
/\ acc_state[a].term = prop_state[p].term
\* nextSendLsn existence means TruncateWal has happened; it ensures the
\* acceptor's WAL (and FlushLsn) come from the proper proposer's history.
\* Alternatively we could compare LastLogTerm here, but that's closer to
\* what we do in the impl (we check flushLsn in AppendResponse, but
\* AppendRequest is processed only if HandleElected handling was good).
/\ a \in DOMAIN prop_state[p].nextSendLsn
\* Now find the LSN present on all the quorum.
/\ LET quorum_lsn == Minimum({FlushLsn(a): a \in q}) IN
\* This is the basic Raft rule of not committing entries from previous
\* terms except along with current term entry (commit them only when
\* quorum recovers, i.e. last_log_term on it reaches leader's term).
/\ quorum_lsn >= PropStartLsn(p)
/\ committed' = committed \cup {[term |-> prop_state[p].wal[lsn], lsn |-> lsn]: lsn \in 1..(quorum_lsn - 1)}
/\ UNCHANGED <<prop_state, acc_state, elected_history>>
\*******************************************************************************
\* Final spec
\*******************************************************************************
Next ==
\/ \E q \in Quorums: \E p \in proposers: RestartProposer(p, q)
\/ \E p \in proposers: \E a \in acceptors: Vote(p, a)
\/ \E p \in proposers: BecomeLeader(p)
\/ \E p \in proposers: \E a \in acceptors: UpdateTerm(p, a)
\/ \E p \in proposers: \E a \in acceptors: TruncateWal(p, a)
\/ \E p \in proposers: NewEntry(p)
\/ \E p \in proposers: \E a \in acceptors: AppendEntry(p, a)
\/ \E q \in Quorums: \E p \in proposers: CommitEntries(p, q)
Spec == Init /\ [][Next]_<<prop_state, acc_state, committed, elected_history>>
\********************************************************************************
\* Invariants
\********************************************************************************
\* Lighter version of ElectionSafetyFull which doesn't require elected_history.
ElectionSafety ==
\A p1, p2 \in proposers:
(/\ prop_state[p1].state = "leader"
/\ prop_state[p2].state = "leader"
/\ prop_state[p1].term = prop_state[p2].term) => (p1 = p2)
\* Single term must never be elected more than once.
ElectionSafetyFull == \A term \in DOMAIN elected_history: elected_history[term] <= 1
\* Log is expected to be monotonic by <term, lsn> comparison. This is not true
\* in variants of multi Paxos, but in Raft (and here) it is.
LogIsMonotonic ==
\A a \in acceptors:
\A i, j \in DOMAIN acc_state[a].wal:
(i > j) => (acc_state[a].wal[i] >= acc_state[a].wal[j])
\* Main invariant: If two entries are committed at the same LSN, they must be
\* the same entry.
LogSafety ==
\A c1, c2 \in committed: (c1.lsn = c2.lsn) => (c1 = c2)
\********************************************************************************
\* Invariants which don't need to hold, but useful for playing/debugging.
\********************************************************************************
\* Limits term of elected proposers
MaxTerm == \A p \in proposers: (prop_state[p].state = "leader" => prop_state[p].term < 2)
MaxAccWalLen == \A a \in acceptors: Len(acc_state[a].wal) < 2
\* Limits the max number of committed entries. That way we can check that we're
\* actually committing something.
MaxCommitLsn == Cardinality(committed) < 2
\* How many records with different terms can be removed in single WAL
\* truncation.
MaxTruncatedTerms ==
\A p \in proposers: \A a \in acceptors:
(/\ prop_state[p].state = "leader"
/\ prop_state[p].term = acc_state[a].term) =>
LET
hcp == FindHighestCommonPoint(prop_state[p].termHistory, AcceptorTermHistory(a), FlushLsn(a))
truncated_lsns == {lsn \in DOMAIN acc_state[a].wal: lsn >= hcp.lsn}
truncated_records_terms == {acc_state[a].wal[lsn]: lsn \in truncated_lsns}
IN
Cardinality(truncated_records_terms) < 2
\* Check that TruncateWal never deletes a committed record.
\* It might seem that this should be an invariant, but it is not.
\* With 5 nodes, it is legit to truncate a record which had been
\* globally committed: e.g. nodes abc can commit a record of term 1 in
\* term 3, and after that the leader of term 2 can delete such a record
\* on d. On 10 cores TLC can find such a trace in ~7 hours.
CommittedNotTruncated ==
\A p \in proposers: \A a \in acceptors:
(/\ prop_state[p].state = "leader"
/\ prop_state[p].term = acc_state[a].term) =>
LET
hcp == FindHighestCommonPoint(prop_state[p].termHistory, AcceptorTermHistory(a), FlushLsn(a))
truncated_lsns == {lsn \in DOMAIN acc_state[a].wal: lsn >= hcp.lsn}
truncated_records == {[term |-> acc_state[a].wal[lsn], lsn |-> lsn]: lsn \in truncated_lsns}
IN
\A r \in truncated_records: r \notin committed
====

Some files were not shown because too many files have changed in this diff.