Compare commits

1 Commit

Author: Christian Schwarz
SHA1: d977ff3386
Message: keep holding layer map lock inside compact_level0_phase1
This is preliminary work for/from #4220 (async `Layer::get_value_reconstruct_data`).

That plan involves turning Timeline::layers into a `tokio::sync::RwLock`.
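
A minimal sketch of what that type change would look like, purely for
illustration (this is not the actual neon code; `LayerMap` is a stand-in for
the real layer map type, and it assumes a tokio dependency with the `sync`
feature):

```rust
// Illustrative sketch only -- not the actual neon code. `LayerMap` stands in
// for the real layer map type.
struct LayerMap;

// Today: a blocking std lock, acquired with `read().unwrap()`.
struct TimelineToday {
    layers: std::sync::RwLock<LayerMap>,
}

// Planned: an async lock, acquired with `read().await` inside async code.
struct TimelinePlanned {
    layers: tokio::sync::RwLock<LayerMap>,
}

fn main() {
    let t = TimelinePlanned {
        layers: tokio::sync::RwLock::new(LayerMap),
    };
    // Outside an async context, `try_read()` still gives synchronous access.
    assert!(t.layers.try_read().is_ok());
}
```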

Before this patch, `compact_level0_phase1` would drop and re-acquire
`Timeline::layers` (`read().unwrap()`).

We can't keep doing that if we switch to an async RwLock, because
tokio::sync::RwLock's read guard is `!Send`. Holding that guard across an
await point makes the compaction task future `!Send`, which doesn't compile:
the compaction task runs inside a `task_mgr` task, and `task_mgr` requires
task futures to be `Send`.
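
To make the constraint concrete, here is a minimal, self-contained sketch
(assumptions: tokio with the `rt`, `macros`, and `sync` features; the names
are made up for illustration and this is not the actual neon / `task_mgr`
code). It uses the std read guard, which is definitely `!Send`, to show why a
guard must not live across an `.await` in a future handed to a spawner that
requires `Send`:

```rust
// Minimal sketch of the Send constraint; names are illustrative, not neon's.
use std::sync::{Arc, RwLock};

async fn compaction_task(layers: Arc<RwLock<Vec<String>>>) {
    let level0: Vec<String> = {
        // The std read guard is !Send, so it must not be alive across an
        // .await; confining it to this block keeps the future Send.
        let guard = layers.read().unwrap();
        (*guard).clone()
    };
    // If `guard` were still alive here, this future would be !Send and the
    // tokio::spawn below (which, like task_mgr, requires Send futures)
    // would fail to compile.
    tokio::task::yield_now().await;
    assert_eq!(level0.len(), 1);
}

#[tokio::main]
async fn main() {
    let layers = Arc::new(RwLock::new(vec!["l0-delta".to_string()]));
    // tokio::spawn requires the future to be Send + 'static.
    tokio::spawn(compaction_task(layers)).await.unwrap();
}
```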

I think the performance impact of holding the lock for longer will be minimal:
the number of L0s should be bounded, so any of the prep work we do in the
section where we previously didn't hold the lock should be fast anyway.

The far worse part of this function in terms of lock contention / performance
is that we hold the read lock while we write out the L1s.

But that's a pre-existing condition, so this patch doesn't change it.
Date: 2023-06-12 12:40:45 +02:00


@@ -3323,7 +3323,6 @@ impl Timeline {
     ) -> Result<CompactLevel0Phase1Result, CompactionError> {
         let layers = self.layers.read().unwrap();
         let mut level0_deltas = layers.get_level0_deltas()?;
-        drop(layers);
         // Only compact if enough layers have accumulated.
         let threshold = self.get_compaction_threshold();
@@ -3444,7 +3443,6 @@ impl Timeline {
         // Determine N largest holes where N is number of compacted layers.
         let max_holes = deltas_to_compact.len();
         let last_record_lsn = self.get_last_record_lsn();
-        let layers = self.layers.read().unwrap(); // Is'n it better to hold original layers lock till here?
         let min_hole_range = (target_file_size / page_cache::PAGE_SZ as u64) as i128;
         let min_hole_coverage_size = 3; // TODO: something more flexible?