Compare commits

...

91 Commits

Author SHA1 Message Date
Stu Hood
147214b0eb Implement GreaterThanOrEqual and LessThanOrEqual to handle boundary cases in Chain. 2025-12-29 15:38:28 -07:00
Stu Hood
865a12f4bb Simpler implementation of first_vals_in_value_range. 2025-12-29 14:51:21 -07:00
Stu Hood
00110312c9 Fix compound filters, and remove a redundant implementation in Chain. 2025-12-29 14:51:17 -07:00
Stu Hood
b2e980b450 Property test for Comparator/ValueRange consistency, and fixes. 2025-12-27 21:03:02 -07:00
Stu Hood
1a701b86bd Remove allow-dead-code annotation. 2025-12-27 17:56:55 -07:00
Stu Hood
ee4538d6c2 test_order_by_u64_prop 2025-12-27 16:04:24 -07:00
Stu Hood
25f1e9aa9f Move ComparableDoc to a reusable location, allowing for pushing directly from ColumnValues into a TopNComputer buffer in some cases. 2025-12-27 14:01:02 -07:00
Stu Hood
6b03b28bac Use a Buffer generic scratch buffer parameter on TopNComputer to allow for internal iteration in SegmentSortKeyComputer. 2025-12-27 13:03:56 -07:00
Stu Hood
7a5241cb83 Update comments. 2025-12-26 17:52:40 -07:00
Stu Hood
0f5e0f6f87 TODO: Audit. 2025-12-26 16:12:15 -07:00
Stu Hood
a654115d9a Squash with Optional. WIP: Still needs work: we are allocating. 2025-12-26 15:18:17 -07:00
Stu Hood
1a17515ead Convert test_order_by_compound_filtering_with_none to a proptest. 2025-12-26 14:25:01 -07:00
Stu Hood
0f1b0ce527 Optimize Optional indexes. TODO: Audit. 2025-12-26 13:01:30 -07:00
Stu Hood
0c920dfc61 Add a ValueRange filter to SegmentSortKeyComputer::segment_sort_keys. 2025-12-26 12:27:11 -07:00
Stu Hood
996fc936f6 Add null handling to first_vals_in_value_range. 2025-12-26 11:12:53 -07:00
Stu Hood
5ff38e1605 WIP: Add ValueRange cases for Comparators. 2025-12-26 11:02:19 -07:00
Stu Hood
e8a4adeedd Replace Column::first_vals with Column::first_vals_in_value_range. 2025-12-25 15:39:18 -07:00
Stu Hood
efc9e585a9 WIP: Add ValueRange::All 2025-12-25 15:16:26 -07:00
Stu Hood
f4252fc184 WIP: Add ValueRange. 2025-12-25 14:53:15 -07:00
Stu Hood
53c067d1f3 Restore laziness in ChainSegmentSortKeyComputer. 2025-12-24 10:39:26 -07:00
Stu Hood
259c1ed965 Isolate accept_sort_key_lazy to ChainSegmentSortKeyComputer. 2025-12-23 17:37:33 -07:00
Stu Hood
1afc432df8 Use an internal buffer in the SegmentSortKeyComputer. 2025-12-23 17:23:10 -07:00
Stu Hood
b8acd3ac94 WIP: Add and use segment_sort_keys to remove dynamic dispatch to the column. 2025-12-23 16:44:50 -07:00
Stu Hood
b5321d2125 Implement laziness for collect_block. 2025-12-23 15:48:36 -07:00
Stu Hood
ad3e2363fe WIP: Add failing test. 2025-12-23 15:48:34 -07:00
Stu Hood
9ec5750c25 Implement collect_block for lazy scorers. 2025-12-23 15:46:41 -07:00
Stu Hood
03f09a2b5b chore: Add support for natural-order-with-none-highest in TopDocs::order_by (#90)
Add `ComparatorEnum::NaturalNoneHigher`, which matches Postgres's `DESC NULLS FIRST` behavior in `TopDocs::order_by`.

Expands comments on `Comparator` implementations to ensure that behavior for `None` is explicit.

Upstream as https://github.com/quickwit-oss/tantivy/pull/2780
2025-12-23 09:15:31 -08:00
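A minimal sketch of the ordering described above, assuming nothing about tantivy's actual `Comparator` trait (the function name is illustrative): `None` ranks above every present value, so a descending sort produces Postgres-style `DESC NULLS FIRST` output.

```rust
use std::cmp::Ordering;

/// Compare Option<T> so that None ranks above any Some(_) value, mirroring
/// "NULLS FIRST" when results are emitted in descending order.
fn natural_none_higher<T: Ord>(a: &Option<T>, b: &Option<T>) -> Ordering {
    match (a, b) {
        (None, None) => Ordering::Equal,
        (None, Some(_)) => Ordering::Greater, // None is "higher" than any value
        (Some(_), None) => Ordering::Less,
        (Some(x), Some(y)) => x.cmp(y),
    }
}

fn main() {
    let mut vals = vec![Some(3), None, Some(10), Some(1), None];
    // Sort descending: the None entries come first, then 10, 3, 1.
    vals.sort_by(|a, b| natural_none_higher(b, a));
    assert_eq!(vals, vec![None, None, Some(10), Some(3), Some(1)]);
}
```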
Stu Hood
9ffe4af096 Fix TopN performance regression.
https://github.com/quickwit-oss/tantivy/pull/2777
2025-12-17 10:43:29 -07:00
Stu Hood
c56ddcb6d7 Add an erased SortKeyComputer to sort on types which are not known until runtime.
https://github.com/quickwit-oss/tantivy/pull/2770
2025-12-17 10:43:29 -07:00
Ming
5b8fff154b fix: overflow in vint buffer (#88) 2025-12-17 10:43:29 -07:00
Mohammad Dashti
ff6ee3a5db fix: post-rebase fixes
- Add missing size_hint module declaration
- Remove test-only export serialize_and_load_u64_based_column_values
- Fix quickwit CI issues
2025-12-10 10:17:28 -08:00
Moe
eda9aa437f fix: boolean query incorrectly dropping documents when AllScorer is present (#84)
Co-authored-by: Stu Hood <stuhood@gmail.com>
2025-12-10 10:17:28 -08:00
Piotr Olszak
538da08eb5 Add polish stemmer (#82)
This commit adds support for Polish language stemming.
The previously used rust-stemmers crate is abandoned and unmaintained, which blocked the addition of new languages. This change addresses a user request for Polish stemming to improve BM25 recall in their use case. The tantivy-stemmers crate is a modern, maintained alternative that also opens the door for supporting many other languages in the future.
- Added the tantivy-stemmers crate as a dependency to the workspace, alongside the existing rust-stemmers dependency (for backward compatibility)
- Introduced an internal enum that can hold an algorithm from either rust-stemmers or tantivy-stemmers
- Added Polish to the main Language enum, mapped to the new tantivy-stemmers implementation
- Updated the token stream to handle both types of stemmers internally
- Added the POLISH variant to the stopwords list
- Existing tests pass
- Added test_pl_tokenizer to verify that the Polish stemmer works correctly
2025-12-10 10:17:28 -08:00
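As an illustration of the dispatch enum described in #82 above: one wrapper type that can hold a stemming algorithm from either backend crate, so the token stream only deals with a single type. The structs and stemming rules below are toy stand-ins, not the real rust-stemmers or tantivy-stemmers APIs.

```rust
struct LegacyStemmer; // stand-in for a rust-stemmers algorithm
struct TantivyStemmer; // stand-in for a tantivy-stemmers algorithm (e.g. Polish)

impl LegacyStemmer {
    fn stem(&self, token: &str) -> String {
        token.trim_end_matches('s').to_string() // toy rule, not a real stemmer
    }
}
impl TantivyStemmer {
    fn stem(&self, token: &str) -> String {
        token.trim_end_matches('i').to_string() // toy rule, not a real stemmer
    }
}

/// The token stream only ever sees this one type, regardless of backend crate.
enum AnyStemmer {
    Legacy(LegacyStemmer),
    Tantivy(TantivyStemmer),
}

impl AnyStemmer {
    fn stem(&self, token: &str) -> String {
        match self {
            AnyStemmer::Legacy(s) => s.stem(token),
            AnyStemmer::Tantivy(s) => s.stem(token),
        }
    }
}

fn main() {
    // Polish maps to the new backend; existing languages keep the old one.
    let polish = AnyStemmer::Tantivy(TantivyStemmer);
    let english = AnyStemmer::Legacy(LegacyStemmer);
    println!("{} / {}", polish.stem("domki"), english.stem("cats"));
}
```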
Moe
7bd5cc5417 fix: fixed integer overflow in ExpUnrolledLinkedList for large datasets (#80) 2025-12-10 10:17:28 -08:00
Moe
5d46137556 feat: Added multiple snippet support (#76)
Adds `SnippetGenerator::snippets` to render multiple snippets in either score or position order.

Additionally: renames the existing `limit` and `offset` arguments to disambiguate between "match" positions (which are concatenated into fragments), and "snippet" positions.

Co-authored-by: Stu Hood <stuhood@gmail.com>
2025-12-10 10:17:28 -08:00
Stu Hood
92c784f697 perf: Optimize TermSet for very large sets of terms. (#75)
* Removes allocation in a bunch of places
* Removes sorting of terms if we're going to use the fast field execution method
* Adds back the (accidentally dropped) cardinality threshold
* Removes `bool` support -- using the posting lists is always more efficient for a `bool`, since there are at most two of them
* More eagerly constructs the term `HashSet` so that it happens once, rather than once per segment
2025-12-10 10:17:28 -08:00
Stu Hood
b3541d10e1 chore: Use smaller merge buffers. (#74)
## What

Reduce the per-segment buffer sizes from 4MB to 512KB.

## Why

#71 moved from buffers which covered the entire file to buffers capped at 4MB. But for merges with very large segment counts, we need to use more conservative buffer sizes. 512KB will still eliminate most posting list reads: posting lists larger than 512KB will skip the buffer.
2025-12-10 10:17:28 -08:00
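A minimal sketch of the buffering policy described in #74 (constants and type names are illustrative, not tantivy's): posting lists that fit under the cap go through a per-segment buffer, while anything larger is read directly.

```rust
/// Illustrative cap mirroring the 512KB per-segment buffer described above.
const MERGE_BUFFER_CAP: usize = 512 * 1024;

enum ReadPlan {
    Buffered { buffer_len: usize },
    Direct,
}

fn plan_read(posting_list_len: usize) -> ReadPlan {
    if posting_list_len <= MERGE_BUFFER_CAP {
        ReadPlan::Buffered { buffer_len: posting_list_len }
    } else {
        // Very large posting lists bypass the buffer entirely.
        ReadPlan::Direct
    }
}

fn main() {
    match plan_read(64 * 1024) {
        ReadPlan::Buffered { buffer_len } => println!("buffered read of {buffer_len} bytes"),
        ReadPlan::Direct => println!("direct read"),
    }
    assert!(matches!(plan_read(4 * 1024 * 1024), ReadPlan::Direct));
}
```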
Stu Hood
7183ac6cbc fix: Use smaller buffers during merging (#71)
`MergeOptimizedInvertedIndexReader` was added in #32 in order to avoid making small reads to our underlying `FileHandle`. It did so by reading the entire content of the posting lists and positions at open time.

As that PR says:
> A likely downside to this approach is that now pg_search will be, indirectly, holding onto a lot of heap-allocated memory that was read from its block storage. Perhaps in the (near) future we can further optimize the new `MergeOptimizedInvertedIndexReader` such that it pages in blocks of a few megabytes at a time, on demand, rather than the whole file.

This PR makes that change. But it additionally removes code that was later added in #47 to borrow individual entries rather than creating `OwnedBytes` for them. I believe that this code was added due to a misunderstanding:

`OwnedBytes` is a total misnomer: the bytes are not "owned": they are immutably borrowed and reference counted. An `OwnedBytes` object can be created for any type which derefs to a slice of bytes, and can be cheaply cloned and sliced. So there is no need to actually borrow _or_ copy the buffer under the `OwnedBytes`. Removing the code that was doing so allows us to safely recreate our buffer without worrying about the lifetimes of buffers that we've handed out.
2025-12-10 10:17:28 -08:00
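For readers unfamiliar with the `OwnedBytes` semantics described above, here is a minimal stand-in built only on the standard library (an illustration of the concept, not tantivy's actual type): a reference-counted byte buffer whose views can be cloned and sliced without copying.

```rust
use std::ops::Range;
use std::sync::Arc;

/// A cheaply cloneable, sliceable view into shared bytes.
#[derive(Clone)]
struct SharedBytes {
    data: Arc<[u8]>,      // the backing buffer, shared via reference counting
    range: Range<usize>,  // the window this view exposes
}

impl SharedBytes {
    fn new(data: Vec<u8>) -> Self {
        let len = data.len();
        SharedBytes { data: Arc::from(data), range: 0..len }
    }

    /// Slicing only narrows the window; no bytes are copied.
    fn slice(&self, range: Range<usize>) -> Self {
        let start = self.range.start + range.start;
        let end = self.range.start + range.end;
        assert!(end <= self.range.end);
        SharedBytes { data: Arc::clone(&self.data), range: start..end }
    }

    fn as_slice(&self) -> &[u8] {
        &self.data[self.range.clone()]
    }
}

fn main() {
    let whole = SharedBytes::new(b"postings-and-positions".to_vec());
    let postings = whole.slice(0..8); // no copy, just a narrower window
    assert_eq!(postings.as_slice(), b"postings");
    assert_eq!(Arc::strong_count(&whole.data), 2); // both views share one buffer
}
```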
Stu Hood
e0476d2eb2 fix: Add support for bool to the fast field TermSet implementation (#70)
Missed in #69.

The `TermSet` fast fields implementation cribbed from `RangeQuery`'s fast fields implementation: ... which also has this bug. Will fix upstream.
2025-12-10 10:17:28 -08:00
Stu Hood
9fe0899934 perf: Implement a TermSet variant which uses fast fields (#69)
The `TermSet` `Query` currently produces one `Scorer`/`DocSet` per matched term by scanning the term dictionary and then consuming posting lists. For very large sets of terms and a fast field, it is faster to scan the fast field column while intersecting with a `HashSet` of (encoded) term values.

Following the pattern set by the two execution modes of `RangeQuery`, this PR introduces a variant of `TermSet` which uses fast fields, and then uses it when there are more than 1024 input terms (an arbitrary threshold!).

Performance is significantly improved for large `TermSet`s of primitives.
2025-12-10 10:17:28 -08:00
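A minimal sketch of the execution-mode choice described in #69 (the threshold constant and helper are illustrative, not tantivy's API): with a large term set, scan the fast-field column and keep the doc ids whose encoded value is in a `HashSet`; with a small one, fall back to the posting-list path.

```rust
use std::collections::HashSet;

/// Arbitrary cutoff, mirroring the "more than 1024 input terms" threshold above.
const FAST_FIELD_THRESHOLD: usize = 1024;

/// Illustrative stand-in for a numeric fast-field column: one encoded value per doc.
fn matching_docs_via_fast_field(column: &[u64], terms: &HashSet<u64>) -> Vec<u32> {
    column
        .iter()
        .enumerate()
        .filter(|(_, value)| terms.contains(value))
        .map(|(doc, _)| doc as u32)
        .collect()
}

fn main() {
    let terms: HashSet<u64> = (0..2000u64).collect();
    let column = vec![5u64, 9999, 150, 4242, 1999];
    if terms.len() > FAST_FIELD_THRESHOLD {
        // Large term set: intersect the column with the HashSet of encoded terms.
        let docs = matching_docs_via_fast_field(&column, &terms);
        assert_eq!(docs, vec![0, 2, 4]);
    } else {
        // Small term set: the posting-list based scorer would be used instead.
    }
}
```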
Stu Hood
aaa5abb7d6 chore: Expose a method to create a segment with a particular id (#68)
In support of https://github.com/paradedb/paradedb/pull/3203
2025-12-10 10:17:28 -08:00
Ming
f8b8fd0321 feat: SnippetGenerator accepts limit/offset (#66) 2025-12-10 10:17:27 -08:00
Eric Ridge
cd878a5c90 fix: support MemoryArena allocations up to 4GB (#62)
A MemoryArena should support allocations up to 4GB and https://github.com/paradedb/tantivy/pull/60 broke this by not accounting for the "max page id" now that pages are 50% of the size they originally were.

This cleans up the code so things stay in sync if we change NUM_BITS_PAGE_ADDR again, and adds a unit test.
2025-12-10 10:17:27 -08:00
Eric Ridge
30c237e895 perf: various optimizations around arenas (#60)
- Use a bitset to track used buckets in the `SharedArenaHashmap`, allowing for more efficient iteration
- Create a global pool for both `MemoryArena` and `IndexingContext`
- Reduce the MemoryArena page size by half (it's now 512KB instead of 1MB)
- Centralize thread pool instances in `SegmentUpdater` so we can avoid creating them if all thread counts are zero
2025-12-10 10:17:27 -08:00
Eric Ridge
b6cd39872b fix: Allow zero indexing & merging threads (#59)
This removes a check against `IndexWriterOptions` which disallowed zero indexing worker threads (`num_worker_threads`).
2025-12-10 10:17:27 -08:00
Stu Hood
c96d801c68 perf: Lazily load in BitpackedCodec (#56)
We would like to be able to lazily load `BitpackedCodec` columns (similar to what 020bdffd61 did for `BlockwiseLinearCodec`), because in the context of `pg_search`, immediately constructing `OwnedBytes` means copying the entire content of the column into memory.

To do so, we expose some (slightly overlapped) block boundaries from `BitUnpacker`, and then lazily load each block when it is requested. Only the `get_val` function uses the cache: `get_row_ids_for_value_range` does not (yet), because it would be necessary to partition the row ids by block, and most of the time consumers using it are already loading reasonably large ranges anyway.

See https://github.com/paradedb/paradedb/pull/2894 for usage. There are a few 2x speedups in the benchmark suite, as well as a 1.8x speedup on a representative customer query. Unfortunately there are also some 13-19% slowdowns on aggregates: it looks like that is because aggregates use `get_vals`, for which the default implementation is to just call `get_val` in a loop.
2025-12-10 10:17:27 -08:00
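A minimal sketch of the on-demand block loading described in #56, using an illustrative block size and a plain byte cache rather than tantivy's `BitUnpacker` block boundaries: each block is read from the backing storage only the first time a value inside it is requested.

```rust
use std::cell::RefCell;

/// Illustrative block size; the real codec exposes its own block boundaries.
const BLOCK_SIZE: usize = 16 * 1024;

struct LazyBlocks<F: Fn(std::ops::Range<usize>) -> Vec<u8>> {
    data_len: usize,
    read: F,                              // e.g. a read from block storage
    cache: RefCell<Vec<Option<Vec<u8>>>>, // one slot per block, filled lazily
}

impl<F: Fn(std::ops::Range<usize>) -> Vec<u8>> LazyBlocks<F> {
    fn new(data_len: usize, read: F) -> Self {
        let blocks = data_len.div_ceil(BLOCK_SIZE);
        LazyBlocks { data_len, read, cache: RefCell::new(vec![None; blocks]) }
    }

    /// Return the byte at `addr`, loading its containing block on first use.
    fn byte(&self, addr: usize) -> u8 {
        let block = addr / BLOCK_SIZE;
        let mut cache = self.cache.borrow_mut();
        let bytes = cache[block].get_or_insert_with(|| {
            let start = block * BLOCK_SIZE;
            (self.read)(start..(start + BLOCK_SIZE).min(self.data_len))
        });
        bytes[addr - block * BLOCK_SIZE]
    }
}

fn main() {
    let backing: Vec<u8> = (0..100_000u32).map(|i| i as u8).collect();
    let column = LazyBlocks::new(backing.len(), |range| backing[range].to_vec());
    assert_eq!(column.byte(40_000), backing[40_000]); // only one block is loaded
}
```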
Stu Hood
7a13e0294d Avoid copying into OwnedBytes when opening a fast field column Dictionary. (#55)
When a fast fields string/bytes `Dictionary` is opened, we currently read the entire dictionary from `FileSlice` -> `OwnedBytes`... and then immediately wrap it back into a `FileSlice`.

Switching to `Dictionary::open` preserves the `FileSlice`, such that only the portions of the `Dictionary` which are actually accessed are read from disk/buffers.
2025-12-10 10:17:27 -08:00
Eric Ridge
20d00701ee perf: lazily open positions file (#54)
Not all queries use positions and it's okay if we (from the perspective of `pg_search`, anyways) defer opening/loading them until they're first needed.

This is probably completely wrong for a mmap-based Directory, but we (again, `pg_search`) decided long ago that we don't care about that use case.

This saves a lot of disk I/O when an index has lots of segments and the query doesn't need positions.

As a drive-by, make sure one Vec has enough space before pushing items to it. This showed up in the profiler, believe it or not.
2025-12-10 10:17:27 -08:00
Eric Ridge
526afc6111 chore: internal API visibility adjustments (#53) 2025-12-10 10:17:27 -08:00
Ming
f9e4a8413b make the directory BufWriter capacity configurable (#52) 2025-12-10 10:17:27 -08:00
Ming
58124bb164 changes to make merging work (#48) 2025-12-10 10:17:27 -08:00
Eric Ridge
176f7e852a perf: remove general overhead during segment merging (#47) 2025-12-10 10:17:27 -08:00
Ming
cfa5f94114 chore: Make some delete-related functions public (#46) 2025-12-10 10:17:26 -08:00
Ming
5e449e7dda feat: SnippetGenerator can handle JSON fields (#42) 2025-12-10 10:17:26 -08:00
Stu Hood
1617459b01 Expose some methods which are necessary to create a streaming version of sorted_ords_to_term_cb. (#43)
See https://github.com/paradedb/paradedb/pull/2612.

We might eventually want that function upstreamed, but there are more changes planned to it for https://github.com/paradedb/paradedb/issues/2619, so doing the expedient thing now.
2025-12-10 10:17:26 -08:00
Ming
0e1a7e213e chore: allow merge_foreground to ignore the store (#40) 2025-12-10 10:17:26 -08:00
Ming
b0660ba196 chore: make some structs pub (#39) 2025-12-10 10:17:26 -08:00
Eric Ridge
936d6af471 feat: ability to directly merge segments in the foreground (#36)
This adds new public-facing (and internal) APIs for being able to merge a list of segments in the foreground, without using any threads.  It's largely a cut-n-paste of the existing background merge code.

For pg_search, this is beneficial because it allows us to merge directly using our `MVCCDirectory` rather than going through the `ChannelDirectory`, which has quite a bit of overhead.
2025-12-10 10:17:26 -08:00
Eric Ridge
2560de3a01 feat: IndexWriter::wait_merging_threads() return Err on merge failure (#34) 2025-12-10 10:17:26 -08:00
Eric Ridge
75a8384c2b feat: remove Directory::reconsider_merge_policy() and add other niceties to Directory API (#33)
This removes `Directory::reconsider_merge_policy()`.  After reconsidering this, it's better to make this decision ahead of time.

Also adds a `Directory::log(message: &str)` function along with passing a `Directory` reference to `MergePolicy::compute_merge_candidates()`.

It also adds `#[derive(Debug)]` and `#[derive(Serialize)]` to a couple of structs that can benefit.
2025-12-10 10:17:26 -08:00
Eric Ridge
5b6da9123c feat: introduce a MergeOptimizedInvertedIndexReader (#32)
This is probably a bit of a misnomer as it's really a "PgSearchOptimizedInvertedIndexReaderForMerge".

What we've done here is copied `InvertedIndexReader` and internally adjusted it to hold onto the complete `OwnedBytes` of the index's postings and positions.  One or two other small touch points were required to make other internal APIs compatible with this, but they don't otherwise change functionality or I/O patterns.

`MergeOptimizedInvertedIndexReader` does change I/O patterns, however, in that the merge process now does two (potentially) very large reads when it obtains the new "merge optimized inverted index reader" for each segment.  This changes access patterns such that all the reads happen up-front rather than term-by-term as the merge proceeds.

A likely downside to this approach is that now pg_search will be, indirectly, holding onto a lot of heap-allocated memory that was read from its block storage.  Perhaps in the (near) future we can further optimize the new `MergeOptimizedInvertedIndexReader` such that it pages in blocks of a few megabytes at a time, on demand, rather than the whole file.

---

Some unit tests were also updated to resolve compilation problems introduced by PR https://github.com/paradedb/tantivy/pull/31 that for some reason didn't show up in CI.  #weird
2025-12-10 10:17:26 -08:00
Eric Ridge
8b7db36c99 feat: Add Directory::wants_cancel() function (#31)
This adds a function named `wants_cancel() -> bool` to the `Directory` trait.  It allows a Directory implementation to indicate that it would like Tantivy to cancel an operation.

Right now, this function is only queried at key points of index merging, but it _could_ be used in other places.  Technically, segment merging is the only "black box" in tantivy that users don't otherwise have the direct ability to control.

The default implementation of `wants_cancel()` returns false, so there's no fear of default tantivy spuriously cancelling a merge.

The cancels happen "cleanly": if `wants_cancel()` returns true, an `Err(TantivyError::Cancelled)` is returned from the calling function at that point, and the error result will be propagated up the stack.  No panics are raised.
2025-12-10 10:17:26 -08:00
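A minimal sketch of the cooperative-cancellation hook described in #31. The trait, error type, and merge loop below are illustrative; only the shape follows the commit: a `wants_cancel()` with a `false` default, polled at key points, turning into an error that propagates rather than a panic.

```rust
#[derive(Debug)]
enum MergeError {
    Cancelled,
}

trait Directory {
    fn wants_cancel(&self) -> bool {
        false // default: never cancel
    }
}

struct NeverCancel;
impl Directory for NeverCancel {}

fn merge_segments(dir: &dyn Directory, segments: &[&str]) -> Result<(), MergeError> {
    for segment in segments {
        if dir.wants_cancel() {
            // Return an error that callers propagate up the stack; no panic.
            return Err(MergeError::Cancelled);
        }
        println!("merging {segment}");
    }
    Ok(())
}

fn main() {
    merge_segments(&NeverCancel, &["seg-a", "seg-b"]).unwrap();
}
```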
Eric Ridge
eabe589814 feat: ability to assign a panic handler to a Directory (#30)
Tantivy creates thread pools for some of its background work, specifically committing and merging.

It's possible that if one of the thread workers panics, rayon will simply abort the process.  This is terrible for pg_search, as that takes down the entire Postgres cluster.

These changes allow a Directory to assign a panic handler that gets called in such cases, which allows pg_search to gracefully roll back the current transaction while presenting the panic message to the user.
2025-12-10 10:17:26 -08:00
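A minimal sketch of the panic-handler hook described in #30 (the `MyDirectory` type and handler signature are assumptions for illustration): background work catches a worker panic and reports the message through a registered callback instead of letting the process abort.

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::Arc;

/// Callback invoked with the panic message; registered on the directory.
type PanicHandler = Arc<dyn Fn(&str) + Send + Sync>;

struct MyDirectory {
    panic_handler: Option<PanicHandler>,
}

fn run_background_task(dir: &MyDirectory, task: impl FnOnce()) {
    if let Err(payload) = panic::catch_unwind(AssertUnwindSafe(task)) {
        // Extract a readable message from the panic payload.
        let message = payload
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .or_else(|| payload.downcast_ref::<String>().cloned())
            .unwrap_or_else(|| "unknown panic".to_string());
        if let Some(handler) = &dir.panic_handler {
            handler(&message); // e.g. roll back the transaction, surface the message
        }
    }
}

fn main() {
    let dir = MyDirectory {
        panic_handler: Some(Arc::new(|msg| eprintln!("background panic: {msg}"))),
    };
    run_background_task(&dir, || panic!("merge worker failed"));
}
```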
Eric Ridge
65d3574dfd feat: make garbage collection opt-out (#28) 2025-12-10 10:17:26 -08:00
Ming
26d623c411 Change default index precision to microseconds (#27) 2025-12-10 10:17:25 -08:00
Eric Ridge
0552dddeb9 feat: delete docs by (SegmentId, DocId) (#26)
This teaches tantivy how to "directly" delete a document in a segment.
    
Our use case from pg_search is that we already know the segment_id and doc_id, so it's way more efficient for us to delete docs through our `ambulkdelete()` routine.

It avoids doing a search, and all the stuff around that, for each of our "ctid" terms that we want to delete.
2025-12-10 10:17:25 -08:00
Eric Ridge
1b88bb61f9 feat: Add ability to construct a SegmentId from raw bytes (#24)
This allows a `SegmentId` to be constructed from a `[u8; 16]` byte array.  

It also adds an `impl Default for SegmentId`, which defaults to all nulls.
2025-12-10 10:17:25 -08:00
Eric Ridge
16da31cf06 perf: make the footer fixed width (#23)
Prior to this commit, the Footer tantivy serialized at the end of every file included a JSON blob that could be of arbitrary size.

This changes the Footer to be exactly 24 bytes (6 u32s), making sure to keep the `crc` value.  The other change we make here is to not actually read/validate the footer bytes when opening a file.

From pg_search's perspective, this is unnecessary Postgres buffer cache I/O and it increases index/segment opening overhead, which pg_search incurs often, for each query.

Two tests are ignored here, as they test physical index files stored in the repo that this change completely breaks.
2025-12-10 10:17:25 -08:00
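A minimal sketch of a fixed-width footer like the one described in #23: exactly 24 bytes, i.e. six little-endian u32s. The field names other than `crc` are illustrative; only the size and the round-trip behavior mirror the commit.

```rust
#[derive(Debug, PartialEq)]
struct Footer {
    version: u32,
    crc: u32,
    reserved: [u32; 4], // illustrative padding fields to reach six u32s
}

impl Footer {
    const NUM_BYTES: usize = 24;

    fn to_bytes(&self) -> [u8; Self::NUM_BYTES] {
        let mut out = [0u8; Self::NUM_BYTES];
        let words = [
            self.version,
            self.crc,
            self.reserved[0],
            self.reserved[1],
            self.reserved[2],
            self.reserved[3],
        ];
        for (chunk, word) in out.chunks_exact_mut(4).zip(words) {
            chunk.copy_from_slice(&word.to_le_bytes());
        }
        out
    }

    fn from_bytes(bytes: [u8; Self::NUM_BYTES]) -> Footer {
        let word = |i: usize| u32::from_le_bytes(bytes[i * 4..i * 4 + 4].try_into().unwrap());
        Footer {
            version: word(0),
            crc: word(1),
            reserved: [word(2), word(3), word(4), word(5)],
        }
    }
}

fn main() {
    let footer = Footer { version: 7, crc: 0xDEAD_BEEF, reserved: [0; 4] };
    let bytes = footer.to_bytes();
    assert_eq!(bytes.len(), 24); // a file's last 24 bytes are always the footer
    assert_eq!(Footer::from_bytes(bytes), footer);
}
```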
Eric Ridge
658b9b22e0 perf: remove some fast fields loading overhead (#22)
This removes some overhead that the profiler exposed.  In the case I was testing, fast fields no longer show up in the profile at all.

I also renamed `BlockWithLength` to `BlockWithData`.
2025-12-10 10:17:25 -08:00
Eric Ridge
95661fba30 perf: teach SegmentReader to lazily open/read its various SegmentComponents (#20)
This overhauls `SegmentReader` to put its various components behind `OnceLock`s such that they can be opened and read on their first use, as opposed to when a SegmentReader is constructed -- which is once for every segment when an Index is opened.

This has a negative impact on one of Tantivy's expectations: that an existing SegmentReader can still read from physical files that were deleted by a merge.  That is no longer true, since the segment's physical files aren't opened until needed.  As such, I've `#[ignore]`'d six tests that expose this problem.

From our (pg_search's) side of things, we don't really have physical files and don't need to rely on the filesystem/kernel to allow reading unlinked files that are still open.

Overall, this cuts down a significant number of disk reads during pg_search's query planning.  With my test data it goes from 808 individual reads totalling 999,799 bytes to 18 reads totalling 814,514 bytes.

This reduces the time it takes to plan a simple query from about 1.4ms to 0.436ms -- roughly a 3.2x improvement.
2025-12-10 10:17:25 -08:00
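A minimal sketch of the lazy-open pattern described in #20 (the component type and reader struct are illustrative): each segment component sits behind a `OnceLock` and is opened only the first time something asks for it, rather than when the index is opened.

```rust
use std::sync::OnceLock;

/// Illustrative stand-in for one of the segment components (fast fields, store, ...).
struct FastFields(Vec<u8>);

struct LazySegmentReader {
    segment_name: String,
    fast_fields: OnceLock<FastFields>,
}

impl LazySegmentReader {
    fn new(segment_name: &str) -> Self {
        LazySegmentReader {
            segment_name: segment_name.to_string(),
            fast_fields: OnceLock::new(),
        }
    }

    fn fast_fields(&self) -> &FastFields {
        self.fast_fields.get_or_init(|| {
            // In a real reader this would be a disk/buffer read; it now happens
            // on first use rather than when the index is opened.
            println!("opening fast fields for {}", self.segment_name);
            FastFields(vec![0u8; 16])
        })
    }
}

fn main() {
    let reader = LazySegmentReader::new("seg-0001");
    // No I/O has happened yet; the component is opened on first access only.
    let ff = reader.fast_fields();
    assert_eq!(ff.0.len(), 16);
    reader.fast_fields(); // second call reuses the already-opened component
}
```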
Philippe Noël
ddd169b77c chore: Don't do codecov (#21) 2025-12-10 10:17:25 -08:00
Eric Ridge
bb4c4b8522 perf: push FileSlices down through most of fast fields (#19)
This PR modifies internal API signatures and implementation details so that `FileSlice`s are passed down into the innards of (at least) the `BlockwiseLinearCodec`.  This allows tantivy to defer dereferencing large slices of bytes when reading numeric fast fields, and instead dereference only the slice of bytes it needs for any given compressed Block.

The motivation here is for external `Directory` implementations where it's not exactly efficient to dereference large slices of bytes.
2025-12-10 10:17:25 -08:00
Neil Hansen
ffa558e3a9 fix: tests in ci (#18) 2025-12-10 10:17:25 -08:00
Neil Hansen
a35e3dcb5a suppress warnings after rebase 2025-12-10 10:17:25 -08:00
Neil Hansen
1e3998fbad implement fuzzy scoring in sstable 2025-12-10 10:17:25 -08:00
Neil Hansen
f3df079d6b chore: point tantivy-fst to paradedb fork to fix regex 2025-12-10 10:17:24 -08:00
Ming Ying
f7c0335857 comments 2025-12-10 10:17:24 -08:00
Ming Ying
2584325e0d add reconsider_merge_policy to directory 2025-12-10 10:17:24 -08:00
Eric B. Ridge
1f2c2d0c8a fix compilation warnings on rust v1.83 2025-12-10 10:17:24 -08:00
Eric Ridge
91db6909d1 Add a payload: &mut (dyn Any + '_) argument to Directory::save_meta() (#17) 2025-12-10 10:17:24 -08:00
Ming Ying
7639b47615 small changes to make MVCC work with delete 2025-12-10 10:17:24 -08:00
Ming Ying
8b55f0f355 Make DeleteMeta pub 2025-12-10 10:17:24 -08:00
Ming Ying
8d29f19110 make save_metas provide previous metas 2025-12-10 10:17:24 -08:00
Ming Ying
d742d3277a undo changes to segment_updater.rs 2025-12-10 10:17:24 -08:00
Eric B. Ridge
3afe3714a2 no pgrx, please 2025-12-10 10:17:24 -08:00
Ming Ying
67ea8e53a8 quickwit compiles 2025-12-10 10:17:24 -08:00
Ming Ying
3adc85c017 Directory trait can read/write meta/managed 2025-12-10 10:17:24 -08:00
Ming
6bb3a22c98 expose AddOperation and with_max_doc (#7) 2025-12-10 10:17:23 -08:00
Ming
5503cfb8ef Fix managed paths (#5) 2025-12-10 10:17:23 -08:00
Alexander Alexandrov
ea0e88ae4b feat: implement TokenFilter for Option<F> (#4) 2025-12-10 10:17:23 -08:00
Neil Hansen
dee2dd3f21 Use Levenshtein distance to score documents in fuzzy term queries 2025-12-10 10:17:19 -08:00
155 changed files with 11835 additions and 1451 deletions

View File

@@ -1,29 +0,0 @@
name: Coverage
on:
push:
branches: [main]
# Ensures that we cancel running jobs for the same PR / same workflow.
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
coverage:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust
run: rustup toolchain install nightly-2024-07-01 --profile minimal --component llvm-tools-preview
- uses: Swatinem/rust-cache@v2
- uses: taiki-e/install-action@cargo-llvm-cov
- name: Generate code coverage
run: cargo +nightly-2024-07-01 llvm-cov --all-features --workspace --doctests --lcov --output-path lcov.info
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
continue-on-error: true
with:
token: ${{ secrets.CODECOV_TOKEN }} # not required for public repos
files: lcov.info
fail_ci_if_error: true

View File

@@ -76,7 +76,9 @@ jobs:
profile: minimal
override: true
- uses: taiki-e/install-action@nextest
- uses: taiki-e/install-action@v2
with:
tool: 'nextest'
- uses: Swatinem/rust-cache@v2
- name: Run tests

.gitignore (vendored): 5 changes
View File

@@ -6,7 +6,6 @@ target
target/debug
.vscode
target/release
Cargo.lock
benchmark
.DS_Store
*.bk
@@ -15,3 +14,7 @@ trace.dat
cargo-timing*
control
variable
# for `sample record -p`
profile.json
profile.json.gz

Cargo.lock (generated, new file): 2361 changes

File diff suppressed because it is too large.

View File

@@ -21,11 +21,11 @@ byteorder = "1.4.3"
crc32fast = "1.3.2"
once_cell = "1.10.0"
regex = { version = "1.5.5", default-features = false, features = [
"std",
"unicode",
"std",
"unicode",
] }
aho-corasick = "1.0"
tantivy-fst = "0.5"
tantivy-fst = { git = "https://github.com/paradedb/fst.git" }
memmap2 = { version = "0.9.0", optional = true }
lz4_flex = { version = "0.11", default-features = false, optional = true }
zstd = { version = "0.13", optional = true, default-features = false }
@@ -38,9 +38,10 @@ levenshtein_automata = "0.2.1"
uuid = { version = "1.0.0", features = ["v4", "serde"] }
crossbeam-channel = "0.5.4"
rust-stemmers = "1.2.0"
tantivy-stemmers = { version = "0.4.0", default-features = false, features = ["polish_yarovoy"] }
downcast-rs = "2.0.1"
bitpacking = { version = "0.9.2", default-features = false, features = [
"bitpacker4x",
"bitpacker4x",
] }
census = "0.4.2"
rustc-hash = "2.0.0"
@@ -48,6 +49,10 @@ thiserror = "2.0.1"
htmlescape = "0.3.1"
fail = { version = "0.5.0", optional = true }
time = { version = "0.3.35", features = ["serde-well-known"] }
# TODO: We have integer wrappers with PartialOrd, and a misfeature of
# `deranged` causes inference to fail in a bunch of cases. See
# https://github.com/jhpratt/deranged/issues/18#issuecomment-2746844093
deranged = "=0.4.0"
smallvec = "1.8.0"
rayon = "1.5.2"
lru = "0.12.0"
@@ -69,6 +74,7 @@ hyperloglogplus = { version = "0.4.1", features = ["const-loop"] }
futures-util = { version = "0.3.28", optional = true }
futures-channel = { version = "0.3.28", optional = true }
fnv = "1.0.7"
parking_lot = "0.12.4"
typetag = "0.2.21"
[target.'cfg(windows)'.dependencies]
@@ -88,7 +94,7 @@ more-asserts = "0.3.1"
rand_distr = "0.4.3"
time = { version = "0.3.10", features = ["serde-well-known", "macros"] }
postcard = { version = "1.0.4", features = [
"use-std",
"use-std",
], default-features = false }
[target.'cfg(not(windows))'.dev-dependencies]
@@ -135,14 +141,14 @@ compare_hash_only = ["stacker/compare_hash_only"]
[workspace]
members = [
"query-grammar",
"bitpacker",
"common",
"ownedbytes",
"stacker",
"sstable",
"tokenizer-api",
"columnar",
"query-grammar",
"bitpacker",
"common",
"ownedbytes",
"stacker",
"sstable",
"tokenizer-api",
"columnar",
]
# Following the "fail" crate best practises, we isolate

View File

@@ -11,9 +11,6 @@ keywords = []
documentation = "https://docs.rs/tantivy-bitpacker/latest/tantivy_bitpacker"
homepage = "https://github.com/quickwit-oss/tantivy"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bitpacking = { version = "0.9.2", default-features = false, features = ["bitpacker1x"] }

View File

@@ -48,7 +48,7 @@ impl BitPacker {
pub fn flush<TWrite: io::Write + ?Sized>(&mut self, output: &mut TWrite) -> io::Result<()> {
if self.mini_buffer_written > 0 {
let num_bytes = self.mini_buffer_written.div_ceil(8);
let num_bytes = (self.mini_buffer_written + 7) / 8;
let bytes = self.mini_buffer.to_le_bytes();
output.write_all(&bytes[..num_bytes])?;
self.mini_buffer_written = 0;
@@ -65,10 +65,16 @@ impl BitPacker {
#[derive(Clone, Debug, Default, Copy)]
pub struct BitUnpacker {
num_bits: usize,
num_bits: u32,
mask: u64,
}
pub type BlockNumber = usize;
// 16k
const BLOCK_SIZE_MIN_POW: u8 = 14;
const BLOCK_SIZE_MIN: usize = 2 << BLOCK_SIZE_MIN_POW;
impl BitUnpacker {
/// Creates a bit unpacker, that assumes the same bitwidth for all values.
///
@@ -82,8 +88,9 @@ impl BitUnpacker {
} else {
(1u64 << num_bits) - 1u64
};
BitUnpacker {
num_bits: usize::from(num_bits),
num_bits: u32::from(num_bits),
mask,
}
}
@@ -92,16 +99,69 @@ impl BitUnpacker {
self.num_bits as u8
}
/// Calculates a block number for the given `idx`.
#[inline]
pub fn block_num(&self, idx: u32) -> BlockNumber {
// Find the address in bits of the index.
let addr_in_bits = (idx * self.num_bits) as usize;
// Then round down to the nearest byte.
let addr_in_bytes = addr_in_bits >> 3;
// And compute the containing BlockNumber.
addr_in_bytes >> (BLOCK_SIZE_MIN_POW + 1)
}
/// Given a block number and dataset length, calculates a data Range for the block.
pub fn block(&self, block: BlockNumber, data_len: usize) -> Range<usize> {
let block_addr = block << (BLOCK_SIZE_MIN_POW + 1);
// We extend the end of the block by a constant factor, so that it overlaps the next
// block. That ensures that we never need to read on a block boundary.
block_addr..(std::cmp::min(block_addr + BLOCK_SIZE_MIN + 8, data_len))
}
/// Calculates the number of blocks for the given data_len.
///
/// Usually only called at startup to pre-allocate structures.
pub fn block_count(&self, data_len: usize) -> usize {
let block_count = data_len / (BLOCK_SIZE_MIN as usize);
if data_len % (BLOCK_SIZE_MIN as usize) == 0 {
block_count
} else {
block_count + 1
}
}
/// Returns a range within the data which covers the given id_range.
///
/// NOTE: This method is used for batch reads which bypass blocks to avoid dealing with block
/// boundaries.
#[inline]
pub fn block_oblivious_range(&self, id_range: Range<u32>, data_len: usize) -> Range<usize> {
let start_in_bits = id_range.start * self.num_bits;
let start = (start_in_bits >> 3) as usize;
let end_in_bits = id_range.end * self.num_bits;
let end = (end_in_bits >> 3) as usize;
// TODO: We fetch more than we need and then truncate.
start..(std::cmp::min(end + 8, data_len))
}
#[inline]
pub fn get(&self, idx: u32, data: &[u8]) -> u64 {
let addr_in_bits = idx as usize * self.num_bits;
let addr = addr_in_bits >> 3;
self.get_from_subset(idx, 0, data)
}
/// Get the value at the given idx, which must exist within the given subset of the data.
#[inline]
pub fn get_from_subset(&self, idx: u32, data_offset: usize, data: &[u8]) -> u64 {
let addr_in_bits = idx * self.num_bits;
let addr = (addr_in_bits >> 3) as usize - data_offset;
if addr + 8 > data.len() {
if self.num_bits == 0 {
return 0;
}
let bit_shift = addr_in_bits & 7;
return self.get_slow_path(addr, bit_shift as u32, data);
return self.get_slow_path(addr, bit_shift, data);
}
let bit_shift = addr_in_bits & 7;
let bytes: [u8; 8] = (&data[addr..addr + 8]).try_into().unwrap();
@@ -113,6 +173,7 @@ impl BitUnpacker {
#[inline(never)]
fn get_slow_path(&self, addr: usize, bit_shift: u32, data: &[u8]) -> u64 {
let mut bytes: [u8; 8] = [0u8; 8];
let available_bytes = data.len() - addr;
// This function is meant to only be called if we did not have 8 bytes to load.
debug_assert!(available_bytes < 8);
@@ -128,26 +189,25 @@ impl BitUnpacker {
// #Panics
//
// This methods panics if `num_bits` is > 32.
fn get_batch_u32s(&self, start_idx: u32, data: &[u8], output: &mut [u32]) {
fn get_batch_u32s(&self, start_idx: u32, data_offset: usize, data: &[u8], output: &mut [u32]) {
assert!(
self.bit_width() <= 32,
"Bitwidth must be <= 32 to use this method."
);
let end_idx: u32 = start_idx + output.len() as u32;
let end_idx = start_idx + output.len() as u32;
// We use `usize` here to avoid overflow issues.
let end_bit_read = (end_idx as usize) * self.num_bits;
let end_byte_read = end_bit_read.div_ceil(8);
let end_bit_read = end_idx * self.num_bits;
let end_byte_read = (end_bit_read + 7) / 8;
assert!(
end_byte_read <= data.len(),
end_byte_read as usize <= data_offset + data.len(),
"Requested index is out of bounds."
);
// Simple slow implementation of get_batch_u32s, to deal with our ramps.
let get_batch_ramp = |start_idx: u32, output: &mut [u32]| {
for (out, idx) in output.iter_mut().zip(start_idx..) {
*out = self.get(idx, data) as u32;
*out = self.get_from_subset(idx, data_offset, data) as u32;
}
};
@@ -160,24 +220,24 @@ impl BitUnpacker {
// We want the start of the fast track to start align with bytes.
// A sufficient condition is to start with an idx that is a multiple of 8,
// so highway start is the closest multiple of 8 that is >= start_idx.
let entrance_ramp_len: u32 = 8 - (start_idx % 8) % 8;
let entrance_ramp_len = 8 - (start_idx % 8) % 8;
let highway_start: u32 = start_idx + entrance_ramp_len;
if highway_start + (BitPacker1x::BLOCK_LEN as u32) > end_idx {
if highway_start + BitPacker1x::BLOCK_LEN as u32 > end_idx {
// We don't have enough values to have even a single block of highway.
// Let's just supply the values the simple way.
get_batch_ramp(start_idx, output);
return;
}
let num_blocks: usize = (end_idx - highway_start) as usize / BitPacker1x::BLOCK_LEN;
let num_blocks: u32 = (end_idx - highway_start) / BitPacker1x::BLOCK_LEN as u32;
// Entrance ramp
get_batch_ramp(start_idx, &mut output[..entrance_ramp_len as usize]);
// Highway
let mut offset = (highway_start as usize * self.num_bits) / 8;
let mut offset = ((highway_start * self.num_bits) as usize / 8) - data_offset;
let mut output_cursor = (highway_start - start_idx) as usize;
for _ in 0..num_blocks {
offset += BitPacker1x.decompress(
@@ -189,7 +249,7 @@ impl BitUnpacker {
}
// Exit ramp
let highway_end: u32 = highway_start + (num_blocks * BitPacker1x::BLOCK_LEN) as u32;
let highway_end = highway_start + num_blocks * BitPacker1x::BLOCK_LEN as u32;
get_batch_ramp(highway_end, &mut output[output_cursor..]);
}
@@ -199,16 +259,27 @@ impl BitUnpacker {
id_range: Range<u32>,
data: &[u8],
positions: &mut Vec<u32>,
) {
self.get_ids_for_value_range_from_subset(range, id_range, 0, data, positions)
}
pub fn get_ids_for_value_range_from_subset(
&self,
range: RangeInclusive<u64>,
id_range: Range<u32>,
data_offset: usize,
data: &[u8],
positions: &mut Vec<u32>,
) {
if self.bit_width() > 32 {
self.get_ids_for_value_range_slow(range, id_range, data, positions)
self.get_ids_for_value_range_slow(range, id_range, data_offset, data, positions)
} else {
if *range.start() > u32::MAX as u64 {
positions.clear();
return;
}
let range_u32 = (*range.start() as u32)..=(*range.end()).min(u32::MAX as u64) as u32;
self.get_ids_for_value_range_fast(range_u32, id_range, data, positions)
self.get_ids_for_value_range_fast(range_u32, id_range, data_offset, data, positions)
}
}
@@ -216,6 +287,7 @@ impl BitUnpacker {
&self,
range: RangeInclusive<u64>,
id_range: Range<u32>,
data_offset: usize,
data: &[u8],
positions: &mut Vec<u32>,
) {
@@ -223,7 +295,7 @@ impl BitUnpacker {
for i in id_range {
// If we cared we could make this branchless, but the slow implementation should rarely
// kick in.
let val = self.get(i, data);
let val = self.get_from_subset(i, data_offset, data);
if range.contains(&val) {
positions.push(i);
}
@@ -234,11 +306,12 @@ impl BitUnpacker {
&self,
value_range: RangeInclusive<u32>,
id_range: Range<u32>,
data_offset: usize,
data: &[u8],
positions: &mut Vec<u32>,
) {
positions.resize(id_range.len(), 0u32);
self.get_batch_u32s(id_range.start, data, positions);
self.get_batch_u32s(id_range.start, data_offset, data, positions);
crate::filter_vec::filter_vec_in_place(value_range, id_range.start, positions)
}
}
@@ -329,14 +402,14 @@ mod test {
fn test_get_batch_panics_over_32_bits() {
let bitunpacker = BitUnpacker::new(33);
let mut output: [u32; 1] = [0u32];
bitunpacker.get_batch_u32s(0, &[0, 0, 0, 0, 0, 0, 0, 0], &mut output[..]);
bitunpacker.get_batch_u32s(0, 0, &[0, 0, 0, 0, 0, 0, 0, 0], &mut output[..]);
}
#[test]
fn test_get_batch_limit() {
let bitunpacker = BitUnpacker::new(1);
let mut output: [u32; 3] = [0u32, 0u32, 0u32];
bitunpacker.get_batch_u32s(8 * 4 - 3, &[0u8, 0u8, 0u8, 0u8], &mut output[..]);
bitunpacker.get_batch_u32s(8 * 4 - 3, 0, &[0u8, 0u8, 0u8, 0u8], &mut output[..]);
}
#[test]
@@ -345,7 +418,7 @@ mod test {
let bitunpacker = BitUnpacker::new(1);
let mut output: [u32; 3] = [0u32, 0u32, 0u32];
// We are missing exactly one bit.
bitunpacker.get_batch_u32s(8 * 4 - 2, &[0u8, 0u8, 0u8, 0u8], &mut output[..]);
bitunpacker.get_batch_u32s(8 * 4 - 2, 0, &[0u8, 0u8, 0u8, 0u8], &mut output[..]);
}
proptest::proptest! {
@@ -368,7 +441,7 @@ mod test {
for len in [0, 1, 2, 32, 33, 34, 64] {
for start_idx in 0u32..32u32 {
output.resize(len, 0);
bitunpacker.get_batch_u32s(start_idx, &buffer, &mut output);
bitunpacker.get_batch_u32s(start_idx, 0, &buffer, &mut output);
for (i, output_byte) in output.iter().enumerate() {
let expected = (start_idx + i as u32) & mask;
assert_eq!(*output_byte, expected);

View File

@@ -16,7 +16,7 @@ stacker = { version= "0.6", path = "../stacker", package="tantivy-stacker"}
sstable = { version= "0.6", path = "../sstable", package = "tantivy-sstable" }
common = { version= "0.10", path = "../common", package = "tantivy-common" }
tantivy-bitpacker = { version= "0.9", path = "../bitpacker/" }
serde = "1.0.152"
serde = { version = "1.0.152", features = ["derive"] }
downcast-rs = "2.0.1"
[dev-dependencies]

View File

@@ -1,6 +1,6 @@
use binggan::{InputGroup, black_box};
use common::*;
use tantivy_columnar::Column;
use tantivy_columnar::{Column, ValueRange};
pub mod common;
@@ -46,16 +46,16 @@ fn bench_group(mut runner: InputGroup<Column>) {
runner.register("access_first_vals", |column| {
let mut sum = 0;
const BLOCK_SIZE: usize = 32;
let mut docs = vec![0; BLOCK_SIZE];
let mut buffer = vec![None; BLOCK_SIZE];
let mut docs = Vec::with_capacity(BLOCK_SIZE);
let mut buffer = Vec::with_capacity(BLOCK_SIZE);
for i in (0..NUM_DOCS).step_by(BLOCK_SIZE) {
// fill docs
#[allow(clippy::needless_range_loop)]
docs.clear();
for idx in 0..BLOCK_SIZE {
docs[idx] = idx as u32 + i;
docs.push(idx as u32 + i);
}
column.first_vals(&docs, &mut buffer);
buffer.clear();
column.first_vals_in_value_range(&mut docs, &mut buffer, ValueRange::All);
for val in buffer.iter() {
let Some(val) = val else { continue };
sum += *val;

View File

@@ -40,7 +40,14 @@ fn main() {
let columnar_readers = columnar_readers.iter().collect::<Vec<_>>();
let merge_row_order = StackMergeOrder::stack(&columnar_readers[..]);
merge_columnar(&columnar_readers, &[], merge_row_order.into(), &mut out).unwrap();
merge_columnar(
&columnar_readers,
&[],
merge_row_order.into(),
&mut out,
|| false,
)
.unwrap();
Some(out.len() as u64)
},
);

View File

@@ -1,6 +1,7 @@
mod dictionary_encoded;
mod serialize;
use std::cell::RefCell;
use std::fmt::{self, Debug};
use std::io::Write;
use std::ops::{Range, RangeInclusive};
@@ -19,6 +20,11 @@ use crate::column_values::monotonic_mapping::StrictlyMonotonicMappingToInternal;
use crate::column_values::{ColumnValues, monotonic_map_column};
use crate::{Cardinality, DocId, EmptyColumnValues, MonotonicallyMappableToU64, RowId};
thread_local! {
static ROWS: RefCell<Vec<RowId>> = const { RefCell::new(Vec::new()) };
static DOCS: RefCell<Vec<DocId>> = const { RefCell::new(Vec::new()) };
}
#[derive(Clone)]
pub struct Column<T = u64> {
pub index: ColumnIndex,
@@ -89,31 +95,6 @@ impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
self.values_for_doc(row_id).next()
}
/// Load the first value for each docid in the provided slice.
#[inline]
pub fn first_vals(&self, docids: &[DocId], output: &mut [Option<T>]) {
match &self.index {
ColumnIndex::Empty { .. } => {}
ColumnIndex::Full => self.values.get_vals_opt(docids, output),
ColumnIndex::Optional(optional_index) => {
for (i, docid) in docids.iter().enumerate() {
output[i] = optional_index
.rank_if_exists(*docid)
.map(|rowid| self.values.get_val(rowid));
}
}
ColumnIndex::Multivalued(multivalued_index) => {
for (i, docid) in docids.iter().enumerate() {
let range = multivalued_index.range(*docid);
let is_empty = range.start == range.end;
if !is_empty {
output[i] = Some(self.values.get_val(range.start));
}
}
}
}
}
/// Translates a block of docids to row_ids.
///
/// returns the row_ids and the matching docids on the same index
@@ -143,7 +124,7 @@ impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
#[inline]
pub fn get_docids_for_value_range(
&self,
value_range: RangeInclusive<T>,
value_range: ValueRange<T>,
selected_docid_range: Range<u32>,
doc_ids: &mut Vec<u32>,
) {
@@ -168,6 +149,181 @@ impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static> Column<T> {
}
}
// Separate impl block for methods requiring `Default` for `T`.
impl<T: PartialOrd + Copy + Debug + Send + Sync + 'static + Default> Column<T> {
/// Load the first value for each docid in the provided slice.
///
/// The `docids` vector is mutated: documents that do not match the `value_range` are removed.
/// The `values` vector is populated with the values of the remaining documents.
#[inline]
pub fn first_vals_in_value_range(
&self,
input_docs: &[DocId],
output: &mut Vec<crate::ComparableDoc<Option<T>, DocId>>,
value_range: ValueRange<T>,
) {
match (&self.index, value_range) {
(ColumnIndex::Empty { .. }, value_range) => {
let nulls_match = match &value_range {
ValueRange::All => true,
ValueRange::Inclusive(_) => false,
ValueRange::GreaterThan(_, nulls_match) => *nulls_match,
ValueRange::GreaterThanOrEqual(_, nulls_match) => *nulls_match,
ValueRange::LessThan(_, nulls_match) => *nulls_match,
ValueRange::LessThanOrEqual(_, nulls_match) => *nulls_match,
};
if nulls_match {
for &doc in input_docs {
output.push(crate::ComparableDoc {
doc,
sort_key: None,
});
}
}
}
(ColumnIndex::Full, value_range) => {
self.values
.get_vals_in_value_range(input_docs, input_docs, output, value_range);
}
(ColumnIndex::Optional(optional_index), value_range) => {
let nulls_match = match &value_range {
ValueRange::All => true,
ValueRange::Inclusive(_) => false,
ValueRange::GreaterThan(_, nulls_match) => *nulls_match,
ValueRange::GreaterThanOrEqual(_, nulls_match) => *nulls_match,
ValueRange::LessThan(_, nulls_match) => *nulls_match,
ValueRange::LessThanOrEqual(_, nulls_match) => *nulls_match,
};
let fallback_needed = ROWS.with(|rows_cell| {
DOCS.with(|docs_cell| {
let mut rows = rows_cell.borrow_mut();
let mut docs = docs_cell.borrow_mut();
rows.clear();
docs.clear();
let mut has_nulls = false;
for &doc_id in input_docs {
if let Some(row_id) = optional_index.rank_if_exists(doc_id) {
rows.push(row_id);
docs.push(doc_id);
} else {
has_nulls = true;
if nulls_match {
break;
}
}
}
if !has_nulls || !nulls_match {
self.values.get_vals_in_value_range(
&rows,
&docs,
output,
value_range.clone(),
);
return false;
}
true
})
});
if fallback_needed {
for &doc_id in input_docs {
if let Some(row_id) = optional_index.rank_if_exists(doc_id) {
let val = self.values.get_val(row_id);
let value_matches = match &value_range {
ValueRange::All => true,
ValueRange::Inclusive(r) => r.contains(&val),
ValueRange::GreaterThan(t, _) => val > *t,
ValueRange::GreaterThanOrEqual(t, _) => val >= *t,
ValueRange::LessThan(t, _) => val < *t,
ValueRange::LessThanOrEqual(t, _) => val <= *t,
};
if value_matches {
output.push(crate::ComparableDoc {
doc: doc_id,
sort_key: Some(val),
});
}
} else if nulls_match {
output.push(crate::ComparableDoc {
doc: doc_id,
sort_key: None,
});
}
}
}
}
(ColumnIndex::Multivalued(multivalued_index), value_range) => {
let nulls_match = match &value_range {
ValueRange::All => true,
ValueRange::Inclusive(_) => false,
ValueRange::GreaterThan(_, nulls_match) => *nulls_match,
ValueRange::GreaterThanOrEqual(_, nulls_match) => *nulls_match,
ValueRange::LessThan(_, nulls_match) => *nulls_match,
ValueRange::LessThanOrEqual(_, nulls_match) => *nulls_match,
};
for i in 0..input_docs.len() {
let docid = input_docs[i];
let row_range = multivalued_index.range(docid);
let is_empty = row_range.start == row_range.end;
if !is_empty {
let val = self.values.get_val(row_range.start);
let matches = match &value_range {
ValueRange::All => true,
ValueRange::Inclusive(r) => r.contains(&val),
ValueRange::GreaterThan(t, _) => val > *t,
ValueRange::GreaterThanOrEqual(t, _) => val >= *t,
ValueRange::LessThan(t, _) => val < *t,
ValueRange::LessThanOrEqual(t, _) => val <= *t,
};
if matches {
output.push(crate::ComparableDoc {
doc: docid,
sort_key: Some(val),
});
}
} else if nulls_match {
output.push(crate::ComparableDoc {
doc: docid,
sort_key: None,
});
}
}
}
}
}
}
/// A range of values.
///
/// This type is intended to be used in batch APIs, where the cost of unpacking the enum
/// is outweighed by the time spent processing a batch.
///
/// Implementers should pattern match on the variants to use optimized loops for each case.
#[derive(Clone, Debug)]
pub enum ValueRange<T> {
/// A range that includes both start and end.
Inclusive(RangeInclusive<T>),
/// A range that matches all values.
All,
/// A range that matches all values greater than the threshold.
/// The boolean flag indicates if null values should be included.
GreaterThan(T, bool),
/// A range that matches all values greater than or equal to the threshold.
/// The boolean flag indicates if null values should be included.
GreaterThanOrEqual(T, bool),
/// A range that matches all values less than the threshold.
/// The boolean flag indicates if null values should be included.
LessThan(T, bool),
/// A range that matches all values less than or equal to the threshold.
/// The boolean flag indicates if null values should be included.
LessThanOrEqual(T, bool),
}
impl BinarySerializable for Cardinality {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> std::io::Result<()> {
self.to_code().serialize(writer)

View File

@@ -2,7 +2,7 @@ use std::io;
use std::io::Write;
use std::sync::Arc;
use common::OwnedBytes;
use common::file_slice::FileSlice;
use sstable::Dictionary;
use crate::column::{BytesColumn, Column};
@@ -41,12 +41,13 @@ pub fn serialize_column_mappable_to_u64<T: MonotonicallyMappableToU64>(
}
pub fn open_column_u64<T: MonotonicallyMappableToU64>(
bytes: OwnedBytes,
file_slice: FileSlice,
format_version: Version,
) -> io::Result<Column<T>> {
let (body, column_index_num_bytes_payload) = bytes.rsplit(4);
let (body, column_index_num_bytes_payload) = file_slice.split_from_end(4);
let column_index_num_bytes = u32::from_le_bytes(
column_index_num_bytes_payload
.read_bytes()?
.as_slice()
.try_into()
.unwrap(),
@@ -61,12 +62,13 @@ pub fn open_column_u64<T: MonotonicallyMappableToU64>(
}
pub fn open_column_u128<T: MonotonicallyMappableToU128>(
bytes: OwnedBytes,
file_slice: FileSlice,
format_version: Version,
) -> io::Result<Column<T>> {
let (body, column_index_num_bytes_payload) = bytes.rsplit(4);
let (body, column_index_num_bytes_payload) = file_slice.split_from_end(4);
let column_index_num_bytes = u32::from_le_bytes(
column_index_num_bytes_payload
.read_bytes()?
.as_slice()
.try_into()
.unwrap(),
@@ -84,12 +86,13 @@ pub fn open_column_u128<T: MonotonicallyMappableToU128>(
///
/// See [`open_u128_as_compact_u64`] for more details.
pub fn open_column_u128_as_compact_u64(
bytes: OwnedBytes,
file_slice: FileSlice,
format_version: Version,
) -> io::Result<Column<u64>> {
let (body, column_index_num_bytes_payload) = bytes.rsplit(4);
let (body, column_index_num_bytes_payload) = file_slice.split_from_end(4);
let column_index_num_bytes = u32::from_le_bytes(
column_index_num_bytes_payload
.read_bytes()?
.as_slice()
.try_into()
.unwrap(),
@@ -103,11 +106,21 @@ pub fn open_column_u128_as_compact_u64(
})
}
pub fn open_column_bytes(data: OwnedBytes, format_version: Version) -> io::Result<BytesColumn> {
let (body, dictionary_len_bytes) = data.rsplit(4);
let dictionary_len = u32::from_le_bytes(dictionary_len_bytes.as_slice().try_into().unwrap());
pub fn open_column_bytes(
file_slice: FileSlice,
format_version: Version,
) -> io::Result<BytesColumn> {
let (body, dictionary_len_bytes) = file_slice.split_from_end(4);
let dictionary_len = u32::from_le_bytes(
dictionary_len_bytes
.read_bytes()?
.as_slice()
.try_into()
.unwrap(),
);
let (dictionary_bytes, column_bytes) = body.split(dictionary_len as usize);
let dictionary = Arc::new(Dictionary::from_bytes(dictionary_bytes)?);
let dictionary = Arc::new(Dictionary::open(dictionary_bytes)?);
let term_ord_column = crate::column::open_column_u64::<u64>(column_bytes, format_version)?;
Ok(BytesColumn {
dictionary,
@@ -115,7 +128,7 @@ pub fn open_column_bytes(data: OwnedBytes, format_version: Version) -> io::Resul
})
}
pub fn open_column_str(data: OwnedBytes, format_version: Version) -> io::Result<StrColumn> {
let bytes_column = open_column_bytes(data, format_version)?;
pub fn open_column_str(file_slice: FileSlice, format_version: Version) -> io::Result<StrColumn> {
let bytes_column = open_column_bytes(file_slice, format_version)?;
Ok(StrColumn::wrap(bytes_column))
}

View File

@@ -95,7 +95,7 @@ pub fn merge_column_index<'a>(
#[cfg(test)]
mod tests {
use common::OwnedBytes;
use common::file_slice::FileSlice;
use crate::column_index::merge::detect_cardinality;
use crate::column_index::multivalued_index::{
@@ -178,7 +178,7 @@ mod tests {
let mut output = Vec::new();
serialize_multivalued_index(&start_index_iterable, &mut output).unwrap();
let multivalue =
open_multivalued_index(OwnedBytes::new(output), crate::Version::V2).unwrap();
open_multivalued_index(FileSlice::from(output), crate::Version::V2).unwrap();
let start_indexes: Vec<RowId> = multivalue.get_start_index_column().iter().collect();
assert_eq!(&start_indexes, &[0, 3, 5]);
}
@@ -216,7 +216,7 @@ mod tests {
let mut output = Vec::new();
serialize_multivalued_index(&start_index_iterable, &mut output).unwrap();
let multivalue =
open_multivalued_index(OwnedBytes::new(output), crate::Version::V2).unwrap();
open_multivalued_index(FileSlice::from(output), crate::Version::V2).unwrap();
let start_indexes: Vec<RowId> = multivalue.get_start_index_column().iter().collect();
assert_eq!(&start_indexes, &[0, 3, 5, 6]);
}

View File

@@ -3,7 +3,8 @@ use std::io::Write;
use std::ops::Range;
use std::sync::Arc;
use common::{CountingWriter, OwnedBytes};
use common::CountingWriter;
use common::file_slice::FileSlice;
use super::optional_index::{open_optional_index, serialize_optional_index};
use super::{OptionalIndex, SerializableOptionalIndex, Set};
@@ -44,21 +45,26 @@ pub fn serialize_multivalued_index(
}
pub fn open_multivalued_index(
bytes: OwnedBytes,
file_slice: FileSlice,
format_version: Version,
) -> io::Result<MultiValueIndex> {
match format_version {
Version::V1 => {
let start_index_column: Arc<dyn ColumnValues<RowId>> =
load_u64_based_column_values(bytes)?;
load_u64_based_column_values(file_slice)?;
Ok(MultiValueIndex::MultiValueIndexV1(MultiValueIndexV1 {
start_index_column,
}))
}
Version::V2 => {
let (body_bytes, optional_index_len) = bytes.rsplit(4);
let optional_index_len =
u32::from_le_bytes(optional_index_len.as_slice().try_into().unwrap());
let (body_bytes, optional_index_len) = file_slice.split_from_end(4);
let optional_index_len = u32::from_le_bytes(
optional_index_len
.read_bytes()?
.as_slice()
.try_into()
.unwrap(),
);
let (optional_index_bytes, start_index_bytes) =
body_bytes.split(optional_index_len as usize);
let optional_index = open_optional_index(optional_index_bytes)?;
@@ -185,8 +191,8 @@ impl MultiValueIndex {
};
let mut buffer = Vec::new();
serialize_multivalued_index(&serializable_multivalued_index, &mut buffer).unwrap();
let bytes = OwnedBytes::new(buffer);
open_multivalued_index(bytes, Version::V2).unwrap()
let file_slice = FileSlice::from(buffer);
open_multivalued_index(file_slice, Version::V2).unwrap()
}
pub fn get_start_index_column(&self) -> &Arc<dyn crate::ColumnValues<RowId>> {
@@ -333,7 +339,7 @@ mod tests {
use std::ops::Range;
use super::MultiValueIndex;
use crate::{ColumnarReader, DynamicColumn};
use crate::{ColumnarReader, DynamicColumn, ValueRange};
fn index_to_pos_helper(
index: &MultiValueIndex,
@@ -413,7 +419,7 @@ mod tests {
assert_eq!(row_id_range, 0..4);
let check = |range, expected| {
let full_range = 0..=u64::MAX;
let full_range = ValueRange::All;
let mut docids = Vec::new();
column.get_docids_for_value_range(full_range, range, &mut docids);
assert_eq!(docids, expected);

View File

@@ -4,6 +4,7 @@ use std::sync::Arc;
mod set;
mod set_block;
use common::file_slice::FileSlice;
use common::{BinarySerializable, OwnedBytes, VInt};
pub use set::{SelectCursor, Set, SetCodec};
use set_block::{
@@ -268,8 +269,8 @@ impl OptionalIndex {
);
let mut buffer = Vec::new();
serialize_optional_index(&row_ids, num_rows, &mut buffer).unwrap();
let bytes = OwnedBytes::new(buffer);
open_optional_index(bytes).unwrap()
let file_slice = FileSlice::from(buffer);
open_optional_index(file_slice).unwrap()
}
pub fn num_docs(&self) -> RowId {
@@ -486,10 +487,17 @@ fn deserialize_optional_index_block_metadatas(
(block_metas.into_boxed_slice(), non_null_rows_before_block)
}
pub fn open_optional_index(bytes: OwnedBytes) -> io::Result<OptionalIndex> {
let (mut bytes, num_non_empty_blocks_bytes) = bytes.rsplit(2);
let num_non_empty_block_bytes =
u16::from_le_bytes(num_non_empty_blocks_bytes.as_slice().try_into().unwrap());
pub fn open_optional_index(file_slice: FileSlice) -> io::Result<OptionalIndex> {
let (bytes, num_non_empty_blocks_bytes) = file_slice.split_from_end(2);
let num_non_empty_block_bytes = u16::from_le_bytes(
num_non_empty_blocks_bytes
.read_bytes()?
.as_slice()
.try_into()
.unwrap(),
);
let mut bytes = bytes.read_bytes()?;
let num_docs = VInt::deserialize_u64(&mut bytes)? as u32;
let block_metas_num_bytes =
num_non_empty_block_bytes as usize * SERIALIZED_BLOCK_META_NUM_BYTES;

View File

@@ -59,7 +59,7 @@ fn test_with_random_sets_simple() {
let vals = 10..ELEMENTS_PER_BLOCK * 2;
let mut out: Vec<u8> = Vec::new();
serialize_optional_index(&vals, 100, &mut out).unwrap();
let null_index = open_optional_index(OwnedBytes::new(out)).unwrap();
let null_index = open_optional_index(FileSlice::from(out)).unwrap();
let ranks: Vec<u32> = (65_472u32..65_473u32).collect();
let els: Vec<u32> = ranks.iter().copied().map(|rank| rank + 10).collect();
let mut select_cursor = null_index.select_cursor();
@@ -102,7 +102,7 @@ impl<'a> Iterable<RowId> for &'a [bool] {
fn test_null_index(data: &[bool]) {
let mut out: Vec<u8> = Vec::new();
serialize_optional_index(&data, data.len() as RowId, &mut out).unwrap();
let null_index = open_optional_index(OwnedBytes::new(out)).unwrap();
let null_index = open_optional_index(FileSlice::from(out)).unwrap();
let orig_idx_with_value: Vec<u32> = data
.iter()
.enumerate()
@@ -223,3 +223,170 @@ fn test_optional_index_for_tests() {
assert!(!optional_index.contains(3));
assert_eq!(optional_index.num_docs(), 4);
}
#[cfg(all(test, feature = "unstable"))]
mod bench {
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use test::Bencher;
use super::*;
const TOTAL_NUM_VALUES: u32 = 1_000_000;
fn gen_bools(fill_ratio: f64) -> OptionalIndex {
let mut out = Vec::new();
let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
let vals: Vec<RowId> = (0..TOTAL_NUM_VALUES)
.map(|_| rng.gen_bool(fill_ratio))
.enumerate()
.filter(|(_pos, val)| *val)
.map(|(pos, _)| pos as RowId)
.collect();
serialize_optional_index(&&vals[..], TOTAL_NUM_VALUES, &mut out).unwrap();
open_optional_index(FileSlice::from(out)).unwrap()
}
fn random_range_iterator(
start: u32,
end: u32,
avg_step_size: u32,
avg_deviation: u32,
) -> impl Iterator<Item = u32> {
let mut rng: StdRng = StdRng::from_seed([1u8; 32]);
let mut current = start;
std::iter::from_fn(move || {
current += rng.gen_range(avg_step_size - avg_deviation..=avg_step_size + avg_deviation);
if current >= end { None } else { Some(current) }
})
}
fn n_percent_step_iterator(percent: f32, num_values: u32) -> impl Iterator<Item = u32> {
let ratio = percent / 100.0;
let step_size = (1f32 / ratio) as u32;
let deviation = step_size - 1;
random_range_iterator(0, num_values, step_size, deviation)
}
fn walk_over_data(codec: &OptionalIndex, avg_step_size: u32) -> Option<u32> {
walk_over_data_from_positions(
codec,
random_range_iterator(0, TOTAL_NUM_VALUES, avg_step_size, 0),
)
}
fn walk_over_data_from_positions(
codec: &OptionalIndex,
positions: impl Iterator<Item = u32>,
) -> Option<u32> {
let mut dense_idx: Option<u32> = None;
for idx in positions {
dense_idx = dense_idx.or(codec.rank_if_exists(idx));
}
dense_idx
}
#[bench]
fn bench_translate_orig_to_codec_1percent_filled_10percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.01f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_5percent_filled_10percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.05f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_5percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.05f64);
bench.iter(|| walk_over_data(&codec, 1000));
}
#[bench]
fn bench_translate_orig_to_codec_full_scan_1percent_filled(bench: &mut Bencher) {
let codec = gen_bools(0.01f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_translate_orig_to_codec_full_scan_10percent_filled(bench: &mut Bencher) {
let codec = gen_bools(0.1f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_translate_orig_to_codec_full_scan_90percent_filled(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
bench.iter(|| walk_over_data_from_positions(&codec, 0..TOTAL_NUM_VALUES));
}
#[bench]
fn bench_translate_orig_to_codec_10percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.1f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_50percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.5f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_orig_to_codec_90percent_filled_1percent_hit(bench: &mut Bencher) {
let codec = gen_bools(0.9f64);
bench.iter(|| walk_over_data(&codec, 100));
}
#[bench]
fn bench_translate_codec_to_orig_1percent_filled_0comma005percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.01f64, 0.005f32, bench);
}
#[bench]
fn bench_translate_codec_to_orig_10percent_filled_0comma005percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.1f64, 0.005f32, bench);
}
#[bench]
fn bench_translate_codec_to_orig_1percent_filled_10percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.01f64, 10f32, bench);
}
#[bench]
fn bench_translate_codec_to_orig_1percent_filled_full_scan(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.01f64, 100f32, bench);
}
fn bench_translate_codec_to_orig_util(
percent_filled: f64,
percent_hit: f32,
bench: &mut Bencher,
) {
let codec = gen_bools(percent_filled);
let num_non_nulls = codec.num_non_nulls();
let idxs: Vec<u32> = if percent_hit == 100.0f32 {
(0..num_non_nulls).collect()
} else {
n_percent_step_iterator(percent_hit, num_non_nulls).collect()
};
let mut output = vec![0u32; idxs.len()];
bench.iter(|| {
output.copy_from_slice(&idxs[..]);
codec.select_batch(&mut output);
});
}
#[bench]
fn bench_translate_codec_to_orig_90percent_filled_0comma005percent_hit(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.9f64, 0.005, bench);
}
#[bench]
fn bench_translate_codec_to_orig_90percent_filled_full_scan(bench: &mut Bencher) {
bench_translate_codec_to_orig_util(0.9f64, 100.0f32, bench);
}
}
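For orientation, a minimal sketch (not part of the diff) of the two translation directions the benchmarks above exercise, assuming the rank/select semantics they rely on: `rank_if_exists` maps a row id to its dense ordinal among non-null rows, and `select_batch` maps dense ordinals back to row ids in place.

let vals: Vec<RowId> = vec![2, 5, 9];           // rows that carry a value, out of 12 rows
let mut out: Vec<u8> = Vec::new();
serialize_optional_index(&&vals[..], 12, &mut out).unwrap();
let optional_index = open_optional_index(FileSlice::from(out)).unwrap();
assert_eq!(optional_index.rank_if_exists(5), Some(1)); // row 5 is the second non-null row
assert_eq!(optional_index.rank_if_exists(3), None);    // row 3 has no value
let mut dense = vec![0u32, 2u32];
optional_index.select_batch(&mut dense);               // dense ordinals -> row ids, in place
assert_eq!(dense, vec![2, 9]);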

View File

@@ -1,7 +1,8 @@
use std::io;
use std::io::Write;
use common::{CountingWriter, OwnedBytes};
use common::file_slice::FileSlice;
use common::{CountingWriter, HasLen};
use super::OptionalIndex;
use super::multivalued_index::SerializableMultivalueIndex;
@@ -65,27 +66,28 @@ pub fn serialize_column_index(
/// Open a serialized column index.
pub fn open_column_index(
mut bytes: OwnedBytes,
file_slice: FileSlice,
format_version: Version,
) -> io::Result<ColumnIndex> {
if bytes.is_empty() {
if file_slice.len() == 0 {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
"Failed to deserialize column index. Empty buffer.",
));
}
let cardinality_code = bytes[0];
let (header, body) = file_slice.split(1);
let cardinality_code = header.read_bytes()?.as_slice()[0];
let cardinality = Cardinality::try_from_code(cardinality_code)?;
bytes.advance(1);
match cardinality {
Cardinality::Full => Ok(ColumnIndex::Full),
Cardinality::Optional => {
let optional_index = super::optional_index::open_optional_index(bytes)?;
let optional_index = super::optional_index::open_optional_index(body)?;
Ok(ColumnIndex::Optional(optional_index))
}
Cardinality::Multivalued => {
let multivalue_index =
super::multivalued_index::open_multivalued_index(bytes, format_version)?;
super::multivalued_index::open_multivalued_index(body, format_version)?;
Ok(ColumnIndex::Multivalued(multivalue_index))
}
}
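For readers skimming the diff, a hedged sketch of the layout `open_column_index` now consumes lazily via `FileSlice::split` instead of eagerly via `OwnedBytes::advance`; `buf` stands in for a buffer produced by `serialize_column_index`.

// byte 0    : cardinality code (Full / Optional / Multivalued)
// bytes 1.. : cardinality-specific body, handed on as a still-unread FileSlice
let file_slice = FileSlice::from(buf);
let (header, body) = file_slice.split(1);
let cardinality = Cardinality::try_from_code(header.read_bytes()?.as_slice()[0])?;
// `body` then goes to open_optional_index / open_multivalued_index depending on
// `cardinality`, exactly as the function above does.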

View File

@@ -7,13 +7,15 @@
//! - Monotonically map values to u64/u128
use std::fmt::Debug;
use std::ops::{Range, RangeInclusive};
use std::ops::Range;
use std::sync::Arc;
use downcast_rs::DowncastSync;
pub use monotonic_mapping::{MonotonicallyMappableToU64, StrictlyMonotonicFn};
pub use monotonic_mapping_u128::MonotonicallyMappableToU128;
use crate::column::ValueRange;
mod merge;
pub(crate) mod monotonic_mapping;
pub(crate) mod monotonic_mapping_u128;
@@ -27,8 +29,7 @@ mod monotonic_column;
pub(crate) use merge::MergedColumnValues;
pub use stats::ColumnStats;
pub use u64_based::{
ALL_U64_CODEC_TYPES, CodecType, load_u64_based_column_values,
serialize_and_load_u64_based_column_values, serialize_u64_based_column_values,
ALL_U64_CODEC_TYPES, CodecType, load_u64_based_column_values, serialize_u64_based_column_values,
};
pub use u128_based::{
CompactSpaceU64Accessor, open_u128_as_compact_u64, open_u128_mapped,
@@ -109,6 +110,307 @@ pub trait ColumnValues<T: PartialOrd = u64>: Send + Sync + DowncastSync {
}
}
/// Load the values at the provided column value indexes, pair each with its
/// corresponding doc id, and push the pairs whose value matches `value_range`
/// into `output` as `ComparableDoc`s.
fn get_vals_in_value_range(
&self,
input_indexes: &[u32],
input_doc_ids: &[u32],
output: &mut Vec<crate::ComparableDoc<Option<T>, crate::DocId>>,
value_range: ValueRange<T>,
) {
let len = input_indexes.len();
let mut read_head = 0;
match value_range {
ValueRange::All => {
while read_head + 3 < len {
let idx0 = input_indexes[read_head];
let idx1 = input_indexes[read_head + 1];
let idx2 = input_indexes[read_head + 2];
let idx3 = input_indexes[read_head + 3];
let doc0 = input_doc_ids[read_head];
let doc1 = input_doc_ids[read_head + 1];
let doc2 = input_doc_ids[read_head + 2];
let doc3 = input_doc_ids[read_head + 3];
let val0 = self.get_val(idx0);
let val1 = self.get_val(idx1);
let val2 = self.get_val(idx2);
let val3 = self.get_val(idx3);
output.push(crate::ComparableDoc {
doc: doc0,
sort_key: Some(val0),
});
output.push(crate::ComparableDoc {
doc: doc1,
sort_key: Some(val1),
});
output.push(crate::ComparableDoc {
doc: doc2,
sort_key: Some(val2),
});
output.push(crate::ComparableDoc {
doc: doc3,
sort_key: Some(val3),
});
read_head += 4;
}
}
ValueRange::Inclusive(ref range) => {
while read_head + 3 < len {
let idx0 = input_indexes[read_head];
let idx1 = input_indexes[read_head + 1];
let idx2 = input_indexes[read_head + 2];
let idx3 = input_indexes[read_head + 3];
let doc0 = input_doc_ids[read_head];
let doc1 = input_doc_ids[read_head + 1];
let doc2 = input_doc_ids[read_head + 2];
let doc3 = input_doc_ids[read_head + 3];
let val0 = self.get_val(idx0);
let val1 = self.get_val(idx1);
let val2 = self.get_val(idx2);
let val3 = self.get_val(idx3);
if range.contains(&val0) {
output.push(crate::ComparableDoc {
doc: doc0,
sort_key: Some(val0),
});
}
if range.contains(&val1) {
output.push(crate::ComparableDoc {
doc: doc1,
sort_key: Some(val1),
});
}
if range.contains(&val2) {
output.push(crate::ComparableDoc {
doc: doc2,
sort_key: Some(val2),
});
}
if range.contains(&val3) {
output.push(crate::ComparableDoc {
doc: doc3,
sort_key: Some(val3),
});
}
read_head += 4;
}
}
ValueRange::GreaterThan(ref threshold, _) => {
while read_head + 3 < len {
let idx0 = input_indexes[read_head];
let idx1 = input_indexes[read_head + 1];
let idx2 = input_indexes[read_head + 2];
let idx3 = input_indexes[read_head + 3];
let doc0 = input_doc_ids[read_head];
let doc1 = input_doc_ids[read_head + 1];
let doc2 = input_doc_ids[read_head + 2];
let doc3 = input_doc_ids[read_head + 3];
let val0 = self.get_val(idx0);
let val1 = self.get_val(idx1);
let val2 = self.get_val(idx2);
let val3 = self.get_val(idx3);
if val0 > *threshold {
output.push(crate::ComparableDoc {
doc: doc0,
sort_key: Some(val0),
});
}
if val1 > *threshold {
output.push(crate::ComparableDoc {
doc: doc1,
sort_key: Some(val1),
});
}
if val2 > *threshold {
output.push(crate::ComparableDoc {
doc: doc2,
sort_key: Some(val2),
});
}
if val3 > *threshold {
output.push(crate::ComparableDoc {
doc: doc3,
sort_key: Some(val3),
});
}
read_head += 4;
}
}
ValueRange::GreaterThanOrEqual(ref threshold, _) => {
while read_head + 3 < len {
let idx0 = input_indexes[read_head];
let idx1 = input_indexes[read_head + 1];
let idx2 = input_indexes[read_head + 2];
let idx3 = input_indexes[read_head + 3];
let doc0 = input_doc_ids[read_head];
let doc1 = input_doc_ids[read_head + 1];
let doc2 = input_doc_ids[read_head + 2];
let doc3 = input_doc_ids[read_head + 3];
let val0 = self.get_val(idx0);
let val1 = self.get_val(idx1);
let val2 = self.get_val(idx2);
let val3 = self.get_val(idx3);
if val0 >= *threshold {
output.push(crate::ComparableDoc {
doc: doc0,
sort_key: Some(val0),
});
}
if val1 >= *threshold {
output.push(crate::ComparableDoc {
doc: doc1,
sort_key: Some(val1),
});
}
if val2 >= *threshold {
output.push(crate::ComparableDoc {
doc: doc2,
sort_key: Some(val2),
});
}
if val3 >= *threshold {
output.push(crate::ComparableDoc {
doc: doc3,
sort_key: Some(val3),
});
}
read_head += 4;
}
}
ValueRange::LessThan(ref threshold, _) => {
while read_head + 3 < len {
let idx0 = input_indexes[read_head];
let idx1 = input_indexes[read_head + 1];
let idx2 = input_indexes[read_head + 2];
let idx3 = input_indexes[read_head + 3];
let doc0 = input_doc_ids[read_head];
let doc1 = input_doc_ids[read_head + 1];
let doc2 = input_doc_ids[read_head + 2];
let doc3 = input_doc_ids[read_head + 3];
let val0 = self.get_val(idx0);
let val1 = self.get_val(idx1);
let val2 = self.get_val(idx2);
let val3 = self.get_val(idx3);
if val0 < *threshold {
output.push(crate::ComparableDoc {
doc: doc0,
sort_key: Some(val0),
});
}
if val1 < *threshold {
output.push(crate::ComparableDoc {
doc: doc1,
sort_key: Some(val1),
});
}
if val2 < *threshold {
output.push(crate::ComparableDoc {
doc: doc2,
sort_key: Some(val2),
});
}
if val3 < *threshold {
output.push(crate::ComparableDoc {
doc: doc3,
sort_key: Some(val3),
});
}
read_head += 4;
}
}
ValueRange::LessThanOrEqual(ref threshold, _) => {
while read_head + 3 < len {
let idx0 = input_indexes[read_head];
let idx1 = input_indexes[read_head + 1];
let idx2 = input_indexes[read_head + 2];
let idx3 = input_indexes[read_head + 3];
let doc0 = input_doc_ids[read_head];
let doc1 = input_doc_ids[read_head + 1];
let doc2 = input_doc_ids[read_head + 2];
let doc3 = input_doc_ids[read_head + 3];
let val0 = self.get_val(idx0);
let val1 = self.get_val(idx1);
let val2 = self.get_val(idx2);
let val3 = self.get_val(idx3);
if val0 <= *threshold {
output.push(crate::ComparableDoc {
doc: doc0,
sort_key: Some(val0),
});
}
if val1 <= *threshold {
output.push(crate::ComparableDoc {
doc: doc1,
sort_key: Some(val1),
});
}
if val2 <= *threshold {
output.push(crate::ComparableDoc {
doc: doc2,
sort_key: Some(val2),
});
}
if val3 <= *threshold {
output.push(crate::ComparableDoc {
doc: doc3,
sort_key: Some(val3),
});
}
read_head += 4;
}
}
}
// Process remaining elements (0 to 3)
while read_head < len {
let idx = input_indexes[read_head];
let doc = input_doc_ids[read_head];
let val = self.get_val(idx);
let matches = match value_range {
// Re-match the outer `value_range` for the scalar tail; the `ref` patterns
// below only borrow it, so nothing is moved.
ValueRange::All => true,
ValueRange::Inclusive(ref r) => r.contains(&val),
ValueRange::GreaterThan(ref t, _) => val > *t,
ValueRange::GreaterThanOrEqual(ref t, _) => val >= *t,
ValueRange::LessThan(ref t, _) => val < *t,
ValueRange::LessThanOrEqual(ref t, _) => val <= *t,
};
if matches {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(val),
});
}
read_head += 1;
}
}
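A hedged usage sketch of the filter-while-loading path above, assuming a `column_values: Arc<dyn ColumnValues<u64>>` loaded elsewhere and the relevant imports in scope; the boolean carried by the threshold variants is ignored by this default implementation.

let indexes: Vec<u32> = vec![0, 1, 2, 3];     // positions in the column
let doc_ids: Vec<u32> = vec![10, 11, 12, 13];
let mut output: Vec<ComparableDoc<Option<u64>, DocId>> = Vec::new();
column_values.get_vals_in_value_range(
    &indexes,
    &doc_ids,
    &mut output,
    ValueRange::GreaterThanOrEqual(42u64, false),
);
// `output` now holds one ComparableDoc per doc whose value is >= 42, with
// `sort_key: Some(value)` ready to be pushed into a TopNComputer.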
/// Fills an output buffer with the fast field values
/// associated with the `DocId` going from
/// `start` to `start + output.len()`.
@@ -129,15 +431,54 @@ pub trait ColumnValues<T: PartialOrd = u64>: Send + Sync + DowncastSync {
/// Note that position == docid for single value fast fields
fn get_row_ids_for_value_range(
&self,
value_range: RangeInclusive<T>,
value_range: ValueRange<T>,
row_id_range: Range<RowId>,
row_id_hits: &mut Vec<RowId>,
) {
let row_id_range = row_id_range.start..row_id_range.end.min(self.num_vals());
for idx in row_id_range {
let val = self.get_val(idx);
if value_range.contains(&val) {
row_id_hits.push(idx);
match value_range {
ValueRange::Inclusive(range) => {
for idx in row_id_range {
let val = self.get_val(idx);
if range.contains(&val) {
row_id_hits.push(idx);
}
}
}
ValueRange::GreaterThan(threshold, _) => {
for idx in row_id_range {
let val = self.get_val(idx);
if val > threshold {
row_id_hits.push(idx);
}
}
}
ValueRange::GreaterThanOrEqual(threshold, _) => {
for idx in row_id_range {
let val = self.get_val(idx);
if val >= threshold {
row_id_hits.push(idx);
}
}
}
ValueRange::LessThan(threshold, _) => {
for idx in row_id_range {
let val = self.get_val(idx);
if val < threshold {
row_id_hits.push(idx);
}
}
}
ValueRange::LessThanOrEqual(threshold, _) => {
for idx in row_id_range {
let val = self.get_val(idx);
if val <= threshold {
row_id_hits.push(idx);
}
}
}
ValueRange::All => {
row_id_hits.extend(row_id_range);
}
}
}
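A matching sketch for the row-id path, mirroring the test usage further down in this diff; `column` is again an assumed `Arc<dyn ColumnValues<u64>>`.

let mut row_id_hits: Vec<RowId> = Vec::new();
column.get_row_ids_for_value_range(
    ValueRange::LessThan(100u64, false),   // the flag is ignored by this default impl
    0..column.num_vals(),
    &mut row_id_hits,
);
// `row_id_hits` now contains every row id whose value is < 100.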
@@ -193,6 +534,17 @@ impl<T: PartialOrd + Default> ColumnValues<T> for EmptyColumnValues {
fn num_vals(&self) -> u32 {
0
}
fn get_vals_in_value_range(
&self,
input_indexes: &[u32],
input_doc_ids: &[u32],
output: &mut Vec<crate::ComparableDoc<Option<T>, crate::DocId>>,
value_range: ValueRange<T>,
) {
let _ = (input_indexes, input_doc_ids, output, value_range);
panic!("Internal Error: Called get_vals_in_value_range of empty column.")
}
}
impl<T: Copy + PartialOrd + Debug + 'static> ColumnValues<T> for Arc<dyn ColumnValues<T>> {
@@ -206,6 +558,18 @@ impl<T: Copy + PartialOrd + Debug + 'static> ColumnValues<T> for Arc<dyn ColumnV
self.as_ref().get_vals_opt(indexes, output)
}
#[inline(always)]
fn get_vals_in_value_range(
&self,
input_indexes: &[u32],
input_doc_ids: &[u32],
output: &mut Vec<crate::ComparableDoc<Option<T>, crate::DocId>>,
value_range: ValueRange<T>,
) {
self.as_ref()
.get_vals_in_value_range(input_indexes, input_doc_ids, output, value_range)
}
#[inline(always)]
fn min_value(&self) -> T {
self.as_ref().min_value()
@@ -234,7 +598,7 @@ impl<T: Copy + PartialOrd + Debug + 'static> ColumnValues<T> for Arc<dyn ColumnV
#[inline(always)]
fn get_row_ids_for_value_range(
&self,
range: RangeInclusive<T>,
range: ValueRange<T>,
doc_id_range: Range<u32>,
positions: &mut Vec<u32>,
) {

View File

@@ -1,8 +1,9 @@
use std::fmt::Debug;
use std::marker::PhantomData;
use std::ops::{Range, RangeInclusive};
use std::ops::Range;
use crate::ColumnValues;
use crate::column::ValueRange;
use crate::column_values::monotonic_mapping::StrictlyMonotonicFn;
struct MonotonicMappingColumn<C, T, Input> {
@@ -80,16 +81,52 @@ where
fn get_row_ids_for_value_range(
&self,
range: RangeInclusive<Output>,
range: ValueRange<Output>,
doc_id_range: Range<u32>,
positions: &mut Vec<u32>,
) {
self.from_column.get_row_ids_for_value_range(
self.monotonic_mapping.inverse(range.start().clone())
..=self.monotonic_mapping.inverse(range.end().clone()),
doc_id_range,
positions,
)
match range {
ValueRange::Inclusive(range) => self.from_column.get_row_ids_for_value_range(
ValueRange::Inclusive(
self.monotonic_mapping.inverse(range.start().clone())
..=self.monotonic_mapping.inverse(range.end().clone()),
),
doc_id_range,
positions,
),
ValueRange::All => self.from_column.get_row_ids_for_value_range(
ValueRange::All,
doc_id_range,
positions,
),
ValueRange::GreaterThan(threshold, _) => self.from_column.get_row_ids_for_value_range(
ValueRange::GreaterThan(self.monotonic_mapping.inverse(threshold), false),
doc_id_range,
positions,
),
ValueRange::GreaterThanOrEqual(threshold, _) => {
self.from_column.get_row_ids_for_value_range(
ValueRange::GreaterThanOrEqual(
self.monotonic_mapping.inverse(threshold),
false,
),
doc_id_range,
positions,
)
}
ValueRange::LessThan(threshold, _) => self.from_column.get_row_ids_for_value_range(
ValueRange::LessThan(self.monotonic_mapping.inverse(threshold), false),
doc_id_range,
positions,
),
ValueRange::LessThanOrEqual(threshold, _) => {
self.from_column.get_row_ids_for_value_range(
ValueRange::LessThanOrEqual(self.monotonic_mapping.inverse(threshold), false),
doc_id_range,
positions,
)
}
}
}
// We voluntarily do not implement get_range as it yields a regression,

View File

@@ -2,7 +2,8 @@ use std::io;
use std::io::Write;
use std::num::NonZeroU64;
use common::{BinarySerializable, VInt};
use common::file_slice::FileSlice;
use common::{BinarySerializable, HasLen, VInt};
use crate::RowId;
@@ -27,6 +28,55 @@ impl ColumnStats {
}
}
impl ColumnStats {
/// Deserializes `ColumnStats` from the front of the given `FileSlice`, and returns the stats
/// together with the remaining tail of the `FileSlice` (everything after the stats).
pub fn deserialize_from_tail(file_slice: FileSlice) -> io::Result<(Self, FileSlice)> {
// [`deserialize_with_size`] deserializes 4 variable-width encoded u64s, each of
// which takes at most 9 bytes in the worst case; that is where the 36 comes from.
let (stats, _) = file_slice.clone().split(36.min(file_slice.len()));
let mut stats = stats.read_bytes()?;
let (stats, stats_nbytes) = ColumnStats::deserialize_with_size(&mut stats)?;
let (_, remainder) = file_slice.split(stats_nbytes);
Ok((stats, remainder))
}
/// Same as [`BinarySerializable::deserialize`], but also returns the number of bytes
/// consumed from the reader `R`
fn deserialize_with_size<R: io::Read>(reader: &mut R) -> io::Result<(Self, usize)> {
let mut nbytes = 0;
let (min_value, len) = VInt::deserialize_with_size(reader)?;
let min_value = min_value.0;
nbytes += len;
let (gcd, len) = VInt::deserialize_with_size(reader)?;
let gcd = gcd.0;
let gcd = NonZeroU64::new(gcd)
.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, "GCD of 0 is forbidden"))?;
nbytes += len;
let (amplitude, len) = VInt::deserialize_with_size(reader)?;
let amplitude = amplitude.0 * gcd.get();
let max_value = min_value + amplitude;
nbytes += len;
let (num_rows, len) = VInt::deserialize_with_size(reader)?;
let num_rows = num_rows.0 as RowId;
nbytes += len;
Ok((
ColumnStats {
min_value,
max_value,
num_rows,
gcd,
},
nbytes,
))
}
}
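A small worked example of the reconstruction above: the amplitude is stored in gcd units, so the maximum is recovered as `min_value + amplitude * gcd` (numbers are illustrative).

let (min_value, gcd, stored_amplitude) = (1_000u64, 250u64, 7u64);
let max_value = min_value + stored_amplitude * gcd;
assert_eq!(max_value, 2_750);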
impl BinarySerializable for ColumnStats {
fn serialize<W: Write + ?Sized>(&self, writer: &mut W) -> io::Result<()> {
VInt(self.min_value).serialize(writer)?;

View File

@@ -25,6 +25,7 @@ use common::{BinarySerializable, CountingWriter, OwnedBytes, VInt, VIntU128};
use tantivy_bitpacker::{BitPacker, BitUnpacker};
use crate::RowId;
use crate::column::ValueRange;
use crate::column_values::ColumnValues;
/// The cost per blank is quite hard actually, since blanks are delta encoded, the actual cost of
@@ -338,14 +339,48 @@ impl ColumnValues<u64> for CompactSpaceU64Accessor {
#[inline]
fn get_row_ids_for_value_range(
&self,
value_range: RangeInclusive<u64>,
value_range: ValueRange<u64>,
position_range: Range<u32>,
positions: &mut Vec<u32>,
) {
let value_range = self.0.compact_to_u128(*value_range.start() as u32)
..=self.0.compact_to_u128(*value_range.end() as u32);
self.0
.get_row_ids_for_value_range(value_range, position_range, positions)
match value_range {
ValueRange::Inclusive(value_range) => {
let value_range = ValueRange::Inclusive(
self.0.compact_to_u128(*value_range.start() as u32)
..=self.0.compact_to_u128(*value_range.end() as u32),
);
self.0
.get_row_ids_for_value_range(value_range, position_range, positions)
}
ValueRange::All => {
let position_range = position_range.start..position_range.end.min(self.num_vals());
positions.extend(position_range);
}
ValueRange::GreaterThan(threshold, _) => {
let value_range =
ValueRange::GreaterThan(self.0.compact_to_u128(threshold as u32), false);
self.0
.get_row_ids_for_value_range(value_range, position_range, positions)
}
ValueRange::GreaterThanOrEqual(threshold, _) => {
let value_range =
ValueRange::GreaterThanOrEqual(self.0.compact_to_u128(threshold as u32), false);
self.0
.get_row_ids_for_value_range(value_range, position_range, positions)
}
ValueRange::LessThan(threshold, _) => {
let value_range =
ValueRange::LessThan(self.0.compact_to_u128(threshold as u32), false);
self.0
.get_row_ids_for_value_range(value_range, position_range, positions)
}
ValueRange::LessThanOrEqual(threshold, _) => {
let value_range =
ValueRange::LessThanOrEqual(self.0.compact_to_u128(threshold as u32), false);
self.0
.get_row_ids_for_value_range(value_range, position_range, positions)
}
}
}
}
@@ -375,10 +410,47 @@ impl ColumnValues<u128> for CompactSpaceDecompressor {
#[inline]
fn get_row_ids_for_value_range(
&self,
value_range: RangeInclusive<u128>,
value_range: ValueRange<u128>,
position_range: Range<u32>,
positions: &mut Vec<u32>,
) {
let value_range = match value_range {
ValueRange::Inclusive(value_range) => value_range,
ValueRange::All => {
let position_range = position_range.start..position_range.end.min(self.num_vals());
positions.extend(position_range);
return;
}
ValueRange::GreaterThan(threshold, _) => {
let max = self.max_value();
if threshold >= max {
return;
}
(threshold + 1)..=max
}
ValueRange::GreaterThanOrEqual(threshold, _) => {
let max = self.max_value();
if threshold > max {
return;
}
threshold..=max
}
ValueRange::LessThan(threshold, _) => {
let min = self.min_value();
if threshold <= min {
return;
}
min..=(threshold - 1)
}
ValueRange::LessThanOrEqual(threshold, _) => {
let min = self.min_value();
if threshold < min {
return;
}
min..=threshold
}
};
if value_range.start() > value_range.end() {
return;
}
@@ -560,7 +632,7 @@ mod tests {
.collect::<Vec<_>>();
let mut positions = Vec::new();
decompressor.get_row_ids_for_value_range(
range,
ValueRange::Inclusive(range),
0..decompressor.num_vals(),
&mut positions,
);
@@ -604,7 +676,11 @@ mod tests {
let val = *val;
let pos = pos as u32;
let mut positions = Vec::new();
decomp.get_row_ids_for_value_range(val..=val, pos..pos + 1, &mut positions);
decomp.get_row_ids_for_value_range(
ValueRange::Inclusive(val..=val),
pos..pos + 1,
&mut positions,
);
assert_eq!(positions, vec![pos]);
}
@@ -746,7 +822,11 @@ mod tests {
doc_id_range: Range<u32>,
) -> Vec<u32> {
let mut positions = Vec::new();
column.get_row_ids_for_value_range(value_range, doc_id_range, &mut positions);
column.get_row_ids_for_value_range(
ValueRange::Inclusive(value_range),
doc_id_range,
&mut positions,
);
positions
}
@@ -769,7 +849,7 @@ mod tests {
];
let mut out = Vec::new();
serialize_column_values_u128(&&vals[..], &mut out).unwrap();
let decomp = open_u128_mapped(OwnedBytes::new(out)).unwrap();
let decomp = open_u128_mapped(FileSlice::from(out)).unwrap();
let complete_range = 0..vals.len() as u32;
assert_eq!(
@@ -823,6 +903,7 @@ mod tests {
let _data = test_aux_vals(vals);
}
use common::file_slice::FileSlice;
use proptest::prelude::*;
fn num_strategy() -> impl Strategy<Value = u128> {

View File

@@ -5,7 +5,8 @@ use std::sync::Arc;
mod compact_space;
use common::{BinarySerializable, OwnedBytes, VInt};
use common::file_slice::FileSlice;
use common::{BinarySerializable, VInt};
pub use compact_space::{
CompactSpaceCompressor, CompactSpaceDecompressor, CompactSpaceU64Accessor,
};
@@ -101,8 +102,9 @@ impl U128FastFieldCodecType {
/// Returns the correct codec reader wrapped in the `Arc` for the data.
pub fn open_u128_mapped<T: MonotonicallyMappableToU128 + Debug>(
mut bytes: OwnedBytes,
file_slice: FileSlice,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
let mut bytes = file_slice.read_bytes()?;
let header = U128Header::deserialize(&mut bytes)?;
assert_eq!(header.codec_type, U128FastFieldCodecType::CompactSpace);
let reader = CompactSpaceDecompressor::open(bytes)?;
@@ -120,7 +122,8 @@ pub fn open_u128_mapped<T: MonotonicallyMappableToU128 + Debug>(
/// # Notice
/// In case there are new codecs added, check for usages of `CompactSpaceDecompressorU64` and
/// also handle the new codecs.
pub fn open_u128_as_compact_u64(mut bytes: OwnedBytes) -> io::Result<Arc<dyn ColumnValues<u64>>> {
pub fn open_u128_as_compact_u64(file_slice: FileSlice) -> io::Result<Arc<dyn ColumnValues<u64>>> {
let mut bytes = file_slice.read_bytes()?;
let header = U128Header::deserialize(&mut bytes)?;
assert_eq!(header.codec_type, U128FastFieldCodecType::CompactSpace);
let reader = CompactSpaceU64Accessor::open(bytes)?;

View File

@@ -1,11 +1,14 @@
use std::io::{self, Write};
use std::num::NonZeroU64;
use std::ops::{Range, RangeInclusive};
use std::sync::{Arc, OnceLock};
use common::{BinarySerializable, OwnedBytes};
use common::file_slice::FileSlice;
use common::{BinarySerializable, HasLen, OwnedBytes};
use fastdivide::DividerU64;
use tantivy_bitpacker::{BitPacker, BitUnpacker, compute_num_bits};
use crate::column::ValueRange;
use crate::column_values::u64_based::{ColumnCodec, ColumnCodecEstimator, ColumnStats};
use crate::{ColumnValues, RowId};
@@ -13,9 +16,40 @@ use crate::{ColumnValues, RowId};
/// fast field is required.
#[derive(Clone)]
pub struct BitpackedReader {
data: OwnedBytes,
data: FileSlice,
bit_unpacker: BitUnpacker,
stats: ColumnStats,
blocks: Arc<[OnceLock<Block>]>,
}
impl BitpackedReader {
#[inline(always)]
fn unpack_val(&self, doc: u32) -> u64 {
let block_num = self.bit_unpacker.block_num(doc);
if block_num == 0 && self.blocks.is_empty() {
return 0;
}
let block = self.blocks[block_num].get_or_init(|| {
let block_range = self.bit_unpacker.block(block_num, self.data.len());
let offset = block_range.start;
let data = self
.data
.slice(block_range)
.read_bytes()
.expect("Failed to read column values.");
Block { offset, data }
});
self.bit_unpacker
.get_from_subset(doc, block.offset, &block.data)
}
}
struct Block {
offset: usize,
data: OwnedBytes,
}
#[inline(always)]
@@ -57,8 +91,9 @@ fn transform_range_before_linear_transformation(
impl ColumnValues for BitpackedReader {
#[inline(always)]
fn get_val(&self, doc: u32) -> u64 {
self.stats.min_value + self.stats.gcd.get() * self.bit_unpacker.get(doc, &self.data)
self.stats.min_value + self.stats.gcd.get() * self.unpack_val(doc)
}
#[inline]
fn min_value(&self) -> u64 {
self.stats.min_value
@@ -72,24 +107,329 @@ impl ColumnValues for BitpackedReader {
self.stats.num_rows
}
fn get_vals_in_value_range(
&self,
input_indexes: &[u32],
input_doc_ids: &[u32],
output: &mut Vec<crate::ComparableDoc<Option<u64>, crate::DocId>>,
value_range: ValueRange<u64>,
) {
match value_range {
ValueRange::All => {
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(self.get_val(idx)),
});
}
}
ValueRange::Inclusive(range) => {
if let Some(transformed_range) =
transform_range_before_linear_transformation(&self.stats, range)
{
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
let raw_val = self.unpack_val(idx);
if transformed_range.contains(&raw_val) {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(
self.stats.min_value + self.stats.gcd.get() * raw_val,
),
});
}
}
}
}
ValueRange::GreaterThan(threshold, _) => {
if threshold < self.stats.min_value {
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(self.get_val(idx)),
});
}
} else if threshold >= self.stats.max_value {
// All filtered out
} else {
let raw_threshold = (threshold - self.stats.min_value) / self.stats.gcd.get();
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
let raw_val = self.unpack_val(idx);
if raw_val > raw_threshold {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(
self.stats.min_value + self.stats.gcd.get() * raw_val,
),
});
}
}
}
}
ValueRange::GreaterThanOrEqual(threshold, _) => {
if threshold <= self.stats.min_value {
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(self.get_val(idx)),
});
}
} else if threshold > self.stats.max_value {
// All filtered out
} else {
let diff = threshold - self.stats.min_value;
let gcd = self.stats.gcd.get();
let raw_threshold = (diff + gcd - 1) / gcd;
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
let raw_val = self.unpack_val(idx);
if raw_val >= raw_threshold {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(
self.stats.min_value + self.stats.gcd.get() * raw_val,
),
});
}
}
}
}
ValueRange::LessThan(threshold, _) => {
if threshold > self.stats.max_value {
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(self.get_val(idx)),
});
}
} else if threshold <= self.stats.min_value {
// All filtered out
} else {
let diff = threshold - self.stats.min_value;
let gcd = self.stats.gcd.get();
let raw_threshold = if diff % gcd == 0 {
diff / gcd
} else {
diff / gcd + 1
};
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
let raw_val = self.unpack_val(idx);
if raw_val < raw_threshold {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(
self.stats.min_value + self.stats.gcd.get() * raw_val,
),
});
}
}
}
}
ValueRange::LessThanOrEqual(threshold, _) => {
if threshold >= self.stats.max_value {
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(self.get_val(idx)),
});
}
} else if threshold < self.stats.min_value {
// All filtered out
} else {
let diff = threshold - self.stats.min_value;
let gcd = self.stats.gcd.get();
let raw_threshold = diff / gcd;
for (&idx, &doc) in input_indexes.iter().zip(input_doc_ids.iter()) {
let raw_val = self.unpack_val(idx);
if raw_val <= raw_threshold {
output.push(crate::ComparableDoc {
doc,
sort_key: Some(
self.stats.min_value + self.stats.gcd.get() * raw_val,
),
});
}
}
}
}
}
}
fn get_row_ids_for_value_range(
&self,
range: RangeInclusive<u64>,
range: ValueRange<u64>,
doc_id_range: Range<u32>,
positions: &mut Vec<u32>,
) {
let Some(transformed_range) =
transform_range_before_linear_transformation(&self.stats, range)
else {
positions.clear();
return;
};
self.bit_unpacker.get_ids_for_value_range(
transformed_range,
doc_id_range,
&self.data,
positions,
);
match range {
ValueRange::All => {
positions.extend(doc_id_range);
return;
}
ValueRange::Inclusive(range) => {
let Some(transformed_range) =
transform_range_before_linear_transformation(&self.stats, range)
else {
positions.clear();
return;
};
// TODO: This does not use the `self.blocks` cache, because callers usually
// perform sequential and fairly dense reads. Switch to iterating over blocks
// if that assumption turns out to be incorrect.
let data_range = self
.bit_unpacker
.block_oblivious_range(doc_id_range.clone(), self.data.len());
let data_offset = data_range.start;
let data_subset = self
.data
.slice(data_range)
.read_bytes()
.expect("Failed to read column values.");
self.bit_unpacker.get_ids_for_value_range_from_subset(
transformed_range,
doc_id_range,
data_offset,
&data_subset,
positions,
);
}
ValueRange::GreaterThan(threshold, _) => {
if threshold < self.stats.min_value {
positions.extend(doc_id_range);
return;
}
if threshold >= self.stats.max_value {
return;
}
let raw_threshold = (threshold - self.stats.min_value) / self.stats.gcd.get();
// We want raw > raw_threshold, but
// `bit_unpacker.get_ids_for_value_range_from_subset` takes a `RangeInclusive`,
// so we use (raw_threshold + 1)..=max_raw, where max_raw is the largest
// possible raw value, (max_value - min_value) / gcd.
let max_raw = (self.stats.max_value - self.stats.min_value) / self.stats.gcd.get();
let transformed_range = (raw_threshold + 1)..=max_raw;
let data_range = self
.bit_unpacker
.block_oblivious_range(doc_id_range.clone(), self.data.len());
let data_offset = data_range.start;
let data_subset = self
.data
.slice(data_range)
.read_bytes()
.expect("Failed to read column values.");
self.bit_unpacker.get_ids_for_value_range_from_subset(
transformed_range,
doc_id_range,
data_offset,
&data_subset,
positions,
);
}
ValueRange::GreaterThanOrEqual(threshold, _) => {
if threshold <= self.stats.min_value {
positions.extend(doc_id_range);
return;
}
if threshold > self.stats.max_value {
return;
}
let diff = threshold - self.stats.min_value;
let gcd = self.stats.gcd.get();
let raw_threshold = (diff + gcd - 1) / gcd;
// We want raw >= raw_threshold.
let max_raw = (self.stats.max_value - self.stats.min_value) / self.stats.gcd.get();
let transformed_range = raw_threshold..=max_raw;
let data_range = self
.bit_unpacker
.block_oblivious_range(doc_id_range.clone(), self.data.len());
let data_offset = data_range.start;
let data_subset = self
.data
.slice(data_range)
.read_bytes()
.expect("Failed to read column values.");
self.bit_unpacker.get_ids_for_value_range_from_subset(
transformed_range,
doc_id_range,
data_offset,
&data_subset,
positions,
);
}
ValueRange::LessThan(threshold, _) => {
if threshold > self.stats.max_value {
positions.extend(doc_id_range);
return;
}
if threshold <= self.stats.min_value {
return;
}
let diff = threshold - self.stats.min_value;
let gcd = self.stats.gcd.get();
// We want raw < raw_threshold_limit, i.e. raw <= raw_threshold_limit - 1.
let raw_threshold_limit = if diff % gcd == 0 {
diff / gcd
} else {
diff / gcd + 1
};
if raw_threshold_limit == 0 {
return;
}
let transformed_range = 0..=(raw_threshold_limit - 1);
let data_range = self
.bit_unpacker
.block_oblivious_range(doc_id_range.clone(), self.data.len());
let data_offset = data_range.start;
let data_subset = self
.data
.slice(data_range)
.read_bytes()
.expect("Failed to read column values.");
self.bit_unpacker.get_ids_for_value_range_from_subset(
transformed_range,
doc_id_range,
data_offset,
&data_subset,
positions,
);
}
ValueRange::LessThanOrEqual(threshold, _) => {
if threshold >= self.stats.max_value {
positions.extend(doc_id_range);
return;
}
if threshold < self.stats.min_value {
return;
}
let diff = threshold - self.stats.min_value;
let gcd = self.stats.gcd.get();
// We want raw <= raw_threshold.
let raw_threshold = diff / gcd;
let transformed_range = 0..=raw_threshold;
let data_range = self
.bit_unpacker
.block_oblivious_range(doc_id_range.clone(), self.data.len());
let data_offset = data_range.start;
let data_subset = self
.data
.slice(data_range)
.read_bytes()
.expect("Failed to read column values.");
self.bit_unpacker.get_ids_for_value_range_from_subset(
transformed_range,
doc_id_range,
data_offset,
&data_subset,
positions,
);
}
}
}
}
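A hedged worked example of the threshold mapping used in the `GreaterThanOrEqual` branches above: stored values are `min_value + gcd * raw`, so a bound on values becomes a bound on `raw` via a ceiling division.

let (min_value, gcd) = (1_000u64, 250u64);
let threshold = 1_300u64;                            // caller-supplied bound on values
let diff = threshold - min_value;                    // 300
let raw_threshold = (diff + gcd - 1) / gcd;          // ceil(300 / 250) = 2
assert_eq!(min_value + gcd * raw_threshold, 1_500);  // first representable value >= 1_300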
@@ -133,14 +473,20 @@ impl ColumnCodec for BitpackedCodec {
type Estimator = BitpackedCodecEstimator;
/// Opens a fast field given a file.
fn load(mut data: OwnedBytes) -> io::Result<Self::ColumnValues> {
let stats = ColumnStats::deserialize(&mut data)?;
fn load(file_slice: FileSlice) -> io::Result<Self::ColumnValues> {
let (stats, data) = ColumnStats::deserialize_from_tail(file_slice)?;
let num_bits = num_bits(&stats);
let bit_unpacker = BitUnpacker::new(num_bits);
let block_count = bit_unpacker.block_count(data.len());
Ok(BitpackedReader {
data,
bit_unpacker,
stats,
blocks: (0..block_count).map(|_| OnceLock::new()).collect(),
})
}
}

View File

@@ -1,8 +1,10 @@
use std::io;
use std::io::Write;
use std::sync::Arc;
use std::{io, iter};
use std::ops::{Deref, DerefMut};
use std::sync::{Arc, OnceLock};
use common::{BinarySerializable, CountingWriter, DeserializeFrom, OwnedBytes};
use common::file_slice::FileSlice;
use common::{BinarySerializable, CountingWriter, DeserializeFrom, HasLen, OwnedBytes};
use fastdivide::DividerU64;
use tantivy_bitpacker::{BitPacker, BitUnpacker, compute_num_bits};
@@ -172,32 +174,63 @@ impl ColumnCodec<u64> for BlockwiseLinearCodec {
type Estimator = BlockwiseLinearEstimator;
fn load(mut bytes: OwnedBytes) -> io::Result<Self::ColumnValues> {
let stats = ColumnStats::deserialize(&mut bytes)?;
let footer_len: u32 = (&bytes[bytes.len() - 4..]).deserialize()?;
let footer_offset = bytes.len() - 4 - footer_len as usize;
let (data, mut footer) = bytes.split(footer_offset);
fn load(file_slice: FileSlice) -> io::Result<Self::ColumnValues> {
let (stats, body) = ColumnStats::deserialize_from_tail(file_slice)?;
let (_, footer) = body.clone().split_from_end(4);
let footer_len: u32 = footer.read_bytes()?.as_slice().deserialize()?;
let (data, footer) = body.split_from_end(footer_len as usize + 4);
let mut footer = footer.read_bytes()?;
let num_blocks = compute_num_blocks(stats.num_rows);
let mut blocks: Vec<Block> = iter::repeat_with(|| Block::deserialize(&mut footer))
.take(num_blocks as usize)
.collect::<io::Result<_>>()?;
let mut start_offset = 0;
for block in &mut blocks {
let mut blocks = Vec::with_capacity(num_blocks as usize);
for _ in 0..num_blocks {
let mut block = Block::deserialize(&mut footer)?;
let len = (block.bit_unpacker.bit_width() as usize) * BLOCK_SIZE as usize / 8;
block.data_start_offset = start_offset;
start_offset += (block.bit_unpacker.bit_width() as usize) * BLOCK_SIZE as usize / 8;
blocks.push(BlockWithData {
block,
file_slice: data.slice(start_offset..(start_offset + len).min(data.len())),
data: Default::default(),
});
start_offset += len;
}
Ok(BlockwiseLinearReader {
blocks: blocks.into_boxed_slice().into(),
data,
stats,
})
}
}
struct BlockWithData {
block: Block,
file_slice: FileSlice,
data: OnceLock<OwnedBytes>,
}
impl Deref for BlockWithData {
type Target = Block;
fn deref(&self) -> &Self::Target {
&self.block
}
}
impl DerefMut for BlockWithData {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.block
}
}
#[derive(Clone)]
pub struct BlockwiseLinearReader {
blocks: Arc<[Block]>,
data: OwnedBytes,
blocks: Arc<[BlockWithData]>,
stats: ColumnStats,
}
@@ -208,7 +241,9 @@ impl ColumnValues for BlockwiseLinearReader {
let idx_within_block = idx % BLOCK_SIZE;
let block = &self.blocks[block_id];
let interpoled_val: u64 = block.line.eval(idx_within_block);
let block_bytes = &self.data[block.data_start_offset..];
let block_bytes = block
.data
.get_or_init(|| block.file_slice.read_bytes().unwrap());
let bitpacked_diff = block.bit_unpacker.get(idx_within_block, block_bytes);
// TODO optimize me! the line parameters could be tweaked to include the multiplication and
// remove the dependency.

View File

@@ -1,5 +1,6 @@
use std::io;
use common::file_slice::FileSlice;
use common::{BinarySerializable, OwnedBytes};
use tantivy_bitpacker::{BitPacker, BitUnpacker, compute_num_bits};
@@ -190,7 +191,8 @@ impl ColumnCodec for LinearCodec {
type Estimator = LinearCodecEstimator;
fn load(mut data: OwnedBytes) -> io::Result<Self::ColumnValues> {
fn load(file_slice: FileSlice) -> io::Result<Self::ColumnValues> {
let mut data = file_slice.read_bytes()?;
let stats = ColumnStats::deserialize(&mut data)?;
let linear_params = LinearParams::deserialize(&mut data)?;
Ok(LinearReader {

View File

@@ -8,7 +8,8 @@ use std::io;
use std::io::Write;
use std::sync::Arc;
use common::{BinarySerializable, OwnedBytes};
use common::BinarySerializable;
use common::file_slice::FileSlice;
use crate::column_values::monotonic_mapping::{
StrictlyMonotonicMappingInverter, StrictlyMonotonicMappingToInternal,
@@ -60,7 +61,7 @@ pub trait ColumnCodec<T: PartialOrd = u64> {
type Estimator: ColumnCodecEstimator + Default;
/// Loads a column that has been serialized using this codec.
fn load(bytes: OwnedBytes) -> io::Result<Self::ColumnValues>;
fn load(file_slice: FileSlice) -> io::Result<Self::ColumnValues>;
/// Returns an estimator.
fn estimator() -> Self::Estimator {
@@ -111,20 +112,22 @@ impl CodecType {
fn load<T: MonotonicallyMappableToU64>(
&self,
bytes: OwnedBytes,
file_slice: FileSlice,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
match self {
CodecType::Bitpacked => load_specific_codec::<BitpackedCodec, T>(bytes),
CodecType::Linear => load_specific_codec::<LinearCodec, T>(bytes),
CodecType::BlockwiseLinear => load_specific_codec::<BlockwiseLinearCodec, T>(bytes),
CodecType::Bitpacked => load_specific_codec::<BitpackedCodec, T>(file_slice),
CodecType::Linear => load_specific_codec::<LinearCodec, T>(file_slice),
CodecType::BlockwiseLinear => {
load_specific_codec::<BlockwiseLinearCodec, T>(file_slice)
}
}
}
}
fn load_specific_codec<C: ColumnCodec, T: MonotonicallyMappableToU64>(
bytes: OwnedBytes,
file_slice: FileSlice,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
let reader = C::load(bytes)?;
let reader = C::load(file_slice)?;
let reader_typed = monotonic_map_column(
reader,
StrictlyMonotonicMappingInverter::from(StrictlyMonotonicMappingToInternal::<T>::new()),
@@ -189,25 +192,28 @@ pub fn serialize_u64_based_column_values<T: MonotonicallyMappableToU64>(
///
/// This method first identifies the codec off the first byte.
pub fn load_u64_based_column_values<T: MonotonicallyMappableToU64>(
mut bytes: OwnedBytes,
file_slice: FileSlice,
) -> io::Result<Arc<dyn ColumnValues<T>>> {
let codec_type: CodecType = bytes
.first()
.copied()
let (header, body) = file_slice.split(1);
let codec_type: CodecType = header
.read_bytes()?
.as_slice()
.get(0)
.cloned()
.and_then(CodecType::try_from_code)
.ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, "Failed to read codec type"))?;
bytes.advance(1);
codec_type.load(bytes)
codec_type.load(body)
}
/// Helper function to serialize a column (autodetect from all codecs) and then open it
#[cfg(test)]
pub fn serialize_and_load_u64_based_column_values<T: MonotonicallyMappableToU64>(
vals: &dyn Iterable,
codec_types: &[CodecType],
) -> Arc<dyn ColumnValues<T>> {
let mut buffer = Vec::new();
serialize_u64_based_column_values(vals, codec_types, &mut buffer).unwrap();
load_u64_based_column_values::<T>(OwnedBytes::new(buffer)).unwrap()
load_u64_based_column_values::<T>(FileSlice::from(buffer)).unwrap()
}
#[cfg(test)]

View File

@@ -1,3 +1,4 @@
use common::HasLen;
use proptest::prelude::*;
use proptest::{prop_oneof, proptest};
use rand::Rng;
@@ -13,7 +14,7 @@ fn test_serialize_and_load_simple() {
)
.unwrap();
assert_eq!(buffer.len(), 7);
let col = load_u64_based_column_values::<u64>(OwnedBytes::new(buffer)).unwrap();
let col = load_u64_based_column_values::<u64>(FileSlice::from(buffer)).unwrap();
assert_eq!(col.num_vals(), 3);
assert_eq!(col.get_val(0), 1);
assert_eq!(col.get_val(1), 2);
@@ -30,7 +31,7 @@ fn test_empty_column_i64() {
continue;
}
num_acceptable_codecs += 1;
let col = load_u64_based_column_values::<i64>(OwnedBytes::new(buffer)).unwrap();
let col = load_u64_based_column_values::<i64>(FileSlice::from(buffer)).unwrap();
assert_eq!(col.num_vals(), 0);
assert_eq!(col.min_value(), i64::MIN);
assert_eq!(col.max_value(), i64::MIN);
@@ -48,7 +49,7 @@ fn test_empty_column_u64() {
continue;
}
num_acceptable_codecs += 1;
let col = load_u64_based_column_values::<u64>(OwnedBytes::new(buffer)).unwrap();
let col = load_u64_based_column_values::<u64>(FileSlice::from(buffer)).unwrap();
assert_eq!(col.num_vals(), 0);
assert_eq!(col.min_value(), u64::MIN);
assert_eq!(col.max_value(), u64::MIN);
@@ -66,7 +67,7 @@ fn test_empty_column_f64() {
continue;
}
num_acceptable_codecs += 1;
let col = load_u64_based_column_values::<f64>(OwnedBytes::new(buffer)).unwrap();
let col = load_u64_based_column_values::<f64>(FileSlice::from(buffer)).unwrap();
assert_eq!(col.num_vals(), 0);
// FIXME. f64::MIN would be better!
assert!(col.min_value().is_nan());
@@ -97,7 +98,7 @@ pub(crate) fn create_and_validate<TColumnCodec: ColumnCodec>(
let actual_compression = buffer.len() as u64;
let reader = TColumnCodec::load(OwnedBytes::new(buffer)).unwrap();
let reader = TColumnCodec::load(FileSlice::from(buffer)).unwrap();
assert_eq!(reader.num_vals(), vals.len() as u32);
let mut buffer = Vec::new();
for (doc, orig_val) in vals.iter().copied().enumerate() {
@@ -131,7 +132,7 @@ pub(crate) fn create_and_validate<TColumnCodec: ColumnCodec>(
.collect();
let mut positions = Vec::new();
reader.get_row_ids_for_value_range(
vals[test_rand_idx]..=vals[test_rand_idx],
crate::column::ValueRange::Inclusive(vals[test_rand_idx]..=vals[test_rand_idx]),
0..vals.len() as u32,
&mut positions,
);
@@ -326,7 +327,7 @@ fn test_fastfield_gcd_i64_with_codec(codec_type: CodecType, num_vals: usize) ->
&[codec_type],
&mut buffer,
)?;
let buffer = OwnedBytes::new(buffer);
let buffer = FileSlice::from(buffer);
let column = crate::column_values::load_u64_based_column_values::<i64>(buffer.clone())?;
assert_eq!(column.get_val(0), -4000i64);
assert_eq!(column.get_val(1), -3000i64);
@@ -343,7 +344,7 @@ fn test_fastfield_gcd_i64_with_codec(codec_type: CodecType, num_vals: usize) ->
&[codec_type],
&mut buffer_without_gcd,
)?;
let buffer_without_gcd = OwnedBytes::new(buffer_without_gcd);
let buffer_without_gcd = FileSlice::from(buffer_without_gcd);
assert!(buffer_without_gcd.len() > buffer.len());
Ok(())
@@ -369,7 +370,7 @@ fn test_fastfield_gcd_u64_with_codec(codec_type: CodecType, num_vals: usize) ->
&[codec_type],
&mut buffer,
)?;
let buffer = OwnedBytes::new(buffer);
let buffer = FileSlice::from(buffer);
let column = crate::column_values::load_u64_based_column_values::<u64>(buffer.clone())?;
assert_eq!(column.get_val(0), 1000u64);
assert_eq!(column.get_val(1), 2000u64);
@@ -386,7 +387,7 @@ fn test_fastfield_gcd_u64_with_codec(codec_type: CodecType, num_vals: usize) ->
&[codec_type],
&mut buffer_without_gcd,
)?;
let buffer_without_gcd = OwnedBytes::new(buffer_without_gcd);
let buffer_without_gcd = FileSlice::from(buffer_without_gcd);
assert!(buffer_without_gcd.len() > buffer.len());
Ok(())
}
@@ -405,7 +406,7 @@ fn test_fastfield_gcd_u64() -> io::Result<()> {
#[test]
pub fn test_fastfield2() {
let test_fastfield = crate::column_values::serialize_and_load_u64_based_column_values::<u64>(
let test_fastfield = serialize_and_load_u64_based_column_values::<u64>(
&&[100u64, 200u64, 300u64][..],
&ALL_U64_CODEC_TYPES,
);

View File

@@ -4,6 +4,7 @@ mod term_merger;
use std::collections::{BTreeMap, HashSet};
use std::io;
use std::io::ErrorKind;
use std::net::Ipv6Addr;
use std::sync::Arc;
@@ -78,6 +79,7 @@ pub fn merge_columnar(
required_columns: &[(String, ColumnType)],
merge_row_order: MergeRowOrder,
output: &mut impl io::Write,
cancel: impl Fn() -> bool,
) -> io::Result<()> {
let mut serializer = ColumnarSerializer::new(output);
let num_docs_per_columnar = columnar_readers
@@ -87,6 +89,9 @@ pub fn merge_columnar(
let columns_to_merge = group_columns_for_merge(columnar_readers, required_columns)?;
for res in columns_to_merge {
if cancel() {
return Err(io::Error::new(ErrorKind::Interrupted, "Merge cancelled"));
}
let ((column_name, _column_type_category), grouped_columns) = res;
let grouped_columns = grouped_columns.open(&merge_row_order)?;
if grouped_columns.is_empty() {

View File

@@ -205,6 +205,7 @@ fn test_merge_columnar_numbers() {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
|| false,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
@@ -233,6 +234,7 @@ fn test_merge_columnar_texts() {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
|| false,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
@@ -282,6 +284,7 @@ fn test_merge_columnar_byte() {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
|| false,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
@@ -338,6 +341,7 @@ fn test_merge_columnar_byte_with_missing() {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
|| false,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
@@ -390,6 +394,7 @@ fn test_merge_columnar_different_types() {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
|| false,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
@@ -455,6 +460,7 @@ fn test_merge_columnar_different_empty_cardinality() {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut buffer,
|| false,
)
.unwrap();
let columnar_reader = ColumnarReader::open(buffer).unwrap();
@@ -565,6 +571,7 @@ proptest! {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut out,
|| false,
).unwrap();
let merged_reader = ColumnarReader::open(out).unwrap();
@@ -582,6 +589,7 @@ proptest! {
&[],
MergeRowOrder::Stack(stack_merge_order),
&mut out,
|| false,
).unwrap();
}

View File

@@ -0,0 +1,22 @@
use serde::{Deserialize, Serialize};
/// Contains a feature (field, score, etc.) of a document along with the document address.
///
/// Used only by TopNComputer, which implements the actual comparison via a `Comparator`.
#[derive(Clone, Default, Eq, PartialEq, Serialize, Deserialize)]
pub struct ComparableDoc<T, D> {
/// The feature of the document. In practice, this is
/// a type which can be compared with a `Comparator<T>`.
pub sort_key: T,
/// The document address. In practice, this is either a `DocId` or `DocAddress`.
pub doc: D,
}
impl<T: std::fmt::Debug, D: std::fmt::Debug> std::fmt::Debug for ComparableDoc<T, D> {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
f.debug_struct("ComparableDoc")
.field("feature", &self.sort_key)
.field("doc", &self.doc)
.finish()
}
}
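A minimal, hedged illustration of what `ComparableDoc` carries; the real ordering is supplied by a `Comparator` inside `TopNComputer`, which is out of scope here, so the sketch just sorts by the key directly.

let mut docs = vec![
    ComparableDoc { sort_key: Some(30u64), doc: 2u32 },
    ComparableDoc { sort_key: None, doc: 0u32 },
    ComparableDoc { sort_key: Some(10u64), doc: 1u32 },
];
docs.sort_by(|a, b| a.sort_key.cmp(&b.sort_key));
// With Option's natural ordering, None sorts first; Comparator implementations
// can treat None differently (e.g. nulls-highest).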

View File

@@ -71,7 +71,14 @@ fn test_format(path: &str) {
let columnar_readers = vec![&reader, &reader2];
let merge_row_order = StackMergeOrder::stack(&columnar_readers[..]);
let mut out = Vec::new();
merge_columnar(&columnar_readers, &[], merge_row_order.into(), &mut out).unwrap();
merge_columnar(
&columnar_readers,
&[],
merge_row_order.into(),
&mut out,
|| false,
)
.unwrap();
let reader = ColumnarReader::open(out).unwrap();
check_columns(&reader);
}

View File

@@ -3,7 +3,7 @@ use std::sync::Arc;
use std::{fmt, io};
use common::file_slice::FileSlice;
use common::{ByteCount, DateTime, OwnedBytes};
use common::{ByteCount, DateTime};
use serde::{Deserialize, Serialize};
use crate::column::{BytesColumn, Column, StrColumn};
@@ -239,8 +239,7 @@ pub struct DynamicColumnHandle {
impl DynamicColumnHandle {
// TODO rename load
pub fn open(&self) -> io::Result<DynamicColumn> {
let column_bytes: OwnedBytes = self.file_slice.read_bytes()?;
self.open_internal(column_bytes)
self.open_internal(self.file_slice.clone())
}
#[doc(hidden)]
@@ -259,16 +258,15 @@ impl DynamicColumnHandle {
/// If not, the fastfield reader will returns the u64-value associated with the original
/// FastValue.
pub fn open_u64_lenient(&self) -> io::Result<Option<Column<u64>>> {
let column_bytes = self.file_slice.read_bytes()?;
match self.column_type {
ColumnType::Str | ColumnType::Bytes => {
let column: BytesColumn =
crate::column::open_column_bytes(column_bytes, self.format_version)?;
crate::column::open_column_bytes(self.file_slice.clone(), self.format_version)?;
Ok(Some(column.term_ord_column))
}
ColumnType::IpAddr => {
let column = crate::column::open_column_u128_as_compact_u64(
column_bytes,
self.file_slice.clone(),
self.format_version,
)?;
Ok(Some(column))
@@ -278,40 +276,40 @@ impl DynamicColumnHandle {
| ColumnType::U64
| ColumnType::F64
| ColumnType::DateTime => {
let column =
crate::column::open_column_u64::<u64>(column_bytes, self.format_version)?;
let column = crate::column::open_column_u64::<u64>(
self.file_slice.clone(),
self.format_version,
)?;
Ok(Some(column))
}
}
}
fn open_internal(&self, column_bytes: OwnedBytes) -> io::Result<DynamicColumn> {
fn open_internal(&self, file_slice: FileSlice) -> io::Result<DynamicColumn> {
let dynamic_column: DynamicColumn = match self.column_type {
ColumnType::Bytes => {
crate::column::open_column_bytes(column_bytes, self.format_version)?.into()
crate::column::open_column_bytes(file_slice, self.format_version)?.into()
}
ColumnType::Str => {
crate::column::open_column_str(column_bytes, self.format_version)?.into()
crate::column::open_column_str(file_slice, self.format_version)?.into()
}
ColumnType::I64 => {
crate::column::open_column_u64::<i64>(column_bytes, self.format_version)?.into()
crate::column::open_column_u64::<i64>(file_slice, self.format_version)?.into()
}
ColumnType::U64 => {
crate::column::open_column_u64::<u64>(column_bytes, self.format_version)?.into()
crate::column::open_column_u64::<u64>(file_slice, self.format_version)?.into()
}
ColumnType::F64 => {
crate::column::open_column_u64::<f64>(column_bytes, self.format_version)?.into()
crate::column::open_column_u64::<f64>(file_slice, self.format_version)?.into()
}
ColumnType::Bool => {
crate::column::open_column_u64::<bool>(column_bytes, self.format_version)?.into()
crate::column::open_column_u64::<bool>(file_slice, self.format_version)?.into()
}
ColumnType::IpAddr => {
crate::column::open_column_u128::<Ipv6Addr>(column_bytes, self.format_version)?
.into()
crate::column::open_column_u128::<Ipv6Addr>(file_slice, self.format_version)?.into()
}
ColumnType::DateTime => {
crate::column::open_column_u64::<DateTime>(column_bytes, self.format_version)?
.into()
crate::column::open_column_u64::<DateTime>(file_slice, self.format_version)?.into()
}
};
Ok(dynamic_column)

View File

@@ -29,6 +29,7 @@ mod column;
pub mod column_index;
pub mod column_values;
mod columnar;
mod comparable_doc;
mod dictionary;
mod dynamic_column;
mod iterable;
@@ -36,7 +37,7 @@ pub(crate) mod utils;
mod value;
pub use block_accessor::ColumnBlockAccessor;
pub use column::{BytesColumn, Column, StrColumn};
pub use column::{BytesColumn, Column, StrColumn, ValueRange};
pub use column_index::ColumnIndex;
pub use column_values::{
ColumnValues, EmptyColumnValues, MonotonicallyMappableToU64, MonotonicallyMappableToU128,
@@ -45,6 +46,7 @@ pub use columnar::{
CURRENT_VERSION, ColumnType, ColumnarReader, ColumnarWriter, HasAssociatedColumnType,
MergeRowOrder, ShuffleMergeOrder, StackMergeOrder, Version, merge_columnar,
};
pub use comparable_doc::ComparableDoc;
use sstable::VoidSSTable;
pub use value::{NumericalType, NumericalValue};

View File

@@ -641,7 +641,7 @@ proptest! {
let columnar_readers_arr: Vec<&ColumnarReader> = columnar_readers.iter().collect();
let mut output: Vec<u8> = Vec::new();
let stack_merge_order = StackMergeOrder::stack(&columnar_readers_arr[..]).into();
crate::merge_columnar(&columnar_readers_arr[..], &[], stack_merge_order, &mut output).unwrap();
crate::merge_columnar(&columnar_readers_arr[..], &[], stack_merge_order, &mut output, || false,).unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap();
let concat_rows: Vec<Vec<(&'static str, ColumnValue)>> = columnar_docs.iter().flatten().cloned().collect();
let expected_merged_columnar = build_columnar(&concat_rows[..]);
@@ -665,6 +665,7 @@ fn test_columnar_merging_empty_columnar() {
&[],
crate::MergeRowOrder::Stack(stack_merge_order),
&mut output,
|| false,
)
.unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap();
@@ -702,6 +703,7 @@ fn test_columnar_merging_number_columns() {
&[],
crate::MergeRowOrder::Stack(stack_merge_order),
&mut output,
|| false,
)
.unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap();
@@ -775,6 +777,7 @@ fn test_columnar_merge_and_remap(
&[],
shuffle_merge_order.into(),
&mut output,
|| false,
)
.unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap();
@@ -817,6 +820,7 @@ fn test_columnar_merge_empty() {
&[],
shuffle_merge_order.into(),
&mut output,
|| false,
)
.unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap();
@@ -843,6 +847,7 @@ fn test_columnar_merge_single_str_column() {
&[],
shuffle_merge_order.into(),
&mut output,
|| false,
)
.unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap();
@@ -875,6 +880,7 @@ fn test_delete_decrease_cardinality() {
&[],
shuffle_merge_order.into(),
&mut output,
|| false,
)
.unwrap();
let merged_columnar = ColumnarReader::open(output).unwrap();

View File

@@ -0,0 +1,106 @@
use std::cell::RefCell;
use std::cmp::min;
use std::io;
use std::ops::Range;
use super::file_slice::FileSlice;
use super::{HasLen, OwnedBytes};
const DEFAULT_BUFFER_MAX_SIZE: usize = 512 * 1024; // 512K
/// A buffered reader for a FileSlice.
///
/// Reads the underlying `FileSlice` in large, sequential chunks to amortize
/// the cost of `read_bytes` calls, while keeping peak memory usage under control.
///
/// TODO: Rather than wrapping a `FileSlice` in buffering, it will usually be better to adjust a
/// `FileHandle` to directly handle buffering itself.
/// TODO: See: https://github.com/paradedb/paradedb/issues/3374
pub struct BufferedFileSlice {
file_slice: FileSlice,
buffer: RefCell<OwnedBytes>,
buffer_range: RefCell<Range<u64>>,
buffer_max_size: usize,
}
impl BufferedFileSlice {
/// Creates a new `BufferedFileSlice`.
///
/// The `buffer_max_size` is the amount of data that will be read from the
/// `FileSlice` on a buffer miss.
pub fn new(file_slice: FileSlice, buffer_max_size: usize) -> Self {
Self {
file_slice,
buffer: RefCell::new(OwnedBytes::empty()),
buffer_range: RefCell::new(0..0),
buffer_max_size,
}
}
/// Creates a new `BufferedFileSlice` with a default buffer max size.
pub fn new_with_default_buffer_size(file_slice: FileSlice) -> Self {
Self::new(file_slice, DEFAULT_BUFFER_MAX_SIZE)
}
/// Creates an empty `BufferedFileSlice`.
pub fn empty() -> Self {
Self::new(FileSlice::empty(), 0)
}
/// Returns an `OwnedBytes` corresponding to the given `required_range`.
///
/// If the requested range is not in the buffer, this will trigger a read
/// from the underlying `FileSlice`.
///
/// If the requested range is larger than the buffer_max_size, it will be read directly from the
/// source without buffering.
///
/// # Errors
///
/// Returns an `io::Error` if the underlying read fails or the range is
/// out of bounds.
pub fn get_bytes(&self, required_range: Range<u64>) -> io::Result<OwnedBytes> {
let buffer_range = self.buffer_range.borrow();
// Cache miss condition: the required range is not fully contained in the current buffer.
if required_range.start < buffer_range.start || required_range.end > buffer_range.end {
drop(buffer_range); // release borrow before mutating
if required_range.end > self.file_slice.len() as u64 {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
"Requested range extends beyond the end of the file slice.",
));
}
if (required_range.end - required_range.start) as usize > self.buffer_max_size {
// This read is larger than our buffer max size.
// Read it directly and bypass the buffer to avoid churning.
return self
.file_slice
.read_bytes_slice(required_range.start as usize..required_range.end as usize);
}
let new_buffer_start = required_range.start;
let new_buffer_end = min(
new_buffer_start + self.buffer_max_size as u64,
self.file_slice.len() as u64,
);
let read_range = new_buffer_start..new_buffer_end;
let new_buffer = self
.file_slice
.read_bytes_slice(read_range.start as usize..read_range.end as usize)?;
self.buffer.replace(new_buffer);
self.buffer_range.replace(read_range);
}
// Now the data is guaranteed to be in the buffer.
let buffer = self.buffer.borrow();
let buffer_range = self.buffer_range.borrow();
let local_start = (required_range.start - buffer_range.start) as usize;
let local_end = (required_range.end - buffer_range.start) as usize;
Ok(buffer.slice(local_start..local_end))
}
}
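A minimal usage sketch (the helper below is hypothetical and simply assumes it sits in this module next to `BufferedFileSlice`): sequential range reads share one buffer, so only buffer misses reach the underlying `FileSlice`.
/// Hypothetical helper: read a series of ranges sequentially through one shared buffer.
fn read_ranges_buffered(
    file_slice: FileSlice,
    ranges: &[std::ops::Range<u64>],
) -> std::io::Result<Vec<u8>> {
    let reader = BufferedFileSlice::new_with_default_buffer_size(file_slice);
    let mut out = Vec::new();
    for range in ranges {
        // Ranges inside the current 512K window are served from the cached buffer;
        // requests larger than the buffer bypass it and hit the FileSlice directly.
        out.extend_from_slice(&reader.get_bytes(range.clone())?);
    }
    Ok(out)
}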

View File

@@ -1,7 +1,7 @@
use std::fs::File;
use std::ops::{Deref, Range, RangeBounds};
use std::path::Path;
use std::sync::Arc;
use std::sync::{Arc, OnceLock};
use std::{fmt, io};
use async_trait::async_trait;
@@ -339,6 +339,27 @@ impl FileHandle for OwnedBytes {
}
}
pub struct DeferredFileSlice {
opener: Arc<dyn Fn() -> io::Result<FileSlice> + Send + Sync + 'static>,
file_slice: OnceLock<std::io::Result<FileSlice>>,
}
impl DeferredFileSlice {
pub fn new(opener: impl Fn() -> io::Result<FileSlice> + Send + Sync + 'static) -> Self {
DeferredFileSlice {
opener: Arc::new(opener),
file_slice: OnceLock::default(),
}
}
pub fn open(&self) -> io::Result<&FileSlice> {
match self.file_slice.get_or_init(|| (self.opener)()) {
Ok(file_slice) => Ok(file_slice),
Err(e) => Err(io::Error::new(io::ErrorKind::Other, e.to_string())),
}
}
}
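A minimal sketch of the deferred-open pattern (hypothetical, assuming it sits alongside `DeferredFileSlice` in this module): the opener closure only runs on the first call to `open()`, after which the `OnceLock` hands back the cached result.
/// Hypothetical helper: a slice whose (potentially expensive) open is delayed until first use.
fn lazy_slice() -> DeferredFileSlice {
    DeferredFileSlice::new(|| {
        // Any expensive open (mmap, directory lookup, ...) would go here.
        Ok(FileSlice::empty())
    })
}
Repeated calls to `open()` after the first return the cached `&FileSlice` without re-running the opener.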
#[cfg(test)]
mod tests {
use std::io;

View File

@@ -6,6 +6,7 @@ pub use byteorder::LittleEndian as Endianness;
mod bitset;
pub mod bounds;
pub mod buffered_file_slice;
mod byte_count;
mod datetime;
pub mod file_slice;

View File

@@ -58,6 +58,33 @@ impl BinarySerializable for VIntU128 {
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct VInt(pub u64);
impl VInt {
pub fn deserialize_with_size<R: Read>(reader: &mut R) -> io::Result<(Self, usize)> {
let mut nbytes = 0;
let mut bytes = reader.bytes();
let mut result = 0u64;
let mut shift = 0u64;
loop {
match bytes.next() {
Some(Ok(b)) => {
nbytes += 1;
result |= u64::from(b % 128u8) << shift;
if b >= STOP_BIT {
return Ok((VInt(result), nbytes));
}
shift += 7;
}
_ => {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"Reach end of buffer while reading VInt",
));
}
}
}
}
}
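As a quick illustration (a sketch assuming it runs next to this crate's existing `VInt` tests): a round-trip through `serialize` and the new `deserialize_with_size` returns both the decoded value and the number of bytes consumed.
#[test]
fn test_vint_deserialize_with_size_roundtrip() {
    let mut buffer: Vec<u8> = Vec::new();
    VInt(300).serialize(&mut buffer).unwrap();
    // 300 needs two 7-bit groups, so two bytes are consumed.
    let (value, nbytes) = VInt::deserialize_with_size(&mut &buffer[..]).unwrap();
    assert_eq!(value, VInt(300));
    assert_eq!(nbytes, 2);
}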
const STOP_BIT: u8 = 128;
#[inline]
@@ -225,7 +252,6 @@ impl BinarySerializable for VInt {
#[cfg(test)]
mod tests {
use super::{BinarySerializable, VInt, serialize_vint_u32};
fn aux_test_vint(val: u64) {

View File

@@ -0,0 +1,86 @@
// # Multiple Snippets Example
//
// This example demonstrates how to return multiple text fragments
// from a document, useful for long documents with matches in different locations.
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::schema::*;
use tantivy::snippet::SnippetGenerator;
use tantivy::{doc, Index, IndexWriter};
use tempfile::TempDir;
fn main() -> tantivy::Result<()> {
let index_path = TempDir::new()?;
// Define the schema
let mut schema_builder = Schema::builder();
let title = schema_builder.add_text_field("title", TEXT | STORED);
let body = schema_builder.add_text_field("body", TEXT | STORED);
let schema = schema_builder.build();
// Create the index
let index = Index::create_in_dir(&index_path, schema)?;
let mut index_writer: IndexWriter = index.writer(50_000_000)?;
// Index a long document with multiple occurrences of "rust"
index_writer.add_document(doc!(
title => "The Rust Programming Language",
body => "Rust is a systems programming language that runs blazingly fast, prevents \
segfaults, and guarantees thread safety. Lorem ipsum dolor sit amet, \
consectetur adipiscing elit. Sed do eiusmod tempor incididunt ut labore. \
Rust empowers everyone to build reliable and efficient software. More filler \
text to create distance between matches. Ut enim ad minim veniam, quis nostrud \
exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. \
The Rust compiler is known for its helpful error messages. Duis aute irure \
dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla \
pariatur. Rust has a strong type system and ownership model."
))?;
index_writer.commit()?;
let reader = index.reader()?;
let searcher = reader.searcher();
let query_parser = QueryParser::for_index(&index, vec![title, body]);
let query = query_parser.parse_query("rust")?;
let top_docs = searcher.search(&query, &TopDocs::with_limit(10).order_by_score())?;
// Create snippet generator
let mut snippet_generator = SnippetGenerator::create(&searcher, &*query, body)?;
println!("=== Single Snippet (Default Behavior) ===\n");
for (score, doc_address) in &top_docs {
let doc = searcher.doc::<TantivyDocument>(*doc_address)?;
let snippet = snippet_generator.snippet_from_doc(&doc);
println!("Document score: {}", score);
println!("Title: {}", doc.get_first(title).unwrap().as_str().unwrap());
println!("Single snippet: {}\n", snippet.to_html());
}
println!("\n=== Multiple Snippets (New Feature) ===\n");
// Configure to return multiple snippets
// Get up to 3 snippets
snippet_generator.set_snippets_limit(3);
// Smaller fragments
snippet_generator.set_max_num_chars(80);
// By default, multiple snippets are sorted by score. You can change this to sort by position.
// snippet_generator.set_sort_order(SnippetSortOrder::Position);
for (score, doc_address) in top_docs {
let doc = searcher.doc::<TantivyDocument>(doc_address)?;
let snippets = snippet_generator.snippets_from_doc(&doc);
println!("Document score: {}", score);
println!("Title: {}", doc.get_first(title).unwrap().as_str().unwrap());
println!("Found {} snippets:", snippets.len());
for (i, snippet) in snippets.iter().enumerate() {
println!(" Snippet {}: {}", i + 1, snippet.to_html());
}
println!();
}
Ok(())
}

runtests.sh Executable file
View File

@@ -0,0 +1,3 @@
#! /bin/bash
cargo +stable nextest run --features quickwit,mmap,stopwords,lz4-compression,zstd-compression,failpoints --verbose --workspace

View File

@@ -1,7 +1,8 @@
use std::cmp::Ordering;
use std::collections::HashMap;
use std::net::Ipv6Addr;
use columnar::{Column, ColumnType, ColumnarReader, DynamicColumn};
use columnar::{Column, ColumnType, ColumnarReader, DynamicColumn, ValueRange};
use common::json_path_writer::JSON_PATH_SEGMENT_SEP_STR;
use common::DateTime;
use regex::Regex;
@@ -16,7 +17,7 @@ use crate::aggregation::intermediate_agg_result::{
};
use crate::aggregation::segment_agg_result::SegmentAggregationCollector;
use crate::aggregation::AggregationError;
use crate::collector::sort_key::ReverseComparator;
use crate::collector::sort_key::{Comparator, ReverseComparator};
use crate::collector::TopNComputer;
use crate::schema::OwnedValue;
use crate::{DocAddress, DocId, SegmentOrdinal};
@@ -383,7 +384,7 @@ impl From<FastFieldValue> for OwnedValue {
/// Holds a fast field value in its u64 representation, and the order in which it should be sorted.
#[derive(Clone, Serialize, Deserialize, Debug)]
struct DocValueAndOrder {
pub(crate) struct DocValueAndOrder {
/// A fast field value in its u64 representation.
value: Option<u64>,
/// Sort order for the value
@@ -455,6 +456,37 @@ impl PartialEq for DocSortValuesAndFields {
impl Eq for DocSortValuesAndFields {}
impl Comparator<DocSortValuesAndFields> for ReverseComparator {
#[inline(always)]
fn compare(&self, lhs: &DocSortValuesAndFields, rhs: &DocSortValuesAndFields) -> Ordering {
rhs.cmp(lhs)
}
fn threshold_to_valuerange(
&self,
threshold: DocSortValuesAndFields,
) -> ValueRange<DocSortValuesAndFields> {
ValueRange::LessThan(threshold, true)
}
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub(crate) struct TopHitsSegmentSortKey(pub Vec<DocValueAndOrder>);
impl Comparator<TopHitsSegmentSortKey> for ReverseComparator {
#[inline(always)]
fn compare(&self, lhs: &TopHitsSegmentSortKey, rhs: &TopHitsSegmentSortKey) -> Ordering {
rhs.cmp(lhs)
}
fn threshold_to_valuerange(
&self,
threshold: TopHitsSegmentSortKey,
) -> ValueRange<TopHitsSegmentSortKey> {
ValueRange::LessThan(threshold, true)
}
}
/// The TopHitsCollector used for collecting over segments and merging results.
#[derive(Clone, Serialize, Deserialize, Debug)]
pub struct TopHitsTopNComputer {
@@ -518,7 +550,7 @@ impl TopHitsTopNComputer {
pub(crate) struct TopHitsSegmentCollector {
segment_ordinal: SegmentOrdinal,
accessor_idx: usize,
top_n: TopNComputer<Vec<DocValueAndOrder>, DocAddress, ReverseComparator>,
top_n: TopNComputer<TopHitsSegmentSortKey, DocAddress, ReverseComparator>,
}
impl TopHitsSegmentCollector {
@@ -539,13 +571,15 @@ impl TopHitsSegmentCollector {
req: &TopHitsAggregationReq,
) -> TopHitsTopNComputer {
let mut top_hits_computer = TopHitsTopNComputer::new(req);
// The segment-level TopNComputer stores TopHitsSegmentSortKey; unwrap the newtype back
// into Vec<DocValueAndOrder> when merging into the TopHitsTopNComputer.
let top_results = self.top_n.into_vec();
for res in top_results {
let doc_value_fields = req.get_document_field_data(value_accessors, res.doc.doc_id);
top_hits_computer.collect(
DocSortValuesAndFields {
sorts: res.sort_key,
sorts: res.sort_key.0,
doc_value_fields,
},
res.doc,
@@ -579,7 +613,7 @@ impl TopHitsSegmentCollector {
.collect();
self.top_n.push(
sorts,
TopHitsSegmentSortKey(sorts),
DocAddress {
segment_ord: self.segment_ordinal,
doc_id,

View File

@@ -821,7 +821,6 @@ mod tests {
#[cfg(all(test, feature = "unstable"))]
mod bench {
use rand::seq::SliceRandom;
use rand::thread_rng;
use test::Bencher;

View File

@@ -96,10 +96,9 @@ mod histogram_collector;
pub use histogram_collector::HistogramCollector;
mod multi_collector;
pub use self::multi_collector::{FruitHandle, MultiCollector, MultiFruit};
pub use columnar::ComparableDoc;
mod top_collector;
pub use self::top_collector::ComparableDoc;
pub use self::multi_collector::{FruitHandle, MultiCollector, MultiFruit};
mod top_score_collector;
pub use self::top_score_collector::{TopDocs, TopNComputer};

View File

@@ -281,7 +281,6 @@ impl SegmentCollector for MultiCollectorChild {
#[cfg(test)]
mod tests {
use super::*;
use crate::collector::{Count, TopDocs};
use crate::query::TermQuery;

View File

@@ -1,10 +1,12 @@
mod order;
mod sort_by_erased_type;
mod sort_by_score;
mod sort_by_static_fast_value;
mod sort_by_string;
mod sort_key_computer;
pub use order::*;
pub use sort_by_erased_type::SortByErasedType;
pub use sort_by_score::SortBySimilarityScore;
pub use sort_by_static_fast_value::SortByStaticFastValue;
pub use sort_by_string::SortByString;
@@ -15,11 +17,14 @@ mod tests {
use std::collections::HashMap;
use std::ops::Range;
use crate::collector::sort_key::{SortBySimilarityScore, SortByStaticFastValue, SortByString};
use crate::collector::sort_key::{
Comparator, NaturalComparator, ReverseComparator, SortByErasedType, SortBySimilarityScore,
SortByStaticFastValue, SortByString,
};
use crate::collector::{ComparableDoc, DocSetCollector, TopDocs};
use crate::indexer::NoMergePolicy;
use crate::query::{AllQuery, QueryParser};
use crate::schema::{Schema, FAST, TEXT};
use crate::schema::{OwnedValue, Schema, FAST, TEXT};
use crate::{DocAddress, Document, Index, Order, Score, Searcher};
fn make_index() -> crate::Result<Index> {
@@ -294,11 +299,9 @@ mod tests {
(SortBySimilarityScore, score_order),
(SortByString::for_field("city"), city_order),
));
Ok(searcher
.search(&AllQuery, &top_collector)?
.into_iter()
.map(|(f, doc)| (f, ids[&doc]))
.collect())
let results: Vec<((Score, Option<String>), DocAddress)> =
searcher.search(&AllQuery, &top_collector)?;
Ok(results.into_iter().map(|(f, doc)| (f, ids[&doc])).collect())
}
assert_eq!(
@@ -323,6 +326,97 @@ mod tests {
Ok(())
}
#[test]
fn test_order_by_score_then_owned_value() -> crate::Result<()> {
let index = make_index()?;
type SortKey = (Score, OwnedValue);
fn query(
index: &Index,
score_order: Order,
city_order: Order,
) -> crate::Result<Vec<(SortKey, u64)>> {
let searcher = index.reader()?.searcher();
let ids = id_mapping(&searcher);
let top_collector = TopDocs::with_limit(4).order_by::<(Score, OwnedValue)>((
(SortBySimilarityScore, score_order),
(SortByErasedType::for_field("city"), city_order),
));
let results: Vec<((Score, OwnedValue), DocAddress)> =
searcher.search(&AllQuery, &top_collector)?;
Ok(results.into_iter().map(|(f, doc)| (f, ids[&doc])).collect())
}
assert_eq!(
&query(&index, Order::Asc, Order::Asc)?,
&[
((1.0, OwnedValue::Str("austin".to_owned())), 0),
((1.0, OwnedValue::Str("greenville".to_owned())), 1),
((1.0, OwnedValue::Str("tokyo".to_owned())), 2),
((1.0, OwnedValue::Null), 3),
]
);
assert_eq!(
&query(&index, Order::Asc, Order::Desc)?,
&[
((1.0, OwnedValue::Str("tokyo".to_owned())), 2),
((1.0, OwnedValue::Str("greenville".to_owned())), 1),
((1.0, OwnedValue::Str("austin".to_owned())), 0),
((1.0, OwnedValue::Null), 3),
]
);
Ok(())
}
#[test]
fn test_order_by_compound_fast_fields() -> crate::Result<()> {
let index = make_index()?;
type CompoundSortKey = (Option<String>, Option<f64>);
fn assert_query(
index: &Index,
city_order: Order,
altitude_order: Order,
expected: Vec<(CompoundSortKey, u64)>,
) -> crate::Result<()> {
let searcher = index.reader()?.searcher();
let ids = id_mapping(&searcher);
let top_collector = TopDocs::with_limit(4).order_by((
(SortByString::for_field("city"), city_order),
(
SortByStaticFastValue::<f64>::for_field("altitude"),
altitude_order,
),
));
let actual = searcher
.search(&AllQuery, &top_collector)?
.into_iter()
.map(|(key, doc)| (key, ids[&doc]))
.collect::<Vec<_>>();
assert_eq!(actual, expected);
Ok(())
}
assert_query(
&index,
Order::Asc,
Order::Desc,
vec![
((Some("austin".to_owned()), Some(149.0)), 0),
((Some("greenville".to_owned()), Some(27.0)), 1),
((Some("tokyo".to_owned()), Some(40.0)), 2),
((None, Some(0.0)), 3),
],
)?;
Ok(())
}
use proptest::prelude::*;
proptest! {
@@ -372,15 +466,14 @@ mod tests {
// Using the TopDocs collector should always be equivalent to sorting, skipping the
// offset, and then taking the limit.
let sorted_docs: Vec<_> = if order.is_desc() {
let mut comparable_docs: Vec<ComparableDoc<_, _, true>> =
let sorted_docs: Vec<_> = {
let mut comparable_docs: Vec<ComparableDoc<_, _>> =
all_results.into_iter().map(|(sort_key, doc)| ComparableDoc { sort_key, doc}).collect();
comparable_docs.sort();
comparable_docs.into_iter().map(|cd| (cd.sort_key, cd.doc)).collect()
} else {
let mut comparable_docs: Vec<ComparableDoc<_, _, false>> =
all_results.into_iter().map(|(sort_key, doc)| ComparableDoc { sort_key, doc}).collect();
comparable_docs.sort();
if order.is_desc() {
comparable_docs.sort_by(|l, r| NaturalComparator.compare_doc(l, r));
} else {
comparable_docs.sort_by(|l, r| ReverseComparator.compare_doc(l, r));
}
comparable_docs.into_iter().map(|cd| (cd.sort_key, cd.doc)).collect()
};
let expected_docs = sorted_docs.into_iter().skip(offset).take(limit).collect::<Vec<_>>();
@@ -390,4 +483,197 @@ mod tests {
);
}
}
proptest! {
#[test]
fn test_order_by_compound_prop(
city_order in prop_oneof!(Just(Order::Desc), Just(Order::Asc)),
altitude_order in prop_oneof!(Just(Order::Desc), Just(Order::Asc)),
limit in 1..20_usize,
offset in 0..20_usize,
segments_data in proptest::collection::vec(
proptest::collection::vec(
(proptest::option::of("[a-c]"), proptest::option::of(0..50u64)),
1..10_usize // segment size
),
1..4_usize // num segments
)
) {
use crate::collector::sort_key::ComparatorEnum;
use crate::TantivyDocument;
let mut schema_builder = Schema::builder();
let city = schema_builder.add_text_field("city", TEXT | FAST);
let altitude = schema_builder.add_u64_field("altitude", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
for segment_data in segments_data.into_iter() {
for (city_val, altitude_val) in segment_data.into_iter() {
let mut doc = TantivyDocument::default();
if let Some(c) = city_val {
doc.add_text(city, c);
}
if let Some(a) = altitude_val {
doc.add_u64(altitude, a);
}
index_writer.add_document(doc).unwrap();
}
index_writer.commit().unwrap();
}
let searcher = index.reader().unwrap().searcher();
let top_collector = TopDocs::with_limit(limit)
.and_offset(offset)
.order_by((
(SortByString::for_field("city"), city_order),
(
SortByStaticFastValue::<u64>::for_field("altitude"),
altitude_order,
),
));
let actual_results = searcher.search(&AllQuery, &top_collector).unwrap();
let actual_doc_ids: Vec<DocAddress> =
actual_results.into_iter().map(|(_, doc)| doc).collect();
// Verification logic
let all_docs_collector = DocSetCollector;
let all_docs = searcher.search(&AllQuery, &all_docs_collector).unwrap();
let docs_with_keys: Vec<((Option<String>, Option<u64>), DocAddress)> = all_docs
.into_iter()
.map(|doc_addr| {
let reader = searcher.segment_reader(doc_addr.segment_ord);
let city_val = if let Some(col) = reader.fast_fields().str("city").unwrap() {
let ord = col.ords().first(doc_addr.doc_id);
if let Some(ord) = ord {
let mut out = Vec::new();
col.dictionary().ord_to_term(ord, &mut out).unwrap();
String::from_utf8(out).ok()
} else {
None
}
} else {
None
};
let alt_val = if let Some((col, _)) = reader.fast_fields().u64_lenient("altitude").unwrap() {
col.first(doc_addr.doc_id)
} else {
None
};
((city_val, alt_val), doc_addr)
})
.collect();
let city_comparator = ComparatorEnum::from(city_order);
let alt_comparator = ComparatorEnum::from(altitude_order);
let comparator = (city_comparator, alt_comparator);
let mut comparable_docs: Vec<ComparableDoc<_, _>> = docs_with_keys
.into_iter()
.map(|(sort_key, doc)| ComparableDoc { sort_key, doc })
.collect();
comparable_docs.sort_by(|l, r| comparator.compare_doc(l, r));
let expected_results = comparable_docs
.into_iter()
.skip(offset)
.take(limit)
.collect::<Vec<_>>();
let expected_doc_ids: Vec<DocAddress> =
expected_results.into_iter().map(|cd| cd.doc).collect();
prop_assert_eq!(actual_doc_ids, expected_doc_ids);
}
}
proptest! {
#[test]
fn test_order_by_u64_prop(
order in prop_oneof!(Just(Order::Desc), Just(Order::Asc)),
limit in 1..20_usize,
offset in 0..20_usize,
segments_data in proptest::collection::vec(
proptest::collection::vec(
proptest::option::of(0..100u64),
1..1000_usize // segment size
),
1..4_usize // num segments
)
) {
use crate::collector::sort_key::ComparatorEnum;
use crate::TantivyDocument;
let mut schema_builder = Schema::builder();
let field = schema_builder.add_u64_field("field", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
for segment_data in segments_data.into_iter() {
for val in segment_data.into_iter() {
let mut doc = TantivyDocument::default();
if let Some(v) = val {
doc.add_u64(field, v);
}
index_writer.add_document(doc).unwrap();
}
index_writer.commit().unwrap();
}
let searcher = index.reader().unwrap().searcher();
let top_collector = TopDocs::with_limit(limit)
.and_offset(offset)
.order_by((SortByStaticFastValue::<u64>::for_field("field"), order));
let actual_results = searcher.search(&AllQuery, &top_collector).unwrap();
let actual_doc_ids: Vec<DocAddress> =
actual_results.into_iter().map(|(_, doc)| doc).collect();
// Verification logic
let all_docs_collector = DocSetCollector;
let all_docs = searcher.search(&AllQuery, &all_docs_collector).unwrap();
let docs_with_keys: Vec<(Option<u64>, DocAddress)> = all_docs
.into_iter()
.map(|doc_addr| {
let reader = searcher.segment_reader(doc_addr.segment_ord);
let val = if let Some((col, _)) = reader.fast_fields().u64_lenient("field").unwrap() {
col.first(doc_addr.doc_id)
} else {
None
};
(val, doc_addr)
})
.collect();
let comparator = ComparatorEnum::from(order);
let mut comparable_docs: Vec<ComparableDoc<_, _>> = docs_with_keys
.into_iter()
.map(|(sort_key, doc)| ComparableDoc { sort_key, doc })
.collect();
comparable_docs.sort_by(|l, r| comparator.compare_doc(l, r));
let expected_results = comparable_docs
.into_iter()
.skip(offset)
.take(limit)
.collect::<Vec<_>>();
let expected_doc_ids: Vec<DocAddress> =
expected_results.into_iter().map(|cd| cd.doc).collect();
prop_assert_eq!(actual_doc_ids, expected_doc_ids);
}
}
}

View File

@@ -1,19 +1,103 @@
use std::cmp::Ordering;
use columnar::{MonotonicallyMappableToU64, ValueRange};
use serde::{Deserialize, Serialize};
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
use crate::schema::Schema;
use crate::collector::{ComparableDoc, SegmentSortKeyComputer, SortKeyComputer};
use crate::schema::{OwnedValue, Schema};
use crate::{DocId, Order, Score};
fn compare_owned_value<const NULLS_FIRST: bool>(lhs: &OwnedValue, rhs: &OwnedValue) -> Ordering {
match (lhs, rhs) {
(OwnedValue::Null, OwnedValue::Null) => Ordering::Equal,
(OwnedValue::Null, _) => {
if NULLS_FIRST {
Ordering::Less
} else {
Ordering::Greater
}
}
(_, OwnedValue::Null) => {
if NULLS_FIRST {
Ordering::Greater
} else {
Ordering::Less
}
}
(OwnedValue::Str(a), OwnedValue::Str(b)) => a.cmp(b),
(OwnedValue::PreTokStr(a), OwnedValue::PreTokStr(b)) => a.cmp(b),
(OwnedValue::U64(a), OwnedValue::U64(b)) => a.cmp(b),
(OwnedValue::I64(a), OwnedValue::I64(b)) => a.cmp(b),
(OwnedValue::F64(a), OwnedValue::F64(b)) => a.to_u64().cmp(&b.to_u64()),
(OwnedValue::Bool(a), OwnedValue::Bool(b)) => a.cmp(b),
(OwnedValue::Date(a), OwnedValue::Date(b)) => a.cmp(b),
(OwnedValue::Facet(a), OwnedValue::Facet(b)) => a.cmp(b),
(OwnedValue::Bytes(a), OwnedValue::Bytes(b)) => a.cmp(b),
(OwnedValue::IpAddr(a), OwnedValue::IpAddr(b)) => a.cmp(b),
(OwnedValue::U64(a), OwnedValue::I64(b)) => {
if *b < 0 {
Ordering::Greater
} else {
a.cmp(&(*b as u64))
}
}
(OwnedValue::I64(a), OwnedValue::U64(b)) => {
if *a < 0 {
Ordering::Less
} else {
(*a as u64).cmp(b)
}
}
(OwnedValue::U64(a), OwnedValue::F64(b)) => (*a as f64).to_u64().cmp(&b.to_u64()),
(OwnedValue::F64(a), OwnedValue::U64(b)) => a.to_u64().cmp(&(*b as f64).to_u64()),
(OwnedValue::I64(a), OwnedValue::F64(b)) => (*a as f64).to_u64().cmp(&b.to_u64()),
(OwnedValue::F64(a), OwnedValue::I64(b)) => a.to_u64().cmp(&(*b as f64).to_u64()),
(a, b) => {
let ord = a.discriminant_value().cmp(&b.discriminant_value());
// If the discriminant is equal, it's because a new type was added, but hasn't been
// included in this `match` statement.
assert!(
ord != Ordering::Equal,
"Unimplemented comparison for type of {a:?}, {b:?}"
);
ord
}
}
}
/// Comparator trait defining the order in which documents should be ordered.
pub trait Comparator<T>: Send + Sync + std::fmt::Debug + Default {
/// Return the order between two values.
fn compare(&self, lhs: &T, rhs: &T) -> Ordering;
/// Return the order between two ComparableDoc values, using the semantics which are
/// implemented by TopNComputer.
#[inline(always)]
fn compare_doc<D: Ord>(
&self,
lhs: &ComparableDoc<T, D>,
rhs: &ComparableDoc<T, D>,
) -> Ordering {
// TopNComputer sorts in descending order of the SortKey by default: we apply that ordering
// here to ease comparison in testing.
self.compare(&rhs.sort_key, &lhs.sort_key).then_with(|| {
// In case of a tie on the sort key, we always sort by ascending `DocAddress` in order
// to ensure a stable sorting of the documents, regardless of the sort key's order.
// See the TopNComputer docs for more information.
lhs.doc.cmp(&rhs.doc)
})
}
/// Return a `ValueRange` that matches all values that are greater than the provided threshold.
fn threshold_to_valuerange(&self, threshold: T) -> ValueRange<T>;
}
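To make the contract concrete, a small sketch (assumed to live in this module's tests): `compare` defines the per-value order, `compare_doc` layers the TopNComputer convention (descending sort key, ascending doc on ties) on top of it, and `threshold_to_valuerange` names the values that could still displace a TopN threshold.
#[test]
fn comparator_contract_sketch() {
    let cmp = NaturalComparator;
    // Plain value order.
    assert_eq!(cmp.compare(&1u64, &2u64), Ordering::Less);
    // Descending sort key, ascending doc on ties.
    let a = ComparableDoc { sort_key: 7u64, doc: 3u32 };
    let b = ComparableDoc { sort_key: 7u64, doc: 9u32 };
    assert_eq!(cmp.compare_doc(&a, &b), Ordering::Less);
    // The natural comparator prunes to values strictly greater than the threshold.
    match cmp.threshold_to_valuerange(10u64) {
        ValueRange::GreaterThan(threshold, inclusive) => {
            assert_eq!(threshold, 10);
            assert!(!inclusive);
        }
        _ => panic!("expected a GreaterThan range"),
    }
}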
/// With the natural comparator, the top k collector will return
/// the top documents in decreasing order.
/// Compare values naturally (e.g. 1 < 2).
///
/// When used with `TopDocs`, which reverses the order, this results in a
/// "Descending" sort (Greatest values first).
///
/// `None` (or Null for `OwnedValue`) values are considered to be smaller than any other value,
/// and will therefore appear last in a descending sort (e.g. `[Some(20), Some(10), None]`).
#[derive(Debug, Copy, Clone, Default, Serialize, Deserialize)]
pub struct NaturalComparator;
@@ -22,32 +106,116 @@ impl<T: PartialOrd> Comparator<T> for NaturalComparator {
fn compare(&self, lhs: &T, rhs: &T) -> Ordering {
lhs.partial_cmp(rhs).unwrap()
}
}
/// Sorts documents in reverse order.
///
/// If the sort key is None, it will be considered the lowest value, and will therefore appear
/// first.
///
/// The ReverseComparator does not necessarily imply that the sort order is reversed compared
/// to the NaturalComparator. In the presence of a tie, both versions will retain the higher doc ids.
#[derive(Debug, Copy, Clone, Default, Serialize, Deserialize)]
pub struct ReverseComparator;
impl<T> Comparator<T> for ReverseComparator
where NaturalComparator: Comparator<T>
{
#[inline(always)]
fn compare(&self, lhs: &T, rhs: &T) -> Ordering {
NaturalComparator.compare(rhs, lhs)
fn threshold_to_valuerange(&self, threshold: T) -> ValueRange<T> {
ValueRange::GreaterThan(threshold, false)
}
}
/// Sorts documents in reverse order, but considers None as having the lowest value.
/// A (partial) implementation of comparison for OwnedValue.
///
/// Intended for use within columns of homogeneous types, and so will panic for OwnedValues with
/// mismatched types. The one exception is Null, for which we do define all comparisons.
impl Comparator<OwnedValue> for NaturalComparator {
#[inline(always)]
fn compare(&self, lhs: &OwnedValue, rhs: &OwnedValue) -> Ordering {
compare_owned_value::</* NULLS_FIRST= */ true>(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: OwnedValue) -> ValueRange<OwnedValue> {
ValueRange::GreaterThan(threshold, false)
}
}
/// Compare values in reverse (e.g. 2 < 1).
///
/// When used with `TopDocs`, which reverses the order, this results in an
/// "Ascending" sort (Smallest values first).
///
/// `None` is considered smaller than `Some` in the underlying comparator, but because the
/// comparison is reversed, `None` is effectively treated as the lowest value in the resulting
/// Ascending sort (e.g. `[None, Some(10), Some(20)]`).
///
/// The ReverseComparator does not necessarily imply that the sort order is reversed compared
/// to the NaturalComparator. In the presence of a tie on the sort key, documents will always be
/// sorted by ascending `DocId`/`DocAddress` in TopN results, regardless of the sort key's order.
#[derive(Debug, Copy, Clone, Default, Serialize, Deserialize)]
pub struct ReverseComparator;
macro_rules! impl_reverse_comparator_primitive {
($($t:ty),*) => {
$(
impl Comparator<$t> for ReverseComparator {
#[inline(always)]
fn compare(&self, lhs: &$t, rhs: &$t) -> Ordering {
NaturalComparator.compare(rhs, lhs)
}
fn threshold_to_valuerange(&self, threshold: $t) -> ValueRange<$t> {
ValueRange::LessThan(threshold, true)
}
}
)*
}
}
impl_reverse_comparator_primitive!(
bool,
u8,
u16,
u32,
u64,
u128,
usize,
i8,
i16,
i32,
i64,
i128,
isize,
f32,
f64,
String,
crate::DateTime,
Vec<u8>,
crate::schema::Facet
);
impl<T: PartialOrd + Send + Sync + std::fmt::Debug + Clone + 'static> Comparator<Option<T>>
for ReverseComparator
{
#[inline(always)]
fn compare(&self, lhs: &Option<T>, rhs: &Option<T>) -> Ordering {
NaturalComparator.compare(rhs, lhs)
}
fn threshold_to_valuerange(&self, threshold: Option<T>) -> ValueRange<Option<T>> {
let is_some = threshold.is_some();
ValueRange::LessThan(threshold, is_some)
}
}
impl Comparator<OwnedValue> for ReverseComparator {
#[inline(always)]
fn compare(&self, lhs: &OwnedValue, rhs: &OwnedValue) -> Ordering {
NaturalComparator.compare(rhs, lhs)
}
fn threshold_to_valuerange(&self, threshold: OwnedValue) -> ValueRange<OwnedValue> {
let is_not_null = !matches!(threshold, OwnedValue::Null);
ValueRange::LessThan(threshold, is_not_null)
}
}
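A short sketch of the reversed semantics (again assumed to sit in this module's tests): values compare in reverse; under the comparator itself `None` ends up as the greatest value, which is exactly why it surfaces first once `TopDocs` flips the order again; and a `Some` threshold converts into an inclusive `LessThan` range.
#[test]
fn reverse_comparator_sketch() {
    let cmp = ReverseComparator;
    // The value order is flipped: 2 compares Less than 1.
    assert_eq!(cmp.compare(&2u64, &1u64), Ordering::Less);
    // Under the reversed comparator, None compares as the greatest value.
    assert_eq!(cmp.compare(&None, &Some(1u64)), Ordering::Greater);
    // A Some threshold becomes an inclusive LessThan range.
    match cmp.threshold_to_valuerange(Some(5u64)) {
        ValueRange::LessThan(threshold, inclusive) => {
            assert_eq!(threshold, Some(5));
            assert!(inclusive);
        }
        _ => panic!("expected a LessThan range"),
    }
}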
/// Compare values in reverse, but treating `None` as lower than `Some`.
///
/// When used with `TopDocs`, which reverses the order, this results in an
/// "Ascending" sort (Smallest values first), but with `None` values appearing last
/// (e.g. `[Some(10), Some(20), None]`).
///
/// This is usually what is wanted when sorting by a field in ascending order.
/// For instance, in a e-commerce website, if I sort by price ascending, I most likely want the
/// cheapest items first, and the items without a price at last.
/// For instance, in an e-commerce website, if sorting by price ascending,
/// the cheapest items would appear first, and items without a price would appear last.
#[derive(Debug, Copy, Clone, Default)]
pub struct ReverseNoneIsLowerComparator;
@@ -63,6 +231,14 @@ where ReverseComparator: Comparator<T>
(Some(lhs), Some(rhs)) => ReverseComparator.compare(lhs, rhs),
}
}
fn threshold_to_valuerange(&self, threshold: Option<T>) -> ValueRange<Option<T>> {
if threshold.is_some() {
ValueRange::LessThan(threshold, false)
} else {
ValueRange::GreaterThan(threshold, false)
}
}
}
impl Comparator<u32> for ReverseNoneIsLowerComparator {
@@ -70,6 +246,10 @@ impl Comparator<u32> for ReverseNoneIsLowerComparator {
fn compare(&self, lhs: &u32, rhs: &u32) -> Ordering {
ReverseComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: u32) -> ValueRange<u32> {
ValueRange::LessThan(threshold, false)
}
}
impl Comparator<u64> for ReverseNoneIsLowerComparator {
@@ -77,6 +257,10 @@ impl Comparator<u64> for ReverseNoneIsLowerComparator {
fn compare(&self, lhs: &u64, rhs: &u64) -> Ordering {
ReverseComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: u64) -> ValueRange<u64> {
ValueRange::LessThan(threshold, false)
}
}
impl Comparator<f64> for ReverseNoneIsLowerComparator {
@@ -84,6 +268,10 @@ impl Comparator<f64> for ReverseNoneIsLowerComparator {
fn compare(&self, lhs: &f64, rhs: &f64) -> Ordering {
ReverseComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: f64) -> ValueRange<f64> {
ValueRange::LessThan(threshold, false)
}
}
impl Comparator<f32> for ReverseNoneIsLowerComparator {
@@ -91,6 +279,10 @@ impl Comparator<f32> for ReverseNoneIsLowerComparator {
fn compare(&self, lhs: &f32, rhs: &f32) -> Ordering {
ReverseComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: f32) -> ValueRange<f32> {
ValueRange::LessThan(threshold, false)
}
}
impl Comparator<i64> for ReverseNoneIsLowerComparator {
@@ -98,6 +290,10 @@ impl Comparator<i64> for ReverseNoneIsLowerComparator {
fn compare(&self, lhs: &i64, rhs: &i64) -> Ordering {
ReverseComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: i64) -> ValueRange<i64> {
ValueRange::LessThan(threshold, false)
}
}
impl Comparator<String> for ReverseNoneIsLowerComparator {
@@ -105,6 +301,129 @@ impl Comparator<String> for ReverseNoneIsLowerComparator {
fn compare(&self, lhs: &String, rhs: &String) -> Ordering {
ReverseComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: String) -> ValueRange<String> {
ValueRange::LessThan(threshold, false)
}
}
impl Comparator<OwnedValue> for ReverseNoneIsLowerComparator {
#[inline(always)]
fn compare(&self, lhs: &OwnedValue, rhs: &OwnedValue) -> Ordering {
compare_owned_value::</* NULLS_FIRST= */ false>(rhs, lhs)
}
fn threshold_to_valuerange(&self, threshold: OwnedValue) -> ValueRange<OwnedValue> {
ValueRange::LessThan(threshold, false)
}
}
/// Compare values naturally, but treating `None` as higher than `Some`.
///
/// When used with `TopDocs`, which reverses the order, this results in a
/// "Descending" sort (Greatest values first), but with `None` values appearing first
/// (e.g. `[None, Some(20), Some(10)]`).
#[derive(Debug, Copy, Clone, Default, Serialize, Deserialize)]
pub struct NaturalNoneIsHigherComparator;
impl<T> Comparator<Option<T>> for NaturalNoneIsHigherComparator
where NaturalComparator: Comparator<T>
{
#[inline(always)]
fn compare(&self, lhs_opt: &Option<T>, rhs_opt: &Option<T>) -> Ordering {
match (lhs_opt, rhs_opt) {
(None, None) => Ordering::Equal,
(None, Some(_)) => Ordering::Greater,
(Some(_), None) => Ordering::Less,
(Some(lhs), Some(rhs)) => NaturalComparator.compare(lhs, rhs),
}
}
fn threshold_to_valuerange(&self, threshold: Option<T>) -> ValueRange<Option<T>> {
if threshold.is_some() {
ValueRange::GreaterThan(threshold, true)
} else {
ValueRange::LessThan(threshold, false)
}
}
}
impl Comparator<u32> for NaturalNoneIsHigherComparator {
#[inline(always)]
fn compare(&self, lhs: &u32, rhs: &u32) -> Ordering {
NaturalComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: u32) -> ValueRange<u32> {
ValueRange::GreaterThan(threshold, true)
}
}
impl Comparator<u64> for NaturalNoneIsHigherComparator {
#[inline(always)]
fn compare(&self, lhs: &u64, rhs: &u64) -> Ordering {
NaturalComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: u64) -> ValueRange<u64> {
ValueRange::GreaterThan(threshold, true)
}
}
impl Comparator<f64> for NaturalNoneIsHigherComparator {
#[inline(always)]
fn compare(&self, lhs: &f64, rhs: &f64) -> Ordering {
NaturalComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: f64) -> ValueRange<f64> {
ValueRange::GreaterThan(threshold, true)
}
}
impl Comparator<f32> for NaturalNoneIsHigherComparator {
#[inline(always)]
fn compare(&self, lhs: &f32, rhs: &f32) -> Ordering {
NaturalComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: f32) -> ValueRange<f32> {
ValueRange::GreaterThan(threshold, true)
}
}
impl Comparator<i64> for NaturalNoneIsHigherComparator {
#[inline(always)]
fn compare(&self, lhs: &i64, rhs: &i64) -> Ordering {
NaturalComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: i64) -> ValueRange<i64> {
ValueRange::GreaterThan(threshold, true)
}
}
impl Comparator<String> for NaturalNoneIsHigherComparator {
#[inline(always)]
fn compare(&self, lhs: &String, rhs: &String) -> Ordering {
NaturalComparator.compare(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: String) -> ValueRange<String> {
ValueRange::GreaterThan(threshold, true)
}
}
impl Comparator<OwnedValue> for NaturalNoneIsHigherComparator {
#[inline(always)]
fn compare(&self, lhs: &OwnedValue, rhs: &OwnedValue) -> Ordering {
compare_owned_value::</* NULLS_FIRST= */ false>(lhs, rhs)
}
fn threshold_to_valuerange(&self, threshold: OwnedValue) -> ValueRange<OwnedValue> {
ValueRange::GreaterThan(threshold, true)
}
}
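Putting the four comparators' `None` policies side by side (a sketch for this module's tests; remember that `TopDocs` reverses each comparator once more when producing results):
#[test]
fn none_handling_sketch() {
    let none: Option<u64> = None;
    let some = Some(10u64);
    // Natural: None is the smallest value.
    assert_eq!(NaturalComparator.compare(&none, &some), Ordering::Less);
    // Reverse: the comparison is flipped, so None comes out greatest.
    assert_eq!(ReverseComparator.compare(&none, &some), Ordering::Greater);
    // ReverseNoneIsLower: reversed for values, but None stays pinned below everything.
    assert_eq!(ReverseNoneIsLowerComparator.compare(&none, &some), Ordering::Less);
    // NaturalNoneIsHigher: natural for values, but None is pinned above everything.
    assert_eq!(NaturalNoneIsHigherComparator.compare(&none, &some), Ordering::Greater);
}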
/// An enum representing the different sort orders.
@@ -115,8 +434,10 @@ pub enum ComparatorEnum {
Natural,
/// Reverse order (See [ReverseComparator])
Reverse,
/// Reverse order by treating None as the lowest value.(See [ReverseNoneLowerComparator])
/// Reverse order by treating None as the lowest value. (See [ReverseNoneIsLowerComparator])
ReverseNoneLower,
/// Natural order but treating None as the highest value. (See [NaturalNoneIsHigherComparator])
NaturalNoneHigher,
}
impl From<Order> for ComparatorEnum {
@@ -133,6 +454,7 @@ where
ReverseNoneIsLowerComparator: Comparator<T>,
NaturalComparator: Comparator<T>,
ReverseComparator: Comparator<T>,
NaturalNoneIsHigherComparator: Comparator<T>,
{
#[inline(always)]
fn compare(&self, lhs: &T, rhs: &T) -> Ordering {
@@ -140,6 +462,20 @@ where
ComparatorEnum::Natural => NaturalComparator.compare(lhs, rhs),
ComparatorEnum::Reverse => ReverseComparator.compare(lhs, rhs),
ComparatorEnum::ReverseNoneLower => ReverseNoneIsLowerComparator.compare(lhs, rhs),
ComparatorEnum::NaturalNoneHigher => NaturalNoneIsHigherComparator.compare(lhs, rhs),
}
}
fn threshold_to_valuerange(&self, threshold: T) -> ValueRange<T> {
match self {
ComparatorEnum::Natural => NaturalComparator.threshold_to_valuerange(threshold),
ComparatorEnum::Reverse => ReverseComparator.threshold_to_valuerange(threshold),
ComparatorEnum::ReverseNoneLower => {
ReverseNoneIsLowerComparator.threshold_to_valuerange(threshold)
}
ComparatorEnum::NaturalNoneHigher => {
NaturalNoneIsHigherComparator.threshold_to_valuerange(threshold)
}
}
}
}
@@ -156,6 +492,10 @@ where
.compare(&lhs.0, &rhs.0)
.then_with(|| self.1.compare(&lhs.1, &rhs.1))
}
fn threshold_to_valuerange(&self, threshold: (Head, Tail)) -> ValueRange<(Head, Tail)> {
ValueRange::GreaterThan(threshold, false)
}
}
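Compound sort keys compose lexicographically: the head comparator decides, and the tail only breaks ties. A quick sketch (assuming, as the tests elsewhere in this change do, that a tuple of comparators implements `Comparator` for the matching tuple of keys):
#[test]
fn tuple_comparator_sketch() {
    let cmp = (ReverseComparator, NaturalComparator);
    // The head decides whenever it is not a tie...
    assert_eq!(cmp.compare(&(1u64, 0u64), &(2u64, 100u64)), Ordering::Greater);
    // ...and the tail only matters on a head tie.
    assert_eq!(cmp.compare(&(1u64, 5u64), &(1u64, 7u64)), Ordering::Less);
}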
impl<Type1, Type2, Type3, Comparator1, Comparator2, Comparator3> Comparator<(Type1, (Type2, Type3))>
@@ -172,6 +512,13 @@ where
.then_with(|| self.1.compare(&lhs.1 .0, &rhs.1 .0))
.then_with(|| self.2.compare(&lhs.1 .1, &rhs.1 .1))
}
fn threshold_to_valuerange(
&self,
threshold: (Type1, (Type2, Type3)),
) -> ValueRange<(Type1, (Type2, Type3))> {
ValueRange::GreaterThan(threshold, false)
}
}
impl<Type1, Type2, Type3, Comparator1, Comparator2, Comparator3> Comparator<(Type1, Type2, Type3)>
@@ -188,6 +535,13 @@ where
.then_with(|| self.1.compare(&lhs.1, &rhs.1))
.then_with(|| self.2.compare(&lhs.2, &rhs.2))
}
fn threshold_to_valuerange(
&self,
threshold: (Type1, Type2, Type3),
) -> ValueRange<(Type1, Type2, Type3)> {
ValueRange::GreaterThan(threshold, false)
}
}
impl<Type1, Type2, Type3, Type4, Comparator1, Comparator2, Comparator3, Comparator4>
@@ -211,6 +565,13 @@ where
.then_with(|| self.2.compare(&lhs.1 .1 .0, &rhs.1 .1 .0))
.then_with(|| self.3.compare(&lhs.1 .1 .1, &rhs.1 .1 .1))
}
fn threshold_to_valuerange(
&self,
threshold: (Type1, (Type2, (Type3, Type4))),
) -> ValueRange<(Type1, (Type2, (Type3, Type4)))> {
ValueRange::GreaterThan(threshold, false)
}
}
impl<Type1, Type2, Type3, Type4, Comparator1, Comparator2, Comparator3, Comparator4>
@@ -234,6 +595,13 @@ where
.then_with(|| self.2.compare(&lhs.2, &rhs.2))
.then_with(|| self.3.compare(&lhs.3, &rhs.3))
}
fn threshold_to_valuerange(
&self,
threshold: (Type1, Type2, Type3, Type4),
) -> ValueRange<(Type1, Type2, Type3, Type4)> {
ValueRange::GreaterThan(threshold, false)
}
}
impl<TSortKeyComputer> SortKeyComputer for (TSortKeyComputer, ComparatorEnum)
@@ -322,16 +690,33 @@ impl<TSegmentSortKeyComputer, TSegmentSortKey, TComparator> SegmentSortKeyComput
for SegmentSortKeyComputerWithComparator<TSegmentSortKeyComputer, TComparator>
where
TSegmentSortKeyComputer: SegmentSortKeyComputer<SegmentSortKey = TSegmentSortKey>,
TSegmentSortKey: PartialOrd + Clone + 'static + Sync + Send,
TComparator: Comparator<TSegmentSortKey> + 'static + Sync + Send,
TSegmentSortKey: Clone + 'static + Sync + Send,
TComparator: Comparator<TSegmentSortKey> + Clone + 'static + Sync + Send,
{
type SortKey = TSegmentSortKeyComputer::SortKey;
type SegmentSortKey = TSegmentSortKey;
type SegmentComparator = TComparator;
type Buffer = TSegmentSortKeyComputer::Buffer;
fn segment_comparator(&self) -> Self::SegmentComparator {
self.comparator.clone()
}
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Self::SegmentSortKey {
self.segment_sort_key_computer.segment_sort_key(doc, score)
}
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
buffer: &mut Self::Buffer,
filter: ValueRange<Self::SegmentSortKey>,
) {
self.segment_sort_key_computer
.segment_sort_keys(input_docs, output, buffer, filter)
}
#[inline(always)]
fn compare_segment_sort_key(
&self,
@@ -346,3 +731,55 @@ where
.convert_segment_sort_key(sort_key)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::schema::OwnedValue;
#[test]
fn test_mixed_ownedvalue_compare() {
let u = OwnedValue::U64(10);
let i = OwnedValue::I64(10);
let f = OwnedValue::F64(10.0);
let nc = NaturalComparator::default();
assert_eq!(nc.compare(&u, &i), Ordering::Equal);
assert_eq!(nc.compare(&u, &f), Ordering::Equal);
assert_eq!(nc.compare(&i, &f), Ordering::Equal);
let u2 = OwnedValue::U64(11);
assert_eq!(nc.compare(&u2, &f), Ordering::Greater);
let s = OwnedValue::Str("a".to_string());
// Str < U64
assert_eq!(nc.compare(&s, &u), Ordering::Less);
// Str < I64
assert_eq!(nc.compare(&s, &i), Ordering::Less);
// Str < F64
assert_eq!(nc.compare(&s, &f), Ordering::Less);
}
#[test]
fn test_natural_none_is_higher() {
let comp = NaturalNoneIsHigherComparator;
let null = OwnedValue::Null;
let v1 = OwnedValue::U64(1);
let v2 = OwnedValue::U64(2);
// NaturalNoneIsHigherComparator logic:
// 1. Delegates to NaturalComparator for non-nulls.
// NaturalComparator compare(2, 1) -> 2.cmp(1) -> Greater.
assert_eq!(comp.compare(&v2, &v1), Ordering::Greater);
// 2. Treats None (Null) as Greater than any value.
// compare(Null, 2) should be Greater.
assert_eq!(comp.compare(&null, &v2), Ordering::Greater);
// compare(1, Null) should be Less.
assert_eq!(comp.compare(&v1, &null), Ordering::Less);
// compare(Null, Null) should be Equal.
assert_eq!(comp.compare(&null, &null), Ordering::Equal);
}
}

View File

@@ -0,0 +1,410 @@
use columnar::{ColumnType, MonotonicallyMappableToU64, ValueRange};
use crate::collector::sort_key::sort_by_score::SortBySimilarityScoreSegmentComputer;
use crate::collector::sort_key::{
NaturalComparator, SortBySimilarityScore, SortByStaticFastValue, SortByString,
};
use crate::collector::{ComparableDoc, SegmentSortKeyComputer, SortKeyComputer};
use crate::fastfield::FastFieldNotAvailableError;
use crate::schema::OwnedValue;
use crate::{DateTime, DocId, Score};
/// Sort by the boxed / OwnedValue representation of either a fast field, or of the score.
///
/// Using the OwnedValue representation allows for type erasure, and can be useful when sort orders
/// are not known until runtime. But it comes with a performance cost: wherever possible, prefer to
/// use a SortKeyComputer implementation with a known-type at compile time.
#[derive(Debug, Clone)]
pub enum SortByErasedType {
/// Sort by a fast field
Field(String),
/// Sort by score
Score,
}
impl SortByErasedType {
/// Creates a new sort key computer which will sort by the given fast field column, with type
/// erasure.
pub fn for_field(column_name: impl ToString) -> Self {
Self::Field(column_name.to_string())
}
/// Creates a new sort key computer which will sort by score, with type erasure.
pub fn for_score() -> Self {
Self::Score
}
}
trait ErasedSegmentSortKeyComputer: Send + Sync {
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Option<u64>;
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Option<u64>, DocId>>,
filter: ValueRange<Option<u64>>,
);
fn convert_segment_sort_key(&self, sort_key: Option<u64>) -> OwnedValue;
}
struct ErasedSegmentSortKeyComputerWrapper<C, F>
where
C: SegmentSortKeyComputer<SegmentSortKey = Option<u64>> + Send + Sync,
F: Fn(C::SortKey) -> OwnedValue + Send + Sync + 'static,
{
inner: C,
converter: F,
buffer: C::Buffer,
}
impl<C, F> ErasedSegmentSortKeyComputer for ErasedSegmentSortKeyComputerWrapper<C, F>
where
C: SegmentSortKeyComputer<SegmentSortKey = Option<u64>> + Send + Sync,
F: Fn(C::SortKey) -> OwnedValue + Send + Sync + 'static,
{
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Option<u64> {
self.inner.segment_sort_key(doc, score)
}
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Option<u64>, DocId>>,
filter: ValueRange<Option<u64>>,
) {
self.inner
.segment_sort_keys(input_docs, output, &mut self.buffer, filter)
}
fn convert_segment_sort_key(&self, sort_key: Option<u64>) -> OwnedValue {
let val = self.inner.convert_segment_sort_key(sort_key);
(self.converter)(val)
}
}
struct ScoreSegmentSortKeyComputer {
segment_computer: SortBySimilarityScoreSegmentComputer,
}
impl ErasedSegmentSortKeyComputer for ScoreSegmentSortKeyComputer {
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Option<u64> {
let score_value: f64 = self.segment_computer.segment_sort_key(doc, score).into();
Some(score_value.to_u64())
}
fn segment_sort_keys(
&mut self,
_input_docs: &[DocId],
_output: &mut Vec<ComparableDoc<Option<u64>, DocId>>,
_filter: ValueRange<Option<u64>>,
) {
unimplemented!("Batch computation not supported for score sorting")
}
fn convert_segment_sort_key(&self, sort_key: Option<u64>) -> OwnedValue {
let score_value: u64 = sort_key.expect("This implementation always produces a score.");
OwnedValue::F64(f64::from_u64(score_value))
}
}
impl SortKeyComputer for SortByErasedType {
type SortKey = OwnedValue;
type Child = ErasedColumnSegmentSortKeyComputer;
type Comparator = NaturalComparator;
fn requires_scoring(&self) -> bool {
matches!(self, Self::Score)
}
fn segment_sort_key_computer(
&self,
segment_reader: &crate::SegmentReader,
) -> crate::Result<Self::Child> {
let inner: Box<dyn ErasedSegmentSortKeyComputer> = match self {
Self::Field(column_name) => {
let fast_fields = segment_reader.fast_fields();
// TODO: We currently double-open the column to avoid relying on the implementation
// details of `SortByString` or `SortByStaticFastValue`. Once
// https://github.com/quickwit-oss/tantivy/issues/2776 is resolved, we should
// consider directly constructing the appropriate `SegmentSortKeyComputer` type for
// the column that we open here.
let (_column, column_type) =
fast_fields.u64_lenient(column_name)?.ok_or_else(|| {
FastFieldNotAvailableError {
field_name: column_name.to_owned(),
}
})?;
match column_type {
ColumnType::Str => {
let computer = SortByString::for_field(column_name);
let inner = computer.segment_sort_key_computer(segment_reader)?;
Box::new(ErasedSegmentSortKeyComputerWrapper {
inner,
converter: |val: Option<String>| {
val.map(OwnedValue::Str).unwrap_or(OwnedValue::Null)
},
buffer: Default::default(),
})
}
ColumnType::U64 => {
let computer = SortByStaticFastValue::<u64>::for_field(column_name);
let inner = computer.segment_sort_key_computer(segment_reader)?;
Box::new(ErasedSegmentSortKeyComputerWrapper {
inner,
converter: |val: Option<u64>| {
val.map(OwnedValue::U64).unwrap_or(OwnedValue::Null)
},
buffer: Default::default(),
})
}
ColumnType::I64 => {
let computer = SortByStaticFastValue::<i64>::for_field(column_name);
let inner = computer.segment_sort_key_computer(segment_reader)?;
Box::new(ErasedSegmentSortKeyComputerWrapper {
inner,
converter: |val: Option<i64>| {
val.map(OwnedValue::I64).unwrap_or(OwnedValue::Null)
},
buffer: Default::default(),
})
}
ColumnType::F64 => {
let computer = SortByStaticFastValue::<f64>::for_field(column_name);
let inner = computer.segment_sort_key_computer(segment_reader)?;
Box::new(ErasedSegmentSortKeyComputerWrapper {
inner,
converter: |val: Option<f64>| {
val.map(OwnedValue::F64).unwrap_or(OwnedValue::Null)
},
buffer: Default::default(),
})
}
ColumnType::Bool => {
let computer = SortByStaticFastValue::<bool>::for_field(column_name);
let inner = computer.segment_sort_key_computer(segment_reader)?;
Box::new(ErasedSegmentSortKeyComputerWrapper {
inner,
converter: |val: Option<bool>| {
val.map(OwnedValue::Bool).unwrap_or(OwnedValue::Null)
},
buffer: Default::default(),
})
}
ColumnType::DateTime => {
let computer = SortByStaticFastValue::<DateTime>::for_field(column_name);
let inner = computer.segment_sort_key_computer(segment_reader)?;
Box::new(ErasedSegmentSortKeyComputerWrapper {
inner,
converter: |val: Option<DateTime>| {
val.map(OwnedValue::Date).unwrap_or(OwnedValue::Null)
},
buffer: Default::default(),
})
}
column_type => {
return Err(crate::TantivyError::SchemaError(format!(
"Field `{}` is of type {column_type:?}, which is not supported for \
sorting by owned value yet.",
column_name
)))
}
}
}
Self::Score => Box::new(ScoreSegmentSortKeyComputer {
segment_computer: SortBySimilarityScore
.segment_sort_key_computer(segment_reader)?,
}),
};
Ok(ErasedColumnSegmentSortKeyComputer { inner })
}
}
pub struct ErasedColumnSegmentSortKeyComputer {
inner: Box<dyn ErasedSegmentSortKeyComputer>,
}
impl SegmentSortKeyComputer for ErasedColumnSegmentSortKeyComputer {
type SortKey = OwnedValue;
type SegmentSortKey = Option<u64>;
type SegmentComparator = NaturalComparator;
type Buffer = ();
#[inline(always)]
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Option<u64> {
self.inner.segment_sort_key(doc, score)
}
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
_buffer: &mut Self::Buffer,
filter: ValueRange<Self::SegmentSortKey>,
) {
self.inner.segment_sort_keys(input_docs, output, filter)
}
fn convert_segment_sort_key(&self, segment_sort_key: Self::SegmentSortKey) -> OwnedValue {
self.inner.convert_segment_sort_key(segment_sort_key)
}
}
#[cfg(test)]
mod tests {
use crate::collector::sort_key::{ComparatorEnum, SortByErasedType};
use crate::collector::TopDocs;
use crate::query::AllQuery;
use crate::schema::{OwnedValue, Schema, FAST, TEXT};
use crate::Index;
#[test]
fn test_sort_by_owned_u64() {
let mut schema_builder = Schema::builder();
let id_field = schema_builder.add_u64_field("id", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc!(id_field => 10u64)).unwrap();
writer.add_document(doc!(id_field => 2u64)).unwrap();
writer.add_document(doc!()).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let collector = TopDocs::with_limit(10)
.order_by((SortByErasedType::for_field("id"), ComparatorEnum::Natural));
let top_docs = searcher.search(&AllQuery, &collector).unwrap();
let values: Vec<OwnedValue> = top_docs.into_iter().map(|(key, _)| key).collect();
assert_eq!(
values,
vec![OwnedValue::U64(10), OwnedValue::U64(2), OwnedValue::Null]
);
let collector = TopDocs::with_limit(10).order_by((
SortByErasedType::for_field("id"),
ComparatorEnum::ReverseNoneLower,
));
let top_docs = searcher.search(&AllQuery, &collector).unwrap();
let values: Vec<OwnedValue> = top_docs.into_iter().map(|(key, _)| key).collect();
assert_eq!(
values,
vec![OwnedValue::U64(2), OwnedValue::U64(10), OwnedValue::Null]
);
}
#[test]
fn test_sort_by_owned_string() {
let mut schema_builder = Schema::builder();
let city_field = schema_builder.add_text_field("city", FAST | TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc!(city_field => "tokyo")).unwrap();
writer.add_document(doc!(city_field => "austin")).unwrap();
writer.add_document(doc!()).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let collector = TopDocs::with_limit(10).order_by((
SortByErasedType::for_field("city"),
ComparatorEnum::ReverseNoneLower,
));
let top_docs = searcher.search(&AllQuery, &collector).unwrap();
let values: Vec<OwnedValue> = top_docs.into_iter().map(|(key, _)| key).collect();
assert_eq!(
values,
vec![
OwnedValue::Str("austin".to_string()),
OwnedValue::Str("tokyo".to_string()),
OwnedValue::Null
]
);
}
#[test]
fn test_sort_by_owned_reverse() {
let mut schema_builder = Schema::builder();
let id_field = schema_builder.add_u64_field("id", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc!(id_field => 10u64)).unwrap();
writer.add_document(doc!(id_field => 2u64)).unwrap();
writer.add_document(doc!()).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let collector = TopDocs::with_limit(10)
.order_by((SortByErasedType::for_field("id"), ComparatorEnum::Reverse));
let top_docs = searcher.search(&AllQuery, &collector).unwrap();
let values: Vec<OwnedValue> = top_docs.into_iter().map(|(key, _)| key).collect();
assert_eq!(
values,
vec![OwnedValue::Null, OwnedValue::U64(2), OwnedValue::U64(10)]
);
}
#[test]
fn test_sort_by_owned_score() {
let mut schema_builder = Schema::builder();
let body_field = schema_builder.add_text_field("body", TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut writer = index.writer_for_tests().unwrap();
writer.add_document(doc!(body_field => "a a")).unwrap();
writer.add_document(doc!(body_field => "a")).unwrap();
writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let query_parser = crate::query::QueryParser::for_index(&index, vec![body_field]);
let query = query_parser.parse_query("a").unwrap();
// Sort by score descending (Natural)
let collector = TopDocs::with_limit(10)
.order_by((SortByErasedType::for_score(), ComparatorEnum::Natural));
let top_docs = searcher.search(&query, &collector).unwrap();
let values: Vec<f64> = top_docs
.into_iter()
.map(|(key, _)| match key {
OwnedValue::F64(val) => val,
_ => panic!("Wrong type {:?}", key),
})
.collect();
assert_eq!(values.len(), 2);
assert!(values[0] > values[1]);
// Sort by score ascending (ReverseNoneLower)
let collector = TopDocs::with_limit(10).order_by((
SortByErasedType::for_score(),
ComparatorEnum::ReverseNoneLower,
));
let top_docs = searcher.search(&query, &collector).unwrap();
let values: Vec<f64> = top_docs
.into_iter()
.map(|(key, _)| match key {
OwnedValue::F64(val) => val,
_ => panic!("Wrong type {:?}", key),
})
.collect();
assert_eq!(values.len(), 2);
assert!(values[0] < values[1]);
}
}

View File

@@ -1,5 +1,7 @@
use columnar::ValueRange;
use crate::collector::sort_key::NaturalComparator;
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer, TopNComputer};
use crate::collector::{ComparableDoc, SegmentSortKeyComputer, SortKeyComputer, TopNComputer};
use crate::{DocAddress, DocId, Score};
/// Sort by similarity score.
@@ -9,7 +11,7 @@ pub struct SortBySimilarityScore;
impl SortKeyComputer for SortBySimilarityScore {
type SortKey = Score;
type Child = SortBySimilarityScore;
type Child = SortBySimilarityScoreSegmentComputer;
type Comparator = NaturalComparator;
@@ -21,7 +23,7 @@ impl SortKeyComputer for SortBySimilarityScore {
&self,
_segment_reader: &crate::SegmentReader,
) -> crate::Result<Self::Child> {
Ok(SortBySimilarityScore)
Ok(SortBySimilarityScoreSegmentComputer)
}
// Sorting by score is special in that it allows for the Block-Wand optimization.
@@ -61,16 +63,29 @@ impl SortKeyComputer for SortBySimilarityScore {
}
}
impl SegmentSortKeyComputer for SortBySimilarityScore {
type SortKey = Score;
pub struct SortBySimilarityScoreSegmentComputer;
impl SegmentSortKeyComputer for SortBySimilarityScoreSegmentComputer {
type SortKey = Score;
type SegmentSortKey = Score;
type SegmentComparator = NaturalComparator;
type Buffer = ();
#[inline(always)]
fn segment_sort_key(&mut self, _doc: DocId, score: Score) -> Score {
score
}
fn segment_sort_keys(
&mut self,
_input_docs: &[DocId],
_output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
_buffer: &mut Self::Buffer,
_filter: ValueRange<Self::SegmentSortKey>,
) {
unimplemented!("Batch computation not supported for score sorting")
}
fn convert_segment_sort_key(&self, score: Score) -> Score {
score
}

View File

@@ -1,9 +1,10 @@
use std::marker::PhantomData;
use columnar::Column;
use columnar::{Column, ValueRange};
use crate::collector::sort_key::sort_key_computer::convert_optional_u64_range_to_u64_range;
use crate::collector::sort_key::NaturalComparator;
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
use crate::collector::{ComparableDoc, SegmentSortKeyComputer, SortKeyComputer};
use crate::fastfield::{FastFieldNotAvailableError, FastValue};
use crate::{DocId, Score, SegmentReader};
@@ -34,9 +35,7 @@ impl<T: FastValue> SortByStaticFastValue<T> {
impl<T: FastValue> SortKeyComputer for SortByStaticFastValue<T> {
type Child = SortByFastValueSegmentSortKeyComputer<T>;
type SortKey = Option<T>;
type Comparator = NaturalComparator;
fn check_schema(&self, schema: &crate::schema::Schema) -> crate::Result<()> {
@@ -84,15 +83,112 @@ pub struct SortByFastValueSegmentSortKeyComputer<T> {
impl<T: FastValue> SegmentSortKeyComputer for SortByFastValueSegmentSortKeyComputer<T> {
type SortKey = Option<T>;
type SegmentSortKey = Option<u64>;
type SegmentComparator = NaturalComparator;
type Buffer = ();
#[inline(always)]
fn segment_sort_key(&mut self, doc: DocId, _score: Score) -> Self::SegmentSortKey {
self.sort_column.first(doc)
}
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
_buffer: &mut Self::Buffer,
filter: ValueRange<Self::SegmentSortKey>,
) {
let u64_filter = convert_optional_u64_range_to_u64_range(filter);
self.sort_column
.first_vals_in_value_range(input_docs, output, u64_filter);
}
fn convert_segment_sort_key(&self, sort_key: Self::SegmentSortKey) -> Self::SortKey {
sort_key.map(T::from_u64)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::schema::{Schema, FAST};
use crate::Index;
#[test]
fn test_sort_by_fast_value_batch() {
let mut schema_builder = Schema::builder();
let field_col = schema_builder.add_u64_field("field", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
index_writer
.add_document(crate::doc!(field_col => 10u64))
.unwrap();
index_writer
.add_document(crate::doc!(field_col => 20u64))
.unwrap();
index_writer.add_document(crate::doc!()).unwrap();
index_writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0);
let sorter = SortByStaticFastValue::<u64>::for_field("field");
let mut computer = sorter.segment_sort_key_computer(segment_reader).unwrap();
let mut docs = vec![0, 1, 2];
let mut output = Vec::new();
let mut buffer = ();
computer.segment_sort_keys(&mut docs, &mut output, &mut buffer, ValueRange::All);
assert_eq!(
output.iter().map(|c| c.sort_key).collect::<Vec<_>>(),
&[Some(10), Some(20), None]
);
assert_eq!(output.iter().map(|c| c.doc).collect::<Vec<_>>(), &[0, 1, 2]);
}
#[test]
fn test_sort_by_fast_value_batch_with_filter() {
let mut schema_builder = Schema::builder();
let field_col = schema_builder.add_u64_field("field", FAST);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
index_writer
.add_document(crate::doc!(field_col => 10u64))
.unwrap();
index_writer
.add_document(crate::doc!(field_col => 20u64))
.unwrap();
index_writer.add_document(crate::doc!()).unwrap();
index_writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0);
let sorter = SortByStaticFastValue::<u64>::for_field("field");
let mut computer = sorter.segment_sort_key_computer(segment_reader).unwrap();
let mut docs = vec![0, 1, 2];
let mut output = Vec::new();
let mut buffer = ();
computer.segment_sort_keys(
&mut docs,
&mut output,
&mut buffer,
ValueRange::GreaterThan(Some(15u64), false /* match_nulls */),
);
assert_eq!(
output.iter().map(|c| c.sort_key).collect::<Vec<_>>(),
&[Some(20)]
);
assert_eq!(output.iter().map(|c| c.doc).collect::<Vec<_>>(), &[1]);
}
}


@@ -1,7 +1,10 @@
use columnar::StrColumn;
use columnar::{StrColumn, ValueRange};
use crate::collector::sort_key::sort_key_computer::{
convert_optional_u64_range_to_u64_range, range_contains_none,
};
use crate::collector::sort_key::NaturalComparator;
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
use crate::collector::{ComparableDoc, SegmentSortKeyComputer, SortKeyComputer};
use crate::termdict::TermOrdinal;
use crate::{DocId, Score};
@@ -30,9 +33,7 @@ impl SortByString {
impl SortKeyComputer for SortByString {
type SortKey = Option<String>;
type Child = ByStringColumnSegmentSortKeyComputer;
type Comparator = NaturalComparator;
fn segment_sort_key_computer(
@@ -50,8 +51,9 @@ pub struct ByStringColumnSegmentSortKeyComputer {
impl SegmentSortKeyComputer for ByStringColumnSegmentSortKeyComputer {
type SortKey = Option<String>;
type SegmentSortKey = Option<TermOrdinal>;
type SegmentComparator = NaturalComparator;
type Buffer = ();
#[inline(always)]
fn segment_sort_key(&mut self, doc: DocId, _score: Score) -> Option<TermOrdinal> {
@@ -59,7 +61,31 @@ impl SegmentSortKeyComputer for ByStringColumnSegmentSortKeyComputer {
str_column.ords().first(doc)
}
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
_buffer: &mut Self::Buffer,
filter: ValueRange<Self::SegmentSortKey>,
) {
if let Some(str_column) = &self.str_column_opt {
let u64_filter = convert_optional_u64_range_to_u64_range(filter);
str_column
.ords()
.first_vals_in_value_range(input_docs, output, u64_filter);
} else if range_contains_none(&filter) {
for &doc in input_docs {
output.push(ComparableDoc {
doc,
sort_key: None,
});
}
}
}
fn convert_segment_sort_key(&self, term_ord_opt: Option<TermOrdinal>) -> Option<String> {
// TODO: Individual lookups to the dictionary like this are very likely to repeatedly
// decompress the same blocks. See https://github.com/quickwit-oss/tantivy/issues/2776
let term_ord = term_ord_opt?;
let str_column = self.str_column_opt.as_ref()?;
let mut bytes = Vec::new();
@@ -70,3 +96,90 @@ impl SegmentSortKeyComputer for ByStringColumnSegmentSortKeyComputer {
String::try_from(bytes).ok()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::schema::{Schema, FAST, TEXT};
use crate::Index;
#[test]
fn test_sort_by_string_batch() {
let mut schema_builder = Schema::builder();
let field_col = schema_builder.add_text_field("field", FAST | TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
index_writer
.add_document(crate::doc!(field_col => "a"))
.unwrap();
index_writer
.add_document(crate::doc!(field_col => "c"))
.unwrap();
index_writer.add_document(crate::doc!()).unwrap();
index_writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0);
let sorter = SortByString::for_field("field");
let mut computer = sorter.segment_sort_key_computer(segment_reader).unwrap();
let mut docs = vec![0, 1, 2];
let mut output = Vec::new();
let mut buffer = ();
computer.segment_sort_keys(&mut docs, &mut output, &mut buffer, ValueRange::All);
assert_eq!(
output.iter().map(|c| c.sort_key).collect::<Vec<_>>(),
&[Some(0), Some(1), None]
);
assert_eq!(output.iter().map(|c| c.doc).collect::<Vec<_>>(), &[0, 1, 2]);
}
#[test]
fn test_sort_by_string_batch_with_filter() {
let mut schema_builder = Schema::builder();
let field_col = schema_builder.add_text_field("field", FAST | TEXT);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests().unwrap();
index_writer
.add_document(crate::doc!(field_col => "a"))
.unwrap();
index_writer
.add_document(crate::doc!(field_col => "c"))
.unwrap();
index_writer.add_document(crate::doc!()).unwrap();
index_writer.commit().unwrap();
let reader = index.reader().unwrap();
let searcher = reader.searcher();
let segment_reader = searcher.segment_reader(0);
let sorter = SortByString::for_field("field");
let mut computer = sorter.segment_sort_key_computer(segment_reader).unwrap();
let mut docs = vec![0, 1, 2];
let mut output = Vec::new();
// Filter: > "b". "a" is 0, "c" is 1.
// We want > "a" (ord 0). So we filter > ord 0.
// 0 is "a", 1 is "c".
let mut buffer = ();
computer.segment_sort_keys(
&mut docs,
&mut output,
&mut buffer,
ValueRange::GreaterThan(Some(0), false /* match_nulls */),
);
assert_eq!(
output.iter().map(|c| c.sort_key).collect::<Vec<_>>(),
&[Some(1)]
);
assert_eq!(output.iter().map(|c| c.doc).collect::<Vec<_>>(), &[1]);
}
}


@@ -1,8 +1,12 @@
use std::cmp::Ordering;
use columnar::ValueRange;
use crate::collector::sort_key::{Comparator, NaturalComparator};
use crate::collector::sort_key_top_collector::TopBySortKeySegmentCollector;
use crate::collector::{default_collect_segment_impl, SegmentCollector as _, TopNComputer};
use crate::collector::{
default_collect_segment_impl, ComparableDoc, SegmentCollector as _, TopNComputer,
};
use crate::schema::Schema;
use crate::{DocAddress, DocId, Result, Score, SegmentReader};
@@ -12,17 +16,40 @@ use crate::{DocAddress, DocId, Result, Score, SegmentReader};
/// It is the segment local version of the [`SortKeyComputer`].
pub trait SegmentSortKeyComputer: 'static {
/// The final score being emitted.
type SortKey: 'static + PartialOrd + Send + Sync + Clone;
type SortKey: 'static + Send + Sync + Clone;
/// Sort key used at the segment level by the `SegmentSortKeyComputer`.
///
/// It is typically small like a `u64`, and is meant to be converted
/// to the final score at the end of the collection of the segment.
type SegmentSortKey: 'static + PartialOrd + Clone + Send + Sync + Clone;
type SegmentSortKey: 'static + Clone + Send + Sync;
/// Comparator type.
type SegmentComparator: Comparator<Self::SegmentSortKey> + Clone + 'static;
/// Buffer type used for scratch space.
type Buffer: Default + Send + Sync + 'static;
/// Returns the segment sort key comparator.
fn segment_comparator(&self) -> Self::SegmentComparator {
Self::SegmentComparator::default()
}
/// Computes the sort key for the given document and score.
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Self::SegmentSortKey;
/// Computes the sort keys for a batch of documents.
///
/// The computed sort keys and document IDs are pushed into the `output` vector.
/// The `buffer` is used for scratch space.
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
buffer: &mut Self::Buffer,
filter: ValueRange<Self::SegmentSortKey>,
);
/// Computes the sort key and pushes the document in a TopN Computer.
///
/// When using a tuple as the sorting key, the sort key is evaluated in a lazy manner.
@@ -31,12 +58,32 @@ pub trait SegmentSortKeyComputer: 'static {
&mut self,
doc: DocId,
score: Score,
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C>,
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C, Self::Buffer>,
) {
let sort_key = self.segment_sort_key(doc, score);
top_n_computer.push(sort_key, doc);
}
fn compute_sort_keys_and_collect<C: Comparator<Self::SegmentSortKey>>(
&mut self,
docs: &[DocId],
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C, Self::Buffer>,
) {
// The capacity of a TopNComputer is at least 2 * top_n + COLLECT_BLOCK_BUFFER_LEN, so we
// should always be able to `reserve` space for the entire block.
top_n_computer.reserve(docs.len());
let comparator = self.segment_comparator();
let value_range = if let Some(threshold) = &top_n_computer.threshold {
comparator.threshold_to_valuerange(threshold.clone())
} else {
ValueRange::All
};
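// Keys that cannot beat the current threshold are skipped inside `segment_sort_keys`
// via `value_range`; the surviving (key, doc) pairs are pushed straight into the
// TopNComputer's buffer below.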
let (buffer, scratch) = top_n_computer.buffer_and_scratch();
self.segment_sort_keys(docs, buffer, scratch, value_range);
}
/// A SegmentSortKeyComputer maps to a SegmentSortKey, but it can also decide on
/// its ordering.
///
@@ -47,27 +94,7 @@ pub trait SegmentSortKeyComputer: 'static {
left: &Self::SegmentSortKey,
right: &Self::SegmentSortKey,
) -> Ordering {
NaturalComparator.compare(left, right)
}
/// Implementing this method makes it possible to avoid computing
/// a sort_key entirely if we can assess that it won't pass a threshold
/// with a partial computation.
///
/// This is currently used for lexicographic sorting.
fn accept_sort_key_lazy(
&mut self,
doc_id: DocId,
score: Score,
threshold: &Self::SegmentSortKey,
) -> Option<(Ordering, Self::SegmentSortKey)> {
let sort_key = self.segment_sort_key(doc_id, score);
let cmp = self.compare_segment_sort_key(&sort_key, threshold);
if cmp == Ordering::Less {
None
} else {
Some((cmp, sort_key))
}
self.segment_comparator().compare(left, right)
}
/// Convert a segment level sort key into the global sort key.
@@ -81,7 +108,7 @@ pub trait SegmentSortKeyComputer: 'static {
/// the sort key at a segment scale.
pub trait SortKeyComputer: Sync {
/// The sort key type.
type SortKey: 'static + Send + Sync + PartialOrd + Clone + std::fmt::Debug;
type SortKey: 'static + Send + Sync + Clone + std::fmt::Debug;
/// Type of the associated [`SegmentSortKeyComputer`].
type Child: SegmentSortKeyComputer<SortKey = Self::SortKey>;
/// Comparator type.
@@ -136,11 +163,9 @@ where
HeadSortKeyComputer: SortKeyComputer,
TailSortKeyComputer: SortKeyComputer,
{
type SortKey = (
<HeadSortKeyComputer::Child as SegmentSortKeyComputer>::SortKey,
<TailSortKeyComputer::Child as SegmentSortKeyComputer>::SortKey,
);
type Child = (HeadSortKeyComputer::Child, TailSortKeyComputer::Child);
type SortKey = (HeadSortKeyComputer::SortKey, TailSortKeyComputer::SortKey);
type Child =
ChainSegmentSortKeyComputer<HeadSortKeyComputer::Child, TailSortKeyComputer::Child>;
type Comparator = (
HeadSortKeyComputer::Comparator,
@@ -152,10 +177,10 @@ where
}
fn segment_sort_key_computer(&self, segment_reader: &SegmentReader) -> Result<Self::Child> {
Ok((
self.0.segment_sort_key_computer(segment_reader)?,
self.1.segment_sort_key_computer(segment_reader)?,
))
Ok(ChainSegmentSortKeyComputer {
head: self.0.segment_sort_key_computer(segment_reader)?,
tail: self.1.segment_sort_key_computer(segment_reader)?,
})
}
/// Checks whether the schema is compatible with the sort key computer.
@@ -173,20 +198,91 @@ where
}
}
impl<HeadSegmentSortKeyComputer, TailSegmentSortKeyComputer> SegmentSortKeyComputer
for (HeadSegmentSortKeyComputer, TailSegmentSortKeyComputer)
pub struct ChainSegmentSortKeyComputer<Head, Tail>
where
HeadSegmentSortKeyComputer: SegmentSortKeyComputer,
TailSegmentSortKeyComputer: SegmentSortKeyComputer,
Head: SegmentSortKeyComputer,
Tail: SegmentSortKeyComputer,
{
type SortKey = (
HeadSegmentSortKeyComputer::SortKey,
TailSegmentSortKeyComputer::SortKey,
);
type SegmentSortKey = (
HeadSegmentSortKeyComputer::SegmentSortKey,
TailSegmentSortKeyComputer::SegmentSortKey,
);
head: Head,
tail: Tail,
}
pub struct ChainBuffer<HeadBuffer, TailBuffer, HeadKey, TailKey> {
pub head: HeadBuffer,
pub tail: TailBuffer,
pub head_output: Vec<ComparableDoc<HeadKey, DocId>>,
pub tail_output: Vec<ComparableDoc<TailKey, DocId>>,
pub tail_input_docs: Vec<DocId>,
}
impl<HeadBuffer: Default, TailBuffer: Default, HeadKey, TailKey> Default
for ChainBuffer<HeadBuffer, TailBuffer, HeadKey, TailKey>
{
fn default() -> Self {
ChainBuffer {
head: HeadBuffer::default(),
tail: TailBuffer::default(),
head_output: Vec::new(),
tail_output: Vec::new(),
tail_input_docs: Vec::new(),
}
}
}
impl<Head, Tail> ChainSegmentSortKeyComputer<Head, Tail>
where
Head: SegmentSortKeyComputer,
Tail: SegmentSortKeyComputer,
{
fn accept_sort_key_lazy(
&mut self,
doc_id: DocId,
score: Score,
threshold: &<Self as SegmentSortKeyComputer>::SegmentSortKey,
) -> Option<(Ordering, <Self as SegmentSortKeyComputer>::SegmentSortKey)> {
let (head_threshold, tail_threshold) = threshold;
let head_sort_key = self.head.segment_sort_key(doc_id, score);
let head_cmp = self
.head
.compare_segment_sort_key(&head_sort_key, head_threshold);
if head_cmp == Ordering::Less {
None
} else if head_cmp == Ordering::Equal {
let tail_sort_key = self.tail.segment_sort_key(doc_id, score);
let tail_cmp = self
.tail
.compare_segment_sort_key(&tail_sort_key, tail_threshold);
if tail_cmp == Ordering::Less {
None
} else {
Some((tail_cmp, (head_sort_key, tail_sort_key)))
}
} else {
let tail_sort_key = self.tail.segment_sort_key(doc_id, score);
Some((head_cmp, (head_sort_key, tail_sort_key)))
}
}
}
impl<Head, Tail> SegmentSortKeyComputer for ChainSegmentSortKeyComputer<Head, Tail>
where
Head: SegmentSortKeyComputer,
Tail: SegmentSortKeyComputer,
{
type SortKey = (Head::SortKey, Tail::SortKey);
type SegmentSortKey = (Head::SegmentSortKey, Tail::SegmentSortKey);
type SegmentComparator = (Head::SegmentComparator, Tail::SegmentComparator);
type Buffer =
ChainBuffer<Head::Buffer, Tail::Buffer, Head::SegmentSortKey, Tail::SegmentSortKey>;
fn segment_comparator(&self) -> Self::SegmentComparator {
(
self.head.segment_comparator(),
self.tail.segment_comparator(),
)
}
/// A SegmentSortKeyComputer maps to a SegmentSortKey, but it can also decide on
/// its ordering.
@@ -198,9 +294,90 @@ where
left: &Self::SegmentSortKey,
right: &Self::SegmentSortKey,
) -> Ordering {
self.0
self.head
.compare_segment_sort_key(&left.0, &right.0)
.then_with(|| self.1.compare_segment_sort_key(&left.1, &right.1))
.then_with(|| self.tail.compare_segment_sort_key(&left.1, &right.1))
}
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
buffer: &mut Self::Buffer,
filter: ValueRange<Self::SegmentSortKey>,
) {
let (head_filter, threshold) = match filter {
ValueRange::GreaterThan((head_threshold, tail_threshold), _)
| ValueRange::LessThan((head_threshold, tail_threshold), _) => {
let head_cmp = self.head.segment_comparator();
let strict_head_filter = head_cmp.threshold_to_valuerange(head_threshold.clone());
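// Relax the strict head filter to an inclusive one: a document whose head key merely
// ties the head threshold can still beat it on the tail key, so it must not be
// discarded here. Ties are re-checked against the full (head, tail) threshold below.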
let head_filter = match strict_head_filter {
ValueRange::GreaterThan(t, m) => ValueRange::GreaterThanOrEqual(t, m),
ValueRange::LessThan(t, m) => ValueRange::LessThanOrEqual(t, m),
other => other,
};
(head_filter, Some((head_threshold, tail_threshold)))
}
_ => (ValueRange::All, None),
};
buffer.head_output.clear();
self.head.segment_sort_keys(
input_docs,
&mut buffer.head_output,
&mut buffer.head,
head_filter,
);
if buffer.head_output.is_empty() {
return;
}
buffer.tail_output.clear();
buffer.tail_input_docs.clear();
for cd in &buffer.head_output {
buffer.tail_input_docs.push(cd.doc);
}
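// The tail is evaluated with `ValueRange::All`: whether a tail key is good enough
// depends on how its head key compares to the head threshold, which is only known
// per document in the loop below.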
self.tail.segment_sort_keys(
&buffer.tail_input_docs,
&mut buffer.tail_output,
&mut buffer.tail,
ValueRange::All,
);
let head_cmp = self.head.segment_comparator();
let tail_cmp = self.tail.segment_comparator();
for (head_doc, tail_doc) in buffer
.head_output
.drain(..)
.zip(buffer.tail_output.drain(..))
{
debug_assert_eq!(head_doc.doc, tail_doc.doc);
let doc = head_doc.doc;
let head_key = head_doc.sort_key;
let tail_key = tail_doc.sort_key;
let accept = if let Some((head_threshold, tail_threshold)) = &threshold {
let head_ord = head_cmp.compare(&head_key, head_threshold);
let ord = if head_ord == Ordering::Equal {
tail_cmp.compare(&tail_key, tail_threshold)
} else {
head_ord
};
ord == Ordering::Greater
} else {
true
};
if accept {
output.push(ComparableDoc {
sort_key: (head_key, tail_key),
doc,
});
}
}
}
#[inline(always)]
@@ -208,7 +385,7 @@ where
&mut self,
doc: DocId,
score: Score,
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C>,
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C, Self::Buffer>,
) {
let sort_key: Self::SegmentSortKey;
if let Some(threshold) = &top_n_computer.threshold {
@@ -225,68 +402,56 @@ where
#[inline(always)]
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Self::SegmentSortKey {
let head_sort_key = self.0.segment_sort_key(doc, score);
let tail_sort_key = self.1.segment_sort_key(doc, score);
let head_sort_key = self.head.segment_sort_key(doc, score);
let tail_sort_key = self.tail.segment_sort_key(doc, score);
(head_sort_key, tail_sort_key)
}
fn accept_sort_key_lazy(
&mut self,
doc_id: DocId,
score: Score,
threshold: &Self::SegmentSortKey,
) -> Option<(Ordering, Self::SegmentSortKey)> {
let (head_threshold, tail_threshold) = threshold;
let (head_cmp, head_sort_key) =
self.0.accept_sort_key_lazy(doc_id, score, head_threshold)?;
if head_cmp == Ordering::Equal {
let (tail_cmp, tail_sort_key) =
self.1.accept_sort_key_lazy(doc_id, score, tail_threshold)?;
Some((tail_cmp, (head_sort_key, tail_sort_key)))
} else {
let tail_sort_key = self.1.segment_sort_key(doc_id, score);
Some((head_cmp, (head_sort_key, tail_sort_key)))
}
}
fn convert_segment_sort_key(&self, sort_key: Self::SegmentSortKey) -> Self::SortKey {
let (head_sort_key, tail_sort_key) = sort_key;
(
self.0.convert_segment_sort_key(head_sort_key),
self.1.convert_segment_sort_key(tail_sort_key),
self.head.convert_segment_sort_key(head_sort_key),
self.tail.convert_segment_sort_key(tail_sort_key),
)
}
}
/// This struct is an adapter that takes a sort key computer and maps its sort key to a
/// new sort key.
pub struct MappedSegmentSortKeyComputer<T, PreviousSortKey, NewSortKey> {
pub struct MappedSegmentSortKeyComputer<T: SegmentSortKeyComputer, NewSortKey> {
sort_key_computer: T,
map: fn(PreviousSortKey) -> NewSortKey,
map: fn(T::SortKey) -> NewSortKey,
}
impl<T, PreviousScore, NewScore> SegmentSortKeyComputer
for MappedSegmentSortKeyComputer<T, PreviousScore, NewScore>
for MappedSegmentSortKeyComputer<T, NewScore>
where
T: SegmentSortKeyComputer<SortKey = PreviousScore>,
PreviousScore: 'static + Clone + Send + Sync + PartialOrd,
NewScore: 'static + Clone + Send + Sync + PartialOrd,
PreviousScore: 'static + Clone + Send + Sync,
NewScore: 'static + Clone + Send + Sync,
{
type SortKey = NewScore;
type SegmentSortKey = T::SegmentSortKey;
type SegmentComparator = T::SegmentComparator;
type Buffer = T::Buffer;
fn segment_comparator(&self) -> Self::SegmentComparator {
self.sort_key_computer.segment_comparator()
}
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> Self::SegmentSortKey {
self.sort_key_computer.segment_sort_key(doc, score)
}
fn accept_sort_key_lazy(
fn segment_sort_keys(
&mut self,
doc_id: DocId,
score: Score,
threshold: &Self::SegmentSortKey,
) -> Option<(Ordering, Self::SegmentSortKey)> {
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
buffer: &mut Self::Buffer,
filter: ValueRange<Self::SegmentSortKey>,
) {
self.sort_key_computer
.accept_sort_key_lazy(doc_id, score, threshold)
.segment_sort_keys(input_docs, output, buffer, filter)
}
#[inline(always)]
@@ -294,12 +459,21 @@ where
&mut self,
doc: DocId,
score: Score,
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C>,
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C, Self::Buffer>,
) {
self.sort_key_computer
.compute_sort_key_and_collect(doc, score, top_n_computer);
}
fn compute_sort_keys_and_collect<C: Comparator<Self::SegmentSortKey>>(
&mut self,
docs: &[DocId],
top_n_computer: &mut TopNComputer<Self::SegmentSortKey, DocId, C, Self::Buffer>,
) {
self.sort_key_computer
.compute_sort_keys_and_collect(docs, top_n_computer);
}
fn convert_segment_sort_key(&self, segment_sort_key: Self::SegmentSortKey) -> Self::SortKey {
(self.map)(
self.sort_key_computer
@@ -325,10 +499,6 @@ where
);
type Child = MappedSegmentSortKeyComputer<
<(SortKeyComputer1, (SortKeyComputer2, SortKeyComputer3)) as SortKeyComputer>::Child,
(
SortKeyComputer1::SortKey,
(SortKeyComputer2::SortKey, SortKeyComputer3::SortKey),
),
Self::SortKey,
>;
@@ -352,7 +522,13 @@ where
let sort_key_computer3 = self.2.segment_sort_key_computer(segment_reader)?;
let map = |(sort_key1, (sort_key2, sort_key3))| (sort_key1, sort_key2, sort_key3);
Ok(MappedSegmentSortKeyComputer {
sort_key_computer: (sort_key_computer1, (sort_key_computer2, sort_key_computer3)),
sort_key_computer: ChainSegmentSortKeyComputer {
head: sort_key_computer1,
tail: ChainSegmentSortKeyComputer {
head: sort_key_computer2,
tail: sort_key_computer3,
},
},
map,
})
}
@@ -387,13 +563,6 @@ where
SortKeyComputer1,
(SortKeyComputer2, (SortKeyComputer3, SortKeyComputer4)),
) as SortKeyComputer>::Child,
(
SortKeyComputer1::SortKey,
(
SortKeyComputer2::SortKey,
(SortKeyComputer3::SortKey, SortKeyComputer4::SortKey),
),
),
Self::SortKey,
>;
type SortKey = (
@@ -415,10 +584,16 @@ where
let sort_key_computer3 = self.2.segment_sort_key_computer(segment_reader)?;
let sort_key_computer4 = self.3.segment_sort_key_computer(segment_reader)?;
Ok(MappedSegmentSortKeyComputer {
sort_key_computer: (
sort_key_computer1,
(sort_key_computer2, (sort_key_computer3, sort_key_computer4)),
),
sort_key_computer: ChainSegmentSortKeyComputer {
head: sort_key_computer1,
tail: ChainSegmentSortKeyComputer {
head: sort_key_computer2,
tail: ChainSegmentSortKeyComputer {
head: sort_key_computer3,
tail: sort_key_computer4,
},
},
},
map: |(sort_key1, (sort_key2, (sort_key3, sort_key4)))| {
(sort_key1, sort_key2, sort_key3, sort_key4)
},
@@ -441,6 +616,13 @@ where
}
}
use std::marker::PhantomData;
pub struct FuncSegmentSortKeyComputer<F, TSortKey> {
func: F,
_phantom: PhantomData<TSortKey>,
}
impl<F, SegmentF, TSortKey> SortKeyComputer for F
where
F: 'static + Send + Sync + Fn(&SegmentReader) -> SegmentF,
@@ -448,24 +630,44 @@ where
TSortKey: 'static + PartialOrd + Clone + Send + Sync + std::fmt::Debug,
{
type SortKey = TSortKey;
type Child = SegmentF;
type Child = FuncSegmentSortKeyComputer<SegmentF, TSortKey>;
type Comparator = NaturalComparator;
fn segment_sort_key_computer(&self, segment_reader: &SegmentReader) -> Result<Self::Child> {
Ok((self)(segment_reader))
Ok(FuncSegmentSortKeyComputer {
func: (self)(segment_reader),
_phantom: PhantomData,
})
}
}
impl<F, TSortKey> SegmentSortKeyComputer for F
impl<F, TSortKey> SegmentSortKeyComputer for FuncSegmentSortKeyComputer<F, TSortKey>
where
F: 'static + FnMut(DocId) -> TSortKey,
TSortKey: 'static + PartialOrd + Clone + Send + Sync,
{
type SortKey = TSortKey;
type SegmentSortKey = TSortKey;
type SegmentComparator = NaturalComparator;
type Buffer = ();
fn segment_sort_key(&mut self, doc: DocId, _score: Score) -> TSortKey {
(self)(doc)
(self.func)(doc)
}
fn segment_sort_keys(
&mut self,
input_docs: &[DocId],
output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
_buffer: &mut Self::Buffer,
_filter: ValueRange<Self::SegmentSortKey>,
) {
for &doc in input_docs {
output.push(ComparableDoc {
sort_key: (self.func)(doc),
doc,
});
}
}
/// Convert a segment level score into the global level score.
@@ -474,13 +676,75 @@ where
}
}
pub(crate) fn range_contains_none(range: &ValueRange<Option<u64>>) -> bool {
match range {
ValueRange::All => true,
ValueRange::Inclusive(r) => r.contains(&None),
ValueRange::GreaterThan(_threshold, match_nulls) => *match_nulls,
ValueRange::GreaterThanOrEqual(_threshold, match_nulls) => *match_nulls,
ValueRange::LessThan(_threshold, match_nulls) => *match_nulls,
ValueRange::LessThanOrEqual(_threshold, match_nulls) => *match_nulls,
}
}
pub(crate) fn convert_optional_u64_range_to_u64_range(
range: ValueRange<Option<u64>>,
) -> ValueRange<u64> {
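// In this conversion, a `None` sort key has no u64 representation and is treated as
// lower than every present value. A `None` threshold therefore maps to degenerate
// u64 ranges: "greater than None" admits every present value (plus missing values
// when `match_nulls` is set), while "less than None" admits no present value, encoded
// as the empty range `1..=0`, or as `LessThan(u64::MIN, true)` when only missing
// values should match.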
match range {
ValueRange::Inclusive(r) => {
let start = r.start().unwrap_or(0);
let end = r.end().unwrap_or(u64::MAX);
ValueRange::Inclusive(start..=end)
}
ValueRange::GreaterThan(Some(val), match_nulls) => {
ValueRange::GreaterThan(val, match_nulls)
}
ValueRange::GreaterThan(None, match_nulls) => {
if match_nulls {
ValueRange::All
} else {
ValueRange::Inclusive(u64::MIN..=u64::MAX)
}
}
ValueRange::GreaterThanOrEqual(Some(val), match_nulls) => {
ValueRange::GreaterThanOrEqual(val, match_nulls)
}
ValueRange::GreaterThanOrEqual(None, match_nulls) => {
if match_nulls {
ValueRange::All
} else {
ValueRange::Inclusive(u64::MIN..=u64::MAX)
}
}
ValueRange::LessThan(None, match_nulls) => {
if match_nulls {
ValueRange::LessThan(u64::MIN, true)
} else {
ValueRange::Inclusive(1..=0)
}
}
ValueRange::LessThan(Some(val), match_nulls) => ValueRange::LessThan(val, match_nulls),
ValueRange::LessThanOrEqual(None, match_nulls) => {
if match_nulls {
ValueRange::LessThan(u64::MIN, true)
} else {
ValueRange::Inclusive(1..=0)
}
}
ValueRange::LessThanOrEqual(Some(val), match_nulls) => {
ValueRange::LessThanOrEqual(val, match_nulls)
}
ValueRange::All => ValueRange::All,
}
}
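To make the `None` conventions above concrete, here is a small standalone sketch (plain Rust, independent of the `columnar` and comparator types, and not part of this change) showing that `Option<u64>`'s derived ordering already places `None` below every present value, which is the reading the conversion above relies on:

fn main() {
    // Derived Ord for Option<u64> puts None below every Some value.
    assert!(None < Some(0u64));
    assert!(Some(0u64) < Some(u64::MAX));

    // Hence "greater than None" admits every present value...
    let vals = [None, Some(0u64), Some(7), Some(u64::MAX)];
    assert_eq!(vals.iter().filter(|v| **v > None).count(), 3);

    // ...while "less than None" admits nothing, which is why the conversion
    // above encodes it as an empty u64 range.
    assert!(vals.iter().all(|v| !(*v < None)));
}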
#[cfg(test)]
mod tests {
use std::cmp::Ordering;
use std::sync::atomic::{AtomicUsize, Ordering as AtomicOrdering};
use std::sync::Arc;
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer, TopNComputer};
use crate::schema::Schema;
use crate::{DocId, Index, Order, SegmentReader};
@@ -628,4 +892,178 @@ mod tests {
(200u32, 2u32)
);
}
#[test]
fn test_batch_score_computer_edge_case() {
let score_computer_primary = |_segment_reader: &SegmentReader| |_doc: DocId| 200u32;
let score_computer_secondary = |_segment_reader: &SegmentReader| |_doc: DocId| "b";
let lazy_score_computer = (score_computer_primary, score_computer_secondary);
let index = build_test_index();
let searcher = index.reader().unwrap().searcher();
let mut segment_sort_key_computer = lazy_score_computer
.segment_sort_key_computer(searcher.segment_reader(0))
.unwrap();
let mut top_n_computer =
TopNComputer::new_with_comparator(10, lazy_score_computer.comparator());
// Threshold (200, "a"). Doc is (200, "b"). 200 == 200, "b" > "a". Should be accepted.
top_n_computer.threshold = Some((200, "a"));
let docs = vec![0];
segment_sort_key_computer.compute_sort_keys_and_collect(&docs, &mut top_n_computer);
let results = top_n_computer.into_sorted_vec();
assert_eq!(results.len(), 1);
let result = &results[0];
assert_eq!(result.doc, 0);
assert_eq!(result.sort_key, (200, "b"));
}
}
#[cfg(test)]
mod proptest_tests {
use proptest::prelude::*;
use super::*;
use crate::collector::sort_key::order::*;
// Re-implement logic to interpret ValueRange<Option<u64>> manually to verify expectations
fn range_contains_opt(range: &ValueRange<Option<u64>>, val: &Option<u64>) -> bool {
match range {
ValueRange::All => true,
ValueRange::Inclusive(r) => r.contains(val),
ValueRange::GreaterThan(t, match_nulls) => {
if val.is_none() {
*match_nulls
} else {
val > t
}
}
ValueRange::GreaterThanOrEqual(t, match_nulls) => {
if val.is_none() {
*match_nulls
} else {
val >= t
}
}
ValueRange::LessThan(t, match_nulls) => {
if val.is_none() {
*match_nulls
} else {
val < t
}
}
ValueRange::LessThanOrEqual(t, match_nulls) => {
if val.is_none() {
*match_nulls
} else {
val <= t
}
}
}
}
fn range_contains_u64(range: &ValueRange<u64>, val: &u64) -> bool {
match range {
ValueRange::All => true,
ValueRange::Inclusive(r) => r.contains(val),
ValueRange::GreaterThan(t, _) => val > t,
ValueRange::GreaterThanOrEqual(t, _) => val >= t,
ValueRange::LessThan(t, _) => val < t,
ValueRange::LessThanOrEqual(t, _) => val <= t,
}
}
proptest! {
#[test]
fn test_comparator_consistency_natural_none_is_lower(
threshold in any::<Option<u64>>(),
val in any::<Option<u64>>()
) {
check_comparator::<NaturalComparator>(threshold, val)?;
}
#[test]
fn test_comparator_consistency_reverse(
threshold in any::<Option<u64>>(),
val in any::<Option<u64>>()
) {
check_comparator::<ReverseComparator>(threshold, val)?;
}
#[test]
fn test_comparator_consistency_reverse_none_is_lower(
threshold in any::<Option<u64>>(),
val in any::<Option<u64>>()
) {
check_comparator::<ReverseNoneIsLowerComparator>(threshold, val)?;
}
#[test]
fn test_comparator_consistency_natural_none_is_higher(
threshold in any::<Option<u64>>(),
val in any::<Option<u64>>()
) {
check_comparator::<NaturalNoneIsHigherComparator>(threshold, val)?;
}
}
fn check_comparator<C: Comparator<Option<u64>>>(
threshold: Option<u64>,
val: Option<u64>,
) -> std::result::Result<(), proptest::test_runner::TestCaseError> {
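// Invariant under test: `threshold_to_valuerange(threshold)` accepts exactly the values
// for which `compare(value, threshold) == Ordering::Greater`, and both
// `range_contains_none` and the derived u64 range agree with that interpretation.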
let comparator = C::default();
let range = comparator.threshold_to_valuerange(threshold);
let ordering = comparator.compare(&val, &threshold);
let should_be_in_range = ordering == Ordering::Greater;
let in_range_opt = range_contains_opt(&range, &val);
prop_assert_eq!(
in_range_opt,
should_be_in_range,
"Comparator consistency failed for {:?}. Threshold: {:?}, Val: {:?}, Range: {:?}, \
Ordering: {:?}. range_contains_opt says {}, but compare says {}",
std::any::type_name::<C>(),
threshold,
val,
range,
ordering,
in_range_opt,
should_be_in_range
);
// Check range_contains_none
let expected_none_in_range = range_contains_opt(&range, &None);
let actual_none_in_range = range_contains_none(&range);
prop_assert_eq!(
actual_none_in_range,
expected_none_in_range,
"range_contains_none failed for {:?}. Range: {:?}. Expected (from \
range_contains_opt): {}, Actual: {}",
std::any::type_name::<C>(),
range,
expected_none_in_range,
actual_none_in_range
);
// Check convert_optional_u64_range_to_u64_range
let u64_range = convert_optional_u64_range_to_u64_range(range.clone());
if let Some(v) = val {
let in_u64_range = range_contains_u64(&u64_range, &v);
let in_opt_range = range_contains_opt(&range, &Some(v));
prop_assert_eq!(
in_u64_range,
in_opt_range,
"convert_optional_u64_range_to_u64_range failed for {:?}. Val: {:?}, OptRange: \
{:?}, U64Range: {:?}. Opt says {}, U64 says {}",
std::any::type_name::<C>(),
v,
range,
u64_range,
in_opt_range,
in_u64_range
);
}
Ok(())
}
}


@@ -99,7 +99,12 @@ where
TSegmentSortKeyComputer: SegmentSortKeyComputer,
C: Comparator<TSegmentSortKeyComputer::SegmentSortKey>,
{
pub(crate) topn_computer: TopNComputer<TSegmentSortKeyComputer::SegmentSortKey, DocId, C>,
pub(crate) topn_computer: TopNComputer<
TSegmentSortKeyComputer::SegmentSortKey,
DocId,
C,
TSegmentSortKeyComputer::Buffer,
>,
pub(crate) segment_ord: u32,
pub(crate) segment_sort_key_computer: TSegmentSortKeyComputer,
}
@@ -120,6 +125,11 @@ where
);
}
fn collect_block(&mut self, docs: &[DocId]) {
self.segment_sort_key_computer
.compute_sort_keys_and_collect(docs, &mut self.topn_computer);
}
fn harvest(self) -> Self::Fruit {
let segment_ord = self.segment_ord;
let segment_hits: Vec<(TSegmentSortKeyComputer::SortKey, DocAddress)> = self


@@ -1,64 +0,0 @@
use std::cmp::Ordering;
use serde::{Deserialize, Serialize};
/// Contains a feature (field, score, etc.) of a document along with the document address.
///
/// It guarantees stable sorting: in case of a tie on the feature, the document
/// address is used.
///
/// The REVERSE_ORDER generic parameter controls whether the by-feature order
/// should be reversed, which is useful for achieving, for example, largest-first
/// semantics without having to wrap the feature in a `Reverse`.
#[derive(Clone, Default, Serialize, Deserialize)]
pub struct ComparableDoc<T, D, const REVERSE_ORDER: bool = false> {
/// The feature of the document. In practice, this is
/// any type that implements `PartialOrd`.
pub sort_key: T,
/// The document address. In practice, this is any
/// type that implements `PartialOrd`, and is guaranteed
/// to be unique for each document.
pub doc: D,
}
impl<T: std::fmt::Debug, D: std::fmt::Debug, const R: bool> std::fmt::Debug
for ComparableDoc<T, D, R>
{
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
f.debug_struct(format!("ComparableDoc<_, _ {R}").as_str())
.field("feature", &self.sort_key)
.field("doc", &self.doc)
.finish()
}
}
impl<T: PartialOrd, D: PartialOrd, const R: bool> PartialOrd for ComparableDoc<T, D, R> {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl<T: PartialOrd, D: PartialOrd, const R: bool> Ord for ComparableDoc<T, D, R> {
#[inline]
fn cmp(&self, other: &Self) -> Ordering {
let by_feature = self
.sort_key
.partial_cmp(&other.sort_key)
.map(|ord| if R { ord.reverse() } else { ord })
.unwrap_or(Ordering::Equal);
let lazy_by_doc_address = || self.doc.partial_cmp(&other.doc).unwrap_or(Ordering::Equal);
// In case of a tie on the feature, we sort by ascending
// `DocAddress` in order to ensure a stable sorting of the
// documents.
by_feature.then_with(lazy_by_doc_address)
}
}
impl<T: PartialOrd, D: PartialOrd, const R: bool> PartialEq for ComparableDoc<T, D, R> {
fn eq(&self, other: &Self) -> bool {
self.cmp(other) == Ordering::Equal
}
}
impl<T: PartialOrd, D: PartialOrd, const R: bool> Eq for ComparableDoc<T, D, R> {}


@@ -2,6 +2,7 @@ use std::cmp::Ordering;
use std::fmt;
use std::ops::Range;
use columnar::ValueRange;
use serde::{Deserialize, Serialize};
use super::Collector;
@@ -10,8 +11,7 @@ use crate::collector::sort_key::{
SortByStaticFastValue, SortByString,
};
use crate::collector::sort_key_top_collector::TopBySortKeyCollector;
use crate::collector::top_collector::ComparableDoc;
use crate::collector::{SegmentSortKeyComputer, SortKeyComputer};
use crate::collector::{ComparableDoc, SegmentSortKeyComputer, SortKeyComputer};
use crate::fastfield::FastValue;
use crate::{DocAddress, DocId, Order, Score, SegmentReader};
@@ -23,10 +23,9 @@ use crate::{DocAddress, DocId, Order, Score, SegmentReader};
/// The theoretical complexity for collecting the top `K` out of `N` documents
/// is `O(N + K)`.
///
/// This collector does not guarantee a stable sorting in case of a tie on the
/// document score, for stable sorting `PartialOrd` needs to resolve on other fields
/// like docid in case of score equality.
/// Only then, it is suitable for pagination.
/// This collector guarantees a stable sorting in case of a tie on the
/// document score/sort key: documents are always sorted by ascending
/// document address (`DocAddress`), which serves as the tie breaker.
///
/// ```rust
/// use tantivy::collector::TopDocs;
@@ -325,7 +324,7 @@ impl TopDocs {
sort_key_computer: impl SortKeyComputer<SortKey = TSortKey> + Send + 'static,
) -> impl Collector<Fruit = Vec<(TSortKey, DocAddress)>>
where
TSortKey: 'static + Clone + Send + Sync + PartialOrd + std::fmt::Debug,
TSortKey: 'static + Clone + Send + Sync + std::fmt::Debug,
{
TopBySortKeyCollector::new(sort_key_computer, self.doc_range())
}
@@ -446,7 +445,7 @@ where
F: 'static + Send + Sync + Fn(&SegmentReader) -> TTweakScoreSortKeyFn,
TTweakScoreSortKeyFn: 'static + Fn(DocId, Score) -> TSortKey,
TweakScoreSegmentSortKeyComputer<TTweakScoreSortKeyFn>:
SegmentSortKeyComputer<SortKey = TSortKey>,
SegmentSortKeyComputer<SortKey = TSortKey, SegmentSortKey = TSortKey>,
TSortKey: 'static + PartialOrd + Clone + Send + Sync + std::fmt::Debug,
{
type SortKey = TSortKey;
@@ -481,11 +480,23 @@ where
{
type SortKey = TSortKey;
type SegmentSortKey = TSortKey;
type SegmentComparator = NaturalComparator;
type Buffer = ();
fn segment_sort_key(&mut self, doc: DocId, score: Score) -> TSortKey {
(self.sort_key_fn)(doc, score)
}
fn segment_sort_keys(
&mut self,
_input_docs: &[DocId],
_output: &mut Vec<ComparableDoc<Self::SegmentSortKey, DocId>>,
_buffer: &mut Self::Buffer,
_filter: ValueRange<Self::SegmentSortKey>,
) {
unimplemented!("Batch computation is not supported for tweak score.")
}
/// Convert a segment level score into the global level score.
fn convert_segment_sort_key(&self, sort_key: Self::SegmentSortKey) -> Self::SortKey {
sort_key
@@ -500,16 +511,23 @@ where
///
/// For TopN == 0, it will be relative expensive.
///
/// When using the natural comparator, the top N computer returns the top N elements in
/// descending order, as expected for a top N.
/// The TopNComputer will tiebreak by using ascending `D` (DocId or DocAddress):
/// i.e., in case of a tie on the sort key, the `DocId|DocAddress` are always sorted in
/// ascending order, regardless of the `Comparator` used for the `Score` type.
///
/// NOTE: Items must be `push`ed to the TopNComputer in ascending `DocId|DocAddress` order, as the
/// threshold used to eliminate docs does not include the `DocId` or `DocAddress`: this provides
/// the ascending `DocId|DocAddress` tie-breaking behavior without additional comparisons.
#[derive(Serialize, Deserialize)]
#[serde(from = "TopNComputerDeser<Score, D, C>")]
pub struct TopNComputer<Score, D, C> {
pub struct TopNComputer<Score, D, C, Buffer = ()> {
/// The buffer reverses sort order to get top-semantics instead of bottom-semantics
buffer: Vec<ComparableDoc<Score, D>>,
top_n: usize,
pub(crate) threshold: Option<Score>,
comparator: C,
#[serde(skip)]
scratch: Buffer,
}
// Intermediate struct for TopNComputer for deserialization, to keep vec capacity
@@ -521,7 +539,9 @@ struct TopNComputerDeser<Score, D, C> {
comparator: C,
}
impl<Score, D, C> From<TopNComputerDeser<Score, D, C>> for TopNComputer<Score, D, C> {
impl<Score, D, C, Buffer> From<TopNComputerDeser<Score, D, C>> for TopNComputer<Score, D, C, Buffer>
where Buffer: Default
{
fn from(mut value: TopNComputerDeser<Score, D, C>) -> Self {
let expected_cap = value.top_n.max(1) * 2;
let current_cap = value.buffer.capacity();
@@ -536,12 +556,15 @@ impl<Score, D, C> From<TopNComputerDeser<Score, D, C>> for TopNComputer<Score, D
top_n: value.top_n,
threshold: value.threshold,
comparator: value.comparator,
scratch: Buffer::default(),
}
}
}
impl<Score: std::fmt::Debug, D, C> std::fmt::Debug for TopNComputer<Score, D, C>
where C: Comparator<Score>
impl<Score: std::fmt::Debug, D, C, Buffer> std::fmt::Debug for TopNComputer<Score, D, C, Buffer>
where
C: Comparator<Score>,
Buffer: std::fmt::Debug,
{
fn fmt(&self, f: &mut fmt::Formatter) -> std::fmt::Result {
f.debug_struct("TopNComputer")
@@ -549,12 +572,13 @@ where C: Comparator<Score>
.field("top_n", &self.top_n)
.field("current_threshold", &self.threshold)
.field("comparator", &self.comparator)
.field("scratch", &self.scratch)
.finish()
}
}
// Custom clone to keep capacity
impl<Score: Clone, D: Clone, C: Clone> Clone for TopNComputer<Score, D, C> {
impl<Score: Clone, D: Clone, C: Clone, Buffer: Clone> Clone for TopNComputer<Score, D, C, Buffer> {
fn clone(&self) -> Self {
let mut buffer_clone = Vec::with_capacity(self.buffer.capacity());
buffer_clone.extend(self.buffer.iter().cloned());
@@ -563,15 +587,17 @@ impl<Score: Clone, D: Clone, C: Clone> Clone for TopNComputer<Score, D, C> {
top_n: self.top_n,
threshold: self.threshold.clone(),
comparator: self.comparator.clone(),
scratch: self.scratch.clone(),
}
}
}
impl<TSortKey, D> TopNComputer<TSortKey, D, ReverseComparator>
impl<TSortKey, D> TopNComputer<TSortKey, D, ReverseComparator, ()>
where
D: Ord,
TSortKey: Clone,
NaturalComparator: Comparator<TSortKey>,
ReverseComparator: Comparator<TSortKey>,
{
/// Create a new `TopNComputer`.
/// Internally it will allocate a buffer of size `2 * top_n`.
@@ -580,30 +606,38 @@ where
}
}
impl<TSortKey, D, C> TopNComputer<TSortKey, D, C>
impl<TSortKey, D, C, Buffer> TopNComputer<TSortKey, D, C, Buffer>
where
D: Ord,
TSortKey: Clone,
C: Comparator<TSortKey>,
Buffer: Default,
{
/// Create a new `TopNComputer`.
/// Internally it will allocate a buffer of size `2 * top_n`.
/// Internally it will allocate a buffer of size `(top_n.max(1) * 2) +
/// COLLECT_BLOCK_BUFFER_LEN`.
pub fn new_with_comparator(top_n: usize, comparator: C) -> Self {
let vec_cap = top_n.max(1) * 2;
// We ensure that there is always enough space to include an entire block in the buffer if
// need be, so that `push_block_lazy` can avoid checking capacity inside its loop.
let vec_cap = (top_n.max(1) * 2) + crate::COLLECT_BLOCK_BUFFER_LEN;
TopNComputer {
buffer: Vec::with_capacity(vec_cap),
top_n,
threshold: None,
comparator,
scratch: Buffer::default(),
}
}
/// Push a new document to the top n.
/// If the document is below the current threshold, it will be ignored.
///
/// NOTE: `push` must be called in ascending `DocId`/`DocAddress` order.
#[inline]
pub fn push(&mut self, sort_key: TSortKey, doc: D) {
if let Some(last_median) = &self.threshold {
if self.comparator.compare(&sort_key, last_median) == Ordering::Less {
// See the struct docs for an explanation of why this comparison is strict.
if self.comparator.compare(&sort_key, last_median) != Ordering::Greater {
return;
}
}
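As an illustration of why rejecting keys that merely tie the threshold loses nothing once ties are broken by ascending doc id, here is a standalone sketch (plain Rust, not using the tantivy types): every already-buffered candidate with the threshold key has a smaller doc id than any later push, so a later tie can never make it into the top N.

fn main() {
    // Top-2 candidates already kept, pushed in ascending doc order: (sort_key, doc).
    let kept = [(10u64, 1u32), (10, 2)];
    let threshold = 10u64; // sort key of the current 2nd-best element

    // A later document ties the threshold but necessarily has a larger doc id.
    let late = (10u64, 5u32);

    // Ranking rule: higher sort key first, then ascending doc id.
    let beats = |a: &(u64, u32), b: &(u64, u32)| a.0 > b.0 || (a.0 == b.0 && a.1 < b.1);

    // Every kept element beats the late tie, so buffering it would be wasted work.
    assert!(kept.iter().all(|k| beats(k, &late)));
    assert_eq!(late.0, threshold);
}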
@@ -615,24 +649,34 @@ where
// At this point, we need to have established that the doc is above the threshold.
#[inline(always)]
pub(crate) fn append_doc(&mut self, doc: D, sort_key: TSortKey) {
if self.buffer.len() == self.buffer.capacity() {
let median = self.truncate_top_n();
self.threshold = Some(median);
}
// This cannot panic, because we truncate_median will at least remove one element, since
// the min capacity is 2.
self.reserve(1);
// This cannot panic, because we've reserved room for one element.
let comparable_doc = ComparableDoc { doc, sort_key };
push_assuming_capacity(comparable_doc, &mut self.buffer);
}
// Ensure that there is capacity to push `additional` more elements without resizing.
#[inline(always)]
pub(crate) fn reserve(&mut self, additional: usize) {
if self.buffer.len() + additional > self.buffer.capacity() {
let median = self.truncate_top_n();
debug_assert!(self.buffer.len() + additional <= self.buffer.capacity());
self.threshold = Some(median);
}
}
pub(crate) fn buffer_and_scratch(
&mut self,
) -> (&mut Vec<ComparableDoc<TSortKey, D>>, &mut Buffer) {
(&mut self.buffer, &mut self.scratch)
}
#[inline(never)]
fn truncate_top_n(&mut self) -> TSortKey {
// Use select_nth_unstable to find the top nth score
let (_, median_el, _) = self.buffer.select_nth_unstable_by(self.top_n, |lhs, rhs| {
self.comparator
.compare(&rhs.sort_key, &lhs.sort_key)
.then_with(|| lhs.doc.cmp(&rhs.doc))
});
let (_, median_el, _) = self
.buffer
.select_nth_unstable_by(self.top_n, |lhs, rhs| self.comparator.compare_doc(lhs, rhs));
let median_score = median_el.sort_key.clone();
// Remove all elements below the top_n
@@ -646,11 +690,8 @@ where
if self.buffer.len() > self.top_n {
self.truncate_top_n();
}
self.buffer.sort_unstable_by(|left, right| {
self.comparator
.compare(&right.sort_key, &left.sort_key)
.then_with(|| left.doc.cmp(&right.doc))
});
self.buffer
.sort_unstable_by(|left, right| self.comparator.compare_doc(left, right));
self.buffer
}
@@ -669,7 +710,7 @@ where
//
// Panics if there is not enough capacity to add an element.
#[inline(always)]
fn push_assuming_capacity<T>(el: T, buf: &mut Vec<T>) {
pub fn push_assuming_capacity<T>(el: T, buf: &mut Vec<T>) {
let prev_len = buf.len();
assert!(prev_len < buf.capacity());
// This is mimicking the current (non-stabilized) implementation in std.
@@ -686,9 +727,10 @@ mod tests {
use proptest::prelude::*;
use super::{TopDocs, TopNComputer};
use crate::collector::sort_key::{ComparatorEnum, NaturalComparator, ReverseComparator};
use crate::collector::top_collector::ComparableDoc;
use crate::collector::{Collector, DocSetCollector};
use crate::collector::sort_key::{
Comparator, ComparatorEnum, NaturalComparator, ReverseComparator,
};
use crate::collector::{Collector, ComparableDoc, DocSetCollector};
use crate::query::{AllQuery, Query, QueryParser};
use crate::schema::{Field, Schema, FAST, STORED, TEXT};
use crate::time::format_description::well_known::Rfc3339;
@@ -755,6 +797,33 @@ mod tests {
);
}
#[test]
fn test_topn_computer_duplicates() {
let mut computer: TopNComputer<u32, u32, NaturalComparator> =
TopNComputer::new_with_comparator(2, NaturalComparator);
computer.push(1u32, 1u32);
computer.push(1u32, 2u32);
computer.push(1u32, 3u32);
computer.push(1u32, 4u32);
computer.push(1u32, 5u32);
// In the presence of duplicates, DocIds are always in ascending order.
assert_eq!(
computer.into_sorted_vec(),
&[
ComparableDoc {
sort_key: 1u32,
doc: 1u32,
},
ComparableDoc {
sort_key: 1u32,
doc: 2u32,
}
]
);
}
#[test]
fn test_topn_computer_no_panic() {
for top_n in 0..10 {
@@ -772,14 +841,17 @@ mod tests {
#[test]
fn test_topn_computer_asc_prop(
limit in 0..10_usize,
docs in proptest::collection::vec((0..100_u64, 0..100_u64), 0..100_usize),
mut docs in proptest::collection::vec((0..100_u64, 0..100_u64), 0..100_usize),
) {
// NB: TopNComputer must receive inputs in ascending DocId order.
docs.sort_by_key(|(_, doc_id)| *doc_id);
let mut computer: TopNComputer<_, _, ReverseComparator> = TopNComputer::new_with_comparator(limit, ReverseComparator);
for (feature, doc) in &docs {
computer.push(*feature, *doc);
}
let mut comparable_docs: Vec<ComparableDoc<u64, u64>> = docs.into_iter().map(|(sort_key, doc)| ComparableDoc { sort_key, doc }).collect::<Vec<_>>();
comparable_docs.sort();
let mut comparable_docs =
docs.into_iter().map(|(sort_key, doc)| ComparableDoc { sort_key, doc }).collect::<Vec<_>>();
comparable_docs.sort_by(|l, r| ReverseComparator.compare_doc(l, r));
comparable_docs.truncate(limit);
prop_assert_eq!(
computer.into_sorted_vec(),
@@ -1363,11 +1435,11 @@ mod tests {
#[test]
fn test_top_field_collect_string_prop(
order in prop_oneof!(Just(Order::Desc), Just(Order::Asc)),
limit in 1..256_usize,
offset in 0..256_usize,
limit in 1..32_usize,
offset in 0..32_usize,
segments_terms in
proptest::collection::vec(
proptest::collection::vec(0..32_u8, 1..32_usize),
proptest::collection::vec(0..64_u8, 1..256_usize),
0..8_usize,
)
) {
@@ -1406,15 +1478,14 @@ mod tests {
// Using the TopDocs collector should always be equivalent to sorting, skipping the
// offset, and then taking the limit.
let sorted_docs: Vec<_> = if order.is_desc() {
let mut comparable_docs: Vec<ComparableDoc<_, _, true>> =
let sorted_docs: Vec<_> = {
let mut comparable_docs: Vec<ComparableDoc<_, _>> =
all_results.into_iter().map(|(sort_key, doc)| ComparableDoc { sort_key, doc}).collect();
comparable_docs.sort();
comparable_docs.into_iter().map(|cd| (cd.sort_key, cd.doc)).collect()
} else {
let mut comparable_docs: Vec<ComparableDoc<_, _, false>> =
all_results.into_iter().map(|(sort_key, doc)| ComparableDoc { sort_key, doc}).collect();
comparable_docs.sort();
if order.is_desc() {
comparable_docs.sort_by(|l, r| NaturalComparator.compare_doc(l, r));
} else {
comparable_docs.sort_by(|l, r| ReverseComparator.compare_doc(l, r));
}
comparable_docs.into_iter().map(|cd| (cd.sort_key, cd.doc)).collect()
};
let expected_docs = sorted_docs.into_iter().skip(offset).take(limit).collect::<Vec<_>>();
@@ -1693,7 +1764,8 @@ mod tests {
#[test]
fn test_top_n_computer_not_at_capacity() {
let mut top_n_computer = TopNComputer::new_with_comparator(4, NaturalComparator);
let mut top_n_computer: TopNComputer<f32, u32, _, ()> =
TopNComputer::new_with_comparator(4, NaturalComparator);
top_n_computer.append_doc(1, 0.8);
top_n_computer.append_doc(3, 0.2);
top_n_computer.append_doc(5, 0.3);
@@ -1718,7 +1790,8 @@ mod tests {
#[test]
fn test_top_n_computer_at_capacity() {
let mut top_collector = TopNComputer::new_with_comparator(4, NaturalComparator);
let mut top_collector: TopNComputer<f32, u32, _, ()> =
TopNComputer::new_with_comparator(4, NaturalComparator);
top_collector.append_doc(1, 0.8);
top_collector.append_doc(3, 0.2);
top_collector.append_doc(5, 0.3);
@@ -1755,12 +1828,14 @@ mod tests {
let doc_ids_collection = [4, 5, 6];
let score = 3.3f32;
let mut top_collector_limit_2 = TopNComputer::new_with_comparator(2, NaturalComparator);
let mut top_collector_limit_2: TopNComputer<f32, u32, _, ()> =
TopNComputer::new_with_comparator(2, NaturalComparator);
for id in &doc_ids_collection {
top_collector_limit_2.append_doc(*id, score);
}
let mut top_collector_limit_3 = TopNComputer::new_with_comparator(3, NaturalComparator);
let mut top_collector_limit_3: TopNComputer<f32, u32, _, ()> =
TopNComputer::new_with_comparator(3, NaturalComparator);
for id in &doc_ids_collection {
top_collector_limit_3.append_doc(*id, score);
}
@@ -1781,15 +1856,16 @@ mod bench {
#[bench]
fn bench_top_segment_collector_collect_at_capacity(b: &mut Bencher) {
let mut top_collector = TopNComputer::new_with_comparator(100, NaturalComparator);
let mut top_collector: TopNComputer<f32, u32, _, ()> =
TopNComputer::new_with_comparator(100, NaturalComparator);
for i in 0..100 {
top_collector.append_doc(i, 0.8);
top_collector.append_doc(i as u32, 0.8);
}
b.iter(|| {
for i in 0..100 {
top_collector.append_doc(i, 0.8);
top_collector.append_doc(i as u32, 0.8);
}
});
}


@@ -36,6 +36,7 @@ fn path_for_version(version: &str) -> String {
/// feature flag quickwit uses a different dictionary type
#[test]
#[cfg(not(feature = "quickwit"))]
#[ignore = "test incompatible with fixed-width footer changes"]
fn test_format_6() {
let path = path_for_version("6");
@@ -47,6 +48,7 @@ fn test_format_6() {
/// feature flag quickwit uses a different dictionary type
#[test]
#[cfg(not(feature = "quickwit"))]
#[ignore = "test incompatible with fixed-width footer changes"]
fn test_format_7() {
let path = path_for_version("7");


@@ -1,5 +1,5 @@
use std::collections::BTreeMap;
use std::sync::Arc;
use std::sync::{Arc, OnceLock};
use std::{fmt, io};
use crate::collector::Collector;
@@ -86,7 +86,7 @@ impl Searcher {
/// The searcher uses the segment ordinal to route the
/// request to the right `Segment`.
pub fn doc<D: DocumentDeserialize>(&self, doc_address: DocAddress) -> crate::Result<D> {
let store_reader = &self.inner.store_readers[doc_address.segment_ord as usize];
let store_reader = &self.inner.store_readers()[doc_address.segment_ord as usize];
store_reader.get(doc_address.doc_id)
}
@@ -96,7 +96,7 @@ impl Searcher {
pub fn doc_store_cache_stats(&self) -> CacheStats {
let cache_stats: CacheStats = self
.inner
.store_readers
.store_readers()
.iter()
.map(|reader| reader.cache_stats())
.sum();
@@ -110,7 +110,7 @@ impl Searcher {
doc_address: DocAddress,
) -> crate::Result<D> {
let executor = self.inner.index.search_executor();
let store_reader = &self.inner.store_readers[doc_address.segment_ord as usize];
let store_reader = &self.inner.store_readers()[doc_address.segment_ord as usize];
store_reader.get_async(doc_address.doc_id, executor).await
}
@@ -259,8 +259,9 @@ impl From<Arc<SearcherInner>> for Searcher {
pub(crate) struct SearcherInner {
schema: Schema,
index: Index,
doc_store_cache_num_blocks: usize,
segment_readers: Vec<SegmentReader>,
store_readers: Vec<StoreReader>,
store_readers: OnceLock<Vec<StoreReader>>,
generation: TrackedObject<SearcherGeneration>,
}
@@ -281,19 +282,30 @@ impl SearcherInner {
generation.segments(),
"Set of segments referenced by this Searcher and its SearcherGeneration must match"
);
let store_readers: Vec<StoreReader> = segment_readers
.iter()
.map(|segment_reader| segment_reader.get_store_reader(doc_store_cache_num_blocks))
.collect::<io::Result<Vec<_>>>()?;
Ok(SearcherInner {
schema,
index,
doc_store_cache_num_blocks,
segment_readers,
store_readers,
store_readers: OnceLock::default(),
generation,
})
}
#[inline]
fn store_readers(&self) -> &[StoreReader] {
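// Store readers are built lazily on first access, so searchers that never fetch
// stored documents do not pay for opening the doc store.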
self.store_readers.get_or_init(|| {
self.segment_readers
.iter()
.map(|segment_reader| {
segment_reader
.get_store_reader(self.doc_store_cache_num_blocks)
.expect("should be able to get store reader")
})
.collect()
})
}
}
impl fmt::Debug for Searcher {


@@ -1,12 +1,20 @@
use std::any::Any;
use std::collections::HashSet;
use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
use std::{fmt, io, thread};
use log::Level;
use crate::directory::directory_lock::Lock;
use crate::directory::error::{DeleteError, LockError, OpenReadError, OpenWriteError};
use crate::directory::{FileHandle, FileSlice, WatchCallback, WatchHandle, WritePtr};
use crate::directory::{
FileHandle, FileSlice, TerminatingWrite, WatchCallback, WatchHandle, WritePtr,
};
use crate::index::SegmentMetaInventory;
use crate::IndexMeta;
/// The retry logic for acquiring locks is pretty simple.
/// We just retry `n` times after a given `duration`, both
@@ -56,7 +64,7 @@ impl<T: Send + Sync + 'static> From<Box<T>> for DirectoryLock {
impl Drop for DirectoryLockGuard {
fn drop(&mut self) {
if let Err(e) = self.directory.delete(&self.path) {
error!("Failed to remove the lock file. {e:?}");
error!("Failed to remove the lock file. {:?}", e);
}
}
}
@@ -97,6 +105,8 @@ fn retry_policy(is_blocking: bool) -> RetryPolicy {
}
}
pub type DirectoryPanicHandler = Arc<dyn Fn(Box<dyn Any + Send>) + Send + Sync + 'static>;
/// Write-once read many (WORM) abstraction for where
/// tantivy's data should be stored.
///
@@ -135,6 +145,10 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// Returns true if and only if the file exists
fn exists(&self, path: &Path) -> Result<bool, OpenReadError>;
/// Returns a boxed `TerminatingWrite` object, which the default `open_write`
/// implementation wraps in a `BufWriter`.
fn open_write_inner(&self, path: &Path) -> Result<Box<dyn TerminatingWrite>, OpenWriteError>;
/// Opens a writer for the *virtual file* associated with
/// a [`Path`].
///
@@ -161,7 +175,12 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// panic! if `flush` was not called.
///
/// The file may not previously exist.
fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError>;
fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
Ok(io::BufWriter::with_capacity(
self.bufwriter_capacity(),
self.open_write_inner(path)?,
))
}
/// Reads the full content file that has been written using
/// [`Directory::atomic_write()`].
@@ -223,6 +242,75 @@ pub trait Directory: DirectoryClone + fmt::Debug + Send + Sync + 'static {
/// `OnCommitWithDelay` `ReloadPolicy`. Not implementing watch in a `Directory` only prevents
/// the `OnCommitWithDelay` `ReloadPolicy` to work properly.
fn watch(&self, watch_callback: WatchCallback) -> crate::Result<WatchHandle>;
/// Allows the directory to list managed files, overriding the ManagedDirectory's default
/// list_managed_files
fn list_managed_files(&self) -> crate::Result<HashSet<PathBuf>> {
Err(crate::TantivyError::InternalError(
"list_managed_files not implemented".to_string(),
))
}
/// Allows the directory to register a file as managed, overriding the ManagedDirectory's
/// default register_file_as_managed
fn register_files_as_managed(
&self,
_files: Vec<PathBuf>,
_overwrite: bool,
) -> crate::Result<()> {
Err(crate::TantivyError::InternalError(
"register_files_as_managed not implemented".to_string(),
))
}
/// Allows the directory to save IndexMeta, overriding the SegmentUpdater's default save_meta
fn save_metas(
&self,
_metas: &IndexMeta,
_previous_metas: &IndexMeta,
_payload: &mut (dyn Any + '_),
) -> crate::Result<()> {
Err(crate::TantivyError::InternalError(
"save_meta not implemented".to_string(),
))
}
/// Allows the directory to load IndexMeta, overriding the SegmentUpdater's default load_meta
fn load_metas(&self, _inventory: &SegmentMetaInventory) -> crate::Result<IndexMeta> {
Err(crate::TantivyError::InternalError(
"load_metas not implemented".to_string(),
))
}
/// Returns true if this directory supports garbage collection. The default assumption is
/// `true`
fn supports_garbage_collection(&self) -> bool {
true
}
/// Return a panic handler to be assigned to the various thread pools that may be created
///
/// The default is [`None`], which indicates that an unhandled panic from a thread pool will
/// abort the process
fn panic_handler(&self) -> Option<DirectoryPanicHandler> {
None
}
/// Returns true if this directory requires that tantivy cancel whatever
/// operation(s) it might be doing. Typically this is just for the background
/// merge processes, but it could be used for anything.
fn wants_cancel(&self) -> bool {
false
}
/// Send a logging message to the Directory to handle in its own way
fn log(&self, message: &str) {
log!(Level::Info, "{message}");
}
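/// Capacity of the `BufWriter` created by the default `open_write` implementation.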
fn bufwriter_capacity(&self) -> usize {
8192
}
}
/// DirectoryClone


@@ -58,3 +58,9 @@ pub static META_LOCK: Lazy<Lock> = Lazy::new(|| Lock {
filepath: PathBuf::from(".tantivy-meta.lock"),
is_blocking: true,
});
#[allow(missing_docs)]
pub static MANAGED_LOCK: Lazy<Lock> = Lazy::new(|| Lock {
filepath: PathBuf::from(".tantivy-managed.lock"),
is_blocking: true,
});

View File

@@ -9,6 +9,7 @@ use crc32fast::Hasher;
use crate::directory::{WatchCallback, WatchCallbackList, WatchHandle};
#[allow(dead_code)]
const POLLING_INTERVAL: Duration = Duration::from_millis(if cfg!(test) { 1 } else { 500 });
// Watches a file and executes registered callbacks when the file is modified.
@@ -18,6 +19,7 @@ pub struct FileWatcher {
state: Arc<AtomicUsize>, // 0: new, 1: runnable, 2: terminated
}
#[allow(dead_code)]
impl FileWatcher {
pub fn new(path: &Path) -> FileWatcher {
FileWatcher {

View File

@@ -7,15 +7,14 @@
use std::io;
use std::io::Write;
use common::{BinarySerializable, CountingWriter, DeserializeFrom, FixedSize, HasLen};
use common::{BinarySerializable, HasLen};
use crc32fast::Hasher;
use serde::{Deserialize, Serialize};
use crate::directory::error::Incompatibility;
use crate::directory::{AntiCallToken, FileSlice, TerminatingWrite};
use crate::{Version, INDEX_FORMAT_OLDEST_SUPPORTED_VERSION, INDEX_FORMAT_VERSION};
const FOOTER_MAX_LEN: u32 = 50_000;
pub const FOOTER_LEN: usize = 24;
/// The magic byte of the footer to identify corruption
/// or an old version of the footer.
@@ -24,7 +23,7 @@ const FOOTER_MAGIC_NUMBER: u32 = 1337;
type CrcHashU32 = u32;
/// A Footer is appended to every file
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
#[derive(Debug, Clone, PartialEq)]
pub struct Footer {
/// The version of the index format
pub version: Version,
@@ -41,34 +40,45 @@ impl Footer {
pub(crate) fn crc(&self) -> CrcHashU32 {
self.crc
}
pub(crate) fn append_footer<W: io::Write>(&self, mut write: &mut W) -> io::Result<()> {
let mut counting_write = CountingWriter::wrap(&mut write);
counting_write.write_all(serde_json::to_string(&self)?.as_ref())?;
let footer_payload_len = counting_write.written_bytes();
BinarySerializable::serialize(&(footer_payload_len as u32), write)?;
pub fn append_footer<W: io::Write>(&self, write: &mut W) -> io::Result<()> {
// 24 bytes
BinarySerializable::serialize(&self.version.major, write)?;
BinarySerializable::serialize(&self.version.minor, write)?;
BinarySerializable::serialize(&self.version.patch, write)?;
BinarySerializable::serialize(&self.version.index_format_version, write)?;
BinarySerializable::serialize(&self.crc, write)?;
BinarySerializable::serialize(&FOOTER_MAGIC_NUMBER, write)?;
Ok(())
}
/// Extracts the tantivy Footer from the file and returns the footer and the rest of the file
pub fn extract_footer(file: FileSlice) -> io::Result<(Footer, FileSlice)> {
if file.len() < 4 {
if file.len() < FOOTER_LEN {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
format!(
"File corrupted. The file is smaller than 4 bytes (len={}).",
"File corrupted. The file is too small to contain the {FOOTER_LEN} byte \
footer (len={}).",
file.len()
),
));
}
let footer_metadata_len = <(u32, u32)>::SIZE_IN_BYTES;
let (footer_len, footer_magic_byte): (u32, u32) = file
.slice_from_end(footer_metadata_len)
.read_bytes()?
.as_ref()
.deserialize()?;
let (body_slice, footer_slice) = file.split_from_end(FOOTER_LEN);
let footer_bytes = footer_slice.read_bytes()?;
let mut footer_bytes = footer_bytes.as_slice();
let footer = Footer {
version: Version {
major: u32::deserialize(&mut footer_bytes)?,
minor: u32::deserialize(&mut footer_bytes)?,
patch: u32::deserialize(&mut footer_bytes)?,
index_format_version: u32::deserialize(&mut footer_bytes)?,
},
crc: u32::deserialize(&mut footer_bytes)?,
};
let footer_magic_byte = u32::deserialize(&mut footer_bytes)?;
if footer_magic_byte != FOOTER_MAGIC_NUMBER {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
@@ -78,38 +88,12 @@ impl Footer {
));
}
if footer_len > FOOTER_MAX_LEN {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"Footer seems invalid as it suggests a footer len of {footer_len}. File is \
corrupted, or the index was created with a different & old version of \
tantivy."
),
));
}
let total_footer_size = footer_len as usize + footer_metadata_len;
if file.len() < total_footer_size {
return Err(io::Error::new(
io::ErrorKind::UnexpectedEof,
format!(
"File corrupted. The file is smaller than it's footer bytes \
(len={total_footer_size})."
),
));
}
let footer: Footer =
serde_json::from_slice(&file.read_bytes_slice(
file.len() - total_footer_size..file.len() - footer_metadata_len,
)?)?;
let body = file.slice_to(file.len() - total_footer_size);
Ok((footer, body))
Ok((footer, body_slice))
}
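
As a rough aid (not from this patch), the fixed footer written above is six `u32` values -- major, minor, patch, index_format_version, crc, magic -- which is where `FOOTER_LEN = 24` comes from. The sketch below assumes little-endian `u32` encoding and uses made-up version/crc values; the real bytes are whatever `BinarySerializable` emits.

fn encode_footer(version: [u32; 4], crc: u32, magic: u32) -> Vec<u8> {
    let mut buf = Vec::with_capacity(24);
    // Six u32 fields, 4 bytes each: 6 * 4 = 24 bytes.
    for word in version.iter().copied().chain([crc, magic]) {
        buf.extend_from_slice(&word.to_le_bytes());
    }
    buf
}

fn main() {
    // Illustrative version/crc values only.
    let footer = encode_footer([0, 25, 0, 7], 0xDEAD_BEEF, 1337);
    assert_eq!(footer.len(), 24); // FOOTER_LEN
    // The magic number occupies the last four bytes, which is what extract_footer
    // validates after slicing the final FOOTER_LEN bytes off the file.
    assert_eq!(u32::from_le_bytes(footer[20..24].try_into().unwrap()), 1337);
}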
/// Confirms that the index will be read correctly by this version of tantivy
/// Has to be called after `extract_footer` to make sure it's not accessing uninitialised memory
#[allow(dead_code)]
pub fn is_compatible(&self) -> Result<(), Incompatibility> {
const SUPPORTED_INDEX_FORMAT_VERSION_RANGE: std::ops::RangeInclusive<u32> =
INDEX_FORMAT_OLDEST_SUPPORTED_VERSION..=INDEX_FORMAT_VERSION;
@@ -188,6 +172,10 @@ mod tests {
fn test_deserialize_footer_missing_magic_byte() {
let mut buf: Vec<u8> = vec![];
BinarySerializable::serialize(&0_u32, &mut buf).unwrap();
BinarySerializable::serialize(&0_u32, &mut buf).unwrap();
BinarySerializable::serialize(&0_u32, &mut buf).unwrap();
BinarySerializable::serialize(&0_u32, &mut buf).unwrap();
BinarySerializable::serialize(&0_u32, &mut buf).unwrap();
let wrong_magic_byte: u32 = 5555;
BinarySerializable::serialize(&wrong_magic_byte, &mut buf).unwrap();
@@ -205,7 +193,6 @@ mod tests {
#[test]
fn test_deserialize_footer_wrong_filesize() {
let mut buf: Vec<u8> = vec![];
BinarySerializable::serialize(&100_u32, &mut buf).unwrap();
BinarySerializable::serialize(&FOOTER_MAGIC_NUMBER, &mut buf).unwrap();
let owned_bytes = OwnedBytes::new(buf);
@@ -215,27 +202,7 @@ mod tests {
assert_eq!(err.kind(), io::ErrorKind::UnexpectedEof);
assert_eq!(
err.to_string(),
"File corrupted. The file is smaller than it\'s footer bytes (len=108)."
);
}
#[test]
fn test_deserialize_too_large_footer() {
let mut buf: Vec<u8> = vec![];
let footer_length = super::FOOTER_MAX_LEN + 1;
BinarySerializable::serialize(&footer_length, &mut buf).unwrap();
BinarySerializable::serialize(&FOOTER_MAGIC_NUMBER, &mut buf).unwrap();
let owned_bytes = OwnedBytes::new(buf);
let fileslice = FileSlice::new(Arc::new(owned_bytes));
let err = Footer::extract_footer(fileslice).unwrap_err();
assert_eq!(err.kind(), io::ErrorKind::InvalidData);
assert_eq!(
err.to_string(),
"Footer seems invalid as it suggests a footer len of 50001. File is corrupted, or the \
index was created with a different & old version of tantivy."
"File corrupted. The file is too small to contain the 24 byte footer (len=4)."
);
}
}

View File

@@ -1,20 +1,22 @@
use std::any::Any;
use std::collections::HashSet;
use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock, RwLockWriteGuard};
use std::sync::Arc;
use std::{io, result};
use crc32fast::Hasher;
use crate::core::MANAGED_FILEPATH;
use crate::directory::error::{DeleteError, LockError, OpenReadError, OpenWriteError};
use crate::directory::footer::{Footer, FooterProxy};
use crate::directory::footer::{Footer, FooterProxy, FOOTER_LEN};
use crate::directory::{
DirectoryLock, FileHandle, FileSlice, GarbageCollectionResult, Lock, WatchCallback,
WatchHandle, WritePtr, META_LOCK,
DirectoryLock, DirectoryPanicHandler, FileHandle, FileSlice, GarbageCollectionResult, Lock,
TerminatingWrite, WatchCallback, WatchHandle, MANAGED_LOCK, META_LOCK,
};
use crate::error::DataCorruption;
use crate::Directory;
use crate::index::SegmentMetaInventory;
use crate::{Directory, IndexMeta};
/// Returns true if the file is "managed".
/// Non-managed files are not subject to garbage collection.
@@ -39,9 +41,9 @@ fn is_managed(path: &Path) -> bool {
#[derive(Debug)]
pub struct ManagedDirectory {
directory: Box<dyn Directory>,
meta_informations: Arc<RwLock<MetaInformation>>,
}
#[allow(unused)]
#[derive(Debug, Default)]
struct MetaInformation {
managed_paths: HashSet<PathBuf>,
@@ -51,9 +53,9 @@ struct MetaInformation {
/// that were created by tantivy.
fn save_managed_paths(
directory: &dyn Directory,
wlock: &RwLockWriteGuard<'_, MetaInformation>,
managed_paths: &HashSet<PathBuf>,
) -> io::Result<()> {
let mut w = serde_json::to_vec(&wlock.managed_paths)?;
let mut w = serde_json::to_vec(managed_paths)?;
writeln!(&mut w)?;
directory.atomic_write(&MANAGED_FILEPATH, &w[..])?;
Ok(())
@@ -62,33 +64,37 @@ fn save_managed_paths(
impl ManagedDirectory {
/// Wraps a directory as managed directory.
pub fn wrap(directory: Box<dyn Directory>) -> crate::Result<ManagedDirectory> {
match directory.atomic_read(&MANAGED_FILEPATH) {
Ok(data) => {
let managed_files_json = String::from_utf8_lossy(&data);
let managed_files: HashSet<PathBuf> = serde_json::from_str(&managed_files_json)
.map_err(|e| {
DataCorruption::new(
MANAGED_FILEPATH.to_path_buf(),
format!("Managed file cannot be deserialized: {e:?}. "),
)
})?;
Ok(ManagedDirectory {
directory,
meta_informations: Arc::new(RwLock::new(MetaInformation {
managed_paths: managed_files,
})),
})
}
Err(OpenReadError::FileDoesNotExist(_)) => Ok(ManagedDirectory {
directory,
meta_informations: Arc::default(),
}),
io_err @ Err(OpenReadError::IoError { .. }) => Err(io_err.err().unwrap().into()),
Err(OpenReadError::IncompatibleIndex(incompatibility)) => {
// For the moment, this should never happen `meta.json`
// do not have any footer and cannot detect incompatibility.
Err(crate::TantivyError::IncompatibleIndex(incompatibility))
Ok(ManagedDirectory { directory })
}
pub fn list_managed_files(&self) -> crate::Result<HashSet<PathBuf>> {
match self.directory.list_managed_files() {
Ok(managed_files) => Ok(managed_files),
Err(crate::TantivyError::InternalError(_)) => {
match self.directory.atomic_read(&MANAGED_FILEPATH) {
Ok(data) => {
let managed_files_json = String::from_utf8_lossy(&data);
let managed_files: HashSet<PathBuf> =
serde_json::from_str(&managed_files_json).map_err(|e| {
DataCorruption::new(
MANAGED_FILEPATH.to_path_buf(),
format!("Managed file cannot be deserialized: {e:?}. "),
)
})?;
Ok(managed_files)
}
Err(OpenReadError::FileDoesNotExist(_)) => Ok(HashSet::new()),
io_err @ Err(OpenReadError::IoError { .. }) => {
Err(io_err.err().unwrap().into())
}
Err(OpenReadError::IncompatibleIndex(incompatibility)) => {
// For the moment, this should never happen: `meta.json`
// does not have any footer, so we cannot detect incompatibility.
Err(crate::TantivyError::IncompatibleIndex(incompatibility))
}
}
}
Err(err) => Err(err),
}
}
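
The `InternalError` arm above acts as a "not implemented" signal: the wrapped directory gets first refusal, and the `ManagedDirectory` only falls back to its own `managed.json` bookkeeping when the inner directory declines. A minimal standalone model of that convention, with invented names:

use std::collections::HashSet;

#[derive(Debug)]
#[allow(dead_code)]
enum ToyError {
    NotImplemented, // plays the role of TantivyError::InternalError above
    Io(String),
}

// A directory that does not override the hook reports NotImplemented.
fn inner_list_managed_files() -> Result<HashSet<String>, ToyError> {
    Err(ToyError::NotImplemented)
}

fn list_managed_files() -> Result<HashSet<String>, ToyError> {
    match inner_list_managed_files() {
        Ok(files) => Ok(files),
        // Fall back to the wrapper's own source of truth: the managed.json list.
        Err(ToyError::NotImplemented) => Ok(HashSet::from(["meta.json".to_string()])),
        Err(err) => Err(err),
    }
}

fn main() {
    assert!(list_managed_files().unwrap().contains("meta.json"));
}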
@@ -110,9 +116,26 @@ impl ManagedDirectory {
&mut self,
get_living_files: L,
) -> crate::Result<GarbageCollectionResult> {
info!("Garbage collect");
if !self.supports_garbage_collection() {
// the underlying directory does not support garbage collection.
return Ok(GarbageCollectionResult {
deleted_files: vec![],
failed_to_delete_files: vec![],
});
}
let mut files_to_delete = vec![];
// We're about to do an atomic write to managed.json, lock it down
let _lock = self.acquire_lock(&MANAGED_LOCK)?;
let managed_paths = match self.directory.list_managed_files() {
Ok(managed_paths) => managed_paths,
Err(crate::TantivyError::InternalError(_)) => {
// The wrapped directory does not override list_managed_files,
// so fall back to reading the managed.json file.
self.list_managed_files()?
}
Err(err) => return Err(err),
};
// It is crucial to get the living files after acquiring the
// read lock of meta information. That way, we
// avoid the following scenario.
@@ -124,11 +147,6 @@ impl ManagedDirectory {
//
// releasing the lock as .delete() will use it too.
{
let meta_informations_rlock = self
.meta_informations
.read()
.expect("Managed directory rlock poisoned in garbage collect.");
// The point of this second "file" lock is to enforce the following scenario
// 1) process B tries to load a new set of searcher.
// The list of segments is loaded
@@ -138,7 +156,7 @@ impl ManagedDirectory {
match self.acquire_lock(&META_LOCK) {
Ok(_meta_lock) => {
let living_files = get_living_files();
for managed_path in &meta_informations_rlock.managed_paths {
for managed_path in &managed_paths {
if !living_files.contains(managed_path) {
files_to_delete.push(managed_path.clone());
}
@@ -181,16 +199,18 @@ impl ManagedDirectory {
if !deleted_files.is_empty() {
// update the list of managed files by removing
// the file that were removed.
let mut meta_informations_wlock = self
.meta_informations
.write()
.expect("Managed directory wlock poisoned (2).");
let managed_paths_write = &mut meta_informations_wlock.managed_paths;
let mut managed_paths_write = managed_paths;
for delete_file in &deleted_files {
managed_paths_write.remove(delete_file);
}
self.directory.sync_directory()?;
save_managed_paths(self.directory.as_mut(), &meta_informations_wlock)?;
if let Err(crate::TantivyError::InternalError(_)) = self
.directory
.register_files_as_managed(managed_paths_write.clone().into_iter().collect(), true)
{
save_managed_paths(self.directory.as_mut(), &managed_paths_write)?;
}
}
Ok(GarbageCollectionResult {
@@ -215,27 +235,39 @@ impl ManagedDirectory {
if !is_managed(filepath) {
return Ok(());
}
let mut meta_wlock = self
.meta_informations
.write()
.expect("Managed file lock poisoned");
let has_changed = meta_wlock.managed_paths.insert(filepath.to_owned());
if !has_changed {
return Ok(());
}
save_managed_paths(self.directory.as_ref(), &meta_wlock)?;
// This is not the first file we add.
// Therefore, we are sure that `.managed.json` has been already
// properly created and we do not need to sync its parent directory.
//
// (It might seem like a nicer solution to create the managed_json on the
// creation of the ManagedDirectory instance but it would actually
// prevent the use of read-only directories..)
let managed_file_definitely_already_exists = meta_wlock.managed_paths.len() > 1;
if managed_file_definitely_already_exists {
return Ok(());
// We're about to do an atomic write to managed.json, lock it down
let _lock = self
.acquire_lock(&MANAGED_LOCK)
.expect("must be able to acquire lock for managed.json");
if let Err(crate::TantivyError::InternalError(_)) = self
.directory
.register_files_as_managed(vec![filepath.to_owned()], false)
{
let mut managed_paths = self
.list_managed_files()
.expect("reading managed files should not fail");
let has_changed = managed_paths.insert(filepath.to_owned());
if !has_changed {
return Ok(());
}
save_managed_paths(self.directory.as_ref(), &managed_paths)?;
// This is not the first file we add.
// Therefore, we are sure that `.managed.json` has been already
// properly created and we do not need to sync its parent directory.
//
// (It might seem like a nicer solution to create the managed_json on the
// creation of the ManagedDirectory instance but it would actually
// prevent the use of read-only directories..)
let managed_file_definitely_already_exists = managed_paths.len() > 1;
if managed_file_definitely_already_exists {
return Ok(());
}
}
self.directory.sync_directory()?;
Ok(())
}
@@ -255,17 +287,6 @@ impl ManagedDirectory {
let crc = hasher.finalize();
Ok(footer.crc() == crc)
}
/// List all managed files
pub fn list_managed_files(&self) -> HashSet<PathBuf> {
let managed_paths = self
.meta_informations
.read()
.expect("Managed directory rlock poisoned in list damaged.")
.managed_paths
.clone();
managed_paths
}
}
impl Directory for ManagedDirectory {
@@ -276,22 +297,32 @@ impl Directory for ManagedDirectory {
fn open_read(&self, path: &Path) -> result::Result<FileSlice, OpenReadError> {
let file_slice = self.directory.open_read(path)?;
let (footer, reader) = Footer::extract_footer(file_slice)
.map_err(|io_error| OpenReadError::wrap_io_error(io_error, path.to_path_buf()))?;
footer.is_compatible()?;
debug_assert!(
{
use common::HasLen;
file_slice.len() >= FOOTER_LEN
},
"{} is too short",
path.display()
);
let (reader, _) = file_slice.split_from_end(FOOTER_LEN);
// NB: We do not read/validate the footer here -- we blindly skip it entirely
Ok(reader)
}
fn open_write(&self, path: &Path) -> result::Result<WritePtr, OpenWriteError> {
fn open_write_inner(
&self,
path: &Path,
) -> result::Result<Box<dyn TerminatingWrite>, OpenWriteError> {
self.register_file_as_managed(path)
.map_err(|io_error| OpenWriteError::wrap_io_error(io_error, path.to_path_buf()))?;
Ok(io::BufWriter::new(Box::new(FooterProxy::new(
Ok(Box::new(FooterProxy::new(
self.directory
.open_write(path)?
.into_inner()
.map_err(|_| ())
.expect("buffer should be empty"),
))))
)))
}
fn atomic_write(&self, path: &Path, data: &[u8]) -> io::Result<()> {
@@ -323,13 +354,45 @@ impl Directory for ManagedDirectory {
self.directory.sync_directory()?;
Ok(())
}
fn save_metas(
&self,
metas: &IndexMeta,
previous_metas: &IndexMeta,
payload: &mut (dyn Any + '_),
) -> crate::Result<()> {
self.directory.save_metas(metas, previous_metas, payload)
}
fn load_metas(&self, inventory: &SegmentMetaInventory) -> crate::Result<IndexMeta> {
self.directory.load_metas(inventory)
}
fn supports_garbage_collection(&self) -> bool {
self.directory.supports_garbage_collection()
}
fn panic_handler(&self) -> Option<DirectoryPanicHandler> {
self.directory.panic_handler()
}
fn wants_cancel(&self) -> bool {
self.directory.wants_cancel()
}
fn log(&self, message: &str) {
self.directory.log(message);
}
fn bufwriter_capacity(&self) -> usize {
self.directory.bufwriter_capacity()
}
}
impl Clone for ManagedDirectory {
fn clone(&self) -> ManagedDirectory {
ManagedDirectory {
directory: self.directory.box_clone(),
meta_informations: Arc::clone(&self.meta_informations),
}
}
}
@@ -337,7 +400,6 @@ impl Clone for ManagedDirectory {
#[cfg(feature = "mmap")]
#[cfg(test)]
mod tests_mmap_specific {
use std::collections::HashSet;
use std::io::Write;
use std::path::{Path, PathBuf};

View File

@@ -1,7 +1,7 @@
use std::collections::HashMap;
use std::fmt;
use std::fs::{self, File, OpenOptions};
use std::io::{self, BufWriter, Read, Write};
use std::io::{self, Read, Write};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock, Weak};
@@ -21,7 +21,7 @@ use crate::directory::error::{
use crate::directory::file_watcher::FileWatcher;
use crate::directory::{
AntiCallToken, Directory, DirectoryLock, FileHandle, Lock, OwnedBytes, TerminatingWrite,
WatchCallback, WatchHandle, WritePtr,
WatchCallback, WatchHandle,
};
pub type ArcBytes = Arc<dyn Deref<Target = [u8]> + Send + Sync + 'static>;
@@ -413,8 +413,8 @@ impl Directory for MmapDirectory {
.map_err(|io_err| OpenReadError::wrap_io_error(io_err, path.to_path_buf()))
}
fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
debug!("Open Write {path:?}");
fn open_write_inner(&self, path: &Path) -> Result<Box<dyn TerminatingWrite>, OpenWriteError> {
debug!("Open Write {:?}", path);
let full_path = self.resolve_path(path);
let open_res = OpenOptions::new()
@@ -443,7 +443,7 @@ impl Directory for MmapDirectory {
// sync_directory() is called.
let writer = SafeFileWriter::new(file);
Ok(BufWriter::new(Box::new(writer)))
Ok(Box::new(writer))
}
fn atomic_read(&self, path: &Path) -> Result<Vec<u8>, OpenReadError> {

View File

@@ -19,12 +19,13 @@ mod composite_file;
use std::io::BufWriter;
use std::path::PathBuf;
pub use common::buffered_file_slice::BufferedFileSlice;
pub use common::file_slice::{FileHandle, FileSlice};
pub use common::{AntiCallToken, OwnedBytes, TerminatingWrite};
pub(crate) use self::composite_file::{CompositeFile, CompositeWrite};
pub use self::directory::{Directory, DirectoryClone, DirectoryLock};
pub use self::directory_lock::{Lock, INDEX_WRITER_LOCK, META_LOCK};
pub use self::directory::{Directory, DirectoryClone, DirectoryLock, DirectoryPanicHandler};
pub use self::directory_lock::{Lock, INDEX_WRITER_LOCK, MANAGED_LOCK, META_LOCK};
pub use self::ram_directory::RamDirectory;
pub use self::watch_event_router::{WatchCallback, WatchCallbackList, WatchHandle};

View File

@@ -1,5 +1,5 @@
use std::collections::HashMap;
use std::io::{self, BufWriter, Cursor, Write};
use std::io::{self, Cursor, Write};
use std::path::{Path, PathBuf};
use std::sync::{Arc, RwLock};
use std::{fmt, result};
@@ -11,7 +11,7 @@ use crate::core::META_FILEPATH;
use crate::directory::error::{DeleteError, OpenReadError, OpenWriteError};
use crate::directory::{
AntiCallToken, Directory, FileSlice, TerminatingWrite, WatchCallback, WatchCallbackList,
WatchHandle, WritePtr,
WatchHandle,
};
/// Writer associated with the [`RamDirectory`].
@@ -197,7 +197,7 @@ impl Directory for RamDirectory {
.exists(path))
}
fn open_write(&self, path: &Path) -> Result<WritePtr, OpenWriteError> {
fn open_write_inner(&self, path: &Path) -> Result<Box<dyn TerminatingWrite>, OpenWriteError> {
let mut fs = self.fs.write().unwrap();
let path_buf = PathBuf::from(path);
let vec_writer = VecWriter::new(path_buf.clone(), self.clone());
@@ -206,7 +206,7 @@ impl Directory for RamDirectory {
if exists {
Err(OpenWriteError::FileAlreadyExists(path_buf))
} else {
Ok(BufWriter::new(Box::new(vec_writer)))
Ok(Box::new(vec_writer))
}
}

View File

@@ -110,6 +110,11 @@ pub enum TantivyError {
#[error("Deserialize error: {0}")]
/// An error occurred while attempting to deserialize a document.
DeserializeError(DeserializeError),
/// The user requested the current operation be cancelled
#[error("User requested cancel")]
Cancelled,
#[error("Segment Merging failed: {0:#?}")]
MergeErrors(Vec<TantivyError>),
}
impl From<io::Error> for TantivyError {

View File

@@ -79,7 +79,7 @@ mod tests {
use std::ops::{Range, RangeInclusive};
use std::path::Path;
use columnar::StrColumn;
use columnar::{StrColumn, ValueRange};
use common::{ByteCount, DateTimePrecision, HasLen, TerminatingWrite};
use once_cell::sync::Lazy;
use rand::prelude::SliceRandom;
@@ -395,7 +395,7 @@ mod tests {
.unwrap()
.first_or_default_col(0);
for a in 0..n {
assert_eq!(col.get_val(a as u32), permutation[a]);
assert_eq!(col.get_val(a as u32), permutation[a], "for doc {a}");
}
}
@@ -944,7 +944,7 @@ mod tests {
let test_range = |range: RangeInclusive<u64>| {
let expected_count = numbers.iter().filter(|num| range.contains(*num)).count();
let mut vec = vec![];
field.get_row_ids_for_value_range(range, 0..u32::MAX, &mut vec);
field.get_row_ids_for_value_range(ValueRange::Inclusive(range), 0..u32::MAX, &mut vec);
assert_eq!(vec.len(), expected_count);
};
test_range(50..=50);
@@ -1022,7 +1022,7 @@ mod tests {
let test_range = |range: RangeInclusive<u64>| {
let expected_count = numbers.iter().filter(|num| range.contains(*num)).count();
let mut vec = vec![];
field.get_row_ids_for_value_range(range, 0..u32::MAX, &mut vec);
field.get_row_ids_for_value_range(ValueRange::Inclusive(range), 0..u32::MAX, &mut vec);
assert_eq!(vec.len(), expected_count);
};
let test_range_variant = |start, stop| {

View File

@@ -30,22 +30,30 @@ fn load_metas(
directory: &dyn Directory,
inventory: &SegmentMetaInventory,
) -> crate::Result<IndexMeta> {
let meta_data = directory.atomic_read(&META_FILEPATH)?;
let meta_string = String::from_utf8(meta_data).map_err(|_utf8_err| {
error!("Meta data is not valid utf8.");
DataCorruption::new(
META_FILEPATH.to_path_buf(),
"Meta file does not contain valid utf8 file.".to_string(),
)
})?;
IndexMeta::deserialize(&meta_string, inventory)
.map_err(|e| {
DataCorruption::new(
META_FILEPATH.to_path_buf(),
format!("Meta file cannot be deserialized. {e:?}. Content: {meta_string:?}"),
)
})
.map_err(From::from)
match directory.load_metas(inventory) {
Ok(metas) => Ok(metas),
Err(crate::TantivyError::InternalError(_)) => {
let meta_data = directory.atomic_read(&META_FILEPATH)?;
let meta_string = String::from_utf8(meta_data).map_err(|_utf8_err| {
error!("Meta data is not valid utf8.");
DataCorruption::new(
META_FILEPATH.to_path_buf(),
"Meta file does not contain valid utf8 file.".to_string(),
)
})?;
IndexMeta::deserialize(&meta_string, inventory)
.map_err(|e| {
DataCorruption::new(
META_FILEPATH.to_path_buf(),
format!(
"Meta file cannot be deserialized. {e:?}. Content: {meta_string:?}"
),
)
})
.map_err(From::from)
}
Err(err) => Err(err),
}
}
/// Save the index meta file.
@@ -60,16 +68,14 @@ fn save_new_metas(
index_settings: IndexSettings,
directory: &dyn Directory,
) -> crate::Result<()> {
save_metas(
&IndexMeta {
index_settings,
segments: Vec::new(),
schema,
opstamp: 0u64,
payload: None,
},
directory,
)?;
let empty_metas = IndexMeta {
index_settings,
segments: Vec::new(),
schema,
opstamp: 0u64,
payload: None,
};
save_metas(&empty_metas, &empty_metas, directory)?;
directory.sync_directory()?;
Ok(())
}
@@ -582,7 +588,7 @@ impl Index {
num_threads: usize,
overall_memory_budget_in_bytes: usize,
) -> crate::Result<IndexWriter<D>> {
let memory_arena_in_bytes_per_thread = overall_memory_budget_in_bytes / num_threads;
let memory_arena_in_bytes_per_thread = overall_memory_budget_in_bytes / num_threads.max(1);
let options = IndexWriterOptions::builder()
.num_worker_threads(num_threads)
.memory_budget_per_thread(memory_arena_in_bytes_per_thread)
@@ -655,9 +661,11 @@ impl Index {
/// Creates a new segment.
pub fn new_segment(&self) -> Segment {
let segment_meta = self
.inventory
.new_segment_meta(SegmentId::generate_random(), 0);
self.new_segment_with_id(SegmentId::generate_random())
}
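/// Creates a new segment with the provided `SegmentId`.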
pub fn new_segment_with_id(&self, segment_id: SegmentId) -> Segment {
let segment_meta = self.inventory.new_segment_meta(segment_id, 0);
self.segment(segment_meta)
}
@@ -688,7 +696,7 @@ impl Index {
/// Returns the set of corrupted files
pub fn validate_checksum(&self) -> crate::Result<HashSet<PathBuf>> {
let managed_files = self.directory.list_managed_files();
let managed_files = self.directory.list_managed_files()?;
let active_segments_files: HashSet<PathBuf> = self
.searchable_segment_metas()?
.iter()

View File

@@ -1,5 +1,6 @@
use std::collections::HashSet;
use std::fmt;
use std::fmt::{Debug, Formatter};
use std::path::PathBuf;
use std::sync::atomic::AtomicBool;
use std::sync::Arc;
@@ -13,16 +14,24 @@ use crate::store::Compressor;
use crate::{Inventory, Opstamp, TrackedObject};
#[derive(Clone, Debug, Serialize, Deserialize)]
struct DeleteMeta {
num_deleted_docs: u32,
opstamp: Opstamp,
pub struct DeleteMeta {
pub num_deleted_docs: u32,
pub opstamp: Opstamp,
}
#[derive(Clone, Default)]
pub(crate) struct SegmentMetaInventory {
pub struct SegmentMetaInventory {
inventory: Inventory<InnerSegmentMeta>,
}
impl Debug for SegmentMetaInventory {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("SegmentMetaInventory")
.field("inventory", &self.inventory.list())
.finish()
}
}
impl SegmentMetaInventory {
/// Lists all living `SegmentMeta` object at the time of the call.
pub fn all(&self) -> Vec<SegmentMeta> {
@@ -50,7 +59,7 @@ impl SegmentMetaInventory {
/// how many are deleted, etc.
#[derive(Clone)]
pub struct SegmentMeta {
tracked: TrackedObject<InnerSegmentMeta>,
pub tracked: TrackedObject<InnerSegmentMeta>,
}
impl fmt::Debug for SegmentMeta {
@@ -210,15 +219,15 @@ impl SegmentMeta {
}
#[derive(Debug, Clone, Serialize, Deserialize)]
struct InnerSegmentMeta {
segment_id: SegmentId,
max_doc: u32,
deletes: Option<DeleteMeta>,
pub struct InnerSegmentMeta {
pub segment_id: SegmentId,
pub max_doc: u32,
pub deletes: Option<DeleteMeta>,
/// If you want to prevent the SegmentComponent::TempStore file from being covered by
/// garbage collection and deleted, set this to true. This is used during merge.
#[serde(skip)]
#[serde(default = "default_temp_store")]
pub(crate) include_temp_doc_store: Arc<AtomicBool>,
pub include_temp_doc_store: Arc<AtomicBool>,
}
fn default_temp_store() -> Arc<AtomicBool> {
Arc::new(AtomicBool::new(false))

View File

@@ -1,5 +1,6 @@
use std::io;
use common::file_slice::DeferredFileSlice;
use common::json_path_writer::JSON_END_OF_PATH;
use common::{BinarySerializable, ByteCount};
#[cfg(feature = "quickwit")]
@@ -30,7 +31,7 @@ use crate::termdict::TermDictionary;
pub struct InvertedIndexReader {
termdict: TermDictionary,
postings_file_slice: FileSlice,
positions_file_slice: FileSlice,
positions_file_slice: DeferredFileSlice,
record_option: IndexRecordOption,
total_num_tokens: u64,
}
@@ -66,7 +67,7 @@ impl InvertedIndexReader {
pub(crate) fn new(
termdict: TermDictionary,
postings_file_slice: FileSlice,
positions_file_slice: FileSlice,
positions_file_slice: DeferredFileSlice,
record_option: IndexRecordOption,
) -> io::Result<InvertedIndexReader> {
let (total_num_tokens_slice, postings_body) = postings_file_slice.split(8);
@@ -86,7 +87,7 @@ impl InvertedIndexReader {
InvertedIndexReader {
termdict: TermDictionary::empty(),
postings_file_slice: FileSlice::empty(),
positions_file_slice: FileSlice::empty(),
positions_file_slice: DeferredFileSlice::new(|| Ok(FileSlice::empty())),
record_option,
total_num_tokens: 0u64,
}
@@ -211,7 +212,7 @@ impl InvertedIndexReader {
.slice(term_info.postings_range.clone());
BlockSegmentPostings::open(
term_info.doc_freq,
postings_data,
postings_data.read_bytes()?,
self.record_option,
requested_option,
)
@@ -233,6 +234,7 @@ impl InvertedIndexReader {
if option.has_positions() {
let positions_data = self
.positions_file_slice
.open()?
.read_bytes_slice(term_info.positions_range.clone())?;
let position_reader = PositionReader::open(positions_data)?;
Some(position_reader)
@@ -342,6 +344,7 @@ impl InvertedIndexReader {
if with_positions {
let positions = self
.positions_file_slice
.open()?
.read_bytes_slice_async(term_info.positions_range.clone());
futures_util::future::try_join(postings, positions).await?;
} else {
@@ -384,6 +387,7 @@ impl InvertedIndexReader {
if with_positions {
let positions = self
.positions_file_slice
.open()?
.read_bytes_slice_async(positions_range);
futures_util::future::try_join(postings, positions).await?;
} else {
@@ -478,7 +482,7 @@ impl InvertedIndexReader {
pub async fn warm_postings_full(&self, with_positions: bool) -> io::Result<()> {
self.postings_file_slice.read_bytes_async().await?;
if with_positions {
self.positions_file_slice.read_bytes_async().await?;
self.positions_file_slice.open()?.read_bytes_async().await?;
}
Ok(())
}
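
The positions file is now opened lazily: the reader holds a `DeferredFileSlice` built from a closure, and the closure only runs the first time a positions-aware code path calls `open()`, so queries that never touch positions never pay for opening the `.pos` data. A tiny in-crate sketch of constructing such a placeholder, using only the constructor shown above (the helper name is hypothetical):

use common::file_slice::DeferredFileSlice;

use crate::directory::FileSlice;

// Placeholder positions slice: the closure is only evaluated on the first open().
fn deferred_empty_positions() -> DeferredFileSlice {
    DeferredFileSlice::new(|| Ok(FileSlice::empty()))
}

In `SegmentReader::inverted_index` the same constructor is fed a closure that captures the directory and the `.pos` path, which is what defers opening the positions `CompositeFile`.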

View File

@@ -0,0 +1,106 @@
use std::io;
use crate::directory::{BufferedFileSlice, FileSlice};
use crate::positions::PositionReader;
use crate::postings::{BlockSegmentPostings, SegmentPostings, TermInfo};
use crate::schema::IndexRecordOption;
use crate::termdict::TermDictionary;
/// The inverted index reader is in charge of accessing
/// the inverted index associated with a specific field.
///
/// This is optimized for merging in that it uses a buffered reader
/// for the postings and positions files.
/// This eliminates most disk I/O to these files during merging, without
/// reading the entire file into memory at once.
///
/// NB: This is a copy/paste from [`InvertedIndexReader`] and trimmed
/// down to only include the methods required by the merge process.
pub(crate) struct MergeOptimizedInvertedIndexReader {
termdict: TermDictionary,
postings_reader: BufferedFileSlice,
positions_reader: BufferedFileSlice,
record_option: IndexRecordOption,
}
impl MergeOptimizedInvertedIndexReader {
pub(crate) fn new(
termdict: TermDictionary,
postings_file_slice: FileSlice,
positions_file_slice: FileSlice,
record_option: IndexRecordOption,
) -> io::Result<MergeOptimizedInvertedIndexReader> {
let (_, postings_body) = postings_file_slice.split(8);
Ok(MergeOptimizedInvertedIndexReader {
termdict,
postings_reader: BufferedFileSlice::new_with_default_buffer_size(postings_body),
positions_reader: BufferedFileSlice::new_with_default_buffer_size(positions_file_slice),
record_option,
})
}
/// Creates an empty `MergeOptimizedInvertedIndexReader` object, which
/// contains no terms at all.
pub fn empty(record_option: IndexRecordOption) -> MergeOptimizedInvertedIndexReader {
MergeOptimizedInvertedIndexReader {
termdict: TermDictionary::empty(),
postings_reader: BufferedFileSlice::empty(),
positions_reader: BufferedFileSlice::empty(),
record_option,
}
}
/// Return the term dictionary datastructure.
pub fn terms(&self) -> &TermDictionary {
&self.termdict
}
/// Returns a block postings given a `term_info`.
/// This method is for advanced usage only.
///
/// Most users should prefer using [`InvertedIndexReader::read_postings()`] on the regular
/// reader instead.
pub fn read_block_postings_from_terminfo(
&self,
term_info: &TermInfo,
requested_option: IndexRecordOption,
) -> io::Result<BlockSegmentPostings> {
let postings_data = self.postings_reader.get_bytes(
term_info.postings_range.start as u64..term_info.postings_range.end as u64,
)?;
BlockSegmentPostings::open(
term_info.doc_freq,
postings_data,
self.record_option,
requested_option,
)
}
/// Returns a posting object given a `term_info`.
/// This method is for advanced usage only.
///
/// Most users should prefer using [`InvertedIndexReader::read_postings()`] on the regular
/// reader instead.
pub fn read_postings_from_terminfo(
&self,
term_info: &TermInfo,
option: IndexRecordOption,
) -> io::Result<SegmentPostings> {
let option = option.downgrade(self.record_option);
let block_postings = self.read_block_postings_from_terminfo(term_info, option)?;
let position_reader = {
if option.has_positions() {
let positions_data = self.positions_reader.get_bytes(
term_info.positions_range.start as u64..term_info.positions_range.end as u64,
)?;
let position_reader = PositionReader::open(positions_data)?;
Some(position_reader)
} else {
None
}
};
Ok(SegmentPostings::from_block_postings(
block_postings,
position_reader,
))
}
}
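
What makes this reader merge-friendly is the buffering noted in its header comment: both files sit behind a `BufferedFileSlice`, so the near-sequential, term-by-term reads a merge performs are mostly served from an in-memory buffer instead of going back to the directory for every term. A small in-crate sketch of that access pattern (made-up ranges; assumes the slice holds at least 256 bytes):

use crate::directory::{BufferedFileSlice, FileSlice};

// Reads two adjacent postings ranges through one buffered wrapper; during a
// merge, consecutive terms produce exactly this kind of near-sequential access.
fn read_adjacent_ranges(postings: FileSlice) -> std::io::Result<()> {
    let reader = BufferedFileSlice::new_with_default_buffer_size(postings);
    let _first = reader.get_bytes(0..128)?;
    let _second = reader.get_bytes(128..256)?; // typically served from the same buffer
    Ok(())
}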

View File

@@ -5,14 +5,17 @@
mod index;
mod index_meta;
mod inverted_index_reader;
pub mod merge_optimized_inverted_index_reader;
mod segment;
mod segment_component;
mod segment_id;
mod segment_reader;
pub use self::index::{Index, IndexBuilder};
pub(crate) use self::index_meta::SegmentMetaInventory;
pub use self::index_meta::{IndexMeta, IndexSettings, Order, SegmentMeta};
pub use self::index_meta::{
DeleteMeta, IndexMeta, IndexSettings, InnerSegmentMeta, Order, SegmentMeta,
SegmentMetaInventory,
};
pub use self::inverted_index_reader::InvertedIndexReader;
pub use self::segment::Segment;
pub use self::segment_component::SegmentComponent;

View File

@@ -46,7 +46,7 @@ impl Segment {
///
/// This method is only used when updating `max_doc` from 0
/// as we finalize a fresh new segment.
pub(crate) fn with_max_doc(self, max_doc: u32) -> Segment {
pub fn with_max_doc(self, max_doc: u32) -> Segment {
Segment {
index: self.index,
meta: self.meta.with_max_doc(max_doc),

View File

@@ -1,3 +1,4 @@
use std::fmt::{Display, Formatter};
use std::slice;
/// Enum describing each component of a tantivy segment.
@@ -5,7 +6,7 @@ use std::slice;
/// Each component is stored in its own file,
/// using the pattern `segment_uuid`.`component_extension`,
/// except the delete component, which takes a `segment_uuid`.`delete_opstamp`.`component_extension`
#[derive(Copy, Clone, Eq, PartialEq)]
#[derive(Debug, Copy, Clone, Hash, Ord, PartialOrd, Eq, PartialEq)]
pub enum SegmentComponent {
/// Postings (or inverted list). Sorted lists of document ids, associated with terms
Postings,
@@ -30,6 +31,39 @@ pub enum SegmentComponent {
Delete,
}
impl TryFrom<&str> for SegmentComponent {
type Error = String;
fn try_from(value: &str) -> Result<Self, Self::Error> {
match value {
"idx" => Ok(SegmentComponent::Postings),
"pos" => Ok(SegmentComponent::Positions),
"term" => Ok(SegmentComponent::Terms),
"store" => Ok(SegmentComponent::Store),
"temp" => Ok(SegmentComponent::TempStore),
"fast" => Ok(SegmentComponent::FastFields),
"fieldnorm" => Ok(SegmentComponent::FieldNorms),
"del" => Ok(SegmentComponent::Delete),
other => Err(other.to_string()),
}
}
}
impl Display for SegmentComponent {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
SegmentComponent::Postings => write!(f, "idx"),
SegmentComponent::Positions => write!(f, "pos"),
SegmentComponent::FastFields => write!(f, "fast"),
SegmentComponent::FieldNorms => write!(f, "fieldnorm"),
SegmentComponent::Terms => write!(f, "term"),
SegmentComponent::Store => write!(f, "store"),
SegmentComponent::TempStore => write!(f, "temp"),
SegmentComponent::Delete => write!(f, "del"),
}
}
}
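
A quick way to see how the two new impls relate (a hypothetical in-crate test, not part of this patch): `Display` renders each component as its file extension and `TryFrom<&str>` parses the extension back, so the pair should round-trip for every variant.

#[cfg(test)]
mod component_round_trip_sketch {
    use std::convert::TryFrom;

    use super::SegmentComponent;

    #[test]
    fn display_and_try_from_are_inverses() {
        for component in SegmentComponent::iterator() {
            let ext = component.to_string(); // "idx", "pos", "term", ...
            let parsed = SegmentComponent::try_from(ext.as_str())
                .expect("every rendered extension should parse back");
            assert_eq!(parsed, *component);
        }
    }
}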
impl SegmentComponent {
/// Iterates through the components.
pub fn iterator() -> slice::Iter<'static, SegmentComponent> {

View File

@@ -21,6 +21,14 @@ use uuid::Uuid;
#[derive(Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct SegmentId(Uuid);
impl Default for SegmentId {
fn default() -> Self {
Self(Uuid::from_bytes(uuid::Bytes::from([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
])))
}
}
#[cfg(test)]
static AUTO_INC_COUNTER: Lazy<atomic::AtomicUsize> = Lazy::new(atomic::AtomicUsize::default);
@@ -68,12 +76,26 @@ impl SegmentId {
self.0.as_simple().to_string()
}
/// Returns the bytes of a segment uuid
pub fn uuid_bytes(&self) -> &[u8; 16] {
self.0.as_bytes()
}
/// Returns only the first four bytes of a segment uuid
pub fn short_uuid_bytes(&self) -> &[u8] {
&self.0.as_bytes()[0..4]
}
/// Build a `SegmentId` string from the full uuid string.
///
/// E.g. "a5c4dfcbdfe645089129e308e26d5523"
pub fn from_uuid_string(uuid_string: &str) -> Result<SegmentId, SegmentIdParseError> {
FromStr::from_str(uuid_string)
}
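/// Builds a `SegmentId` from the raw 16 bytes of its uuid.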
pub fn from_bytes(uuid_bytes: [u8; 16]) -> SegmentId {
SegmentId(Uuid::from_bytes(uuid::Bytes::from(uuid_bytes)))
}
}
/// Error type used when parsing a `SegmentId` from a string fails.

View File

@@ -1,22 +1,26 @@
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::path::PathBuf;
use std::sync::{Arc, OnceLock, RwLock};
use std::{fmt, io};
use common::file_slice::DeferredFileSlice;
use common::{ByteCount, HasLen};
use fnv::FnvHashMap;
use itertools::Itertools;
use crate::directory::error::OpenReadError;
use crate::directory::{CompositeFile, FileSlice};
use crate::error::DataCorruption;
use crate::fastfield::{intersect_alive_bitsets, AliveBitSet, FacetReader, FastFieldReaders};
use crate::fieldnorm::{FieldNormReader, FieldNormReaders};
use crate::index::merge_optimized_inverted_index_reader::MergeOptimizedInvertedIndexReader;
use crate::index::{InvertedIndexReader, Segment, SegmentComponent, SegmentId};
use crate::json_utils::json_path_sep_to_dot;
use crate::schema::{Field, IndexRecordOption, Schema, Type};
use crate::space_usage::SegmentSpaceUsage;
use crate::store::StoreReader;
use crate::termdict::TermDictionary;
use crate::{DocId, Opstamp};
use crate::{Directory, DocId, Index, Opstamp};
/// Entry point to access all of the datastructures of the `Segment`
///
@@ -30,22 +34,25 @@ use crate::{DocId, Opstamp};
/// as close to all of the memory data is mmapped.
#[derive(Clone)]
pub struct SegmentReader {
inv_idx_reader_cache: Arc<RwLock<HashMap<Field, Arc<InvertedIndexReader>>>>,
index: Index,
segment_id: SegmentId,
custom_alive_bitset: Option<AliveBitSet>,
inv_idx_reader_cache: Arc<RwLock<HashMap<Field, Arc<InvertedIndexReader>>>>,
delete_opstamp: Option<Opstamp>,
max_doc: DocId,
num_docs: DocId,
num_docs: Arc<OnceLock<DocId>>,
termdict_composite: CompositeFile,
postings_composite: CompositeFile,
positions_composite: CompositeFile,
fast_fields_readers: FastFieldReaders,
fieldnorm_readers: FieldNormReaders,
termdict_composite: Arc<OnceLock<CompositeFile>>,
postings_composite: Arc<OnceLock<CompositeFile>>,
positions_composite: Arc<OnceLock<CompositeFile>>,
fast_fields_readers: Arc<OnceLock<FastFieldReaders>>,
fieldnorm_readers: Arc<OnceLock<FieldNormReaders>>,
store_file: FileSlice,
alive_bitset_opt: Option<AliveBitSet>,
store_file: Arc<OnceLock<FileSlice>>,
has_deletes: bool,
alive_bitset_opt: Arc<OnceLock<Option<AliveBitSet>>>,
schema: Schema,
}
@@ -59,7 +66,12 @@ impl SegmentReader {
/// Returns the number of alive documents.
/// Deleted documents are not counted.
pub fn num_docs(&self) -> DocId {
self.num_docs
*self.num_docs.get_or_init(|| {
self.alive_bitset_opt()
.as_ref()
.map(|alive_bitset| alive_bitset.num_alive_docs() as u32)
.unwrap_or(self.max_doc)
})
}
/// Returns the schema of the index this segment belongs to.
@@ -70,7 +82,7 @@ impl SegmentReader {
/// Return the number of documents that have been
/// deleted in the segment.
pub fn num_deleted_docs(&self) -> DocId {
self.max_doc - self.num_docs
self.max_doc - self.num_docs()
}
/// Returns true if some of the documents of the segment have been deleted.
@@ -89,7 +101,7 @@ impl SegmentReader {
/// # Panics
/// May panic if the index is corrupted.
pub fn fast_fields(&self) -> &FastFieldReaders {
&self.fast_fields_readers
self.fast_fields_readers()
}
/// Accessor to the `FacetReader` associated with a given `Field`.
@@ -116,7 +128,7 @@ impl SegmentReader {
/// They are simply stored as a fast field, serialized in
/// the `.fieldnorm` file of the segment.
pub fn get_fieldnorms_reader(&self, field: Field) -> crate::Result<FieldNormReader> {
self.fieldnorm_readers.get_field(field)?.ok_or_else(|| {
self.fieldnorm_readers().get_field(field)?.ok_or_else(|| {
let field_name = self.schema.get_field_name(field);
let err_msg = format!(
"Field norm not found for field {field_name:?}. Was the field set to record norm \
@@ -128,7 +140,7 @@ impl SegmentReader {
#[doc(hidden)]
pub fn fieldnorms_readers(&self) -> &FieldNormReaders {
&self.fieldnorm_readers
self.fieldnorm_readers()
}
/// Accessor to the segment's [`StoreReader`](crate::store::StoreReader).
@@ -136,7 +148,7 @@ impl SegmentReader {
/// `cache_num_blocks` sets the number of decompressed blocks to be cached in an LRU.
/// The size of blocks is configurable; this should be reflected in the
pub fn get_store_reader(&self, cache_num_blocks: usize) -> io::Result<StoreReader> {
StoreReader::open(self.store_file.clone(), cache_num_blocks)
StoreReader::open(self.store_file().clone(), cache_num_blocks)
}
/// Open a new segment for reading.
@@ -149,61 +161,27 @@ impl SegmentReader {
segment: &Segment,
custom_bitset: Option<AliveBitSet>,
) -> crate::Result<SegmentReader> {
let termdict_file = segment.open_read(SegmentComponent::Terms)?;
let termdict_composite = CompositeFile::open(&termdict_file)?;
let store_file = segment.open_read(SegmentComponent::Store)?;
crate::fail_point!("SegmentReader::open#middle");
let postings_file = segment.open_read(SegmentComponent::Postings)?;
let postings_composite = CompositeFile::open(&postings_file)?;
let positions_composite = {
if let Ok(positions_file) = segment.open_read(SegmentComponent::Positions) {
CompositeFile::open(&positions_file)?
} else {
CompositeFile::empty()
}
};
let schema = segment.schema();
let fast_fields_data = segment.open_read(SegmentComponent::FastFields)?;
let fast_fields_readers = FastFieldReaders::open(fast_fields_data, schema.clone())?;
let fieldnorm_data = segment.open_read(SegmentComponent::FieldNorms)?;
let fieldnorm_readers = FieldNormReaders::open(fieldnorm_data)?;
let original_bitset = if segment.meta().has_deletes() {
let alive_doc_file_slice = segment.open_read(SegmentComponent::Delete)?;
let alive_doc_data = alive_doc_file_slice.read_bytes()?;
Some(AliveBitSet::open(alive_doc_data))
} else {
None
};
let alive_bitset_opt = intersect_alive_bitset(original_bitset, custom_bitset);
let max_doc = segment.meta().max_doc();
let num_docs = alive_bitset_opt
.as_ref()
.map(|alive_bitset| alive_bitset.num_alive_docs() as u32)
.unwrap_or(max_doc);
Ok(SegmentReader {
inv_idx_reader_cache: Default::default(),
num_docs,
max_doc,
termdict_composite,
postings_composite,
fast_fields_readers,
fieldnorm_readers,
index: segment.index().clone(),
segment_id: segment.id(),
custom_alive_bitset: custom_bitset,
inv_idx_reader_cache: Default::default(),
delete_opstamp: segment.meta().delete_opstamp(),
store_file,
alive_bitset_opt,
positions_composite,
schema,
max_doc: segment.meta().max_doc(),
num_docs: Default::default(),
termdict_composite: Default::default(),
postings_composite: Default::default(),
positions_composite: Default::default(),
fast_fields_readers: Default::default(),
fieldnorm_readers: Default::default(),
store_file: Default::default(),
has_deletes: segment.meta().has_deletes(),
alive_bitset_opt: Default::default(),
schema: segment.schema(),
})
}
@@ -236,7 +214,7 @@ impl SegmentReader {
warn!("Field {:?} does not seem indexed.", field_entry.name());
}
let postings_file_opt = self.postings_composite.open_read(field);
let postings_file_opt = self.postings_composite().open_read(field);
if postings_file_opt.is_none() || record_option_opt.is_none() {
// no documents in the segment contained this field.
@@ -251,7 +229,7 @@ impl SegmentReader {
let postings_file = postings_file_opt.unwrap();
let termdict_file: FileSlice =
self.termdict_composite.open_read(field).ok_or_else(|| {
self.termdict_composite().open_read(field).ok_or_else(|| {
DataCorruption::comment_only(format!(
"Failed to open field {:?}'s term dictionary in the composite file. Has the \
schema been modified?",
@@ -259,19 +237,38 @@ impl SegmentReader {
))
})?;
let positions_file = self.positions_composite.open_read(field).ok_or_else(|| {
let error_msg = format!(
"Failed to open field {:?}'s positions in the composite file. Has the schema been \
modified?",
field_entry.name()
);
DataCorruption::comment_only(error_msg)
})?;
// not all queries require positions.
// we can defer opening the file until needed
let positions_file_opener = {
let path = self.relative_path(SegmentComponent::Positions);
let directory = self.index.directory().clone();
let field_entry = field_entry.clone();
move || {
let composite_file = if let Ok(positions_file) = &directory.open_read(&path) {
CompositeFile::open(&positions_file)
.expect("should be able to open positions composite component")
} else {
CompositeFile::empty()
};
composite_file.open_read(field).ok_or_else(|| {
let error_msg = format!(
"Failed to open field {:?}'s positions in the composite file. Has the \
schema been modified?",
field_entry.name()
);
io::Error::new(
io::ErrorKind::InvalidData,
format!("{:?}", DataCorruption::comment_only(error_msg)),
)
})
}
};
let inv_idx_reader = Arc::new(InvertedIndexReader::new(
TermDictionary::open(termdict_file)?,
postings_file,
positions_file,
DeferredFileSlice::new(positions_file_opener),
record_option,
)?);
@@ -285,6 +282,76 @@ impl SegmentReader {
Ok(inv_idx_reader)
}
/// Returns a field reader associated with the given field, optimized for
/// Tantivy's merge process.
///
/// If the field was not present in the index during indexing time,
/// the InvertedIndexReader is empty.
///
/// The field reader is in charge of iterating through the
/// term dictionary associated with a specific field,
/// and opening the posting list associated with any term.
///
/// If the field is not marked as indexed, a warning is logged and an empty
/// `MergeOptimizedInvertedIndexReader` is returned.
/// Similarly, if the field is marked as indexed but no term has been indexed for the given
/// index, an empty `MergeOptimizedInvertedIndexReader` is returned (but no warning is logged).
pub(crate) fn merge_optimized_inverted_index(
&self,
field: Field,
) -> crate::Result<Arc<MergeOptimizedInvertedIndexReader>> {
let field_entry = self.schema.get_field_entry(field);
let field_type = field_entry.field_type();
let record_option_opt = field_type.get_index_record_option();
if record_option_opt.is_none() {
warn!("Field {:?} does not seem indexed.", field_entry.name());
}
let postings_file_opt = self.postings_composite().open_read(field);
if postings_file_opt.is_none() || record_option_opt.is_none() {
// no documents in the segment contained this field.
// As a result, no data is associated with the inverted index.
//
// Returns an empty inverted index.
let record_option = record_option_opt.unwrap_or(IndexRecordOption::Basic);
return Ok(Arc::new(MergeOptimizedInvertedIndexReader::empty(
record_option,
)));
}
let record_option = record_option_opt.unwrap();
let postings_file = postings_file_opt.unwrap();
let termdict_file: FileSlice =
self.termdict_composite().open_read(field).ok_or_else(|| {
DataCorruption::comment_only(format!(
"Failed to open field {:?}'s term dictionary in the composite file. Has the \
schema been modified?",
field_entry.name()
))
})?;
let positions_file = self.positions_composite().open_read(field).ok_or_else(|| {
let error_msg = format!(
"Failed to open field {:?}'s positions in the composite file. Has the schema been \
modified?",
field_entry.name()
);
DataCorruption::comment_only(error_msg)
})?;
let inv_idx_reader = Arc::new(MergeOptimizedInvertedIndexReader::new(
TermDictionary::open(termdict_file)?,
postings_file,
positions_file,
record_option,
)?);
Ok(inv_idx_reader)
}
/// Returns the list of fields that have been indexed in the segment.
/// The field list includes the field defined in the schema as well as the fields
/// that have been indexed as a part of a JSON field.
@@ -309,7 +376,8 @@ impl SegmentReader {
if is_json {
let term_dictionary_json_field_num_bytes: u64 = self
.termdict_composite
.open_read(field)
.get()
.and_then(|composite| composite.open_read(field))
.map(|file_slice| file_slice.len() as u64)
.unwrap_or(0u64);
let inv_index = self.inverted_index(field)?;
@@ -361,19 +429,22 @@ impl SegmentReader {
} else {
let postings_size: ByteCount = self
.postings_composite
.open_read(field)
.get()
.and_then(|composite| composite.open_read(field))
.map(|posting_fileslice| posting_fileslice.len())
.unwrap_or(0)
.into();
let positions_size: ByteCount = self
.positions_composite
.open_read(field)
.get()
.and_then(|composite| composite.open_read(field))
.map(|positions_fileslice| positions_fileslice.len())
.unwrap_or(0)
.into();
let term_dictionary_size: ByteCount = self
.termdict_composite
.open_read(field)
.get()
.and_then(|composite| composite.open_read(field))
.map(|term_dictionary_fileslice| term_dictionary_fileslice.len())
.unwrap_or(0)
.into();
@@ -431,7 +502,7 @@ impl SegmentReader {
/// Returns the bitset representing the alive `DocId`s.
pub fn alive_bitset(&self) -> Option<&AliveBitSet> {
self.alive_bitset_opt.as_ref()
self.alive_bitset_opt().as_ref()
}
/// Returns true if the `doc` is marked
@@ -444,7 +515,7 @@ impl SegmentReader {
/// Returns an iterator that will iterate over the alive document ids
pub fn doc_ids_alive(&self) -> Box<dyn Iterator<Item = DocId> + Send + '_> {
if let Some(alive_bitset) = &self.alive_bitset_opt {
if let Some(alive_bitset) = self.alive_bitset_opt() {
Box::new(alive_bitset.iter_alive())
} else {
Box::new(0u32..self.max_doc)
@@ -455,30 +526,135 @@ impl SegmentReader {
pub fn space_usage(&self) -> io::Result<SegmentSpaceUsage> {
Ok(SegmentSpaceUsage::new(
self.num_docs(),
self.termdict_composite.space_usage(self.schema()),
self.postings_composite.space_usage(self.schema()),
self.positions_composite.space_usage(self.schema()),
self.fast_fields_readers.space_usage()?,
self.fieldnorm_readers.space_usage(self.schema()),
self.termdict_composite().space_usage(self.schema()),
self.postings_composite().space_usage(self.schema()),
self.positions_composite().space_usage(self.schema()),
self.fast_fields_readers().space_usage()?,
self.fieldnorm_readers().space_usage(self.schema()),
self.get_store_reader(0)?.space_usage(),
self.alive_bitset_opt
self.alive_bitset_opt()
.as_ref()
.map(AliveBitSet::space_usage)
.unwrap_or_default(),
))
}
fn relative_path(&self, component: SegmentComponent) -> PathBuf {
let mut path = self.segment_id().uuid_string();
path.push_str(&match component {
SegmentComponent::Postings => ".idx".to_string(),
SegmentComponent::Positions => ".pos".to_string(),
SegmentComponent::Terms => ".term".to_string(),
SegmentComponent::Store => ".store".to_string(),
SegmentComponent::TempStore => ".store.temp".to_string(),
SegmentComponent::FastFields => ".fast".to_string(),
SegmentComponent::FieldNorms => ".fieldnorm".to_string(),
SegmentComponent::Delete => format!(".{}.del", self.delete_opstamp().unwrap_or(0)),
});
PathBuf::from(path)
}
fn open_read(&self, component: SegmentComponent) -> Result<FileSlice, OpenReadError> {
let path = self.relative_path(component);
self.index.directory().open_read(&path)
}
#[inline]
fn store_file(&self) -> &FileSlice {
self.store_file.get_or_init(move || {
self.open_read(SegmentComponent::Store)
.expect("should be able to open store segment component")
})
}
#[inline]
fn termdict_composite(&self) -> &CompositeFile {
self.termdict_composite.get_or_init(move || {
CompositeFile::open(
&self
.open_read(SegmentComponent::Terms)
.expect("should be able to open termdict segment component"),
)
.expect("should be able to open termdict composite file")
})
}
#[inline]
fn postings_composite(&self) -> &CompositeFile {
self.postings_composite.get_or_init(move || {
CompositeFile::open(
&self
.open_read(SegmentComponent::Postings)
.expect("should be able to open postings segment component"),
)
.expect("should be able to open postings composite file")
})
}
#[inline]
fn positions_composite(&self) -> &CompositeFile {
self.positions_composite.get_or_init(move || {
if let Ok(positions_file) = &self.open_read(SegmentComponent::Positions) {
CompositeFile::open(&positions_file)
.expect("should be able to open positions composite component")
} else {
CompositeFile::empty()
}
})
}
#[inline]
fn fast_fields_readers(&self) -> &FastFieldReaders {
self.fast_fields_readers.get_or_init(move || {
FastFieldReaders::open(
self.open_read(SegmentComponent::FastFields)
.expect("should be able to open fast fields segment component"),
self.schema.clone(),
)
.expect("should be able to open fast fields readers")
})
}
#[inline]
fn fieldnorm_readers(&self) -> &FieldNormReaders {
self.fieldnorm_readers.get_or_init(move || {
FieldNormReaders::open(
self.open_read(SegmentComponent::FieldNorms)
.expect("should be able to open field norms segment component"),
)
.expect("should be able to open field norms readers")
})
}
#[inline]
fn alive_bitset_opt(&self) -> &Option<AliveBitSet> {
self.alive_bitset_opt.get_or_init(move || {
let physical_alive_bitset = if self.has_deletes {
Some(AliveBitSet::open(
self.open_read(SegmentComponent::Delete)
.expect("should be able to open deletes segment component")
.read_bytes()
.expect("should be able to read deletes segment component"),
))
} else {
None
};
intersect_alive_bitset(physical_alive_bitset, self.custom_alive_bitset.clone())
})
}
}
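
The accessors above all follow one pattern: each heavyweight component sits behind an `Arc<OnceLock<...>>` and is opened on first use via `get_or_init`, and since clones of the reader share the same cells, each open happens at most once across all clones. A standalone model of that pattern with invented types (not tantivy API):

use std::sync::{Arc, OnceLock};

#[derive(Clone, Default)]
struct LazyReader {
    // Stands in for e.g. the store FileSlice or a CompositeFile.
    store: Arc<OnceLock<String>>,
}

impl LazyReader {
    fn store(&self) -> &String {
        self.store.get_or_init(|| {
            // Imagine an expensive open_read(SegmentComponent::Store) here;
            // it runs at most once per shared cell.
            "opened store".to_string()
        })
    }
}

fn main() {
    let reader = LazyReader::default();
    let clone = reader.clone();
    assert_eq!(reader.store(), "opened store");
    // The clone sees the already-initialized value without re-opening anything.
    assert!(clone.store.get().is_some());
}

This is what moves the work out of `SegmentReader::open`: the `open_read` and `CompositeFile::open` calls that used to run eagerly now run inside the first accessor that needs them.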
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
/// FieldMetadata
pub struct FieldMetadata {
/// The field name
// Notice: Don't reorder the declaration of 1.field_name 2.typ, as it is used for ordering by
// field_name then typ.
/// Notice: Don't reorder the declaration of 1.field_name 2.typ, as it is used for ordering by
/// field_name then typ.
pub field_name: String,
/// The field type
// Notice: Don't reorder the declaration of 1.field_name 2.typ, as it is used for ordering by
// field_name then typ.
/// Notice: Don't reorder the declaration of 1.field_name 2.typ, as it is used for ordering by
/// field_name then typ.
pub typ: Type,
/// Is the field stored in the doc store
pub stored: bool,
@@ -577,8 +753,8 @@ fn intersect_alive_bitset(
}
impl fmt::Debug for SegmentReader {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "SegmentReader({:?})", self.segment_id)
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "SegmentReader({:?})", self.segment_id())
}
}

View File

@@ -189,7 +189,7 @@ impl DeleteCursor {
fn is_behind_opstamp(&mut self, target_opstamp: Opstamp) -> bool {
self.get()
.map(|operation| operation.opstamp < target_opstamp)
.map(|operation| operation.opstamp() < target_opstamp)
.unwrap_or(false)
}
@@ -263,7 +263,7 @@ mod tests {
fn test_deletequeue() {
let delete_queue = DeleteQueue::new();
let make_op = |i: usize| DeleteOperation {
let make_op = |i: usize| DeleteOperation::ByWeight {
opstamp: i as u64,
target: Box::new(DummyWeight),
};
@@ -274,9 +274,9 @@ mod tests {
let snapshot = delete_queue.cursor();
{
let mut operations_it = snapshot.clone();
assert_eq!(operations_it.get().unwrap().opstamp, 1);
assert_eq!(operations_it.get().unwrap().opstamp(), 1);
operations_it.advance();
assert_eq!(operations_it.get().unwrap().opstamp, 2);
assert_eq!(operations_it.get().unwrap().opstamp(), 2);
operations_it.advance();
assert!(operations_it.get().is_none());
operations_it.advance();
@@ -284,20 +284,20 @@ mod tests {
let mut snapshot2 = delete_queue.cursor();
assert!(snapshot2.get().is_none());
delete_queue.push(make_op(3));
assert_eq!(snapshot2.get().unwrap().opstamp, 3);
assert_eq!(operations_it.get().unwrap().opstamp, 3);
assert_eq!(operations_it.get().unwrap().opstamp, 3);
assert_eq!(snapshot2.get().unwrap().opstamp(), 3);
assert_eq!(operations_it.get().unwrap().opstamp(), 3);
assert_eq!(operations_it.get().unwrap().opstamp(), 3);
operations_it.advance();
assert!(operations_it.get().is_none());
operations_it.advance();
}
{
let mut operations_it = snapshot;
assert_eq!(operations_it.get().unwrap().opstamp, 1);
assert_eq!(operations_it.get().unwrap().opstamp(), 1);
operations_it.advance();
assert_eq!(operations_it.get().unwrap().opstamp, 2);
assert_eq!(operations_it.get().unwrap().opstamp(), 2);
operations_it.advance();
assert_eq!(operations_it.get().unwrap().opstamp, 3);
assert_eq!(operations_it.get().unwrap().opstamp(), 3);
operations_it.advance();
assert!(operations_it.get().is_none());
}

View File

@@ -40,7 +40,6 @@ impl DocToOpstampMapping<'_> {
#[cfg(test)]
mod tests {
use super::DocToOpstampMapping;
#[test]


@@ -22,7 +22,7 @@ use crate::indexer::{MergePolicy, SegmentEntry, SegmentWriter};
use crate::query::{EnableScoring, Query, TermQuery};
use crate::schema::document::Document;
use crate::schema::{IndexRecordOption, TantivyDocument, Term};
use crate::{FutureResult, Opstamp};
use crate::{Directory, DocId, FutureResult, Opstamp};
// Size of the margin for the `memory_arena`. A segment is closed when the remaining memory
// in the `memory_arena` goes below MARGIN_IN_BYTES.
@@ -101,24 +101,41 @@ fn compute_deleted_bitset(
) -> crate::Result<bool> {
let mut might_have_changed = false;
while let Some(delete_op) = delete_cursor.get() {
if delete_op.opstamp > target_opstamp {
if delete_op.opstamp() > target_opstamp {
break;
}
// A delete operation should only affect
// documents that were inserted before it.
delete_op
.target
.for_each_no_score(segment_reader, &mut |docs_matching_delete_query| {
for doc_matching_delete_query in docs_matching_delete_query.iter().cloned() {
if doc_opstamps.is_deleted(doc_matching_delete_query, delete_op.opstamp) {
alive_bitset.remove(doc_matching_delete_query);
match delete_op {
DeleteOperation::ByWeight { opstamp, target } => {
// A delete operation should only affect
// documents that were inserted before it.
target.for_each_no_score(segment_reader, &mut |docs_matching_delete_query| {
for doc_matching_delete_query in docs_matching_delete_query.iter().cloned() {
if doc_opstamps.is_deleted(doc_matching_delete_query, *opstamp) {
alive_bitset.remove(doc_matching_delete_query);
might_have_changed = true;
}
}
})?;
}
DeleteOperation::ByAddress {
opstamp,
segment_id,
doc_id,
} => {
if *segment_id == segment_reader.segment_id() {
if doc_opstamps.is_deleted(*doc_id, *opstamp) {
alive_bitset.remove(*doc_id);
might_have_changed = true;
}
}
})?;
}
}
delete_cursor.advance();
}
Ok(might_have_changed)
}
@@ -128,7 +145,7 @@ fn compute_deleted_bitset(
/// is `==` target_opstamp.
/// For instance, if there was no delete operation between the state of the `segment_entry` and
/// the `target_opstamp`, the `segment_entry` is not updated.
pub(crate) fn advance_deletes(
pub fn advance_deletes(
mut segment: Segment,
segment_entry: &mut SegmentEntry,
target_opstamp: Opstamp,
@@ -296,8 +313,8 @@ impl<D: Document> IndexWriter<D> {
return Err(TantivyError::InvalidArgument(err_msg));
}
if options.num_worker_threads == 0 {
let err_msg = "At least one worker thread is required, got 0".to_string();
return Err(TantivyError::InvalidArgument(err_msg));
// let err_msg = "At least one worker thread is required, got 0".to_string();
// return Err(TantivyError::InvalidArgument(err_msg));
}
let (document_sender, document_receiver) =
@@ -314,6 +331,11 @@ impl<D: Document> IndexWriter<D> {
stamper.clone(),
&delete_queue.cursor(),
options.num_merge_threads,
index.directory().panic_handler(),
{
let index = index.clone();
move || index.directory().wants_cancel()
},
)?;
let mut index_writer = Self {
@@ -373,6 +395,10 @@ impl<D: Document> IndexWriter<D> {
error!("Some merging thread failed {e:?}");
}
let merge_errors = self.segment_updater.get_merge_errors();
if !merge_errors.is_empty() {
return Err(TantivyError::MergeErrors(merge_errors));
}
result
}
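A minimal sketch of how a caller might surface the merge failures collected here, assuming the crate is consumed as `tantivy` and that `TantivyError::MergeErrors(Vec<TantivyError>)` is the variant returned above; the helper name is illustrative.

use tantivy::{IndexWriter, TantivyDocument, TantivyError};

// Hypothetical helper: wait for background merges and report each collected merge error.
fn shutdown_writer(writer: IndexWriter<TantivyDocument>) {
    match writer.wait_merging_threads() {
        Ok(()) => {}
        Err(TantivyError::MergeErrors(errors)) => {
            for err in errors {
                eprintln!("background merge failed: {err:?}");
            }
        }
        Err(other) => eprintln!("shutting down the writer failed: {other:?}"),
    }
}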
@@ -533,11 +559,30 @@ impl<D: Document> IndexWriter<D> {
///
/// `segment_ids` is required to be non-empty.
pub fn merge(&mut self, segment_ids: &[SegmentId]) -> FutureResult<Option<SegmentMeta>> {
let merge_operation = self.segment_updater.make_merge_operation(segment_ids);
let merge_operation = self
.segment_updater
.make_merge_operation(segment_ids, false);
let segment_updater = self.segment_updater.clone();
segment_updater.start_merge(merge_operation)
}
/// Merges a given list of segments. This is a blocking operation that performs
/// the merge in the calling thread (foreground).
///
/// If all segments are empty no new segment will be created.
///
/// `segment_ids` is required to be non-empty.
pub fn merge_foreground(
&mut self,
segment_ids: &[SegmentId],
ignore_store: bool,
) -> crate::Result<Option<SegmentMeta>> {
let merge_operation = self
.segment_updater
.make_merge_operation(segment_ids, ignore_store);
self.segment_updater.merge_foreground(merge_operation)
}
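A short usage sketch for the new foreground entry point, assuming the crate is used as `tantivy`; the writer options, the helper name, and the decision to keep the doc store (`ignore_store = false`) are illustrative.

use tantivy::indexer::IndexWriterOptions;
use tantivy::{Index, TantivyDocument, TantivyError};

// Hypothetical helper: merge every searchable segment in the calling thread.
fn merge_all_in_foreground(index: &Index) -> Result<(), TantivyError> {
    let options = IndexWriterOptions::builder().num_worker_threads(1).build();
    let mut writer = index.writer_with_options::<TantivyDocument>(options)?;
    let segment_ids = index.searchable_segment_ids()?;
    if !segment_ids.is_empty() {
        // `false` keeps the doc store; `true` would skip copying stored fields into the merged segment.
        let merged = writer.merge_foreground(&segment_ids, false)?;
        println!("merged into {:?}", merged.map(|meta| meta.id()));
    }
    Ok(())
}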
/// Closes the current document channel sender
/// and replaces all the channels with new ones.
///
@@ -698,7 +743,7 @@ impl<D: Document> IndexWriter<D> {
pub fn delete_query(&self, query: Box<dyn Query>) -> crate::Result<Opstamp> {
let weight = query.weight(EnableScoring::disabled_from_schema(&self.index.schema()))?;
let opstamp = self.stamper.stamp();
let delete_operation = DeleteOperation {
let delete_operation = DeleteOperation::ByWeight {
opstamp,
target: weight,
};
@@ -706,6 +751,17 @@ impl<D: Document> IndexWriter<D> {
Ok(opstamp)
}
/// Delete a specific document by its already-known [`DocAddress`]
pub fn delete_by_address(&self, segment_id: SegmentId, doc_id: DocId) -> Opstamp {
let opstamp = self.stamper.stamp();
self.delete_queue.push(DeleteOperation::ByAddress {
opstamp,
segment_id,
doc_id,
});
opstamp
}
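A hedged sketch of calling the new address-based delete, assuming the segment id and doc id were recorded from an earlier search against segments that have not changed since; the helper name is illustrative.

use tantivy::index::SegmentId;
use tantivy::{DocId, IndexWriter, Opstamp, TantivyDocument};

// Hypothetical helper: delete one concrete document whose location is already known.
fn delete_known_doc(
    writer: &IndexWriter<TantivyDocument>,
    segment_id: SegmentId,
    doc_id: DocId,
) -> Opstamp {
    // The returned opstamp becomes visible once a commit at or after it is reloaded.
    writer.delete_by_address(segment_id, doc_id)
}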
/// Returns the opstamp of the last successful commit.
///
/// This is, for instance, the opstamp the index will
@@ -779,7 +835,7 @@ impl<D: Document> IndexWriter<D> {
let query = TermQuery::new(term, IndexRecordOption::Basic);
let weight =
query.weight(EnableScoring::disabled_from_schema(&self.index.schema()))?;
let delete_operation = DeleteOperation {
let delete_operation = DeleteOperation::ByWeight {
opstamp,
target: weight,
};
@@ -789,6 +845,13 @@ impl<D: Document> IndexWriter<D> {
let add_operation = AddOperation { opstamp, document };
adds.push(add_operation);
}
UserOperation::DeleteByAddress(segment_id, doc_id) => {
self.delete_queue.push(DeleteOperation::ByAddress {
opstamp,
segment_id,
doc_id,
});
}
}
}
self.send_add_documents_batch(adds)?;
@@ -1089,7 +1152,10 @@ mod tests {
index_writer.commit()?;
reader.reload().unwrap();
assert_eq!(num_docs_containing("a"), 0);
// In Tantivy upstream, this test results in 0 segments after delete.
// However, due to our custom visibility rules, we leave the segment.
// See committed_segment_metas in segment_manager.rs.
assert_eq!(num_docs_containing("a"), 1);
index_writer.merge(&segments);
index_writer.wait_merging_threads().unwrap();
@@ -1135,7 +1201,10 @@ mod tests {
index_writer.commit()?;
reader.reload().unwrap();
assert_eq!(num_docs_containing("a"), 0);
// In Tantivy upstream, this test results in 0 segments after delete.
// However, due to our custom visibility rules, we leave the segment.
// See committed_segment_metas in segment_manager.rs.
assert_eq!(num_docs_containing("a"), 4);
index_writer.merge(&segments);
index_writer.wait_merging_threads().unwrap();
@@ -2251,6 +2320,7 @@ mod tests {
}
#[test]
#[ignore = "doesn't work with deferred segment loading"]
fn test_ff_num_ips_regression() {
assert!(test_operation_strategy(
&[
@@ -2292,27 +2362,32 @@ mod tests {
#![proptest_config(ProptestConfig::with_cases(20))]
#[test]
#[ignore = "doesn't work with deferred segment loading"]
fn test_delete_proptest_adding(ops in proptest::collection::vec(adding_operation_strategy(), 1..100)) {
assert!(test_operation_strategy(&ops[..], false).is_ok());
}
#[test]
#[ignore = "doesn't work with deferred segment loading"]
fn test_delete_proptest_with_merge_adding(ops in proptest::collection::vec(adding_operation_strategy(), 1..100)) {
assert!(test_operation_strategy(&ops[..], true).is_ok());
}
#[test]
#[ignore = "doesn't work with deferred segment loading"]
fn test_delete_proptest(ops in proptest::collection::vec(balanced_operation_strategy(), 1..10)) {
assert!(test_operation_strategy(&ops[..], false).is_ok());
}
#[test]
#[ignore = "doesn't work with deferred segment loading"]
fn test_delete_proptest_with_merge(ops in proptest::collection::vec(balanced_operation_strategy(), 1..100)) {
assert!(test_operation_strategy(&ops[..], true).is_ok());
}
}
#[test]
#[ignore = "doesn't work with deferred segment loading"]
fn test_delete_bug_reproduction_ip_addr() {
use IndexingOp::*;
let ops = &[
@@ -2560,10 +2635,15 @@ mod tests {
let _field = schema_builder.add_bool_field("example", STORED);
let index = Index::create_in_ram(schema_builder.build());
// NB: tantivy proper probably can't work with zero worker threads, but we (pg_search) do
// indexing and merging in the foreground and don't need the worker threads
let opt_wo_threads = IndexWriterOptions::builder().num_worker_threads(0).build();
let result = index.writer_with_options::<TantivyDocument>(opt_wo_threads);
assert!(result.is_err(), "Writer should reject 0 thread count");
assert!(matches!(result, Err(TantivyError::InvalidArgument(_))));
assert!(result.is_ok(), "Writer should accept 0 thread count");
// the above actually created a writer which then takes a lock, which causes the next
// attempt to open an IndexWriter to fail in a way that's different than expected.
// Dropping the Result<IndexWriter> we just made lets the test carry on unchanged
drop(result);
let opt_with_low_memory = IndexWriterOptions::builder()
.memory_budget_per_thread(10 << 10)


@@ -4,6 +4,7 @@ use itertools::Itertools;
use super::merge_policy::{MergeCandidate, MergePolicy};
use crate::index::SegmentMeta;
use crate::Directory;
const DEFAULT_LEVEL_LOG_SIZE: f64 = 0.75;
const DEFAULT_MIN_LAYER_SIZE: u32 = 10_000;
@@ -91,7 +92,11 @@ fn deletes_ratio(segment: &SegmentMeta) -> f32 {
}
impl MergePolicy for LogMergePolicy {
fn compute_merge_candidates(&self, segments: &[SegmentMeta]) -> Vec<MergeCandidate> {
fn compute_merge_candidates(
&self,
_directory: Option<&dyn Directory>,
segments: &[SegmentMeta],
) -> Vec<MergeCandidate> {
let size_sorted_segments = segments
.iter()
.filter(|seg| seg.num_docs() <= (self.max_docs_before_merge as u32))
@@ -222,7 +227,7 @@ mod tests {
#[test]
fn test_log_merge_policy_empty() {
let y = Vec::new();
let result_list = test_merge_policy().compute_merge_candidates(&y);
let result_list = test_merge_policy().compute_merge_candidates(None, &y);
assert!(result_list.is_empty());
}
@@ -237,7 +242,7 @@ mod tests {
create_random_segment_meta(10),
create_random_segment_meta(10),
];
let result_list = test_merge_policy().compute_merge_candidates(&test_input);
let result_list = test_merge_policy().compute_merge_candidates(None, &test_input);
assert_eq!(result_list.len(), 1);
}
@@ -261,7 +266,7 @@ mod tests {
create_random_segment_meta(10),
create_random_segment_meta(10),
];
let result_list = test_merge_policy().compute_merge_candidates(&test_input);
let result_list = test_merge_policy().compute_merge_candidates(None, &test_input);
assert_eq!(result_list.len(), 2);
}
@@ -276,7 +281,7 @@ mod tests {
create_random_segment_meta(1000), // log2(1000) = ~9.97
create_random_segment_meta(1000),
]; // log2(1000) = ~9.97
let result_list = test_merge_policy().compute_merge_candidates(&test_input);
let result_list = test_merge_policy().compute_merge_candidates(None, &test_input);
assert_eq!(result_list.len(), 2);
}
@@ -291,7 +296,7 @@ mod tests {
create_random_segment_meta(2),
create_random_segment_meta(2),
];
let result_list = test_merge_policy().compute_merge_candidates(&test_input);
let result_list = test_merge_policy().compute_merge_candidates(None, &test_input);
assert_eq!(result_list.len(), 1);
}
@@ -302,7 +307,7 @@ mod tests {
.take(8)
.collect();
assert!(test_merge_policy()
.compute_merge_candidates(&eight_large_segments)
.compute_merge_candidates(None, &eight_large_segments)
.is_empty());
}
@@ -317,7 +322,7 @@ mod tests {
create_random_segment_meta(100_000),
create_random_segment_meta(1_500_000),
];
let result_list = test_merge_policy().compute_merge_candidates(&test_input);
let result_list = test_merge_policy().compute_merge_candidates(None, &test_input);
// Do not include large segments
assert_eq!(result_list.len(), 1);
assert_eq!(result_list[0].0.len(), 3);
@@ -333,7 +338,7 @@ mod tests {
let mut test_merge_policy = test_merge_policy();
test_merge_policy.set_del_docs_ratio_before_merge(0.25f32);
let test_input = vec![create_random_segment_meta(40_000).with_delete_meta(10_000, 1)];
let merge_candidates = test_merge_policy.compute_merge_candidates(&test_input);
let merge_candidates = test_merge_policy.compute_merge_candidates(None, &test_input);
assert!(merge_candidates.is_empty());
}
@@ -342,7 +347,7 @@ mod tests {
let mut test_merge_policy = test_merge_policy();
test_merge_policy.set_del_docs_ratio_before_merge(0.25f32);
let test_input = vec![create_random_segment_meta(40_000).with_delete_meta(10_001, 1)];
let merge_candidates = test_merge_policy.compute_merge_candidates(&test_input);
let merge_candidates = test_merge_policy.compute_merge_candidates(None, &test_input);
assert_eq!(merge_candidates.len(), 1);
}
@@ -354,7 +359,7 @@ mod tests {
create_random_segment_meta(40_000).with_delete_meta(10_001, 1),
create_random_segment_meta(40_000),
];
let merge_candidates = test_merge_policy.compute_merge_candidates(&test_input);
let merge_candidates = test_merge_policy.compute_merge_candidates(None, &test_input);
assert_eq!(merge_candidates.len(), 1);
assert_eq!(merge_candidates[0].0.len(), 2);
}
@@ -367,7 +372,7 @@ mod tests {
create_random_segment_meta(100),
create_random_segment_meta(40_000).with_delete_meta(10_001, 1),
];
let merge_candidates = test_merge_policy.compute_merge_candidates(&test_input);
let merge_candidates = test_merge_policy.compute_merge_candidates(None, &test_input);
assert_eq!(merge_candidates.len(), 1);
assert_eq!(merge_candidates[0].0.len(), 1);
assert_eq!(merge_candidates[0].0[0], test_input[1].id());


@@ -47,6 +47,7 @@ pub struct MergeOperation {
pub(crate) struct InnerMergeOperation {
target_opstamp: Opstamp,
segment_ids: Vec<SegmentId>,
ignore_store: bool,
}
impl MergeOperation {
@@ -54,10 +55,12 @@ impl MergeOperation {
inventory: &MergeOperationInventory,
target_opstamp: Opstamp,
segment_ids: Vec<SegmentId>,
ignore_store: bool,
) -> MergeOperation {
let inner_merge_operation = InnerMergeOperation {
target_opstamp,
segment_ids,
ignore_store,
};
MergeOperation {
inner: inventory.track(inner_merge_operation),
@@ -74,4 +77,9 @@ impl MergeOperation {
pub fn segment_ids(&self) -> &[SegmentId] {
&self.inner.segment_ids[..]
}
/// Returns true if the store should be ignored during merge.
pub fn ignore_store(&self) -> bool {
self.inner.ignore_store
}
}


@@ -2,9 +2,10 @@ use std::fmt::Debug;
use std::marker;
use crate::index::{SegmentId, SegmentMeta};
use crate::Directory;
/// Set of segment suggested for a merge.
#[derive(Debug, Clone)]
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct MergeCandidate(pub Vec<SegmentId>);
/// The `MergePolicy` defines which segments should be merged.
@@ -16,7 +17,11 @@ pub trait MergePolicy: marker::Send + marker::Sync + Debug {
///
/// This call happens on the segment updater thread, and will block
/// other segment updates, so all implementations should happen rapidly.
fn compute_merge_candidates(&self, segments: &[SegmentMeta]) -> Vec<MergeCandidate>;
fn compute_merge_candidates(
&self,
directory: Option<&dyn Directory>,
segments: &[SegmentMeta],
) -> Vec<MergeCandidate>;
}
/// Never merge segments.
@@ -30,7 +35,11 @@ impl Default for NoMergePolicy {
}
impl MergePolicy for NoMergePolicy {
fn compute_merge_candidates(&self, _segments: &[SegmentMeta]) -> Vec<MergeCandidate> {
fn compute_merge_candidates(
&self,
_directory: Option<&dyn Directory>,
_segments: &[SegmentMeta],
) -> Vec<MergeCandidate> {
Vec::new()
}
}
@@ -48,7 +57,11 @@ pub(crate) mod tests {
pub struct MergeWheneverPossible;
impl MergePolicy for MergeWheneverPossible {
fn compute_merge_candidates(&self, segment_metas: &[SegmentMeta]) -> Vec<MergeCandidate> {
fn compute_merge_candidates(
&self,
_directory: Option<&dyn Directory>,
segment_metas: &[SegmentMeta],
) -> Vec<MergeCandidate> {
let segment_ids = segment_metas
.iter()
.map(|segment_meta| segment_meta.id())


@@ -12,14 +12,16 @@ use crate::docset::{DocSet, TERMINATED};
use crate::error::DataCorruption;
use crate::fastfield::AliveBitSet;
use crate::fieldnorm::{FieldNormReader, FieldNormReaders, FieldNormsSerializer, FieldNormsWriter};
use crate::index::merge_optimized_inverted_index_reader::MergeOptimizedInvertedIndexReader;
use crate::index::{Segment, SegmentComponent, SegmentReader};
use crate::indexer::doc_id_mapping::{MappingType, SegmentDocIdMapping};
use crate::indexer::segment_updater::CancelSentinel;
use crate::indexer::SegmentSerializer;
use crate::postings::{InvertedIndexSerializer, Postings, SegmentPostings};
use crate::schema::{value_type_to_column_type, Field, FieldType, Schema};
use crate::store::StoreWriter;
use crate::termdict::{TermMerger, TermOrdinal};
use crate::{DocAddress, DocId, InvertedIndexReader};
use crate::{DocAddress, DocId};
/// Segment's max doc must be `< MAX_DOC_LIMIT`.
///
@@ -80,6 +82,8 @@ pub struct IndexMerger {
schema: Schema,
pub(crate) readers: Vec<SegmentReader>,
max_doc: u32,
cancel: Box<dyn CancelSentinel>,
ignore_store: bool,
}
struct DeltaComputer {
@@ -145,9 +149,14 @@ fn extract_fast_field_required_columns(schema: &Schema) -> Vec<(String, ColumnTy
}
impl IndexMerger {
pub fn open(schema: Schema, segments: &[Segment]) -> crate::Result<IndexMerger> {
pub fn open(
schema: Schema,
segments: &[Segment],
cancel: Box<dyn CancelSentinel>,
ignore_store: bool,
) -> crate::Result<IndexMerger> {
let alive_bitset = segments.iter().map(|_| None).collect_vec();
Self::open_with_custom_alive_set(schema, segments, alive_bitset)
Self::open_with_custom_alive_set(schema, segments, alive_bitset, cancel, ignore_store)
}
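A sketch of opening the merger directly under the new signature, assuming `IndexMerger` stays reachable through the now-public `merger` module and that the `CancelSentinel` trait (declared in a crate-internal module in this diff) is reachable from the caller; the no-op cancel closure coerces through the blanket impl added elsewhere in this change set.

use tantivy::indexer::merger::IndexMerger;
use tantivy::{Index, TantivyError};

// Hypothetical helper: build a merger over all searchable segments, never cancelling and
// keeping the doc store (`ignore_store = false`).
fn open_merger(index: &Index) -> Result<IndexMerger, TantivyError> {
    let segments = index.searchable_segments()?;
    IndexMerger::open(index.schema(), &segments, Box::new(|| false), false)
}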
// Create merge with a custom delete set.
@@ -166,6 +175,8 @@ impl IndexMerger {
schema: Schema,
segments: &[Segment],
alive_bitset_opt: Vec<Option<AliveBitSet>>,
cancel: Box<dyn CancelSentinel>,
ignore_store: bool,
) -> crate::Result<IndexMerger> {
let mut readers = vec![];
for (segment, new_alive_bitset_opt) in segments.iter().zip(alive_bitset_opt) {
@@ -189,6 +200,8 @@ impl IndexMerger {
schema,
readers,
max_doc,
cancel,
ignore_store,
})
}
@@ -200,6 +213,9 @@ impl IndexMerger {
let fields = FieldNormsWriter::fields_with_fieldnorm(&self.schema);
let mut fieldnorms_data = Vec::with_capacity(self.max_doc as usize);
for field in fields {
if self.cancel.wants_cancel() {
return Err(crate::TantivyError::Cancelled);
}
fieldnorms_data.clear();
let fieldnorms_readers: Vec<FieldNormReader> = self
.readers
@@ -235,6 +251,7 @@ impl IndexMerger {
&required_columns,
merge_row_order,
fast_field_wrt,
|| self.cancel.wants_cancel(),
)?;
Ok(())
}
@@ -297,10 +314,10 @@ impl IndexMerger {
let mut max_term_ords: Vec<TermOrdinal> = Vec::new();
let field_readers: Vec<Arc<InvertedIndexReader>> = self
let field_readers: Vec<Arc<MergeOptimizedInvertedIndexReader>> = self
.readers
.iter()
.map(|reader| reader.inverted_index(indexed_field))
.map(|reader| reader.merge_optimized_inverted_index(indexed_field))
.collect::<crate::Result<Vec<_>>>()?;
let mut field_term_streams = Vec::new();
@@ -357,7 +374,16 @@ impl IndexMerger {
let mut segment_postings_containing_the_term: Vec<(usize, SegmentPostings)> = vec![];
let mut cnt = 0;
while merged_terms.advance() {
// calling `wants_cancel()` could be expensive, so only check it every so often
if cnt % 1000 == 0 {
if self.cancel.wants_cancel() {
return Err(crate::TantivyError::Cancelled);
}
}
cnt += 1;
segment_postings_containing_the_term.clear();
let term_bytes: &[u8] = merged_terms.key();
@@ -366,7 +392,8 @@ impl IndexMerger {
// Let's compute the list of non-empty posting lists
for (segment_ord, term_info) in merged_terms.current_segment_ords_and_term_infos() {
let segment_reader = &self.readers[segment_ord];
let inverted_index: &InvertedIndexReader = &field_readers[segment_ord];
let inverted_index: &MergeOptimizedInvertedIndexReader =
&field_readers[segment_ord];
let segment_postings = inverted_index
.read_postings_from_terminfo(&term_info, segment_postings_option)?;
let alive_bitset_opt = segment_reader.alive_bitset();
@@ -436,6 +463,12 @@ impl IndexMerger {
let mut doc = segment_postings.doc();
while doc != TERMINATED {
if doc % 1000 == 0 {
// calling `wants_cancel()` could be expensive, so only check it every so often
if self.cancel.wants_cancel() {
return Err(crate::TantivyError::Cancelled);
}
}
// deleted docs are skipped as they do not have a `remapped_doc_id`.
if let Some(remapped_doc_id) = old_to_new_doc_id[doc as usize] {
// we make sure to only write the term if
@@ -472,6 +505,9 @@ impl IndexMerger {
doc_id_mapping: &SegmentDocIdMapping,
) -> crate::Result<()> {
for (field, field_entry) in self.schema.fields() {
if self.cancel.wants_cancel() {
return Err(crate::TantivyError::Cancelled);
}
let fieldnorm_reader = fieldnorm_readers.get_field(field)?;
if field_entry.is_indexed() {
self.write_postings_for_field(
@@ -510,6 +546,9 @@ impl IndexMerger {
|| store_reader.decompressor() != store_writer.compressor().into()
{
for doc_bytes_res in store_reader.iter_raw(reader.alive_bitset()) {
if self.cancel.wants_cancel() {
return Err(crate::TantivyError::Cancelled);
}
let doc_bytes = doc_bytes_res?;
store_writer.store_bytes(&doc_bytes)?;
}
@@ -543,7 +582,9 @@ impl IndexMerger {
)?;
debug!("write-storagefields");
self.write_storable_fields(serializer.get_store_writer())?;
if !self.ignore_store {
self.write_storable_fields(serializer.get_store_writer())?;
}
debug!("write-fastfields");
self.write_fast_fields(serializer.get_fast_field_write(), doc_id_mapping)?;
@@ -735,6 +776,163 @@ mod tests {
Ok(())
}
// NB: this is the same as `test_index_merger_no_deletes` above, but using `merge_foreground()`
#[test]
fn test_foreground_merge() -> crate::Result<()> {
let mut schema_builder = schema::Schema::builder();
let text_fieldtype = schema::TextOptions::default()
.set_indexing_options(
TextFieldIndexing::default().set_index_option(IndexRecordOption::WithFreqs),
)
.set_stored();
let text_field = schema_builder.add_text_field("text", text_fieldtype);
let date_field = schema_builder.add_date_field("date", INDEXED);
let score_fieldtype = schema::NumericOptions::default().set_fast();
let score_field = schema_builder.add_u64_field("score", score_fieldtype);
let bytes_score_field = schema_builder.add_bytes_field("score_bytes", FAST);
let index = Index::create_in_ram(schema_builder.build());
let reader = index.reader()?;
let curr_time = OffsetDateTime::now_utc();
{
let mut index_writer = index.writer_for_tests()?;
// writing the segment
index_writer.add_document(doc!(
text_field => "af b",
score_field => 3u64,
date_field => DateTime::from_utc(curr_time),
bytes_score_field => 3u32.to_be_bytes().as_ref()
))?;
index_writer.add_document(doc!(
text_field => "a b c",
score_field => 5u64,
bytes_score_field => 5u32.to_be_bytes().as_ref()
))?;
index_writer.add_document(doc!(
text_field => "a b c d",
score_field => 7u64,
bytes_score_field => 7u32.to_be_bytes().as_ref()
))?;
index_writer.commit()?;
// writing the segment
index_writer.add_document(doc!(
text_field => "af b",
date_field => DateTime::from_utc(curr_time),
score_field => 11u64,
bytes_score_field => 11u32.to_be_bytes().as_ref()
))?;
index_writer.add_document(doc!(
text_field => "a b c g",
score_field => 13u64,
bytes_score_field => 13u32.to_be_bytes().as_ref()
))?;
index_writer.commit()?;
}
{
let segment_ids = index
.searchable_segment_ids()
.expect("Searchable segments failed.");
let mut index_writer: IndexWriter = index.writer_for_tests()?;
index_writer.merge_foreground(&segment_ids, false)?;
}
{
reader.reload()?;
let searcher = reader.searcher();
let get_doc_ids = |terms: Vec<Term>| {
let query = BooleanQuery::new_multiterms_query(terms);
searcher
.search(&query, &TEST_COLLECTOR_WITH_SCORE)
.map(|top_docs| top_docs.docs().to_vec())
};
{
assert_eq!(
get_doc_ids(vec![Term::from_field_text(text_field, "a")])?,
vec![
DocAddress::new(0, 1),
DocAddress::new(0, 2),
DocAddress::new(0, 4)
]
);
assert_eq!(
get_doc_ids(vec![Term::from_field_text(text_field, "af")])?,
vec![DocAddress::new(0, 0), DocAddress::new(0, 3)]
);
assert_eq!(
get_doc_ids(vec![Term::from_field_text(text_field, "g")])?,
vec![DocAddress::new(0, 4)]
);
assert_eq!(
get_doc_ids(vec![Term::from_field_text(text_field, "b")])?,
vec![
DocAddress::new(0, 0),
DocAddress::new(0, 1),
DocAddress::new(0, 2),
DocAddress::new(0, 3),
DocAddress::new(0, 4)
]
);
assert_eq!(
get_doc_ids(vec![Term::from_field_date_for_search(
date_field,
DateTime::from_utc(curr_time)
)])?,
vec![DocAddress::new(0, 0), DocAddress::new(0, 3)]
);
}
{
let doc = searcher.doc::<TantivyDocument>(DocAddress::new(0, 0))?;
assert_eq!(
doc.get_first(text_field).unwrap().as_value().as_str(),
Some("af b")
);
}
{
let doc = searcher.doc::<TantivyDocument>(DocAddress::new(0, 1))?;
assert_eq!(
doc.get_first(text_field).unwrap().as_value().as_str(),
Some("a b c")
);
}
{
let doc = searcher.doc::<TantivyDocument>(DocAddress::new(0, 2))?;
assert_eq!(
doc.get_first(text_field).unwrap().as_value().as_str(),
Some("a b c d")
);
}
{
let doc = searcher.doc::<TantivyDocument>(DocAddress::new(0, 3))?;
assert_eq!(doc.get_first(text_field).unwrap().as_str(), Some("af b"));
}
{
let doc = searcher.doc::<TantivyDocument>(DocAddress::new(0, 4))?;
assert_eq!(doc.get_first(text_field).unwrap().as_str(), Some("a b c g"));
}
{
let get_fast_vals = |terms: Vec<Term>| {
let query = BooleanQuery::new_multiterms_query(terms);
searcher.search(&query, &FastFieldTestCollector::for_field("score"))
};
let get_fast_vals_bytes = |terms: Vec<Term>| {
let query = BooleanQuery::new_multiterms_query(terms);
searcher.search(
&query,
&BytesFastFieldTestCollector::for_field("score_bytes"),
)
};
assert_eq!(
get_fast_vals(vec![Term::from_field_text(text_field, "a")])?,
vec![5, 7, 13]
);
assert_eq!(
get_fast_vals_bytes(vec![Term::from_field_text(text_field, "a")])?,
vec![0, 0, 0, 5, 0, 0, 0, 7, 0, 0, 0, 13]
);
}
}
Ok(())
}
#[test]
fn test_index_merger_with_deletes() -> crate::Result<()> {
let mut schema_builder = schema::Schema::builder();
@@ -1032,12 +1230,15 @@ mod tests {
// Test removing all docs
index_writer.delete_term(Term::from_field_text(text_field, "g"));
index_writer.commit()?;
let segment_ids = index.searchable_segment_ids()?;
let _segment_ids = index.searchable_segment_ids()?;
reader.reload()?;
let searcher = reader.searcher();
assert!(segment_ids.is_empty());
assert!(searcher.segment_readers().is_empty());
// In Tantivy upstream, this test results in 0 segments after delete.
// However, due to our custom visibility rules, we leave the segment.
// See committed_segment_metas in segment_manager.rs.
// assert!(segment_ids.is_empty());
// assert!(searcher.segment_readers().is_empty());
assert_eq!(searcher.num_docs(), 0);
}
Ok(())


@@ -4,7 +4,7 @@
//! `IndexWriter` is the main entry point for that, which is created from
//! [`Index::writer`](crate::Index::writer).
pub(crate) mod delete_queue;
pub mod delete_queue;
pub(crate) mod path_to_unordered_id;
pub(crate) mod doc_id_mapping;
@@ -17,13 +17,13 @@ mod log_merge_policy;
mod merge_index_test;
mod merge_operation;
pub(crate) mod merge_policy;
pub(crate) mod merger;
pub mod merger;
pub(crate) mod operation;
pub(crate) mod prepared_commit;
mod segment_entry;
mod segment_manager;
mod segment_register;
pub(crate) mod segment_serializer;
pub mod segment_serializer;
pub(crate) mod segment_updater;
pub(crate) mod segment_writer;
pub(crate) mod single_segment_index_writer;
@@ -32,12 +32,11 @@ mod stamper;
use crossbeam_channel as channel;
use smallvec::SmallVec;
pub use self::index_writer::{IndexWriter, IndexWriterOptions};
pub use self::index_writer::{advance_deletes, IndexWriter, IndexWriterOptions};
pub use self::log_merge_policy::LogMergePolicy;
pub use self::merge_operation::MergeOperation;
pub use self::merge_policy::{MergeCandidate, MergePolicy, NoMergePolicy};
use self::operation::AddOperation;
pub use self::operation::UserOperation;
pub use self::operation::{AddOperation, DeleteOperation, UserOperation};
pub use self::prepared_commit::PreparedCommit;
pub use self::segment_entry::SegmentEntry;
pub(crate) use self::segment_serializer::SegmentSerializer;


@@ -1,12 +1,28 @@
use crate::index::SegmentId;
use crate::query::Weight;
use crate::schema::document::Document;
use crate::schema::{TantivyDocument, Term};
use crate::Opstamp;
use crate::{DocId, Opstamp};
/// Timestamped Delete operation.
pub struct DeleteOperation {
pub opstamp: Opstamp,
pub target: Box<dyn Weight>,
pub enum DeleteOperation {
ByWeight {
opstamp: Opstamp,
target: Box<dyn Weight>,
},
ByAddress {
opstamp: Opstamp,
segment_id: SegmentId,
doc_id: DocId,
},
}
impl DeleteOperation {
pub fn opstamp(&self) -> Opstamp {
match self {
DeleteOperation::ByWeight { opstamp, .. } => *opstamp,
DeleteOperation::ByAddress { opstamp, .. } => *opstamp,
}
}
}
/// Timestamped Add operation.
@@ -23,4 +39,7 @@ pub enum UserOperation<D: Document = TantivyDocument> {
Add(D),
/// Delete operation
Delete(Term),
/// Delete a document by its address
DeleteByAddress(SegmentId, DocId),
}
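A hedged example of batching the new variant with an add through `IndexWriter::run`, which stamps every operation in the batch with the same opstamp; the helper name and the assumption that `run` accepts a `Vec<UserOperation<_>>` follow upstream tantivy.

use tantivy::index::SegmentId;
use tantivy::indexer::UserOperation;
use tantivy::{DocId, IndexWriter, Opstamp, TantivyDocument, TantivyError};

// Hypothetical helper: atomically replace a known document with a new one.
fn replace_doc(
    writer: &IndexWriter<TantivyDocument>,
    segment_id: SegmentId,
    doc_id: DocId,
    new_doc: TantivyDocument,
) -> Result<Opstamp, TantivyError> {
    writer.run(vec![
        UserOperation::DeleteByAddress(segment_id, doc_id),
        UserOperation::Add(new_doc),
    ])
}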


@@ -7,6 +7,7 @@ use crate::error::TantivyError;
use crate::index::{SegmentId, SegmentMeta};
use crate::indexer::delete_queue::DeleteCursor;
use crate::indexer::SegmentEntry;
use crate::Index;
#[derive(Default)]
struct SegmentRegisters {
@@ -114,6 +115,7 @@ impl SegmentManager {
}
/// Deletes all empty segments
#[allow(dead_code)]
fn remove_empty_segments(&self) {
let mut registers_lock = self.write();
registers_lock
@@ -134,7 +136,7 @@ impl SegmentManager {
registers_lock.uncommitted.clear();
}
pub fn commit(&self, segment_entries: Vec<SegmentEntry>) {
pub fn commit(&self, _index: &Index, segment_entries: Vec<SegmentEntry>) {
let mut registers_lock = self.write();
registers_lock.committed.clear();
registers_lock.uncommitted.clear();
@@ -148,7 +150,11 @@ impl SegmentManager {
/// Returns an error if some segments are missing, or if
/// the `segment_ids` are not either all committed or all
/// uncommitted.
pub fn start_merge(&self, segment_ids: &[SegmentId]) -> crate::Result<Vec<SegmentEntry>> {
pub fn start_merge(
&self,
_index: &Index,
segment_ids: &[SegmentId],
) -> crate::Result<Vec<SegmentEntry>> {
let registers_lock = self.read();
let mut segment_entries = vec![];
if registers_lock.uncommitted.contains_all(segment_ids) {
@@ -165,6 +171,7 @@ impl SegmentManager {
"Segment id not found {}. Should never happen because of the contains all \
if-block.",
);
segment_entries.push(segment_entry);
}
} else {
@@ -215,7 +222,13 @@ impl SegmentManager {
}
pub fn committed_segment_metas(&self) -> Vec<SegmentMeta> {
self.remove_empty_segments();
// When a SegmentMeta's DeleteMeta shows max_doc = num_docs_deleted, then all documents in
// this segment have been deleted and Tantivy simply removes this segment from the
// meta.json file. We don't want to do this -- we want to actually write the
// DeleteMeta to the meta.json file because our visibility rules are different for
// "segments that have been deleted" vs. "segments with a DeleteMeta"
// self.remove_empty_segments();
let registers_lock = self.read();
registers_lock.committed.segment_metas()
}


@@ -1,17 +1,17 @@
use std::any::Any;
use std::borrow::BorrowMut;
use std::collections::HashSet;
use std::io::Write;
use std::ops::Deref;
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, RwLock};
use std::sync::Arc;
use parking_lot::RwLock;
use rayon::{ThreadPool, ThreadPoolBuilder};
use super::segment_manager::SegmentManager;
use crate::core::META_FILEPATH;
use crate::directory::{Directory, DirectoryClone, GarbageCollectionResult};
use crate::directory::{Directory, DirectoryClone, DirectoryPanicHandler, GarbageCollectionResult};
use crate::fastfield::AliveBitSet;
use crate::index::{Index, IndexMeta, IndexSettings, Segment, SegmentId, SegmentMeta};
use crate::indexer::delete_queue::DeleteCursor;
@@ -26,8 +26,6 @@ use crate::indexer::{
};
use crate::{FutureResult, Opstamp, TantivyError};
const PANIC_CAUGHT: &str = "Panic caught in merge thread";
/// Save the index meta file.
/// This operation is atomic:
/// Either
@@ -35,21 +33,52 @@ const PANIC_CAUGHT: &str = "Panic caught in merge thread";
/// - it success, and `meta.json` is written and flushed.
///
/// This method is not part of tantivy's public API
pub(crate) fn save_metas(metas: &IndexMeta, directory: &dyn Directory) -> crate::Result<()> {
pub(crate) fn save_metas(
metas: &IndexMeta,
previous_metas: &IndexMeta,
directory: &dyn Directory,
) -> crate::Result<()> {
info!("save metas");
let mut buffer = serde_json::to_vec_pretty(metas)?;
// Just adding a new line at the end of the buffer.
writeln!(&mut buffer)?;
crate::fail_point!("save_metas", |msg| Err(crate::TantivyError::from(
std::io::Error::new(
std::io::ErrorKind::Other,
msg.unwrap_or_else(|| "Undefined".to_string())
)
)));
directory.sync_directory()?;
directory.atomic_write(&META_FILEPATH, &buffer[..])?;
debug!("Saved metas {:?}", serde_json::to_string_pretty(&metas));
Ok(())
match directory.save_metas(metas, previous_metas, &mut ()) {
Ok(_) => Ok(()),
Err(crate::TantivyError::InternalError(_)) => {
let mut buffer = serde_json::to_vec_pretty(metas)?;
// Just adding a new line at the end of the buffer.
writeln!(&mut buffer)?;
crate::fail_point!("save_metas", |msg| Err(crate::TantivyError::from(
std::io::Error::new(
std::io::ErrorKind::Other,
msg.unwrap_or_else(|| "Undefined".to_string())
)
)));
directory.sync_directory()?;
directory.atomic_write(&META_FILEPATH, &buffer[..])?;
debug!("Saved metas {:?}", serde_json::to_string_pretty(&metas));
Ok(())
}
Err(e) => Err(e),
}
}
/// Describes a routine for allowing an operation in tantivy to be cleanly cancelled
///
/// We provide an implementation for `Fn() -> bool`.
pub trait CancelSentinel: Send + Sync + 'static {
fn box_clone(&self) -> Box<dyn CancelSentinel>;
fn wants_cancel(&self) -> bool;
}
impl<F: Fn() -> bool + Send + Sync + 'static> CancelSentinel for F
where F: Clone
{
fn box_clone(&self) -> Box<dyn CancelSentinel> {
Box::new(self.clone())
}
fn wants_cancel(&self) -> bool {
self()
}
}
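A small sketch of wiring an external flag into the sentinel, assuming this code lives inside the crate (the `segment_updater` module stays `pub(crate)` in this diff, so outside callers would rely on a re-export or simply pass a boxed closure at a call site).

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

use crate::indexer::segment_updater::CancelSentinel;

// The blanket impl above covers any `Fn() -> bool` closure that is also Clone + Send + Sync,
// so an atomic flag shared with another thread can drive cancellation.
fn cancel_flag_sentinel(flag: Arc<AtomicBool>) -> Box<dyn CancelSentinel> {
    Box::new(move || flag.load(Ordering::Relaxed))
}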
// The segment update runner is in charge of processing all
@@ -60,15 +89,26 @@ pub(crate) fn save_metas(metas: &IndexMeta, directory: &dyn Directory) -> crate:
//
// We voluntarily pass a merge_operation ref to guarantee that
// the merge_operation is alive during the process
#[derive(Clone)]
pub(crate) struct SegmentUpdater(Arc<InnerSegmentUpdater>);
pub(crate) struct SegmentUpdater {
inner: Arc<InnerSegmentUpdater>,
cancel: Box<dyn CancelSentinel>,
}
impl Clone for SegmentUpdater {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
cancel: self.cancel.box_clone(),
}
}
}
impl Deref for SegmentUpdater {
type Target = InnerSegmentUpdater;
#[inline]
fn deref(&self) -> &Self::Target {
&self.0
&self.inner
}
}
@@ -88,6 +128,8 @@ fn merge(
index: &Index,
mut segment_entries: Vec<SegmentEntry>,
target_opstamp: Opstamp,
cancel: Box<dyn CancelSentinel>,
ignore_store: bool,
) -> crate::Result<Option<SegmentEntry>> {
let num_docs = segment_entries
.iter()
@@ -114,7 +156,7 @@ fn merge(
.collect();
// An IndexMerger is like a "view" of our merged segments.
let merger: IndexMerger = IndexMerger::open(index.schema(), &segments[..])?;
let merger = IndexMerger::open(index.schema(), &segments[..], cancel, ignore_store)?;
// ... we just serialize this index merger in our new segment to merge the segments.
let segment_serializer = SegmentSerializer::for_segment(merged_segment.clone())?;
@@ -142,6 +184,7 @@ fn merge(
pub fn merge_indices<T: Into<Box<dyn Directory>>>(
indices: &[Index],
output_directory: T,
cancel: Box<dyn CancelSentinel>,
) -> crate::Result<Index> {
if indices.is_empty() {
// If there are no indices to merge, there is no need to do anything.
@@ -169,7 +212,13 @@ pub fn merge_indices<T: Into<Box<dyn Directory>>>(
}
let non_filter = segments.iter().map(|_| None).collect::<Vec<_>>();
merge_filtered_segments(&segments, target_settings, non_filter, output_directory)
merge_filtered_segments(
&segments,
target_settings,
non_filter,
output_directory,
cancel,
)
}
/// Advanced: Merges a list of segments from different indices in a new index.
@@ -190,6 +239,7 @@ pub fn merge_filtered_segments<T: Into<Box<dyn Directory>>>(
target_settings: IndexSettings,
filter_doc_ids: Vec<Option<AliveBitSet>>,
output_directory: T,
cancel: Box<dyn CancelSentinel>,
) -> crate::Result<Index> {
if segments.is_empty() {
// If there are no indices to merge, there is no need to do anything.
@@ -218,8 +268,13 @@ pub fn merge_filtered_segments<T: Into<Box<dyn Directory>>>(
)?;
let merged_segment = merged_index.new_segment();
let merged_segment_id = merged_segment.id();
let merger: IndexMerger =
IndexMerger::open_with_custom_alive_set(merged_index.schema(), segments, filter_doc_ids)?;
let merger = IndexMerger::open_with_custom_alive_set(
merged_index.schema(),
segments,
filter_doc_ids,
cancel,
false,
)?;
let segment_serializer = SegmentSerializer::for_segment(merged_segment)?;
let num_docs = merger.write(segment_serializer)?;
@@ -237,19 +292,37 @@ pub fn merge_filtered_segments<T: Into<Box<dyn Directory>>>(
);
let index_meta = IndexMeta {
index_settings: target_settings, // index_settings of all segments should be the same
index_settings: target_settings.clone(), /* index_settings of all segments should be the
* same */
segments: vec![segment_meta],
schema: target_schema,
schema: target_schema.clone(),
opstamp: 0u64,
payload: Some(stats),
};
// save the meta.json
save_metas(&index_meta, merged_index.directory_mut())?;
let segment_metas = segments
.iter()
.map(|segment| segment.meta().clone())
.collect();
let previous_meta = IndexMeta {
index_settings: target_settings,
segments: segment_metas,
schema: target_schema,
opstamp: 0u64,
payload: None,
};
save_metas(&index_meta, &previous_meta, merged_index.directory_mut())?;
Ok(merged_index)
}
struct Pools {
pool: ThreadPool,
merge_thread_pool: ThreadPool,
merge_errors: Arc<RwLock<Vec<TantivyError>>>,
}
pub(crate) struct InnerSegmentUpdater {
// we keep a copy of the current active IndexMeta to
// avoid loading the file every time we need it in the
@@ -258,9 +331,7 @@ pub(crate) struct InnerSegmentUpdater {
// This should be up to date as all update happen through
// the unique active `SegmentUpdater`.
active_index_meta: RwLock<Arc<IndexMeta>>,
pool: ThreadPool,
merge_thread_pool: ThreadPool,
pools: Option<Pools>,
index: Index,
segment_manager: SegmentManager,
merge_policy: RwLock<Arc<dyn MergePolicy>>,
@@ -275,57 +346,79 @@ impl SegmentUpdater {
stamper: Stamper,
delete_cursor: &DeleteCursor,
num_merge_threads: usize,
panic_handler: Option<DirectoryPanicHandler>,
cancel: impl Fn() -> bool + 'static + Send + Sync + Clone,
) -> crate::Result<SegmentUpdater> {
let segments = index.searchable_segment_metas()?;
let segment_manager = SegmentManager::from_segments(segments, delete_cursor);
let pool = ThreadPoolBuilder::new()
.thread_name(|_| "segment_updater".to_string())
.num_threads(1)
.build()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment updater thread".to_string(),
)
})?;
let merge_thread_pool = ThreadPoolBuilder::new()
.thread_name(|i| format!("merge_thread_{i}"))
.num_threads(num_merge_threads)
.panic_handler(move |panic| {
// We don't print the panic content itself,
// it is already printed during the unwinding
if let Some(message) = panic.downcast_ref::<&str>() {
if *message != PANIC_CAUGHT {
error!("uncaught merge panic")
}
}
})
.build()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment merging thread".to_string(),
)
})?;
let index_meta = index.load_metas()?;
Ok(SegmentUpdater(Arc::new(InnerSegmentUpdater {
active_index_meta: RwLock::new(Arc::new(index_meta)),
pool,
merge_thread_pool,
index,
segment_manager,
merge_policy: RwLock::new(Arc::new(DefaultMergePolicy::default())),
killed: AtomicBool::new(false),
stamper,
merge_operations: Default::default(),
})))
Ok(SegmentUpdater {
inner: Arc::new(InnerSegmentUpdater {
active_index_meta: RwLock::new(Arc::new(index_meta)),
pools: (num_merge_threads > 0).then(|| {
let mut builder = ThreadPoolBuilder::new()
.thread_name(|_| "segment_updater".to_string())
.num_threads(1);
if let Some(panic_handler) = panic_handler.as_ref() {
let panic_handler = panic_handler.clone();
builder = builder.panic_handler(move |any| {
panic_handler(any);
});
}
let pool = builder
.build()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment updater thread".to_string(),
)
})
.unwrap();
let mut builder = ThreadPoolBuilder::new()
.thread_name(|i| format!("merge_thread_{i}"))
.num_threads(num_merge_threads);
if let Some(panic_handler) = panic_handler {
let panic_handler = panic_handler.clone();
builder = builder.panic_handler(move |any| {
panic_handler(any);
});
}
let merge_thread_pool = builder
.build()
.map_err(|_| {
crate::TantivyError::SystemError(
"Failed to spawn segment merging thread".to_string(),
)
})
.unwrap();
Pools {
pool,
merge_thread_pool,
merge_errors: Default::default(),
}
}),
index,
segment_manager,
merge_policy: RwLock::new(Arc::new(DefaultMergePolicy::default())),
killed: AtomicBool::new(false),
stamper,
merge_operations: Default::default(),
}),
cancel: Box::new(cancel),
})
}
pub fn get_merge_policy(&self) -> Arc<dyn MergePolicy> {
self.merge_policy.read().unwrap().clone()
self.merge_policy.read().clone()
}
pub fn set_merge_policy(&self, merge_policy: Box<dyn MergePolicy>) {
let arc_merge_policy = Arc::from(merge_policy);
*self.merge_policy.write().unwrap() = arc_merge_policy;
*self.merge_policy.write() = arc_merge_policy;
}
fn schedule_task<T: 'static + Send, F: FnOnce() -> crate::Result<T> + 'static + Send>(
@@ -338,10 +431,14 @@ impl SegmentUpdater {
let (scheduled_result, sender) = FutureResult::create(
"A segment_updater future did not succeed. This should never happen.",
);
self.pool.spawn(|| {
let task_result = task();
let _ = sender.send(task_result);
});
self.pools
.as_ref()
.expect("thread pools should have been configured")
.pool
.spawn(|| {
let task_result = task();
let _ = sender.send(task_result);
});
scheduled_result
}
@@ -349,7 +446,8 @@ impl SegmentUpdater {
let segment_updater = self.clone();
self.schedule_task(move || {
segment_updater.segment_manager.add_segment(segment_entry);
segment_updater.consider_merge_options();
// mingy98: We don't need to consider merge options for every segment, just at the very
// end.
// segment_updater.consider_merge_options();
Ok(())
})
}
@@ -384,6 +482,7 @@ impl SegmentUpdater {
&self,
opstamp: Opstamp,
commit_message: Option<String>,
previous_metas: &IndexMeta,
) -> crate::Result<()> {
if self.is_alive() {
let index = &self.index;
@@ -412,7 +511,11 @@ impl SegmentUpdater {
payload: commit_message,
};
// TODO add context to the error.
save_metas(&index_meta, directory.box_clone().borrow_mut())?;
save_metas(
&index_meta,
&previous_metas,
directory.box_clone().borrow_mut(),
)?;
self.store_meta(&index_meta);
}
Ok(())
@@ -443,11 +546,14 @@ impl SegmentUpdater {
opstamp: Opstamp,
payload: Option<String>,
) -> FutureResult<Opstamp> {
let segment_updater: SegmentUpdater = self.clone();
let segment_updater = self.clone();
self.schedule_task(move || {
let segment_entries = segment_updater.purge_deletes(opstamp)?;
segment_updater.segment_manager.commit(segment_entries);
segment_updater.save_metas(opstamp, payload)?;
let previous_metas = segment_updater.load_meta();
segment_updater
.segment_manager
.commit(&segment_updater.index, segment_entries);
segment_updater.save_metas(opstamp, payload, &previous_metas)?;
let _ = garbage_collect_files(segment_updater.clone());
segment_updater.consider_merge_options();
Ok(opstamp)
@@ -455,16 +561,25 @@ impl SegmentUpdater {
}
fn store_meta(&self, index_meta: &IndexMeta) {
*self.active_index_meta.write().unwrap() = Arc::new(index_meta.clone());
*self.active_index_meta.write() = Arc::new(index_meta.clone());
}
fn load_meta(&self) -> Arc<IndexMeta> {
self.active_index_meta.read().unwrap().clone()
self.active_index_meta.read().clone()
}
pub(crate) fn make_merge_operation(&self, segment_ids: &[SegmentId]) -> MergeOperation {
pub(crate) fn make_merge_operation(
&self,
segment_ids: &[SegmentId],
ignore_store: bool,
) -> MergeOperation {
let commit_opstamp = self.load_meta().opstamp;
MergeOperation::new(&self.merge_operations, commit_opstamp, segment_ids.to_vec())
MergeOperation::new(
&self.merge_operations,
commit_opstamp,
segment_ids.to_vec(),
ignore_store,
)
}
// Starts a merge operation. This function will block until the merge operation is effectively
@@ -496,7 +611,7 @@ impl SegmentUpdater {
let segment_updater = self.clone();
let segment_entries: Vec<SegmentEntry> = match self
.segment_manager
.start_merge(merge_operation.segment_ids())
.start_merge(&self.index, merge_operation.segment_ids())
{
Ok(segment_entries) => segment_entries,
Err(err) => {
@@ -512,66 +627,98 @@ impl SegmentUpdater {
let (scheduled_result, merging_future_send) =
FutureResult::create("Merge operation failed.");
self.merge_thread_pool.spawn(move || {
// The fact that `merge_operation` is moved here is important.
// Its lifetime is used to track how many merging thread are currently running,
// as well as which segment is currently in merge and therefore should not be
// candidate for another merge.
let merge_panic_res = std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
merge(
let cancel = self.cancel.box_clone();
let merge_errors = self
.pools
.as_ref()
.expect("thread pools should have been configured")
.merge_errors
.clone();
self.pools
.as_ref()
.expect("thread pools should have been configured")
.merge_thread_pool
.spawn(move || {
// The fact that `merge_operation` is moved here is important.
// Its lifetime is used to track how many merging thread are currently running,
// as well as which segment is currently in merge and therefore should not be
// candidate for another merge.
match merge(
&segment_updater.index,
segment_entries,
merge_operation.target_opstamp(),
)
}));
let merge_res = match merge_panic_res {
Ok(merge_res) => merge_res,
Err(panic_err) => {
let panic_str = if let Some(msg) = panic_err.downcast_ref::<&str>() {
*msg
} else if let Some(msg) = panic_err.downcast_ref::<String>() {
msg.as_str()
} else {
"UNKNOWN"
};
let _send_result = merging_future_send.send(Err(TantivyError::SystemError(
format!("Merge thread panicked: {panic_str}"),
)));
// Resume unwinding because we forced unwind safety with
// `std::panic::AssertUnwindSafe` Use a specific message so
// the panic_handler can double check that we properly caught the panic.
let boxed_panic_message: Box<dyn Any + Send> = Box::new(PANIC_CAUGHT);
std::panic::resume_unwind(boxed_panic_message);
}
};
match merge_res {
Ok(after_merge_segment_entry) => {
let res = segment_updater.end_merge(merge_operation, after_merge_segment_entry);
let _send_result = merging_future_send.send(res);
}
Err(merge_error) => {
warn!(
"Merge of {:?} was cancelled: {:?}",
merge_operation.segment_ids().to_vec(),
merge_error
);
if cfg!(test) {
panic!("{merge_error:?}");
cancel,
false,
) {
Ok(after_merge_segment_entry) => {
let res =
segment_updater.end_merge(merge_operation, after_merge_segment_entry);
let _send_result = merging_future_send.send(res);
}
Err(merge_error) => {
warn!(
"Merge of {:?} was cancelled: {:?}",
merge_operation.segment_ids().to_vec(),
merge_error
);
if cfg!(test) {
panic!("{merge_error:?}");
}
merge_errors.write().push(merge_error.clone());
let _send_result = merging_future_send.send(Err(merge_error));
}
let _send_result = merging_future_send.send(Err(merge_error));
}
}
});
});
scheduled_result
}
pub(crate) fn merge_foreground(
&self,
merge_operation: MergeOperation,
) -> crate::Result<Option<SegmentMeta>> {
assert!(
!merge_operation.segment_ids().is_empty(),
"Segment_ids cannot be empty."
);
let segment_updater = self.clone();
let segment_entries = self
.segment_manager
.start_merge(&self.index, merge_operation.segment_ids())?;
info!("Starting merge - {:?}", merge_operation.segment_ids());
let cancel = self.cancel.box_clone();
match merge(
&segment_updater.index,
segment_entries,
merge_operation.target_opstamp(),
cancel,
merge_operation.ignore_store(),
) {
Ok(after_merge_segment_entry) => {
segment_updater.end_merge_foreground(merge_operation, after_merge_segment_entry)
}
Err(merge_error) => Err(merge_error),
}
}
pub(crate) fn get_mergeable_segments(&self) -> (Vec<SegmentMeta>, Vec<SegmentMeta>) {
let merge_segment_ids: HashSet<SegmentId> = self.merge_operations.segment_in_merge();
self.segment_manager
.get_mergeable_segments(&merge_segment_ids)
}
pub(crate) fn get_merge_errors(&self) -> Vec<TantivyError> {
if let Some(pools) = self.pools.as_ref() {
pools.merge_errors.read().clone()
} else {
Vec::new()
}
}
fn consider_merge_options(&self) {
let (mut committed_segments, mut uncommitted_segments) = self.get_mergeable_segments();
if committed_segments.len() == 1 && committed_segments[0].num_deleted_docs() == 0 {
@@ -587,19 +734,29 @@ impl SegmentUpdater {
let current_opstamp = self.stamper.stamp();
let mut merge_candidates: Vec<MergeOperation> = merge_policy
.compute_merge_candidates(&uncommitted_segments)
.compute_merge_candidates(Some(self.index.directory()), &uncommitted_segments)
.into_iter()
.map(|merge_candidate| {
MergeOperation::new(&self.merge_operations, current_opstamp, merge_candidate.0)
MergeOperation::new(
&self.merge_operations,
current_opstamp,
merge_candidate.0,
false,
)
})
.collect();
let commit_opstamp = self.load_meta().opstamp;
let committed_merge_candidates = merge_policy
.compute_merge_candidates(&committed_segments)
.compute_merge_candidates(Some(self.index.directory()), &committed_segments)
.into_iter()
.map(|merge_candidate: MergeCandidate| {
MergeOperation::new(&self.merge_operations, commit_opstamp, merge_candidate.0)
MergeOperation::new(
&self.merge_operations,
commit_opstamp,
merge_candidate.0,
false,
)
});
merge_candidates.extend(committed_merge_candidates);
@@ -633,7 +790,7 @@ impl SegmentUpdater {
let mut delete_cursor = after_merge_segment_entry.delete_cursor().clone();
if let Some(delete_operation) = delete_cursor.get() {
let committed_opstamp = segment_updater.load_meta().opstamp;
if delete_operation.opstamp < committed_opstamp {
if delete_operation.opstamp() < committed_opstamp {
// We are not up to date! Let's create a new tombstone file for our
// freshly create split.
let index = &segment_updater.index;
@@ -664,8 +821,11 @@ impl SegmentUpdater {
.end_merge(merge_operation.segment_ids(), after_merge_segment_entry)?;
if segments_status == SegmentsStatus::Committed {
segment_updater
.save_metas(previous_metas.opstamp, previous_metas.payload.clone())?;
segment_updater.save_metas(
previous_metas.opstamp,
previous_metas.payload.clone(),
&previous_metas,
)?;
}
segment_updater.consider_merge_options();
@@ -678,6 +838,71 @@ impl SegmentUpdater {
Ok(after_merge_segment_meta)
}
fn end_merge_foreground(
&self,
merge_operation: MergeOperation,
mut after_merge_segment_entry: Option<SegmentEntry>,
) -> crate::Result<Option<SegmentMeta>> {
let segment_updater = self.clone();
let after_merge_segment_meta = after_merge_segment_entry
.as_ref()
.map(|after_merge_segment_entry| after_merge_segment_entry.meta().clone());
info!(
"End merge {:?}",
after_merge_segment_entry.as_ref().map(|entry| entry.meta())
);
{
if let Some(after_merge_segment_entry) = after_merge_segment_entry.as_mut() {
// Deletes and commits could have happened as we were merging.
// We need to make sure we are up to date with deletes before accepting the
// segment.
let mut delete_cursor = after_merge_segment_entry.delete_cursor().clone();
if let Some(delete_operation) = delete_cursor.get() {
let committed_opstamp = segment_updater.load_meta().opstamp;
if delete_operation.opstamp() < committed_opstamp {
// We are not up to date! Let's create a new tombstone file for our
// freshly created split.
let index = &segment_updater.index;
let segment = index.segment(after_merge_segment_entry.meta().clone());
if let Err(advance_deletes_err) =
advance_deletes(segment, after_merge_segment_entry, committed_opstamp)
{
error!(
"Merge of {:?} was cancelled (advancing deletes failed): {:?}",
merge_operation.segment_ids(),
advance_deletes_err
);
assert!(!cfg!(test), "Merge failed.");
// ... cancel merge
// `merge_operations` are tracked. As it is dropped, the
// segment_ids will be available again for merge.
return Err(advance_deletes_err);
}
}
}
}
let previous_metas = segment_updater.load_meta();
let segments_status = segment_updater
.segment_manager
.end_merge(merge_operation.segment_ids(), after_merge_segment_entry)?;
if segments_status == SegmentsStatus::Committed {
segment_updater.save_metas(
previous_metas.opstamp,
previous_metas.payload.clone(),
&previous_metas,
)?;
}
// NB: We don't want to consider merging again after we just did a merge
// segment_updater.consider_merge_options();
} // we drop all possible handle to a now useless `SegmentMeta`.
let _ = garbage_collect_files(segment_updater);
Ok(after_merge_segment_meta)
}
/// Wait for current merging threads.
///
/// Upon termination of the current merging threads,
@@ -768,9 +993,11 @@ mod tests {
}
index_writer.commit()?;
let seg_ids = index.searchable_segment_ids()?;
// docs exist, should have at least 1 segment
assert!(!seg_ids.is_empty());
let _seg_ids = index.searchable_segment_ids()?;
// In Tantivy upstream, this test results in 0 segments after delete.
// However, due to our custom visibility rules, we leave the segment.
// See committed_segment_metas in segment_manager.rs.
// assert!(!seg_ids.is_empty());
let term = Term::from_field_text(text_field, "a");
index_writer.delete_term(term);
@@ -785,14 +1012,15 @@ mod tests {
let reader = index.reader()?;
assert_eq!(reader.searcher().num_docs(), 0);
let seg_ids = index.searchable_segment_ids()?;
assert!(seg_ids.is_empty());
let _seg_ids = index.searchable_segment_ids()?;
// Skipped due to custom ParadeDB visibility rules.
// assert!(seg_ids.is_empty());
reader.reload()?;
assert_eq!(reader.searcher().num_docs(), 0);
// empty segments should be erased
assert!(index.searchable_segment_metas()?.is_empty());
assert!(reader.searcher().segment_readers().is_empty());
// Skipped due to custom ParadeDB visibility rules.
// assert!(index.searchable_segment_metas()?.is_empty());
// assert!(reader.searcher().segment_readers().is_empty());
Ok(())
}
@@ -822,9 +1050,11 @@ mod tests {
index_writer.add_document(doc!(text_field=>"f"))?;
index_writer.commit()?;
let seg_ids = index.searchable_segment_ids()?;
// docs exist, should have at least 1 segment
assert!(!seg_ids.is_empty());
let _seg_ids = index.searchable_segment_ids()?;
// In Tantivy upstream, this test results in 0 segments after delete.
// However, due to our custom visibility rules, we leave the segment.
// See committed_segment_metas in segment_manager.rs.
// assert!(!seg_ids.is_empty());
let term_vals = vec!["a", "b", "c", "d", "e", "f"];
for term_val in term_vals {
@@ -838,14 +1068,15 @@ mod tests {
let reader = index.reader()?;
assert_eq!(reader.searcher().num_docs(), 0);
let seg_ids = index.searchable_segment_ids()?;
assert!(seg_ids.is_empty());
let _seg_ids = index.searchable_segment_ids()?;
// Skipped due to custom ParadeDB visibility rules.
// assert!(seg_ids.is_empty());
reader.reload()?;
assert_eq!(reader.searcher().num_docs(), 0);
// empty segments should be erased
assert!(index.searchable_segment_metas()?.is_empty());
assert!(reader.searcher().segment_readers().is_empty());
// Skipped due to custom ParadeDB visibility rules.
// assert!(index.searchable_segment_metas()?.is_empty());
// assert!(reader.searcher().segment_readers().is_empty());
Ok(())
}
@@ -901,7 +1132,7 @@ mod tests {
assert_eq!(indices.len(), 3);
let output_directory: Box<dyn Directory> = Box::<RamDirectory>::default();
let index = merge_indices(&indices, output_directory)?;
let index = merge_indices(&indices, output_directory, Box::new(|| false))?;
assert_eq!(index.schema(), schema);
let segments = index.searchable_segments()?;
@@ -915,7 +1146,7 @@ mod tests {
#[test]
fn test_merge_empty_indices_array() {
let merge_result = merge_indices(&[], RamDirectory::default());
let merge_result = merge_indices(&[], RamDirectory::default(), Box::new(|| false));
assert!(merge_result.is_err());
}
@@ -942,7 +1173,11 @@ mod tests {
};
// mismatched schema index list
let result = merge_indices(&[first_index, second_index], RamDirectory::default());
let result = merge_indices(
&[first_index, second_index],
RamDirectory::default(),
Box::new(|| false),
);
assert!(result.is_err());
Ok(())
@@ -990,6 +1225,7 @@ mod tests {
target_settings,
filter_segments,
RamDirectory::default(),
Box::new(|| false),
)?;
let segments = merged_index.searchable_segments()?;
@@ -1035,6 +1271,7 @@ mod tests {
target_settings,
filter_segments,
RamDirectory::default(),
Box::new(|| false),
)?;
let segments = index.searchable_segments()?;
@@ -1098,10 +1335,12 @@ mod tests {
target_schema,
target_settings.clone(),
)?;
let merger: IndexMerger = IndexMerger::open_with_custom_alive_set(
let merger = IndexMerger::open_with_custom_alive_set(
merged_index.schema(),
&segments[..],
filter_segments,
Box::new(|| false),
false,
)?;
let doc_ids_alive: Vec<_> = merger.readers[0].doc_ids_alive().collect();
@@ -1113,10 +1352,12 @@ mod tests {
let target_schema = segments[0].schema();
let merged_index =
Index::create(RamDirectory::default(), target_schema, target_settings)?;
let merger: IndexMerger = IndexMerger::open_with_custom_alive_set(
let merger = IndexMerger::open_with_custom_alive_set(
merged_index.schema(),
&segments[..],
filter_segments,
Box::new(|| false),
false,
)?;
let doc_ids_alive: Vec<_> = merger.readers[0].doc_ids_alive().collect();
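
The trailing `Box::new(|| false)` closure threaded through merge_indices and IndexMerger::open_with_custom_alive_set in these call sites reads like a "should we stop?" check polled during the merge. Assuming that interpretation, here is a minimal standalone sketch of the callback pattern; the names and signature below are illustrative only, not the crate's API.

type CancelCheck = Box<dyn Fn() -> bool + Send + Sync>;

fn long_running_merge(should_cancel: CancelCheck) -> Result<(), &'static str> {
    for _chunk in 0..1_000 {
        // Poll the callback between units of work so a caller can abort early.
        if should_cancel() {
            return Err("merge cancelled");
        }
        // ... merge one chunk ...
    }
    Ok(())
}

fn main() {
    // Passing `|| false`, as the tests do, opts out of cancellation entirely.
    assert!(long_running_merge(Box::new(|| false)).is_ok());
}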

View File

@@ -49,7 +49,8 @@ impl<D: Document> SingleSegmentIndexWriter<D> {
opstamp: 0,
payload: None,
};
save_metas(&index_meta, index.directory())?;
let previous_meta = index.load_metas()?;
save_metas(&index_meta, &previous_meta, index.directory())?;
index.directory().sync_directory()?;
Ok(segment.index().clone())
}

View File

@@ -1,7 +1,6 @@
#![doc(html_logo_url = "http://fulmicoton.com/tantivy-logo/tantivy-logo.png")]
#![cfg_attr(all(feature = "unstable", test), feature(test))]
#![doc(test(attr(allow(unused_variables), deny(warnings))))]
#![warn(missing_docs)]
#![allow(
clippy::len_without_is_empty,
clippy::derive_partial_eq_without_eq,
@@ -233,10 +232,10 @@ pub const INDEX_FORMAT_OLDEST_SUPPORTED_VERSION: u32 = 4;
/// Structure version for the index.
#[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Version {
major: u32,
minor: u32,
patch: u32,
index_format_version: u32,
pub(crate) major: u32,
pub(crate) minor: u32,
pub(crate) patch: u32,
pub(crate) index_format_version: u32,
}
impl fmt::Debug for Version {
@@ -1168,6 +1167,77 @@ pub mod tests {
Ok(())
}
#[test]
fn test_delete_by_address() -> crate::Result<()> {
use crate::collector::Count;
use crate::index::SegmentId;
use crate::indexer::NoMergePolicy;
use crate::query::AllQuery;
const DOC_COUNT: u64 = 2u64;
let mut schema_builder = SchemaBuilder::default();
let id = schema_builder.add_u64_field("id", INDEXED);
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let index_reader = index.reader()?;
let mut index_writer: IndexWriter = index.writer_for_tests()?;
index_writer.set_merge_policy(Box::new(NoMergePolicy));
for doc_id in 0u64..DOC_COUNT {
index_writer.add_document(doc!(id => doc_id))?;
}
index_writer.commit()?;
index_reader.reload()?;
let searcher = index_reader.searcher();
assert_eq!(
searcher.search(&AllQuery, &Count).unwrap(),
DOC_COUNT as usize
);
let segment_readers = searcher.segment_readers();
assert!(segment_readers.len() == 1);
let segment_id = segment_readers[0].segment_id();
// update the DOC_COUNT elements by deleting and re-adding
for doc_id in 0u64..DOC_COUNT {
index_writer.delete_by_address(
segment_id,
doc_id
.try_into()
.expect("test doc_id should fit as a DocId"),
);
index_writer.commit()?;
index_reader.reload()?;
index_writer.add_document(doc!(id => doc_id))?;
index_writer.commit()?;
index_reader.reload()?;
let searcher = index_reader.searcher();
// The number of documents should be stable.
assert_eq!(
searcher.search(&AllQuery, &Count).unwrap(),
DOC_COUNT as usize
);
}
index_reader.reload()?;
let searcher = index_reader.searcher();
let segment_ids: Vec<SegmentId> = searcher
.segment_readers()
.iter()
.map(|reader| reader.segment_id())
.collect();
index_writer.merge(&segment_ids).wait()?;
index_reader.reload()?;
let searcher = index_reader.searcher();
assert_eq!(searcher.search(&AllQuery, &Count)?, DOC_COUNT as usize);
Ok(())
}
#[test]
fn test_validate_checksum() -> crate::Result<()> {
let index_path = tempfile::tempdir().expect("dir");

View File

@@ -28,6 +28,7 @@
//! * *VIntPosDeltas* := *VIntPosDelta*^(*P* % 128).
//!
//! Encoding the skip widths separately makes it easy to rapidly skip over n positions.
mod reader;
mod serializer;

View File

@@ -45,13 +45,109 @@ impl<W: io::Write> PositionSerializer<W> {
/// Writes all of the given position deltas.
pub fn write_positions_delta(&mut self, mut positions_delta: &[u32]) {
while !positions_delta.is_empty() {
let remaining_block_len = self.remaining_block_len();
let num_to_write = remaining_block_len.min(positions_delta.len());
self.block.extend(&positions_delta[..num_to_write]);
positions_delta = &positions_delta[num_to_write..];
if self.remaining_block_len() == 0 {
self.flush_block();
match positions_delta.len() {
0 => {}
1 => {
if self.remaining_block_len() == 0 {
self.flush_block();
}
self.block.push(positions_delta[0]);
}
2 => {
let rem = self.remaining_block_len();
if rem < 2 {
if rem == 1 {
self.block.push(positions_delta[0]);
self.flush_block();
self.block.push(positions_delta[1]);
} else {
self.flush_block();
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
}
} else {
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
}
}
3 => {
let rem = self.remaining_block_len();
match rem {
3.. => {
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
self.block.push(positions_delta[2]);
}
2 => {
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
self.flush_block();
self.block.push(positions_delta[2]);
}
1 => {
self.block.push(positions_delta[0]);
self.flush_block();
self.block.push(positions_delta[1]);
self.block.push(positions_delta[2]);
}
0 => {
self.flush_block();
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
self.block.push(positions_delta[2]);
}
}
}
4 => {
let rem = self.remaining_block_len();
match rem {
4.. => {
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
self.block.push(positions_delta[2]);
self.block.push(positions_delta[3]);
}
3 => {
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
self.block.push(positions_delta[2]);
self.flush_block();
self.block.push(positions_delta[3]);
}
2 => {
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
self.flush_block();
self.block.push(positions_delta[2]);
self.block.push(positions_delta[3]);
}
1 => {
self.block.push(positions_delta[0]);
self.flush_block();
self.block.push(positions_delta[1]);
self.block.push(positions_delta[2]);
self.block.push(positions_delta[3]);
}
0 => {
self.flush_block();
self.block.push(positions_delta[0]);
self.block.push(positions_delta[1]);
self.block.push(positions_delta[2]);
self.block.push(positions_delta[3]);
}
}
}
_ => {
while !positions_delta.is_empty() {
let remaining_block_len = self.remaining_block_len();
let num_to_write = remaining_block_len.min(positions_delta.len());
self.block
.extend_from_slice(&positions_delta[..num_to_write]);
positions_delta = &positions_delta[num_to_write..];
if self.remaining_block_len() == 0 {
self.flush_block();
}
}
}
}
}
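
Both the unrolled 1-to-4 fast paths and the generic fallback arm above preserve the same invariant: append deltas into the current 128-entry block and flush whenever it fills. A minimal self-contained sketch of that invariant, using a toy type rather than tantivy's PositionSerializer:

const BLOCK_LEN: usize = 128;

struct ToySerializer {
    block: Vec<u32>,
    flushed_blocks: usize,
}

impl ToySerializer {
    fn remaining_block_len(&self) -> usize {
        BLOCK_LEN - self.block.len()
    }

    fn flush_block(&mut self) {
        // The real serializer bit-packs and writes the block; here we only count flushes.
        self.flushed_blocks += 1;
        self.block.clear();
    }

    fn write_positions_delta(&mut self, mut deltas: &[u32]) {
        while !deltas.is_empty() {
            let n = self.remaining_block_len().min(deltas.len());
            self.block.extend_from_slice(&deltas[..n]);
            deltas = &deltas[n..];
            if self.remaining_block_len() == 0 {
                self.flush_block();
            }
        }
    }
}

fn main() {
    let mut s = ToySerializer { block: Vec::new(), flushed_blocks: 0 };
    s.write_positions_delta(&vec![1u32; 300]);
    assert_eq!(s.flushed_blocks, 2); // 300 deltas = 2 full blocks + 44 pending
    assert_eq!(s.block.len(), 44);
}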

View File

@@ -2,7 +2,7 @@ use std::io;
use common::VInt;
use crate::directory::{FileSlice, OwnedBytes};
use crate::directory::OwnedBytes;
use crate::fieldnorm::FieldNormReader;
use crate::postings::compression::{BlockDecoder, VIntDecoder, COMPRESSION_BLOCK_SIZE};
use crate::postings::{BlockInfo, FreqReadingOption, SkipReader};
@@ -10,7 +10,7 @@ use crate::query::Bm25Weight;
use crate::schema::IndexRecordOption;
use crate::{DocId, Score, TERMINATED};
fn max_score<I: Iterator<Item = Score>>(mut it: I) -> Option<Score> {
pub(crate) fn max_score<I: Iterator<Item = Score>>(mut it: I) -> Option<Score> {
it.next().map(|first| it.fold(first, Score::max))
}
@@ -33,7 +33,7 @@ pub struct BlockSegmentPostings {
skip_reader: SkipReader,
}
fn decode_bitpacked_block(
pub(crate) fn decode_bitpacked_block(
doc_decoder: &mut BlockDecoder,
freq_decoder_opt: Option<&mut BlockDecoder>,
data: &[u8],
@@ -53,7 +53,7 @@ fn decode_bitpacked_block(
}
}
fn decode_vint_block(
pub(crate) fn decode_vint_block(
doc_decoder: &mut BlockDecoder,
freq_decoder_opt: Option<&mut BlockDecoder>,
data: &[u8],
@@ -96,11 +96,10 @@ impl BlockSegmentPostings {
/// term frequency blocks.
pub(crate) fn open(
doc_freq: u32,
data: FileSlice,
bytes: OwnedBytes,
mut record_option: IndexRecordOption,
requested_option: IndexRecordOption,
) -> io::Result<BlockSegmentPostings> {
let bytes = data.read_bytes()?;
let (skip_data_opt, postings_data) = split_into_skips_and_postings(doc_freq, bytes)?;
let skip_reader = match skip_data_opt {
Some(skip_data) => {

View File

@@ -1,8 +1,10 @@
use bitpacking::{BitPacker, BitPacker4x};
use common::FixedSize;
pub const COMPRESSION_BLOCK_SIZE: usize = BitPacker4x::BLOCK_LEN;
const COMPRESSED_BLOCK_MAX_SIZE: usize = COMPRESSION_BLOCK_SIZE * u32::SIZE_IN_BYTES;
// in vint encoding, each byte stores 7 bits of data, so we need at most 32 / 7 = 4.57 bytes to
// store a u32 in the worst case, rounding up to 5 bytes total
const MAX_VINT_SIZE: usize = 5;
const COMPRESSED_BLOCK_MAX_SIZE: usize = COMPRESSION_BLOCK_SIZE * MAX_VINT_SIZE;
mod vint;
@@ -267,7 +269,6 @@ impl VIntDecoder for BlockDecoder {
#[cfg(test)]
pub(crate) mod tests {
use super::*;
use crate::TERMINATED;
@@ -372,6 +373,13 @@ pub(crate) mod tests {
}
}
}
#[test]
fn test_compress_vint_unsorted_does_not_overflow() {
let mut encoder = BlockEncoder::new();
let input: Vec<u32> = vec![u32::MAX; COMPRESSION_BLOCK_SIZE];
encoder.compress_vint_unsorted(&input);
}
}
#[cfg(all(test, feature = "unstable"))]
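
For the new MAX_VINT_SIZE bound above, a quick standalone check that a worst-case u32 needs ceil(32 / 7) = 5 bytes under 7-bits-per-byte vint encoding; vint_len here is a hypothetical helper, not the crate's encoder:

fn vint_len(mut v: u32) -> usize {
    let mut n = 1;
    while v >= 0x80 {
        v >>= 7;
        n += 1;
    }
    n
}

fn main() {
    assert_eq!(vint_len(0), 1);
    assert_eq!(vint_len(127), 1);
    assert_eq!(vint_len(128), 2);
    assert_eq!(vint_len(u32::MAX), 5); // hence COMPRESSED_BLOCK_MAX_SIZE = 128 * 5
}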

View File

@@ -1,7 +1,13 @@
use std::cell::RefCell;
use stacker::{ArenaHashMap, MemoryArena};
use crate::indexer::path_to_unordered_id::PathToUnorderedId;
thread_local! {
static CONTEXT_POOL: RefCell<Vec<IndexingContext>> = RefCell::new(Vec::new());
}
/// IndexingContext contains all of the transient memory arenas
/// required for building the inverted index.
pub(crate) struct IndexingContext {
@@ -13,9 +19,27 @@ pub(crate) struct IndexingContext {
pub path_to_unordered_id: PathToUnorderedId,
}
impl Default for IndexingContext {
fn default() -> Self {
Self::create(1)
}
}
impl IndexingContext {
/// Create a new IndexingContext given the size of the term hash map.
/// Gets an IndexingContext from the pool, or creates a new one if the pool is empty.
pub(crate) fn new(table_size: usize) -> IndexingContext {
CONTEXT_POOL
.with(|pool| pool.borrow_mut().pop())
.unwrap_or_else(|| Self::create(table_size))
}
/// Returns the memory usage for the inverted index memory arenas, in bytes.
pub(crate) fn mem_usage(&self) -> usize {
self.term_index.mem_usage() + self.arena.mem_usage()
}
/// Create a new IndexingContext given the size of the term hash map.
fn create(table_size: usize) -> IndexingContext {
let term_index = ArenaHashMap::with_capacity(table_size);
IndexingContext {
arena: MemoryArena::default(),
@@ -24,8 +48,12 @@ impl IndexingContext {
}
}
/// Returns the memory usage for the inverted index memory arenas, in bytes.
pub(crate) fn mem_usage(&self) -> usize {
self.term_index.mem_usage() + self.arena.mem_usage()
pub fn checkin(mut ctx: IndexingContext) {
CONTEXT_POOL.with(|pool| {
ctx.term_index.reset();
ctx.arena.reset();
ctx.path_to_unordered_id = PathToUnorderedId::default();
pool.borrow_mut().push(ctx);
});
}
}
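
The new() / checkin() pair above forms a thread-local object pool: new() pops a recycled context (or builds one), and checkin() resets its arenas and pushes it back for reuse. A minimal standalone sketch of the same pattern using a plain Vec<u8> instead of IndexingContext; the names are illustrative only:

use std::cell::RefCell;

thread_local! {
    static POOL: RefCell<Vec<Vec<u8>>> = RefCell::new(Vec::new());
}

fn checkout(capacity: usize) -> Vec<u8> {
    // Reuse a pooled buffer if one is available, otherwise allocate fresh.
    POOL.with(|pool| pool.borrow_mut().pop())
        .unwrap_or_else(|| Vec::with_capacity(capacity))
}

fn checkin(mut buf: Vec<u8>) {
    // Reset state before returning the buffer to the pool, mirroring the arena resets above.
    buf.clear();
    POOL.with(|pool| pool.borrow_mut().push(buf));
}

fn main() {
    let buf = checkout(1024);
    // ... use the buffer for one indexing batch ...
    checkin(buf);
    let reused = checkout(1024);
    assert!(reused.capacity() >= 1024); // came back from the pool with its allocation intact
}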

View File

@@ -15,10 +15,10 @@ use crate::DocId;
/// terms.
/// E.g. 100_000 terms would need 184MB due to SegmentPostings.
pub struct LoadedPostings {
doc_ids: Box<[DocId]>,
position_offsets: Box<[u32]>,
positions: Box<[u32]>,
cursor: usize,
pub doc_ids: Box<[DocId]>,
pub position_offsets: Box<[u32]>,
pub positions: Box<[u32]>,
pub cursor: usize,
}
impl LoadedPostings {

View File

@@ -4,7 +4,8 @@ mod block_search;
pub(crate) use self::block_search::branchless_binary_search;
mod block_segment_postings;
pub(crate) mod block_segment_postings;
pub(crate) mod compression;
mod indexing_context;
mod json_postings_writer;
@@ -15,10 +16,10 @@ mod postings_writer;
mod recorder;
mod segment_postings;
mod serializer;
mod skip;
pub(crate) mod skip;
mod term_info;
pub(crate) use loaded_postings::LoadedPostings;
pub use loaded_postings::LoadedPostings;
pub(crate) use stacker::compute_table_memory_size;
pub use self::block_segment_postings::BlockSegmentPostings;
@@ -491,10 +492,12 @@ pub(crate) mod tests {
}
let searcher = index.reader()?.searcher();
// finally, check that it's empty
// In Tantivy upstream, this test results in 0 segments after delete.
// However, due to our custom visibility rules, we leave the segment.
// See committed_segment_metas in segment_manager.rs.
{
let searchable_segment_ids = index.searchable_segment_ids()?;
assert!(searchable_segment_ids.is_empty());
let _searchable_segment_ids = index.searchable_segment_ids()?;
// assert!(searchable_segment_ids.is_empty());
assert_eq!(searcher.num_docs(), 0);
}
Ok(())

View File

@@ -91,6 +91,8 @@ pub(crate) fn serialize_postings(
field_serializer.close()?;
}
IndexingContext::checkin(ctx);
Ok(())
}

View File

@@ -273,7 +273,6 @@ impl Recorder for TfAndPositionRecorder {
#[cfg(test)]
mod tests {
use common::write_u32_vint;
use super::{BufferLender, VInt32Reader};

View File

@@ -81,7 +81,7 @@ impl SegmentPostings {
}
let block_segment_postings = BlockSegmentPostings::open(
docs.len() as u32,
FileSlice::from(buffer),
FileSlice::from(buffer).read_bytes().unwrap(),
IndexRecordOption::Basic,
IndexRecordOption::Basic,
)
@@ -129,7 +129,7 @@ impl SegmentPostings {
.unwrap();
let block_segment_postings = BlockSegmentPostings::open(
doc_and_tfs.len() as u32,
FileSlice::from(buffer),
FileSlice::from(buffer).read_bytes().unwrap(),
IndexRecordOption::WithFreqs,
IndexRecordOption::WithFreqs,
)

View File

@@ -10,23 +10,23 @@ use crate::{DocId, Score, TERMINATED};
// - 1: unused
// - 2: is delta-1 encoded. 0 if not, 1 if yes
// - 3: a 6 bit number in 0..=32, the actual bitwidth
fn encode_bitwidth(bitwidth: u8, delta_1: bool) -> u8 {
pub(crate) fn encode_bitwidth(bitwidth: u8, delta_1: bool) -> u8 {
bitwidth | ((delta_1 as u8) << 6)
}
fn decode_bitwidth(raw_bitwidth: u8) -> (u8, bool) {
let delta_1 = ((raw_bitwidth >> 6) & 1) != 0;
pub(crate) fn decode_bitwidth(raw_bitwidth: u8) -> (u8, bool) {
let delta_1 = (raw_bitwidth >> 6 & 1) != 0;
let bitwidth = raw_bitwidth & 0x3f;
(bitwidth, delta_1)
}
#[inline]
fn encode_block_wand_max_tf(max_tf: u32) -> u8 {
pub(crate) fn encode_block_wand_max_tf(max_tf: u32) -> u8 {
max_tf.min(u8::MAX as u32) as u8
}
#[inline]
fn decode_block_wand_max_tf(max_tf_code: u8) -> u32 {
pub(crate) fn decode_block_wand_max_tf(max_tf_code: u8) -> u32 {
if max_tf_code == u8::MAX {
u32::MAX
} else {
@@ -35,12 +35,12 @@ fn decode_block_wand_max_tf(max_tf_code: u8) -> u32 {
}
#[inline]
fn read_u32(data: &[u8]) -> u32 {
pub(crate) fn read_u32(data: &[u8]) -> u32 {
u32::from_le_bytes(data[..4].try_into().unwrap())
}
#[inline]
fn write_u32(val: u32, buf: &mut Vec<u8>) {
pub(crate) fn write_u32(val: u32, buf: &mut Vec<u8>) {
buf.extend_from_slice(&val.to_le_bytes());
}
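
As a worked example of the bit layout above, bit 6 carries the delta-1 flag and the low 6 bits carry the bitwidth. The round-trip check below simply copies encode_bitwidth/decode_bitwidth into a standalone snippet; it is not new crate code:

fn encode_bitwidth(bitwidth: u8, delta_1: bool) -> u8 {
    bitwidth | ((delta_1 as u8) << 6)
}

fn decode_bitwidth(raw_bitwidth: u8) -> (u8, bool) {
    let delta_1 = ((raw_bitwidth >> 6) & 1) != 0;
    let bitwidth = raw_bitwidth & 0x3f;
    (bitwidth, delta_1)
}

fn main() {
    // bitwidth 17 with delta-1 encoding: 17 | (1 << 6) = 0b0101_0001 = 81
    let raw = encode_bitwidth(17, true);
    assert_eq!(raw, 81);
    assert_eq!(decode_bitwidth(raw), (17, true));
    assert_eq!(decode_bitwidth(encode_bitwidth(32, false)), (32, false));
}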

View File

@@ -1,15 +1,18 @@
use std::any::{Any, TypeId};
use std::io;
use std::sync::Arc;
use common::BitSet;
use tantivy_fst::Automaton;
use super::phrase_prefix_query::prefix_end;
use super::BufferedUnionScorer;
use crate::index::SegmentReader;
use crate::postings::TermInfo;
use crate::query::{BitSetDocSet, ConstScorer, Explanation, Scorer, Weight};
use crate::query::fuzzy_query::DfaWrapper;
use crate::query::score_combiner::SumCombiner;
use crate::query::{ConstScorer, Explanation, Scorer, Weight};
use crate::schema::{Field, IndexRecordOption};
use crate::termdict::{TermDictionary, TermStreamer};
use crate::termdict::{TermDictionary, TermWithStateStreamer};
use crate::{DocId, Score, TantivyError};
/// A weight struct for Fuzzy Term and Regex Queries
@@ -52,9 +55,9 @@ where
fn automaton_stream<'a>(
&'a self,
term_dict: &'a TermDictionary,
) -> io::Result<TermStreamer<'a, &'a A>> {
) -> io::Result<TermWithStateStreamer<'a, &'a A>> {
let automaton: &A = &self.automaton;
let mut term_stream_builder = term_dict.search(automaton);
let mut term_stream_builder = term_dict.search_with_state(automaton);
if let Some(json_path_bytes) = &self.json_path_bytes {
term_stream_builder = term_stream_builder.ge(json_path_bytes);
@@ -85,35 +88,27 @@ where
A::State: Clone,
{
fn scorer(&self, reader: &SegmentReader, boost: Score) -> crate::Result<Box<dyn Scorer>> {
let max_doc = reader.max_doc();
let mut doc_bitset = BitSet::with_max_value(max_doc);
let inverted_index = reader.inverted_index(self.field)?;
let term_dict = inverted_index.terms();
let mut term_stream = self.automaton_stream(term_dict)?;
while term_stream.advance() {
let term_info = term_stream.value();
let mut block_segment_postings = inverted_index
.read_block_postings_from_terminfo(term_info, IndexRecordOption::Basic)?;
loop {
let docs = block_segment_postings.docs();
if docs.is_empty() {
break;
}
for &doc in docs {
doc_bitset.insert(doc);
}
block_segment_postings.advance();
}
let mut scorers = vec![];
while let Some((_term, term_info, state)) = term_stream.next() {
let score = automaton_score(self.automaton.as_ref(), state);
let segment_postings =
inverted_index.read_postings_from_terminfo(term_info, IndexRecordOption::Basic)?;
let scorer = ConstScorer::new(segment_postings, boost * score);
scorers.push(scorer);
}
let doc_bitset = BitSetDocSet::from(doc_bitset);
let const_scorer = ConstScorer::new(doc_bitset, boost);
Ok(Box::new(const_scorer))
let scorer = BufferedUnionScorer::build(scorers, SumCombiner::default, reader.max_doc());
Ok(Box::new(scorer))
}
fn explain(&self, reader: &SegmentReader, doc: DocId) -> crate::Result<Explanation> {
let mut scorer = self.scorer(reader, 1.0)?;
if scorer.seek(doc) == doc {
Ok(Explanation::new("AutomatonScorer", 1.0))
Ok(Explanation::new("AutomatonScorer", scorer.score()))
} else {
Err(TantivyError::InvalidArgument(
"Document does not exist".to_string(),
@@ -122,6 +117,25 @@ where
}
}
fn automaton_score<A>(automaton: &A, state: A::State) -> f32
where
A: Automaton + Send + Sync + 'static,
A::State: Clone,
{
if TypeId::of::<DfaWrapper>() == automaton.type_id() && TypeId::of::<u32>() == state.type_id() {
let dfa = automaton as *const A as *const DfaWrapper;
let dfa = unsafe { &*dfa };
let id = &state as *const A::State as *const u32;
let id = unsafe { *id };
let dist = dfa.0.distance(id).to_u8() as f32;
1.0 / (1.0 + dist)
} else {
1.0
}
}
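
For intuition on the scoring above: when the automaton is the fuzzy-query DFA, each matched term contributes 1.0 / (1.0 + edit_distance), so closer terms score higher. A tiny standalone check of that formula, with the DfaWrapper downcast omitted:

fn score_from_distance(dist: u8) -> f32 {
    1.0 / (1.0 + dist as f32)
}

fn main() {
    assert_eq!(score_from_distance(0), 1.0); // exact match
    assert_eq!(score_from_distance(1), 0.5); // one edit away
    assert!((score_from_distance(2) - 1.0 / 3.0).abs() < 1e-6);
}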
#[cfg(test)]
mod tests {
use tantivy_fst::Automaton;

Some files were not shown because too many files have changed in this diff.