mirror of <https://github.com/quickwit-oss/tantivy.git>
synced 2026-02-20 14:50:38 +00:00

Compare commits: `ip_fastfie...fastfieldc` (16 commits)

| Author | SHA1 | Date |
|---|---|---|
| | 9aefa349ca | |
| | b9a87d6dc6 | |
| | 0ec2ebd791 | |
| | 6602786db8 | |
| | c71169b6e0 | |
| | ce45889add | |
| | 4875174d16 | |
| | 0c634c5bc6 | |
| | e25ab5d537 | |
| | 27400c9ad3 | |
| | 19074e1d5e | |
| | 014b1adc3e | |
| | 84295d5b35 | |
| | 625bcb4877 | |
| | 8e773ade77 | |
| | fad3faefe2 | |
@@ -10,6 +10,7 @@ Tantivy's bread and butter is to address the problem of full-text search:

Given a large set of textual documents and a text query, return the K most relevant documents in a very efficient way. To execute these queries rapidly, tantivy needs to build an index beforehand. The relevance score implemented in tantivy is not configurable. Tantivy uses the same score as the default similarity used in Lucene / Elasticsearch, called [BM25](https://en.wikipedia.org/wiki/Okapi_BM25).

But tantivy's scope does not stop there. Numerous features are required to power rich search applications. For instance, one may want to:

- compute the count of documents matching a query in the different sections of an e-commerce website,
- display an average price per square meter for a real estate search engine,
- take into account historical user data to rank documents in a specific way,

@@ -22,27 +23,28 @@ rapidly select all documents matching a given predicate (also known as a query)

collect some information about them ([see collector](#collector-define-what-to-do-with-matched-documents)).

Roughly speaking, the design follows these guiding principles:

- Search should be O(1) in memory.
- Indexing should be O(1) in memory. (In practice it is just sublinear.)
- Search should be as fast as possible.

This comes at the cost of the dynamicity of the index: while it is possible to add and delete documents from the corpus, tantivy is designed to handle these updates in large batches.

## [core/](src/core): Index, segments, searchers.
## [core/](src/core): Index, segments, searchers

Core contains all of the high-level code to make it possible to create an index, add documents, delete documents and commit.

This is both the most high-level part of tantivy, the least performance-sensitive one, the seemingly most mundane code... and, paradoxically, the most complicated part.

### Index and Segments...
### Index and Segments

A tantivy index is a collection of smaller independent immutable segments.
Each segment contains its own independent set of data structures.

A segment is identified by a segment id that is in fact a UUID.
The files of a segment have the format

```segment-id . ext ```
```segment-id . ext```

The extension signals which data structure (or [`SegmentComponent`](src/core/segment_component.rs)) is stored in the file.

@@ -52,17 +54,15 @@ On commit, one segment per indexing thread is written to disk, and the `meta.json`

For a better idea of how indexing works, you may read the [following blog post](https://fulmicoton.com/posts/behold-tantivy-part2/).

### Deletes

Deletes happen by deleting a "term". Tantivy does not offer any notion of primary id, so it is up to the user to use a field in their schema as if it were a primary id, and delete the associated term if they want to delete only one specific document.
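As an illustration of that flow, a minimal sketch (the `id` field name and the 50 MB writer budget are assumptions, not part of this document):

```rust
use tantivy::schema::Term;
use tantivy::Index;

/// Delete the single document whose `id` field holds `id_value`,
/// assuming `id` is unique per document (a user-managed "primary key").
fn delete_doc_by_id(index: &Index, id_value: &str) -> tantivy::Result<()> {
    let id_field = index.schema().get_field("id").expect("field must exist");
    let mut index_writer = index.writer(50_000_000)?;
    // Every document containing this exact term is marked as deleted...
    index_writer.delete_term(Term::from_field_text(id_field, id_value));
    // ...and the deletion only takes effect on commit.
    index_writer.commit()?;
    Ok(())
}
```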
On commit, tantivy will find all of the segments with documents matching this existing term and remove them from the [alive bitset file](src/fastfield/alive_bitset.rs), which represents the bitset of the alive document ids.
Like all segment files, this file is immutable. Because it is possible to have more than one alive bitset file at a given instant, the alive bitset filename has the format ``` segment_id . commit_opstamp . del```.
Like all segment files, this file is immutable. Because it is possible to have more than one alive bitset file at a given instant, the alive bitset filename has the format ```segment_id . commit_opstamp . del```.

An opstamp is simply an incremental id that identifies any operation applied to the index, for instance performing a commit or adding a document.

### DocId

Within a segment, all documents are identified by a DocId that ranges within `[0, max_doc)`.

@@ -74,6 +74,7 @@ The DocIds are simply allocated in the order documents are added to the index.

In separate threads, tantivy's index writer searches for opportunities to merge segments.
The point of segment merging is to:

- eventually get rid of tombstoned documents
- reduce the otherwise ever-growing number of segments.

@@ -104,6 +105,7 @@ Tantivy's documents follow a very strict schema, decided before building any index.

The schema defines all of the fields that the index's [`Document`](src/schema/document.rs) may and should contain, their types (`text`, `i64`, `u64`, `Date`, ...) as well as how they should be indexed / represented in tantivy.

Depending on the type of the field, you can decide to

- put it in the docstore
- store it as a fast field
- index it

@@ -117,9 +119,10 @@ As of today, tantivy's schema imposes a 1:1 relationship between a field that is

This is not something tantivy supports, and it is up to the user to duplicate / concatenate fields before feeding them to tantivy.

## General information about these data structures.
## General information about these data structures

All data structures in tantivy have:

- a writer
- a serializer
- a reader

@@ -132,7 +135,7 @@ This conversion is done by the serializer.

Finally, the reader is in charge of offering an API to read this on-disk, read-only representation.
In tantivy, readers are designed to require very little anonymous memory. The data is read straight from an mmapped file, and loading an index is as fast as mmapping its files.

## [store/](src/store): Here is my DocId, Gimme my document!
## [store/](src/store): Here is my DocId, Gimme my document

The docstore is a row-oriented storage that, for each document, stores a subset of the fields
that are marked as stored in the schema. The docstore is compressed using a general-purpose algorithm

@@ -146,6 +149,7 @@ Once the top 10 documents have been identified, we fetch them from the store, and

**Not useful for**

Fetching a document from the store is typically a "slow" operation. It usually consists of

- searching in a compact tree-like data structure to find the position of the right block,
- decompressing a small block,
- returning the document from this block.

@@ -154,8 +158,7 @@ It is NOT meant to be called for every document matching a query.

As a rule of thumb, if you hit the docstore more than 100 times per search query, you are probably misusing tantivy.
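To make that access pattern concrete, here is a hedged sketch (the field handle, query string and limit are illustrative, not taken from this document): only the final top-10 hits are ever read from the docstore.

```rust
use tantivy::collector::TopDocs;
use tantivy::query::QueryParser;
use tantivy::Index;

fn top10_titles(index: &Index, title_field: tantivy::schema::Field) -> tantivy::Result<()> {
    let reader = index.reader()?;
    let searcher = reader.searcher();
    let query_parser = QueryParser::for_index(index, vec![title_field]);
    let query = query_parser.parse_query("diary")?;

    // Collecting the top 10 only walks fast, compact data structures...
    let top_docs = searcher.search(&query, &TopDocs::with_limit(10))?;

    // ...and only the 10 winners are fetched from the (slower) docstore.
    for (_score, doc_address) in top_docs {
        let stored_doc = searcher.doc(doc_address)?;
        println!("{}", searcher.schema().to_json(&stored_doc));
    }
    Ok(())
}
```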
## [fastfield/](src/fastfield): Here is my DocId, Gimme my value!
## [fastfield/](src/fastfield): Here is my DocId, Gimme my value

Fast fields are stored in a column-oriented storage that allows for random access.
The only compression applied is bitpacking. The column comes with two pieces of metadata:

@@ -163,7 +166,7 @@ The minimum value in the column and the number of bits per doc.

Fetching a value for a `DocId` is then as simple as computing

```rust
min_value + fetch_bits(num_bits * doc_id..num_bits * (doc_id+1))
```
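A toy, self-contained illustration of what that formula amounts to (this is not tantivy's actual reader, which lives in the `fastfield_codecs` crate and is shown further down):

```rust
/// Minimal bitpacked column, for illustration only.
struct BitpackedColumn {
    data: Vec<u8>,
    min_value: u64,
    num_bits: u8, // bits per document, at most 56 in this toy version
}

impl BitpackedColumn {
    fn get(&self, doc_id: u64) -> u64 {
        let bit_pos = self.num_bits as u64 * doc_id;
        let byte_pos = (bit_pos / 8) as usize;
        // Read up to 8 bytes starting at byte_pos, then shift/mask the value out.
        let mut buf = [0u8; 8];
        let end = (byte_pos + 8).min(self.data.len());
        buf[..end - byte_pos].copy_from_slice(&self.data[byte_pos..end]);
        let word = u64::from_le_bytes(buf) >> (bit_pos % 8);
        let mask = (1u64 << self.num_bits) - 1;
        self.min_value + (word & mask)
    }
}

fn main() {
    // Two documents, values 1003 and 1001, stored as (value - min_value) over 2 bits.
    let col = BitpackedColumn { data: vec![0b0000_0111], min_value: 1000, num_bits: 2 };
    assert_eq!(col.get(0), 1003);
    assert_eq!(col.get(1), 1001);
}
```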
@@ -190,7 +193,7 @@ For advanced search engines, it is possible to store all of the features required

Finally, facets are a specific kind of fast field, and the associated source code is in [`fastfield/facet_reader.rs`](src/fastfield/facet_reader.rs).

# The inverted search index.
# The inverted search index

The inverted index is the core part of full-text search.
When presented with a new document with the text field "Hello, happy tax payer!", tantivy breaks it into a list of so-called tokens. In addition to just splitting the string into tokens, it might also apply different kinds of operations, like dropping the punctuation, converting the characters to lowercase, applying stemming, etc. Tantivy makes it possible to configure the operations to be applied in the schema (tokenizer/ is the place where these operations are implemented).

@@ -215,19 +218,18 @@ The inverted index actually consists of two data structures chained together.

where [TermInfo](src/postings/term_info.rs) is an object containing some metadata about a term.

## [termdict/](src/termdict): Here is a term, give me the [TermInfo](src/postings/term_info.rs)!
## [termdict/](src/termdict): Here is a term, give me the [TermInfo](src/postings/term_info.rs)

Tantivy's term dictionary is mainly in charge of supplying the function

[Term](src/schema/term.rs) ⟶ [TermInfo](src/postings/term_info.rs)

It is itself broken into two parts:

- [Term](src/schema/term.rs) ⟶ [TermOrdinal](src/termdict/mod.rs) is addressed by a finite state transducer, implemented by the fst crate.
- [TermOrdinal](src/termdict/mod.rs) ⟶ [TermInfo](src/postings/term_info.rs) is addressed by the term info store.

## [postings/](src/postings): Iterate over documents... very fast!
## [postings/](src/postings): Iterate over documents... very fast

A posting list makes it possible to store a sorted list of doc ids, and for each doc to store
a term frequency as well.

@@ -257,7 +259,6 @@ we advance the position reader by the number of term frequencies of the current

The [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) formula also requires knowing the number of tokens stored in a specific field for a given document. We store this information on one byte per document in the fieldnorm.
The fieldnorm is therefore compressed. Values up to 40 are encoded unchanged.

## [tokenizer/](src/tokenizer): How should we process text?

Text processing is key to a good search experience.

@@ -268,7 +269,6 @@ Text processing can be configured by selecting an off-the-shelf [`Tokenizer`](./

Tantivy comes with a few tokenizers, but external crates offer advanced tokenizers, such as [Lindera](https://crates.io/crates/lindera) for Japanese.
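For instance, registering a custom analyzer might look roughly like this (the name `custom_en` and the filter chain are arbitrary choices for this sketch; the name must match the tokenizer declared in the schema's text options):

```rust
use tantivy::tokenizer::{LowerCaser, RemoveLongFilter, SimpleTokenizer, TextAnalyzer};

// A simple analyzer: split on whitespace/punctuation, drop very long tokens,
// then lowercase everything.
fn register_custom_tokenizer(index: &tantivy::Index) {
    let analyzer = TextAnalyzer::from(SimpleTokenizer)
        .filter(RemoveLongFilter::limit(40))
        .filter(LowerCaser);
    index.tokenizers().register("custom_en", analyzer);
}
```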
## [query/](src/query): Define and compose queries

The [Query](src/query/query.rs) trait defines what a query is.
111 CHANGELOG.md
@@ -1,5 +1,6 @@
Tantivy 0.19
================================

- Updated [Date Field Type](https://github.com/quickwit-oss/tantivy/pull/1396)
  The `DateTime` type has been updated to hold timestamps with microseconds precision.
  `DateOptions` and `DatePrecision` have been added to configure Date fields. The precision is used to hint on fast values compression. Otherwise, seconds precision is used everywhere else (i.e. terms, indexing).

@@ -7,6 +8,7 @@ Tantivy 0.19

Tantivy 0.18
================================

- For date values `chrono` has been replaced with `time` (@uklotzde) #1304 :
  - The `time` crate is re-exported as `tantivy::time` instead of `tantivy::chrono`.
  - The type alias `tantivy::DateTime` has been removed.

@@ -22,6 +24,7 @@ Tantivy 0.18

Tantivy 0.17
================================

- LogMergePolicy now triggers merges if the ratio of deleted documents reaches a threshold (@shikhar @fulmicoton) [#115](https://github.com/quickwit-oss/tantivy/issues/115)
- Adds a searcher Warmer API (@shikhar @fulmicoton)
- Change to non-strict schema. Ignore fields in data which are not defined in the schema. Previously this returned an error. #1211

@@ -36,33 +39,39 @@ Tantivy 0.17

Tantivy 0.16.2
================================

- Bugfix in FuzzyTermQuery. (transposition_cost_one was not doing anything)

Tantivy 0.16.1
========================

- Major Bugfix on multivalued fastfield. #1151
- Demux operation (@PSeitz)

Tantivy 0.16.0
=========================

- Bugfix in the filesum check. (@evanxg852000) #1127
- Bugfix in positions when the index is sorted by a field. (@appaquet) #1125

Tantivy 0.15.3
=========================

- Major bugfix. Deleting documents was broken when the index was sorted by a field. (@appaquet, @fulmicoton) #1101

Tantivy 0.15.2
========================

- Major bugfix. DocStore still panics when a deleted doc is at the beginning of a block. (@appaquet) #1088

Tantivy 0.15.1
=========================

- Major bugfix. DocStore panics when first block is deleted. (@appaquet) #1077

Tantivy 0.15.0
=========================

- API Changes. Using Range instead of (start, end) in the API and internals (`FileSlice`, `OwnedBytes`, `Snippets`, ...)
  This change is breaking but migration is trivial.
- Added a Histogram collector. (@fulmicoton) #994

@@ -84,9 +93,9 @@ Tantivy 0.15.0

- Updated TermMerger implementation to rely on the union feature of the FST (@scampi) #469
- Add boolean marking whether position is required in the query_terms API call (@fulmicoton). #1070

Tantivy 0.14.0
=========================

- Removed dependency on atomicwrites #833 (implemented by @fulmicoton upon suggestion and research from @asafigan).
- Migrated tantivy error from the now deprecated `failure` crate to `thiserror` #760. (@hirevo)
- API Change. Accessing the typed value off a `Schema::Value` now returns an Option instead of panicking if the type does not match.

@@ -105,16 +114,19 @@ This version breaks compatibility and requires users to reindex everything.

Tantivy 0.13.2
===================

Bugfix. Acquiring a facet reader on a segment that does not contain any
doc with this facet returns `None`. (#896)

Tantivy 0.13.1
===================

Made `Query` and `Collector` `Send + Sync`.
Updated misc dependency versions.

Tantivy 0.13.0
======================

Tantivy 0.13 introduces a change in the index format that will require
you to reindex your index (BlockWAND information is added in the skiplist).
The index size increase is minor as this information is only added for

@@ -129,6 +141,7 @@ so that we can discuss possible solutions.

A freshly created DocSet points directly to its first doc. A sentinel value called TERMINATED marks the end of a DocSet.
`.advance()` returns the new DocId. `Scorer::skip(target)` has been replaced by `Scorer::seek(target)` and returns the resulting DocId.
As a result, iterating through a DocSet now looks as follows:

```rust
let mut doc = docset.doc();
while doc != TERMINATED {
    doc = docset.advance();
}
```

The change made it possible to greatly simplify a lot of the docset's code.
- Misc internal optimization and introduction of the `Scorer::for_each_pruning` function. (@fulmicoton)
- Added an offset option to the Top(.*)Collectors. (@robyoung)
- Added Block WAND. Performance on TOP-K on term-unions should be greatly increased. (@fulmicoton, and special thanks

@@ -144,6 +159,7 @@ to the PISA team for answering all my questions!)

Tantivy 0.12.0
======================

- Removing static dispatch in tokenizers for simplicity. (#762)
- Added backward iteration for `TermDictionary` stream. (@halvorboe)
- Fixed a performance issue when searching for the posting lists of a missing term (@audunhalland)

@@ -154,30 +170,32 @@ Tantivy 0.12.0

## How to update?

Crates relying on custom tokenizers, or registering tokenizers in the manager, will require some
minor changes. Check https://github.com/quickwit-oss/tantivy/blob/main/examples/custom_tokenizer.rs
minor changes. Check <https://github.com/quickwit-oss/tantivy/blob/main/examples/custom_tokenizer.rs>
for some code samples.

Tantivy 0.11.3
=======================

- Fixed DateTime as a fast field (#735)

Tantivy 0.11.2
=======================

- The future returned by `IndexWriter::merge` does not borrow `self` mutably anymore (#732)
- Exposing a constructor for `WatchHandle` (#731)

Tantivy 0.11.1
=====================

- Bug fix #729

Tantivy 0.11.0
=====================

- Added f64 field. Internally reuses u64 code the same way i64 does (@fdb-hiroshima)
- Various bugfixes in the query parser.
  - Better handling of hyphens in query parser. (#609)
  - Better handling of whitespaces.
- Closes #498 - add support for Elastic-style unbounded range queries for alphanumeric types eg. "title:>hello", "weight:>=70.5", "height:<200" (@petr-tik)
- API change around `Box<BoxableTokenizer>`. See detail in #629
- Avoid rebuilding Regex automaton whenever a regex query is reused. #639 (@brainlock)

@@ -208,7 +226,6 @@ Tantivy 0.10.1

Avoid watching the mmap directory until someone effectively creates a reader that uses
this functionality.

Tantivy 0.10.0
=====================

@@ -224,6 +241,7 @@ Tantivy 0.10.0

Minor
---------

- Switched to Rust 2018 (@uvd)
- Small simplification of the code.
  Calling .freq() or .doc() when .advance() has never been called

@@ -231,8 +249,7 @@ on segment postings should panic from now on.

- Tokens exceeding `u16::max_value() - 4` chars are discarded silently instead of panicking.
- Fast fields are now preloaded when the `SegmentReader` is created.
- `IndexMeta` is now public. (@hntd187)
- `IndexWriter` `add_document`, `delete_term`. `IndexWriter` is `Sync`, making it possible to use it with an `Arc<RwLock<IndexWriter>>`. `add_document` and `delete_term`
  only require a read lock. (@fulmicoton)
- Introducing `Opstamp` as an expressive type alias for `u64`. (@petr-tik)
- Stamper now relies on `AtomicU64` on all platforms (@petr-tik)

@@ -248,16 +265,17 @@ Your program should be usable as is.

Fast fields used to be accessed directly from the `SegmentReader`.
The API changed: you are now required to acquire your fast field reader via
`segment_reader.fast_fields()`, and use one of the typed methods:

- `.u64()`, `.i64()` if your field is single-valued;
- `.u64s()`, `.i64s()` if your field is multi-valued;
- `.bytes()` if your field is a bytes fast field.
Tantivy 0.9.0
=====================

*0.9.0 index format is not compatible with the
previous index format.*

- MAJOR BUGFIX:
  Some `Mmap` objects were being leaked, and would never get released. (@fulmicoton)
- Removed most unsafe (@fulmicoton)

@@ -301,37 +319,40 @@ To update from tantivy 0.8, you will need to go through the following steps.

```

Tantivy 0.8.2
=====================

Fixing build for x86_64 platforms. (#496)
No need to update from 0.8.1 if tantivy
is building on your platform.

Tantivy 0.8.1
=====================

Hotfix of #476.

Merge was reflecting deletes before commit was passed.
Thanks @barrotsteindev for reporting the bug.

Tantivy 0.8.0
=====================

*No change in the index format*

- API Breaking change in the collector API. (@jwolfe, @fulmicoton)
- Multithreaded search (@jwolfe, @fulmicoton)

Tantivy 0.7.1
=====================

*No change in the index format*

- Bugfix: NGramTokenizer panics on non ascii chars
- Added a space usage API

Tantivy 0.7
=====================

- Skip data for doc ids and positions (@fulmicoton),
  greatly improving performance
- Tantivy errors now rely on the failure crate (@drusellers)

@@ -341,15 +362,15 @@ Tantivy 0.7

Tantivy 0.6.1
=========================

- Bugfix #324. GC was removing files that were still in use
- Added support for parsing AllQuery and RangeQuery via QueryParser
  - AllQuery: `*`
  - RangeQuery:
    - Inclusive `field:[startIncl to endIncl]`
    - Exclusive `field:{startExcl to endExcl}`
    - Mixed `field:[startIncl to endExcl}` and vice versa
    - Unbounded `field:[start to *]`, `field:[* to end]`

Tantivy 0.6
==========================

@@ -362,58 +383,53 @@ to this release!

- Approximate field norms encoded over 1 byte. (@fulmicoton)
- Compiles on stable rust (@fulmicoton)
- Add &[u8] fastfield for associating arbitrary bytes to each document (@jason-wolfe) (#270)
  - Completely uncompressed
  - Internally: one u64 fast field for indexes, one fast field for the bytes themselves.
- Add NGram token support (@drusellers)
- Add Stopword Filter support (@drusellers)
- Add a FuzzyTermQuery (@drusellers)
- Add a RegexQuery (@drusellers)
- Various performance improvements (@fulmicoton)

Tantivy 0.5.2
===========================

- bugfix #274
- bugfix #280
- bugfix #289

Tantivy 0.5.1
==========================

- bugfix #254 : tantivy failed if no documents in a segment contained a specific field.

Tantivy 0.5
==========================

- Faceting
- RangeQuery
- Configurable tokenization pipeline
- Bugfix in PhraseQuery
- Various query optimisations
- Allowing very large indexes
  - 64 bits file address
  - Smarter encoding of the `TermInfo` objects

Tantivy 0.4.3
==========================

- Bugfix race condition when deleting files. (#198)

Tantivy 0.4.2
==========================

- Prevent usage of AVX2 instructions (#201)

Tantivy 0.4.1
==========================

- Bugfix for non-indexed fields. (#199)

Tantivy 0.4.0
==========================

@@ -428,37 +444,31 @@ Tantivy 0.4.0

- Searching for a non-indexed field returns an explicit Error
- Phrase queries for non-tokenized fields are not tokenized by the query parser.
- Faster/Better indexing (@fulmicoton)
  - using murmurhash2
  - faster merging
  - more memory efficient fast field writer (@lnicola)
  - better handling of collisions
  - lesser memory usage
- Added API, most notably to iterate over ranges of terms (@fulmicoton)
- Bugfix that was preventing to unmap segment files, on index drop (@fulmicoton)
- Made the doc! macro public (@fulmicoton)
- Added an alternative implementation of the streaming dictionary (@fulmicoton)

Tantivy 0.3.1
==========================

- Expose a method to trigger files garbage collection

Tantivy 0.3
==========================

Special thanks to @Kodraus @lnicola @Ameobea @manuel-woelker @celaus
for their contribution to this release.

Thanks also to everyone in the tantivy gitter chat
for their advice and company :)

https://gitter.im/tantivy-search/tantivy
<https://gitter.im/tantivy-search/tantivy>

Warning:

@@ -467,19 +477,16 @@ code and index format.

You should not expect backward compatibility before
tantivy 1.0.

New Features
------------

- Delete. You can now delete documents from an index.
- Support for windows (Thanks to @lnicola)

Various Bugfixes & small improvements
----------------------------------------

- Added CI for Windows (https://ci.appveyor.com/project/fulmicoton/tantivy)
- Added CI for Windows (<https://ci.appveyor.com/project/fulmicoton/tantivy>)
  Thanks to @KodrAus ! (#108)
- Various dependency version updates (Thanks to @Ameobea) #76
- Fixed several race conditions in `Index.wait_merge_threads`

@@ -491,7 +498,3 @@ Thanks to @KodrAus ! (#108)

- Building binary targets for tantivy-cli (Thanks to @KodrAus)
- Misc invisible bug fixes, and code cleanup.
- Use
@@ -60,7 +60,6 @@ pretty_assertions = "1.2.1"
serde_cbor = { version = "0.11.2", optional = true }
async-trait = "0.1.53"
arc-swap = "1.5.0"
gcd = "2.1.0"

[target.'cfg(windows)'.dependencies]
winapi = "0.3.9"
16 README.md
@@ -5,7 +5,6 @@
[](https://opensource.org/licenses/MIT)
[](https://crates.io/crates/tantivy)

**Tantivy** is a **full-text search engine library** written in Rust.

@@ -16,7 +15,7 @@ to build such a search engine.

Tantivy is, in fact, strongly inspired by Lucene's design.

If you are looking for an alternative to Elasticsearch or Apache Solr, check out [Quickwit](https://github.com/quickwit-oss/quickwit), our search engine built on top of Tantivy.

# Benchmark

@@ -57,7 +56,6 @@ Your mileage WILL vary depending on the nature of queries and their load.

Distributed search is out of the scope of Tantivy, but if you are looking for this feature, check out [Quickwit](https://github.com/quickwit-oss/quickwit/).

# Getting started

Tantivy works on stable Rust (>= 1.27) and supports Linux, macOS, and Windows.

@@ -125,7 +123,8 @@ By default, `rustc` compiles everything in the `examples/` directory in debug mode

rust-gdb target/debug/examples/$EXAMPLE_NAME
$ gdb run
```

# Companies Using Tantivy

<p align="left">
  <img align="center" src="doc/assets/images/Nuclia.png#gh-light-mode-only" alt="Nuclia" height="25" width="auto" />

@@ -134,11 +133,12 @@ $ gdb run

  <img align="center" src="doc/assets/images/nuclia-dark-theme.png#gh-dark-mode-only" alt="Nuclia" height="35" width="auto" />
  <img align="center" src="doc/assets/images/humanfirst.ai-dark-theme.png#gh-dark-mode-only" alt="Humanfirst.ai" height="25" width="auto" />
  <img align="center" src="doc/assets/images/element-dark-theme.png#gh-dark-mode-only" alt="Element.io" height="25" width="auto" />
</p>

# FAQ

### Can I use Tantivy in other languages?

- Python → [tantivy-py](https://github.com/quickwit-oss/tantivy-py)
- Ruby → [tantiny](https://github.com/baygeldin/tantiny)

@@ -152,13 +152,17 @@ You can also find other bindings on [GitHub](https://github.com/search?q=tantivy)

- and [more](https://github.com/search?q=tantivy)!

### On average, how much faster is Tantivy compared to Lucene?

- According to our [search latency benchmark](https://tantivy-search.github.io/bench/), Tantivy is approximately 2x faster than Lucene.

### Does tantivy support incremental indexing?

- Yes.

### How can I edit documents?

- Data in tantivy is immutable. To edit a document, the document needs to be deleted and reindexed.

### When will my documents be searchable during indexing?

- Documents will be searchable after a `commit` is called on an `IndexWriter`. Existing `IndexReader`s will also need to be reloaded in order to reflect the changes. Finally, changes are only visible to newly acquired `Searcher`s.
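A compact sketch of that lifecycle (the field name and the 50 MB writer budget are illustrative, and the exact return types vary slightly between tantivy versions):

```rust
use tantivy::collector::Count;
use tantivy::query::TermQuery;
use tantivy::schema::{Field, IndexRecordOption, Term};
use tantivy::Index;

fn commit_then_search(index: &Index, title_field: Field) -> tantivy::Result<()> {
    let reader = index.reader()?;
    let mut writer = index.writer(50_000_000)?;
    writer.add_document(tantivy::doc!(title_field => "hello world"))?;
    // Not searchable yet: `reader` still points at the previous commit.
    writer.commit()?;
    // Pick up the new commit (readers can also reload automatically on commit).
    reader.reload()?;
    let searcher = reader.searcher();
    let query = TermQuery::new(
        Term::from_field_text(title_field, "hello"),
        IndexRecordOption::Basic,
    );
    let count = searcher.search(&query, &Count)?;
    assert!(count >= 1);
    Ok(())
}
```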
@@ -14,6 +14,7 @@ pub struct BlockedBitpacker {
    buffer: Vec<u64>,
    offset_and_bits: Vec<BlockedBitpackerEntryMetaData>,
}

impl Default for BlockedBitpacker {
    fn default() -> Self {
        BlockedBitpacker::new()

@@ -60,12 +61,11 @@ fn metadata_test() {

impl BlockedBitpacker {
    pub fn new() -> Self {
        let mut compressed_blocks = vec![];
        compressed_blocks.resize(8, 0);
        let compressed_blocks = vec![0u8; 8];
        Self {
            compressed_blocks,
            buffer: vec![],
            offset_and_bits: vec![],
            buffer: Vec::new(),
            offset_and_bits: Vec::new(),
        }
    }
@@ -19,7 +19,7 @@ pub trait DeserializeFrom<T: BinarySerializable> {

/// Implement deserialize from &[u8] for all types which implement BinarySerializable.
///
/// TryFrom would actually be preferrable, but not possible because of the orphan
/// TryFrom would actually be preferable, but not possible because of the orphan
/// rules (not completely sure if this could be resolved)
impl<T: BinarySerializable> DeserializeFrom<T> for &[u8] {
    fn deserialize(&mut self) -> io::Result<T> {
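For context, this blanket impl is what lets typed values be read straight off a byte slice; a hedged usage sketch (the import path and the two-field layout below are assumptions for illustration):

```rust
use std::io;

use common::{BinarySerializable, DeserializeFrom};

fn read_header(mut bytes: &[u8]) -> io::Result<(u32, u64)> {
    // Each call consumes the deserialized prefix from the slice.
    let num_items: u32 = bytes.deserialize()?;
    let checksum: u64 = bytes.deserialize()?;
    Ok((num_items, checksum))
}
```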
@@ -1,7 +1,5 @@
# Summary

[Avant Propos](./avant-propos.md)

- [Segments](./basis.md)
@@ -3,7 +3,7 @@
> Tantivy is a **search** engine **library** for Rust.

If you are familiar with Lucene, it's an excellent approximation to consider tantivy as Lucene for Rust. Tantivy is heavily inspired by Lucene's design and
they both have the same scope and targetted use cases.
they both have the same scope and targeted use cases.

If you are not familiar with Lucene, let's break down our little tagline.

@@ -31,4 +31,4 @@ relevancy, collapsing, highlighting, spatial search.

index from a different format.

Tantivy exposes a lot of low-level APIs to do all of these things.
@@ -11,7 +11,7 @@ directory shipped with tantivy is the `MmapDirectory`.

While this design has some downsides, this greatly simplifies the source code of
tantivy. Caching is also entirely delegated to the OS.

`tantivy` works entirely (or almost) by directly reading the datastructures as they are layed on disk. As a result, the act of opening an index does not involve loading different datastructures from the disk into random access memory: starting a process, opening an index, and performing your first query can typically be done in a matter of milliseconds.
`tantivy` works entirely (or almost) by directly reading the datastructures as they are laid on disk. As a result, the act of opening an index does not involve loading different datastructures from the disk into random access memory: starting a process, opening an index, and performing your first query can typically be done in a matter of milliseconds.

This is an interesting property for a command line search engine, or for some multi-tenant log search engine: spawning a new process for each new query can be a perfectly sensible solution in some use cases.

@@ -22,7 +22,6 @@ Of course this is crucial to reduce IO, and ensure that as much of our index can

Also, whenever possible its data is accessed sequentially. Of course, this is an amazing property when tantivy needs to access the data from your spinning hard disk, but this is also
critical for performance if your data is read from an `SSD` or even already in your pagecache.

## Segments, and the log method

That kind of compact layout comes at one cost: it prevents our datastructures from being dynamic.

@@ -53,11 +52,7 @@ to get tantivy to fit your use case:

*Example 2* You could also disable your merge policy and enforce daily segments. Removing data after one week can then be done very efficiently by just editing the `meta.json` and deleting the files associated to segment `D-7`.

# Merging
## Merging

As you index more and more data, your index will accumulate more and more segments.
Having a lot of small segments is not really optimal. There is a bit of redundancy in having

@@ -66,11 +61,7 @@ all these term dictionaries. Also when searching, we will need to do term lookups

That's where merging or compacting comes into play. Tantivy will continuously consider merge
opportunities and start merging segments in the background.

# Indexing throughput, number of indexing threads
## Indexing throughput, number of indexing threads

[^1]: This may eventually change.
@@ -1,3 +1,3 @@
# Examples

- [Basic search](/examples/basic_search.html)
@@ -1,11 +1,11 @@

- [Index Sorting](#index-sorting)
+ [Why Sorting](#why-sorting)
* [Compression](#compression)
* [Top-N Optimization](#top-n-optimization)
* [Pruning](#pruning)
* [Other](#other)
+ [Usage](#usage)
- [Why Sorting](#why-sorting)
- [Compression](#compression)
- [Top-N Optimization](#top-n-optimization)
- [Pruning](#pruning)
- [Other](#other)
- [Usage](#usage)

# Index Sorting

@@ -15,32 +15,34 @@ Tantivy allows you to sort the index according to a property.

Presorting an index has several advantages:

### Compression

When data is sorted it is easier to compress. E.g. the number sequence [5, 2, 3, 1, 4] would be sorted to [1, 2, 3, 4, 5].
If we apply delta encoding, the unsorted list becomes [5, -3, 1, -2, 3] vs. [1, 1, 1, 1, 1] for the sorted one.
The compression ratio is mainly affected for the fast field of the sorted property; everything else is likely unaffected.
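A tiny illustration of that delta-encoding argument (not tantivy's actual codec):

```rust
/// Delta-encode a column of values; sorted input yields small, highly
/// compressible deltas.
fn delta_encode(vals: &[i64]) -> Vec<i64> {
    let mut prev = 0i64;
    vals.iter()
        .map(|&v| {
            let delta = v - prev;
            prev = v;
            delta
        })
        .collect()
}

fn main() {
    assert_eq!(delta_encode(&[5, 2, 3, 1, 4]), vec![5, -3, 1, -2, 3]);
    assert_eq!(delta_encode(&[1, 2, 3, 4, 5]), vec![1, 1, 1, 1, 1]);
}
```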
### Top-N Optimization

When data is presorted by a field and search queries request sorting by the same field, we can leverage the natural order of the documents.
E.g. if the data is sorted by timestamp and we want the top-n newest docs containing a term, we can simply leverage the order of the docids.

Note: Tantivy 0.16 does not do this optimization yet.

### Pruning

Let's say we want all documents and want to apply the filter `>= 2010-08-11`. When the data is sorted, we could make a lookup in the fast field to find the docid range and use this as the filter.

Note: Tantivy 0.16 does not do this optimization yet.

### Other?

In principle there are many algorithms possible that exploit the monotonically increasing nature. (aggregations maybe?)

## Usage

Index sorting can be configured by setting [`sort_by_field`](https://github.com/quickwit-oss/tantivy/blob/000d76b11a139a84b16b9b95060a1c93e8b9851c/src/core/index_meta.rs#L238) on `IndexSettings` and passing it to an `IndexBuilder`. As of Tantivy 0.16 only fast fields are allowed to be used.

```rust
let settings = IndexSettings {
    sort_by_field: Some(IndexSortByField {
        field: "intval".to_string(),
```

@@ -58,4 +60,3 @@ let index = index_builder.create_in_ram().unwrap();

Sorting an index is applied in the serialization step. In general there are two serialization steps: [Finishing a single segment](https://github.com/quickwit-oss/tantivy/blob/000d76b11a139a84b16b9b95060a1c93e8b9851c/src/indexer/segment_writer.rs#L338) and [merging multiple segments](https://github.com/quickwit-oss/tantivy/blob/000d76b11a139a84b16b9b95060a1c93e8b9851c/src/indexer/merger.rs#L1073).

In both cases we generate a docid mapping reflecting the sort. This mapping is used when serializing the different components (doc store, fastfields, posting list, normfield, facets).
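Roughly, such a docid mapping can be pictured like this (illustration only; the real remapping is done inside tantivy's serializers):

```rust
/// Compute a sort-by-fast-field docid mapping: new_to_old[new_doc_id] = old_doc_id.
fn sort_mapping(fast_field_values: &[u64]) -> Vec<u32> {
    let mut new_to_old: Vec<u32> = (0..fast_field_values.len() as u32).collect();
    new_to_old.sort_by_key(|&old_doc| fast_field_values[old_doc as usize]);
    new_to_old
}

/// Re-serialize a column in the new doc order using that mapping.
fn remap_column(fast_field_values: &[u64], new_to_old: &[u32]) -> Vec<u64> {
    new_to_old.iter().map(|&old| fast_field_values[old as usize]).collect()
}
```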
@@ -21,16 +21,17 @@ For instance, if user is a json field, the following document:
```

emits the following tokens:

- ("name", Text, "Paul")
- ("name", Text, "Masurel")
- ("address.city", Text, "Tokyo")
- ("address.country", Text, "Japan")
- ("created_at", Date, 15420648505)

# Bytes-encoding and lexicographical sort.
## Bytes-encoding and lexicographical sort

Like any other terms, these triplets are encoded into a binary format as follows.

- `json_path`: the json path is a sequence of "segments". In the example above, `address.city`
  is just a debug representation of the json path `["address", "city"]`.
  Its representation is done by separating segments by a unicode char `\x01`, and ending the path by `\x00`.
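A small sketch of that layout (illustration only; tantivy's full term encoding also includes the field and the value's type code):

```rust
/// Encode a json path as segments separated by `\x01` and terminated by `\x00`,
/// followed by the token text.
fn encode_json_path_term(path_segments: &[&str], token: &str) -> Vec<u8> {
    let mut bytes = Vec::new();
    for (i, segment) in path_segments.iter().enumerate() {
        if i > 0 {
            bytes.push(0x01u8); // segment separator
        }
        bytes.extend_from_slice(segment.as_bytes());
    }
    bytes.push(0x00u8); // end of path
    bytes.extend_from_slice(token.as_bytes());
    bytes
}

fn main() {
    assert_eq!(
        encode_json_path_term(&["address", "city"], "tokyo"),
        b"address\x01city\x00tokyo".to_vec()
    );
}
```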
@@ -41,16 +42,16 @@ This representation is designed to align the natural sort of Terms with the lexi

of their binary representation (Tantivy's dictionary (whether fst or sstable) is sorted and does prefix encoding).

In the example above, the terms will be sorted as

- ("address.city", Text, "Tokyo")
- ("address.country", Text, "Japan")
- ("name", Text, "Masurel")
- ("name", Text, "Paul")
- ("created_at", Date, 15420648505)

As seen in "pitfalls", we may end up having to search for a value for the same path in several different fields. Putting the field code after the path maximizes compression opportunities, but also increases the chances for the two terms to end up in the same term dictionary block.

# Pitfalls, limitation and corner cases.
## Pitfalls, limitation and corner cases

Json gives very little information about the type of the literals it stores.
All numeric types end up mapped as a "Number" and there are no types for dates.

@@ -70,19 +71,21 @@ For instance, we do not even know if the type is a number or string based.

So the query

```rust
my_path.my_segment:233
```

will be interpreted as

```rust
(my_path.my_segment, String, 233) or (my_path.my_segment, u64, 233)
```

Likewise, we need to emit two tokens if the query contains an rfc3339 date.
Indeed, the date could actually have been a single token inside the text of a document at ingestion time. Generally speaking, we will always emit at least a string token in query parsing, and sometimes more.

If more than one json field is defined, things get even more complicated.

## Default json field

If the schema contains a text field called "text" and a json field that is set as a default field:

@@ -96,11 +99,11 @@ This is a product decision.

The user can still target the JSON field by specifying its name explicitly:
`json_dynamic.text:hello`.

## Range queries are not supported.
## Range queries are not supported

Json fields do not support range queries.

## Arrays do not work like nested object.
## Arrays do not work like nested object

If a json object contains an array, a search query might return more documents
than what might be expected.

@@ -120,9 +123,8 @@ Let's take an example.

Despite the array structure, a document in tantivy is a bag of terms.
The query:

```rust
cart.product_type:sneakers AND cart.attributes.color:red
```

actually matches the document above.
@@ -10,7 +10,7 @@

// ---
// Importing tantivy...
use tantivy::collector::{Collector, SegmentCollector};
use tantivy::fastfield::{DynamicFastFieldReader, FastFieldReader};
use tantivy::fastfield::{FastFieldReader, FastFieldReaderImpl};
use tantivy::query::QueryParser;
use tantivy::schema::{Field, Schema, FAST, INDEXED, TEXT};
use tantivy::{doc, Index, Score, SegmentReader};

@@ -95,7 +95,7 @@ impl Collector for StatsCollector {
}

struct StatsSegmentCollector {
    fast_field_reader: DynamicFastFieldReader<u64>,
    fast_field_reader: FastFieldReaderImpl<u64>,
    stats: Stats,
}
@@ -50,7 +50,7 @@ fn main() -> tantivy::Result<()> {

    // for your unit tests... Or this example.
    let index = Index::create_in_ram(schema.clone());

    // here we are registering our custome tokenizer
    // here we are registering our custom tokenizer
    // this will store tokens of 3 characters each
    index
        .tokenizers()
@@ -11,8 +11,10 @@ description = "Fast field codecs used by tantivy"

[dependencies]
common = { version = "0.3", path = "../common/", package = "tantivy-common" }
tantivy-bitpacker = { version="0.2", path = "../bitpacker/" }
prettytable-rs = {version="0.8.0", optional= true}
ownedbytes = { version = "0.3.0", path = "../ownedbytes" }
prettytable-rs = {version="0.9.0", optional= true}
rand = {version="0.8.3", optional= true}
fastdivide = "0.4"

[dev-dependencies]
more-asserts = "0.3.0"
@@ -4,12 +4,10 @@ extern crate test;

#[cfg(test)]
mod tests {
    use fastfield_codecs::bitpacked::{BitpackedFastFieldReader, BitpackedFastFieldSerializer};
    use fastfield_codecs::linearinterpol::{
        LinearInterpolFastFieldReader, LinearInterpolFastFieldSerializer,
    };
    use fastfield_codecs::bitpacked::{BitpackedFastFieldCodec, BitpackedFastFieldReader};
    use fastfield_codecs::linearinterpol::{LinearInterpolCodec, LinearInterpolFastFieldReader};
    use fastfield_codecs::multilinearinterpol::{
        MultiLinearInterpolFastFieldReader, MultiLinearInterpolFastFieldSerializer,
        MultiLinearInterpolFastFieldCodec, MultiLinearInterpolFastFieldReader,
    };
    use fastfield_codecs::*;

@@ -29,10 +27,7 @@ mod tests {

    fn value_iter() -> impl Iterator<Item = u64> {
        0..20_000
    }
    fn bench_get<S: FastFieldCodecSerializer, R: FastFieldCodecReader>(
        b: &mut Bencher,
        data: &[u64],
    ) {
    fn bench_get<S: FastFieldCodec, R: FastFieldCodecReader>(b: &mut Bencher, data: &[u64]) {
        let mut bytes = vec![];
        S::serialize(
            &mut bytes,

@@ -49,7 +44,7 @@ mod tests {

        }
        });
    }
    fn bench_create<S: FastFieldCodecSerializer>(b: &mut Bencher, data: &[u64]) {
    fn bench_create<S: FastFieldCodec>(b: &mut Bencher, data: &[u64]) {
        let mut bytes = vec![];
        b.iter(|| {
            S::serialize(

@@ -67,32 +62,32 @@ mod tests {

    #[bench]
    fn bench_fastfield_bitpack_create(b: &mut Bencher) {
        let data: Vec<_> = get_data();
        bench_create::<BitpackedFastFieldSerializer>(b, &data);
        bench_create::<BitpackedFastFieldCodec>(b, &data);
    }
    #[bench]
    fn bench_fastfield_linearinterpol_create(b: &mut Bencher) {
        let data: Vec<_> = get_data();
        bench_create::<LinearInterpolFastFieldSerializer>(b, &data);
        bench_create::<LinearInterpolCodec>(b, &data);
    }
    #[bench]
    fn bench_fastfield_multilinearinterpol_create(b: &mut Bencher) {
        let data: Vec<_> = get_data();
        bench_create::<MultiLinearInterpolFastFieldSerializer>(b, &data);
        bench_create::<MultiLinearInterpolFastFieldCodec>(b, &data);
    }
    #[bench]
    fn bench_fastfield_bitpack_get(b: &mut Bencher) {
        let data: Vec<_> = get_data();
        bench_get::<BitpackedFastFieldSerializer, BitpackedFastFieldReader>(b, &data);
        bench_get::<BitpackedFastFieldCodec, BitpackedFastFieldReader>(b, &data);
    }
    #[bench]
    fn bench_fastfield_linearinterpol_get(b: &mut Bencher) {
        let data: Vec<_> = get_data();
        bench_get::<LinearInterpolFastFieldSerializer, LinearInterpolFastFieldReader>(b, &data);
        bench_get::<LinearInterpolCodec, LinearInterpolFastFieldReader>(b, &data);
    }
    #[bench]
    fn bench_fastfield_multilinearinterpol_get(b: &mut Bencher) {
        let data: Vec<_> = get_data();
        bench_get::<MultiLinearInterpolFastFieldSerializer, MultiLinearInterpolFastFieldReader>(
        bench_get::<MultiLinearInterpolFastFieldCodec, MultiLinearInterpolFastFieldReader>(
            b, &data,
        );
    }
@@ -1,37 +1,25 @@
use std::io::{self, Write};

use common::BinarySerializable;
use ownedbytes::OwnedBytes;
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};

use crate::{FastFieldCodecReader, FastFieldCodecSerializer, FastFieldDataAccess, FastFieldStats};
use crate::{FastFieldCodec, FastFieldCodecReader, FastFieldStats};

/// Depending on the field type, a different
/// fast field is required.
#[derive(Clone)]
pub struct BitpackedFastFieldReader {
    data: OwnedBytes,
    bit_unpacker: BitUnpacker,
    pub min_value_u64: u64,
    pub max_value_u64: u64,
}

impl FastFieldCodecReader for BitpackedFastFieldReader {
    /// Opens a fast field given a file.
    fn open_from_bytes(bytes: &[u8]) -> io::Result<Self> {
        let (_data, mut footer) = bytes.split_at(bytes.len() - 16);
        let min_value = u64::deserialize(&mut footer)?;
        let amplitude = u64::deserialize(&mut footer)?;
        let max_value = min_value + amplitude;
        let num_bits = compute_num_bits(amplitude);
        let bit_unpacker = BitUnpacker::new(num_bits);
        Ok(BitpackedFastFieldReader {
            min_value_u64: min_value,
            max_value_u64: max_value,
            bit_unpacker,
        })
    }
    #[inline]
    fn get_u64(&self, doc: u64, data: &[u8]) -> u64 {
        self.min_value_u64 + self.bit_unpacker.get(doc, data)
    fn get_u64(&self, doc: u64) -> u64 {
        self.min_value_u64 + self.bit_unpacker.get(doc, &self.data)
    }
    #[inline]
    fn min_value(&self) -> u64 {

@@ -92,11 +80,30 @@ impl<'a, W: Write> BitpackedFastFieldSerializerLegacy<'a, W> {

    }
}

pub struct BitpackedFastFieldSerializer {}
pub struct BitpackedFastFieldCodec;

impl FastFieldCodecSerializer for BitpackedFastFieldSerializer {
impl FastFieldCodec for BitpackedFastFieldCodec {
    const NAME: &'static str = "Bitpacked";
    const ID: u8 = 1;

    type Reader = BitpackedFastFieldReader;

    /// Opens a fast field given a file.
    fn open_from_bytes(bytes: OwnedBytes) -> io::Result<Self::Reader> {
        let footer_offset = bytes.len() - 16;
        let (data, mut footer) = bytes.split(footer_offset);
        let min_value = u64::deserialize(&mut footer)?;
        let amplitude = u64::deserialize(&mut footer)?;
        let max_value = min_value + amplitude;
        let num_bits = compute_num_bits(amplitude);
        let bit_unpacker = BitUnpacker::new(num_bits);
        Ok(BitpackedFastFieldReader {
            data,
            min_value_u64: min_value,
            max_value_u64: max_value,
            bit_unpacker,
        })
    }

    /// Serializes data with the BitpackedFastFieldSerializer.
    ///
    /// The serializer in fact encode the values by bitpacking

@@ -106,29 +113,25 @@ impl FastFieldCodecSerializer for BitpackedFastFieldSerializer {

    /// compute the minimum number of bits required to encode
    /// values.
    fn serialize(
        write: &mut impl Write,
        _fastfield_accessor: &dyn FastFieldDataAccess,
        &self,
        write: &mut impl io::Write,
        vals: &[u64],
        stats: FastFieldStats,
        data_iter: impl Iterator<Item = u64>,
        _data_iter1: impl Iterator<Item = u64>,
    ) -> io::Result<()> {
        let mut serializer =
            BitpackedFastFieldSerializerLegacy::open(write, stats.min_value, stats.max_value)?;

        for val in data_iter {
        for &val in vals {
            serializer.add_val(val)?;
        }
        serializer.close_field()?;

        Ok(())
    }
    fn is_applicable(
        _fastfield_accessor: &impl FastFieldDataAccess,
        _stats: FastFieldStats,
    ) -> bool {
    fn is_applicable(_vals: &[u64], _stats: FastFieldStats) -> bool {
        true
    }
    fn estimate(_fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32 {
    fn estimate(_vals: &[u64], stats: FastFieldStats) -> f32 {
        let amplitude = stats.max_value - stats.min_value;
        let num_bits = compute_num_bits(amplitude);
        let num_bits_uncompressed = 64;

@@ -142,9 +145,7 @@ mod tests {

    use crate::tests::get_codec_test_data_sets;

    fn create_and_validate(data: &[u64], name: &str) {
        crate::tests::create_and_validate::<BitpackedFastFieldSerializer, BitpackedFastFieldReader>(
            data, name,
        );
        crate::tests::create_and_validate(&BitpackedFastFieldCodec, data, name);
    }

    #[test]
254 fastfield_codecs/src/dynamic.rs (Normal file)
@@ -0,0 +1,254 @@
// Copyright (C) 2022 Quickwit, Inc.
//
// Quickwit is offered under the AGPL v3.0 and as commercial software.
// For commercial licensing, contact us at hello@quickwit.io.
//
// AGPL:
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as
// published by the Free Software Foundation, either version 3 of the
// License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//

use std::io;
use std::num::NonZeroU64;
use std::sync::Arc;

use common::BinarySerializable;
use fastdivide::DividerU64;
use ownedbytes::OwnedBytes;

use crate::bitpacked::BitpackedFastFieldCodec;
use crate::gcd::{find_gcd, GCDFastFieldCodecReader, GCDParams};
use crate::linearinterpol::LinearInterpolCodec;
use crate::multilinearinterpol::MultiLinearInterpolFastFieldCodec;
use crate::{FastFieldCodec, FastFieldCodecReader, FastFieldStats};

pub struct DynamicFastFieldCodec;

impl FastFieldCodec for DynamicFastFieldCodec {
    const NAME: &'static str = "dynamic";

    type Reader = DynamicFastFieldReader;

    fn is_applicable(_vals: &[u64], _stats: crate::FastFieldStats) -> bool {
        true
    }

    fn estimate(_vals: &[u64], _stats: crate::FastFieldStats) -> f32 {
        0f32
    }

    fn serialize(
        &self,
        wrt: &mut impl io::Write,
        vals: &[u64],
        stats: crate::FastFieldStats,
    ) -> io::Result<()> {
        let gcd: NonZeroU64 = find_gcd(vals.iter().copied().map(|val| val - stats.min_value))
            .unwrap_or(unsafe { NonZeroU64::new_unchecked(1) });
        if gcd.get() > 1 {
            let gcd_divider = DividerU64::divide_by(gcd.get());
            let scaled_vals: Vec<u64> = vals
                .iter()
                .copied()
                .map(|val| gcd_divider.divide(val - stats.min_value))
                .collect();
            <CodecType as BinarySerializable>::serialize(&CodecType::Gcd, wrt)?;
            let gcd_params = GCDParams {
                min_value: stats.min_value,
                gcd,
            };
            gcd_params.serialize(wrt)?;
            let codec_type = choose_codec(stats, &scaled_vals);
            <CodecType as BinarySerializable>::serialize(&codec_type, wrt)?;
            let scaled_stats = FastFieldStats::compute(&scaled_vals);
            codec_type.serialize(wrt, &scaled_vals, scaled_stats)?;
        } else {
            let codec_type = choose_codec(stats, vals);
            wrt.write_all(&[codec_type.to_code()])?;
            codec_type.serialize(wrt, vals, stats)?;
        }
        Ok(())
    }

    fn open_from_bytes(mut bytes: OwnedBytes) -> io::Result<Self::Reader> {
        let codec_code = bytes.read_u8();
        let codec_type = CodecType::from_code(codec_code).ok_or_else(|| {
            io::Error::new(
                io::ErrorKind::InvalidData,
                format!("Unknown codec code `{codec_code}`"),
            )
        })?;
        let fast_field_reader: Arc<dyn FastFieldCodecReader> = match codec_type {
            CodecType::Bitpacked => Arc::new(BitpackedFastFieldCodec::open_from_bytes(bytes)?),
            CodecType::LinearInterpol => Arc::new(LinearInterpolCodec::open_from_bytes(bytes)?),
            CodecType::MultiLinearInterpol => {
                Arc::new(MultiLinearInterpolFastFieldCodec::open_from_bytes(bytes)?)
            }
            CodecType::Gcd => {
                let gcd_params = GCDParams::deserialize(&mut bytes)?;
                let inner_codec_type = <CodecType as BinarySerializable>::deserialize(&mut bytes)?;
                match inner_codec_type {
                    CodecType::Bitpacked => Arc::new(GCDFastFieldCodecReader {
                        params: gcd_params,
                        reader: BitpackedFastFieldCodec::open_from_bytes(bytes)?,
                    }),
                    CodecType::LinearInterpol => Arc::new(GCDFastFieldCodecReader {
                        params: gcd_params,
                        reader: LinearInterpolCodec::open_from_bytes(bytes)?,
                    }),
                    CodecType::MultiLinearInterpol => Arc::new(GCDFastFieldCodecReader {
                        params: gcd_params,
                        reader: MultiLinearInterpolFastFieldCodec::open_from_bytes(bytes)?,
                    }),
                    CodecType::Gcd => {
                        return Err(io::Error::new(
                            io::ErrorKind::InvalidData,
                            "A GCD codec may not wrap another GCD codec.",
                        ));
                    }
                }
            }
        };
        Ok(DynamicFastFieldReader(fast_field_reader))
    }
}

#[derive(Clone)]
/// DynamicFastFieldReader wraps different readers to access
/// the various encoded fastfield data
pub struct DynamicFastFieldReader(Arc<dyn FastFieldCodecReader>);

#[repr(u8)]
#[derive(Debug, Clone, Copy)]
pub enum CodecType {
    Bitpacked = 0,
    LinearInterpol = 1,
    MultiLinearInterpol = 2,
    Gcd = 3,
}

impl BinarySerializable for CodecType {
    fn serialize<W: io::Write>(&self, wrt: &mut W) -> io::Result<()> {
        wrt.write_all(&[self.to_code()])?;
        Ok(())
    }

    fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
        let codec_code = u8::deserialize(reader)?;
        let codec_type = CodecType::from_code(codec_code).ok_or_else(|| {
            io::Error::new(
                io::ErrorKind::InvalidData,
                format!("Invalid codec type code {codec_code}"),
            )
        })?;
        Ok(codec_type)
    }
}

impl CodecType {
    pub fn from_code(code: u8) -> Option<Self> {
        match code {
            0 => Some(CodecType::Bitpacked),
            1 => Some(CodecType::LinearInterpol),
            2 => Some(CodecType::MultiLinearInterpol),
            3 => Some(CodecType::Gcd),
            _ => None,
        }
    }

    pub fn to_code(self) -> u8 {
        self as u8
    }

    fn codec_estimation(
        &self,
        stats: FastFieldStats,
        vals: &[u64],
        estimations: &mut Vec<(f32, CodecType)>,
    ) {
        let estimate_opt: Option<f32> = match self {
            CodecType::Bitpacked => codec_estimation::<BitpackedFastFieldCodec>(stats, vals),
            CodecType::LinearInterpol => codec_estimation::<LinearInterpolCodec>(stats, vals),
            CodecType::MultiLinearInterpol => {
                codec_estimation::<MultiLinearInterpolFastFieldCodec>(stats, vals)
            }
            CodecType::Gcd => None,
        };
        if let Some(estimate) = estimate_opt {
            if !estimate.is_nan() && estimate.is_finite() {
                estimations.push((estimate, *self));
            }
        }
    }

    fn serialize(
        &self,
        wrt: &mut impl io::Write,
        fastfield_accessor: &[u64],
        stats: FastFieldStats,
    ) -> io::Result<()> {
        match self {
            CodecType::Bitpacked => {
                BitpackedFastFieldCodec.serialize(wrt, fastfield_accessor, stats)?;
            }
            CodecType::LinearInterpol => {
                LinearInterpolCodec.serialize(wrt, fastfield_accessor, stats)?;
            }
            CodecType::MultiLinearInterpol => {
                MultiLinearInterpolFastFieldCodec.serialize(wrt, fastfield_accessor, stats)?;
            }
            CodecType::Gcd => {
                panic!("GCD should never be called that way.");
            }
        }
        Ok(())
    }
}

impl FastFieldCodecReader for DynamicFastFieldReader {
    fn get_u64(&self, doc: u64) -> u64 {
        self.0.get_u64(doc)
    }

    fn min_value(&self) -> u64 {
        self.0.min_value()
    }

    fn max_value(&self) -> u64 {
        self.0.max_value()
    }
}

fn codec_estimation<T: FastFieldCodec>(stats: FastFieldStats, vals: &[u64]) -> Option<f32> {
    if !T::is_applicable(vals, stats.clone()) {
        return None;
    }
    let ratio = T::estimate(vals, stats);
    Some(ratio)
}

const CODEC_TYPES: [CodecType; 3] = [
    CodecType::Bitpacked,
    CodecType::LinearInterpol,
    CodecType::MultiLinearInterpol,
];

fn choose_codec(stats: FastFieldStats, vals: &[u64]) -> CodecType {
    let mut estimations = Vec::new();
    for codec_type in &CODEC_TYPES {
        codec_type.codec_estimation(stats, vals, &mut estimations);
    }
    estimations.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
    let (_ratio, codec_type) = estimations[0];
    codec_type
}
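For orientation, the dynamic codec above writes a one-byte codec code, optionally followed by the GCD parameters and an inner codec code, and only then the inner codec's payload. The following is a minimal round-trip sketch against the API introduced in this branch; the `round_trip` function name is made up for illustration, and the exact chosen inner codec depends on the estimations:

```rust
use fastfield_codecs::dynamic::DynamicFastFieldCodec;
use fastfield_codecs::{FastFieldCodec, FastFieldCodecReader, FastFieldStats};
use ownedbytes::OwnedBytes;

fn round_trip() -> std::io::Result<()> {
    // 1000, 1030, 1060, ...: min_value = 1000, and the deltas share a GCD of 30.
    let vals: Vec<u64> = (0..100u64).map(|i| 1000 + i * 30).collect();
    let stats = FastFieldStats::compute(&vals);

    let mut out: Vec<u8> = Vec::new();
    // The GCD is detected and stripped before the best inner codec is chosen.
    DynamicFastFieldCodec.serialize(&mut out, &vals, stats)?;

    let reader = DynamicFastFieldCodec::open_from_bytes(OwnedBytes::new(out))?;
    assert_eq!(reader.get_u64(3), 1090);
    assert_eq!(reader.min_value(), 1000);
    Ok(())
}
```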
fastfield_codecs/src/gcd.rs (new file, 247 lines)
@@ -0,0 +1,247 @@
use std::io::{self, Write};
use std::num::NonZeroU64;

use common::BinarySerializable;
use fastdivide::DividerU64;

use crate::FastFieldCodecReader;

/// Wrapper for accessing a fastfield.
///
/// Holds the data and the codec to the read the data.
#[derive(Clone)]
pub struct GCDFastFieldCodecReader<CodecReader> {
    pub params: GCDParams,
    pub reader: CodecReader,
}

impl<C: FastFieldCodecReader> FastFieldCodecReader for GCDFastFieldCodecReader<C> {
    #[inline]
    fn get_u64(&self, doc: u64) -> u64 {
        self.params.min_value + self.params.gcd.get() * self.reader.get_u64(doc)
    }

    fn min_value(&self) -> u64 {
        self.params.min_value + self.params.gcd.get() * self.reader.min_value()
    }

    fn max_value(&self) -> u64 {
        self.params.min_value + self.params.gcd.get() * self.reader.max_value()
    }
}

#[derive(Debug, Copy, Clone)]
pub struct GCDParams {
    pub min_value: u64,
    pub gcd: NonZeroU64,
}

impl BinarySerializable for GCDParams {
    fn serialize<W: Write>(&self, wrt: &mut W) -> io::Result<()> {
        self.gcd.get().serialize(wrt)?;
        self.min_value.serialize(wrt)?;
        Ok(())
    }

    fn deserialize<R: io::Read>(reader: &mut R) -> io::Result<Self> {
        let gcd = NonZeroU64::new(u64::deserialize(reader)?)
            .ok_or_else(|| io::Error::new(io::ErrorKind::InvalidData, "GCD=0 is invalid."))?;
        let min_value = u64::deserialize(reader)?;
        Ok(GCDParams { min_value, gcd })
    }
}

fn compute_gcd(mut left: u64, mut right: u64) -> u64 {
    while right != 0 {
        (left, right) = (right, left % right);
    }
    left
}

// Find GCD for iterator of numbers
//
// If all numbers are '0' (or if there are not numbers, return None).
pub fn find_gcd(numbers: impl Iterator<Item = u64>) -> Option<NonZeroU64> {
    let mut numbers = numbers.filter(|n| *n != 0);
    let mut gcd = numbers.next()?;
    if gcd == 1 {
        return NonZeroU64::new(gcd);
    }

    let mut gcd_divider = DividerU64::divide_by(gcd);
    for val in numbers {
        let remainder = val - gcd_divider.divide(val) * gcd;
        if remainder == 0 {
            continue;
        }
        gcd = compute_gcd(gcd, val);
        if gcd == 1 {
            return NonZeroU64::new(1);
        }
        gcd_divider = DividerU64::divide_by(gcd);
    }
    NonZeroU64::new(gcd)
}

#[cfg(test)]
mod tests {

    // TODO Move test
    //
    // use std::collections::HashMap;
    // use std::path::Path;
    //
    // use crate::directory::{CompositeFile, RamDirectory, WritePtr};
    // use crate::fastfield::serializer::FastFieldCodecEnableCheck;
    // use crate::fastfield::tests::{FIELD, FIELDI64, SCHEMA, SCHEMAI64};
    // use super::{
    //     find_gcd, CompositeFastFieldSerializer, DynamicFastFieldReader, FastFieldCodecName,
    //     FastFieldReader, FastFieldsWriter, ALL_CODECS,
    // };
    // use crate::schema::Schema;
    // use crate::Directory;
    //
    // fn get_index(
    //     docs: &[crate::Document],
    //     schema: &Schema,
    //     codec_enable_checker: FastFieldCodecEnableCheck,
    // ) -> crate::Result<RamDirectory> {
    //     let directory: RamDirectory = RamDirectory::create();
    //     {
    //         let write: WritePtr = directory.open_write(Path::new("test")).unwrap();
    //         let mut serializer =
    //             CompositeFastFieldSerializer::from_write_with_codec(write, codec_enable_checker)
    //                 .unwrap();
    //         let mut fast_field_writers = FastFieldsWriter::from_schema(schema);
    //         for doc in docs {
    //             fast_field_writers.add_document(doc);
    //         }
    //         fast_field_writers
    //             .serialize(&mut serializer, &HashMap::new(), None)
    //             .unwrap();
    //         serializer.close().unwrap();
    //     }
    //     Ok(directory)
    // }
    //
    // fn test_fastfield_gcd_i64_with_codec(
    //     codec_name: FastFieldCodecName,
    //     num_vals: usize,
    // ) -> crate::Result<()> {
    //     let path = Path::new("test");
    //     let mut docs = vec![];
    //     for i in 1..=num_vals {
    //         let val = i as i64 * 1000i64;
    //         docs.push(doc!(*FIELDI64=>val));
    //     }
    //     let directory = get_index(&docs, &SCHEMAI64, codec_name.clone().into())?;
    //     let file = directory.open_read(path).unwrap();
    //     assert_eq!(file.len(), 118);
    //     let composite_file = CompositeFile::open(&file)?;
    //     let file = composite_file.open_read(*FIELD).unwrap();
    //     let fast_field_reader = DynamicFastFieldReader::<i64>::open(file)?;
    //     assert_eq!(fast_field_reader.get(0), 1000i64);
    //     assert_eq!(fast_field_reader.get(1), 2000i64);
    //     assert_eq!(fast_field_reader.get(2), 3000i64);
    //     assert_eq!(fast_field_reader.max_value(), num_vals as i64 * 1000);
    //     assert_eq!(fast_field_reader.min_value(), 1000i64);
    //     let file = directory.open_read(path).unwrap();
    //
    //     Can't apply gcd
    //     let path = Path::new("test");
    //     docs.pop();
    //     docs.push(doc!(*FIELDI64=>2001i64));
    //     let directory = get_index(&docs, &SCHEMAI64, codec_name.into())?;
    //     let file2 = directory.open_read(path).unwrap();
    //     assert!(file2.len() > file.len());
    //
    //     Ok(())
    // }
    //
    // #[test]
    // fn test_fastfield_gcd_i64() -> crate::Result<()> {
    //     for codec_name in ALL_CODECS {
    //         test_fastfield_gcd_i64_with_codec(codec_name.clone(), 5005)?;
    //     }
    //     Ok(())
    // }
    //
    // fn test_fastfield_gcd_u64_with_codec(
    //     codec_name: FastFieldCodecName,
    //     num_vals: usize,
    // ) -> crate::Result<()> {
    //     let path = Path::new("test");
    //     let mut docs = vec![];
    //     for i in 1..=num_vals {
    //         let val = i as u64 * 1000u64;
    //         docs.push(doc!(*FIELD=>val));
    //     }
    //     let directory = get_index(&docs, &SCHEMA, codec_name.clone().into())?;
    //     let file = directory.open_read(path).unwrap();
    //     assert_eq!(file.len(), 118);
    //     let composite_file = CompositeFile::open(&file)?;
    //     let file = composite_file.open_read(*FIELD).unwrap();
    //     let fast_field_reader = DynamicFastFieldReader::<u64>::open(file)?;
    //     assert_eq!(fast_field_reader.get(0), 1000u64);
    //     assert_eq!(fast_field_reader.get(1), 2000u64);
    //     assert_eq!(fast_field_reader.get(2), 3000u64);
    //     assert_eq!(fast_field_reader.max_value(), num_vals as u64 * 1000);
    //     assert_eq!(fast_field_reader.min_value(), 1000u64);
    //     let file = directory.open_read(path).unwrap();
    //
    //     Can't apply gcd
    //     let path = Path::new("test");
    //     docs.pop();
    //     docs.push(doc!(*FIELDI64=>2001u64));
    //     let directory = get_index(&docs, &SCHEMA, codec_name.into())?;
    //     let file2 = directory.open_read(path).unwrap();
    //     assert!(file2.len() > file.len());
    //
    //     Ok(())
    // }
    //
    // #[test]
    // fn test_fastfield_gcd_u64() -> crate::Result<()> {
    //     for codec_name in ALL_CODECS {
    //         test_fastfield_gcd_u64_with_codec(codec_name.clone(), 5005)?;
    //     }
    //     Ok(())
    // }
    //
    // #[test]
    // pub fn test_fastfield2() {
    //     let test_fastfield = DynamicFastFieldReader::<u64>::from(vec![100, 200, 300]);
    //     assert_eq!(test_fastfield.get(0), 100);
    //     assert_eq!(test_fastfield.get(1), 200);
    //     assert_eq!(test_fastfield.get(2), 300);
    // }

    use std::num::NonZeroU64;

    use crate::gcd::{compute_gcd, find_gcd};

    #[test]
    fn test_compute_gcd() {
        assert_eq!(compute_gcd(0, 0), 0);
        assert_eq!(compute_gcd(4, 0), 4);
        assert_eq!(compute_gcd(0, 4), 4);
        assert_eq!(compute_gcd(1, 4), 1);
        assert_eq!(compute_gcd(4, 1), 1);
        assert_eq!(compute_gcd(4, 2), 2);
        assert_eq!(compute_gcd(10, 25), 5);
        assert_eq!(compute_gcd(25, 10), 5);
        assert_eq!(compute_gcd(25, 25), 25);
    }

    #[test]
    fn find_gcd_test() {
        assert_eq!(find_gcd([0].into_iter()), None);
        assert_eq!(find_gcd([0, 10].into_iter()), NonZeroU64::new(10));
        assert_eq!(find_gcd([10, 0].into_iter()), NonZeroU64::new(10));
        assert_eq!(find_gcd([].into_iter()), None);
        assert_eq!(find_gcd([15, 30, 5, 10].into_iter()), NonZeroU64::new(5));
        assert_eq!(find_gcd([15, 16, 10].into_iter()), NonZeroU64::new(1));
        assert_eq!(find_gcd([0, 5, 5, 5].into_iter()), NonZeroU64::new(5));
        assert_eq!(find_gcd([0, 0].into_iter()), None);
    }
}
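The GCD wrapper above decodes each document with a single multiply-add, `min_value + gcd * inner_value`. A tiny hedged sketch of that arithmetic, using made-up values rather than anything from this commit:

```rust
// Hypothetical input 1000, 2000, 3000: min_value = 1000, gcd = 1000,
// so the wrapped codec only has to bitpack 0, 1 and 2.
let (min_value, gcd) = (1000u64, 1000u64);
let stored = [0u64, 1, 2];
let decoded: Vec<u64> = stored.iter().map(|s| min_value + gcd * s).collect();
assert_eq!(decoded, vec![1000, 2000, 3000]);
```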
@@ -3,94 +3,95 @@
extern crate more_asserts;

use std::io;
use std::io::Write;

use ownedbytes::OwnedBytes;

pub mod bitpacked;
pub mod dynamic;
pub mod gcd;
pub mod linearinterpol;
pub mod multilinearinterpol;

pub trait FastFieldCodecReader: Sized {
// Unify with FastFieldReader

pub trait FastFieldCodecReader {
    /// reads the metadata and returns the CodecReader
    fn open_from_bytes(bytes: &[u8]) -> std::io::Result<Self>;

    fn get_u64(&self, doc: u64, data: &[u8]) -> u64;

    fn get_u64(&self, doc: u64) -> u64;
    fn min_value(&self) -> u64;
    fn max_value(&self) -> u64;
}

/// The FastFieldSerializerEstimate trait is required on all variants
/// of fast field compressions, to decide which one to choose.
pub trait FastFieldCodecSerializer {
    /// A codex needs to provide a unique name and id, which is
    /// used for debugging and de/serialization.
pub trait FastFieldCodec {
    /// A codex needs to provide a unique name used for debugging.
    const NAME: &'static str;
    const ID: u8;

    type Reader: FastFieldCodecReader;

    /// Check if the Codec is able to compress the data
    fn is_applicable(fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> bool;
    fn is_applicable(vals: &[u64], stats: FastFieldStats) -> bool;

    /// Returns an estimate of the compression ratio.
    /// The baseline is uncompressed 64bit data.
    ///
    /// It could make sense to also return a value representing
    /// computational complexity.
    fn estimate(fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32;
    fn estimate(vals: &[u64], stats: FastFieldStats) -> f32;

    /// Serializes the data using the serializer into write.
    /// There are multiple iterators, in case the codec needs to read the data multiple times.
    /// The iterators should be preferred over using fastfield_accessor for performance reasons.
    fn serialize(
        write: &mut impl Write,
        fastfield_accessor: &dyn FastFieldDataAccess,
        &self,
        write: &mut impl io::Write,
        vals: &[u64],
        stats: FastFieldStats,
        data_iter: impl Iterator<Item = u64>,
        data_iter1: impl Iterator<Item = u64>,
    ) -> io::Result<()>;

    fn open_from_bytes(bytes: OwnedBytes) -> io::Result<Self::Reader>;
}

/// FastFieldDataAccess is the trait to access fast field data during serialization and estimation.
pub trait FastFieldDataAccess {
    /// Return the value associated to the given position.
    ///
    /// Whenever possible use the Iterator passed to the fastfield creation instead, for performance
    /// reasons.
    ///
    /// # Panics
    ///
    /// May panic if `position` is greater than the index.
    fn get_val(&self, position: u64) -> u64;
}

#[derive(Debug, Clone)]
/// Statistics are used in codec detection and stored in the fast field footer.
#[derive(Clone, Copy, Default, Debug)]
pub struct FastFieldStats {
    pub min_value: u64,
    pub max_value: u64,
    pub num_vals: u64,
}

impl<'a> FastFieldDataAccess for &'a [u64] {
    fn get_val(&self, position: u64) -> u64 {
        self[position as usize]
impl FastFieldStats {
    pub fn compute(vals: &[u64]) -> Self {
        if vals.is_empty() {
            return FastFieldStats::default();
        }
        let first_val = vals[0];
        let mut fast_field_stats = FastFieldStats {
            min_value: first_val,
            max_value: first_val,
            num_vals: 1,
        };
        for &val in &vals[1..] {
            fast_field_stats.record(val);
        }
        fast_field_stats
    }
}

impl FastFieldDataAccess for Vec<u64> {
    fn get_val(&self, position: u64) -> u64 {
        self[position as usize]
    pub fn record(&mut self, val: u64) {
        self.num_vals += 1;
        self.min_value = self.min_value.min(val);
        self.max_value = self.max_value.max(val);
    }
}

#[cfg(test)]
mod tests {
    use crate::bitpacked::{BitpackedFastFieldReader, BitpackedFastFieldSerializer};
    use crate::linearinterpol::{LinearInterpolFastFieldReader, LinearInterpolFastFieldSerializer};
    use crate::multilinearinterpol::{
        MultiLinearInterpolFastFieldReader, MultiLinearInterpolFastFieldSerializer,
    };
    use crate::bitpacked::BitpackedFastFieldCodec;
    use crate::linearinterpol::LinearInterpolCodec;
    use crate::multilinearinterpol::MultiLinearInterpolFastFieldCodec;

    pub fn create_and_validate<S: FastFieldCodecSerializer, R: FastFieldCodecReader>(
    pub fn create_and_validate<S: FastFieldCodec>(
        codec: &S,
        data: &[u64],
        name: &str,
    ) -> (f32, f32) {
@@ -98,19 +99,16 @@ mod tests {
            return (f32::MAX, 0.0);
        }
        let estimation = S::estimate(&data, crate::tests::stats_from_vec(data));
        let mut out = vec![];
        S::serialize(
            &mut out,
            &data,
            crate::tests::stats_from_vec(data),
            data.iter().cloned(),
            data.iter().cloned(),
        )
        .unwrap();
        let mut out: Vec<u8> = Vec::new();
        codec
            .serialize(&mut out, &data, crate::tests::stats_from_vec(data))
            .unwrap();

        let reader = R::open_from_bytes(&out).unwrap();
        let actual_compression = out.len() as f32 / (data.len() as f32 * 8.0);

        let reader = S::open_from_bytes(OwnedBytes::new(out)).unwrap();
        for (doc, orig_val) in data.iter().enumerate() {
            let val = reader.get_u64(doc as u64, &out);
            let val = reader.get_u64(doc as u64);
            if val != *orig_val {
                panic!(
                    "val {:?} does not match orig_val {:?}, in data set {}, data {:?}",
@@ -118,7 +116,6 @@ mod tests {
                );
            }
        }
        let actual_compression = out.len() as f32 / (data.len() as f32 * 8.0);
        (estimation, actual_compression)
    }
    pub fn get_codec_test_data_sets() -> Vec<(Vec<u64>, &'static str)> {
@@ -137,11 +134,10 @@ mod tests {
        data_and_names
    }

    fn test_codec<S: FastFieldCodecSerializer, R: FastFieldCodecReader>() {
        let codec_name = S::NAME;
    fn test_codec<C: FastFieldCodec>(codec: &C) {
        let codec_name = C::NAME;
        for (data, data_set_name) in get_codec_test_data_sets() {
            let (estimate, actual) =
                crate::tests::create_and_validate::<S, R>(&data, data_set_name);
            let (estimate, actual) = crate::tests::create_and_validate(codec, &data, data_set_name);
            let result = if estimate == f32::MAX {
                "Disabled".to_string()
            } else {
@@ -155,15 +151,15 @@
    }
    #[test]
    fn test_codec_bitpacking() {
        test_codec::<BitpackedFastFieldSerializer, BitpackedFastFieldReader>();
        test_codec(&BitpackedFastFieldCodec);
    }
    #[test]
    fn test_codec_interpolation() {
        test_codec::<LinearInterpolFastFieldSerializer, LinearInterpolFastFieldReader>();
        test_codec(&LinearInterpolCodec);
    }
    #[test]
    fn test_codec_multi_interpolation() {
        test_codec::<MultiLinearInterpolFastFieldSerializer, MultiLinearInterpolFastFieldReader>();
        test_codec(&MultiLinearInterpolFastFieldCodec);
    }

    use super::*;
@@ -182,16 +178,15 @@ mod tests {
    let data = (10..=20000_u64).collect::<Vec<_>>();

    let linear_interpol_estimation =
        LinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
        LinearInterpolCodec::estimate(&data, stats_from_vec(&data));
    assert_le!(linear_interpol_estimation, 0.01);

    let multi_linear_interpol_estimation =
        MultiLinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
        MultiLinearInterpolFastFieldCodec::estimate(&&data[..], stats_from_vec(&data));
    assert_le!(multi_linear_interpol_estimation, 0.2);
    assert_le!(linear_interpol_estimation, multi_linear_interpol_estimation);

    let bitpacked_estimation =
        BitpackedFastFieldSerializer::estimate(&data, stats_from_vec(&data));
    let bitpacked_estimation = BitpackedFastFieldCodec::estimate(&data, stats_from_vec(&data));
    assert_le!(linear_interpol_estimation, bitpacked_estimation);
}
#[test]
@@ -199,11 +194,10 @@ mod tests {
    let data = vec![200, 10, 10, 10, 10, 1000, 20];

    let linear_interpol_estimation =
        LinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
        LinearInterpolCodec::estimate(&data, stats_from_vec(&data));
    assert_le!(linear_interpol_estimation, 0.32);

    let bitpacked_estimation =
        BitpackedFastFieldSerializer::estimate(&data, stats_from_vec(&data));
    let bitpacked_estimation = BitpackedFastFieldCodec::estimate(&data, stats_from_vec(&data));
    assert_le!(bitpacked_estimation, linear_interpol_estimation);
}
#[test]
@@ -214,11 +208,10 @@ mod tests {
    // in this case the linear interpolation can't in fact not be worse than bitpacking,
    // but the estimator adds some threshold, which leads to estimated worse behavior
    let linear_interpol_estimation =
        LinearInterpolFastFieldSerializer::estimate(&data, stats_from_vec(&data));
        LinearInterpolCodec::estimate(&data, stats_from_vec(&data));
    assert_le!(linear_interpol_estimation, 0.35);

    let bitpacked_estimation =
        BitpackedFastFieldSerializer::estimate(&data, stats_from_vec(&data));
    let bitpacked_estimation = BitpackedFastFieldCodec::estimate(&data, stats_from_vec(&data));
    assert_le!(bitpacked_estimation, 0.32);
    assert_le!(bitpacked_estimation, linear_interpol_estimation);
}
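The reworked `FastFieldCodec` / `FastFieldCodecReader` traits above take a `&[u64]` slice and an `OwnedBytes` buffer in place of the old accessor-plus-iterator combination. As a rough illustration only (the `RawCodec` type below is hypothetical and not part of this change), a minimal codec implementing the new shape might look like this, storing every value as 8 raw little-endian bytes:

```rust
use std::io::{self, Write};

use fastfield_codecs::{FastFieldCodec, FastFieldCodecReader, FastFieldStats};
use ownedbytes::OwnedBytes;

pub struct RawCodec;

pub struct RawReader {
    data: OwnedBytes,
    min: u64,
    max: u64,
}

impl FastFieldCodecReader for RawReader {
    fn get_u64(&self, doc: u64) -> u64 {
        // Each document occupies a fixed 8-byte slot.
        let start = doc as usize * 8;
        let mut buf = [0u8; 8];
        buf.copy_from_slice(&self.data[start..start + 8]);
        u64::from_le_bytes(buf)
    }
    fn min_value(&self) -> u64 {
        self.min
    }
    fn max_value(&self) -> u64 {
        self.max
    }
}

impl FastFieldCodec for RawCodec {
    const NAME: &'static str = "raw";
    type Reader = RawReader;

    fn is_applicable(_vals: &[u64], _stats: FastFieldStats) -> bool {
        true
    }

    // Raw storage never beats the uncompressed 64-bit baseline, so the ratio is 1.0.
    fn estimate(_vals: &[u64], _stats: FastFieldStats) -> f32 {
        1.0
    }

    fn serialize(
        &self,
        write: &mut impl io::Write,
        vals: &[u64],
        _stats: FastFieldStats,
    ) -> io::Result<()> {
        for val in vals {
            write.write_all(&val.to_le_bytes())?;
        }
        Ok(())
    }

    fn open_from_bytes(bytes: OwnedBytes) -> io::Result<Self::Reader> {
        let num_vals = (bytes.len() / 8) as u64;
        let mut reader = RawReader { data: bytes, min: 0, max: 0 };
        // A real codec would persist min/max in a footer; this sketch just rescans the data.
        if num_vals > 0 {
            let min = (0..num_vals).map(|doc| reader.get_u64(doc)).min().unwrap();
            let max = (0..num_vals).map(|doc| reader.get_u64(doc)).max().unwrap();
            reader.min = min;
            reader.max = max;
        }
        Ok(reader)
    }
}
```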
@@ -2,14 +2,16 @@ use std::io::{self, Read, Write};
use std::ops::Sub;

use common::{BinarySerializable, FixedSize};
use ownedbytes::OwnedBytes;
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};

use crate::{FastFieldCodecReader, FastFieldCodecSerializer, FastFieldDataAccess, FastFieldStats};
use crate::{FastFieldCodec, FastFieldCodecReader, FastFieldStats};

/// Depending on the field type, a different
/// fast field is required.
#[derive(Clone)]
pub struct LinearInterpolFastFieldReader {
    data: OwnedBytes,
    bit_unpacker: BitUnpacker,
    pub footer: LinearInterpolFooter,
    pub slope: f32,
@@ -56,24 +58,10 @@ impl FixedSize for LinearInterpolFooter {
}

impl FastFieldCodecReader for LinearInterpolFastFieldReader {
    /// Opens a fast field given a file.
    fn open_from_bytes(bytes: &[u8]) -> io::Result<Self> {
        let (_data, mut footer) = bytes.split_at(bytes.len() - LinearInterpolFooter::SIZE_IN_BYTES);
        let footer = LinearInterpolFooter::deserialize(&mut footer)?;
        let slope = get_slope(footer.first_val, footer.last_val, footer.num_vals);

        let num_bits = compute_num_bits(footer.relative_max_value);
        let bit_unpacker = BitUnpacker::new(num_bits);
        Ok(LinearInterpolFastFieldReader {
            bit_unpacker,
            footer,
            slope,
        })
    }
    #[inline]
    fn get_u64(&self, doc: u64, data: &[u8]) -> u64 {
    fn get_u64(&self, doc: u64) -> u64 {
        let calculated_value = get_calculated_value(self.footer.first_val, doc, self.slope);
        (calculated_value + self.bit_unpacker.get(doc, data)) - self.footer.offset
        (calculated_value + self.bit_unpacker.get(doc, &self.data)) - self.footer.offset
    }

    #[inline]
@@ -88,7 +76,7 @@ impl FastFieldCodecReader for LinearInterpolFastFieldReader {

/// Fastfield serializer, which tries to guess values by linear interpolation
/// and stores the difference bitpacked.
pub struct LinearInterpolFastFieldSerializer {}
pub struct LinearInterpolCodec;

#[inline]
fn get_slope(first_val: u64, last_val: u64, num_vals: u64) -> f32 {
@@ -105,26 +93,44 @@ fn get_calculated_value(first_val: u64, pos: u64, slope: f32) -> u64 {
    first_val + (pos as f32 * slope) as u64
}

impl FastFieldCodecSerializer for LinearInterpolFastFieldSerializer {
impl FastFieldCodec for LinearInterpolCodec {
    const NAME: &'static str = "LinearInterpol";
    const ID: u8 = 2;

    type Reader = LinearInterpolFastFieldReader;

    /// Opens a fast field given a file.
    fn open_from_bytes(bytes: OwnedBytes) -> io::Result<Self::Reader> {
        let footer_offset = bytes.len() - LinearInterpolFooter::SIZE_IN_BYTES;
        let (data, mut footer) = bytes.split(footer_offset);
        let footer = LinearInterpolFooter::deserialize(&mut footer)?;
        let slope = get_slope(footer.first_val, footer.last_val, footer.num_vals);
        let num_bits = compute_num_bits(footer.relative_max_value);
        let bit_unpacker = BitUnpacker::new(num_bits);
        Ok(LinearInterpolFastFieldReader {
            data,
            bit_unpacker,
            footer,
            slope,
        })
    }

    /// Creates a new fast field serializer.
    fn serialize(
        write: &mut impl Write,
        fastfield_accessor: &dyn FastFieldDataAccess,
        &self,
        write: &mut impl io::Write,
        vals: &[u64],
        stats: FastFieldStats,
        data_iter: impl Iterator<Item = u64>,
        data_iter1: impl Iterator<Item = u64>,
    ) -> io::Result<()> {
        assert!(stats.min_value <= stats.max_value);

        let first_val = fastfield_accessor.get_val(0);
        let last_val = fastfield_accessor.get_val(stats.num_vals as u64 - 1);
        let first_val = vals[0];
        let last_val = vals[vals.len() - 1];

        let slope = get_slope(first_val, last_val, stats.num_vals);
        // calculate offset to ensure all values are positive
        let mut offset = 0;
        let mut rel_positive_max = 0;
        for (pos, actual_value) in data_iter1.enumerate() {
        for (pos, actual_value) in vals.iter().copied().enumerate() {
            let calculated_value = get_calculated_value(first_val, pos as u64, slope);
            if calculated_value > actual_value {
                // negative value we need to apply an offset
@@ -142,7 +148,7 @@ impl FastFieldCodecSerializer for LinearInterpolFastFieldSerializer {

        let num_bits = compute_num_bits(relative_max_value);
        let mut bit_packer = BitPacker::new();
        for (pos, val) in data_iter.enumerate() {
        for (pos, val) in vals.iter().copied().enumerate() {
            let calculated_value = get_calculated_value(first_val, pos as u64, slope);
            let diff = (val + offset) - calculated_value;
            bit_packer.write(diff, num_bits, write)?;
@@ -161,17 +167,14 @@ impl FastFieldCodecSerializer for LinearInterpolFastFieldSerializer {
        footer.serialize(write)?;
        Ok(())
    }
    fn is_applicable(
        _fastfield_accessor: &impl FastFieldDataAccess,
        stats: FastFieldStats,
    ) -> bool {
    fn is_applicable(_vals: &[u64], stats: FastFieldStats) -> bool {
        if stats.num_vals < 3 {
            return false; // disable compressor for this case
        }
        // On serialisation the offset is added to the actual value.
        // We need to make sure this won't run into overflow calculation issues.
        // For this we take the maximum theroretical offset and add this to the max value.
        // If this doesn't overflow the algortihm should be fine
        // If this doesn't overflow the algorithm should be fine
        let theorethical_maximum_offset = stats.max_value - stats.min_value;
        if stats
            .max_value
@@ -185,22 +188,22 @@ impl FastFieldCodecSerializer for LinearInterpolFastFieldSerializer {
    /// estimation for linear interpolation is hard because, you don't know
    /// where the local maxima for the deviation of the calculated value are and
    /// the offset to shift all values to >=0 is also unknown.
    fn estimate(fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32 {
        let first_val = fastfield_accessor.get_val(0);
        let last_val = fastfield_accessor.get_val(stats.num_vals as u64 - 1);
    fn estimate(vals: &[u64], stats: FastFieldStats) -> f32 {
        let first_val = vals[0];
        let last_val = vals[vals.len() - 1];
        let slope = get_slope(first_val, last_val, stats.num_vals);

        // let's sample at 0%, 5%, 10% .. 95%, 100%
        let num_vals = stats.num_vals as f32 / 100.0;
        let sample_positions = (0..20)
        let sample_positions: Vec<usize> = (0..20)
            .map(|pos| (num_vals * pos as f32 * 5.0) as usize)
            .collect::<Vec<_>>();

        let max_distance = sample_positions
            .iter()
            .into_iter()
            .map(|pos| {
                let calculated_value = get_calculated_value(first_val, *pos as u64, slope);
                let actual_value = fastfield_accessor.get_val(*pos as u64);
                let calculated_value = get_calculated_value(first_val, pos as u64, slope);
                let actual_value = vals[pos];
                distance(calculated_value, actual_value)
            })
            .max()
@@ -235,10 +238,7 @@ mod tests {
    use crate::tests::get_codec_test_data_sets;

    fn create_and_validate(data: &[u64], name: &str) -> (f32, f32) {
        crate::tests::create_and_validate::<
            LinearInterpolFastFieldSerializer,
            LinearInterpolFastFieldReader,
        >(data, name)
        crate::tests::create_and_validate(&LinearInterpolCodec, data, name)
    }

    #[test]
@@ -1,8 +1,9 @@
#[macro_use]
extern crate prettytable;
use fastfield_codecs::linearinterpol::LinearInterpolFastFieldSerializer;
use fastfield_codecs::multilinearinterpol::MultiLinearInterpolFastFieldSerializer;
use fastfield_codecs::{FastFieldCodecSerializer, FastFieldStats};
// use fastfield_codecs::linearinterpol::LinearInterpolFastFieldSerializer;
// use fastfield_codecs::multilinearinterpol::MultiLinearInterpolFastFieldSerializer;
use fastfield_codecs::bitpacked::BitpackedFastFieldCodec;
use fastfield_codecs::{FastFieldCodec, FastFieldStats};
use prettytable::{Cell, Row, Table};

fn main() {
@@ -12,14 +13,12 @@ fn main() {
    table.add_row(row!["", "Compression Ratio", "Compression Estimation"]);

    for (data, data_set_name) in get_codec_test_data_sets() {
        let mut results = vec![];
        let res = serialize_with_codec::<LinearInterpolFastFieldSerializer>(&data);
        results.push(res);
        let res = serialize_with_codec::<MultiLinearInterpolFastFieldSerializer>(&data);
        results.push(res);
        let res = serialize_with_codec::<fastfield_codecs::bitpacked::BitpackedFastFieldSerializer>(
            &data,
        );
        let mut results = Vec::new();
        // let res = serialize_with_codec::<LinearInterpolFastFieldSerializer>(&data);
        // results.push(res);
        // let res = serialize_with_codec::<MultiLinearInterpolFastFieldSerializer>(&data);
        // results.push(res);
        let res = serialize_with_codec(&BitpackedFastFieldCodec, &data);
        results.push(res);

        // let best_estimation_codec = results
@@ -91,7 +90,8 @@ pub fn get_codec_test_data_sets() -> Vec<(Vec<u64>, &'static str)> {
    data_and_names
}

pub fn serialize_with_codec<S: FastFieldCodecSerializer>(
pub fn serialize_with_codec<S: FastFieldCodec>(
    codec: &S,
    data: &[u64],
) -> (bool, f32, f32, &'static str) {
    let is_applicable = S::is_applicable(&data, stats_from_vec(data));
@@ -100,14 +100,9 @@ pub fn serialize_with_codec<S: FastFieldCodecSerializer>(
    }
    let estimation = S::estimate(&data, stats_from_vec(data));
    let mut out = vec![];
    S::serialize(
        &mut out,
        &data,
        stats_from_vec(data),
        data.iter().cloned(),
        data.iter().cloned(),
    )
    .unwrap();
    codec
        .serialize(&mut out, &data, stats_from_vec(data))
        .unwrap();

    let actual_compression = out.len() as f32 / (data.len() * 8) as f32;
    (true, estimation, actual_compression, S::NAME)
@@ -14,22 +14,24 @@ use std::io::{self, Read, Write};
use std::ops::Sub;

use common::{BinarySerializable, CountingWriter, DeserializeFrom};
use ownedbytes::OwnedBytes;
use tantivy_bitpacker::{compute_num_bits, BitPacker, BitUnpacker};

use crate::{FastFieldCodecReader, FastFieldCodecSerializer, FastFieldDataAccess, FastFieldStats};
use crate::{FastFieldCodec, FastFieldCodecReader, FastFieldStats};

const CHUNK_SIZE: u64 = 512;
const CHUNK_SIZE: usize = 512;

/// Depending on the field type, a different
/// fast field is required.
#[derive(Clone)]
pub struct MultiLinearInterpolFastFieldReader {
    data: OwnedBytes,
    pub footer: MultiLinearInterpolFooter,
}

#[derive(Clone, Debug, Default)]
struct Function {
    // The offset in the data is required, because we have diffrent bit_widths per block
    // The offset in the data is required, because we have different bit_widths per block
    data_start_offset: u64,
    // start_pos in the block will be CHUNK_SIZE * BLOCK_NUM
    start_pos: u64,
@@ -126,43 +128,27 @@ impl BinarySerializable for MultiLinearInterpolFooter {
            interpolations: Vec::<Function>::deserialize(reader)?,
        };
        for (num, interpol) in footer.interpolations.iter_mut().enumerate() {
            interpol.start_pos = CHUNK_SIZE * num as u64;
            interpol.start_pos = (CHUNK_SIZE * num) as u64;
        }
        Ok(footer)
    }
}

#[inline]
fn get_interpolation_position(doc: u64) -> usize {
    let index = doc / CHUNK_SIZE;
    index as usize
}

#[inline]
fn get_interpolation_function(doc: u64, interpolations: &[Function]) -> &Function {
    &interpolations[get_interpolation_position(doc)]
    &interpolations[doc as usize / CHUNK_SIZE]
}

impl FastFieldCodecReader for MultiLinearInterpolFastFieldReader {
    /// Opens a fast field given a file.
    fn open_from_bytes(bytes: &[u8]) -> io::Result<Self> {
        let footer_len: u32 = (&bytes[bytes.len() - 4..]).deserialize()?;

        let (_data, mut footer) = bytes.split_at(bytes.len() - (4 + footer_len) as usize);
        let footer = MultiLinearInterpolFooter::deserialize(&mut footer)?;

        Ok(MultiLinearInterpolFastFieldReader { footer })
    }

    #[inline]
    fn get_u64(&self, doc: u64, data: &[u8]) -> u64 {
    fn get_u64(&self, doc: u64) -> u64 {
        let interpolation = get_interpolation_function(doc, &self.footer.interpolations);
        let doc = doc - interpolation.start_pos;
        let calculated_value =
            get_calculated_value(interpolation.value_start_pos, doc, interpolation.slope);
        let diff = interpolation
            .bit_unpacker
            .get(doc, &data[interpolation.data_start_offset as usize..]);
            .get(doc, &self.data[interpolation.data_start_offset as usize..]);
        (calculated_value + diff) - interpolation.positive_val_offset
    }

@@ -187,23 +173,33 @@ fn get_calculated_value(first_val: u64, pos: u64, slope: f32) -> u64 {
}

/// Same as LinearInterpolFastFieldSerializer, but working on chunks of CHUNK_SIZE elements.
pub struct MultiLinearInterpolFastFieldSerializer {}
pub struct MultiLinearInterpolFastFieldCodec;

impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
impl FastFieldCodec for MultiLinearInterpolFastFieldCodec {
    const NAME: &'static str = "MultiLinearInterpol";
    const ID: u8 = 3;

    type Reader = MultiLinearInterpolFastFieldReader;

    /// Opens a fast field given a file.
    fn open_from_bytes(bytes: OwnedBytes) -> io::Result<Self::Reader> {
        let footer_len: u32 = (&bytes[bytes.len() - 4..]).deserialize()?;
        let footer_offset = bytes.len() - 4 - footer_len as usize;
        let (data, mut footer) = bytes.split(footer_offset);
        let footer = MultiLinearInterpolFooter::deserialize(&mut footer)?;
        Ok(MultiLinearInterpolFastFieldReader { data, footer })
    }

    /// Creates a new fast field serializer.
    fn serialize(
        write: &mut impl Write,
        fastfield_accessor: &dyn FastFieldDataAccess,
        &self,
        write: &mut impl io::Write,
        vals: &[u64],
        stats: FastFieldStats,
        data_iter: impl Iterator<Item = u64>,
        _data_iter1: impl Iterator<Item = u64>,
    ) -> io::Result<()> {
        assert!(stats.min_value <= stats.max_value);

        let first_val = fastfield_accessor.get_val(0);
        let last_val = fastfield_accessor.get_val(stats.num_vals as u64 - 1);
        let first_val = vals[0];
        let last_val = vals[vals.len() - 1];

        let mut first_function = Function {
            end_pos: stats.num_vals,
@@ -214,16 +210,11 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
        first_function.calc_slope();
        let mut interpolations = vec![first_function];

        // Since we potentially apply multiple passes over the data, the data is cached.
        // Multiple iteration can be expensive (merge with index sorting can add lot of overhead per
        // iteration)
        let data = data_iter.collect::<Vec<_>>();

        //// let's split this into chunks of CHUNK_SIZE
        for data_pos in (0..data.len() as u64).step_by(CHUNK_SIZE as usize).skip(1) {
        for vals_pos in (0..vals.len()).step_by(CHUNK_SIZE).skip(1) {
            let new_fun = {
                let current_interpolation = interpolations.last_mut().unwrap();
                current_interpolation.split(data_pos, data[data_pos as usize])
                current_interpolation.split(vals_pos as u64, vals[vals_pos])
            };
            interpolations.push(new_fun);
        }
@@ -231,7 +222,7 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
        for interpolation in &mut interpolations {
            let mut offset = 0;
            let mut rel_positive_max = 0;
            for (pos, actual_value) in data
            for (pos, actual_value) in vals
                [interpolation.start_pos as usize..interpolation.end_pos as usize]
                .iter()
                .cloned()
@@ -262,7 +253,7 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
        for interpolation in &mut interpolations {
            interpolation.data_start_offset = write.written_bytes();
            let num_bits = interpolation.num_bits;
            for (pos, actual_value) in data
            for (pos, actual_value) in vals
                [interpolation.start_pos as usize..interpolation.end_pos as usize]
                .iter()
                .cloned()
@@ -290,17 +281,14 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
        Ok(())
    }

    fn is_applicable(
        _fastfield_accessor: &impl FastFieldDataAccess,
        stats: FastFieldStats,
    ) -> bool {
    fn is_applicable(_vals: &[u64], stats: FastFieldStats) -> bool {
        if stats.num_vals < 5_000 {
            return false;
        }
        // On serialization the offset is added to the actual value.
        // We need to make sure this won't run into overflow calculation issues.
        // For this we take the maximum theroretical offset and add this to the max value.
        // If this doesn't overflow the algortihm should be fine
        // If this doesn't overflow the algorithm should be fine
        let theorethical_maximum_offset = stats.max_value - stats.min_value;
        if stats
            .max_value
@@ -314,11 +302,11 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {
    /// estimation for linear interpolation is hard because, you don't know
    /// where the local maxima are for the deviation of the calculated value and
    /// the offset is also unknown.
    fn estimate(fastfield_accessor: &impl FastFieldDataAccess, stats: FastFieldStats) -> f32 {
        let first_val_in_first_block = fastfield_accessor.get_val(0);
        let last_elem_in_first_chunk = CHUNK_SIZE.min(stats.num_vals);
        let last_val_in_first_block =
            fastfield_accessor.get_val(last_elem_in_first_chunk as u64 - 1);
    fn estimate(vals: &[u64], stats: FastFieldStats) -> f32 {
        // TODO simplify now that we have a vals array.
        let first_val_in_first_block = vals[0];
        let last_elem_in_first_chunk = CHUNK_SIZE.min(vals.len());
        let last_val_in_first_block = vals[last_elem_in_first_chunk - 1];
        let slope = get_slope(
            first_val_in_first_block,
            last_val_in_first_block,
@@ -332,10 +320,11 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {

        let max_distance = sample_positions
            .iter()
            .copied()
            .map(|pos| {
                let calculated_value =
                    get_calculated_value(first_val_in_first_block, *pos as u64, slope);
                let actual_value = fastfield_accessor.get_val(*pos as u64);
                    get_calculated_value(first_val_in_first_block, pos as u64, slope);
                let actual_value = vals[pos];
                distance(calculated_value, actual_value)
            })
            .max()
@@ -351,7 +340,7 @@ impl FastFieldCodecSerializer for MultiLinearInterpolFastFieldSerializer {

        let num_bits = compute_num_bits(relative_max_value as u64) as u64 * stats.num_vals as u64
            // function metadata per block
            + 29 * (stats.num_vals / CHUNK_SIZE);
            + 29 * (stats.num_vals / CHUNK_SIZE as u64);
        let num_bits_uncompressed = 64 * stats.num_vals;
        num_bits as f32 / num_bits_uncompressed as f32
    }
@@ -371,10 +360,7 @@ mod tests {
    use crate::tests::get_codec_test_data_sets;

    fn create_and_validate(data: &[u64], name: &str) -> (f32, f32) {
        crate::tests::create_and_validate::<
            MultiLinearInterpolFastFieldSerializer,
            MultiLinearInterpolFastFieldReader,
        >(data, name)
        crate::tests::create_and_validate(&MultiLinearInterpolFastFieldCodec, data, name)
    }

    #[test]
@@ -21,7 +21,7 @@ impl OwnedBytes {
|
||||
OwnedBytes::new(&[][..])
|
||||
}
|
||||
|
||||
/// Creates an `OwnedBytes` intance given a `StableDeref` object.
|
||||
/// Creates an `OwnedBytes` instance given a `StableDeref` object.
|
||||
pub fn new<T: StableDeref + Deref<Target = [u8]> + 'static + Send + Sync>(
|
||||
data_holder: T,
|
||||
) -> OwnedBytes {
|
||||
|
||||
@@ -67,7 +67,7 @@ fn word<'a>() -> impl Parser<&'a str, Output = String> {
|
||||
/// 2021-04-13T19:46:26.266051969+00:00
|
||||
///
|
||||
/// NOTE: also accepts 999999-99-99T99:99:99.266051969+99:99
|
||||
/// We delegate rejecting such invalid dates to the logical AST compuation code
|
||||
/// We delegate rejecting such invalid dates to the logical AST computation code
|
||||
/// which invokes time::OffsetDateTime::parse(..., &Rfc3339) on the value to actually parse
|
||||
/// it (instead of merely extracting the datetime value as string as done here).
|
||||
fn date_time<'a>() -> impl Parser<&'a str, Output = String> {
|
||||
|
||||
@@ -10,7 +10,7 @@ use super::metric::{AverageAggregation, StatsAggregation};
|
||||
use super::segment_agg_result::BucketCount;
|
||||
use super::VecWithNames;
|
||||
use crate::fastfield::{
|
||||
type_and_cardinality, DynamicFastFieldReader, FastType, MultiValuedFastFieldReader,
|
||||
type_and_cardinality, FastFieldReaderImpl, FastType, MultiValuedFastFieldReader,
|
||||
};
|
||||
use crate::schema::{Cardinality, Type};
|
||||
use crate::{InvertedIndexReader, SegmentReader, TantivyError};
|
||||
@@ -37,10 +37,10 @@ impl AggregationsWithAccessor {
|
||||
#[derive(Clone)]
|
||||
pub(crate) enum FastFieldAccessor {
|
||||
Multi(MultiValuedFastFieldReader<u64>),
|
||||
Single(DynamicFastFieldReader<u64>),
|
||||
Single(FastFieldReaderImpl<u64>),
|
||||
}
|
||||
impl FastFieldAccessor {
|
||||
pub fn as_single(&self) -> Option<&DynamicFastFieldReader<u64>> {
|
||||
pub fn as_single(&self) -> Option<&FastFieldReaderImpl<u64>> {
|
||||
match self {
|
||||
FastFieldAccessor::Multi(_) => None,
|
||||
FastFieldAccessor::Single(reader) => Some(reader),
|
||||
@@ -118,7 +118,7 @@ impl BucketAggregationWithAccessor {
|
||||
pub struct MetricAggregationWithAccessor {
|
||||
pub metric: MetricAggregation,
|
||||
pub field_type: Type,
|
||||
pub accessor: DynamicFastFieldReader<u64>,
|
||||
pub accessor: FastFieldReaderImpl<u64>,
|
||||
}
|
||||
|
||||
impl MetricAggregationWithAccessor {
|
||||
|
||||
@@ -57,8 +57,7 @@ impl AggregationResult {
|
||||
match self {
|
||||
AggregationResult::BucketResult(_bucket) => Err(TantivyError::InternalError(
|
||||
"Tried to retrieve value from bucket aggregation. This is not supported and \
|
||||
should not happen during collection phase, but should be catched during \
|
||||
validation"
|
||||
should not happen during collection phase, but should be caught during validation"
|
||||
.to_string(),
|
||||
)),
|
||||
AggregationResult::MetricResult(metric) => metric.get_value(agg_property),
|
||||
|
||||
@@ -14,7 +14,7 @@ use crate::aggregation::intermediate_agg_result::{
|
||||
IntermediateAggregationResults, IntermediateBucketResult, IntermediateHistogramBucketEntry,
|
||||
};
|
||||
use crate::aggregation::segment_agg_result::SegmentAggregationResultsCollector;
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl};
|
||||
use crate::schema::Type;
|
||||
use crate::{DocId, TantivyError};
|
||||
|
||||
@@ -70,7 +70,7 @@ pub struct HistogramAggregation {
|
||||
/// The interval to chunk your data range. Each bucket spans a value range of [0..interval).
|
||||
/// Must be a positive value.
|
||||
pub interval: f64,
|
||||
/// Intervals implicitely defines an absolute grid of buckets `[interval * k, interval * (k +
|
||||
/// Intervals implicitly defines an absolute grid of buckets `[interval * k, interval * (k +
|
||||
/// 1))`.
|
||||
///
|
||||
/// Offset makes it possible to shift this grid into
|
||||
@@ -263,7 +263,7 @@ impl SegmentHistogramCollector {
|
||||
req: &HistogramAggregation,
|
||||
sub_aggregation: &AggregationsWithAccessor,
|
||||
field_type: Type,
|
||||
accessor: &DynamicFastFieldReader<u64>,
|
||||
accessor: &FastFieldReaderImpl<u64>,
|
||||
) -> crate::Result<Self> {
|
||||
req.validate()?;
|
||||
let min = f64_from_fastfield_u64(accessor.min_value(), &field_type);
|
||||
|
||||
@@ -210,8 +210,8 @@ impl SegmentRangeCollector {
|
||||
let key = range
|
||||
.key
|
||||
.clone()
|
||||
.map(|key| Key::Str(key))
|
||||
.unwrap_or(range_to_key(&range.range, &field_type));
|
||||
.map(Key::Str)
|
||||
.unwrap_or_else(|| range_to_key(&range.range, &field_type));
|
||||
let to = if range.range.end == u64::MAX {
|
||||
None
|
||||
} else {
|
||||
|
||||
@@ -110,8 +110,8 @@ pub struct TermsAggregation {
|
||||
/// Set the order. `String` is here a target, which is either "_count", "_key", or the name of
|
||||
/// a metric sub_aggregation.
|
||||
///
|
||||
/// Single value metrics like average can be adressed by its name.
|
||||
/// Multi value metrics like stats are required to adress their field by name e.g.
|
||||
/// Single value metrics like average can be addressed by its name.
|
||||
/// Multi value metrics like stats are required to address their field by name e.g.
|
||||
/// "stats.avg"
|
||||
///
|
||||
/// Examples in JSON format:
|
||||
|
||||
@@ -39,7 +39,7 @@ impl AggregationCollector {
|
||||
///
|
||||
/// # Purpose
|
||||
/// AggregationCollector returns `IntermediateAggregationResults` and not the final
|
||||
/// `AggregationResults`, so that results from differenct indices can be merged and then converted
|
||||
/// `AggregationResults`, so that results from different indices can be merged and then converted
|
||||
/// into the final `AggregationResults` via the `into_final_result()` method.
|
||||
pub struct DistributedAggregationCollector {
|
||||
agg: Aggregations,
|
||||
|
||||
@@ -43,7 +43,7 @@ impl IntermediateAggregationResults {
|
||||
/// Convert intermediate result and its aggregation request to the final result.
|
||||
///
|
||||
/// Internal function, AggregationsInternal is used instead Aggregations, which is optimized
|
||||
/// for internal processing, by splitting metric and buckets into seperate groups.
|
||||
/// for internal processing, by splitting metric and buckets into separate groups.
|
||||
pub(crate) fn into_final_bucket_result_internal(
|
||||
self,
|
||||
req: &AggregationsInternal,
|
||||
|
||||
@@ -3,7 +3,7 @@ use std::fmt::Debug;
|
||||
use serde::{Deserialize, Serialize};
|
||||
|
||||
use crate::aggregation::f64_from_fastfield_u64;
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl};
|
||||
use crate::schema::Type;
|
||||
use crate::DocId;
|
||||
|
||||
@@ -43,7 +43,7 @@ pub(crate) struct SegmentAverageCollector {
|
||||
}
|
||||
|
||||
impl Debug for SegmentAverageCollector {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
|
||||
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
|
||||
f.debug_struct("AverageCollector")
|
||||
.field("data", &self.data)
|
||||
.finish()
|
||||
@@ -57,7 +57,7 @@ impl SegmentAverageCollector {
|
||||
data: Default::default(),
|
||||
}
|
||||
}
|
||||
pub(crate) fn collect_block(&mut self, doc: &[DocId], field: &DynamicFastFieldReader<u64>) {
|
||||
pub(crate) fn collect_block(&mut self, doc: &[DocId], field: &FastFieldReaderImpl<u64>) {
|
||||
let mut iter = doc.chunks_exact(4);
|
||||
for docs in iter.by_ref() {
|
||||
let val1 = field.get(docs[0]);
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
use serde::{Deserialize, Serialize};
|
||||
|
||||
use crate::aggregation::f64_from_fastfield_u64;
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl};
|
||||
use crate::schema::Type;
|
||||
use crate::{DocId, TantivyError};
|
||||
|
||||
@@ -163,7 +163,7 @@ impl SegmentStatsCollector {
|
||||
stats: IntermediateStats::default(),
|
||||
}
|
||||
}
|
||||
pub(crate) fn collect_block(&mut self, doc: &[DocId], field: &DynamicFastFieldReader<u64>) {
|
||||
pub(crate) fn collect_block(&mut self, doc: &[DocId], field: &FastFieldReaderImpl<u64>) {
|
||||
let mut iter = doc.chunks_exact(4);
|
||||
for docs in iter.by_ref() {
|
||||
let val1 = field.get(docs[0]);
|
||||
|
||||
@@ -12,7 +12,7 @@
|
||||
use std::marker::PhantomData;
|
||||
|
||||
use crate::collector::{Collector, SegmentCollector};
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader, FastValue};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl, FastValue};
|
||||
use crate::schema::Field;
|
||||
use crate::{Score, SegmentReader, TantivyError};
|
||||
|
||||
@@ -158,7 +158,7 @@ where
|
||||
TPredicate: 'static,
|
||||
TPredicateValue: FastValue,
|
||||
{
|
||||
fast_field_reader: DynamicFastFieldReader<TPredicateValue>,
|
||||
fast_field_reader: FastFieldReaderImpl<TPredicateValue>,
|
||||
segment_collector: TSegmentCollector,
|
||||
predicate: TPredicate,
|
||||
t_predicate_value: PhantomData<TPredicateValue>,
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
use fastdivide::DividerU64;
|
||||
|
||||
use crate::collector::{Collector, SegmentCollector};
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader, FastValue};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl, FastValue};
|
||||
use crate::schema::{Field, Type};
|
||||
use crate::{DocId, Score};
|
||||
|
||||
@@ -84,7 +84,7 @@ impl HistogramComputer {
|
||||
}
|
||||
pub struct SegmentHistogramCollector {
|
||||
histogram_computer: HistogramComputer,
|
||||
ff_reader: DynamicFastFieldReader<u64>,
|
||||
ff_reader: FastFieldReaderImpl<u64>,
|
||||
}
|
||||
|
||||
impl SegmentCollector for SegmentHistogramCollector {
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
use super::*;
|
||||
use crate::collector::{Count, FilterCollector, TopDocs};
|
||||
use crate::core::SegmentReader;
|
||||
use crate::fastfield::{BytesFastFieldReader, DynamicFastFieldReader, FastFieldReader};
|
||||
use crate::fastfield::{BytesFastFieldReader, FastFieldReader, FastFieldReaderImpl};
|
||||
use crate::query::{AllQuery, QueryParser};
|
||||
use crate::schema::{Field, Schema, FAST, TEXT};
|
||||
use crate::time::format_description::well_known::Rfc3339;
|
||||
@@ -156,7 +156,7 @@ pub struct FastFieldTestCollector {
|
||||
|
||||
pub struct FastFieldSegmentCollector {
|
||||
vals: Vec<u64>,
|
||||
reader: DynamicFastFieldReader<u64>,
|
||||
reader: FastFieldReaderImpl<u64>,
|
||||
}
|
||||
|
||||
impl FastFieldTestCollector {
|
||||
|
||||
@@ -9,7 +9,7 @@ use crate::collector::tweak_score_top_collector::TweakedScoreTopCollector;
|
||||
use crate::collector::{
|
||||
CustomScorer, CustomSegmentScorer, ScoreSegmentTweaker, ScoreTweaker, SegmentCollector,
|
||||
};
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader, FastValue};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl, FastValue};
|
||||
use crate::query::Weight;
|
||||
use crate::schema::Field;
|
||||
use crate::{DocAddress, DocId, Score, SegmentOrdinal, SegmentReader, TantivyError};
|
||||
@@ -129,7 +129,7 @@ impl fmt::Debug for TopDocs {
|
||||
}
|
||||
|
||||
struct ScorerByFastFieldReader {
|
||||
ff_reader: DynamicFastFieldReader<u64>,
|
||||
ff_reader: FastFieldReaderImpl<u64>,
|
||||
}
|
||||
|
||||
impl CustomSegmentScorer<u64> for ScorerByFastFieldReader {
|
||||
@@ -499,7 +499,7 @@ impl TopDocs {
|
||||
///
|
||||
/// This method only makes it possible to compute the score from a given
|
||||
/// `DocId`, fastfield values for the doc and any information you could
|
||||
/// have precomputed beforehands. It does not make it possible for instance
|
||||
/// have precomputed beforehand. It does not make it possible for instance
|
||||
/// to compute something like TfIdf as it does not have access to the list of query
|
||||
/// terms present in the document, nor the term frequencies for the different terms.
|
||||
///
|
||||
|
||||
@@ -311,7 +311,7 @@ pub struct IndexMeta {
|
||||
/// `IndexSettings` to configure index options.
|
||||
#[serde(default)]
|
||||
pub index_settings: IndexSettings,
|
||||
/// List of `SegmentMeta` informations associated to each finalized segment of the index.
|
||||
/// List of `SegmentMeta` information associated to each finalized segment of the index.
|
||||
pub segments: Vec<SegmentMeta>,
|
||||
/// Index `Schema`
|
||||
pub schema: Schema,
|
||||
|
||||
@@ -230,4 +230,13 @@ impl InvertedIndexReader {
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Returns the number of documents containing the term asynchronously.
|
||||
pub async fn doc_freq_async(&self, term: &Term) -> crate::AsyncIoResult<u32> {
|
||||
Ok(self
|
||||
.get_term_info_async(term)
|
||||
.await?
|
||||
.map(|term_info| term_info.doc_freq)
|
||||
.unwrap_or(0u32))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -134,6 +134,19 @@ impl Searcher {
Ok(total_doc_freq)
}

/// Return the overall number of documents containing
/// the given term in an asynchronous manner.
#[cfg(feature = "quickwit")]
pub async fn doc_freq_async(&self, term: &Term) -> crate::Result<u64> {
let mut total_doc_freq = 0;
for segment_reader in &self.inner.segment_readers {
let inverted_index = segment_reader.inverted_index(term.field())?;
let doc_freq = inverted_index.doc_freq_async(term).await?;
total_doc_freq += u64::from(doc_freq);
}
Ok(total_doc_freq)
}

/// Return the list of segment readers
pub fn segment_readers(&self) -> &[SegmentReader] {
&self.inner.segment_readers

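Both variants simply sum the per-segment document frequencies; only the term-dictionary lookup is asynchronous in the new one. A minimal sketch of the synchronous path (the field name and document are made up; `doc_freq_async` additionally needs the `quickwit` feature and an async runtime):

```rust
use tantivy::schema::{Schema, TEXT};
use tantivy::{doc, Index, Term};

fn doc_freq_example() -> tantivy::Result<u64> {
    let mut schema_builder = Schema::builder();
    let title = schema_builder.add_text_field("title", TEXT);
    let index = Index::create_in_ram(schema_builder.build());

    let mut writer = index.writer(15_000_000)?;
    let _ = writer.add_document(doc!(title => "a diary of a madman"));
    writer.commit()?;

    // Number of documents containing the token "diary" in `title`,
    // summed over every segment of the index.
    let searcher = index.reader()?.searcher();
    searcher.doc_freq(&Term::from_field_text(title, "diary"))
}
```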
@@ -16,7 +16,7 @@ use uuid::Uuid;
|
||||
/// by a UUID which is used to prefix the filenames
|
||||
/// of all of the file associated with the segment.
|
||||
///
|
||||
/// In unit test, for reproducability, the `SegmentId` are
|
||||
/// In unit test, for reproducibility, the `SegmentId` are
|
||||
/// simply generated in an autoincrement fashion.
|
||||
#[derive(Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
|
||||
pub struct SegmentId(Uuid);
|
||||
|
||||
@@ -45,7 +45,7 @@ pub static INDEX_WRITER_LOCK: Lazy<Lock> = Lazy::new(|| Lock {
|
||||
/// The meta lock file is here to protect the segment files being opened by
|
||||
/// `IndexReader::reload()` from being garbage collected.
|
||||
/// It makes it possible for another process to safely consume
|
||||
/// our index in-writing. Ideally, we may have prefered `RWLock` semantics
|
||||
/// our index in-writing. Ideally, we may have preferred `RWLock` semantics
|
||||
/// here, but it is difficult to achieve on Windows.
|
||||
///
|
||||
/// Opening segment readers is a very fast process.
|
||||
|
||||
@@ -112,7 +112,7 @@ impl FileSlice {
|
||||
|
||||
/// Returns a `OwnedBytes` with all of the data in the `FileSlice`.
|
||||
///
|
||||
/// The behavior is strongly dependant on the implementation of the underlying
|
||||
/// The behavior is strongly dependent on the implementation of the underlying
|
||||
/// `Directory` and the `FileSliceTrait` it creates.
|
||||
/// In particular, it is up to the `Directory` implementation
|
||||
/// to handle caching if needed.
|
||||
|
||||
@@ -114,7 +114,7 @@ impl ManagedDirectory {
|
||||
let mut files_to_delete = vec![];
|
||||
|
||||
// It is crucial to get the living files after acquiring the
|
||||
// read lock of meta informations. That way, we
|
||||
// read lock of meta information. That way, we
|
||||
// avoid the following scenario.
|
||||
//
|
||||
// 1) we get the list of living files.
|
||||
|
||||
@@ -40,7 +40,7 @@ impl Drop for VecWriter {
|
||||
fn drop(&mut self) {
|
||||
if !self.is_flushed {
|
||||
warn!(
|
||||
"You forgot to flush {:?} before its writter got Drop. Do not rely on drop. This \
|
||||
"You forgot to flush {:?} before its writer got Drop. Do not rely on drop. This \
|
||||
also occurs when the indexer crashed, so you may want to check the logs for the \
|
||||
root cause.",
|
||||
self.path
|
||||
|
||||
@@ -247,7 +247,7 @@ fn test_lock_blocking(directory: &dyn Directory) {
|
||||
//< lock_a_res is sent to the thread.
|
||||
in_thread_clone.store(true, SeqCst);
|
||||
let _just_sync = receiver.recv();
|
||||
// explicitely dropping lock_a_res. It would have been sufficient to just force it
|
||||
// explicitly dropping lock_a_res. It would have been sufficient to just force it
|
||||
// to be part of the move, but the intent seems clearer that way.
|
||||
drop(lock_a_res);
|
||||
});
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
use crate::directory::{FileSlice, OwnedBytes};
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader, MultiValueLength};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl, MultiValueLength};
|
||||
use crate::DocId;
|
||||
|
||||
/// Reader for byte array fast fields
|
||||
@@ -14,13 +14,13 @@ use crate::DocId;
|
||||
/// and the start index for the next document, and keeping the bytes in between.
|
||||
#[derive(Clone)]
|
||||
pub struct BytesFastFieldReader {
|
||||
idx_reader: DynamicFastFieldReader<u64>,
|
||||
idx_reader: FastFieldReaderImpl<u64>,
|
||||
values: OwnedBytes,
|
||||
}
|
||||
|
||||
impl BytesFastFieldReader {
|
||||
pub(crate) fn open(
|
||||
idx_reader: DynamicFastFieldReader<u64>,
|
||||
idx_reader: FastFieldReaderImpl<u64>,
|
||||
values_file: FileSlice,
|
||||
) -> crate::Result<BytesFastFieldReader> {
|
||||
let values = values_file.read_bytes()?;
|
||||
|
||||
@@ -1,224 +0,0 @@
|
||||
use std::io::{self, Write};
|
||||
|
||||
use common::BinarySerializable;
|
||||
use fastdivide::DividerU64;
|
||||
use fastfield_codecs::FastFieldCodecReader;
|
||||
use gcd::Gcd;
|
||||
|
||||
pub const GCD_DEFAULT: u64 = 1;
|
||||
pub const GCD_CODEC_ID: u8 = 4;
|
||||
|
||||
/// Wrapper for accessing a fastfield.
|
||||
///
|
||||
/// Holds the data and the codec to the read the data.
|
||||
#[derive(Clone)]
|
||||
pub struct GCDFastFieldCodec<CodecReader> {
|
||||
gcd: u64,
|
||||
min_value: u64,
|
||||
reader: CodecReader,
|
||||
}
|
||||
impl<C: FastFieldCodecReader + Clone> FastFieldCodecReader for GCDFastFieldCodec<C> {
|
||||
/// Opens a fast field given the bytes.
|
||||
fn open_from_bytes(bytes: &[u8]) -> std::io::Result<Self> {
|
||||
let (header, mut footer) = bytes.split_at(bytes.len() - 16);
|
||||
let gcd = u64::deserialize(&mut footer)?;
|
||||
let min_value = u64::deserialize(&mut footer)?;
|
||||
let reader = C::open_from_bytes(header)?;
|
||||
|
||||
Ok(GCDFastFieldCodec {
|
||||
gcd,
|
||||
min_value,
|
||||
reader,
|
||||
})
|
||||
}
|
||||
|
||||
#[inline]
fn get_u64(&self, doc: u64, data: &[u8]) -> u64 {
let mut data = self.reader.get_u64(doc, data);
data *= self.gcd;
data += self.min_value;
data
}

fn min_value(&self) -> u64 {
self.min_value + self.reader.min_value() * self.gcd
}

fn max_value(&self) -> u64 {
self.min_value + self.reader.max_value() * self.gcd
}
}

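`get_u64` above is the inverse of the normalization applied at write time: the serializer stores `(val - min_value) / gcd` and the reader multiplies by the gcd and adds the minimum back. A small self-contained sketch of that round trip (plain functions, not the actual codec types):

```rust
fn gcd_encode(val: u64, min_value: u64, gcd: u64) -> u64 {
    (val - min_value) / gcd
}

fn gcd_decode(stored: u64, min_value: u64, gcd: u64) -> u64 {
    stored * gcd + min_value
}

fn main() {
    // 1000, 2000, 3000 share min_value = 1000 and gcd = 1000, so the wrapped
    // codec only has to bitpack the small values 0, 1 and 2.
    let (min_value, gcd) = (1000u64, 1000u64);
    for val in [1000u64, 2000, 3000] {
        let stored = gcd_encode(val, min_value, gcd);
        assert_eq!(gcd_decode(stored, min_value, gcd), val);
    }
}
```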
pub fn write_gcd_header<W: Write>(field_write: &mut W, min_value: u64, gcd: u64) -> io::Result<()> {
|
||||
gcd.serialize(field_write)?;
|
||||
min_value.serialize(field_write)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// Find GCD for iterator of numbers
|
||||
pub fn find_gcd(numbers: impl Iterator<Item = u64>) -> Option<u64> {
|
||||
let mut numbers = numbers.filter(|n| *n != 0);
|
||||
let mut gcd = numbers.next()?;
|
||||
if gcd == 1 {
|
||||
return Some(1);
|
||||
}
|
||||
|
||||
let mut gcd_divider = DividerU64::divide_by(gcd);
|
||||
for val in numbers {
|
||||
let remainder = val - (gcd_divider.divide(val)) * gcd;
|
||||
if remainder == 0 {
|
||||
continue;
|
||||
}
|
||||
gcd = gcd.gcd(val);
|
||||
if gcd == 1 {
|
||||
return Some(1);
|
||||
}
|
||||
|
||||
gcd_divider = DividerU64::divide_by(gcd);
|
||||
}
|
||||
Some(gcd)
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use std::collections::HashMap;
|
||||
use std::path::Path;
|
||||
|
||||
use common::HasLen;
|
||||
|
||||
use crate::directory::{CompositeFile, RamDirectory, WritePtr};
|
||||
use crate::fastfield::serializer::FastFieldCodecEnableCheck;
|
||||
use crate::fastfield::tests::{FIELD, FIELDI64, SCHEMA, SCHEMAI64};
|
||||
use crate::fastfield::{
|
||||
find_gcd, CompositeFastFieldSerializer, DynamicFastFieldReader, FastFieldCodecName,
|
||||
FastFieldReader, FastFieldsWriter, ALL_CODECS,
|
||||
};
|
||||
use crate::schema::Schema;
|
||||
use crate::Directory;
|
||||
|
||||
fn get_index(
|
||||
docs: &[crate::Document],
|
||||
schema: &Schema,
|
||||
codec_enable_checker: FastFieldCodecEnableCheck,
|
||||
) -> crate::Result<RamDirectory> {
|
||||
let directory: RamDirectory = RamDirectory::create();
|
||||
{
|
||||
let write: WritePtr = directory.open_write(Path::new("test")).unwrap();
|
||||
let mut serializer =
|
||||
CompositeFastFieldSerializer::from_write_with_codec(write, codec_enable_checker)
|
||||
.unwrap();
|
||||
let mut fast_field_writers = FastFieldsWriter::from_schema(schema);
|
||||
for doc in docs {
|
||||
fast_field_writers.add_document(doc);
|
||||
}
|
||||
fast_field_writers
|
||||
.serialize(&mut serializer, &HashMap::new(), None)
|
||||
.unwrap();
|
||||
serializer.close().unwrap();
|
||||
}
|
||||
Ok(directory)
|
||||
}
|
||||
|
||||
fn test_fastfield_gcd_i64_with_codec(
|
||||
codec_name: FastFieldCodecName,
|
||||
num_vals: usize,
|
||||
) -> crate::Result<()> {
|
||||
let path = Path::new("test");
|
||||
let mut docs = vec![];
|
||||
for i in 1..=num_vals {
|
||||
let val = i as i64 * 1000i64;
|
||||
docs.push(doc!(*FIELDI64=>val));
|
||||
}
|
||||
let directory = get_index(&docs, &SCHEMAI64, codec_name.clone().into())?;
|
||||
let file = directory.open_read(path).unwrap();
|
||||
// assert_eq!(file.len(), 118);
|
||||
let composite_file = CompositeFile::open(&file)?;
|
||||
let file = composite_file.open_read(*FIELD).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<i64>::open(file)?;
|
||||
assert_eq!(fast_field_reader.get(0), 1000i64);
|
||||
assert_eq!(fast_field_reader.get(1), 2000i64);
|
||||
assert_eq!(fast_field_reader.get(2), 3000i64);
|
||||
assert_eq!(fast_field_reader.max_value(), num_vals as i64 * 1000);
|
||||
assert_eq!(fast_field_reader.min_value(), 1000i64);
|
||||
let file = directory.open_read(path).unwrap();
|
||||
|
||||
// Can't apply gcd
|
||||
let path = Path::new("test");
|
||||
docs.pop();
|
||||
docs.push(doc!(*FIELDI64=>2001i64));
|
||||
let directory = get_index(&docs, &SCHEMAI64, codec_name.into())?;
|
||||
let file2 = directory.open_read(path).unwrap();
|
||||
assert!(file2.len() > file.len());
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_fastfield_gcd_i64() -> crate::Result<()> {
|
||||
for codec_name in ALL_CODECS {
|
||||
test_fastfield_gcd_i64_with_codec(codec_name.clone(), 5005)?;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn test_fastfield_gcd_u64_with_codec(
|
||||
codec_name: FastFieldCodecName,
|
||||
num_vals: usize,
|
||||
) -> crate::Result<()> {
|
||||
let path = Path::new("test");
|
||||
let mut docs = vec![];
|
||||
for i in 1..=num_vals {
|
||||
let val = i as u64 * 1000u64;
|
||||
docs.push(doc!(*FIELD=>val));
|
||||
}
|
||||
let directory = get_index(&docs, &SCHEMA, codec_name.clone().into())?;
|
||||
let file = directory.open_read(path).unwrap();
|
||||
// assert_eq!(file.len(), 118);
|
||||
let composite_file = CompositeFile::open(&file)?;
|
||||
let file = composite_file.open_read(*FIELD).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<u64>::open(file)?;
|
||||
assert_eq!(fast_field_reader.get(0), 1000u64);
|
||||
assert_eq!(fast_field_reader.get(1), 2000u64);
|
||||
assert_eq!(fast_field_reader.get(2), 3000u64);
|
||||
assert_eq!(fast_field_reader.max_value(), num_vals as u64 * 1000);
|
||||
assert_eq!(fast_field_reader.min_value(), 1000u64);
|
||||
let file = directory.open_read(path).unwrap();
|
||||
|
||||
// Can't apply gcd
|
||||
let path = Path::new("test");
|
||||
docs.pop();
|
||||
docs.push(doc!(*FIELDI64=>2001u64));
|
||||
let directory = get_index(&docs, &SCHEMA, codec_name.into())?;
|
||||
let file2 = directory.open_read(path).unwrap();
|
||||
assert!(file2.len() > file.len());
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_fastfield_gcd_u64() -> crate::Result<()> {
|
||||
for codec_name in ALL_CODECS {
|
||||
test_fastfield_gcd_u64_with_codec(codec_name.clone(), 5005)?;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
#[test]
|
||||
pub fn test_fastfield2() {
|
||||
let test_fastfield = DynamicFastFieldReader::<u64>::from(vec![100, 200, 300]);
|
||||
assert_eq!(test_fastfield.get(0), 100);
|
||||
assert_eq!(test_fastfield.get(1), 200);
|
||||
assert_eq!(test_fastfield.get(2), 300);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn find_gcd_test() {
|
||||
assert_eq!(find_gcd([0].into_iter()), None);
|
||||
assert_eq!(find_gcd([0, 10].into_iter()), Some(10));
|
||||
assert_eq!(find_gcd([10, 0].into_iter()), Some(10));
|
||||
assert_eq!(find_gcd([].into_iter()), None);
|
||||
assert_eq!(find_gcd([15, 30, 5, 10].into_iter()), Some(5));
|
||||
assert_eq!(find_gcd([15, 16, 10].into_iter()), Some(1));
|
||||
assert_eq!(find_gcd([0, 5, 5, 5].into_iter()), Some(5));
|
||||
}
|
||||
}
|
||||
@@ -20,16 +20,18 @@
|
||||
//!
|
||||
//! Read access performance is comparable to that of an array lookup.
|
||||
|
||||
use fastfield_codecs::dynamic::DynamicFastFieldCodec;
|
||||
|
||||
pub use self::alive_bitset::{intersect_alive_bitsets, write_alive_bitset, AliveBitSet};
|
||||
pub use self::bytes::{BytesFastFieldReader, BytesFastFieldWriter};
|
||||
pub use self::error::{FastFieldNotAvailableError, Result};
|
||||
pub use self::facet_reader::FacetReader;
|
||||
pub(crate) use self::gcd::{find_gcd, GCDFastFieldCodec, GCD_CODEC_ID, GCD_DEFAULT};
|
||||
pub use self::multivalued::{MultiValuedFastFieldReader, MultiValuedFastFieldWriter};
|
||||
pub use self::reader::{DynamicFastFieldReader, FastFieldReader};
|
||||
pub use self::reader::FastFieldReader;
|
||||
pub use self::readers::FastFieldReaders;
|
||||
pub(crate) use self::readers::{type_and_cardinality, FastType};
|
||||
pub use self::serializer::{CompositeFastFieldSerializer, FastFieldDataAccess, FastFieldStats};
|
||||
pub use self::serializer::{CompositeFastFieldSerializer, FastFieldStats};
|
||||
pub use self::wrapper::FastFieldReaderWrapper;
|
||||
pub use self::writer::{FastFieldsWriter, IntFastFieldWriter};
|
||||
use crate::schema::{Cardinality, FieldType, Type, Value};
|
||||
use crate::{DateTime, DocId};
|
||||
@@ -38,25 +40,13 @@ mod alive_bitset;
|
||||
mod bytes;
|
||||
mod error;
|
||||
mod facet_reader;
|
||||
mod gcd;
|
||||
mod multivalued;
|
||||
mod reader;
|
||||
mod readers;
|
||||
mod serializer;
|
||||
mod wrapper;
|
||||
mod writer;
|
||||
|
||||
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Clone)]
|
||||
pub(crate) enum FastFieldCodecName {
|
||||
Bitpacked,
|
||||
LinearInterpol,
|
||||
BlockwiseLinearInterpol,
|
||||
}
|
||||
pub(crate) const ALL_CODECS: &[FastFieldCodecName; 3] = &[
|
||||
FastFieldCodecName::Bitpacked,
|
||||
FastFieldCodecName::LinearInterpol,
|
||||
FastFieldCodecName::BlockwiseLinearInterpol,
|
||||
];
|
||||
|
||||
/// Trait for `BytesFastFieldReader` and `MultiValuedFastFieldReader` to return the length of data
|
||||
/// for a doc_id
|
||||
pub trait MultiValueLength {
|
||||
@@ -126,6 +116,9 @@ impl FastValue for u64 {
|
||||
}
|
||||
}
|
||||
|
||||
// TODO rename
|
||||
pub type FastFieldReaderImpl<V> = FastFieldReaderWrapper<V, DynamicFastFieldCodec>;
|
||||
|
||||
impl FastValue for i64 {
|
||||
fn from_u64(val: u64) -> Self {
|
||||
common::u64_to_i64(val)
|
||||
@@ -290,18 +283,11 @@ mod tests {
|
||||
schema_builder.build()
|
||||
});
|
||||
|
||||
pub static SCHEMAI64: Lazy<Schema> = Lazy::new(|| {
|
||||
let mut schema_builder = Schema::builder();
|
||||
schema_builder.add_i64_field("field", FAST);
|
||||
schema_builder.build()
|
||||
});
|
||||
|
||||
pub static FIELD: Lazy<Field> = Lazy::new(|| SCHEMA.get_field("field").unwrap());
|
||||
pub static FIELDI64: Lazy<Field> = Lazy::new(|| SCHEMAI64.get_field("field").unwrap());
|
||||
|
||||
#[test]
|
||||
pub fn test_fastfield() {
|
||||
let test_fastfield = DynamicFastFieldReader::<u64>::from(vec![100, 200, 300]);
|
||||
let test_fastfield = FastFieldReaderImpl::<u64>::from(&[100, 200, 300]);
|
||||
assert_eq!(test_fastfield.get(0), 100);
|
||||
assert_eq!(test_fastfield.get(1), 200);
|
||||
assert_eq!(test_fastfield.get(2), 300);
|
||||
@@ -333,7 +319,7 @@ mod tests {
|
||||
assert_eq!(file.len(), 37);
|
||||
let composite_file = CompositeFile::open(&file)?;
|
||||
let file = composite_file.open_read(*FIELD).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<u64>::open(file)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<u64>::open(file)?;
|
||||
assert_eq!(fast_field_reader.get(0), 13u64);
|
||||
assert_eq!(fast_field_reader.get(1), 14u64);
|
||||
assert_eq!(fast_field_reader.get(2), 2u64);
|
||||
@@ -365,7 +351,7 @@ mod tests {
|
||||
{
|
||||
let fast_fields_composite = CompositeFile::open(&file)?;
|
||||
let data = fast_fields_composite.open_read(*FIELD).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<u64>::open(data)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<u64>::open(data)?;
|
||||
assert_eq!(fast_field_reader.get(0), 4u64);
|
||||
assert_eq!(fast_field_reader.get(1), 14_082_001u64);
|
||||
assert_eq!(fast_field_reader.get(2), 3_052u64);
|
||||
@@ -401,7 +387,7 @@ mod tests {
|
||||
{
|
||||
let fast_fields_composite = CompositeFile::open(&file).unwrap();
|
||||
let data = fast_fields_composite.open_read(*FIELD).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<u64>::open(data)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<u64>::open(data)?;
|
||||
for doc in 0..10_000 {
|
||||
assert_eq!(fast_field_reader.get(doc), 100_000u64);
|
||||
}
|
||||
@@ -433,7 +419,7 @@ mod tests {
|
||||
{
|
||||
let fast_fields_composite = CompositeFile::open(&file)?;
|
||||
let data = fast_fields_composite.open_read(*FIELD).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<u64>::open(data)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<u64>::open(data)?;
|
||||
assert_eq!(fast_field_reader.get(0), 0u64);
|
||||
for doc in 1..10_001 {
|
||||
assert_eq!(
|
||||
@@ -473,7 +459,7 @@ mod tests {
|
||||
{
|
||||
let fast_fields_composite = CompositeFile::open(&file)?;
|
||||
let data = fast_fields_composite.open_read(i64_field).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<i64>::open(data)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<i64>::open(data)?;
|
||||
|
||||
assert_eq!(fast_field_reader.min_value(), -100i64);
|
||||
assert_eq!(fast_field_reader.max_value(), 9_999i64);
|
||||
@@ -513,7 +499,7 @@ mod tests {
|
||||
{
|
||||
let fast_fields_composite = CompositeFile::open(&file).unwrap();
|
||||
let data = fast_fields_composite.open_read(i64_field).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<i64>::open(data)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<i64>::open(data)?;
|
||||
assert_eq!(fast_field_reader.get(0u32), 0i64);
|
||||
}
|
||||
Ok(())
|
||||
@@ -551,7 +537,7 @@ mod tests {
|
||||
{
|
||||
let fast_fields_composite = CompositeFile::open(&file)?;
|
||||
let data = fast_fields_composite.open_read(*FIELD).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<u64>::open(data)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<u64>::open(data)?;
|
||||
|
||||
for a in 0..n {
|
||||
assert_eq!(fast_field_reader.get(a as u32), permutation[a as usize]);
|
||||
@@ -868,7 +854,7 @@ mod tests {
|
||||
|
||||
#[test]
|
||||
pub fn test_fastfield_bool() {
|
||||
let test_fastfield = DynamicFastFieldReader::<bool>::from(vec![true, false, true, false]);
|
||||
let test_fastfield = FastFieldReaderImpl::<bool>::from(&[true, false, true, false]);
|
||||
assert_eq!(test_fastfield.get(0), true);
|
||||
assert_eq!(test_fastfield.get(1), false);
|
||||
assert_eq!(test_fastfield.get(2), true);
|
||||
@@ -902,7 +888,7 @@ mod tests {
|
||||
assert_eq!(file.len(), 36);
|
||||
let composite_file = CompositeFile::open(&file)?;
|
||||
let file = composite_file.open_read(field).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<bool>::open(file)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<bool>::open(file)?;
|
||||
assert_eq!(fast_field_reader.get(0), true);
|
||||
assert_eq!(fast_field_reader.get(1), false);
|
||||
assert_eq!(fast_field_reader.get(2), true);
|
||||
@@ -938,7 +924,7 @@ mod tests {
|
||||
assert_eq!(file.len(), 48);
|
||||
let composite_file = CompositeFile::open(&file)?;
|
||||
let file = composite_file.open_read(field).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<bool>::open(file)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<bool>::open(file)?;
|
||||
for i in 0..25 {
|
||||
assert_eq!(fast_field_reader.get(i * 2), true);
|
||||
assert_eq!(fast_field_reader.get(i * 2 + 1), false);
|
||||
@@ -972,7 +958,7 @@ mod tests {
|
||||
assert_eq!(file.len(), 35);
|
||||
let composite_file = CompositeFile::open(&file)?;
|
||||
let file = composite_file.open_read(field).unwrap();
|
||||
let fast_field_reader = DynamicFastFieldReader::<bool>::open(file)?;
|
||||
let fast_field_reader = FastFieldReaderImpl::<bool>::open(file)?;
|
||||
assert_eq!(fast_field_reader.get(0), false);
|
||||
|
||||
Ok(())
|
||||
|
||||
@@ -346,26 +346,32 @@ mod tests {
|
||||
assert!(test_multivalued_no_panic(&ops[..]).is_ok());
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_proptest_merge_multivalued_bug() {
|
||||
use IndexingOp::*;
|
||||
let ops = &[AddDoc { id: 7 }, AddDoc { id: 4 }, Merge];
|
||||
assert!(test_multivalued_no_panic(ops).is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_multivalued_proptest_gcd() {
|
||||
use IndexingOp::*;
|
||||
let ops = [AddDoc { id: 9 }, AddDoc { id: 9 }, Merge];
|
||||
|
||||
assert!(test_multivalued_no_panic(&ops[..]).is_ok());
|
||||
let ops = &[AddDoc { id: 9 }, AddDoc { id: 9 }, Merge];
|
||||
assert!(test_multivalued_no_panic(ops).is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_multivalued_proptest_off_by_one_bug_1151() {
|
||||
use IndexingOp::*;
|
||||
let ops = [
|
||||
let ops = &[
|
||||
AddDoc { id: 3 },
|
||||
AddDoc { id: 1 },
|
||||
AddDoc { id: 3 },
|
||||
Commit,
|
||||
Merge,
|
||||
];
|
||||
|
||||
assert!(test_multivalued_no_panic(&ops[..]).is_ok());
|
||||
assert!(test_multivalued_no_panic(ops).is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
use std::ops::Range;
|
||||
|
||||
use crate::fastfield::{DynamicFastFieldReader, FastFieldReader, FastValue, MultiValueLength};
|
||||
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl, FastValue, MultiValueLength};
|
||||
use crate::DocId;
|
||||
|
||||
/// Reader for a multivalued `u64` fast field.
|
||||
@@ -12,14 +12,14 @@ use crate::DocId;
/// The `idx_reader` associated, for each document, the index of its first value.
#[derive(Clone)]
pub struct MultiValuedFastFieldReader<Item: FastValue> {
idx_reader: DynamicFastFieldReader<u64>,
vals_reader: DynamicFastFieldReader<Item>,
idx_reader: FastFieldReaderImpl<u64>,
vals_reader: FastFieldReaderImpl<Item>,
}

impl<Item: FastValue> MultiValuedFastFieldReader<Item> {
pub(crate) fn open(
idx_reader: DynamicFastFieldReader<u64>,
vals_reader: DynamicFastFieldReader<Item>,
idx_reader: FastFieldReaderImpl<u64>,
vals_reader: FastFieldReaderImpl<Item>,
) -> MultiValuedFastFieldReader<Item> {
MultiValuedFastFieldReader {
idx_reader,
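The two readers implement the offsets-plus-values layout described further down in the reader/wrapper modules: `idx_reader` stores, for each document, the start offset of its values inside `vals_reader`, and the values of doc `d` are the slice between offsets `d` and `d + 1`. A tiny illustrative sketch with plain vectors (not the actual fast field types):

```rust
fn doc_vals<'a>(doc: usize, idx: &[u64], vals: &'a [u64]) -> &'a [u64] {
    let start = idx[doc] as usize;
    let stop = idx[doc + 1] as usize;
    &vals[start..stop]
}

fn main() {
    // Flattened layout for three documents: doc 0 -> [7, 9], doc 1 -> [], doc 2 -> [3].
    let idx: Vec<u64> = vec![0, 2, 2, 3]; // one start offset per doc, plus the final end offset
    let vals: Vec<u64> = vec![7, 9, 3];

    assert_eq!(doc_vals(0, &idx, &vals), &[7, 9]);
    assert!(doc_vals(1, &idx, &vals).is_empty());
    assert_eq!(doc_vals(2, &idx, &vals), &[3]);
}
```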
@@ -55,7 +55,7 @@ impl<Item: FastValue> MultiValuedFastFieldReader<Item> {
|
||||
///
|
||||
/// The min value does not take in account of possible
|
||||
/// deleted document, and should be considered as a lower bound
|
||||
/// of the actual mimimum value.
|
||||
/// of the actual minimum value.
|
||||
pub fn min_value(&self) -> Item {
|
||||
self.vals_reader.min_value()
|
||||
}
|
||||
|
||||
@@ -1,26 +1,8 @@
|
||||
use std::collections::HashMap;
|
||||
use std::marker::PhantomData;
|
||||
use std::path::Path;
|
||||
|
||||
use fastfield_codecs::bitpacked::{
|
||||
BitpackedFastFieldReader as BitpackedReader, BitpackedFastFieldSerializer,
|
||||
};
|
||||
use fastfield_codecs::linearinterpol::{
|
||||
LinearInterpolFastFieldReader, LinearInterpolFastFieldSerializer,
|
||||
};
|
||||
use fastfield_codecs::multilinearinterpol::{
|
||||
MultiLinearInterpolFastFieldReader, MultiLinearInterpolFastFieldSerializer,
|
||||
};
|
||||
use fastfield_codecs::{FastFieldCodecReader, FastFieldCodecSerializer};
|
||||
|
||||
use super::{FastValue, GCDFastFieldCodec, GCD_CODEC_ID};
|
||||
use crate::directory::{CompositeFile, Directory, FileSlice, OwnedBytes, RamDirectory, WritePtr};
|
||||
use crate::fastfield::{CompositeFastFieldSerializer, FastFieldsWriter};
|
||||
use crate::schema::{Schema, FAST};
|
||||
use super::FastValue;
|
||||
use crate::DocId;
|
||||
|
||||
/// FastFieldReader is the trait to access fast field data.
|
||||
pub trait FastFieldReader<Item: FastValue>: Clone {
|
||||
pub trait FastFieldReader<Item: FastValue> {
|
||||
/// Return the value associated to the given document.
|
||||
///
|
||||
/// This accessor should return as fast as possible.
|
||||
@@ -49,7 +31,7 @@ pub trait FastFieldReader<Item: FastValue>: Clone {
|
||||
///
|
||||
/// The min value does not take in account of possible
|
||||
/// deleted document, and should be considered as a lower bound
|
||||
/// of the actual mimimum value.
|
||||
/// of the actual minimum value.
|
||||
fn min_value(&self) -> Item;
|
||||
|
||||
/// Returns the maximum value for this fast field.
|
||||
@@ -59,298 +41,3 @@ pub trait FastFieldReader<Item: FastValue>: Clone {
|
||||
/// of the actual maximum value.
|
||||
fn max_value(&self) -> Item;
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
/// DynamicFastFieldReader wraps different readers to access
|
||||
/// the various encoded fastfield data
|
||||
pub enum DynamicFastFieldReader<Item: FastValue> {
|
||||
/// Bitpacked compressed fastfield data.
|
||||
Bitpacked(FastFieldReaderCodecWrapper<Item, BitpackedReader>),
|
||||
/// Linear interpolated values + bitpacked
|
||||
LinearInterpol(FastFieldReaderCodecWrapper<Item, LinearInterpolFastFieldReader>),
|
||||
/// Blockwise linear interpolated values + bitpacked
|
||||
MultiLinearInterpol(FastFieldReaderCodecWrapper<Item, MultiLinearInterpolFastFieldReader>),
|
||||
|
||||
/// GCD and Bitpacked compressed fastfield data.
|
||||
BitpackedGCD(FastFieldReaderCodecWrapper<Item, GCDFastFieldCodec<BitpackedReader>>),
|
||||
/// GCD and Linear interpolated values + bitpacked
|
||||
LinearInterpolGCD(
|
||||
FastFieldReaderCodecWrapper<Item, GCDFastFieldCodec<LinearInterpolFastFieldReader>>,
|
||||
),
|
||||
/// GCD and Blockwise linear interpolated values + bitpacked
|
||||
MultiLinearInterpolGCD(
|
||||
FastFieldReaderCodecWrapper<Item, GCDFastFieldCodec<MultiLinearInterpolFastFieldReader>>,
|
||||
),
|
||||
}
|
||||
|
||||
impl<Item: FastValue> DynamicFastFieldReader<Item> {
|
||||
/// Returns correct the reader wrapped in the `DynamicFastFieldReader` enum for the data.
|
||||
pub fn open_from_id(
|
||||
mut bytes: OwnedBytes,
|
||||
codec_id: u8,
|
||||
) -> crate::Result<DynamicFastFieldReader<Item>> {
|
||||
let reader = match codec_id {
|
||||
BitpackedFastFieldSerializer::ID => {
|
||||
DynamicFastFieldReader::Bitpacked(FastFieldReaderCodecWrapper::<
|
||||
Item,
|
||||
BitpackedReader,
|
||||
>::open_from_bytes(bytes)?)
|
||||
}
|
||||
LinearInterpolFastFieldSerializer::ID => {
|
||||
DynamicFastFieldReader::LinearInterpol(FastFieldReaderCodecWrapper::<
|
||||
Item,
|
||||
LinearInterpolFastFieldReader,
|
||||
>::open_from_bytes(bytes)?)
|
||||
}
|
||||
MultiLinearInterpolFastFieldSerializer::ID => {
|
||||
DynamicFastFieldReader::MultiLinearInterpol(FastFieldReaderCodecWrapper::<
|
||||
Item,
|
||||
MultiLinearInterpolFastFieldReader,
|
||||
>::open_from_bytes(
|
||||
bytes
|
||||
)?)
|
||||
}
|
||||
_ if codec_id == GCD_CODEC_ID => {
|
||||
let codec_id = bytes.read_u8();
|
||||
|
||||
match codec_id {
|
||||
BitpackedFastFieldSerializer::ID => {
|
||||
DynamicFastFieldReader::BitpackedGCD(FastFieldReaderCodecWrapper::<
|
||||
Item,
|
||||
GCDFastFieldCodec<BitpackedReader>,
|
||||
>::open_from_bytes(
|
||||
bytes
|
||||
)?)
|
||||
}
|
||||
LinearInterpolFastFieldSerializer::ID => {
|
||||
DynamicFastFieldReader::LinearInterpolGCD(FastFieldReaderCodecWrapper::<
|
||||
Item,
|
||||
GCDFastFieldCodec<LinearInterpolFastFieldReader>,
|
||||
>::open_from_bytes(
|
||||
bytes
|
||||
)?)
|
||||
}
|
||||
MultiLinearInterpolFastFieldSerializer::ID => {
|
||||
DynamicFastFieldReader::MultiLinearInterpolGCD(
|
||||
FastFieldReaderCodecWrapper::<
|
||||
Item,
|
||||
GCDFastFieldCodec<MultiLinearInterpolFastFieldReader>,
|
||||
>::open_from_bytes(bytes)?,
|
||||
)
|
||||
}
|
||||
_ => {
|
||||
panic!(
|
||||
"unknown fastfield codec id {:?}. Data corrupted or using old tantivy \
|
||||
version.",
|
||||
codec_id
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
_ => {
|
||||
panic!(
|
||||
"unknown fastfield codec id {:?}. Data corrupted or using old tantivy version.",
|
||||
codec_id
|
||||
)
|
||||
}
|
||||
};
|
||||
Ok(reader)
|
||||
}
|
||||
/// Returns correct the reader wrapped in the `DynamicFastFieldReader` enum for the data.
|
||||
pub fn open(file: FileSlice) -> crate::Result<DynamicFastFieldReader<Item>> {
|
||||
let mut bytes = file.read_bytes()?;
|
||||
let codec_id = bytes.read_u8();
|
||||
|
||||
Self::open_from_id(bytes, codec_id)
|
||||
}
|
||||
}
|
||||
|
||||
impl<Item: FastValue> FastFieldReader<Item> for DynamicFastFieldReader<Item> {
|
||||
#[inline]
|
||||
fn get(&self, doc: DocId) -> Item {
|
||||
match self {
|
||||
Self::Bitpacked(reader) => reader.get(doc),
|
||||
Self::LinearInterpol(reader) => reader.get(doc),
|
||||
Self::MultiLinearInterpol(reader) => reader.get(doc),
|
||||
Self::BitpackedGCD(reader) => reader.get(doc),
|
||||
Self::LinearInterpolGCD(reader) => reader.get(doc),
|
||||
Self::MultiLinearInterpolGCD(reader) => reader.get(doc),
|
||||
}
|
||||
}
|
||||
#[inline]
|
||||
fn get_range(&self, start: u64, output: &mut [Item]) {
|
||||
match self {
|
||||
Self::Bitpacked(reader) => reader.get_range(start, output),
|
||||
Self::LinearInterpol(reader) => reader.get_range(start, output),
|
||||
Self::MultiLinearInterpol(reader) => reader.get_range(start, output),
|
||||
Self::BitpackedGCD(reader) => reader.get_range(start, output),
|
||||
Self::LinearInterpolGCD(reader) => reader.get_range(start, output),
|
||||
Self::MultiLinearInterpolGCD(reader) => reader.get_range(start, output),
|
||||
}
|
||||
}
|
||||
fn min_value(&self) -> Item {
|
||||
match self {
|
||||
Self::Bitpacked(reader) => reader.min_value(),
|
||||
Self::LinearInterpol(reader) => reader.min_value(),
|
||||
Self::MultiLinearInterpol(reader) => reader.min_value(),
|
||||
Self::BitpackedGCD(reader) => reader.min_value(),
|
||||
Self::LinearInterpolGCD(reader) => reader.min_value(),
|
||||
Self::MultiLinearInterpolGCD(reader) => reader.min_value(),
|
||||
}
|
||||
}
|
||||
fn max_value(&self) -> Item {
|
||||
match self {
|
||||
Self::Bitpacked(reader) => reader.max_value(),
|
||||
Self::LinearInterpol(reader) => reader.max_value(),
|
||||
Self::MultiLinearInterpol(reader) => reader.max_value(),
|
||||
Self::BitpackedGCD(reader) => reader.max_value(),
|
||||
Self::LinearInterpolGCD(reader) => reader.max_value(),
|
||||
Self::MultiLinearInterpolGCD(reader) => reader.max_value(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Wrapper for accessing a fastfield.
|
||||
///
|
||||
/// Holds the data and the codec to the read the data.
|
||||
#[derive(Clone)]
|
||||
pub struct FastFieldReaderCodecWrapper<Item: FastValue, CodecReader> {
|
||||
reader: CodecReader,
|
||||
bytes: OwnedBytes,
|
||||
_phantom: PhantomData<Item>,
|
||||
}
|
||||
|
||||
impl<Item: FastValue, C: FastFieldCodecReader> FastFieldReaderCodecWrapper<Item, C> {
|
||||
/// Opens a fast field given a file.
|
||||
pub fn open(file: FileSlice) -> crate::Result<Self> {
|
||||
let mut bytes = file.read_bytes()?;
|
||||
let codec_id = bytes.read_u8();
|
||||
assert_eq!(
|
||||
BitpackedFastFieldSerializer::ID,
|
||||
codec_id,
|
||||
"Tried to open fast field as bitpacked encoded (id=1), but got serializer with \
|
||||
different id"
|
||||
);
|
||||
Self::open_from_bytes(bytes)
|
||||
}
|
||||
/// Opens a fast field given the bytes.
|
||||
pub fn open_from_bytes(bytes: OwnedBytes) -> crate::Result<Self> {
|
||||
let reader = C::open_from_bytes(bytes.as_slice())?;
|
||||
Ok(FastFieldReaderCodecWrapper {
|
||||
reader,
|
||||
bytes,
|
||||
_phantom: PhantomData,
|
||||
})
|
||||
}
|
||||
#[inline]
|
||||
pub(crate) fn get_u64(&self, doc: u64) -> Item {
|
||||
let data = self.reader.get_u64(doc, self.bytes.as_slice());
|
||||
Item::from_u64(data)
|
||||
}
|
||||
|
||||
/// Internally `multivalued` also use SingleValue Fast fields.
|
||||
/// It works as follows... A first column contains the list of start index
|
||||
/// for each document, a second column contains the actual values.
|
||||
///
|
||||
/// The values associated to a given doc, are then
|
||||
/// `second_column[first_column.get(doc)..first_column.get(doc+1)]`.
|
||||
///
|
||||
/// Which means single value fast field reader can be indexed internally with
|
||||
/// something different from a `DocId`. For this use case, we want to use `u64`
|
||||
/// values.
|
||||
///
|
||||
/// See `get_range` for an actual documentation about this method.
|
||||
pub(crate) fn get_range_u64(&self, start: u64, output: &mut [Item]) {
|
||||
for (i, out) in output.iter_mut().enumerate() {
|
||||
*out = self.get_u64(start + (i as u64));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<Item: FastValue, C: FastFieldCodecReader + Clone> FastFieldReader<Item>
|
||||
for FastFieldReaderCodecWrapper<Item, C>
|
||||
{
|
||||
/// Return the value associated to the given document.
|
||||
///
|
||||
/// This accessor should return as fast as possible.
|
||||
///
|
||||
/// # Panics
|
||||
///
|
||||
/// May panic if `doc` is greater than the segment
|
||||
// `maxdoc`.
|
||||
fn get(&self, doc: DocId) -> Item {
|
||||
self.get_u64(u64::from(doc))
|
||||
}
|
||||
|
||||
/// Fills an output buffer with the fast field values
|
||||
/// associated with the `DocId` going from
|
||||
/// `start` to `start + output.len()`.
|
||||
///
|
||||
/// Regardless of the type of `Item`, this method works
|
||||
/// - transmuting the output array
|
||||
/// - extracting the `Item`s as if they were `u64`
|
||||
/// - possibly converting the `u64` value to the right type.
|
||||
///
|
||||
/// # Panics
|
||||
///
|
||||
/// May panic if `start + output.len()` is greater than
|
||||
/// the segment's `maxdoc`.
|
||||
fn get_range(&self, start: u64, output: &mut [Item]) {
|
||||
self.get_range_u64(start, output);
|
||||
}
|
||||
|
||||
/// Returns the minimum value for this fast field.
|
||||
///
|
||||
/// The max value does not take in account of possible
|
||||
/// deleted document, and should be considered as an upper bound
|
||||
/// of the actual maximum value.
|
||||
fn min_value(&self) -> Item {
|
||||
Item::from_u64(self.reader.min_value())
|
||||
}
|
||||
|
||||
/// Returns the maximum value for this fast field.
|
||||
///
|
||||
/// The max value does not take in account of possible
|
||||
/// deleted document, and should be considered as an upper bound
|
||||
/// of the actual maximum value.
|
||||
fn max_value(&self) -> Item {
|
||||
Item::from_u64(self.reader.max_value())
|
||||
}
|
||||
}
|
||||
|
||||
impl<Item: FastValue> From<Vec<Item>> for DynamicFastFieldReader<Item> {
|
||||
fn from(vals: Vec<Item>) -> DynamicFastFieldReader<Item> {
|
||||
let mut schema_builder = Schema::builder();
|
||||
let field = schema_builder.add_u64_field("field", FAST);
|
||||
let schema = schema_builder.build();
|
||||
let path = Path::new("__dummy__");
|
||||
let directory: RamDirectory = RamDirectory::create();
|
||||
{
|
||||
let write: WritePtr = directory
|
||||
.open_write(path)
|
||||
.expect("With a RamDirectory, this should never fail.");
|
||||
let mut serializer = CompositeFastFieldSerializer::from_write(write)
|
||||
.expect("With a RamDirectory, this should never fail.");
|
||||
let mut fast_field_writers = FastFieldsWriter::from_schema(&schema);
|
||||
{
|
||||
let fast_field_writer = fast_field_writers
|
||||
.get_field_writer_mut(field)
|
||||
.expect("With a RamDirectory, this should never fail.");
|
||||
for val in vals {
|
||||
fast_field_writer.add_val(val.to_u64());
|
||||
}
|
||||
}
|
||||
fast_field_writers
|
||||
.serialize(&mut serializer, &HashMap::new(), None)
|
||||
.unwrap();
|
||||
serializer.close().unwrap();
|
||||
}
|
||||
|
||||
let file = directory.open_read(path).expect("Failed to open the file");
|
||||
let composite_file = CompositeFile::open(&file).expect("Failed to read the composite file");
|
||||
let field_file = composite_file
|
||||
.open_read(field)
|
||||
.expect("File component not found");
|
||||
DynamicFastFieldReader::open(field_file).unwrap()
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
use super::reader::DynamicFastFieldReader;
|
||||
use crate::directory::{CompositeFile, FileSlice};
|
||||
use crate::fastfield::{
|
||||
BytesFastFieldReader, FastFieldNotAvailableError, FastValue, MultiValuedFastFieldReader,
|
||||
BytesFastFieldReader, FastFieldNotAvailableError, FastFieldReaderImpl, FastValue,
|
||||
MultiValuedFastFieldReader,
|
||||
};
|
||||
use crate::schema::{Cardinality, Field, FieldType, Schema};
|
||||
use crate::space_usage::PerFieldSpaceUsage;
|
||||
@@ -109,14 +109,15 @@ impl FastFieldReaders {
|
||||
&self,
|
||||
field: Field,
|
||||
index: usize,
|
||||
) -> crate::Result<DynamicFastFieldReader<TFastValue>> {
|
||||
) -> crate::Result<FastFieldReaderImpl<TFastValue>> {
|
||||
let fast_field_slice = self.fast_field_data(field, index)?;
|
||||
DynamicFastFieldReader::open(fast_field_slice)
|
||||
let fast_field_data = fast_field_slice.read_bytes()?;
|
||||
FastFieldReaderImpl::open_from_bytes(fast_field_data)
|
||||
}
|
||||
pub(crate) fn typed_fast_field_reader<TFastValue: FastValue>(
|
||||
&self,
|
||||
field: Field,
|
||||
) -> crate::Result<DynamicFastFieldReader<TFastValue>> {
|
||||
) -> crate::Result<FastFieldReaderImpl<TFastValue>> {
|
||||
self.typed_fast_field_reader_with_idx(field, 0)
|
||||
}
|
||||
|
||||
@@ -132,7 +133,7 @@ impl FastFieldReaders {
|
||||
/// Returns the `u64` fast field reader reader associated to `field`.
|
||||
///
|
||||
/// If `field` is not a u64 fast field, this method returns an Error.
|
||||
pub fn u64(&self, field: Field) -> crate::Result<DynamicFastFieldReader<u64>> {
|
||||
pub fn u64(&self, field: Field) -> crate::Result<FastFieldReaderImpl<u64>> {
|
||||
self.check_type(field, FastType::U64, Cardinality::SingleValue)?;
|
||||
self.typed_fast_field_reader(field)
|
||||
}
|
||||
@@ -142,14 +143,14 @@ impl FastFieldReaders {
|
||||
///
|
||||
/// If not, the fastfield reader will returns the u64-value associated to the original
|
||||
/// FastValue.
|
||||
pub fn u64_lenient(&self, field: Field) -> crate::Result<DynamicFastFieldReader<u64>> {
|
||||
pub fn u64_lenient(&self, field: Field) -> crate::Result<FastFieldReaderImpl<u64>> {
|
||||
self.typed_fast_field_reader(field)
|
||||
}
|
||||
|
||||
/// Returns the `i64` fast field reader reader associated to `field`.
|
||||
///
|
||||
/// If `field` is not a i64 fast field, this method returns an Error.
|
||||
pub fn i64(&self, field: Field) -> crate::Result<DynamicFastFieldReader<i64>> {
|
||||
pub fn i64(&self, field: Field) -> crate::Result<FastFieldReaderImpl<i64>> {
|
||||
self.check_type(field, FastType::I64, Cardinality::SingleValue)?;
|
||||
self.typed_fast_field_reader(field)
|
||||
}
|
||||
@@ -157,7 +158,7 @@ impl FastFieldReaders {
|
||||
/// Returns the `date` fast field reader reader associated to `field`.
|
||||
///
|
||||
/// If `field` is not a date fast field, this method returns an Error.
|
||||
pub fn date(&self, field: Field) -> crate::Result<DynamicFastFieldReader<DateTime>> {
|
||||
pub fn date(&self, field: Field) -> crate::Result<FastFieldReaderImpl<DateTime>> {
|
||||
self.check_type(field, FastType::Date, Cardinality::SingleValue)?;
|
||||
self.typed_fast_field_reader(field)
|
||||
}
|
||||
@@ -165,7 +166,7 @@ impl FastFieldReaders {
|
||||
/// Returns the `f64` fast field reader reader associated to `field`.
|
||||
///
|
||||
/// If `field` is not a f64 fast field, this method returns an Error.
|
||||
pub fn f64(&self, field: Field) -> crate::Result<DynamicFastFieldReader<f64>> {
|
||||
pub fn f64(&self, field: Field) -> crate::Result<FastFieldReaderImpl<f64>> {
|
||||
self.check_type(field, FastType::F64, Cardinality::SingleValue)?;
|
||||
self.typed_fast_field_reader(field)
|
||||
}
|
||||
@@ -173,7 +174,7 @@ impl FastFieldReaders {
/// Returns the `bool` fast field reader reader associated to `field`.
///
/// If `field` is not a bool fast field, this method returns an Error.
pub fn bool(&self, field: Field) -> crate::Result<DynamicFastFieldReader<bool>> {
pub fn bool(&self, field: Field) -> crate::Result<FastFieldReaderImpl<bool>> {
self.check_type(field, FastType::Bool, Cardinality::SingleValue)?;
self.typed_fast_field_reader(field)
}
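These typed accessors are normally reached through a `SegmentReader`, for instance inside a collector or after a search. A minimal sketch (field name hypothetical, a single freshly committed segment assumed):

```rust
use tantivy::fastfield::FastFieldReader;
use tantivy::schema::{Schema, FAST};
use tantivy::{doc, Index};

fn read_u64_fast_field() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let price = schema_builder.add_u64_field("price", FAST);
    let index = Index::create_in_ram(schema_builder.build());

    let mut writer = index.writer(15_000_000)?;
    let _ = writer.add_document(doc!(price => 42u64));
    writer.commit()?;

    let searcher = index.reader()?.searcher();
    let segment_reader = &searcher.segment_readers()[0];
    // `u64()` checks the field type and cardinality before handing out a reader.
    let price_reader = segment_reader.fast_fields().u64(price)?;
    assert_eq!(price_reader.get(0), 42u64);
    Ok(())
}
```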
@@ -241,7 +242,8 @@ impl FastFieldReaders {
|
||||
)));
|
||||
}
|
||||
let fast_field_idx_file = self.fast_field_data(field, 0)?;
|
||||
let idx_reader = DynamicFastFieldReader::open(fast_field_idx_file)?;
|
||||
let fast_field_idx_bytes = fast_field_idx_file.read_bytes()?;
|
||||
let idx_reader = FastFieldReaderImpl::open_from_bytes(fast_field_idx_bytes)?;
|
||||
let data = self.fast_field_data(field, 1)?;
|
||||
BytesFastFieldReader::open(idx_reader, data)
|
||||
} else {
|
||||
|
||||
@@ -2,16 +2,12 @@ use std::io::{self, Write};
|
||||
|
||||
use common::{BinarySerializable, CountingWriter};
|
||||
pub use fastfield_codecs::bitpacked::{
|
||||
BitpackedFastFieldSerializer, BitpackedFastFieldSerializerLegacy,
|
||||
BitpackedFastFieldCodec, BitpackedFastFieldSerializerLegacy,
|
||||
};
|
||||
use fastfield_codecs::linearinterpol::LinearInterpolFastFieldSerializer;
|
||||
use fastfield_codecs::multilinearinterpol::MultiLinearInterpolFastFieldSerializer;
|
||||
pub use fastfield_codecs::{FastFieldCodecSerializer, FastFieldDataAccess, FastFieldStats};
|
||||
use fastfield_codecs::dynamic::{CodecType, DynamicFastFieldCodec};
|
||||
pub use fastfield_codecs::{FastFieldCodec, FastFieldStats};
|
||||
|
||||
use super::{find_gcd, FastFieldCodecName, ALL_CODECS, GCD_DEFAULT};
|
||||
use crate::directory::{CompositeWrite, WritePtr};
|
||||
use crate::fastfield::gcd::write_gcd_header;
|
||||
use crate::fastfield::GCD_CODEC_ID;
|
||||
use crate::schema::Field;
|
||||
|
||||
/// `CompositeFastFieldSerializer` is in charge of serializing
|
||||
@@ -36,249 +32,37 @@ use crate::schema::Field;
|
||||
/// * `close()`
|
||||
pub struct CompositeFastFieldSerializer {
|
||||
composite_write: CompositeWrite<WritePtr>,
|
||||
codec_enable_checker: FastFieldCodecEnableCheck,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct FastFieldCodecEnableCheck {
|
||||
enabled_codecs: Vec<FastFieldCodecName>,
|
||||
}
|
||||
impl FastFieldCodecEnableCheck {
|
||||
fn allow_all() -> Self {
|
||||
FastFieldCodecEnableCheck {
|
||||
enabled_codecs: ALL_CODECS.to_vec(),
|
||||
}
|
||||
}
|
||||
fn is_enabled(&self, codec_name: FastFieldCodecName) -> bool {
|
||||
self.enabled_codecs.contains(&codec_name)
|
||||
}
|
||||
}
|
||||
|
||||
impl From<FastFieldCodecName> for FastFieldCodecEnableCheck {
|
||||
fn from(codec_name: FastFieldCodecName) -> Self {
|
||||
FastFieldCodecEnableCheck {
|
||||
enabled_codecs: vec![codec_name],
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// use this, when this is merged and stabilized explicit_generic_args_with_impl_trait
|
||||
// https://github.com/rust-lang/rust/pull/86176
|
||||
fn codec_estimation<T: FastFieldCodecSerializer, A: FastFieldDataAccess>(
|
||||
stats: FastFieldStats,
|
||||
fastfield_accessor: &A,
|
||||
estimations: &mut Vec<(f32, &str, u8)>,
|
||||
) {
|
||||
if !T::is_applicable(fastfield_accessor, stats.clone()) {
|
||||
return;
|
||||
}
|
||||
let (ratio, name, id) = (T::estimate(fastfield_accessor, stats), T::NAME, T::ID);
|
||||
estimations.push((ratio, name, id));
|
||||
}
|
||||
|
||||
impl CompositeFastFieldSerializer {
|
||||
/// Constructor
|
||||
pub fn from_write(write: WritePtr) -> io::Result<CompositeFastFieldSerializer> {
|
||||
Self::from_write_with_codec(write, FastFieldCodecEnableCheck::allow_all())
|
||||
}
|
||||
|
||||
/// Constructor
|
||||
pub fn from_write_with_codec(
|
||||
write: WritePtr,
|
||||
codec_enable_checker: FastFieldCodecEnableCheck,
|
||||
) -> io::Result<CompositeFastFieldSerializer> {
|
||||
// just making room for the pointer to header.
|
||||
let composite_write = CompositeWrite::wrap(write);
|
||||
Ok(CompositeFastFieldSerializer {
|
||||
composite_write,
|
||||
codec_enable_checker,
|
||||
})
|
||||
Ok(CompositeFastFieldSerializer { composite_write })
|
||||
}
|
||||
|
||||
/// Serialize data into a new u64 fast field. The best compression codec will be chosen
|
||||
/// automatically.
|
||||
pub fn create_auto_detect_u64_fast_field<F, I>(
|
||||
pub fn create_auto_detect_u64_fast_field(
|
||||
&mut self,
|
||||
field: Field,
|
||||
stats: FastFieldStats,
|
||||
fastfield_accessor: impl FastFieldDataAccess,
|
||||
iter_gen: F,
|
||||
) -> io::Result<()>
|
||||
where
|
||||
F: Fn() -> I,
|
||||
I: Iterator<Item = u64>,
|
||||
{
|
||||
self.create_auto_detect_u64_fast_field_with_idx(
|
||||
field,
|
||||
stats,
|
||||
fastfield_accessor,
|
||||
iter_gen,
|
||||
0,
|
||||
)
|
||||
}
|
||||
|
||||
/// Serialize data into a new u64 fast field. The best compression codec will be chosen
|
||||
/// automatically.
|
||||
pub fn write_header<W: Write>(field_write: &mut W, codec_id: u8) -> io::Result<()> {
|
||||
codec_id.serialize(field_write)?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Serialize data into a new u64 fast field. The best compression codec will be chosen
|
||||
/// automatically.
|
||||
pub fn create_auto_detect_u64_fast_field_with_idx<F, I>(
|
||||
&mut self,
|
||||
field: Field,
|
||||
stats: FastFieldStats,
|
||||
fastfield_accessor: impl FastFieldDataAccess,
|
||||
iter_gen: F,
|
||||
idx: usize,
|
||||
) -> io::Result<()>
|
||||
where
|
||||
F: Fn() -> I,
|
||||
I: Iterator<Item = u64>,
|
||||
{
|
||||
let field_write = self.composite_write.for_field_with_idx(field, idx);
|
||||
let gcd = find_gcd(iter_gen().map(|val| val - stats.min_value)).unwrap_or(GCD_DEFAULT);
|
||||
|
||||
if gcd == 1 {
|
||||
return Self::create_auto_detect_u64_fast_field_with_idx_gcd(
|
||||
self.codec_enable_checker.clone(),
|
||||
field,
|
||||
field_write,
|
||||
stats,
|
||||
fastfield_accessor,
|
||||
iter_gen(),
|
||||
iter_gen(),
|
||||
);
|
||||
}
|
||||
|
||||
Self::write_header(field_write, GCD_CODEC_ID)?;
|
||||
struct GCDWrappedFFAccess<T: FastFieldDataAccess> {
|
||||
fastfield_accessor: T,
|
||||
min_value: u64,
|
||||
gcd: u64,
|
||||
}
|
||||
impl<T: FastFieldDataAccess> FastFieldDataAccess for GCDWrappedFFAccess<T> {
|
||||
fn get_val(&self, position: u64) -> u64 {
|
||||
(self.fastfield_accessor.get_val(position) - self.min_value) / self.gcd
|
||||
}
|
||||
}
|
||||
|
||||
let fastfield_accessor = GCDWrappedFFAccess {
|
||||
fastfield_accessor,
|
||||
min_value: stats.min_value,
|
||||
gcd,
|
||||
};
|
||||
|
||||
let min_value = stats.min_value;
|
||||
let stats = FastFieldStats {
|
||||
min_value: 0,
|
||||
max_value: (stats.max_value - stats.min_value) / gcd,
|
||||
num_vals: stats.num_vals,
|
||||
};
|
||||
let iter1 = iter_gen().map(|val| (val - min_value) / gcd);
|
||||
let iter2 = iter_gen().map(|val| (val - min_value) / gcd);
|
||||
Self::create_auto_detect_u64_fast_field_with_idx_gcd(
|
||||
self.codec_enable_checker.clone(),
|
||||
field,
|
||||
field_write,
|
||||
stats,
|
||||
fastfield_accessor,
|
||||
iter1,
|
||||
iter2,
|
||||
)?;
|
||||
write_gcd_header(field_write, min_value, gcd)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Serialize data into a new u64 fast field. The best compression codec will be chosen
|
||||
/// automatically.
|
||||
pub fn create_auto_detect_u64_fast_field_with_idx_gcd<W: Write>(
|
||||
codec_enable_checker: FastFieldCodecEnableCheck,
|
||||
field: Field,
|
||||
field_write: &mut CountingWriter<W>,
|
||||
stats: FastFieldStats,
|
||||
fastfield_accessor: impl FastFieldDataAccess,
|
||||
iter1: impl Iterator<Item = u64>,
|
||||
iter2: impl Iterator<Item = u64>,
|
||||
vals: &[u64],
|
||||
) -> io::Result<()> {
|
||||
let mut estimations = vec![];
|
||||
|
||||
if codec_enable_checker.is_enabled(FastFieldCodecName::Bitpacked) {
|
||||
codec_estimation::<BitpackedFastFieldSerializer, _>(
|
||||
stats.clone(),
|
||||
&fastfield_accessor,
|
||||
&mut estimations,
|
||||
);
|
||||
}
|
||||
if codec_enable_checker.is_enabled(FastFieldCodecName::LinearInterpol) {
|
||||
codec_estimation::<LinearInterpolFastFieldSerializer, _>(
|
||||
stats.clone(),
|
||||
&fastfield_accessor,
|
||||
&mut estimations,
|
||||
);
|
||||
}
|
||||
if codec_enable_checker.is_enabled(FastFieldCodecName::BlockwiseLinearInterpol) {
|
||||
codec_estimation::<MultiLinearInterpolFastFieldSerializer, _>(
|
||||
stats.clone(),
|
||||
&fastfield_accessor,
|
||||
&mut estimations,
|
||||
);
|
||||
}
|
||||
if let Some(broken_estimation) = estimations.iter().find(|estimation| estimation.0.is_nan())
|
||||
{
|
||||
warn!(
|
||||
"broken estimation for fast field codec {}",
|
||||
broken_estimation.1
|
||||
);
|
||||
}
|
||||
// removing nan values for codecs with broken calculations, and max values which disables
|
||||
// codecs
|
||||
estimations.retain(|estimation| !estimation.0.is_nan() && estimation.0 != f32::MAX);
|
||||
estimations.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
|
||||
let (_ratio, name, id) = estimations[0];
|
||||
debug!(
|
||||
"choosing fast field codec {} for field_id {:?}",
|
||||
name, field
|
||||
); // todo print actual field name
|
||||
|
||||
Self::write_header(field_write, id)?;
|
||||
match name {
|
||||
BitpackedFastFieldSerializer::NAME => {
|
||||
BitpackedFastFieldSerializer::serialize(
|
||||
field_write,
|
||||
&fastfield_accessor,
|
||||
stats,
|
||||
iter1,
|
||||
iter2,
|
||||
)?;
|
||||
}
|
||||
LinearInterpolFastFieldSerializer::NAME => {
|
||||
LinearInterpolFastFieldSerializer::serialize(
|
||||
field_write,
|
||||
&fastfield_accessor,
|
||||
stats,
|
||||
iter1,
|
||||
iter2,
|
||||
)?;
|
||||
}
|
||||
MultiLinearInterpolFastFieldSerializer::NAME => {
|
||||
MultiLinearInterpolFastFieldSerializer::serialize(
|
||||
field_write,
|
||||
&fastfield_accessor,
|
||||
stats,
|
||||
iter1,
|
||||
iter2,
|
||||
)?;
|
||||
}
|
||||
_ => {
|
||||
panic!("unknown fastfield serializer {}", name)
|
||||
}
|
||||
}
|
||||
field_write.flush()?;
|
||||
self.create_auto_detect_u64_fast_field_with_idx(field, stats, vals, 0)
|
||||
}
|
||||
|
||||
/// Serialize data into a new u64 fast field. The best compression codec will be chosen
|
||||
/// automatically.
|
||||
pub fn create_auto_detect_u64_fast_field_with_idx(
|
||||
&mut self,
|
||||
field: Field,
|
||||
stats: FastFieldStats,
|
||||
vals: &[u64],
|
||||
idx: usize,
|
||||
) -> io::Result<()> {
|
||||
let field_write = self.composite_write.for_field_with_idx(field, idx);
|
||||
DynamicFastFieldCodec.serialize(field_write, vals, stats)?;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
@@ -312,8 +96,7 @@ impl CompositeFastFieldSerializer {
|
||||
) -> io::Result<BitpackedFastFieldSerializerLegacy<'_, CountingWriter<WritePtr>>> {
|
||||
let field_write = self.composite_write.for_field_with_idx(field, idx);
|
||||
// Prepend codec id to field data for compatibility with DynamicFastFieldReader.
|
||||
let id = BitpackedFastFieldSerializer::ID;
|
||||
id.serialize(field_write)?;
|
||||
CodecType::Bitpacked.serialize(field_write)?;
|
||||
BitpackedFastFieldSerializerLegacy::open(field_write, min_value, max_value)
|
||||
}
|
||||
|
||||
|
||||
184 src/fastfield/wrapper.rs Normal file
@@ -0,0 +1,184 @@
|
||||
// Copyright (C) 2022 Quickwit, Inc.
|
||||
//
|
||||
// Quickwit is offered under the AGPL v3.0 and as commercial software.
|
||||
// For commercial licensing, contact us at hello@quickwit.io.
|
||||
//
|
||||
// AGPL:
|
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as
// published by the Free Software Foundation, either version 3 of the
// License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//

use std::marker::PhantomData;

use fastfield_codecs::dynamic::DynamicFastFieldCodec;
use fastfield_codecs::{FastFieldCodec, FastFieldCodecReader, FastFieldStats};
use ownedbytes::OwnedBytes;

use crate::directory::FileSlice;
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl, FastValue};
use crate::DocId;

/// Wrapper for accessing a fastfield.
///
/// Holds the data and the codec needed to read the data.
pub struct FastFieldReaderWrapper<Item: FastValue, Codec: FastFieldCodec> {
    reader: Codec::Reader,
    _phantom: PhantomData<Item>,
    _codec: PhantomData<Codec>,
}

impl<Item: FastValue, Codec: FastFieldCodec> FastFieldReaderWrapper<Item, Codec> {
    fn new(reader: Codec::Reader) -> Self {
        Self {
            reader,
            _phantom: PhantomData,
            _codec: PhantomData,
        }
    }
}

impl<Item: FastValue, Codec: FastFieldCodec> Clone for FastFieldReaderWrapper<Item, Codec>
where Codec::Reader: Clone
{
    fn clone(&self) -> Self {
        Self {
            reader: self.reader.clone(),
            _phantom: PhantomData,
            _codec: PhantomData,
        }
    }
}

impl<Item: FastValue, C: FastFieldCodec> FastFieldReader<Item> for FastFieldReaderWrapper<Item, C> {
    /// Return the value associated to the given document.
    ///
    /// This accessor should return as fast as possible.
    ///
    /// # Panics
    ///
    /// May panic if `doc` is greater than the segment
    /// `maxdoc`.
    fn get(&self, doc: DocId) -> Item {
        self.get_u64(u64::from(doc))
    }

    /// Fills an output buffer with the fast field values
    /// associated with the `DocId` going from
    /// `start` to `start + output.len()`.
    ///
    /// Regardless of the type of `Item`, this method works by
    /// - transmuting the output array
    /// - extracting the `Item`s as if they were `u64`
    /// - possibly converting the `u64` value to the right type.
    ///
    /// # Panics
    ///
    /// May panic if `start + output.len()` is greater than
    /// the segment's `maxdoc`.
    fn get_range(&self, start: u64, output: &mut [Item]) {
        self.get_range_u64(start, output);
    }

    /// Returns the minimum value for this fast field.
    ///
    /// The min value does not take deleted documents into account,
    /// and should be considered as a lower bound
    /// of the actual minimum value.
    fn min_value(&self) -> Item {
        Item::from_u64(self.reader.min_value())
    }

    /// Returns the maximum value for this fast field.
    ///
    /// The max value does not take deleted documents into account,
    /// and should be considered as an upper bound
    /// of the actual maximum value.
    fn max_value(&self) -> Item {
        Item::from_u64(self.reader.max_value())
    }
}

impl<Item: FastValue, Codec: FastFieldCodec> FastFieldReaderWrapper<Item, Codec> {
    /// Opens a fast field given a file.
    pub fn open(file: FileSlice) -> crate::Result<Self> {
        let mut bytes = file.read_bytes()?;
        // TODO
        // let codec_id = bytes.read_u8();
        // assert_eq!(
        //     0u8, codec_id,
        //     "Tried to open fast field as bitpacked encoded (id=1), but got serializer with \
        //      different id"
        // );
        Self::open_from_bytes(bytes)
    }

    /// Opens a fast field given the bytes.
    pub fn open_from_bytes(bytes: OwnedBytes) -> crate::Result<Self> {
        let reader = Codec::open_from_bytes(bytes)?;
        Ok(FastFieldReaderWrapper {
            reader,
            _codec: PhantomData,
            _phantom: PhantomData,
        })
    }

    #[inline]
    pub(crate) fn get_u64(&self, doc: u64) -> Item {
        let data = self.reader.get_u64(doc);
        Item::from_u64(data)
    }

    /// Internally, `multivalued` fast fields also use single value fast fields.
    /// It works as follows: a first column contains the list of start indexes
    /// for each document, a second column contains the actual values.
    ///
    /// The values associated with a given doc are then
    /// `second_column[first_column.get(doc)..first_column.get(doc+1)]`.
    ///
    /// This means a single value fast field reader can be indexed internally with
    /// something different from a `DocId`. For this use case, we want to use `u64`
    /// values.
    ///
    /// See `get_range` for the actual documentation of this method.
    pub(crate) fn get_range_u64(&self, start: u64, output: &mut [Item]) {
        for (i, out) in output.iter_mut().enumerate() {
            *out = self.get_u64(start + (i as u64));
        }
    }
}
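The two-column layout described in the `get_range_u64` doc comment above can be made concrete with a small sketch. This is a simplified, hypothetical model (plain `Vec`s standing in for the codec-backed columns, with made-up names); it only illustrates how a document's values are addressed:

```rust
/// Simplified model of a multivalued fast field.
/// `starts` plays the role of the first column (start index per document);
/// `values` plays the role of the second column (the flattened values).
struct MultiValuedSketch {
    starts: Vec<u64>, // num_docs + 1 entries
    values: Vec<u64>,
}

impl MultiValuedSketch {
    fn get_vals(&self, doc: u32) -> &[u64] {
        // Exactly the `second_column[first_column.get(doc)..first_column.get(doc + 1)]`
        // addressing described above.
        let start = self.starts[doc as usize] as usize;
        let stop = self.starts[doc as usize + 1] as usize;
        &self.values[start..stop]
    }
}

fn main() {
    // doc 0 -> [3, 5], doc 1 -> [], doc 2 -> [7]
    let ff = MultiValuedSketch {
        starts: vec![0, 2, 2, 3],
        values: vec![3, 5, 7],
    };
    assert_eq!(ff.get_vals(0), &[3u64, 5][..]);
    assert!(ff.get_vals(1).is_empty());
    assert_eq!(ff.get_vals(2), &[7u64][..]);
}
```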
use itertools::Itertools;

impl<Item: FastValue, Arr: AsRef<[Item]>> From<Arr> for FastFieldReaderImpl<Item> {
    fn from(vals: Arr) -> FastFieldReaderImpl<Item> {
        let mut buffer = Vec::new();
        let vals_u64: Vec<u64> = vals.as_ref().iter().map(|val| val.to_u64()).collect();
        let (min_value, max_value) = vals_u64
            .iter()
            .copied()
            .minmax()
            .into_option()
            .expect("Expected non empty");
        let stats = FastFieldStats {
            min_value,
            max_value,
            num_vals: vals_u64.len() as u64,
        };
        DynamicFastFieldCodec
            .serialize(&mut buffer, &vals_u64, stats)
            .unwrap();
        let bytes = OwnedBytes::new(buffer);
        let fast_field_reader = DynamicFastFieldCodec::open_from_bytes(bytes).unwrap();
        FastFieldReaderImpl::new(fast_field_reader)
    }
}
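A usage note on the `From` impl above: it round-trips the values through `DynamicFastFieldCodec` (serialize into a buffer, then reopen from the produced bytes), and it panics on an empty input because of the `.expect("Expected non empty")` on `minmax()`. A minimal sketch of how it could be exercised from inside the crate, assuming the surrounding types are in scope as imported above:

```rust
use crate::fastfield::{FastFieldReader, FastFieldReaderImpl};

fn fast_field_from_values_sketch() {
    // Build a reader directly from in-memory values (panics if the slice is empty).
    let reader = FastFieldReaderImpl::from(&[10u64, 7, 13][..]);
    assert_eq!(reader.get(1), 7u64);
    assert_eq!(reader.min_value(), 7u64);
    assert_eq!(reader.max_value(), 13u64);
}
```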
@@ -7,7 +7,7 @@ use tantivy_bitpacker::BlockedBitpacker;

use super::multivalued::MultiValuedFastFieldWriter;
use super::serializer::FastFieldStats;
use super::{FastFieldDataAccess, FastFieldType, FastValue};
use super::{FastFieldType, FastValue};
use crate::fastfield::{BytesFastFieldWriter, CompositeFastFieldSerializer};
use crate::indexer::doc_id_mapping::DocIdMapping;
use crate::postings::UnorderedTermId;
@@ -217,12 +217,13 @@ impl FastFieldsWriter {
) -> io::Result<()> {
for field_writer in &self.term_id_writers {
let field = field_writer.field();
dbg!("multifield", field);
field_writer.serialize(serializer, mapping.get(&field), doc_id_map)?;
}
for field_writer in &self.single_value_writers {
dbg!("singlefield");
field_writer.serialize(serializer, doc_id_map)?;
}

for field_writer in &self.multi_values_writers {
let field = field_writer.field();
field_writer.serialize(serializer, mapping.get(&field), doc_id_map)?;
@@ -293,7 +294,7 @@ impl IntFastFieldWriter {

/// Records a new value.
///
/// The n-th value being recorded is implicitely
/// The n-th value being recorded is implicitly
/// associated to the document with the `DocId` n.
/// (Well, `n-1` actually because of 0-indexing)
pub fn add_val(&mut self, val: u64) {
@@ -359,64 +360,26 @@ impl IntFastFieldWriter {
(self.val_min, self.val_max)
};

let fastfield_accessor = WriterFastFieldAccessProvider {
doc_id_map,
vals: &self.vals,
};
let vals = compute_fast_field_vals(&self.vals, doc_id_map);
let stats = FastFieldStats {
min_value: min,
max_value: max,
num_vals: self.val_count as u64,
};

if let Some(doc_id_map) = doc_id_map {
let iter_gen = || {
doc_id_map
.iter_old_doc_ids()
.map(|doc_id| self.vals.get(doc_id as usize))
};
serializer.create_auto_detect_u64_fast_field(
self.field,
stats,
fastfield_accessor,
iter_gen,
)?;
} else {
let iter_gen = || self.vals.iter();

serializer.create_auto_detect_u64_fast_field(
self.field,
stats,
fastfield_accessor,
iter_gen,
)?;
};
dbg!(&stats);
dbg!(&vals);
serializer.create_auto_detect_u64_fast_field(self.field, stats, &vals)?;
Ok(())
}
}

#[derive(Clone)]
struct WriterFastFieldAccessProvider<'map, 'bitp> {
doc_id_map: Option<&'map DocIdMapping>,
vals: &'bitp BlockedBitpacker,
}
impl<'map, 'bitp> FastFieldDataAccess for WriterFastFieldAccessProvider<'map, 'bitp> {
/// Return the value associated to the given doc.
///
/// Whenever possible use the Iterator passed to the fastfield creation instead, for performance
/// reasons.
///
/// # Panics
///
/// May panic if `doc` is greater than the index.
fn get_val(&self, doc: u64) -> u64 {
if let Some(doc_id_map) = self.doc_id_map {
self.vals
.get(doc_id_map.get_old_doc_id(doc as u32) as usize) // consider extra
// FastFieldReader wrapper for
// non doc_id_map
} else {
self.vals.get(doc as usize)
}
fn compute_fast_field_vals(vals: &BlockedBitpacker, doc_id_map: Option<&DocIdMapping>) -> Vec<u64> {
if let Some(doc_id_mapping) = doc_id_map {
doc_id_mapping
.iter_old_doc_ids()
.map(|old_doc_id| vals.get(old_doc_id as usize))
.collect()
} else {
vals.iter().collect()
}
}
@@ -31,7 +31,7 @@ pub const MARGIN_IN_BYTES: usize = 1_000_000;
pub const MEMORY_ARENA_NUM_BYTES_MIN: usize = ((MARGIN_IN_BYTES as u32) * 3u32) as usize;
pub const MEMORY_ARENA_NUM_BYTES_MAX: usize = u32::MAX as usize - MARGIN_IN_BYTES;

// We impose the number of index writter thread to be at most this.
// We impose the number of index writer thread to be at most this.
pub const MAX_NUM_THREAD: usize = 8;

// Add document will block if the number of docs waiting in the queue to be indexed
@@ -710,7 +710,7 @@ impl IndexWriter {
}

/// Runs a group of document operations ensuring that the operations are
/// assigned contigous u64 opstamps and that add operations of the same
/// assigned contiguous u64 opstamps and that add operations of the same
/// group are flushed into the same segment.
///
/// If the indexing pipeline is full, this call may block.

@@ -38,10 +38,10 @@ use crate::{DatePrecision, DateTime, DocId, Term};
/// of values, with a position gap. Here we would like `The` and `Who` to get indexed at
/// position 2 and 3 respectively.
///
/// With regular fields, we sort the fields beforehands, so that all terms with the same
/// With regular fields, we sort the fields beforehand, so that all terms with the same
/// path are indexed consecutively.
///
/// In JSON object, we do not have this confort, so we need to record these position offsets in
/// In JSON object, we do not have this comfort, so we need to record these position offsets in
/// a map.
///
/// Note that using a single position for the entire object would not hurt correctness.

@@ -43,7 +43,7 @@ pub mod tests {

/// `MergePolicy` useful for test purposes.
///
/// Everytime there is more than one segment,
/// Every time there is more than one segment,
/// it will suggest to merge them.
#[derive(Debug, Clone)]
pub struct MergeWheneverPossible;

@@ -10,8 +10,8 @@ use crate::core::{Segment, SegmentReader};
use crate::docset::{DocSet, TERMINATED};
use crate::error::DataCorruption;
use crate::fastfield::{
AliveBitSet, CompositeFastFieldSerializer, DynamicFastFieldReader, FastFieldDataAccess,
FastFieldReader, FastFieldStats, MultiValueLength, MultiValuedFastFieldReader,
AliveBitSet, CompositeFastFieldSerializer, FastFieldReader, FastFieldReaderImpl,
FastFieldStats, MultiValueLength, MultiValuedFastFieldReader,
};
use crate::fieldnorm::{FieldNormReader, FieldNormReaders, FieldNormsSerializer, FieldNormsWriter};
use crate::indexer::doc_id_mapping::{expect_field_id_for_sort_field, SegmentDocIdMapping};
@@ -164,6 +164,30 @@ impl DeltaComputer {
}
}

fn compute_sorted_multivalued_vals(
doc_id_mapping: &SegmentDocIdMapping,
fast_field_readers: &Vec<MultiValuedFastFieldReader<u64>>,
) -> Vec<u64> {
let mut vals = Vec::new();
let mut buf: Vec<u64> = Vec::new();
for &(doc_id, segment_ord) in doc_id_mapping.iter() {
fast_field_readers[segment_ord as usize].get_vals(doc_id, &mut buf);
vals.extend_from_slice(&buf);
}
vals
}

fn compute_vals_sorted(
doc_id_mapping: &SegmentDocIdMapping,
fast_field_readers: &[FastFieldReaderImpl<u64>],
) -> Vec<u64> {
let mut vals = Vec::with_capacity(doc_id_mapping.len());
for &(doc_id, segment_ord) in doc_id_mapping.iter() {
vals.push(fast_field_readers[segment_ord as usize].get_u64(doc_id as u64));
}
vals
}

impl IndexMerger {
pub fn open(
schema: Schema,
@@ -342,7 +366,7 @@ impl IndexMerger {
.readers
.iter()
.filter_map(|reader| {
let u64_reader: DynamicFastFieldReader<u64> =
let u64_reader: FastFieldReaderImpl<u64> =
reader.fast_fields().typed_fast_field_reader(field).expect(
"Failed to find a reader for single fast field. This is a tantivy bug and \
it should never happen.",
@@ -356,7 +380,7 @@ impl IndexMerger {
.readers
.iter()
.map(|reader| {
let u64_reader: DynamicFastFieldReader<u64> =
let u64_reader: crate::fastfield::FastFieldReaderImpl<u64> =
reader.fast_fields().typed_fast_field_reader(field).expect(
"Failed to find a reader for single fast field. This is a tantivy bug and \
it should never happen.",
@@ -370,33 +394,9 @@ impl IndexMerger {
max_value,
num_vals: doc_id_mapping.len() as u64,
};
#[derive(Clone)]
struct SortedDocIdFieldAccessProvider<'a> {
doc_id_mapping: &'a SegmentDocIdMapping,
fast_field_readers: &'a Vec<DynamicFastFieldReader<u64>>,
}
impl<'a> FastFieldDataAccess for SortedDocIdFieldAccessProvider<'a> {
fn get_val(&self, doc: u64) -> u64 {
let (doc_id, reader_ordinal) = self.doc_id_mapping[doc as usize];
self.fast_field_readers[reader_ordinal as usize].get(doc_id)
}
}
let fastfield_accessor = SortedDocIdFieldAccessProvider {
doc_id_mapping,
fast_field_readers: &fast_field_readers,
};
let iter_gen = || {
doc_id_mapping.iter().map(|(doc_id, reader_ordinal)| {
let fast_field_reader = &fast_field_readers[*reader_ordinal as usize];
fast_field_reader.get(*doc_id)
})
};
fast_field_serializer.create_auto_detect_u64_fast_field(
field,
stats,
fastfield_accessor,
iter_gen,
)?;

let vals = compute_vals_sorted(doc_id_mapping, &fast_field_readers);
fast_field_serializer.create_auto_detect_u64_fast_field(field, stats, &vals)?;

Ok(())
}
@@ -427,7 +427,7 @@ impl IndexMerger {
pub(crate) fn get_sort_field_accessor(
reader: &SegmentReader,
sort_by_field: &IndexSortByField,
) -> crate::Result<impl FastFieldReader<u64>> {
) -> crate::Result<FastFieldReaderImpl<u64>> {
let field_id = expect_field_id_for_sort_field(reader.schema(), sort_by_field)?; // for now expect fastfield, but not strictly required
let value_accessor = reader.fast_fields().u64_lenient(field_id)?;
Ok(value_accessor)
@@ -436,7 +436,7 @@ impl IndexMerger {
pub(crate) fn get_reader_with_sort_field_accessor(
&self,
sort_by_field: &IndexSortByField,
) -> crate::Result<Vec<(SegmentOrdinal, impl FastFieldReader<u64> + Clone)>> {
) -> crate::Result<Vec<(SegmentOrdinal, FastFieldReaderImpl<u64>)>> {
let reader_ordinal_and_field_accessors = self
.readers
.iter()
@@ -545,10 +545,10 @@ impl IndexMerger {

// copying into a temp vec is not ideal, but the fast field codec api requires random
// access, which is used in the estimation. It's possible to 1. calculate random
// acccess on the fly or 2. change the codec api to make random access optional, but
// access on the fly or 2. change the codec api to make random access optional, but
// they both have also major drawbacks.

let mut offsets = Vec::with_capacity(doc_id_mapping.len());
let mut offsets: Vec<u64> = Vec::with_capacity(doc_id_mapping.len());
let mut offset = 0;
for (doc_id, reader) in doc_id_mapping.iter() {
let reader = &reader_and_field_accessors[*reader as usize].1;
@@ -557,13 +557,7 @@ impl IndexMerger {
}
offsets.push(offset);

let iter_gen = || offsets.iter().cloned();
fast_field_serializer.create_auto_detect_u64_fast_field(
field,
stats,
&offsets[..],
iter_gen,
)?;
fast_field_serializer.create_auto_detect_u64_fast_field(field, stats, &offsets[..])?;
Ok(offsets)
}
/// Returns the fastfield index (index for the data, not the data).
@@ -572,7 +566,7 @@ impl IndexMerger {
field: Field,
fast_field_serializer: &mut CompositeFastFieldSerializer,
doc_id_mapping: &SegmentDocIdMapping,
) -> crate::Result<Vec<u64>> {
) -> crate::Result<()> {
let reader_ordinal_and_field_accessors = self
.readers
.iter()
@@ -593,7 +587,8 @@ impl IndexMerger {
fast_field_serializer,
doc_id_mapping,
&reader_ordinal_and_field_accessors,
)
)?;
Ok(())
}

fn write_term_id_fast_field(
@@ -606,7 +601,7 @@ impl IndexMerger {
debug_time!("write-term-id-fast-field");

// Multifastfield consists of 2 fastfields.
// The first serves as an index into the second one and is stricly increasing.
// The first serves as an index into the second one and is strictly increasing.
// The second contains the actual values.

// First we merge the idx fast field.
@@ -678,16 +673,11 @@ impl IndexMerger {
doc_id_mapping: &SegmentDocIdMapping,
) -> crate::Result<()> {
// Multifastfield consists in 2 fastfields.
// The first serves as an index into the second one and is stricly increasing.
// The first serves as an index into the second one and is strictly increasing.
// The second contains the actual values.

// First we merge the idx fast field.
let offsets =
self.write_multi_value_fast_field_idx(field, fast_field_serializer, doc_id_mapping)?;

let mut min_value = u64::MAX;
let mut max_value = u64::MIN;
let mut num_vals = 0;
self.write_multi_value_fast_field_idx(field, fast_field_serializer, doc_id_mapping)?;

let mut vals = Vec::with_capacity(100);

@@ -709,75 +699,18 @@ impl IndexMerger {
);
for doc in reader.doc_ids_alive() {
ff_reader.get_vals(doc, &mut vals);
for &val in &vals {
min_value = cmp::min(val, min_value);
max_value = cmp::max(val, max_value);
}
num_vals += vals.len();
}
ff_readers.push(ff_reader);
// TODO optimize when no deletes
}

if min_value > max_value {
min_value = 0;
max_value = 0;
}
let vals = compute_sorted_multivalued_vals(doc_id_mapping, &ff_readers);
let stats = FastFieldStats::compute(&vals);

// We can now initialize our serializer, and push it the different values
let stats = FastFieldStats {
max_value,
num_vals: num_vals as u64,
min_value,
};

struct SortedDocIdMultiValueAccessProvider<'a> {
doc_id_mapping: &'a SegmentDocIdMapping,
fast_field_readers: &'a Vec<MultiValuedFastFieldReader<u64>>,
offsets: Vec<u64>,
}
impl<'a> FastFieldDataAccess for SortedDocIdMultiValueAccessProvider<'a> {
fn get_val(&self, pos: u64) -> u64 {
// use the offsets index to find the doc_id which will contain the position.
// the offsets are stricly increasing so we can do a simple search on it.
let new_doc_id = self
.offsets
.iter()
.position(|&offset| offset > pos)
.expect("pos is out of bounds")
- 1;

// now we need to find the position of `pos` in the multivalued bucket
let num_pos_covered_until_now = self.offsets[new_doc_id];
let pos_in_values = pos - num_pos_covered_until_now;

let (old_doc_id, reader_ordinal) = self.doc_id_mapping[new_doc_id as usize];
let num_vals = self.fast_field_readers[reader_ordinal as usize].get_len(old_doc_id);
assert!(num_vals >= pos_in_values);
let mut vals = vec![];
self.fast_field_readers[reader_ordinal as usize].get_vals(old_doc_id, &mut vals);

vals[pos_in_values as usize]
}
}
let fastfield_accessor = SortedDocIdMultiValueAccessProvider {
doc_id_mapping,
fast_field_readers: &ff_readers,
offsets,
};
let iter_gen = || {
doc_id_mapping.iter().flat_map(|(doc_id, reader_ordinal)| {
let ff_reader = &ff_readers[*reader_ordinal as usize];
let mut vals = vec![];
ff_reader.get_vals(*doc_id, &mut vals);
vals.into_iter()
})
};
fast_field_serializer.create_auto_detect_u64_fast_field_with_idx(
field,
stats,
fastfield_accessor,
iter_gen,
&vals[..],
1,
)?;

@@ -2078,7 +2011,7 @@ mod tests {
let mut term_scorer = term_query
.specialized_weight(&searcher, true)?
.specialized_scorer(segment_reader, 1.0)?;
// the difference compared to before is instrinsic to the bm25 formula. no worries
// the difference compared to before is intrinsic to the bm25 formula. no worries
// there.
for doc in segment_reader.doc_ids_alive() {
assert_eq!(term_scorer.doc(), doc);
@@ -2103,7 +2036,7 @@ mod tests {
let mut term_scorer = term_query
.specialized_weight(&searcher, true)?
.specialized_scorer(segment_reader, 1.0)?;
// the difference compared to before is instrinsic to the bm25 formula. no worries there.
// the difference compared to before is intrinsic to the bm25 formula. no worries there.
for doc in segment_reader.doc_ids_alive() {
assert_eq!(term_scorer.doc(), doc);
assert_nearly_equals!(term_scorer.block_max_score(), 0.003478312);

@@ -21,7 +21,7 @@ pub(crate) enum SegmentsStatus {
}

impl SegmentRegisters {
/// Check if all the segments are committed or uncommited.
/// Check if all the segments are committed or uncommitted.
///
/// If some segment is missing or segments are in a different state (this should not happen
/// if tantivy is used correctly), returns `None`.
@@ -168,8 +168,8 @@ impl SegmentManager {
segment_entries.push(segment_entry);
}
} else {
let error_msg = "Merge operation sent for segments that are not all uncommited or \
commited."
let error_msg = "Merge operation sent for segments that are not all uncommitted or \
committed."
.to_string();
return Err(TantivyError::InvalidArgument(error_msg));
}
@@ -182,7 +182,7 @@ impl SegmentManager {
}
// Replace a list of segments for their equivalent merged segment.
//
// Returns true if these segments are committed, false if the merge segments are uncommited.
// Returns true if these segments are committed, false if the merge segments are uncommitted.
pub(crate) fn end_merge(
&self,
before_merge_segment_ids: &[SegmentId],

@@ -171,7 +171,7 @@ pub fn merge_indices<T: Into<Box<dyn Directory>>>(
if indices.is_empty() {
// If there are no indices to merge, there is no need to do anything.
return Err(crate::TantivyError::InvalidArgument(
"No indices given to marge".to_string(),
"No indices given to merge".to_string(),
));
}

@@ -219,7 +219,7 @@ pub fn merge_filtered_segments<T: Into<Box<dyn Directory>>>(
if segments.is_empty() {
// If there are no indices to merge, there is no need to do anything.
return Err(crate::TantivyError::InvalidArgument(
"No segments given to marge".to_string(),
"No segments given to merge".to_string(),
));
}

@@ -282,7 +282,7 @@ pub fn merge_filtered_segments<T: Into<Box<dyn Directory>>>(

pub(crate) struct InnerSegmentUpdater {
// we keep a copy of the current active IndexMeta to
// avoid loading the file everytime we need it in the
// avoid loading the file every time we need it in the
// `SegmentUpdater`.
//
// This should be up to date as all update happen through
@@ -500,7 +500,7 @@ impl SegmentUpdater {
// It returns an error if for some reason the merge operation could not be started.
//
// At this point an error is not necessarily the sign of a malfunction.
// (e.g. A rollback could have happened, between the instant when the merge operaiton was
// (e.g. A rollback could have happened, between the instant when the merge operation was
// suggested and the moment when it ended up being executed.)
//
// `segment_ids` is required to be non-empty.

@@ -53,7 +53,7 @@ fn remap_doc_opstamps(
/// set of documents.
///
/// They creates the postings list in anonymous memory.
/// The segment is layed on disk when the segment gets `finalized`.
/// The segment is laid on disk when the segment gets `finalized`.
pub struct SegmentWriter {
pub(crate) max_doc: DocId,
pub(crate) ctx: IndexingContext,

@@ -199,7 +199,7 @@ impl BlockSegmentPostings {
self.doc_decoder.output_array()
}

/// Returns a full block, regardless of whetehr the block is complete or incomplete (
/// Returns a full block, regardless of whether the block is complete or incomplete (
/// as it happens for the last block of the posting list).
///
/// In the latter case, the block is guaranteed to be padded with the sentinel value:
@@ -494,7 +494,7 @@ mod tests {
let schema = schema_builder.build();
let index = Index::create_in_ram(schema);
let mut index_writer = index.writer_for_tests()?;
// create two postings list, one containg even number,
// create two postings list, one containing even number,
// the other containing odd numbers.
for i in 0..6 {
let doc = doc!(int_field=> (i % 2) as u64);

@@ -500,7 +500,7 @@ pub mod tests {
Ok(())
}

/// Wraps a given docset, and forward alls call but the
/// Wraps a given docset, and forward all call but the
/// `.skip_next(...)`. This is useful to test that a specialized
/// implementation of `.skip_next(...)` is consistent
/// with the default implementation.

@@ -14,7 +14,7 @@ pub trait Postings: DocSet + 'static {
/// The number of times the term appears in the document.
fn term_freq(&self) -> u32;

/// Returns the positions offseted with a given value.
/// Returns the positions offsetted with a given value.
/// The output vector will be resized to the `term_freq`.
fn positions_with_offset(&mut self, offset: u32, output: &mut Vec<u32>);

@@ -40,7 +40,7 @@ fn len_to_capacity(len: u32) -> CapacityResult {
/// An exponential unrolled link.
///
/// The use case is as follows. Tantivy's indexer conceptually acts like a
/// `HashMap<Term, Vec<u32>>`. As we come accross a given term in document
/// `HashMap<Term, Vec<u32>>`. As we come across a given term in document
/// `D`, we lookup the term in the map and append the document id to its vector.
///
/// The vector is then only read when it is serialized.

@@ -21,7 +21,7 @@ fn scorer_union<TScoreCombiner>(scorers: Vec<Box<dyn Scorer>>) -> SpecializedSco
where TScoreCombiner: ScoreCombiner {
assert!(!scorers.is_empty());
if scorers.len() == 1 {
return SpecializedScorer::Other(scorers.into_iter().next().unwrap()); //< we checked the size beforehands
return SpecializedScorer::Other(scorers.into_iter().next().unwrap()); //< we checked the size beforehand
}

{

@@ -139,7 +139,7 @@ impl MoreLikeThis {
}

/// Finds terms for a more-like-this query.
/// field_to_field_values is a mapping from field to possible values of taht field.
/// field_to_field_values is a mapping from field to possible values of that field.
fn retrieve_terms_from_doc_fields(
&self,
searcher: &Searcher,

@@ -68,7 +68,7 @@ impl PhraseQuery {

/// Slop allowed for the phrase.
///
/// The query will match if its terms are seperated by `slop` terms at most.
/// The query will match if its terms are separated by `slop` terms at most.
/// By default the slop is 0 meaning query terms need to be adjacent.
pub fn set_slop(&mut self, value: u32) {
self.slop = value;

@@ -428,7 +428,7 @@ mod tests {
}
#[test]
fn test_slop() {
// The slop is not symetric. It does not allow for the phrase to be out of order.
// The slop is not symmetric. It does not allow for the phrase to be out of order.
test_intersection_aux(&[1], &[2], &[2], 1);
test_intersection_aux(&[1], &[3], &[], 1);
test_intersection_aux(&[1], &[3], &[3], 2);

@@ -577,7 +577,7 @@ impl QueryParser {
/// object by naturally extending the json field name with a "." separated field_path
/// - field_phrase: the phrase that is being searched.
///
/// The literal identifies the targetted field by a so-called *full field path*,
/// The literal identifies the targeted field by a so-called *full field path*,
/// specified before the ":". (e.g. identity.username:fulmicoton).
///
/// The way we split the full field path into (field_name, field_path) can be ambiguous,

@@ -51,6 +51,11 @@ where
self.req_scorer.advance()
}

fn seek(&mut self, target: DocId) -> DocId {
self.score_cache = None;
self.req_scorer.seek(target)
}

fn doc(&self) -> DocId {
self.req_scorer.doc()
}
@@ -172,4 +177,23 @@ mod tests {
skip_docs,
);
}

#[test]
fn test_reqopt_scorer_seek() {
let mut reqoptscorer: RequiredOptionalScorer<_, _, SumCombiner> =
RequiredOptionalScorer::new(
ConstScorer::new(VecDocSet::from(vec![1, 3, 7, 8, 9, 10, 13, 15]), 1.0),
ConstScorer::new(VecDocSet::from(vec![2, 7, 11, 12, 15]), 1.0),
);
{
assert_eq!(reqoptscorer.score(), 1.0);
assert_eq!(reqoptscorer.seek(7), 7);
assert_eq!(reqoptscorer.score(), 2.0);
}
{
assert_eq!(reqoptscorer.score(), 2.0);
assert_eq!(reqoptscorer.seek(12), 13);
assert_eq!(reqoptscorer.score(), 1.0);
}
}
}

@@ -188,7 +188,7 @@ where
});

// at this point all of the docsets
// are positionned on a doc >= to the target.
// are positioned on a doc >= to the target.
if !self.refill() {
self.doc = TERMINATED;
return TERMINATED;

@@ -16,7 +16,7 @@ use crate::{Index, Inventory, Searcher, SegmentReader, TrackedObject};
/// Defines when a new version of the index should be reloaded.
///
/// Regardless of whether you search and index in the same process, tantivy does not necessarily
/// reflects the change that are commited to your index. `ReloadPolicy` precisely helps you define
/// reflects the change that are committed to your index. `ReloadPolicy` precisely helps you define
/// when you want your index to be reloaded.
#[derive(Clone, Copy)]
pub enum ReloadPolicy {

@@ -13,7 +13,7 @@ pub struct BytesOptions {
stored: bool,
}

/// For backward compability we add an intermediary to interpret the
/// For backward compatibility we add an intermediary to interpret the
/// lack of fieldnorms attribute as "true" if and only if indexed.
///
/// (Downstream, for the moment, this attribute is not used if not indexed...)

@@ -35,7 +35,7 @@ pub enum FacetParseError {
/// have a `Facet` for `/electronics/tv_and_video/led_tv`.
///
/// A document can be associated to any number of facets.
/// The hierarchy implicitely imply that a document
/// The hierarchy implicitly imply that a document
/// belonging to a facet also belongs to the ancestor of
/// its facet. In the example above, `/electronics/tv_and_video/`
/// and `/electronics`.
@@ -150,13 +150,26 @@ impl Facet {
self.0.push_str(facet_str);
}

/// Returns `true` if other is a subfacet of `self`.
/// Returns `true` if other is a `strict` subfacet of `self`.
///
/// Disclaimer: By strict we mean that the relation is not reflexive.
/// `/happy` is not a prefix of `/happy`.
pub fn is_prefix_of(&self, other: &Facet) -> bool {
let self_str = self.encoded_str();
let other_str = other.encoded_str();
self_str.len() < other_str.len()
&& other_str.starts_with(self_str)
&& other_str.as_bytes()[self_str.len()] == FACET_SEP_BYTE

// Fast path, but also required to ensure that / is not a prefix of /.
if other_str.len() <= self_str.len() {
return false;
}

// Root is a prefix of every other path.
// This is not just an optimisation. It is necessary for correctness.
if self.is_root() {
return true;
}

other_str.starts_with(self_str) && other_str.as_bytes()[self_str.len()] == FACET_SEP_BYTE
}

/// Extract path from the `Facet`.
@@ -301,4 +314,17 @@ mod tests {
Facet::from_text("INVALID")
);
}

#[test]
fn only_proper_prefixes() {
assert!(Facet::from("/foo").is_prefix_of(&Facet::from("/foo/bar")));

assert!(!Facet::from("/foo/bar").is_prefix_of(&Facet::from("/foo/bar")));
}

#[test]
fn root_is_a_prefix() {
assert!(Facet::from("/").is_prefix_of(&Facet::from("/foobar")));
assert!(!Facet::from("/").is_prefix_of(&Facet::from("/")));
}
}

@@ -139,7 +139,7 @@ pub enum FieldType {
Bool(NumericOptions),
/// Signed 64-bits Date 64 field type configuration,
Date(DateOptions),
/// Hierachical Facet
/// Hierarchical Facet
Facet(FacetOptions),
/// Bytes (one per document)
Bytes(BytesOptions),

@@ -32,7 +32,7 @@ pub struct NumericOptions {
stored: bool,
}

/// For backward compability we add an intermediary to interpret the
/// For backward compatibility we add an intermediary to interpret the
/// lack of fieldnorms attribute as "true" if and only if indexed.
///
/// (Downstream, for the moment, this attribute is not used anyway if not indexed...)

@@ -16,7 +16,7 @@ use crate::{DatePrecision, DateTime};
/// If this is a JSON term, the type is the type of the leaf of the json.
///
/// - <value> is, if this is not the json term, a binary representation specific to the type.
/// If it is a JSON Term, then it is preprended with the path that leads to this leaf value.
/// If it is a JSON Term, then it is prepended with the path that leads to this leaf value.
const FAST_VALUE_TERM_LEN: usize = 4 + 1 + 8;

/// Separates the different segments of

@@ -418,7 +418,7 @@ mod binary_serialize {
_ => Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"No extened field type is associated with code {:?}",
"No extended field type is associated with code {:?}",
ext_type_code
),
)),

@@ -50,7 +50,7 @@ impl FragmentCandidate {
}

/// `Snippet`
/// Contains a fragment of a document, and some highlighed parts inside it.
/// Contains a fragment of a document, and some highlighted parts inside it.
#[derive(Debug)]
pub struct Snippet {
fragment: String,
@@ -69,7 +69,12 @@ impl Snippet {
}
}

/// Returns a hignlightned html from the `Snippet`.
/// Returns `true` if the snippet is empty.
pub fn is_empty(&self) -> bool {
self.highlighted.len() == 0
}

/// Returns a highlighted html from the `Snippet`.
pub fn to_html(&self) -> String {
let mut html = String::new();
let mut start_from: usize = 0;
@@ -92,7 +97,7 @@ impl Snippet {
&self.fragment
}

/// Returns a list of higlighted positions from the `Snippet`.
/// Returns a list of highlighted positions from the `Snippet`.
pub fn highlighted(&self) -> &[Range<usize>] {
&self.highlighted
}
@@ -230,6 +235,20 @@ pub struct SnippetGenerator {
}

impl SnippetGenerator {
/// Creates a new snippet generator
pub fn new(
terms_text: BTreeMap<String, Score>,
tokenizer: TextAnalyzer,
field: Field,
max_num_chars: usize,
) -> Self {
SnippetGenerator {
terms_text,
tokenizer,
field,
max_num_chars,
}
}
/// Creates a new snippet generator
pub fn create(
searcher: &Searcher,
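The `SnippetGenerator::new` constructor added in the hunk above lets a caller supply its own term weights instead of deriving them from a query through `create`. A hedged usage sketch follows; the schema, tokenizer, and `snippet()`/`to_html()` calls are assumptions about the surrounding public tantivy API, not something this diff introduces:

```rust
use std::collections::BTreeMap;

use tantivy::schema::{Schema, TEXT};
use tantivy::tokenizer::{SimpleTokenizer, TextAnalyzer};
use tantivy::{Score, SnippetGenerator};

fn main() {
    let mut schema_builder = Schema::builder();
    let body = schema_builder.add_text_field("body", TEXT);
    let _schema = schema_builder.build();

    // Per-term weights that drive fragment scoring.
    let mut terms: BTreeMap<String, Score> = BTreeMap::new();
    terms.insert("tantivy".to_string(), 1.0);
    terms.insert("search".to_string(), 0.5);

    let generator =
        SnippetGenerator::new(terms, TextAnalyzer::from(SimpleTokenizer), body, 120);
    let snippet = generator.snippet("tantivy is a full-text search engine library");
    println!("{}", snippet.to_html());
}
```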
@@ -460,6 +479,7 @@ Survey in 2016, 2017, and 2018."#;
let snippet = select_best_fragment_combination(&fragments[..], text);
assert_eq!(snippet.fragment, "");
assert_eq!(snippet.to_html(), "");
assert!(snippet.is_empty());
}

#[test]
@@ -473,6 +493,7 @@ Survey in 2016, 2017, and 2018."#;
let snippet = select_best_fragment_combination(&fragments[..], text);
assert_eq!(snippet.fragment, "");
assert_eq!(snippet.to_html(), "");
assert!(snippet.is_empty());
}

#[test]

@@ -120,7 +120,7 @@ impl StoreWriter {

/// Store a new document.
///
/// The document id is implicitely the current number
/// The document id is implicitly the current number
/// of documents.
pub fn store(&mut self, stored_document: &Document) -> io::Result<()> {
self.intermediary_buffer.clear();
@@ -139,7 +139,7 @@ impl StoreWriter {

/// Store bytes of a serialized document.
///
/// The document id is implicitely the current number
/// The document id is implicitly the current number
/// of documents.
pub fn store_bytes(&mut self, serialized_document: &[u8]) -> io::Result<()> {
let doc_num_bytes = serialized_document.len();

@@ -7,7 +7,7 @@
//! For instance, in a dictionary containing the sorted terms "abba", "bjork", "blur" and "donovan",
//! the `TermOrdinal` are respectively `0`, `1`, `2`, and `3`.
//!
//! For `u64`-terms, tantivy explicitely uses a `BigEndian` representation to ensure that the
//! For `u64`-terms, tantivy explicitly uses a `BigEndian` representation to ensure that the
//! lexicographical order matches the natural order of integers.
//!
//! `i64`-terms are transformed to `u64` using a continuous mapping `val ⟶ val - i64::MIN`
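The order-preserving mapping mentioned in this module doc can be sketched as follows. This is the standard trick described above (shift by `i64::MIN`, then encode big-endian so byte order equals numeric order); the helper names are illustrative, not the crate's actual API:

```rust
// Map an i64 to a u64 such that the natural order of i64 matches the
// natural (and, once big-endian encoded, lexicographical) order of u64.
fn i64_to_u64(val: i64) -> u64 {
    // Equivalent to `val - i64::MIN`, computed without overflow.
    (val as u64) ^ (1u64 << 63)
}

fn main() {
    let mut keys: Vec<[u8; 8]> = [-5i64, 3, -1, 7]
        .iter()
        .map(|&v| i64_to_u64(v).to_be_bytes())
        .collect();
    keys.sort(); // lexicographic sort on the big-endian bytes
    let sorted: Vec<i64> = keys
        .iter()
        .map(|b| (u64::from_be_bytes(*b) ^ (1u64 << 63)) as i64)
        .collect();
    assert_eq!(sorted, vec![-5, -1, 3, 7]);
}
```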
@@ -87,7 +87,7 @@ where A: Automaton
{
/// Advance position the stream on the next item.
/// Before the first call to `.advance()`, the stream
/// is an unitialized state.
/// is an uninitialized state.
pub fn advance(&mut self) -> bool {
if let Some((term, term_ord)) = self.stream.next() {
self.current_key.clear();

@@ -7,7 +7,7 @@
//! For instance, in a dictionary containing the sorted terms "abba", "bjork", "blur" and "donovan",
//! the [TermOrdinal] are respectively `0`, `1`, `2`, and `3`.
//!
//! For `u64`-terms, tantivy explicitely uses a `BigEndian` representation to ensure that the
//! For `u64`-terms, tantivy explicitly uses a `BigEndian` representation to ensure that the
//! lexicographical order matches the natural order of integers.
//!
//! `i64`-terms are transformed to `u64` using a continuous mapping `val ⟶ val - i64::MIN`

@@ -35,7 +35,7 @@ pub struct BlockAddr {
struct BlockMeta {
/// Any byte string that is lexicographically greater or equal to
/// the last key in the block,
/// and yet stricly smaller than the first key in the next block.
/// and yet strictly smaller than the first key in the next block.
pub last_key_or_greater: Vec<u8>,
pub block_addr: BlockAddr,
}

@@ -101,7 +101,7 @@
{
/// Advance position the stream on the next item.
/// Before the first call to `.advance()`, the stream
/// is an unitialized state.
/// is an uninitialized state.
pub fn advance(&mut self) -> bool {
while self.delta_reader.advance().unwrap() {
self.term_ord = Some(